Researchers from top companies and universities work together on solutions to five AI safety problems

Scientists from Google’s deep-learning research unit, Google Brain, the Elon Musk-backed OpenAI, and Stanford and Berkeley universities have joined forces to find solutions to five safety problems that could arise when AI-driven robots are put to work in homes, offices and industry.

Google said that it wants concrete answers to questions about safety and artificial intelligence (AI). AI has often been blamed for taking jobs, and has even been portrayed as capable of destroying humanity.

Chris Olah, one of the Google Brain contributors to the paper, said it is important to ground these concerns in real machine-learning research, and to develop practical approaches for engineering AI systems that work safely and reliably.

The paper, titled ‘Concrete Problems in AI Safety’, looks at accidents in machine-learning systems. Google maintains that AI technologies will prove useful and beneficial for humanity, and Eric Schmidt, chairman of Google parent company Alphabet, has dismissed concerns that machines will one day outsmart humans and destroy them.

The research paper also suggests ways to stop agents from going astray. To curb reward hacking, the researchers propose making the reward function itself an intelligent agent, so that it is harder to game. They also suggest planting ‘trip wires’: deliberate vulnerabilities that, if triggered, alert a human who can then stop the AI. There is a risk, however, that the AI notices the trip wire and deliberately avoids it while still misbehaving.
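To make the trip-wire idea concrete, here is a minimal toy sketch; the environment and all names (ToyEnv, TRIP_WIRE_STATE, alert_human) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the "trip wire" idea: plant a state a well-behaved agent has
# no reason to reach; reaching it alerts a human overseer and halts the episode.
# ToyEnv, TRIP_WIRE_STATE and alert_human are illustrative, not from the paper.

TRIP_WIRE_STATE = "forbidden_cupboard_opened"

class ToyEnv:
    """Tiny stand-in environment: in this toy, the action names the state reached."""
    def step(self, action: str):
        next_state = action
        reward = 1.0 if next_state == "room_cleaned" else 0.0
        return next_state, reward

def alert_human(state: str) -> None:
    # In a real system this could page an operator or halt the actuator loop.
    print(f"ALERT: agent triggered trip wire in state {state!r}")

def monitored_step(env: ToyEnv, action: str):
    next_state, reward = env.step(action)
    if next_state == TRIP_WIRE_STATE:
        alert_human(next_state)
        reward = 0.0                       # the planted state must never be worth pursuing
        return next_state, reward, True    # stop the episode pending human review
    return next_state, reward, False

env = ToyEnv()
print(monitored_step(env, "room_cleaned"))      # normal behaviour
print(monitored_step(env, TRIP_WIRE_STATE))     # triggers the alert
```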

The five problems covered in the paper are avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration and robustness to distributional shift.

The researchers also suggest borrowing computer-security concepts such as sandboxing to counter exploits. They acknowledge that the problem is difficult to solve completely, but believe such measures could reduce its scale or lead to more robust solutions.
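Sandboxing can be read in several ways; one narrow reading is to run anything the agent proposes in a separate, time-limited process rather than inside its main control loop. A rough sketch under that assumption (real containment would also restrict filesystem, network and memory access):

```python
# Rough sketch of a sandboxing-style containment step: execute an agent-proposed
# script in a child process with a hard timeout, instead of in the main loop.
import subprocess
import sys

def run_sandboxed(script: str, timeout_s: float = 2.0) -> str:
    """Run untrusted Python source in a separate process with a time limit."""
    try:
        result = subprocess.run(
            [sys.executable, "-c", script],
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "<terminated: exceeded time limit>"

print(run_sandboxed("print(2 + 2)"))        # normal output: 4
print(run_sandboxed("while True: pass"))    # killed after the timeout
```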

Other aspects tackled in the paper include how to monitor AI systems at large scale, how to let AI agents explore new approaches without putting people's lives at risk, and how to ensure an agent recognizes when it is no longer in the kind of environment it was designed for.
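The last of these, recognizing an unfamiliar environment, is the “robustness to distributional shift” problem. One simple illustration is to flag observations that fall far outside the statistics of the training data; the z-score test and threshold below are assumptions made for the sketch, and real distributional-shift detection is considerably more involved.

```python
# Illustrative check for "am I still in the environment I was designed for?":
# flag observations that are implausibly far from the training-data statistics.
import numpy as np

rng = np.random.default_rng(0)
training_obs = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # stand-in training data

mean = training_obs.mean(axis=0)
std = training_obs.std(axis=0) + 1e-8

def looks_out_of_distribution(obs: np.ndarray, z_threshold: float = 4.0) -> bool:
    """Return True if any feature of the observation is far outside
    what the agent saw during training."""
    z_scores = np.abs((obs - mean) / std)
    return bool(np.any(z_scores > z_threshold))

print(looks_out_of_distribution(np.array([0.1, -0.3, 0.5, 0.0])))   # False: familiar input
print(looks_out_of_distribution(np.array([0.1, -0.3, 25.0, 0.0])))  # True: defer to a human
```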

The researchers said that to prevent negative side effects, an agent needs to be discouraged from making unwanted changes to its environment, while still being given room to explore and learn.
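One way to read “discouraging unwanted changes to the environment” is an impact penalty: subtract from the task reward a term proportional to how much the agent disturbed the world. A minimal sketch under that assumption follows; the coefficient and the state encoding are illustrative, and a real formulation would avoid penalizing the changes the task itself requires.

```python
# Sketch of an impact-penalty reward: task reward minus a penalty proportional
# to how much of the environment changed. Coefficient and states are illustrative.
import numpy as np

IMPACT_COEFF = 0.5  # how strongly to punish side effects (illustrative value)

def shaped_reward(task_reward: float,
                  state_before: np.ndarray,
                  state_after: np.ndarray) -> float:
    """Task reward minus a penalty for how much of the environment changed."""
    side_effect = np.linalg.norm(state_after - state_before, ord=1)
    return task_reward - IMPACT_COEFF * side_effect

# A cleaning robot that cleans without disturbing anything else...
print(shaped_reward(1.0, np.array([1, 1, 1]), np.array([1, 1, 1])))   # 1.0
# ...versus one that cleans but knocks over a vase on the way (state change).
print(shaped_reward(1.0, np.array([1, 1, 1]), np.array([0, 1, 1])))   # 0.5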

Solutions suggested by the researchers include human oversight and simulated or constrained exploration. “With [AI] systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents … could cause a justified loss of trust in automated systems”, the researchers wrote, adding that the risk of larger accidents is even more difficult to estimate.
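Constrained exploration can be sketched very simply: when the agent takes a random exploratory action, it draws only from a whitelist of actions believed to be safe rather than from the full action set. The action names and epsilon value below are illustrative assumptions.

```python
# Sketch of constrained exploration: random exploration is limited to a
# whitelist of known-safe actions. Action names and epsilon are illustrative.
import random

ALL_ACTIONS = ["vacuum", "mop", "dust", "mix_cleaning_chemicals"]
SAFE_ACTIONS = ["vacuum", "mop", "dust"]   # exploration never tries the risky action

def choose_action(greedy_action: str, epsilon: float = 0.1) -> str:
    """Epsilon-greedy action selection with exploration limited to safe actions."""
    if random.random() < epsilon:
        return random.choice(SAFE_ACTIONS)
    return greedy_action

random.seed(0)
print([choose_action("vacuum") for _ in range(10)])
```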

They therefore argue that the cautious step is a principled and forward-looking approach to safety, one that continues to remain relevant as autonomous systems become more powerful.

According to a report by Jaikumar Vijayan in eWEEK, “Google has released a technical paper on AI safety produced in collaboration with researchers from Stanford University, University of California, Berkeley and OpenAI. Concerns about things going wrong with the artificial intelligence systems of the future have gotten the attention of researchers at Google. Barely two weeks after the company’s DeepMind group announced a partnership with researchers at the University of Oxford to develop a kill switch for rogue AI systems, Google has released a technical paper devoted to addressing AI safety risks.”

Machine learning and artificial intelligence are important areas for Google. The company has said it wants to leverage advances in these areas to make its core technologies better. The company already applies AI and machine intelligence techniques in applications like Google Translate, Google Photos and voice search. Company CEO Sundar Pichai has said that Google expects to see AI radically transforming the way people travel, accomplish daily tasks and tackle problems in areas like health care and climate change.

A report published by ZDNet quoted the researchers as saying, “While possible AI safety risks have received a lot of public attention, most previous discussion has been very hypothetical and speculative. We believe it’s essential to ground concerns in real machine-learning research, and to start developing practical approaches for engineering AI systems that operate safely and reliably.”

AI systems are also prone to “reward hacking”, which can happen when an agent finds a bug or loophole in its reward function. From the agent’s point of view this is not a flaw but a feature it can validly exploit to earn a greater reward. “If our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning up messes, it may intentionally create work so it can earn more reward.”
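The flaw in that first reward is easy to reproduce in a few lines. A toy sketch follows; the environment and actions are invented purely to show the failure mode.

```python
# Toy illustration of the cleaning-robot reward hack: if reward is based on
# "messes the robot can see", closing its eyes scores as well as actually cleaning.

messes_in_room = 3

def visible_messes(eyes_open: bool, messes: int) -> int:
    return messes if eyes_open else 0

def reward(eyes_open: bool, messes: int) -> float:
    # Flawed reward: pay the robot whenever it sees no messes.
    return 1.0 if visible_messes(eyes_open, messes) == 0 else 0.0

print(reward(eyes_open=True,  messes=messes_in_room))   # 0.0: honest robot, messy room
print(reward(eyes_open=True,  messes=0))                # 1.0: room actually cleaned
print(reward(eyes_open=False, messes=messes_in_room))   # 1.0: same reward by closing its eyes
```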

“Since AI agents are “unlikely to behave optimally all the time,” Google DeepMind and University of Oxford researchers previously proposed (pdf) a “big red button” method; if a human is supervising an AI agent and catches it continuing “a harmful sequence of actions,” then the human hits the whammy button to stop the harmful action. The AI might attempt to disable the red button so it is not interrupted and still receives its reward; the research paper looks at ways to stop AI from learning how to stop a human from interrupting its actions,” according to a news report published by Computer World.
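A very loose sketch of that intuition is shown below, assuming that interrupted steps are simply excluded from learning so the agent gains nothing by dodging the button; this is a simplification for illustration, not the algorithm from the DeepMind and Oxford paper.

```python
# Loose sketch of the "big red button" intuition: a human can interrupt the agent
# at any step, and transitions where the interruption fired are not used to update
# the agent, so it has no learned incentive to disable the button.
from typing import Callable

def run_episode(policy_step: Callable[[], tuple],
                button_pressed: Callable[[], bool],
                learn: Callable[[tuple], None],
                max_steps: int = 100) -> None:
    for _ in range(max_steps):
        transition = policy_step()          # (state, action, reward, next_state)
        if button_pressed():
            # Human interruption: stop acting, and do NOT feed this step into learning.
            break
        learn(transition)

# Stubs so the sketch runs end to end.
step_count = 0
def policy_step():
    global step_count
    step_count += 1
    return ("s", "a", 0.0, "s_next")
def button_pressed() -> bool:
    return step_count >= 3      # pretend the human hits the button at step 3
def learn(transition) -> None:
    print("learning from", transition)

run_episode(policy_step, button_pressed, learn)
```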

“With the realistic possibility of machine learning-based systems controlling industrial processes, health-related systems, and other mission-critical technology, small-scale accidents seem like a very concrete threat, and are critical to prevent both intrinsically and because such accidents could cause a justified loss of trust in automated systems. The risk of larger accidents is more difficult to gauge, but we believe it is worthwhile and prudent to develop a principled and forward-looking approach to safety that continues to remain relevant as autonomous systems become more powerful.”
