Google DeepMind develops switch to prevent robots from causing destruction

Google acquired DeepMind, a London-based artificial intelligence company, in 2014 to advance its work in artificial intelligence and robotics. DeepMind is now conducting research to develop a kill switch for robots: a single button that would let a human operator shut a machine down. The research may come as a relief to those who fear that a world full of autonomous machines could lead to undesirable outcomes for humanity.

The research paper was co-written by Laurent Orseau, a research scientist at Google DeepMind, and Stuart Armstrong, a researcher at the Future of Humanity Institute at the University of Oxford in the UK. The work is timely: as people build increasingly capable robots, researchers are trying to anticipate the risks those machines pose to the human world. The authors are looking for ways to keep a machine from learning to prevent, or circumvent, human interventions.

Fears about a world that humans will share with robots concern even high-profile figures such as high-tech entrepreneur Elon Musk and renowned physicist Stephen Hawking, who has said that the development of full artificial intelligence could spell the end of the human race. A research paper from DeepMind and the University of Oxford states that there should be a way to repeatedly and safely interrupt an algorithm.

“If an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment,” researchers wrote in a paper posted on the Machine Intelligence Research Institute website.

A report published in Popsci revealed, “What separates humans from artificial intelligence — for now — is our ability to learn quickly with just a few examples. We can see a dog once and determine if most other animals we see are dogs or not. Computers, our binary friends, aren’t so easily adaptable. It usually takes millions of examples to teach a computer to recognize a cat or understand language. That’s why now, researchers are investing a lot of time to make machines that learn faster and from fewer examples.”

The latest research from Google’s artificial-intelligence-focused DeepMind division explores a new way to build artificial curiosity: incentivizing the AI to learn by making it want to win the game. The algorithm plays the game much as a human would, looking at the screen and making decisions based on what is happening there, and it receives a digital reward for exploring more of the game.
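The idea of a “digital reward for exploring” can be illustrated with a count-based exploration bonus, a common technique in reinforcement learning. The sketch below is a hypothetical illustration, not DeepMind’s actual method: the state names and bonus formula are invented for the example.

```python
from collections import defaultdict

# Hypothetical sketch of a count-based exploration bonus: the agent
# receives extra "digital reward" for visiting states it has rarely
# seen, on top of the game's own score.

visit_counts = defaultdict(int)

def shaped_reward(state, game_reward, bonus_scale=1.0):
    """Add an exploration bonus that decays as a state grows familiar."""
    visit_counts[state] += 1
    bonus = bonus_scale / (visit_counts[state] ** 0.5)
    return game_reward + bonus

# A state visited for the first time earns a full bonus...
first = shaped_reward("room_1", 0.0)       # 0.0 + 1/sqrt(1)   = 1.0
# ...while a frequently visited state earns almost none.
for _ in range(98):
    shaped_reward("room_1", 0.0)
hundredth = shaped_reward("room_1", 0.0)   # 0.0 + 1/sqrt(100) = 0.1
```

Because the bonus shrinks with each visit, novel rooms are worth more to the agent than familiar ones, which pushes it to explore, as in the Montezuma’s Revenge example the researchers describe.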

“Each room poses a number of challenges,” DeepMind researchers wrote in their June 6 paper. “To escape the very first room, the agent must climb ladders, dodge a creature, pick up a key, then backtrack to open one of two doors.”

“With so many people taking their cues from the movies on what a future with artificial intelligence will look like, some who fear one day having robotic overlords will be heartened by research that Google is doing,” according to a news report published by Computer World.

“However,” the researchers added, “if the learning agent… learns in the long run to avoid such interruptions, for example by disabling the red button, it is an undesirable outcome.”


According to a story published on the topic by IBTimes, “People who’ve watched Stanley Kubrick’s ‘2001: A Space Odyssey’ can probably relate to how a sentient AI can be something to lose sleep over. Stephen Hawking, Elon Musk and Bill Gates agree that artificial intelligence (AI) is getting closer to being self-aware, and Google feels it’s necessary to have a means to rein in an out-of-control AI.”

Laurent Orseau, a research scientist at Google DeepMind (the division that built AlphaGo, which defeated champion Lee Sedol in the strategy board game Go), has co-authored a paper titled “Safely Interruptible Agents” proposing the incorporation of a kill switch into the programming of an AI. The paper was written in collaboration with Stuart Armstrong of the University of Oxford’s Future of Humanity Institute.

In the paper, Orseau and Armstrong discuss how an AI may not always operate optimally in the real world. They argue that if an AI functioning under human supervision is about to do something that will cause more harm than good, a human operator should be able to stop it by pressing what they call “the big red button.”
