Scientists have already started conducting research in "AI safety". Research from myself, Michael Littman, Pieter Abbeel, and Stuart Russell addresses how to teach robots about human values. Google DeepMind introduced the problem of robots learning to prevent humans from interrupting them or turning them off. Google and OpenAI together published a comprehensive list of AI safety challenges that can arise as AI and robots become more sophisticated, but provided few solutions.

There are many reasons an AI or robot can "go rogue":

- Robots can be given the wrong objective function. We want to simply tell a robot "perform task X," but what we really mean is "perform task X without doing anything dangerous or harmful." Defining "harmful" is non-trivial, especially when we consider psychological harm.
- Robots have imperfect senses and can perceive the world incorrectly, causing them to perform the wrong behaviors at the wrong times.
- Robots can be trained "online," meaning they learn as they attempt to perform their tasks. Since their learning is incomplete, they may make mistakes or try out new actions that are dangerous or harmful.

AI and robots have always had off buttons. Regardless of the cause of the error, it is good to have a big red button on hand to stop the robot or AI in its proverbial tracks. Interrupting might mean freezing in place, shutting down, or going into remote-control mode where a human operator can guide the robot to safety.