Would highly intelligent robots want to eliminate humanity? The question has inspired umpteen sci-fi movies and books, mainly because of the fears it instantly raises. Are they coming for us? Will they eventually supersede us? As robots become an ever more tangible part of our reality, this doubt, if not fear, needs to be properly addressed.
The latest effort in this field comes from scientists at the University of Hertfordshire, who have put forward the concept of “Empowerment”. The idea is to teach a robot to safeguard the humans it serves while ensuring its own safety at the same time. The concept echoes the famous Three Laws of Robotics formulated by Isaac Asimov, popularized on screen by the movie I, Robot.
The Three Laws of Robotics are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
As Christoph Salge, one of the scientists involved in the research, explains: “Empowerment means being in a state where you have the greatest potential influence on the world you can perceive.”
He further adds, “So, for a simple robot, this might be getting safely back to its power station, and not getting stuck, which would limit its options for movement. For a more futuristic, human-like robot this would not just include movement, but could incorporate a variety of parameters, resulting in more human-like drives.”
The concept ties an agent’s safety and usefulness to its Empowerment. A robot operating in a human-filled environment would aim both to stay empowered itself and to keep the people around it empowered, which captures the gist of Asimov’s laws of robotics.
The Conflicting Definition of “Harm”
Human language is easily misconstrued. Even among ourselves we often misunderstand each other, let alone robots making complete sense of our words. The same is true of the word “harm” in the First Law of Robotics:
“A robot may not injure a human being or, through inaction, allow a human being to come to harm”
As Asimov’s own stories depict, robots fall short when decoding complex situations in which preventing harm can itself bring harm about. A robot may be highly intelligent, but that does not mean it will weigh such trade-offs conscientiously.
The team has therefore tried to encode this complex logic into robots by defining Empowerment mathematically. Significantly, the research also widens the concept to cover human Empowerment: robots would need not only to maintain their own Empowerment but also to watch over the Empowerment of the humans they work with.
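To get a feel for what “defining Empowerment mathematically” might mean, consider a toy sketch. Formally, Empowerment is a channel capacity between an agent’s actions and its future sensor states; in a simple deterministic grid world this reduces to counting how many distinct states the agent can reach in n steps. The grid, action set, and function names below are our own illustration, not the Hertfordshire team’s actual code:

```python
from math import log2

# Five actions: move up/down/right/left, or stay put.
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def reachable(state, walls, width, height):
    """States reachable in one step; blocked moves leave the agent in place."""
    results = set()
    for dx, dy in ACTIONS:
        x, y = state[0] + dx, state[1] + dy
        if 0 <= x < width and 0 <= y < height and (x, y) not in walls:
            results.add((x, y))
        else:
            results.add(state)
    return results

def empowerment(state, n, walls=frozenset(), width=5, height=5):
    """n-step empowerment in bits: log2 of the number of reachable states
    (valid for a deterministic world; the general case is a channel capacity)."""
    frontier = {state}
    for _ in range(n):
        frontier = set().union(*(reachable(s, walls, width, height) for s in frontier))
    return log2(len(frontier))

# An agent in the open centre has more options than one boxed into a corner:
print(empowerment((2, 2), 1))  # 5 reachable cells -> ~2.32 bits
print(empowerment((0, 0), 1))  # 3 reachable cells -> ~1.58 bits
```

Under this measure, a robot that pins a person in a corner has lowered that person’s Empowerment even if no physical harm occurred, which is exactly the kind of signal the approach is meant to capture.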
This would make everyone’s safety a default property of the system rather than something enforced in opaque, case-by-case ways. Mr. Salge comments, “We don’t want to be oppressively protected by robots to minimize any chance of harm, we want to live in a world where robots maintain our Empowerment.”
This is one way of teaching robots not to turn on us. The concept looks good on paper; it will be interesting to see how it holds up in practice. Thoughts? Let us know in the comments.