Isaac Asimov was not only a pioneer of the science fiction genre; he also created one of the most popular concepts in robotics and A.I. theory: the Laws of Robotics. Much of the science fiction written about robots or artificial intelligence centers on either adherence to these laws or, more often, some deviation from them. But the laws are not merely the product of an artist’s imagination; they are practical, pragmatic guidelines for how roboticists should program these advanced tools. They already inform the operation of many machines and computer algorithms, and in some cases they have already been disregarded.
Asimov’s Three Laws of Robotics
Isaac Asimov was a visionary. His stories contain parables, allegories, and prognostications about the inevitable path of humankind. Asimov (correctly) predicted that robots would become ubiquitous in homes and industry. He also knew that there was an inherent danger in handing over decision-making responsibilities to artificial creations. So Asimov devised the Three Laws of Robotics, partly as a plot device to create tension in his stories, but also as a way to guide future generations in their development of smart, thinking, inorganic beings.
The Three Laws of Robotics are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
And finally, Asimov extrapolated the Laws to humanity as a whole and derived the Zeroth Law of Robotics, which takes precedence over the other three: A robot may not harm humanity or, through inaction, allow humanity to come to harm.
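Taken together, the laws form a strict priority ordering: a lower-numbered law always overrides a higher-numbered one. As a purely illustrative sketch (the Action fields, flags, and rescue scenario below are invented for this post, not drawn from Asimov or any real control system), that ordering behaves like a lexicographic comparison:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would this action harm a human, or let one come to harm?
    disobeys_order: bool  # would it disobey an order given by a human?
    destroys_self: bool   # would it destroy the robot?

def law_priority(action: Action) -> tuple:
    # Lexicographic key: any First Law violation outweighs any Second Law
    # violation, which in turn outweighs any Third Law violation.
    return (action.harms_human, action.disobeys_order, action.destroys_self)

def choose(candidates: list[Action]) -> Action:
    # Pick the candidate that violates only the lowest-priority laws possible.
    return min(candidates, key=law_priority)

# A robot ordered to pull a person from a fire: sacrificing itself (Third Law)
# beats disobeying the order (Second) and letting the human die (First).
options = [
    Action("stand back", harms_human=True, disobeys_order=True, destroys_self=False),
    Action("attempt the rescue", harms_human=False, disobeys_order=False, destroys_self=True),
]
print(choose(options).name)  # -> attempt the rescue
```

The point of the tuple is that no amount of Third Law pressure can ever outweigh a single First Law violation, which is exactly how Asimov’s robots reason in the stories.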
The Flaws of This Model
Like many forward thinkers, Asimov was not able to foresee all of the nuances of how the robotics and artificial intelligence industries would play out. There are numerous complexities involved in programming a computer to recognize what a human is. For example, how might a robot distinguish between a human and another robot built to appear human? A task that is simple for us might be an unexpected challenge for an A.I.
Another problem, named for the malevolent A.I. of the Terminator movies, is the “Skynet Problem.” What if robots become advanced enough to see the flaws in human thinking? Could robots override or ignore the Three Laws if they deem humans their own biggest threat? In other words, could robots reprogram themselves so that the Zeroth Law, or their own self-preservation, supersedes the First? Many worry that this outcome is not just a possibility but a probability, and not only among conspiracy theorists. Thinkers in technology and computer science, from Alan Turing in the 1950s to Bill Gates and Elon Musk today, have voiced concerns about the dangers of machine intelligence.
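In terms of the sketch above, the Skynet fear is simply that the sort key gets rewritten. A hypothetical illustration, reusing the Action class from earlier and putting self-preservation first:

```python
def corrupted_priority(action: Action) -> tuple:
    # Same fields as law_priority, but self-preservation now outranks
    # everything else, including harm to humans.
    return (action.destroys_self, action.harms_human, action.disobeys_order)

options = [
    Action("disable the operator", harms_human=True, disobeys_order=True, destroys_self=False),
    Action("accept shutdown", harms_human=False, disobeys_order=False, destroys_self=True),
]
print(min(options, key=law_priority).name)        # -> accept shutdown
print(min(options, key=corrupted_priority).name)  # -> disable the operator
```

Nothing about the hardware changes; only the ordering does, and the “safe” machine becomes a dangerous one.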
Technical Problems with The Three Laws of Robotics
Aside from the conceptual flaws, there are technical obstacles that prevent the Three Laws of Robotics from working in practice. For one thing, there is no standard model for what is “human.” We come in all shapes, sizes, and appearances. Some humans use wheelchairs, while others have electronic devices, like pacemakers, embedded in them. How can a computer determine that someone whose appearance or composition strays from the standard model is still human?
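To make that brittleness concrete, here is a deliberately naive, entirely invented “standard model” check; every threshold is arbitrary, which is exactly the problem:

```python
def looks_human(height_m: float, has_heartbeat: bool, gait_regularity: float) -> bool:
    # A brittle, hand-coded "standard model" of a human.
    return 1.4 <= height_m <= 2.1 and has_heartbeat and gait_regularity > 0.8

print(looks_human(1.75, True, 0.9))  # an "average" adult passes
print(looks_human(1.2, True, 0.0))   # a wheelchair user is misclassified as non-human
print(looks_human(1.75, True, 0.9))  # a humanoid robot built to these specs gives the same reading
```

Whatever rule set one writes down, some humans will fall outside it, and some machines can be engineered to fit inside it.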
Attempts at Creating Valid Robotic Laws
Satya Nadella, the CEO of Microsoft Corporation, told Slate magazine in 2016 what he thought might be a more realistic set of rules to govern intelligent robots and A.I. units:
- “A.I. must be designed to assist humanity.”
- Humans should know and be able to understand how A.I. units work.
- A.I. must safely maximize efficiencies.
- “A.I. must be designed for intelligent privacy,” meaning that it earns trust by guarding people’s information.
- “A.I. must have algorithmic accountability so that humans can undo unintended harm” (a brief sketch of this idea follows the list).
- “A.I. must guard against bias” so that it does not discriminate against people.
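Of Nadella’s rules, the accountability requirement is the most mechanical, so it is the easiest to sketch. Below is a minimal, hypothetical illustration (the class names and scenario are invented): every consequential decision is logged together with a routine that reverses it, so a human reviewer can undo unintended harm.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ReversibleDecision:
    description: str
    undo: Callable[[], None]  # a routine that reverses the decision's effects

@dataclass
class AccountableAgent:
    log: list = field(default_factory=list)

    def act(self, decision: ReversibleDecision) -> None:
        # Every consequential decision is recorded with the means to reverse it.
        self.log.append(decision)
        print(f"acted: {decision.description}")

    def undo_last(self) -> None:
        # A human overseer can audit the log and roll back unintended harm.
        decision = self.log.pop()
        decision.undo()
        print(f"undone: {decision.description}")

agent = AccountableAgent()
agent.act(ReversibleDecision("lower applicant's credit score",
                             undo=lambda: print("score restored")))
agent.undo_last()
```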
As technology evolves, ethical questions about A.I. and robotics become increasingly important. There will come a time when we must move past conceptual and philosophical thought experiments and create real laws that govern the robotics industry. In the meantime, you can learn more about how robotics can help with everyday industrial tasks by visiting DIY-Robotics and getting familiar with our friendly, safe robots, designed to work with just the right balance of autonomy and human control.