Rules for Robots

Robots and people need rules to function.

Imagine if you never had to clean your bedroom or your desk or anything. What would happen?

At some point, you would run out of money to buy new clothes to replace the dirty ones buried in some far corner of your room. And you would lose track of your books and other stuff. Rules like “clean your room” are needed to make sure you have clean clothes, can find your stuff, and don’t waste too much time digging through garbage.

You might argue with your parents about which rules to follow and how often to apply them, but everyone knows rules exist for a good reason. That’s why, left alone with no rules, everyone would eventually straighten up their room and their desk. Some would do so more often than others, of course.

Robots and artificial intelligence software also need rules to work well. Without a rule to stop after some condition is met, a robot would repeat an action forever, or until it powered down. Rules for robots also ensure they don’t hurt people or damage property, and that they complete the tasks they’re designed to do.
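To see why a stopping rule matters, here is a minimal sketch in Python. The robot, the task, and the battery numbers are all invented for illustration:

  battery = 100          # percent of charge remaining
  boxes_to_move = 5      # the task the robot was given

  # Without the two stopping conditions below, this loop would
  # repeat forever (or until the robot powered down).
  while boxes_to_move > 0 and battery > 10:
      print("Moving a box...")
      boxes_to_move -= 1   # rule: stop when the task is done
      battery -= 7         # rule: stop before the battery runs out

  print(boxes_to_move, "boxes left,", battery, "% battery")

Either condition alone is a rule; together they make sure the robot stops when its work is done or when continuing would leave it stranded.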

What rules would you write to make sure robots didn’t make a mess and destroy everything?

Nick Bostrom, a philosopher, worries that we could create an artificial intelligence that does one thing really well but is not programmed to act based on human values. In his example, an AI told to build paper clips might do its job so relentlessly that it destroys the human race in its effort to turn every available material into paper clips. Then the AI figures out how to go into space. Who would stop it?

In Bostrom’s example, his paper clip robot would clearly need rules to use only approved materials, not humans or trees or anything else whose use would endanger human life.
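One way to imagine such a rule is a check that refuses any material not on an approved list. The sketch below is a toy example, not a real safety system; the material names and the can_use check are made up for illustration:

  # Toy sketch: the paper clip maker may use only approved materials.
  APPROVED_MATERIALS = {"steel wire"}

  def can_use(material):
      return material in APPROVED_MATERIALS

  for material in ["steel wire", "tree", "human", "spaceship hull"]:
      if can_use(material):
          print("Making paper clips from", material)
      else:
          print("Rule violation: refusing to use", material)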

Susan Blackmore has studied memes, ideas that replicate themselves as they move from one person to another (think cat videos), and their technology equivalent, temes, ideas that technology spreads to keep itself alive. In an interesting TED talk, Blackmore suggests we could view human evolution as technology’s way of evolving itself into artificial intelligence that exists without the need for humans.

In Blackmore’s example, if temes are real and technology is using the human race to evolve itself into something that exists apart from us, then we would need rules to make sure technology always needs humans to operate.

The question of what rules humans should apply to robots has a long history. In modern times, the science fiction writer Isaac Asimov published a short story in 1942, Runaround, which suggested humans should apply three key rules to all robots:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

He later added a zeroth law, which comes before the first one and carries the highest importance:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Asimov’s Three Laws (plus the zeroth!) summarized his own ideas as a writer, along with ideas from science fiction stories he had read and from conversations with other people. Since the publication of Runaround, and later I, Robot, which collected the story, writers, scientists, programmers, and many others have debated these three ideas as well as others.
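Programmers might model ranked rules like these as checks evaluated in priority order, where a higher law can veto anything a lower law would allow. The sketch below only illustrates that ordering; a real robot cannot judge harm from a simple label, and the action flags here are invented stand-ins:

  # Illustrative sketch: Asimov's laws as checks in priority order.
  def allowed(action):
      if action.get("harms_humanity"):   # Law 0: highest priority
          return False
      if action.get("harms_human"):      # Law 1
          return False
      if action.get("disobeys_order"):   # Law 2
          return False
      if action.get("destroys_self"):    # Law 3: lowest priority
          return False
      return True

  print(allowed({"disobeys_order": True}))                        # False
  print(allowed({"harms_human": False, "destroys_self": False}))  # True

The hard part, as the debates show, is not writing the checks but defining what counts as “harm” in the first place.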

The concern about the dangers of artificial intelligence has a name: AI Anxiety. The term might sound funny, but we live in a world where computers use AI to recognize your face, decide which stories to show you on Facebook, fly planes, brake your car, and do many other things. Computer speech recognition software today, for example, understands more than 90% of what is spoken into a microphone, and the error rate is dropping quickly.

Asimov had a subtle response to the dangers of robots and AI. In talking about how he created his Three Laws of Robotics, he said, “one of the stock plots of science fiction was … robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?”

Today, with plenty of experience of software failures, we know knowledge has its limits, too. Humans can’t think of every possible outcome and program against the worst ones, and a smart enough computer might think up outcomes that circumvent any human rule. In other words, we should proceed with caution and, as Asimov points out, value the pursuit of knowledge as we decide what rules for robots to set up.

The fear that human creations smarter than us might destroy us is also part of our culture. Think of Frankenstein or the Golem stories. Some countries, like Japan, have a positive view of robots and technology, while Europe and the US have a history of worrying that machines and technology let us play God.

Finally, to make things more interesting and complicated, there is a moral issue with self-driving cars that applies to all AI: if your car encounters an unavoidable accident, should it minimize the total loss of life, even if that means sacrificing its own passengers? Or should it protect its passengers at all costs? And how should the AI in the self-driving car make this decision, randomly or with strict rules?
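To make the question concrete, here is what a strict-rules version of that choice could look like. Everything in this sketch, the outcomes, the numbers, and the two policies, is a made-up simplification of a genuinely hard problem:

  # Invented sketch of two possible crash policies.
  def minimize_loss_of_life(options):
      # Choose the outcome that kills the fewest people overall,
      # even if some of them are inside the car.
      return min(options, key=lambda o: o["deaths_total"])

  def protect_passengers(options):
      # Choose the outcome with the fewest passenger deaths,
      # whatever happens outside the car.
      return min(options, key=lambda o: o["deaths_inside"])

  options = [
      {"name": "swerve into wall", "deaths_total": 1, "deaths_inside": 1},
      {"name": "stay on course", "deaths_total": 3, "deaths_inside": 0},
  ]

  print(minimize_loss_of_life(options)["name"])   # swerve into wall
  print(protect_passengers(options)["name"])      # stay on course

Notice that the code itself is trivial; the moral weight sits entirely in which policy, and which numbers, we choose to write down.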

Building safer cars and limiting speeds for self-driving cars would be useful rules to reduce this moral problem, but the problem will always exist. It’s a problem humans have always had with technology and our existence on this planet. As we reduce or eliminate one problem or discomfort, others appear that need to be dealt with. The question is where, when, and how to set rules.

Learn More

Some Scientists Fear Super Intelligent Machines

http://www.washingtonpost.com/sf/national/2015/12/27/aianxiety/

Memes and Temes

https://www.ted.com/talks/susan_blackmore_on_memes_and_temes

Why Self-Driving Cars Must Be Programmed to Kill

http://www.technologyreview.com/view/542626/why-self-driving-cars-must-be-programmed-to-kill/

Three Laws of Robotics

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Do We Need Asimov’s Laws?

http://www.technologyreview.com/view/527336/do-we-need-asimovs-laws/
http://arxiv.org/abs/1405.0961
http://arxiv.org/ftp/arxiv/papers/1405/1405.0961.pdf

Why Asimov’s Three Laws Of Robotics Can’t Protect Us

An excellent in-depth discussion of the limits of Asimov’s Three Laws and of alternative, more cooperative ways to set rules and limits for AI.
http://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

When Should a Robot Say No to Its Human Owner?

There are valid reasons machines should be programmed to say no.
http://www.slate.com/articles/technology/future_tense/2016/01/when_should_a_robot_say_no_to_its_human_owner.single.html

Author

  • Tim Slavin

    Tim is an award-winning writer and technologist who enjoys teaching tech to non-technical people. He has many years of experience with websites and applications in business, technical, and creative roles. He and his wife have two kids, now teenagers, who are mad about video games.
