Asimov's laws are nonsense

Jul 30, 2020

HAL 9000

OpenAI, GPT-3, and Skynet

Recently OpenAI released a beta API for interfacing with their models. It's powered by the GPT-3 language model, and from the demos it looks incredibly powerful.

GPT-3 has demonstrated it can write poetry, generate stories, write code, build Excel docs, and pull off many other impressive tasks that’ll make you wonder if the robot apocalypse is around the corner. 🤖
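For a sense of what that looks like in practice, here's roughly how you query the beta API with OpenAI's Python client. This is just a sketch: the engine name, parameters, and prompt are illustrative, and you'd need your own beta API key.

    import openai

    # Beta API key from the OpenAI dashboard (placeholder here).
    openai.api_key = "sk-..."

    # Ask the davinci engine (GPT-3) to complete a prompt.
    response = openai.Completion.create(
        engine="davinci",
        prompt="Write a short poem about a robot learning to dream:",
        max_tokens=60,
        temperature=0.7,
    )

    print(response.choices[0].text)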

These impressive results from GPT-3 are again bringing up conversations around general AI and the likelihood of us "summoning the demon". This is a problem we should be preparing for, but creating a general AI in the same intellectual league as a human is still incredibly far away. Nonetheless, people keep bringing up the dangers of AI, and along with that, how we can protect ourselves.

Asimov’s three laws of robotics are always brought up in these conversations as an answer to save us all from a "Skynet situation". These laws—while great for a science fiction novel—are nonsense. The one takeaway you should have from this: Don't take these laws seriously.

Asimov’s three laws of robotics

Let's break down these ridiculous rules.

The laws originally seen in Runaround and I, Robot:

– First Law. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

– Second Law. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

– Third Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Asimov eventually realized there were some obvious holes with these three alone, so he later added another law to precede the others:

– Zeroth Law. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

It should be noted these laws don't even work in the books. Yikes. 🤦

Loopholes: Why these laws don't work

How do we define "human"? Intuitively this is a straightforward question to answer. I'm human. You're human. Your family is all human. Your puppy isn't human, even though you love her more than most humans.

However, computers are dumb, and for one to understand what a human is, we need to strictly define the term. This step opens the door to a lot of weird quirks of philosophy that we otherwise rarely think about. It forces us to comprehensively solve ethics. Some examples:

Is an unborn fetus a human? This has been a major point of contention for decades, and in order to define "human" you have to clearly take a stance on it. If you can solve this one you’ll win a Nobel Prize.

What about someone who has died? Most people might be inclined to say no, but then you’d end up with an AI that would never attempt CPR. How long does someone have to be dead in order to count? Maybe you don’t have a time limit, but then you could end up with robots wandering the earth trying to revive the dead. That doesn’t even sound like a terrible idea, until you realize it’s a recipe for zombies. 🧟

You can’t program Asimov’s rules without taking a firm ethical stance on practically every issue.
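To make that concrete, here's a toy sketch of the First Law as code. Everything here is hypothetical (the function names and structure are mine, not Asimov's), but it shows where the real difficulty lives: the control flow is trivial, and the two predicates it depends on are exactly the unsolved ethical questions above.

    # A toy, hypothetical encoding of Asimov's First Law.

    def is_human(entity) -> bool:
        # Does a fetus count? Someone in cardiac arrest? Someone dead for an hour?
        # Answering this means taking a firm ethical stance.
        raise NotImplementedError("requires solving ethics")

    def would_harm(action, entity) -> bool:
        # Physical harm only? Emotional? Economic? Harm through inaction?
        raise NotImplementedError("requires solving ethics")

    def first_law_permits(action, affected_entities) -> bool:
        # "A robot may not injure a human being or, through inaction,
        # allow a human being to come to harm."
        return not any(
            is_human(e) and would_harm(action, e) for e in affected_entities
        )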

Should you fear the demon?

Asimov’s laws can’t save us, but is a general AI even close to reality? Elon Musk compares creating a general AI to "summoning the demon". A little too alarmist for my taste, but I don’t think he’s wrong. The creation of an intelligent AI would be an existential threat to the human race.

The key to avoiding this is to ensure an AI’s goals are aligned with ours. If this alignment is done successfully, it could bring about the greatest age of human prosperity in history. Any sort of misalignment could result in our new robot-zombie world, which makes for a great plotline for a new Terminator movie but a terrible situation to actually live in.

Conclusion

I love Asimov’s books, especially the Foundation series. But his laws are nonsense, filled with loopholes, and won’t help with a robot uprising.

I don’t fear AI becoming an issue anytime soon, but it’s something everyone should be thinking about so we’re prepared. Until then, Asimov’s laws should only be talked about in your book club discussions.


I'm Wes. I live in Boston and work on Wonderment.
