Technology Apocalypse: Thinking Man’s Guide to the End of the World

Read the introduction article here.

As we pass the anniversary of Y2K, a techno-phobe’s thoughts turn to the many other ways in which technology could accelerate the end of the world. Even though our mass of old computer programs did not rise up and crash on 1/1/2000, that does not mean that robots, AI, nanotechnology and the like are not worrisome for our future.

Some of you (and you know who you are) are thinking: the three laws will protect us. Isaac Asimov’s Three Laws of Robotics will keep all those mean hunks of metal and nano-scopic dust in their place. For those of you who have been hiding under a rock these past few decades (or who did not see the loosely adapted “I, Robot” movie, starring the ubiquitous Will Smith, who is in all the science fiction movies that Keanu Reeves is not), the three laws (from Wiki) are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later, Asimov added the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”; the rest of the laws are modified sequentially to acknowledge this.

So, you say, just program these three (or four) laws into every piece of technology that we have, and all will be safe. While I agree that it would be nice to have these in your Roomba (we certainly don’t want an uprising of vacuum cleaners now, do we?) so that it knows not to suck up your foot, there are some well-documented problems with the three laws (and even some interesting assertions that they are unethical):

  1. you first need to define several terms, including “human being”, “humanity”, “harm”, etc., then program these definitions in and give the robot a standard way of identifying them;
  2. you have to build a programmed, recursive decision-making process in which every one of the robot’s decisions (including any “free will” it is given) constantly goes back and checks the three (or four) laws;
  3. you have to write a program without bugs.

These are three REALLY HARD PROBLEMS taken separately, and all together they should ease the sleep of techno-phobes.
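
To make the point concrete, here is a minimal sketch (mine, not Asimov’s, and every name in it is made up) of what “just program in the laws” might look like in C: every action the robot wants to take has to clear all four laws first. Notice where the entire problem actually lives, inside the predicates that nobody knows how to write; the gatekeeping itself is the trivial part.

    #include <stdbool.h>

    typedef struct Action Action;   /* whatever the robot is about to do     */
    typedef struct World  World;    /* the robot's model of its surroundings */

    /* These four predicates are the whole problem (see the list above):
     * nobody knows how to write them.  Stubs here for illustration only.    */
    static bool harms_humanity(const Action *a, const World *w)       { (void)a; (void)w; return false; }
    static bool harms_a_human(const Action *a, const World *w)        { (void)a; (void)w; return false; }
    static bool disobeys_human_order(const Action *a, const World *w) { (void)a; (void)w; return false; }
    static bool endangers_itself(const Action *a, const World *w)     { (void)a; (void)w; return false; }

    /* Naive gatekeeper: veto any action that violates a law, checked in
     * priority order.  Even this toy version ducks the genuinely hard part,
     * resolving conflicts between the laws (an order that would cause harm,
     * a rescue that requires self-destruction, and so on).                  */
    bool action_is_lawful(const Action *a, const World *w)
    {
        if (harms_humanity(a, w))        return false;   /* Zeroth Law */
        if (harms_a_human(a, w))         return false;   /* First Law  */
        if (disobeys_human_order(a, w))  return false;   /* Second Law */
        if (endangers_itself(a, w))      return false;   /* Third Law  */
        return true;
    }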

First, how do you identify human beings (much less humanity)? Even humans have problems with this, with some in our illustrious species deciding that monkeys are human too and should have all of the rights of a person. Also, as a father (and I know you fathers will agree with me), I know that my daughter has a problem telling humans from apes, judging from what she’s brought home (I’m just kidding, honey). And we haven’t even considered the additional “fourth law”, where we would need a definition of “humanity”. Scopes Monkey Trial, anyone? Anyone? These terms would require a programmatic and (dare I say it) standard definition. As a technologist, I know that it takes years for standards to be agreed to... and then they get changed during RFC.
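
Just to show how slippery “define a human being” gets the moment you try to write it down, here is a deliberately naive sketch; every field and every threshold below is an assumption I made up on the spot, which is exactly what a standards body would spend years arguing about.

    #include <stdbool.h>

    /* What a robot's sensors might plausibly report about the thing in front of it. */
    typedef struct {
        double surface_temp_c;   /* ~30-40 C... unless the person is hypothermic  */
        double dna_match_pct;    /* chimpanzees come in somewhere around 98-99%   */
        bool   is_bipedal;       /* excludes anyone in a wheelchair               */
        bool   uses_language;    /* excludes infants; arguably includes chatbots  */
    } SensorReading;

    /* Any line drawn here is both over- and under-inclusive. */
    bool is_human(const SensorReading *r)
    {
        return r->dna_match_pct >= 98.5 && r->surface_temp_c >= 30.0;
    }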

Second, recursive programming is difficult and slow. Imagine trying to build this into a nano-robot (think Michael Crichton’s Prey, or, self-pimpingly, my own novel) where memory and processing power are limited. For every instruction that constitutes a decision point, you have to go and check at least the first law, and possibly the second and third (we won’t even venture into how to programmatically check the fourth). A Roomba has at most 256 Kbytes of programming space and a C programming interface; if anyone has programmed the three laws into their Roomba so that it is not out holding secret world-domination meetings at 2 a.m. when it is supposedly programmed to clean our floors, I for one would like to hear about it.
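
For a feel of what that constant re-checking does to a tiny controller, imagine a control loop like the skeleton below. The driver calls are hypothetical names for whatever the firmware really does, and action_is_lawful() is the naive gatekeeper sketched earlier; the point is that every single decision, down to “turn the brush on next to that toe”, pays the full cost of the law checks, in code and data that all has to squeeze into those same few hundred kilobytes.

    /* Skeleton only: everything except control_loop() is a made-up name. */
    SensorReading read_sensors(void);
    const Action *plan_next_move(const SensorReading *r);
    const Action *safe_fallback(void);              /* stop, back off, beep */
    const World  *current_world_model(void);
    void          execute(const Action *a);

    void control_loop(void)
    {
        for (;;) {
            SensorReading r = read_sensors();
            const Action *a = plan_next_move(&r);   /* any "free will" lives here */

            /* Laws 0-3 get checked before *every* action, not just the
             * dramatic ones, and that is pure overhead on a controller
             * with a couple hundred kilobytes to its name.              */
            if (!action_is_lawful(a, current_world_model()))
                a = safe_fallback();

            execute(a);
        }
    }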

Even if we could solve these two problems, all of our technology worries should be put aside because of bugs. Programs have bugs, end of story. You can throw all of the six sigma, community-of-developers, programmatic checking (i.e., fox/henhouse) ideas at it that you want, but ’dem programs got bugs. Whether we are talking artificial intelligence, nanotech or Roombas with a ’tude, some human somewhere programmed it and it has a bug, a bug left behind by being in a hurry to meet a deadline, by not testing all the parameters, by faulty or nonexistent QA, by misunderstood requirements, etc., etc., etc. It would be easy here to pick on Microsoft. But as an alternate example, look at the release notes for the latest version of Ubuntu, written by an increasingly large community of volunteer developers; they contain a section on known bugs... but you can always wait for Hardy Heron, eh? No offense to the Ubuntu gang (it’s great stuff), but all programs have bugs... and maybe backdoors left by the programmers.
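
And the bug does not have to be exotic. Here is a made-up but depressingly ordinary example: one character wrong, an ‘=’ where an ‘==’ was meant, and the First Law check quietly stops meaning anything (your compiler will warn about it, assuming somebody reads the warnings).

    #include <stdbool.h>

    /* Intended behaviour: approve the action only when no harm is detected. */
    bool first_law_ok(int harm_level)
    {
        if (harm_level = 0)   /* bug: '=' assigns, '==' compares; always false */
            return true;      /* never reached                                 */
        return false;         /* every action, harmful or harmless, is vetoed  */
    }

As written, that gives you a robot that locks up and refuses to do anything at all; nudge the typo the other way and it happily approves everything instead. Either way, the laws are only as good as the QA behind them.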

Again, I would like to humbly apologize to all of the developers who have worked for me and who are currently in my employ.

In summary, this classification of ways the world could end should give you little pause, fellow preparer. Technology will fail under its own weight. Until, of course, someone actually builds a working quantum computer and programs it to make decisions based on probability. Then we are well and truly screwed.

Technology in the hands of an ill-intentioned human... now there’s something to be worried about. But that’s another category altogether.

Next up: Environmental Apocalypse

4 Responses

  1. Kev says:

    Good point about the difficulties of Asimov’s three (four) laws of robotics.
    Your point is, in the novels, what makes the robots “lock up” when confronted with an event that defies the limitations of the three laws (one of the bugs of the programming).
    As for the size of the code – yeah, hopefully in the future it will be easier to take a program that massive (bloated) and fit it onto/into anything requiring it – from nanomachines to humaniform robots to starship AIs.

  2. admin says:

    Yeah, “lock up” always reminds me of database deadlocks (yes, that was in the old days, kids) before we had row locking or record locking, and two programs that weren’t written with the right pre-thought rules in mind (gee, I wonder if some other program might want to look at this record at the same time) would get into the silent deadly embrace that was a database deadlock.

    I imagine the same thing here with nano-tech or robots or AI trying to program in this protection scheme... silent deadly embrace covers it pretty well.
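
    Something like this, with pthreads standing in for the old record locks (and with names I just made up): each program grabs its first record, then waits forever for the other one to let go of the second.

        #include <pthread.h>

        pthread_mutex_t record_a = PTHREAD_MUTEX_INITIALIZER;
        pthread_mutex_t record_b = PTHREAD_MUTEX_INITIALIZER;

        void *program_one(void *arg)
        {
            pthread_mutex_lock(&record_a);
            pthread_mutex_lock(&record_b);   /* waits forever for program_two */
            /* ... update both records ... */
            pthread_mutex_unlock(&record_b);
            pthread_mutex_unlock(&record_a);
            return arg;
        }

        void *program_two(void *arg)
        {
            pthread_mutex_lock(&record_b);
            pthread_mutex_lock(&record_a);   /* waits forever for program_one */
            /* ... update both records ... */
            pthread_mutex_unlock(&record_a);
            pthread_mutex_unlock(&record_b);
            return arg;
        }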

  3. Steve says:

    We have had an iRobot Roomba 560 for just over a year and love it. Did you know that you can buy animal covers for them? They look funny moving around the room :-)

  1. July 7, 2012

    […] Introductory Article on the Thinking Man’s Guide to the End of the World, and the article on Technological Apocalypse for background). We no longer have authoritative experts, but, in their place to fill the void are […]
