As we pass the anniversary of Y2K, a techno-phobe’s thoughts turn to the many other ways in which technology could accelerate the end of the world. Even though our mass of old computer programs did not rise up and crash on 1/1/2000, that does not mean that robots, AI, nanotechnology and the like are not worrisome for our future.
Some of you (and you know who you are) are thinking: the three laws will protect us. Isaac Asimov’s Three Laws of Robotics will keep all those mean hunks of metal and nano-scopic dust in their place. For those of you who have been hiding under a rock these past few decades, or who missed the loosely adapted movie “I, Robot” (starring the ubiquitous Will Smith, who is in all the science fiction movies that Keanu Reeves is not), the three laws (from Wikipedia) are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later, Asimov added the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”; the rest of the laws are modified sequentially to acknowledge this.
So, you say, just program these three or four laws into every piece of technology we have, and all will be safe. While I agree that it would be nice to have these in your Roomba (we certainly don’t want an uprising of vacuum cleaners, now do we?) so that it knows not to suck up your foot, there are some well-documented problems with the three laws (and even some interesting assertions that they are unethical):
- you first need to define several terms, including “human being”, “humanity”, and “harm”, then program those definitions in and give the robot a standard way to apply them;
- you need a programmed recursive decision-making process in which all of the robot’s decisions (including any “free will” it is given) constantly go back and check against the three (or four) laws;
- you have to write a program without bugs.
These are three REALLY HARD PROBLEMS individually, and taken together they should ease the sleep of techno-phobes.
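For the curious, here is roughly what “program the laws in” might look like. This is a toy sketch in Python; every function, rule, and action name below is a hypothetical placeholder for illustration, not anything a real robot (or Roomba) actually runs:

```python
# Toy sketch of a Three (plus Zeroth) Laws veto check.
# Every predicate here is a hypothetical placeholder; defining them
# for real is exactly the hard problem listed above.

def harms_humanity(action):
    return False  # Zeroth Law: good luck defining "humanity"

def harms_human(action):
    return action == "suck up foot"  # First Law: needs "human" and "harm"

def endangers_self(action):
    return action == "drive off stairs"  # Third Law

def permitted(action, orders):
    """Veto actions in priority order: Zeroth, First, Second, Third."""
    if harms_humanity(action) or harms_human(action):
        return False
    # Second Law: obey standing orders unless the laws above override them
    if orders and action not in orders:
        return False
    if endangers_self(action):
        return False
    return True

print(permitted("clean floor", []))    # True
print(permitted("suck up foot", []))   # False
```

Note that every single action the robot contemplates would have to pass through `permitted()`, which is exactly problem number two on the list.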
First, how do you identify human beings (much less humanity)? Even humans have problems with this, with some in our illustrious species deciding that monkeys are human too and should have all of the rights of a person. Also, as a father (and I know you fathers will agree with me), I know that my daughter has a problem determining humans from apes, judging from what she’s brought home (I’m just kidding, honey). And we haven’t even considered the additional “fourth law”, where we would need a definition of “humanity”. Scopes Monkey Trial, anyone? Anyone? These terms would require a programmatic and (dare I say it) standard definition. As a technologist, I know that it takes years for a standard to be agreed to… and then it gets changed during the RFC process.
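To see why a programmatic definition is so slippery, imagine actually writing one. The sketch below is deliberately naive; every attribute and threshold is invented for illustration (the ~98% DNA-similarity figure for chimpanzees is only approximate):

```python
# A deliberately naive attempt at a "standard" definition of a human.
# Every predicate and threshold here is an invented placeholder.

def is_human(entity):
    return (
        entity.get("bipedal", False)
        and entity.get("uses_language", False)
        and entity.get("dna_similarity_to_reference", 0.0) > 0.98
    )

person = {"bipedal": True, "uses_language": True,
          "dna_similarity_to_reference": 0.999}
chimp = {"bipedal": True, "uses_language": False,
         "dna_similarity_to_reference": 0.987}

print(is_human(person))  # True
print(is_human(chimp))   # False, but only because of one brittle predicate
```

Flip any one predicate (a person who cannot speak, say) and the definition misclassifies. That is the Scopes Monkey Trial, rendered in code.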
Second, recursive programming is difficult and slow. Imagine trying to build this into a nano-robot (think Michael Crichton’s Prey or, self-pimpingly, my own novel), where memory and processing power are limited. For every instruction that constitutes a decision point, you have to go and check at least the First Law, and possibly the Second and Third (we won’t even venture into how to programmatically check the fourth). A Roomba has at most 256 KB of programming space and a C programming interface; if anyone has programmed the three laws into their Roomba so that it isn’t out attending secret world-domination meetings at 2 a.m. when it is supposedly programmed to clean our floors, I for one would like to hear about it.
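As a back-of-the-envelope illustration of that overhead (both rates below are invented figures, not measurements from any real device):

```python
# Back-of-the-envelope cost of re-checking the laws at every decision
# point. Both constants are invented figures for illustration only.

DECISIONS_PER_SECOND = 50   # hypothetical sensor-driven steering choices
CHECKS_PER_DECISION = 4     # Zeroth through Third Laws

def law_checks(runtime_seconds):
    return runtime_seconds * DECISIONS_PER_SECOND * CHECKS_PER_DECISION

# A two-hour cleaning run:
print(law_checks(2 * 60 * 60))  # 1440000 law checks before the floor is clean
```

Nearly a million and a half ethics checks per cleaning session, on a machine with 256 KB to work with. Good luck.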
Even if we could solve those two problems, all of our technology worries could be put aside because of the third: bugs. Programs have bugs, end of story. You can throw all of the Six Sigma, community-of-developers, programmatic-checking (i.e., fox guarding the henhouse) ideas you want at them, but ’dem programs got bugs. Whether we are talking about artificial intelligence, nanotech, or Roombas with a ’tude, some human somewhere programmed it, and it has a bug: one left by being in a hurry to meet a deadline, by not testing all the parameters, by faulty or absent QA, by not understanding the requirements, etc., etc. It would be easy here to pick on Microsoft. But as an alternate example, look at the release notes for the latest version of Ubuntu, written by an increasingly large community of volunteer developers; they contain a section on known bugs… but you can always wait for Hardy Heron, eh? No offense to the Ubuntu gang (it’s great stuff), but all programs have bugs… and maybe backdoors left by the programmers.
Again, I would like to humbly apologize to all of the developers who have worked for me and who are currently in my employ.
In summary, this classification of ways the world could end should give you little pause, fellow preparer. Technology will fail under its own weight. Until, of course, someone actually builds a working quantum computer and programs it to make decisions based on probability. Then we are well and truly screwed.
Technology in the hands of an ill-intentioned human… now there’s something to be worried about. But that’s another category altogether.
Next up: Environmental Apocalypse