Science to Live By: Auto-Pilot

“These are not things that I wish will happen. These are simply things that I think probably will happen.” –Elon Musk

Two famous fictional robots of my childhood in the early 1960s were Rosie the Robot in the animated sitcom The Jetsons; and the Class M-3, Model B9, General Utility Non-Theorizing Environmental Robot (The Robot) of the science fiction television series Lost in Space. By 1968, HAL 9000—the Heuristically Programmed Algorithmic Computer—startled and chilled my teenage imagination when it wrested control of the Discovery One spacecraft from the American astronauts in Stanley Kubrick’s epic science-fiction film 2001: A Space Odyssey.

Today, we possess sufficient computing power, sensor technology, material science, and software sophistication to turn robot fiction into reality. Here in Crozet, we see exciting manifestations of this reality in the Western Albemarle High School varsity robotics team, squads of students from Henley Middle School, and in the arrival of business ventures such as Perrone Robotics. On a national scale, the Robotics Institute at Carnegie Mellon University, MIT’s Media Lab, Watson (IBM), and DeepMind (Google) are just a few examples of the amazing cutting-edge research and product development going on in this field.

A new era is sweeping in. Sophisticated neural networks and software algorithms allow machines to learn from example and experience, and in doing so, gain cognitive skills and analytical capabilities far surpassing their original programming. This is historic, unprecedented: a game changer.

Are we ready? Robots and artificial intelligence (A.I.) machines are coming at us at warp speed. Are governmental laws and policies in place to cope with the major economic, social and legal changes they will bring? I think not. And neither does high-tech visionary and entrepreneur extraordinaire Elon Musk (a founder of SpaceX, Tesla, SolarCity, and PayPal).

The World Government Summit – convened to revolutionize how governments operate and how policies are made – was held 12-14 February in Dubai, United Arab Emirates. At this summit, Musk warned that self-driving, autonomous cars and trucks will displace human-driven vehicles over the next 20 years or so. Increased efficiency, convenience and road safety will be achieved. But the 12 to 15 percent of the global workforce currently employed as drivers will be out on the street, looking elsewhere for work.

Musk feels the efficiencies of robots and artificial general intelligence will lead to an unprecedented abundance of low-cost goods and services. And during the phase-in period, robots and A.I. will create new jobs and help many of us do our present jobs better. Yet, within decades, he believes “there will be fewer and fewer jobs that a robot cannot do better (than a human).” The need for human labor will diminish dramatically. (In his automated, push-button world, George Jetson worked at Spacely Sprockets one hour a day, two days a week!) When widespread non-employment of humans becomes the norm, Musk says, “we will need to have some kind of universal basic income—I don’t think there will be a choice.”

Universal basic income is potentially doable during periods of material prosperity. But there are deeper, more intractable problems than money. Musk fears the day when artificial general intelligence becomes “smarter than the smartest human on earth,” calling this a “dangerous situation.” Furthermore, he wonders, “If you are not needed, if there is not a need for your labor, what’s the meaning (of life)?”

These are not new fears. The word robot was introduced to the English language in 1920 by Czech writer Karel Čapek in his science fiction play R.U.R. (Rossum’s Universal Robots). Derived from the Slavic word robota, robot means corvée: coerced, unpaid labor. In the play, robots are artificial, humanoid, biological entities. They are inexpensive to make, and within 10 years of their initial development, they have been deployed in factories worldwide. The global, robot-based economy they create allows products to be made at a fraction of their previous cost. Tragically, the robots revolt and all but wipe out the human race; at the end of the play, only one human is left alive.

In response to the dystopian future portended by R.U.R., science fiction author Isaac Asimov devised “The Three Laws of Robotics.” He first expounded them in his short story Runaround, published in 1942. Compiled in the fictional “Handbook of Robotics, 56th Edition, 2058 A.D.,” these laws are: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Later, Asimov added a fourth law, the Zeroth Law, which precedes the others: (0) a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
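
Read as an engineer might read them, the laws form a strict precedence ordering: a violation of a higher-ranking law outweighs any number of violations of the lower ones. The short Python sketch below is purely my own illustration of that ordering, not anything Asimov or any roboticist specified; the Action class and its yes/no “violation” flags are hypothetical stand-ins, because deciding whether a real-world action actually harms a human is exactly the part no one knows how to compute.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_humanity: bool   # would violate the Zeroth Law
        harms_human: bool      # would violate the First Law
        disobeys_order: bool   # would violate the Second Law
        endangers_self: bool   # would violate the Third Law

    def violations(action):
        # Tuples compare lexicographically and False sorts before True,
        # so one higher-law violation outweighs any mix of lower-law ones.
        return (action.harms_humanity, action.harms_human,
                action.disobeys_order, action.endangers_self)

    def choose(candidates):
        # Pick the candidate whose violations are least severe under that ordering.
        return min(candidates, key=violations)

    options = [
        Action("obey the order, harm a bystander", False, True, False, False),
        Action("refuse the order, stay intact", False, False, True, False),
    ]
    print(choose(options).name)   # prints: refuse the order, stay intact

All of the real difficulty hides inside those flags, which is precisely the objection that follows.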

As nice as they sound, Asimov’s Laws are unachievable. It is not possible for robots to act and never do harm; all activity involves some risk of harm, whether performed by humans or machines. Autonomous cars, trucks, trains, airplanes, rockets and drones will be involved in accidents that result in damage, injury and death. Medical diagnoses generated by A.I. and treatments performed by robots will not always be correct or appropriate.

But the potential for harm goes deeper than this. Robotics and A.I. pose risks to human integrity, dignity, and autonomy, striking at the very heart and soul of what it means to be human.

For example, a committee on legal affairs of the European Parliament (the E.U.’s law-making body) is considering affording legal rights and obligations to robots. Under the proposal, the greater a robot’s autonomy, the greater the share of blame assigned to the machine when it causes damage.

Placing blame on non-conscious, lifeless entities blurs the line between persons and machines, diminishing the moral status of humankind. But it doesn’t stop there. Bill Gates suggests we tax robotic workers to compensate for the loss of income and payroll taxes formerly paid by human workers. Will robots (like corporations) be allowed to own property, open bank accounts, and be responsible for paying taxes?

More questions pop into my mind. Can robots enter into civil or commercial partnerships, own other robots, bequeath and inherit property?

Perhaps most disturbing of all, Musk believes we are going to have to merge with these machines. For humans to remain economically useful and retain control of society, we will have to communicate with electronic entities in ways more rapid than our fingers can type, our mouths can speak, or our thumbs can swipe across an iPhone. “Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem,” said Musk.

Personally, I don’t want a high-bandwidth interface between my brain and technology; at the very least, I wish to retain a choice in the matter. To protect our freedom, we must exercise the uniquely human quality of empathy. We must actively affirm the value and worth of human beings above and beyond their economic utility.

We live on the cusp of an era that offers great promise of prosperity. And yet, we cannot blithely glide into the future on autopilot. If we do, we will encounter things we do not wish to happen. Leaving it up to ‘Silicon Valley’ market forces is a recipe for disaster. We need diverse, lively public discourse about the purpose and deployment of robots and A.I. We need bright ideas for coping with the income disparity, social disruption and legal ambiguity they are instigating. Reactionary defense is not a winning strategy. We must play offense if we are to shape a future we want to live and flourish in.
