Futurists (people who make predictions about the future based upon current trends, especially with regard to technology) often suggest that machines will eventually take over the world. They typically see machines as somehow attaining the ability to think and act of their own accord, ignoring—if they so desire—the commands of people.
The futurists are right about machines taking over, but it will not happen because of innate intelligence on the part of computers. It will happen (and is happening already) because people put so much faith in the competence of computers that they are extremely unlikely to contradict these machines.
Last year I had the misfortune of having to replace my washing machine and a heat pump. Despite having spent way more money than I would have liked, I encountered months-long problems with these appliances, each of which depends upon a computer for proper functioning (an idea I dislike intensely).
Each computer informed the technicians that certain components needed to be replaced, which they dutifully took care of without question. Yet the original problem persisted. Following months of basically refurbishing my two brand-new pieces of equipment, I insisted that the computers had to be the problem. I could not believe what a hard sell this idea was!
People so believe in the infallibility of computers—even though they are no better than the people who make and program them, not all of whom possess enough skill to do a good job—that no one wanted to accept that they could be spewing incorrect diagnoses! But, yes, they were.
Once the computer was replaced in my heat pump system, it worked fine. However, even after the computer was replaced in the washing machine, it continued to put out error messages later found to be incorrect. In the end, the manufacturer had to replace the entire machine for me.
Do you think that after this experience I would want to place my life solely under the control of a computer? Absolutely not. What I witnessed was several perfectly capable technicians doubting their own competence and refusing to make their own informed decisions because of the supposed superiority of a machine.
Indeed, kowtowing to computers can be very dangerous. On May 19, 2017, the “man who saved the world” died at his home in Moscow with little fanfare. Yet if this Soviet military officer of the Cold War era hadn’t had the courage of his convictions, nuclear war could have ensued.
During the early hours of September 26, 1983, Stanislav Petrov’s computers identified five U.S. missiles headed towards Moscow. Mr. Petrov had only twenty minutes to act. Based upon my own experience, I believe most, if not all, people in his position would have warned the military of an impending nuclear attack. Instead, this man—unafraid to use his own intelligence—informed his superiors of a system malfunction.
In a 2013 interview with the BBC’s Russian service, he said that he had all of the data to suggest an ongoing missile attack, and if he had sent his report up the chain of command, nobody would have said a word against it. “The siren howled, but I just sat there for a few seconds, staring at the big, back-lit, red screen with the word ‘launch’ on it.”
An investigation later found that Soviet satellites had mistaken sunlight reflecting off clouds for intercontinental ballistic missile engines. In 1999 Mr. Petrov told The Washington Post that he did not rush to start a war because “We needed to understand, ‘What’s next?’” His gut feeling was that people don’t start a war with only five missiles. The New York Times reported that he said his decision to stand down was, at best, a “50-50 guess.”
But Stanislav Petrov employed common-sense analysis, undoubtedly saving the world from a catastrophe. How sad that the death of such a brave man should have received so little notice, the significance of his decision basically unrecognized and underappreciated.
While most situations do not involve life-and-death decisions, people’s inability to act because of their reluctance to contradict a computer certainly wastes time and money for everyone involved. It can also have serious consequences for humans and their environment.
Consider the water situation in Charlottesville at the end of the summer of 2017. On September 30, local news agencies reported that water levels at area reservoirs were lower than normal, but the water authority was not expecting to declare a drought watch. Why not?
After all, the director of the state climatology office at the University of Virginia had reported below-normal precipitation since May, and area temperatures had been above normal for much of September. It would be surprising if these two factors did not produce drought conditions, and indeed, they had.
Because I get my water from a well, I worry about groundwater when drought is threatening. Therefore, I had been keeping an eye on the streams in my area, and I witnessed one after another drying up. I wondered why no authorities were discussing the drought we were so obviously experiencing or instituting water-conservation measures.
When the stream at the end of my road dried up on September 29, something I had never seen happen except during the serious drought of 2002, I knew groundwater was in poor shape. Finally, on October 5, the Rivanna Water and Sewer Authority (RWSA) issued a drought warning. Why were they so slow to get folks to limit their water usage?
A major reason is that people today have somehow been made to feel inadequate when it comes to using their own reasoning ability. Thus, numerous straightforward decisions that governmental agencies should determine for themselves are instead farmed out to “experts” who, it is presumed, are better equipped to make them.
Hence, instead of sending someone out to look at streams to see what was happening, the RWSA paid a contractor to run a computer model to predict the probability of a local water shortage. (dailyprogress.com/news/local/water-levels-are-low-but-drought-watch-not-expected/article_e196c0da-a639-11e7-910c-9b50322f67c0.html)
However, a computer program is no better at indicating the likelihood of drought than simple observation of local conditions.
We have been deluded into thinking that computers are infallible, and our naivete leads us to make our lives ever more dependent upon them. When I am out exercising, deliverymen commonly ask me for directions because their GPS devices have led them astray. This would never happen if they relied on a good old-fashioned map, which many younger people no longer even know how to read.
Therein lies the real danger of entrusting computers to take care of so many things in life. When they fail, people are helpless.