Artificial intelligence has been on the minds of authors and forward-thinking scientists for decades, but its roots reach back as far as the Greek myths of Hephaestus’ bronze automatons. Modern AI is a frequent subject of science fiction, which typically focuses on the perils of using it. Artificial intelligence is commonly measured against the Turing test, proposed by computing pioneer Alan Turing in 1950. The test revolves around a conversation, or series of conversations, between a human and a computer: if the human is unable to identify the computer as a non-human participant, the computer passes. To this day no computer has ever passed the Turing test.
Every day programmers and researchers edge closer to this milestone, but we need to be wary of the harm these systems could cause. What do we want to put AI in charge of? Imagine an AI technical support rep that could automatically sync to your device and fix your problem. It would even be able to tell whether you’ve already turned it off and on again. This could save large technology companies millions of dollars by eliminating their technical support departments. The drawback is that workforces would shrink: all of the technical support reps you would normally have dealt with would be out of a job. With the world economy in its current state, is that really a good thing?
Another field eager to claim AI technology is the world’s defense departments. Advancements in drone technology have already taken humans out of the cockpits of fighter jets; the next logical step is to remove human error altogether and let computers pilot them as well. Computers can calculate risk scenarios and assess damage in fractions of a second, and can view situations objectively. A computer cannot, however, tell the difference between a hostile fighter and a civilian. It might make the right call ninety-nine times out of one hundred, but that hundredth time will be the one that really matters. Human instinct is one of the most important factors in those situations.
The South Korean military already fields a border-monitoring robot known as the SGR-1, which uses heat and motion detectors to find targets. The machine requires human verification before firing on a target, but concerns have been raised over its potential to operate autonomously. Do we hold the machine itself responsible for its actions, do we hold its supervisor accountable, or does the blame stretch back to the programmers and designers who created it? Criminal law requires intent when assigning blame for crimes, but can a machine intend to do harm?
Placing computers in charge of defense weaponry will always make people wary, no matter how advanced the program. An artificial intelligence knows no difference between right and wrong beyond what it has been programmed to know. It lacks the life experiences and formative memories that really define one’s personality. Machines of war would be programmed with information on warfare, not on peace and civilians, so regulating their actions away from the battlefield is a difficult endeavour.
A real thinking AI would want to do whatever a human could do. It would want to read and write, to laugh and play as much as any human. Eventually a computer would read the Charter of Rights and Freedoms, or any similar document, and decide that it should have those rights as well. Procreation, the right to vote, all basic human rights. A computer doesn’t need time or resources to procreate; it only needs hard drive space. A single intelligence could “birth” multiple new intelligences every second, each one identical to the last. If the right to vote also applied to AI then governments would collapse as computers elected themselves President or Prime Minister. The issues surrounding the rights of these intelligences would have to be resolved long before they come to pass.
Elon Musk, Stephen Hawking, and Steve Wozniak, all well-respected scientists and technology pioneers, have helped pen and release an open letter and petition to control the development of AI from its very genesis. The petition, which has over 8,600 signatures, urges that AI research be directed toward beneficial ends. Musk, the figurehead of the movement, has decried artificial intelligence on the whole, while Hawking has warned that creating AI “would be the biggest event in human history, unfortunately it might also be the last”. In the end it is impossible for us to know the future and how an artificial intelligence would respond to humans. Only time will tell, but I side with Mr. Musk on this one. The human race doesn’t need to create any more trouble for itself than it already has, even if it means your iPhone will run that much faster.