Point vs. Counterpoint

For Researching AI

A large portion of science fiction deals with the theme of artificial intelligence, and more often than not with the destruction or subjugation of the human race because of its development. One just has to watch the Terminator movies to know the extent of the damage a rogue AI can cause. Now that smarter machines are becoming a reality, the debate over whether to continue research in the field is shifting from literature to the conference rooms of governments, corporations, and universities. So, considering the potential danger artificial intelligence poses to us, is it still ethical to continue its development? As with so many other technologies, the answer isn’t black and white: each application of AI needs to be considered separately from the rest. For some applications, it’s clear that using an AI is the ethical choice; for others, the ethical implications aren’t quite so clear-cut.

Let’s start with the applications of AI that are clearly the ethical alternative, which for the most part include tasks that are extremely dangerous for humans to do. Take space exploration, for instance – ethically there’s little difference between sending a probe and sending an AI to space, whereas the ethical benefit of sending an AI rather than a human is huge. Sending people to space isn’t known for being safe, and it sounds especially bad when you phrase it this way: “Taxpayers paid $450 million to watch seven people die on national television.” The fewer people we send on space exploration missions, the better, and the development of artificial intelligence is one way to continue space missions without endangering humans, while gaining benefits such as not needing to bring the AI back to Earth, not needing to send food and water, and not needing to monitor its psychological state. The same goes for other tasks such as undersea exploration and construction, as well as humanitarian work like providing aid to victims at a disaster site.

Now for the contentious areas, such as the applications where a human puts their life in the hands of a machine. The use of AI that we’ve been hearing the most about recently is the self-driving car, which raises questions such as, “Should a machine be directly responsible for the life of a human being?” and “Who is at fault if a self-driving car hits something?” Probably one of the most common misconceptions about self-driving cars, and AI in general, is that their output is a complete unknown, an idea that comes from popular culture (SkyNet again) as well as from the fact that it’s very hard for the layperson to know what a machine is “thinking”. If that were truly the case, there’s no way self-driving cars would ever be allowed on the roads. In reality, self-driving cars are programmed to follow the rules that engineers give them. For that reason, I think that the engineer should be at fault if part of the automation software malfunctions, in the same way that an engineer is at fault if the engine explodes. So when you’re “putting your life in the hands of an autonomous machine”, you’re really putting your life in the hands of the engineer who designed the automation software, in the same way you trust your car’s mechanical components not to randomly fail, or you trust a taxi driver to get you safely from point A to point B. So I’ll argue that, from a passenger’s point of view, if a self-driving car is statistically safer than a human operator, we should all be in favour of researching self-driving cars, and by extension artificial intelligence in general.

Obviously the snag with this plan is that, if AI turns out to be better than humans at certain tasks, then a lot of people are going to lose their jobs. Considering the chaos that has historically ensued from high unemployment rates in a society where people need jobs to survive, if we are to continue research on AI systems, governments and companies will have to find a way to prevent or mitigate the civil unrest that mass unemployment could cause. Undoubtedly there will be many challenges ahead, but the development of new technologies has always resulted in better living conditions for those who have access to them, and given that AI software is so accessible (it’s already in everyone’s phones), this could even be a way to mitigate inequality in our current society.

Oh, one last reason to study artificial intelligence: if someone messes up and the Terminators are coming for us, knowing how they work could help you survive…
