How often have you lied and gotten away with it? Conversely, how often does everyone seem to think you are lying when you are actually telling the truth? Between 1759 and 1962, approximately 710 Canadians received the death penalty and were publicly executed; more than twice that number were sentenced to death. In 1976, the death penalty was outlawed in Canada, perhaps because of the troubling possibility that a person sentenced to die may not in fact be guilty of the crime they allegedly committed. As humans we deal in possibilities, probabilities, hunches and sometimes bribes, but rarely in absolute certainties. A concrete way of detecting deception and increasing public safety is required, and a new digital form of border screening may be the beginning of a whole new kind of legal system.
“Deception is futile when Big Brother’s lie detector turns its eyes on you” is the title of an article in the February 2013 edition of Wired magazine. The “interrogation bot” it describes is still being tested, and aims to make deceiving border officials more challenging. You walk up to a kiosk at the border, where a friendly human avatar asks you a battery of standardized questions. Answer truthfully and you are on your way; lie and you will be flagged for a human-to-human interrogation. This Border Guard Bot is the brainchild of 75-year-old Jay Nunamaker and his co-conspirator, 64-year-old Judee Burgoon. Nunamaker is a mechanical engineer who has spent his career on the cutting edge of software development, while Burgoon is a psychologist. Combining their respective training has allowed them to identify key aspects of deceptive behaviour and create a system to detect them.
This may be starting to sound quite a bit like the prologue to the movie Minority Report: humans are being replaced by machines. But hasn’t that been happening for well over a century now? Where analysts once spent hundreds of hours in secret rooms sifting through surveillance footage, software might be able to flag areas of interest in real time. Although our brains’ neural networks are incredibly complex, they come in a highly inefficient package that requires frequent injections of nutrients and caffeine while still needing to be powered down for roughly half the time they are active. In this fast-paced world, if you can strip a task down to its simplest components and build a robot to do it for you, then why not? People simply aren’t efficient enough.
Many people will argue that only humans can detect the nuances of a conversation and catch lies. Besides, we will always be needed for our intellect, analytical abilities and abstract thought… right? Yet where a human security officer can only monitor a couple of things at once, the machine can constantly monitor many. Slight changes in pitch or shifting eyes, cues humans have identified as indicators of untruthfulness, can be monitored continuously and the changes tagged. With the help of Arizona graduate student Aaron Elkins, dozens of deception-detecting technologies were narrowed down to three main systems: an HD camera documents the interview while an infrared camera captures pupil dilation and glance locations, a fingerprint scanner confirms a person’s identity, and a microphone captures key changes in vocal pitch. Tied together by software affectionately dubbed the “Agent 99 Analyser”, the streams are ready for real-time analysis. The computer flags the degree of deception by assigning colours to questions: green is good, yellow might be okay, but red means the person is probably lying or being evasive.
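To make the traffic-light scheme concrete, here is a minimal sketch of how a fusion layer like the “Agent 99 Analyser” might combine the three sensor channels into a per-question flag. The channel names, weights, and thresholds below are illustrative assumptions, not details of the actual system, whose internals are not public.

```python
# Hypothetical sketch: fusing three sensor channels into a per-question
# deception flag. Weights and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class QuestionReading:
    pupil_dilation: float   # normalized change from baseline (infrared camera)
    gaze_shift_rate: float  # glance-location changes per second (infrared camera)
    pitch_variation: float  # normalized deviation in vocal pitch (microphone)

# Assumed channel weights; a real system would learn these from data.
WEIGHTS = {"pupil_dilation": 0.4, "gaze_shift_rate": 0.3, "pitch_variation": 0.3}

def deception_score(r: QuestionReading) -> float:
    """Weighted sum of normalized cues: 0.0 (calm) to 1.0 (highly stressed)."""
    return (WEIGHTS["pupil_dilation"] * r.pupil_dilation
            + WEIGHTS["gaze_shift_rate"] * r.gaze_shift_rate
            + WEIGHTS["pitch_variation"] * r.pitch_variation)

def colour_flag(score: float) -> str:
    """Map a score to the colour scheme described in the article."""
    if score < 0.4:
        return "green"   # consistent with a truthful baseline
    if score < 0.7:
        return "yellow"  # ambiguous; might be okay
    return "red"         # probably lying or being evasive

# Example: a question that produced dilated pupils and a jump in pitch.
reading = QuestionReading(pupil_dilation=0.8, gaze_shift_rate=0.5, pitch_variation=0.9)
print(colour_flag(deception_score(reading)))  # -> "red"
```

The point of the sketch is simply that each answer is scored against the traveller’s own baseline across several channels at once, something a single human observer cannot do continuously.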
The lie-detecting industry is plagued by ambiguity. Its biggest technological advance occurred in 1921 with the creation of the polygraph machine, long acknowledged to be unreliable for screening purposes and only fairly effective in conjunction with experienced interrogators. New tech is long overdue. In the most recent test, with a pool of 35 people, the system caught 100% of those asked to present fake papers, with only two false positives, while real-life border guards didn’t catch any of the impersonators.
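For context on those numbers: the article reports only the totals, not how many of the 35 participants were impersonators, so the false positive rate depends on an assumed split. The back-of-the-envelope sketch below makes that assumption explicit.

```python
# Hypothetical arithmetic on the reported trial: 35 participants, every
# impostor caught, two truthful travellers wrongly flagged. The number of
# impostors in the pool was not reported, so it is a parameter here.

def screening_metrics(pool_size: int, n_impostors: int, false_positives: int):
    """Return (sensitivity, false_positive_rate) for a perfect-recall trial."""
    n_truthful = pool_size - n_impostors
    sensitivity = 1.0                              # all impostors were flagged
    false_positive_rate = false_positives / n_truthful
    return sensitivity, false_positive_rate

# If, say, 10 of the 35 participants carried fake papers (an assumption):
sens, fpr = screening_metrics(pool_size=35, n_impostors=10, false_positives=2)
print(f"sensitivity: {sens:.0%}, false positive rate: {fpr:.0%}")
# -> sensitivity: 100%, false positive rate: 8%
```

Even under generous assumptions, a pool of 35 is far too small to promise that 100% detection would hold up at the scale of a real border crossing.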
While we will never be able to trust it completely, this technology may increase our ability to single out real threats to security.