By carefully observing people telling lies during high-stakes court cases, researchers at the University of Michigan are developing unique lie-detecting software based on real-world data.
Their lie-detecting model considers both the person’s words and gestures, and unlike a polygraph, it doesn’t need to touch the speaker in order to work.
In experiments, the prototype was up to 75 percent accurate in identifying who was telling a lie (as defined by trial outcomes), compared with humans’ scores of just above 50 percent. The tool might be helpful one day for security agents, juries, and even mental health professionals.
The researchers say they’ve identified several red flags of lying behavior. For example, in the videos, lying people moved their hands more. They tried to sound more certain. And, somewhat counterintuitively, they were slightly more likely to look their questioners in the eye than people thought to be telling the truth, among other behaviors.
To develop the software, the researchers used machine-learning techniques to train it on a set of 120 video clips from media coverage of actual trials. Some of the clips they used were from the website of The Innocence Project, a national organization that works to exonerate the wrongfully convicted.
The “real-world” aspect of the work is one of the main ways it differs from earlier lie-detection research.
“In laboratory experiments, it’s difficult to create a setting that motivates people to truly lie. The stakes are not high enough,” said Dr. Rada Mihalcea, a professor of computer science and engineering, who leads the project with Dr. Mihai Burzo, an assistant professor of mechanical engineering at the University of Michigan.
“We can offer a reward if people can lie well — pay them to convince another person that something false is true. But in the real world there is true motivation to deceive.”
The videos include testimony from both defendants and witnesses. In half of the clips, the subject is deemed to be lying. To determine who was telling the truth, the researchers compared their testimony with trial verdicts.
The researchers transcribed the audio, including vocal fillers such as “um,” “ah,” and “uh.” They then analyzed how often subjects used various words or categories of words. They also counted the gestures in the videos using a standard coding scheme for interpersonal interactions that scores nine different motions of the head, eyes, brow, mouth, and hands.
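The word-category counting step can be sketched in a few lines of Python. The category lists below are illustrative placeholders, not the researchers’ actual lexicon, which the article does not specify:

```python
from collections import Counter
import re

# Illustrative word categories only; the study's real lexicon is not given here.
CATEGORIES = {
    "filler": {"um", "ah", "uh"},
    "self_reference": {"i", "we", "me", "us"},
    "other_reference": {"he", "she", "they"},
    "certainty": {"definitely", "absolutely", "never", "always"},
}

def count_categories(transcript: str) -> dict:
    """Count how often each word category appears in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    counts = Counter(words)
    return {name: sum(counts[w] for w in vocab)
            for name, vocab in CATEGORIES.items()}

features = count_categories("Um, I never saw him. He was, uh, definitely not there.")
print(features)
# {'filler': 2, 'self_reference': 1, 'other_reference': 1, 'certainty': 2}
```

Per-clip counts like these, alongside the hand-scored gesture tallies, form the feature vectors the system learns from.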
Then they fed the data into their system, allowing it to sort the videos. When it used input from both the speaker’s words and gestures, it was 75 percent accurate in identifying who was lying. That’s much better than humans, who scored only slightly better than a coin flip.
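The classification step can be sketched as follows, assuming each clip has already been reduced to a numeric vector of word-category and gesture counts. The feature names and training values here are invented for illustration, and a simple nearest-centroid rule stands in for whatever learning algorithm the team actually used:

```python
import math

# Each clip -> feature vector: [filler_count, both_hand_gestures, direct_gaze, scowls].
# Toy training data; the values are invented for illustration only.
train = [
    ([5, 4, 8, 2], "deceptive"),
    ([6, 5, 7, 3], "deceptive"),
    ([1, 1, 5, 0], "truthful"),
    ([2, 2, 6, 1], "truthful"),
]

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train_centroids(data):
    """Group training vectors by label and compute one centroid per class."""
    by_label = {}
    for vec, label in data:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vs) for label, vs in by_label.items()}

def classify(vec, centroids):
    """Assign the label of the nearest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda lbl: math.dist(vec, centroids[lbl]))

centroids = train_centroids(train)
print(classify([5, 4, 7, 2], centroids))  # near the "deceptive" centroid
```

The key idea the sketch preserves is multimodal fusion: verbal counts and gesture counts sit side by side in one vector, so the classifier can exploit both channels at once.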
“People are poor lie detectors,” Mihalcea said. “This isn’t the kind of task we’re naturally good at.
“There are clues that humans give naturally when they are being deceptive, but we’re not paying close enough attention to pick them up. We’re not counting how many times a person says ‘I’ or looks up. We’re focusing on a higher level of communication.”
In the clips of people lying, the researchers found the following common behaviors:
- Liars were more likely to scowl or contort the whole face, in 30 percent of lying clips vs. 10 percent of truthful ones;
- Liars were more likely to look directly at the questioner, in 70 percent of lying clips vs. 60 percent of truthful ones;
- Liars were more likely to gesture with both hands, in 40 percent of lying clips vs. 25 percent of truthful ones;
- Liars were more likely to use vocal fillers such as “um”;
- Liars were more likely to distance themselves from the action with words such as “he” or “she” rather than “I” or “we,” and to use phrases that reflected certainty.
“We are integrating physiological parameters such as heart rate, respiration rate, and body temperature fluctuations, all gathered with non-invasive thermal imaging,” Burzo said. “Deception detection is a very difficult problem. We are getting at it from several different angles.”
For this work, the researchers themselves classified the gestures, rather than having the computer do it. They’re in the process of training the computer to do that.
The findings were presented at the International Conference on Multimodal Interaction and are published in the 2015 conference proceedings.
Source: University of Michigan