Scientists use computerized algorithm to detect lies during the 2016 presidential debates


A computer algorithm could help determine whether political candidates are being truthful or deceptive. New research published in Applied Cognitive Psychology used a computerized linguistic algorithm grounded in memory theory to analyze the veracity of statements from the 2016 presidential debates.

“I was interested in this topic because I had investigated deceptive language using the Reality Monitoring algorithm with prisoners and parolees in the past, and there are assertions in the literature that politicians have become more deceptive in the last several decades,” said study author Gary D. Bond, an associate professor of psychology at Eastern New Mexico University.

“I was actually teaching the language unit in my cognitive psychology class and discussing the Reality Monitoring framework with students (true memories hold more perceptual, spatial, temporal, and affective information, while false memories hold greater evidence of cognitive operations), and a student wondered if we could use that framework to detect false information in politicians’ debate statements.”

“Reality Monitoring has been applied to linguistic lie detection in the past, and I had used an algorithm in Linguistic Inquiry and Word Count (LIWC) software to automatically code verbal statements to determine the probability of statement veracity,” Bond told PsyPost. “So I invited the student and other students to work in my lab on the research, and we collected all of the debate language from the Democratic and Republican primary debates.”
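The study used LIWC's dictionary-based coding, which tallies how often a statement's words fall into predefined categories (perceptual, spatial, temporal, affective, and cognitive). A minimal sketch of that style of coding is below; the category names follow the Reality Monitoring framework described above, but the word stems are illustrative stand-ins, not LIWC's actual proprietary dictionaries, and the rate-per-100-words output is a common LIWC convention rather than the authors' exact scoring.

```python
import re

# Illustrative stem lists only -- real LIWC dictionaries contain
# thousands of entries per category.
RM_CATEGORIES = {
    "perceptual": ("look", "hear", "see", "feel"),
    "spatial":    ("above", "near", "inside", "around"),
    "temporal":   ("before", "after", "when", "then"),
    "affective":  ("happy", "afraid", "proud", "angry"),
    "cognitive":  ("think", "know", "cause", "believe"),
}

def rm_profile(statement: str) -> dict:
    """Return each category's rate per 100 words in a statement.

    Stem matching (word.startswith) mimics LIWC's wildcard entries,
    so "caused" and "causing" both count toward the "cause" stem.
    """
    words = re.findall(r"[a-z']+", statement.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {
        cat: 100 * sum(any(w.startswith(s) for s in stems) for w in words) / total
        for cat, stems in RM_CATEGORIES.items()
    }

profile = rm_profile("Look, I think we all know what caused this.")
```

Under the Reality Monitoring framework, a classifier would then compare these category rates against thresholds or norms: statements heavy in cognitive-operation words relative to perceptual, spatial, and temporal detail would be flagged as more likely deceptive.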

The researchers analyzed statements made by Hillary Clinton, Ted Cruz, John Kasich, Marco Rubio, Bernie Sanders, and Donald Trump. They compared the results of their Reality Monitoring deception detection algorithm to the fact-checking website PolitiFact and found that the algorithm could help to differentiate truth from lies.

But the researchers had to tweak the algorithm to cope with the politicians’ language. Unlike other subjects in Reality Monitoring research, the politicians had extensive debate training and preparation.

Contrary to previous findings, the researchers found that the politicians tended to use more perceptual and cognitive words when lying. The politicians were more likely to say things like “look at” or “hear” (perceptual) along with “cause” or “know” (cognitive) in fact-checked lies than in truthful statements.

Meanwhile, words related to space, time, and emotion — which previous research linked to truthful statements — did not reliably discriminate between facts and deception.

“Politicians use a variety of strategies to engage in deceptive linguistic acts, including painting rosy futures with their policies, distorting or disregarding facts, and engaging in extensive image-making using a liberal dose of deception to look good to the electorate,” Bond said.

Like all research, the study has some limitations.

“One caveat in our research is that computer-coding of language may not capture what human coders can: our results were driven by the extensive usage of words related to cognitive operations (‘I think,’ ‘I remember,’ and other words and phrases that relate to thought processes that reflect elaborative or imaginative processes),” Bond explained.

“A second problem in a study like this is the limited number of words in fact-checked statements that politicians produce. It is best to have a large corpus of words to sample from in order to tag words that relate to Reality Monitoring categories, but fact-checked ground truth statements are short. Further research is needed with a larger corpus of words from politicians to gather a better understanding of the features of their truth and lie statements.”

The study, “‘Lyin’ Ted’, ‘Crooked Hillary’, and ‘Deceptive Donald’: Language of Lies in the 2016 US Presidential Debates”, was co-authored by Rebecka D. Holman, Jamie-Ann L. Eggert, Lassiter F. Speller, Olivia N. Garcia, Sasha C. Mejia, Kohlby W. Mcinnes, Eleny C. Ceniceros, and Rebecca Rustige.
