Artificial intelligence lie detection technology is available: better than humans, but should be used with caution

In recent years, it has become increasingly difficult to tell true information from false, especially given the flood of fake news and exaggerated propaganda online. Studies have repeatedly shown that humans are poor judges of whether a statement is true or false.


Traditional lie detection methods, such as the polygraph, have long been criticized for their accuracy. Some believe that artificial intelligence (AI) can help us improve our ability to spot lies. One day, AI-based lie detection systems may help us identify false information on social media, evaluate online claims, and even flag exaggerations in job applicants' resumes and interview answers. The question, however, is whether we will trust these systems, and whether they are trustworthy.

Alicia von Schenk, an economist at the University of Würzburg in Germany, and her team recently developed an AI lie detection tool that is significantly more accurate than humans. They ran experiments to explore how people use the tool, and the results showed that people using it are indeed better at identifying lies, but they also accuse more people of lying.

In a paper published in the journal iScience, the researchers asked participants to write about their weekend plans. Half of the participants were asked to lie and were offered a small monetary reward for convincing others of their lies. A total of 1,536 statements were collected from 768 people.

The researchers then fine-tuned Google's AI language model BERT on 80% of the statements, teaching the algorithm to distinguish true statements from false ones. In testing, the tool classified the remaining 20% of statements as true or false with 67% accuracy. By comparison, average human accuracy is only around 50%.
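The evaluation protocol described above (an 80/20 train/test split, then scoring accuracy on the held-out 20%) can be sketched in a few lines of Python. Note that the actual study fine-tuned Google's BERT model; the helper functions below are illustrative assumptions that show only the split-and-score logic, not the classifier itself.

```python
import random

def train_test_split(statements, labels, train_frac=0.8, seed=0):
    """Shuffle the data and split it 80/20, as in the study."""
    idx = list(range(len(statements)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * train_frac)
    train = [(statements[i], labels[i]) for i in idx[:cut]]
    test = [(statements[i], labels[i]) for i in idx[cut:]]
    return train, test

def accuracy(predictions, labels):
    """Fraction of held-out statements classified correctly."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```

With the study's 1,536 statements, such a split would reserve about 308 statements for testing; a classifier scoring 67% on that held-out set would beat the roughly 50% accuracy of human judges.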

To understand how people can detect lies with the help of artificial intelligence tools, von Schenk's team divided another 2,040 volunteers into groups and conducted a series of tests.

In one test, participants could pay a small fee to use the AI lie-detection tool, with a reward for judging correctly; only a third of them chose to use it. Von Schenk speculates that this may be because people are skeptical of the technology, or because they overestimate their own ability to detect lies.

However, the one-third who did choose the tool relied on it heavily. “When people actively choose to rely on this technology, they almost always follow the AI’s predictions… They trust its judgment very much,” von Schenk said.

This reliance changes our behavior. By default, people tend to assume that others are telling the truth. The study confirmed this: although participants knew that half of the statements were lies, they flagged only 19% of them as such. But when people used the AI tool, the proportion flagged as lies rose to 58%.

In some ways, this might be a good thing: tools like these could help us spot more lies in our daily lives, such as misinformation on social media.

But on the other hand, this undermines trust, a foundation of human behavior that helps us build good interpersonal relationships. If more accurate judgment comes at the cost of damaged social relationships, is that accuracy still worth having?

Second, the accuracy problem itself cannot be ignored. The researchers admit that their goal was only to make the AI perform better than humans, but for scenarios such as judging the authenticity of social media content or screening job applicants' resumes, simply being “better than humans” may not be enough: the remaining errors would still produce wrongful judgments and false accusations at scale.

It is worth mentioning that traditional polygraphs also have flaws. Polygraphs are designed to measure heart rate and other physiological arousal indicators because people used to think that stress was a physiological response unique to liars. But this is not the case, which is why polygraph results are generally not accepted in American courts.

Von Schenk points out that because AI lie detection tools can easily be applied at scale, their impact could be even greater. A polygraph can test only a handful of people each day, while the range of applications for AI lie detection tools is virtually limitless.

“Given the current situation with so much fake news and misinformation, this type of technology does have some benefits,” von Schenk said. “However, we really need to test it rigorously to make sure it is significantly more accurate than humans. If AI lie detection tools just lead to a lot of false positives, then maybe it’s better not to use them.”
