Study: AI can detect lies better than humans, but it can have an impact on social interactions

In an era in which fake news, dubious statements by politicians, and manipulated videos are becoming increasingly ubiquitous, new research published by the University of Würzburg, Germany, on December 12 shows that artificial intelligence performs better than humans at detecting lies.


Image source: Pixabay

Researchers from Würzburg, Duisburg, Berlin and Toulouse explored the effectiveness of AI in detecting lies and its impact on human behavior. The key findings of this study can be summarized as follows:

  • AI outperforms humans in accuracy at text-based lie detection.

  • People are reluctant to accuse others of lying without AI support.

  • With the support of AI, people are more likely to voice the suspicion that they have encountered a lie.

  • Only about one-third of study participants took the opportunity to ask the AI for an assessment. However, most of those who did followed the algorithm's advice.

In preparation for the study, the research team asked nearly 1,000 people to write down their plans for the upcoming weekend. In addition to a true statement, each person was also asked to write a fictional statement about their plans, and the team offered an honorarium to make these fictional accounts "as convincing as possible". After quality checking, the team ended up with a dataset of 1,536 statements by 768 authors.

Based on this dataset, the research team developed and trained a lie detection algorithm using Google's open-source language model BERT. After training, the algorithm correctly identified nearly 81% of the lies in the dataset.
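The article does not reproduce the team's training code; the following is a minimal sketch of how a binary truth/lie classifier of this kind could be fine-tuned with the Hugging Face transformers library. The checkpoint name, the file "statements.csv", its column names, and all hyperparameters are illustrative assumptions, not details from the study.

```python
# Sketch: fine-tuning BERT as a binary lie/truth text classifier.
# Assumptions (not from the paper): a German BERT checkpoint, and a CSV
# with columns "text" and "label" (0 = truthful, 1 = lie).
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL = "bert-base-german-cased"  # hypothetical choice; the paper only says "BERT"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# Load the statements and hold out 20% for evaluation.
dataset = load_dataset("csv", data_files="statements.csv")["train"]
dataset = dataset.train_test_split(test_size=0.2, seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="lie-detector",
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"])
trainer.train()
```

In practice one would also pass a compute_metrics function to Trainer so that accuracy is reported on the held-out split, which is how a figure like the reported ~81% would be measured.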

For the main study, the team randomly selected 510 statements and recruited an additional 2,040 participants. These subjects were divided into four groups and asked to judge whether the statements they received were true or false.

  • Group I: evaluated the statements without any AI support.

  • Group II: always saw the algorithm's assessment before making a judgment.

  • Groups III and IV: could voluntarily request an AI assessment for a small fee.

Note: in practice, the third group comprises those participants who did not request an AI recommendation, while the fourth group comprises those who requested and received one.

The results of the experiment are as follows:

  • Members of the first group achieved an accuracy of 46.5%, roughly equivalent to random guessing.

  • Members of the second group achieved an accuracy of 60.1%.

  • People are generally reluctant to accuse others of lying. In the first group, fewer than 20% of members chose to call out a lie; in the second group, which automatically received the AI's assessment, that figure was around 30%; and in the fourth group, the "accusation rate" rose significantly to about 58%.

  • Only about one-third of participants requested assistance from the lie detection algorithm. Those who did request a prediction tended to follow the algorithm's recommendation, with a compliance rate of about 88%.

  • Of those who automatically received an AI assessment, only 57% followed its recommendations.

  • This difference became even more pronounced when the AI judged a statement to be a "lie": 85% of those who requested an AI assessment agreed with the AI's judgment, while only 40% of those who automatically received an AI assessment followed the AI's advice.

Paper: https://doi.org/10.1016/j.isci.2024.110201
