Study: AI cannot simulate the human brain's processing of dynamic facial expressions

Artificial intelligence has demonstrated strong performance in facial recognition technology, sometimes even surpassing humans. A recent study found, however, that although AI has strong recognition capabilities for static images, its performance differs significantly from that of the human brain when processing dynamic facial expressions.

Image source note: the illustration is AI-generated; image authorized by Midjourney.

The research team, from Dartmouth College and the University of Bologna, studied deep convolutional neural networks (DCNNs), a key AI architecture for recognizing visual images. The network's name and structure are inspired by the organization of the visual pathways in the human brain: a multi-layer hierarchy that grows in complexity layer by layer.
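To make the layer-by-layer idea concrete, here is a minimal PyTorch sketch of a small DCNN. The layer sizes and the hypothetical `NUM_IDENTITIES` output are illustrative assumptions, not the architecture used in the study:

```python
import torch
import torch.nn as nn

# A minimal DCNN sketch: complexity grows layer by layer, loosely mirroring
# the hierarchy of the visual pathway. All sizes here are illustrative and
# are NOT the architecture used in the study.
NUM_IDENTITIES = 100  # hypothetical number of face identities to classify

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),   # early layer: simple local features (edges)
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # middle layer: combinations of features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(64, 128, kernel_size=3, padding=1), # later layer: increasingly complex patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, NUM_IDENTITIES),               # final read-out over face identities
)

# A single 224x224 RGB image passes through the whole hierarchy:
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 100])
```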

Problems arise, however, with dynamic facial expressions. Current AI designs are mainly intended for static-image recognition, and this study found that AI cannot effectively simulate the way the human brain works when processing changing expressions.

In the study, the team ran tests using facial videos spanning different ethnicities, ages, and expressions, unlike previous work that relied on static images. The results showed that while the brain's neural representations of faces were highly similar across participants, and the artificial encodings of faces were likewise highly similar across different DCNNs, the correlation between DCNN representations and brain activity was weak. This suggests that current artificial neural networks capture only a small part of the information the human brain uses, especially when processing dynamic faces.
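The comparison described here resembles representational similarity analysis: build a dissimilarity structure over the same set of face stimuli from brain responses and from DCNN activations, then correlate the two. The sketch below uses hypothetical random arrays (`brain_responses`, `dcnn_activations`) as stand-ins for real data; it illustrates the general technique under those assumptions, not the study's exact pipeline:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: responses to the same 50 face videos, arranged as
# (stimuli x features) matrices. Real data would come from brain recordings
# and from a DCNN layer's activations; random arrays are stand-ins here.
brain_responses = rng.standard_normal((50, 500))
dcnn_activations = rng.standard_normal((50, 4096))

# Representational dissimilarity: correlation distance between the
# responses to every pair of stimuli, as a condensed vector.
brain_rdm = pdist(brain_responses, metric="correlation")
dcnn_rdm = pdist(dcnn_activations, metric="correlation")

# Correlate the two dissimilarity structures. A weak correlation, as the
# study reports for dynamic faces, means the DCNN's representational
# geometry captures little of the brain's.
rho, p = spearmanr(brain_rdm, dcnn_rdm)
print(f"brain-DCNN representational similarity: rho={rho:.3f} (p={p:.3g})")
```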

"Scientists have tried to use deep neural networks as a tool to understand the brain, but our results show that this tool is currently far from the brain," said Jiahui Guo, PhD, one of the co-lead authors of the study.

The study emphasizes that the human brain processes faces not only to tell one face from another, but also to infer other information, such as mental state, friendliness, and trustworthiness. Current DCNNs are designed only to recognize faces and cannot cover these complex cognitive processes.

Professor James Haxby noted: "When you look at a face, you get a lot of information about that person, including what they might be thinking, how they are feeling, and what impression they are trying to convey." In contrast, AI can only determine whether one face is different from another and cannot perform deeper cognitive processing.

The study suggests that in order for AI networks to more accurately reflect the way the human brain processes facial information, developers need to build algorithms based on real-life stimuli rather than just relying on static images. This means that when designing AI systems, the complexity of dynamic facial expressions needs to be taken into account to better simulate the processes of human cognition and social interaction.
