Google Says Its PaliGemma 2 Artificial Intelligence Model Can Recognize Emotions, Sparking Expert Concerns

December 8 news: Google said its new family of AI models has a curious feature, the ability to "recognize" emotions.

Google on Thursday released PaliGemma 2, the latest series in its AI model lineup. The model can analyze images, generate image descriptions, and answer questions about the people in a photo. In its blog post, Google says that PaliGemma 2 not only recognizes objects but can also generate detailed, contextually relevant image captions covering actions, emotions, and the overall narrative of a scene.

PaliGemma 2's emotion recognition does not work out of the box and requires specialized fine-tuning, but experts remain concerned nonetheless.
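To make the captioning capability described above concrete, the sketch below shows how a PaliGemma-style checkpoint might be queried through Hugging Face transformers. This is a minimal sketch rather than Google's official sample code: the checkpoint id "google/paligemma2-3b-pt-224" and the "<image>caption en" prompt convention are assumptions carried over from how the earlier PaliGemma models are used.

```python
# A minimal sketch (not Google's official example) of asking a PaliGemma-style
# vision-language model for an image caption via Hugging Face transformers.
# The checkpoint id and prompt format below are assumptions based on the
# earlier PaliGemma releases.
import torch
from PIL import Image
from transformers import PaliGemmaForConditionalGeneration, PaliGemmaProcessor

model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
processor = PaliGemmaProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

image = Image.open("photo.jpg")   # any local image
prompt = "<image>caption en"      # captioning prompt in the PaliGemma convention

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=40)

# Strip the prompt tokens so only the newly generated caption is decoded.
prompt_len = inputs["input_ids"].shape[-1]
caption = processor.decode(generated[0][prompt_len:], skip_special_tokens=True)
print(caption)
```

As the article notes, producing emotion labels would require additional fine-tuning on labeled data on top of a setup like this, not just a different prompt.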

Many tech companies have been trying for years to develop AI capable of recognizing emotions, and while some claim to have made breakthroughs, the scientific foundations of the technology remain controversial. Most emotion recognition systems are based on the work of psychologist Paul Ekman, who theorized that humans share six basic emotions: anger, surprise, disgust, joy, fear, and sadness. However, subsequent studies have shown that people from different cultures express emotions in significantly different ways, calling the universality of emotion recognition into question.

Mike Cook, a researcher specializing in artificial intelligence at King's College London, said emotion recognition is not feasible in the general case because human emotional experience is highly complex. People can observe others and infer something about how they feel, but there is no comprehensive, perfect solution to emotion detection.

Another problem with emotion recognition systems is reliability and bias. Some studies have found that facial analysis models can be biased toward certain expressions, such as smiles, while more recent research has shown that emotion analysis models assign negative emotions to Black faces more often than to white faces.

Google said PaliGemma 2 underwent "extensive testing" to evaluate demographic bias and showed "lower levels of toxicity and profanity than industry benchmarks." However, the company did not disclose the full set of benchmarks it used, nor did it specify what kinds of tests were conducted. The only benchmark Google named is FairFace, a facial dataset containing tens of thousands of portraits. Google claims PaliGemma 2 performs well on that dataset, but some researchers have criticized FairFace as a biased measure, arguing that it represents only a handful of racial groups.

Interpreting emotions is a fairly subjective matter that goes beyond the use of visual aids and is deeply embedded in personal and cultural contexts, says Heidy Khlaaf, lead AI scientist at the AI Now Institute.

The EU's Artificial Intelligence Act prohibits schools and employers from deploying emotion-recognition systems, but allows law enforcement agencies to use them, according to IT Home.

If this so-called emotion recognition is built on pseudo-scientific assumptions, Khlaaf says, the capability could be used to further discriminate against marginalized groups in areas such as law enforcement, human resources, and border control.

A Google spokesperson said the company is confident in its "characterization of harm" testing for PaliGemma 2 and has conducted extensive ethical and safety evaluations.
