With an accuracy rate of 70%, scientists use AI to interpret dog barking

Researchers are using AI to interpret whether a dog's bark is playful or aggressive, and also to identify the dog's age, sex, and breed.

The researchers, working with Mexico's National Institute of Astrophysics, Optics and Electronics (INAOE) in Puebla, found that AI models originally trained on human speech can be used as a starting point for training animal-communication models.


Image source: Pixabay

Rada Mihalcea, director of the University of Michigan's Artificial Intelligence Laboratory, said AI has made significant progress in understanding the subtleties of human speech: it can distinguish fine differences in pitch, tone, and accent, and these research foundations can be applied to understanding dog barks.

One of the main obstacles to developing AI models that can analyze animal vocalizations is the lack of publicly available data. While there are many resources and opportunities for recording human speech, collecting comparable data from animals is far more difficult.

To address this, the team collected barks, growls, and whines from 74 dogs of different breeds, ages, and sexes in a variety of situations.

The team fed the collected recordings into a machine-learning model originally built to analyze human speech. The model interpreted communication between dogs well, reaching an accuracy of 70% across various tests.
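The approach described above amounts to transfer learning: reuse a model pretrained on human speech as a frozen feature extractor, then train a small classifier head on dog vocalizations. The sketch below illustrates that pattern in miniature. The "embeddings" here are synthetic stand-ins for the output of a frozen speech encoder (768 is a typical hidden size for such models); the labels, data, and separating shift are all invented for illustration, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of the transfer-learning setup: a frozen speech
# encoder produces fixed embeddings, and only a small classifier head is
# trained on top of them. Synthetic data stands in for real bark clips.

rng = np.random.default_rng(0)

# Pretend embeddings from a frozen encoder: 200 clips, 768-dim features.
n_clips, dim = 200, 768
X = rng.normal(size=(n_clips, dim))
y = rng.integers(0, 2, size=n_clips)  # toy labels: 0 = playful, 1 = aggressive

# Make the toy task learnable: shift class-1 embeddings along a direction.
direction = rng.normal(size=dim)
X[y == 1] += 0.5 * direction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train a logistic-regression "head" by plain gradient descent,
# leaving the (pretend) encoder untouched.
w = np.zeros(dim)
b = 0.0
lr = 0.1
for _ in range(200):
    p = sigmoid(X @ w + b)          # predicted probability of class 1
    w -= lr * (X.T @ (p - y) / n_clips)
    b -= lr * float(np.mean(p - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(f"training accuracy: {acc:.2f}")
```

In the real study the encoder would be a large pretrained speech model and the inputs actual recordings; the point of the sketch is only that the expensive speech representation is reused as-is, and the task-specific part that gets trained is small.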

"Sounds and patterns derived from human speech can serve as a foundation for analyzing and understanding the acoustic patterns of other sounds, such as animal vocalizations," Mihalcea said. A better understanding of the nuances of the various sounds animals make could improve how humans interpret and respond to their emotional and physical needs.

The results were presented at the 2024 International Joint Conference on Computational Linguistics, Language Resources and Evaluation. IT Home has attached the paper link:

https://arxiv.org/pdf/2404.18739
