Google published a blog post on August 19 announcing that its Health Acoustic Representations (HeAR) AI model is now available to researchers through the Google Cloud API.
As reported in March this year, Google's HeAR AI model can help clinicians diagnose diseases by analyzing people's coughing and breathing sounds.
Google said HeAR outperformed other models across a range of tasks, demonstrating a strong ability to capture meaningful patterns in health-related acoustic data.
Importantly, models trained using HeAR require less training data to achieve high performance, a crucial advantage given that data scarcity is often a challenge in healthcare research.
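The data-efficiency claim reflects a common foundation-model workflow: a large pretrained encoder such as HeAR produces fixed-size audio embeddings, and researchers train only a small classifier ("linear probe") on top, which needs far fewer labeled examples. The sketch below illustrates the idea with synthetic embeddings standing in for real HeAR outputs; the embedding size and class structure are assumptions for illustration, not details of Google's model.

```python
import numpy as np

# Illustrative sketch: a frozen encoder (here simulated) maps audio clips to
# fixed-size embeddings; a small linear probe is then trained on top with
# relatively little labeled data. Embeddings below are synthetic stand-ins,
# NOT real HeAR outputs, and DIM is an assumed size for illustration.

rng = np.random.default_rng(0)
DIM = 512  # hypothetical embedding dimensionality

# Simulate embeddings for two classes (e.g. disease-positive vs. other coughs),
# separated along one random direction, as a useful encoder might produce.
direction = rng.normal(size=DIM)
direction /= np.linalg.norm(direction)

def fake_embeddings(n, offset):
    return rng.normal(size=(n, DIM)) + offset * direction

X = np.vstack([fake_embeddings(50, +1.5), fake_embeddings(50, -1.5)])
y = np.array([1] * 50 + [0] * 50)

# Logistic-regression probe trained by plain gradient descent;
# only these DIM + 1 parameters are learned, the "encoder" stays frozen.
w = np.zeros(DIM)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y)
print(f"probe training accuracy: {acc:.2f}")
```

With only 100 labeled examples the probe separates the two synthetic classes well, which is the kind of low-data behavior the article attributes to HeAR-based models.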
The Google research team trained HeAR on 300 million audio samples drawn from a diverse, de-identified dataset; the cough model specifically was trained on approximately 100 million cough sounds.
HeAR’s potential applications are vast. For example, India-based respiratory healthcare company Salcit Technologies is exploring how HeAR can enhance its existing AI model, Swaasa, for early detection of tuberculosis based on cough sounds, which could be particularly impactful in areas with limited healthcare access.
HeAR's potential extends beyond tuberculosis. The model's ability to work across a variety of microphones and environments allows for low-cost, accessible screening for a wide range of respiratory diseases, marking an important step forward in acoustic health research. Google's goal is to democratize this technology and support the global medical community in developing innovative solutions that break down barriers to early diagnosis and care.