Cutting-edge AI technologies that can detect subtle changes in a person’s voice may help doctors diagnose Alzheimer’s disease and other cognitive impairments before other symptoms appear.

In a new study, researchers used advanced machine learning and natural language processing (NLP) tools to assess the speech patterns of 206 people. Of those, 114 participants met the criteria for mild cognitive impairment.

“Our focus was on identifying subtle language and audio changes that are present in the very early stages of Alzheimer’s disease but not easily recognizable by family members or an individual’s primary care physician,” said lead researcher Dr. Ihab Hajjar, a professor of neurology at UT Southwestern’s Peter O’Donnell Jr. Brain Institute in Dallas.

Study participants were already enrolled in a research program at Emory University in Atlanta. They completed several standard assessments of mental ability before being asked to record a spontaneous one- to two-minute description of an artwork.

“The recorded descriptions of the picture provided us with an approximation of conversational abilities that we could study via artificial intelligence to determine speech motor control, idea density, grammatical complexity and other speech features,” Hajjar said in a UT Southwestern news release.

Researchers then compared participants’ speech analytics with samples of their cerebrospinal fluid and with MRI scans. This helped them determine how accurately the digital voice biomarkers detected both mild cognitive impairment and Alzheimer’s disease status and progression.

“Prior to the development of machine learning and NLP, the detailed study of speech patterns in patients was extremely labor intensive and often not successful because the changes in the early stages are frequently undetectable to the human ear,” Hajjar said. “This novel method of testing performed well in detecting those with mild cognitive impairment and more specifically in identifying patients with evidence of Alzheimer’s disease — even when it cannot be easily detected using standard cognitive assessments.”

The strategy was also far more time-efficient than other methods. Traditional neuropsychological tests typically take several hours; researchers spent less than 10 minutes capturing a patient’s voice recording.

“If confirmed with larger studies, the use of artificial intelligence and machine learning to study vocal recordings could provide primary care providers with an easy-to-perform screening tool for at-risk individuals,” Hajjar said.

He said earlier diagnosis would give patients and families more time to plan for the future and provide more flexibility for clinicians to recommend beneficial lifestyle changes.

Study findings were recently published in the Alzheimer’s Association journal Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring.

More information

Alzheimers.gov has more on dementia.

SOURCE: UT Southwestern Medical Center, news release, April 12, 2023

Source: HealthDay
