Artificial Intelligence and Mental Health

How AI can be used to improve diagnosis of mental health conditions.

One of the primary challenges facing researchers and clinicians who study mental health is that its indicators are difficult to observe directly: a diagnosis often relies either on a subject's self-reporting of specific feelings or behaviors, or on direct observation of the subject, which can be costly and time-consuming. That is why, over the past two decades, there has been a specific focus on deploying technology to help human clinicians identify and assess mental health issues.

Between 2000 and 2019, 54 academic papers focused on the development of machine learning systems to help diagnose and address mental health issues were published, according to a 2020 article in ACM Transactions on Computer-Human Interaction. Of the 54 papers, 40 presented the development of a machine learning (ML) model based on specific data as their main research contribution, seven proposed specific concepts, data methods, models, or systems, and three applied existing ML algorithms to better understand and assess mental health or to improve communication among mental health providers. A few of the papers described empirical studies of end-to-end ML systems or assessed the quality of ML predictions, while one paper specifically discussed design implications for user-centric, deployable ML systems.

Despite the volume of research being conducted, challenges remain with relying on ML to identify mental health issues, given the significant amount of patient data traditionally required to train these models. Previous studies cited in an April 2020 article in Translational Psychiatry indicate that neuroimages can record evidence of neuropsychiatric disorders, with two common types of neuroimaging data used to identify changes in the brain that could indicate mental health issues. Functional magnetic resonance imaging (fMRI) can identify changes associated with blood flow in the brain, based on the fact that cerebral blood flow and neuronal activation are coupled. Structural magnetic resonance imaging (sMRI) data describes the brain's anatomy through its structural textures, which capture information about the spatial arrangement of voxel intensities in three dimensions.
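
To make the notion of voxel-intensity textures concrete, the short Python sketch below reads a structural scan and computes a few first-order intensity statistics. It is purely illustrative: the file name, masking rule, and feature set are assumptions made for this example, not the methods used in the studies cited above.

    # Illustrative only: simple intensity statistics from an sMRI volume.
    # The file name and crude mask are placeholder assumptions.
    import nibabel as nib   # widely used reader for NIfTI neuroimaging files
    import numpy as np

    img = nib.load("subject_T1w.nii.gz")    # hypothetical structural (sMRI) scan
    voxels = img.get_fdata()                # 3D array of voxel intensities

    # Crude "brain mask": discard near-zero background voxels.
    brain = voxels[voxels > np.percentile(voxels, 50)]

    # First-order texture descriptors of the intensity distribution; real
    # pipelines use far richer spatial features (e.g., gray-matter maps).
    features = {
        "mean_intensity": float(brain.mean()),
        "std_intensity": float(brain.std()),
        "skewness": float(((brain - brain.mean()) ** 3).mean() / brain.std() ** 3),
    }
    print(features)

Research pipelines would replace these crude statistics with validated image-derived phenotypes, but the shape of the problem is the same: each scan is reduced to a feature vector that a model can learn from.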

However, a study led by researcher Denis Engemann of France-based research institute Inria Saclay, which was published in October 2021 in the open-access research journal GigaScience, found that applying ML to brain scans, medical data, and the results of a questionnaire about personal circumstances, habits, moods, and demographic data from large population cohorts can yield “proxy measures” for brain-related health issues without the need for a specialist’s assessment.

These so-called “proxy measures,” indirect measurements that strongly correlate with specific diseases or outcomes that cannot be measured directly, were developed by the Inria research team using two data sources held by the U.K. Biobank, a large long-term study investigating the respective contributions of genetic predisposition and environmental exposure to the development of disease. The first source comprised biological and medical data, including magnetic resonance imaging (MRI) data, from 10,000 participants. The team then incorporated U.K. Biobank’s questionnaire data about personal conditions and habits, such as age, education, tobacco and alcohol use, sleep duration, and physical exercise, as well as sociodemographic and behavioral data such as the moods and sentiments of the individuals covered in the study.

The Inria team combined these data sources to build ML models that approximate measures of brain age, as well as scientifically defined intelligence and neuroticism traits. According to lead researcher Engemann, this methodology of creating proxy measures via predictions of “brain age” from MRI scans, along with other sociodemographic and behavioral data, can be used to identify mental health markers useful to both psychologists and end users.

In a Q&A released concurrently with the study, Engemann explained: “Given the brain image of a person, the resulting model will provide a prediction by returning the most probable questionnaire result by extrapolating from people whose brains ‘looked’ similar. Thus, the predicted questionnaire result can become a proxy for the construct measured by the questionnaire. This reflects the statistical link between questionnaire data and brain images, and therefore can enrich the original questionnaire measure.”
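
As a rough illustration of how such a proxy-measure model could be assembled (a minimal sketch, not the Inria team's actual pipeline), the Python example below fits a regularized regression that predicts a questionnaire-derived score from brain-imaging features combined with sociodemographic variables; the data are synthetic and all dimensions are assumptions.

    # Minimal sketch of a "proxy measure": predict a questionnaire-derived score
    # from brain-derived features plus sociodemographic variables.
    # Synthetic data; feature counts are arbitrary placeholders.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 1000
    X_brain = rng.normal(size=(n, 50))   # stand-in for MRI-derived features
    X_socio = rng.normal(size=(n, 5))    # stand-in for age, education, sleep, etc.
    X = np.hstack([X_brain, X_socio])
    y = X @ rng.normal(size=55) + rng.normal(scale=5.0, size=n)  # questionnaire score

    model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 13)))

    # Out-of-sample predictions play the role of the proxy measure: for a new
    # participant, the model returns the questionnaire score most consistent
    # with people whose features look similar.
    proxy = cross_val_predict(model, X, y, cv=5)
    print("correlation with observed score:",
          round(float(np.corrcoef(proxy, y)[0, 1]), 3))

Once trained, such a model can score participants for whom only imaging and demographic data exist, which is the sense in which the prediction can enrich the original questionnaire measure.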

Engemann noted ML is likely to become a tool to help psychologists conduct personalized mental health assessments, with clients or patients granting an ML model secure access to their social media accounts or mobile phone data, which would return the proxy measures useful to both the client and the mental health or education expert.

Furthermore, once a model has been constructed, a proxy measure can be obtained even if the underlying mental health questionnaire has not been administered. “This is a promising method for finding large-scale statistical patterns of health within the general population,” said Engemann; such patterns can be used to enhance smaller clinical studies that lack sufficient training data to support a bespoke ML model.

One cohort whose mental health might usefully be assessed with proxy measures is social media users, particularly children and adolescents. In mid-2021, internal research documents leaked by former Facebook employee Frances Haugen and provided to The Wall Street Journal showed Instagram exacerbated body-image issues for one in three teenage girls who used that social media service. The release of this internal data played into the commonly accepted narrative that social media negatively impacts its users. However, a closer inspection indicates the Facebook internal research was neither peer reviewed nor designed to be nationally representative, and some of the statistics that received the most attention in the popular press were based on very small numbers.

That said, the research approach detailed by Engemann, combined with longitudinal studies that follow the same subjects over time, may help researchers and clinicians get a better handle on the actual impact of social media on users’ mental states and well-being.

“I think the hard part of the social media stuff is we don’t have the right inputs,” says Dr. John Torous, a researcher and co-author of “Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom,” a research article published last year that explored the use of AI to assist psychologists with clinical diagnosis, prognosis, and treatment of mental health issues.

Applying ML models to these robust datasets is likely where significant benefits to researchers will be found, says Torous. “If you look at almost every current [research] paper, it’s just going to be [focused on] cross-sectional data with exposure,” he says. However, longitudinal studies, which can encompass following individual subjects for years, are often very complex, with many temporal interactions and lots of dependencies. “That’s where I think the machine learning models become very useful,” Torous says, adding, “we have to be capturing longitudinal data on both social media exposure and the state of [the subject’s] mood.”

Another key challenge, according to Torous, is the cultural aspect surrounding the reporting of mental health issues. “The cultural part just makes it really tricky, because people from different cultures may be reporting symptoms differently.”

One approach that may help mitigate the cultural and individual issues of self-reporting is being evaluated by Sonde Health (www.sondehealth.com), a technology company that uses markers within a user’s voiceprint as indicators of several physical and mental health conditions. The company developed biomarkers by focusing on the acoustic aspects of the human voice, taking speech samples ranging in length from six to 30 seconds, then breaking down each sample into its most basic elements. Sonde developed algorithms to match more than 4,000 acoustic voice features against a database of normal voices and the voices of those who have been diagnosed with a specific disease or condition. As a result, specific voice features can be used as predictors of mental or physical health conditions.
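
Sonde’s feature set and matching algorithms are proprietary, so the Python sketch below is only a generic illustration of the first step: reducing a short speech clip to acoustic descriptors (pitch, energy, spectral shape) that a downstream model could compare against reference populations. The file name and feature choices are assumptions.

    # Illustrative only: generic acoustic descriptors from a short speech clip.
    # This is not Sonde Health's proprietary feature set.
    import librosa
    import numpy as np

    y, sr = librosa.load("speech_sample.wav", sr=16000)  # hypothetical 6-30 second clip

    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)         # frame-wise pitch estimate
    rms = librosa.feature.rms(y=y)[0]                     # frame-wise energy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # broad spectral shape

    features = {
        "pitch_mean_hz": float(np.mean(f0)),
        "pitch_range_hz": float(np.max(f0) - np.min(f0)),
        "energy_mean": float(rms.mean()),
        "spectral_centroid_mean_hz": float(centroid.mean()),
        "mfcc_means": mfcc.mean(axis=1).round(2).tolist(),
    }
    print(features)  # a vector like this would be compared against reference voices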

Sonde CEO David Liu says his company has isolated four or five dozen acoustic features that indicate something may be amiss with a person’s mental health and that the person should consider being evaluated, though Liu says the technology is not designed to make actual mental health diagnoses.

“We have six specific vocal features that give us a read on if something needs to be paid attention to,” Liu says. “Now, we’re not diagnosing for depression; we’re not diagnosing for anxiety, but these six features—voice smoothness, control, liveliness, energy, range, and clarity—are well-studied vocal features, and we understand what is in the normal range and what is not, based upon prior published research. What we’re doing now is taking that product and putting it into clinical trial research, so that we can then see if we can align it to some of these other assessments that are well-accepted.”

The company is working closely with mobile device chipset manufacturers such as Qualcomm to have the technology embedded into the firmware of a device, so it can be activated or deactivated within specific applications. This would allow the capture of voice information during the normal flow of a day, which Liu says is more natural and may result in more data being captured not only from social media sites, but also from multiplayer games, chats, or other communications.

“When you’re speaking on TikTok, or when you’re on Zoom or any IP-driven applications, you need to be able to capture voice samples while people are doing whatever they’re doing; you don’t want to stop somebody and say, ‘hey, tell me about your day’, because that’s a little bit artificial,” Liu says. “If we have enough of those voice samples, we can begin to produce these insights and can give them indications that, because you’ve been on this [service], and it’s having an impact on your mental health, you’re not going to be feeling as good because you’re spending 10 hours straight on social media.”


Liu notes that Sonde does not capture or analyze the content of the voice samples, just the acoustic markers; from a privacy standpoint, that might make the technology more palatable to both manufacturers and users. Further, while Liu says Sonde’s technology does not rely on longitudinal data, since the company has gathered voice data across a wide swath of people from different countries and cultures, longitudinal data can be beneficial when making a longer-term health assessment.

“Now, when we have longitudinal data, meaning I get a baseline for a user, and then we take a measurement again, a day later, two days later, and a week later, then we can understand the changes and it makes our technologies even more powerful from a monitoring standpoint when we have that baseline,” Liu says, noting that longitudinal data indicating changes in a person’s mental state can be indicative of a mental health problem, rather than someone simply having a bad day.
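
As a toy illustration of the baseline idea Liu describes (not Sonde’s method), the sketch below compares a user’s follow-up scores against their own baseline and flags only a sustained deviation rather than a single outlying reading; the scores and thresholds are invented for the example.

    # Toy baseline-relative monitoring: flag a user only when several consecutive
    # readings drift well below their own baseline, not on one "bad day" value.
    import numpy as np

    baseline = np.array([0.72, 0.70, 0.74, 0.71])          # first week of scores
    follow_up = np.array([0.69, 0.55, 0.52, 0.50, 0.48])   # later measurements

    mu, sigma = baseline.mean(), baseline.std(ddof=1)
    z = (follow_up - mu) / sigma                            # deviation from the baseline

    sustained_drop = bool(np.all(z[-3:] < -2.0))            # require a run of low readings
    print("z-scores:", z.round(2))
    print("sustained change from baseline:", sustained_drop)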

One of the key challenges with any sort of mental health assessment is getting people to agree either to be tracked or to be studied, given the potentially significant privacy concerns involved, as well as the social stigma surrounding mental health.

“How are we going to build these really big, big datasets?” asks Torous, who notes that as a researcher and clinician, he needs to go to the companies that amass such data (such as Facebook) to capture data on a scale that will yield meaningful research results. Torous says a lack of trust remains a huge issue, and getting people to willingly participate in any sort of mental health monitoring may require the development of an independent, health-focused platform.

“I wonder if we’re going to have to build new systems that are really just built for health; that’s not going to be a social platform, an advertising platform, or a shopping platform,” Torous says, noting that researchers would need to offer clinically actionable insight from such data that is useful and focused on a very targeted problem. Further, focusing on a symptom of mental health, such as cognition, may prove useful, as there isn’t the same stigma attached to asking about cognition, compared with asking about hot-button issues such as depression, suicide, or other more direct mental health indicators or conditions.

Further Reading

Dadi, K., Engemann, D., et al.
Population modeling with machine learning can enhance measures of mental health, GigaScience, Volume 10, Issue 10, October 2021, giab071, https://doi.org/10.1093/gigascience/giab071

Lee, E.E., Torous, J., et al.
Artificial Intelligence for Mental Health Care: Clinical Applications, Barriers, Facilitators, and Artificial Wisdom, Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, February 2021, https://doi.org/10.1016/j.bpsc.2021.02.001

Su, C., Xu, Z., Pathak, J., and Wang, F.
Deep learning in mental health outcome research: a scoping review, Translational Psychiatry 10, Article 116 (2020), https://doi.org/10.1038/s41398-020-0780-3

Thieme, A., Belgrave, D., and Doherty, G.
Machine Learning in Mental Health: A Systematic Review of the HCI Literature to Support the Development of Effective and Implementable ML Systems, ACM Transactions on Computer-Human Interaction, Volume 27, Issue 5, Article 34, October 2020, pp. 1–53, https://doi.org/10.1145/3398069

Zauner, H.
AI for mental health assessment: Author Q&A, GigaScience Blog, October 14, 2021, http://gigasciencejournal.com/blog/ai-for-mental-health/
