Suicide rates continue to increase in the U.S., climbing 33 percent from 1999 to 2017, and the pandemic has exacerbated contributing factors. To Colin Walsh, M.D., an assistant professor of biomedical informatics, medicine and psychiatry at Vanderbilt University Medical Center, this elevates the importance of AI screening tools for predicting who is most at risk.

Walsh is lead author of a study on using artificial intelligence (AI)-based natural language analysis of electronic health records (EHRs) to shed light on suicidal ideation and attempted suicide. In related work, Walsh’s team is also using biorepository samples to explore the genetic underpinnings of suicide risk.


“Walsh and his team have shown how to stress test and adapt an AI predictive model in an operational EHR, paving the way to real-world testing of decision support interventions,” said the new study’s senior author, William Stead, M.D., a professor of biomedical informatics at Vanderbilt.

Retrospective Risk Validation

Using deidentified records from adult EHRs, the team originally validated the model against retrospective clinical data, testing the predictive value of specific risk factors. “Our model carves out an at-risk subgroup that is small enough to reasonably merit face-to-face discussions with these patients,” Walsh said.

The algorithm has also been adapted to identify suicide risk broadly across two biobanks – Vanderbilt’s biorepository (BioVU) and the U.K. Biobank – assigning risk scores to thousands of genotyped patients. By combining these scores with genome-wide association studies and polygenic risk scoring, Walsh and fellow researchers found a high correlation – in the 70 to 100 percent range – between documented suicide attempts and the model’s predicted probability of an attempt.
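Polygenic risk scoring of this kind is, at its core, a weighted sum over risk alleles. The sketch below is a minimal illustration of that idea only, not the study’s pipeline; the variant IDs, effect sizes, and genotype values are hypothetical stand-ins.

```python
# Illustrative polygenic risk score (PRS): a weighted sum of risk-allele
# counts, with GWAS effect sizes as weights. All values here are
# hypothetical stand-ins, not data from the study.
from typing import Dict

# Hypothetical GWAS summary statistics: variant ID -> effect size (beta)
gwas_betas: Dict[str, float] = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

def polygenic_risk_score(genotype: Dict[str, int]) -> float:
    """Sum beta_i * dosage_i over variants, where dosage is the
    risk-allele count (0, 1, or 2) for each variant in the genotype."""
    return sum(beta * genotype.get(variant, 0)
               for variant, beta in gwas_betas.items())

# One genotyped individual, encoded as risk-allele counts per variant
individual = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_risk_score(individual))  # 0.12*2 + (-0.05)*1 = 0.19
```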

These prior studies established that significant heritability of suicide attempt risk can be measured from genotype data, and they suggested the predictive algorithm would perform well in new settings, Walsh explained.

Validation of Prospective Predictions

The new validation study reports on the model’s success when used prospectively in real time, in real-world clinical systems where risk scoring would be triggered operationally through the EHR. “We asked, how well does this EHR-based suicide risk model perform in the clinical setting, and is performance generalizable?” Walsh said.

The team studied an adult cohort of 77,973 inpatient, emergency department (ED) and ambulatory surgery patients at Vanderbilt from June 2019 to April 2020. Predictors drawn from the EHR included demographics, diagnoses, medications, past health care utilization and ZIP code.
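A model of this kind consumes those structured fields as a flat feature vector. The sketch below shows one plausible encoding; the field names and encodings are hypothetical and do not reproduce the published model’s actual feature engineering.

```python
# Minimal sketch of assembling model inputs from structured EHR fields.
# Field names and encodings are hypothetical illustrations only.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EncounterRecord:
    age: int
    sex: str                     # demographics
    icd10_codes: List[str]       # diagnoses
    medication_codes: List[str]  # medications
    prior_visits_12mo: int       # past health care utilization
    zip_code: str

def to_features(rec: EncounterRecord) -> Dict[str, float]:
    """Flatten one encounter into a name -> value feature map."""
    feats = {
        "age": float(rec.age),
        "sex_female": 1.0 if rec.sex == "F" else 0.0,
        "prior_visits_12mo": float(rec.prior_visits_12mo),
    }
    # One-hot indicators for codes observed in this record
    for code in rec.icd10_codes:
        feats[f"dx_{code}"] = 1.0
    for code in rec.medication_codes:
        feats[f"med_{code}"] = 1.0
    feats[f"zip_{rec.zip_code}"] = 1.0
    return feats
```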

EHR-based predictions were triggered at the start of routine clinical visits. In the 30 days following these encounters, there were 129 suicide attempts across 85 individuals. In the model’s top risk quantile, one in 23 patients flagged as at risk returned for a visit with documented suicidal ideation, and one in 271 made a suicide attempt. “This result is out of a database of 77,973 patients. It dramatically narrows attention down to a feasible number for careful in-person screening,” Walsh said.
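Those headline figures are, in effect, a “number needed to screen”: how many flagged patients a clinician must talk to per one true event. A minimal sketch of that arithmetic follows; the tier counts are illustrative placeholders, not the study’s actual quantile sizes.

```python
# Number needed to screen (NNS) within a risk tier:
# NNS = patients flagged / events observed among them.
# The counts below are hypothetical, not the study's tier sizes.
def number_needed_to_screen(flagged: int, events: int) -> float:
    """Patients who must be flagged per one observed event."""
    return flagged / events if events else float("inf")

# Hypothetical top-quantile tier: 2,710 flagged patients,
# 118 returns with documented ideation, 10 suicide attempts
print(number_needed_to_screen(2710, 118))  # ~23 (ideation)
print(number_needed_to_screen(2710, 10))   # 271.0 (attempts)
```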

A Practical Screening Approach 

In-person mental health screening takes time and attention. “It’s not possible or necessary to screen every patient at VUMC for suicidality. For example, talking to the patients our model identified as lowest risk would have required an average of 50 clinician hours a month,” Walsh said.


While this initiative promotes useful discussions, Walsh says there are caveats. “No one wants an automated risk screen to result in stigma or patients having ‘a flag’ on their record,” he said. “‘Higher risk’ does not equate to ‘suicidal,’ and we always defer to our patients’ and providers’ judgment over an algorithm. We are working carefully with our clinician partners on how we educate and address conversations prompted by tools like this one.”

Pairing Data with Strategies

The next step for the team is careful pairing of the data with low-cost, low-harm preventive strategies. “This study will be a pragmatic trial of the algorithm’s effectiveness in preventing future suicidality,” Walsh said.

While the current model has been validated for non-psychiatric specialty settings in a large clinical system, Walsh is in active conversations with clinical partners about expanding the AI model and training it on other patient populations. “The model has already been replicated for adolescent populations, and pediatric applications are coming up more and more,” he said.

About the Expert

Colin Walsh, M.D.

Colin Walsh, M.D., is associate professor of biomedical informatics, medicine and psychiatry and behavioral sciences at Vanderbilt University Medical Center. His research and operational work focus on machine learning and data science applied to mental health, utilization optimization, quality improvement, and analytics approaches to value-based healthcare.

William Stead, M.D.

William Stead, M.D., is McKesson Foundation Professor in the Department of Biomedical Informatics and a professor of medicine at Vanderbilt University Medical Center, formerly serving as its chief strategy officer for 11 years. He is recognized internationally as one of the founders of biomedical informatics, envisioning the long-term potential of informatics and computation to transform health, biomedicine and research.