Leveraging EHR-Based Machine Learning Models to Address Suicide Risk in American Indian/Alaska Native Communities

Suicide remains one of the most challenging public health issues, driven by complex factors that often vary across populations. Among American Indian and Alaska Native (AI/AN) communities, suicide rates are alarmingly high, reflecting a deep and persistent inequity in access to effective mental health care and culturally sensitive interventions.

Despite decades of research, the ability to accurately identify those at risk of suicide remains limited. A recent study published in Nature demonstrates the promise of electronic health record (EHR)-based machine learning (ML) models in addressing these challenges. By tailoring predictive models specifically for AI/AN populations, researchers have shown how data-driven approaches can improve outcomes for one of the most vulnerable groups in the United States.

This innovative study evaluates the development and performance of an ML model tailored to AI/AN patients, demonstrating its superiority over traditional screening tools. By leveraging vast amounts of EHR data, the researchers provide a more accurate, culturally sensitive method for identifying individuals at imminent risk of suicide. This advancement not only enhances the effectiveness of clinical interventions but also contributes to broader efforts to achieve health equity.

American Indian and Alaska Native communities have the highest suicide rates of any racial or ethnic group in the U.S., with a suicide burden that continues to grow. Factors contributing to this crisis include historical trauma, systemic underfunding of healthcare systems, geographical isolation, and barriers to accessing mental health services. Additionally, many AI/AN individuals face cultural stigmas that discourage seeking help for mental health issues, further exacerbating the crisis.

Existing suicide screening tools are often ineffective in these populations due to their reliance on generic risk factors and culturally insensitive frameworks. AI/AN individuals have unique patterns of suicide risk and protective factors that are not adequately addressed by standardized screening methods. For example, traditional tools may overlook the impact of communal ties, cultural identity, or historical injustices on mental health. The lack of validated, culturally relevant tools has contributed to the underidentification of those at risk, leaving many AI/AN individuals without the support they need.

Why EHR-Based Machine Learning?

EHR-based machine learning models have emerged as powerful tools for suicide prevention. These models can analyze large datasets to identify patterns and risk factors that might otherwise remain hidden. Unlike traditional screening tools, which often rely on a limited set of criteria, ML models consider a wide range of variables, including demographic information, medical history, and behavioral health data. This comprehensive approach allows for more nuanced risk assessment and better prediction of future suicide attempts or deaths.

The Nature study represents a critical step forward in leveraging these technologies to address the specific needs of AI/AN populations. By partnering with the Indian Health Service (IHS)—a healthcare system serving over 2.2 million AI/AN individuals—the researchers were able to develop a model that reflects the unique experiences and challenges of this community.

Study Design: Developing a Tailored Risk Model

Data and Methods

The study analyzed EHR data from 16,835 patients aged 18 and older who had clinical visits at an IHS facility between January 1, 2017, and October 2, 2021. The researchers focused on predicting the risk of a suicide attempt or death within 90 days of a clinical visit. Key features included:

  • Demographic Data: Age, gender, race, and socioeconomic indicators.
  • Clinical Diagnoses: Conditions such as depression, anxiety, bipolar disorder, PTSD, traumatic brain injury (TBI), and substance abuse.
  • Medications: Prescriptions relevant to mental health care.
  • Screening Results: Scores from suicide risk, depression, and intimate partner violence screenings.

The team compared the predictive performance of two ML models—logistic regression and random forest—with an enhanced version of existing suicide screening tools. The enhanced screening included aggregated results from prior screenings, a history of suicide attempts, and diagnoses of suicidal ideation.
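To make the comparison concrete, here is a minimal sketch of how two such models might be trained and scored side by side. The synthetic data, feature count, and outcome prevalence below are illustrative assumptions only; the actual IHS cohort, feature engineering, and validation design are described in the study itself.

```python
# Illustrative sketch only: synthetic stand-ins for EHR-derived features
# (demographics, diagnoses, medications, screening scores), NOT study data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, n_features = 5000, 8
X = rng.normal(size=(n_patients, n_features))

# Synthetic binary outcome: suicide attempt or death within 90 days of a visit.
logits = X @ rng.normal(size=n_features) - 3.5
y = (rng.random(n_patients) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

results = {}
for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    model.fit(X_tr, y_tr)
    # AUROC on held-out patients, using predicted probabilities of the outcome.
    results[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC = {results[name]:.2f}")
```

In practice, evaluation would use temporally or site-separated validation data rather than a random split, since EHR records from the same patient are not independent.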

Results: A Leap in Predictive Accuracy

Superior Performance of ML Models

The logistic regression and random forest models each achieved an area under the receiver operating characteristic curve (AUROC) of 0.83, substantially outperforming the enhanced traditional screening tools, which had an AUROC of 0.64. This represents a marked improvement in the ability to distinguish patients who will go on to attempt suicide from those who will not.

  • Sensitivity: Existing screening tools flagged only 32.4% of at-risk individuals, missing many high-risk cases. In contrast, the ML models identified a substantially larger share of those at risk, including individuals who later died by suicide.
  • Predictive Value: The ML models provided a more accurate assessment of future suicide attempts and deaths, offering clinicians a reliable tool for risk assessment.
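For readers unfamiliar with these metrics, the toy example below shows how AUROC and sensitivity are computed from predicted risk scores. The labels, scores, and the 0.5 operating threshold are made up for illustration; they are not values from the study.

```python
# Toy illustration of AUROC and sensitivity, not study data.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 0, 1, 0, 0, 1])
scores = np.array([0.1, 0.2, 0.15, 0.3, 0.8, 0.25, 0.35, 0.05, 0.4, 0.7])

# AUROC: the probability that a randomly chosen positive case receives a
# higher risk score than a randomly chosen negative case.
auroc = roc_auc_score(y_true, scores)

# Sensitivity (recall) at a chosen operating threshold: the fraction of
# true positives the tool actually flags.
y_pred = (scores >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)

print(f"AUROC = {auroc:.2f}, sensitivity = {sensitivity:.2f}")
```

Note that AUROC summarizes ranking quality across all thresholds, while sensitivity depends on where the clinical cutoff is set, which is why the two metrics can tell different stories about the same tool.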

Key Predictive Features

The most influential factors in the ML models included:

  1. Prior Suicidal Ideation: This remained a strong predictor of future risk, even years after the initial diagnosis.
  2. Substance Abuse: Diagnoses related to alcohol, cannabis, and stimulants were strongly associated with increased risk.
  3. Inpatient Admissions: Recent mental health-related hospitalizations appeared to reduce short-term risk, suggesting that interventions during inpatient care may have protective effects.

The findings also highlighted the importance of continuous monitoring. For example, while suicidal ideation codes contributed to risk scores for several years, the protective effects of inpatient care were relatively short-lived, underscoring the need for robust follow-up after discharge.

Implications for Health Equity and Cultural Sensitivity

A Culturally Informed Approach

One of the most significant contributions of this study is its emphasis on cultural sensitivity. The model was developed in partnership with AI/AN communities, ensuring that it reflects their unique experiences and risk factors. This collaborative approach is critical for building trust and ensuring the model’s effectiveness.

  • Community Engagement: The study builds on decades of collaboration between AI/AN communities and researchers, prioritizing culturally relevant solutions.
  • Cultural Relevance: By incorporating features such as alcohol and substance use diagnoses, the model captures patterns of risk specific to AI/AN populations.

Addressing Disparities

The study represents a broader effort to reduce disparities in mental health care for AI/AN communities. By providing a tailored, data-driven tool for suicide risk assessment, the researchers hope to improve outcomes and save lives. The findings also highlight the potential for ML models to address inequities in other areas of healthcare, offering a roadmap for integrating AI into efforts to promote health equity.

Challenges and Limitations

Data Gaps and Bias

While EHRs provide valuable data for developing ML models, they are not without limitations. For example:

  • Underreporting: Suicide attempts that do not result in medical attention may be missed, leading to an underestimation of risk.
  • Diagnostic Bias: The accuracy of the model depends on the quality of the underlying data, which can be influenced by systemic biases in mental health diagnoses and care.

Generalizability

The model was validated within a specific tribal context, and its performance may vary in other AI/AN communities. Further research is needed to test its generalizability across diverse populations.

Expanding the Reach of ML Models

The success of this study opens the door to new possibilities for suicide prevention:

  1. Scaling Up: Implementing the model across additional IHS facilities could maximize its impact, reaching more AI/AN individuals at risk.
  2. Improving Interventions: Risk scores can guide targeted follow-up care, ensuring that high-risk individuals receive timely support.
  3. Policy Implications: The findings highlight the importance of integrating culturally sensitive AI tools into public health policies.
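One way risk scores could guide follow-up care is by mapping them to escalating tiers of outreach. The thresholds and tier names below are hypothetical assumptions for illustration; any real deployment would set cutoffs with clinicians based on local capacity and validated risk levels.

```python
# Hypothetical triage sketch: thresholds and tier names are invented
# for illustration, not taken from the study or from IHS practice.
def triage(risk_score: float) -> str:
    """Map a predicted 90-day risk probability to a follow-up tier."""
    if risk_score >= 0.10:
        return "same-day safety assessment"
    if risk_score >= 0.03:
        return "follow-up within one week"
    return "routine care"

for score in (0.15, 0.05, 0.01):
    print(f"risk {score:.2f} -> {triage(score)}")
```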

Ethical Considerations

As AI becomes more integrated into healthcare, ethical considerations must remain at the forefront. Ensuring that ML models are transparent, unbiased, and culturally sensitive is essential for their success.

The development of an EHR-based suicide risk model tailored to AI/AN populations represents a significant advancement in suicide prevention. By outperforming traditional screening tools, these models offer a data-driven solution to a long-standing public health crisis. However, their true value lies in their ability to address health inequities through culturally informed design and community-centered implementation.

As the healthcare landscape continues to embrace AI, studies like this highlight the importance of tailoring solutions to the populations most in need. By combining advanced technology with cultural understanding, we can move closer to a future where every individual receives the care they deserve.

Are you interested in learning more about AI in healthcare? Subscribe to our newsletter, “PulsePoint,” for updates, insights, and trends on AI innovations in healthcare.
