New Study Finds Machine Learning Models Improve Detection of Self-Harm Risk in Children

A recent study by researchers at UCLA Health found that the methods health systems currently use to store and track data on children receiving emergency care often fail to detect a significant number of those at risk of self-harm. In response, the researchers developed several machine learning models that proved far more effective at identifying children at risk of self-harm.

With the ongoing youth mental health crisis across the nation, mental health providers are striving to enhance their understanding of which children are most vulnerable to suicide or self-harm in order to intervene at an earlier stage. Unfortunately, health systems frequently lack comprehensive data on individuals seeking care for self-injurious thoughts or behaviors, resulting in risk-prediction models that rely on incomplete information. As a consequence, the accuracy of these models is limited.

Lead author of the study, Dr. Juliet Edgcomb, emphasized the importance of improving the detection of children who may develop suicidal thoughts or behaviors rather than focusing solely on prediction. The researchers set out to determine whether detection could be enhanced by analyzing current screening methods used in health systems.

One commonly used method is International Classification of Diseases, Tenth Revision (ICD-10) codes, which categorize the care provided by healthcare providers. However, this approach may overlook many children who exhibit self-injurious thoughts or behaviors but are coded in their health records under an underlying mental health diagnosis such as depression or anxiety. Another frequently employed method is reviewing the initial chief complaint patients give upon their arrival at the emergency department. However, children may not always disclose their suicidal thoughts and behaviors during their first visit.
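
To illustrate why these screening methods can fall short, the sketch below shows a minimal rule-based check of the kind described above: a visit is flagged only if it carries a self-harm-related ICD-10 code or a chief complaint that mentions self-harm. The code prefixes, keywords, and example visit are illustrative assumptions, not the study's actual criteria; a visit coded only under depression with a vague complaint passes both checks undetected.

```python
# Illustrative rule-based screening of the kind described above. The ICD-10
# prefixes, chief-complaint keywords, and example visit are assumptions for
# this sketch, not the study's actual criteria.

SELF_HARM_ICD_PREFIXES = ("R45.851", "T14.91") + tuple(f"X{n}" for n in range(71, 84))
COMPLAINT_KEYWORDS = ("suicid", "self-harm", "self harm", "overdose", "cutting")

def flagged_by_icd(icd_codes: list[str]) -> bool:
    """True if any ICD-10 code on the visit starts with a self-harm-related prefix."""
    return any(code.startswith(SELF_HARM_ICD_PREFIXES) for code in icd_codes)

def flagged_by_chief_complaint(complaint: str) -> bool:
    """True if the free-text chief complaint mentions a self-harm keyword."""
    text = complaint.lower()
    return any(keyword in text for keyword in COMPLAINT_KEYWORDS)

# A visit coded only under depression, with a vague chief complaint, is missed
# by both checks even if the clinical note documents self-injurious thoughts.
visit = {"icd_codes": ["F32.9"], "chief_complaint": "feeling down, trouble sleeping"}
print(flagged_by_icd(visit["icd_codes"]))                    # False
print(flagged_by_chief_complaint(visit["chief_complaint"]))  # False
```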

To evaluate the accuracy of these methods, experts reviewed clinical notes related to 600 emergency department visits made by children aged 10 to 17 within a large health system. The findings indicated that ICD-10 codes missed 29% of children seeking help for self-injurious thoughts or behaviors, while the chief complaint missed more than half (54%) of these patients. Even when both ICD codes and the chief complaint were considered together, around 22% of at-risk patients were still overlooked.

Furthermore, screening based on ICD codes or chief complaints was more likely to miss male children than female children, and preteens compared with teenagers. There was also evidence that Black and Latino youth were more likely to be excluded from risk-prediction models, raising concerns about the underrepresentation of these groups.

To address these shortcomings, the researchers developed three different machine learning models aimed at improving the identification of children with self-injurious thoughts or behaviors. The most comprehensive model encompassed 84 data points derived from a patient’s electronic record, including previous medical care, demographic information, medication history, and the child’s neighborhood disadvantage level. The second model considered all mental health diagnostic codes, rather than solely focusing on suicide-related codes. The third model analyzed various indicators, such as the patient’s medications and laboratory test results.
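
As a rough illustration of the modeling approach, the sketch below trains a standard classifier on structured electronic-record features and reports its sensitivity. It is not the study's actual pipeline; the file name, column names, and the choice of a gradient-boosted model are assumptions made for this example.

```python
# A minimal sketch of the broad modeling approach described above, not the
# study's pipeline. "ed_visits.csv", its column names, and the choice of a
# gradient-boosted classifier are assumptions for this example; features are
# assumed to be numeric.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

visits = pd.read_csv("ed_visits.csv")  # one row per emergency department visit
feature_cols = [c for c in visits.columns if c not in ("visit_id", "self_harm_label")]

X_train, X_test, y_train, y_test = train_test_split(
    visits[feature_cols], visits["self_harm_label"],
    test_size=0.2, stratify=visits["self_harm_label"], random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Sensitivity (recall) is the quantity that matters here: the share of truly
# at-risk children the model detects.
print(f"sensitivity: {recall_score(y_test, model.predict(X_test)):.2f}")
```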

All three machine learning models outperformed the sole reliance on ICD codes and the chief complaint when it came to identifying children at risk of self-harm. Notably, no single machine learning model demonstrated significantly superior performance compared to the others, suggesting that health systems can improve their ability to flag at-risk patients without developing overly complex models.

Dr. Edgcomb emphasized that the machine learning models' tendency to falsely flag some patients who are not actually at risk is a minimal drawback: it is preferable to have some false positives that medical records analysts can review than to completely miss numerous children in need of assistance.
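
In practice, this trade-off corresponds to where a model's decision threshold is set. The sketch below, which assumes the `model`, `X_test`, and `y_test` objects from the previous example, shows how lowering the threshold reduces missed children at the cost of more false positives for analysts to review.

```python
# Sketch of the trade-off described above: lowering the decision threshold
# catches more at-risk children at the cost of extra false positives for
# records analysts to review. Assumes `model`, `X_test`, and `y_test` from
# the previous sketch.
from sklearn.metrics import confusion_matrix

scores = model.predict_proba(X_test)[:, 1]
for threshold in (0.5, 0.3, 0.1):
    predicted = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, predicted).ravel()
    print(f"threshold={threshold}: sensitivity={tp / (tp + fn):.2f}, "
          f"false positives to review={fp}, missed children={fn}")
```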

Moving forward, Dr. Edgcomb plans to conduct further research focused on enhancing youth suicide risk prediction models, particularly for elementary-school age children, who have been underrepresented in previous studies.

This groundbreaking study was published in JMIR Mental Health on July 21, 2023, and involved the collaboration of researchers from various departments at UCLA, including the Department of Medicine and the UCLA MINDS Hub in the Semel Institute Center for Community Health.

Frequently Asked Questions (FAQs) Related to the Above News

What did the recent study conducted by UCLA Health reveal?

The study revealed that current methods used to store and track data on children receiving emergency care often fail to detect those who are at risk of self-harm.

Why is it important to identify children at risk of self-harm?

Identifying children at risk of self-harm is crucial for early intervention and prevention of suicide or self-harm behaviors.

What limitations do health systems face in identifying at-risk children?

Health systems often lack comprehensive data on individuals seeking care for self-injurious thoughts or behaviors, resulting in incomplete information for risk-prediction models.

What methods are commonly used to identify at-risk children?

The study examined the use of International Classification of Diseases, Tenth Revision (ICD-10) codes and of the initial chief complaint made by patients upon their arrival at the emergency department.

Were these methods effective in identifying at-risk children?

The study found that both the ICD-10 codes and the chief complaint missed a significant number of children seeking help for self-injurious thoughts or behaviors.

Were there any groups that were more likely to be excluded from the risk-prediction models?

The screening methods tended to neglect male children compared to female children, as well as preteens compared to teenagers. Black and Latino youth were also more likely to be excluded.

How did the researchers address these limitations?

The researchers developed three different machine learning models that incorporated additional data points from a patient's electronic record to improve identification accuracy.

Did any of the machine learning models outperform the others?

No single machine learning model demonstrated significantly superior performance compared to the others, suggesting that complex models are not necessary to improve identification.

Is it a problem that the machine learning models falsely flag some patients who are not at risk?

According to Dr. Edgcomb, this is a minimal drawback: false positives can be reviewed by medical records analysts, which is preferable to completely missing numerous children in need of assistance.

What are the future research plans of Dr. Edgcomb?

Dr. Edgcomb plans to conduct further research focused on enhancing youth suicide risk prediction models, particularly for elementary-school age children who have been underrepresented in previous studies.

Where was the study conducted and published?

The study was conducted at UCLA Health and was published in JMIR Mental Health on July 21, 2023. The collaboration involved researchers from various departments at UCLA, including the Department of Medicine and the UCLA MINDS Hub in the Semel Institute Center for Community Health.
