In an era dominated by social media, the need to effectively identify and respond to individuals in crisis, especially those expressing suicidal thoughts, has become a pressing concern.
Sentinet is at the forefront of addressing this challenge, harnessing the power of artificial intelligence not only to save lives but also to contribute pioneering research to the field of mental health.
Sentinet was founded by Yasin Dus, who lost a close friend to suicide. "People who feel suicidal often struggle to share their feelings with their family or friends," Dus says. "It can be easier for them to share their thoughts with strangers on the internet, which is why I chose to pursue suicide prevention on social media platforms."
The timing of Sentinet’s launch was also influenced by external factors. In 2022, many researchers reported an alarming surge in hate speech and cyberbullying on social media. This environment exacerbated issues related to suicidal ideation and self-harm, prompting Sentinet to debut online.
Sentinet's core mission is suicide prevention, and it uses advanced AI tools to identify social media posts containing suicidal ideation. A language model scans unfiltered streams of posts and flags content that may indicate suicide risk. "Many concerning posts from accounts with few followers go unnoticed," Dus says. "We aim to make the most impact in this area, particularly because there are so many accounts with low traction."
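Sentinet has not published the details of its detection pipeline, but the flagging step described above can be sketched in broad strokes. The snippet below is a hypothetical illustration only: the model name, label, and confidence threshold are placeholders rather than Sentinet's actual system. It simply shows how a text classifier could score a stream of posts and surface the highest-risk ones for human volunteers to review.

```python
# Hypothetical sketch of an AI-assisted flagging step; not Sentinet's code.
# The model identifier, label name, and threshold below are placeholders.
from transformers import pipeline

# Any binary text classifier fine-tuned on crisis-related language could
# stand in for this placeholder model name.
classifier = pipeline(
    "text-classification", model="example-org/crisis-post-classifier"
)

FLAG_THRESHOLD = 0.85  # assumed confidence cutoff before a post reaches a volunteer


def flag_concerning_posts(posts):
    """Score each post and return the ones that exceed the risk threshold."""
    flagged = []
    for post in posts:
        result = classifier(post, truncation=True)[0]
        if result["label"] == "AT_RISK" and result["score"] >= FLAG_THRESHOLD:
            flagged.append({"text": post, "score": result["score"]})
    # Flagged posts would then be routed to trained volunteers for review.
    return flagged
```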
Using a network of volunteers, Sentinet flags 150 to 200 posts per day on average. In a week, the organization can flag more than 1,000 posts, and it plans to increase this number. "We're planning to expand our reach and increase the number of detected and reported posts," Dus says. "By our approximation, there are around 3,000 suicidal posts per day. Our aim is to one day detect all of them. The next step in Sentinet's journey is deploying our system on more social media platforms."
Sentinet does not stop at suicide prevention; it goes a step further. Its two other co-founders, Georgiy Nefedov and Yusuf Efe, graduate students in computer science, supervise the organization's research into identifying warning signs in people on the brink of suicidal ideation. This research examines linguistic patterns and affiliations with tightly focused online communities, as well as music and movie preferences.
Notably, Sentinet's research goes beyond conventional studies, targeting specific cultures and online subgroups that have received less attention. "We are conducting research on specific semantics found in suicidal people's posts," Dus explains. "We are working on identifying trends among individuals showing suicidal tendencies on social media. These trends can be traced to specific demographics and to the niche communities these people may be a part of. We believe that our research can shed light on patterns that may have gone undiscovered, sparking new discussions and ideas on the topic of suicide prevention."
In a world in which digital and physical realities converge, Sentinet pays particular attention to privacy and data handling. Access to sensitive information is limited solely to verified researchers in the mental health field, and Sentinet maintains an unwavering commitment never to sell that information or make it publicly available, respecting the privacy of those in distress.
By combining AI technology with human supervision, Sentinet pursues one constant goal: offering timely support and solace to those navigating challenging moments. Sentinet not only combats suicide but also lays the foundation for new research with the potential to reshape our understanding of suicide prevention. Sentinet's vision is clear: to be a force for good in this new age of AI by leveraging technology to connect, intervene, and ultimately save lives.