Facebook is expanding its use of AI to detect individuals showing patterns of suicidal thoughts, in a bid to improve the speed at which it can alert the necessary first responders.
Facebook, which has offered suicide prevention tools for more than 10 years, has been working to stop people on its platform from dying by suicide and to get them the best help as quickly as possible. Back in March, the social network introduced a range of suicide prevention tools and began testing AI technology in the US to identify users showing signs of suicidal intent.
Now, Facebook is rolling out its AI technology to the rest of the world, except for the EU. The tech uses pattern recognition to help identify posts and live streams that are likely to express thoughts of suicide.
Once the technology identifies somebody in need of help, it sends the content through to Facebook’s community operations team, which has thousands of people around the world reviewing the social network’s content. Within this group is a team of specialists with specific training in suicide and self-harm. Facebook is also using AI to prioritise the order in which its team reviews posts, so that it can reach those in the most distress first.
Facebook has developed its approach to suicide and self-harm with the help of mental health organisations such as Save.org, the National Suicide Prevention Lifeline, and Forefront Suicide Prevention, as well as with input from people with personal experience.
Earlier this year, Facebook added The Trevor Project to its mental health partners to help prevent suicide in the LGBTQ community.