On the back of Twitter stepping up its efforts to prevent abuse, Facebook this week also announced a number of new features that aim to spot those in need at the exact time they may need it most.
The social network announced on Wednesday that it will soon deploy a new AI-driven algorithm designed to spot users who may be most at risk of harming themselves.
Currently in testing across the pond, the new system claims to be able to detect users who may be thinking of posting a status update that could put themselves, or others, at risk. It then aims to make contact with the individual(s) via prompts and other methods of intervention to suggest “ways they can seek help,” the report says.
“Once a post has been identified, it is sent for rapid review to the network’s community operations team,” the BBC highlights.
In addition, those detected to be at risk who begin a Facebook Live video stream will, in the future, receive the option to seek help directly from within their stream.
With the stream itself remaining unaffected, those watching will also see options and advice on their screen that could help keep the individual broadcasting safe.
The decision to keep the video stream of an individual who may be at risk open and unaffected was explained as a way to allow the person to continue expressing their thoughts. By doing so, the network’s community operations team can better understand the person’s intentions and, where possible, help them out of the situation.
“Some might say we should cut off the stream of the video the moment there is a hint of somebody talking about suicide,” said Jennifer Guadagno, Facebook’s lead researcher. “But what the experts emphasised was that cutting off the stream too early would remove the opportunity for people to reach out and offer support.”
Are you, or someone you know, at risk? – PAPYRUS UK is the national charity dedicated to the prevention of young suicide.