Life is not always comfortable, and things do not always go as planned. Morals dictate that we persevere and have patience, but unfortunately this is not always the case; some people see suicide as the easiest option. It is not, and many people care about human life and accord it the respect and attention it deserves.
Facebook seems to be taking the lead in safeguarding life. The US-based social networking company is developing a tool, currently in testing limited to the US, that uses an algorithm to spot suspicious user posts along with the responses friends leave on them. The program is expected to expand internationally after a successful test run. It marks one of the first times the public will see AI used to combat suicide.
The notable problem with AI is that, not being a person, it can be fooled. When Google put AI tools to public use, it found they could be tricked and were prone to manipulation by punctuation or typos, and hence could not offer the sensitive, emotional support a person can give.
On March 1, 2017, Facebook announced that it is building new suicide prevention tools into its website. These include integrated suicide prevention on Facebook Live, live chat support via Messenger, and AI-assisted reporting of posts that suggest suicide risk. This comes alongside Facebook's other initiative to thwart terrorism, as the company's CEO, Mark Zuckerberg, said in an interview with Reuters: "Right now, we are starting to explore ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda so we can quickly remove anyone trying to use our services to recruit for a terrorist organization."
Facebook’s AI combats suicide
Facebook has deployed a human review team that will contact individuals at high risk of harming themselves or committing suicide. The company can also suggest appropriate solutions and ways they can get help. Facebook remarked, "Our community operations team will review these posts and, if necessary, provide resources to the person who posted the content, even if someone on Facebook has not reported it yet."
It would be better still if the program connected users directly to suicide prevention personnel whenever a post is flagged, rather than merely advising them to seek help on their own; people in crisis often cannot act without outside intervention.
Recently, Facebook has partnered with US-based mental health organizations so that at-risk users, including those using its Live broadcast tool, can contact them through the Messenger platform. The company has also relied on other users to report suspicious posts via the post's report option.
The newly developed AI-based pattern-recognition algorithm flags suspected posts.
Facebook product manager Vanessa Callison-Burch told the BBC: "We are testing a streamlined reporting process using pattern recognition in posts previously reported for suicide. This artificial intelligence approach will make the option to report more prominent. We are also testing pattern recognition to identify posts which are very likely to include thoughts of suicide," she said. "We know that speed is critical when things are urgent."
The recognition is based on phrases that express distress, such as "I am not important" and "I am fed up with life", together with other Facebook users' responses, for example "Are you OK?" and "What's up?"
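Facebook has not published its model, but the idea described above can be illustrated with a toy sketch: score a post by distress phrases in its text plus concerned replies from friends, and flag it when the combined score crosses a threshold. The phrase lists, function name, and threshold below are all illustrative assumptions, not Facebook's actual system.

```python
# Toy illustration of phrase-based flagging (NOT Facebook's real model).
# A post is scored on distress phrases in its text and on concerned
# replies from friends; crossing the threshold flags it for human review.

DISTRESS_PHRASES = ["i am not important", "fed up with life", "wanna die"]
CONCERN_REPLIES = ["are you ok", "what's up", "please talk to me"]

def flag_post(text: str, replies: list[str], threshold: int = 2) -> bool:
    """Return True if the post should be routed to a review team."""
    score = sum(phrase in text.lower() for phrase in DISTRESS_PHRASES)
    score += sum(
        any(cue in reply.lower() for cue in CONCERN_REPLIES)
        for reply in replies
    )
    return score >= threshold

# A distressed post plus a concerned reply crosses the threshold:
print(flag_post("I am fed up with life", ["Are you OK?"]))   # True
print(flag_post("Great day at the beach!", ["nice pic"]))    # False
```

A production system would of course use a learned classifier rather than keyword lists, but the inputs are the same two signals the article describes: the post's own wording and the reactions of friends.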
The network's community operations team launches a rapid review once a post is identified. At the moment, the best Facebook can do is advise the person at risk, but it is expected to do more over time, such as contacting individuals who can help save the situation.
Facebook has also released tools to help users access appropriate support against suicide; the network can identify users close to someone at risk and prompt them to reach out to that friend.
This effort is also aimed at Facebook Live users, following criticism that the company was not doing enough to combat suicide. In January 2017, a teenage girl from Miami, Naika Venant, live-streamed her death on the platform.
The victim died many hours after posting "I Don't Wanna Live No More" accompanied by three sad emoji. Many users criticized the company, saying that a proper mechanism could have saved the girl's life in real time. She was the third person to live-stream a suicide as of February 2017.
Facebook had already given users the ability to contact its partners, such as the Crisis Text Line, the National Suicide Prevention Lifeline, and the National Eating Disorder Association, over its Messenger chat platform.
This tool will go a long way toward saving lives. The World Health Organization (WHO) reports that there is a suicide somewhere in the world every 40 seconds, and that among people aged 15 to 29, suicide is a leading cause of death. This was reiterated in a post on Facebook's Newsroom blog by product manager Vanessa Callison-Burch, researcher Jennifer Guadagno, and head of global safety Antigone Davis.
Despite being a social network rather than a health organization, Facebook has made significant steps in developing suicide prevention mechanisms. The company still has the potential to take the further stride of directly connecting users to emergency helplines staffed by people who can talk to them; AI alone, lacking human judgment and empathy, cannot take that step.
If you are depressed, ask for help; many people care about you. Don't kill yourself. You can reach Facebook's review team through the Facebook support desk.
Image credit: www.istockphoto.com