To take care of its users, and to provide a service that acknowledges the intimate connection a person has with its interface, Facebook has been testing suicide prevention AI algorithms. In previous roll-outs, the system required a human to notify the program before any action was taken; now, however, the company believes the system can detect the risk of suicide or self-harm before anyone else notices.
Working from a predetermined set of parameters, the artificially intelligent program can judge whether a person is at risk. Status updates and live video feeds can all be monitored by a non-human guardian, quietly checking up on everyone's mental health. Provided the data is not shared or used inappropriately, there is no real change here: Facebook has been able to monitor what users upload since the site began. Only this time, even more psychological insight can be gained from what computers have managed to learn about people.
Similar AI systems have been used to determine exactly what the alarm bells look like, and what people typically do shortly before they harm themselves. By learning the behaviours of people who are at risk in this way, it is possible to predict accurately before the event occurs, and appropriate steps can be taken to safeguard the individual. In Facebook's case, when the system detects a high risk it alerts a real person, who then makes a trained human decision about whether to contact emergency services.
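The detect-then-escalate pipeline described above can be sketched in miniature. This is a hypothetical illustration only: the keyword weights, the `risk_score` function, and the `REVIEW_THRESHOLD` value are all invented for this sketch, and Facebook's actual classifier is far more sophisticated (and not public). The point is the two-stage structure: an automated score, then a human review queue.

```python
# Toy two-stage triage pipeline (hypothetical; not Facebook's algorithm).
# Stage 1: an automated model scores each post for self-harm risk.
# Stage 2: posts above a threshold are queued for a trained human reviewer,
# who makes the final decision about contacting emergency services.

# Invented keyword weights standing in for a learned model.
RISK_TERMS = {"hopeless": 0.4, "goodbye": 0.3, "alone": 0.2, "pain": 0.2}
REVIEW_THRESHOLD = 0.5  # assumed cut-off for escalating to a human

def risk_score(post: str) -> float:
    """Sum the weights of risk-associated terms in the post, capped at 1.0."""
    words = post.lower().split()
    return min(1.0, sum(w for term, w in RISK_TERMS.items() if term in words))

def triage(posts: list[str]) -> list[str]:
    """Return only the posts that should reach a human reviewer."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]

posts = [
    "Had a great day at the beach",
    "I feel hopeless this is goodbye",
]
print(triage(posts))  # only the high-risk post is escalated
```

Note that the automated stage never acts on its own here; it only filters what a person sees, mirroring the human-in-the-loop design the article describes.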
So far the AI has only been available in the United States; with this new upgrade, the rest of the world will also get the chance to use the technology. The only place where it won't be used is the European Union, where data protection laws prevent companies from exercising these powers. Facebook founder Mark Zuckerberg has stated that he wants the website to act as a community, presumably one that cares for and nourishes friendships and relationships.