Facebook now identifies potential suicides for authorities to take action
Reason #3,434,389 why I don’t use Facebook: Facebook has developed software that identifies what it thinks are a user’s suicidal thoughts, then sends that information to the government so it can take immediate action.
The social network has been testing the tool for months in the US, but is now rolling out the program to other countries. The tool won’t be active in any European Union nations, where data protection laws prevent companies from profiling users in this way.
In a Facebook post, company CEO Mark Zuckerberg said he hoped the tool would remind people that AI is “helping save people’s lives today.” He added that in the last month alone, the software had helped Facebook flag cases to first responders more than 100 times. “If we can use AI to help people be there for their family and friends, that’s an important and positive step forward,” wrote Zuckerberg. “The AI looks for comments like ‘are you ok?’ and ‘can I help?’”
Despite this emphasis on the power of AI, Facebook isn’t providing many details on how the tool actually judges who is in danger.
The potential for abuse here is beyond words. Worse, Facebook’s unwillingness to be transparent about this software makes it even more suspect. From the article:
TechCrunch writer Josh Constine noted that he’d asked Facebook how the company would prevent the misuse of this AI system and was given no response.
As I’ve written previously, companies like Google, Facebook, and Microsoft might be providing their customers some good products, but they are also doing so from a very amoral position, abusing the privacy of their customers in ways that are simply wrong. While this software is likely being used today in a totally correct way, I have strong doubts about it in the long term. As the politics of our time become even more heated, partisan, and childish, the temptation to use this software to target and eliminate those who disagree with either Facebook or its allies in the government will certainly grow. And then, how does one protect oneself from this abuse?
Hat tip to reader Max Hunt.