by Eric Lieberman
SILICON VALLEY – Facebook said Tuesday that 99 percent of terrorist content on the platform — such as material from or related to ISIS or al-Qaida — is purged before any user flags it.
The use of artificial intelligence (AI) and other automation is a major factor behind the reportedly high success rate, according to Facebook.
“We do this primarily through the use of automated systems like photo and video matching and text-based machine learning,” Monika Bickert, head of global policy management, and Brian Fishman, head of counterterrorism policy, wrote in a blog post. “Once we are aware of a piece of terror content, we remove 83% of subsequently uploaded copies within one hour of upload.”
Facebook is using AI for more than one critical initiative: it announced Monday that the same technology is helping reduce the number of suicides broadcast live on its platform.
As with suicides, combating terrorist groups’ use of its platform is an inherently difficult undertaking for Facebook, since the social media giant has billions of users and thus trillions of posts and similar pieces of content. Governments and international organizations have urged Facebook, along with similar companies, to do more.
“Often analysts and observers will ask us at Facebook why, with our vast databases and advanced technology, we can’t just block nefarious activity using technology alone,” Bickert and Fishman explained. “The truth is that we need not only technology but also people to do this work. And in order to be truly effective in stopping the spread of terrorist content across the entire internet, we need to join forces with others.”
Facebook has substantially increased its operating costs in the past five years, according to the Financial Times, and at least part of that increase is tied to its battle against terrorist content.
Some people even believe that Facebook, as well as competitors like Twitter, Google and Google-owned YouTube, should be held legally liable for terrorists using their respective platforms.
Families of victims of the Orlando nightclub shooting last year filed a federal civil suit against those aforementioned companies for having “provided the terrorist group ISIS with accounts they use to spread extremist propaganda, raise funds, and attract new recruits.”
“Without Defendants Twitter, Facebook, and Google (YouTube), the explosive growth of ISIS over the last few years into the most feared terrorist group in the world would not have been possible,” the lawsuit stated.
The Daily Caller News Foundation reached out to several legal experts and lawyers at the time to see if the grieving families’ case was legitimate.
“The primary obstacle to this suit is Section 230 of the Communications Decency Act, which provides a safe harbor for an ‘interactive computer service,’ such as Twitter or Facebook,” Josh Blackman, an associate professor at the South Texas College of Law in Houston, told TheDCNF. “Beyond Section 230, the First Amendment serves as a significant barrier.”
Eugene Volokh, a professor at the UCLA School of Law, agreed.
“Those lawsuits are going nowhere. Service providers can’t be held liable just for providing accounts to bad speakers who would use the accounts to convey bad messages. See, e.g., Fields v. Twitter,” Volokh explained. “And that’s a deliberate choice by Congress, aimed at protecting free speech (even at the expense, common with free speech, of tolerating harmful speech) — if service providers were liable, they would be subject to powerful pressure to suppress not only speech by actual terrorists, but also any speech that someone claims promotes crime (even if in fact the speech does not do so).”
Regardless of laws on the books, constitutional rights and legal precedent, a majority of both Democrats and Republicans would rather the internet be “safe” than “free,” according to a Pew Research Center poll — even if the definition of safety is not clear.