by Eric Lieberman
NEW YORK – New York Times CEO Mark Thompson cast serious doubt Tuesday on the prospect of closely trusting algorithms to help determine which news stories are fraudulent or misleading.
“The process of citizens making up their own mind which news source to believe is messy, and can indeed lead to ‘fake news,’ but to rob them of that ability, and to replace the straightforward accountability of editors and publishers for the news they produce with a centralized trust algorithm will not make democracy healthier but damage it further,” Thompson said in a keynote lecture.
He added that if algorithms are to be the primary means for attempting to combat the purported rise of misinformation, then companies like Google and Facebook must be as transparent as possible in their efforts.
“We do not know, beyond inevitably imperfect and incomplete empirical observation, how the algorithms of the major platforms sort and prioritize our content, nor can we reliably predict or influence changes in those algorithms, nor in any sense hold the companies to account for them,” said Thompson. “Full transparency about both algorithmic and human editorial selection by the major digital platforms is an essential preliminary if we are to address any of these issues. It would be best if this were done voluntarily, but even if it requires regulation or legislation, it must be done — and done promptly.”
Thompson’s speech came at an event hosted by New America’s Open Markets Institute and the Tow Center for Digital Journalism at Columbia University called “Breaking the News: Free Speech & Democracy in the Age of Platform Monopoly.” Several other key figures and experts in the industry discussed the rising power of Google and Facebook, and what it means for journalism.
Powerful words from @nytmedia CEO Mark Thompson, who says that the attacks on news outlets and journalists by this Administration may not be as big of a threat to quality journalism, especially local journalism, as the rise of monopolistic digital tech companies. @Open_Markets
— Sara Fischer (@sarafischer) June 12, 2018
His urging not to place too much trust in artificially intelligent systems to judge the validity of news stories, and to practice openness wherever they are used, draws on examples of the errors that algorithms create.
Big tech companies like Google, Facebook and Twitter are often accused of both over- and under-censoring: one portion of the public demands they do more to combat hate speech (itself a perennially ambiguous category), false news and terrorist exploitation of the platforms’ features, while another calls for an overarching free-expression ethos.
In some apparent cases of censorship, it is not clear whether the removal or restriction of content is the work of human moderators or of automated algorithms.
Twitter recently blocked a user who posted harsh criticism of Hamas, the militant and political Islamist organization regarded by much of the international community as a terrorist group. The social media company told The Daily Caller News Foundation that it was an “error,” but wouldn’t clarify whether the error was directly human, or indirectly human-induced by way of the “hateful conduct” detection algorithm.
Other cases are far more clear-cut.
Google, the world’s most powerful search engine and arguably its most powerful company, displayed fact checks almost exclusively for prominent conservative sites, including The Daily Caller. More importantly, those attempts to verify certain claims were riddled with errors of their own, as the sidebar feature proved faulty. After repeated communication with TheDCNF, Google eventually conceded, suspending the feature and blaming a flawed algorithm.
“But the underlying danger — of the agency of editors and public alike being usurped by centralized algorithmic control — is present with every digital platform where we do not fully understand how the processes of editorial selection and prioritization take place,” Thompson said in his speech.
Another example of the imperfection of algorithms — which are for the most part reflections of their creators — is Facebook’s new efforts to label political advertising on the platform — a response to the clamoring over Russia’s influence in the 2016 election.
Those new rules are already causing headaches, to say the least, as the automated system has been scooping up content that is not political advertising at all, merely content that technically relates to politics (which is arguably almost anything).
“The depth of Facebook’s lack of understanding of the nature and civic purpose of news was recently revealed by their proposal — somewhat modified after representations from the news industry — to categorize and label journalism used for marketing purposes by publishers as political advocacy, given that both contained political content,” said Thompson.
“This is like arguing that an article about pornography in The New York Times is the same as pornography,” Thompson continued. “Facebook admitted to us that their practical problem was that they were under immense public pressure to label political advocacy, but that their algorithm was unable to tell the difference between advocacy and journalism. This would be the same algorithm which will soon be given the new task of telling the world which news to trust.”
Facebook and Google did not respond to The Daily Caller News Foundation’s request for comment in time for publication.