The Problem Beyond Fake News





Working for clicks, shares, ads, and money... “What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did happen?”

Posted February 11, 2018
By Charlie Warzel


In mid-2016, Aviv Ovadya realized there was something fundamentally wrong with the internet — so wrong that he abandoned his work and sounded an alarm.

A few weeks before the 2016 election, he presented his concerns to technologists in San Francisco’s Bay Area and warned of an impending crisis of misinformation in a presentation he titled “Infocalypse.”

The web and the information ecosystem that had developed around it were wildly unhealthy, Ovadya argued. The incentives that governed its biggest platforms were calibrated to reward information that was often misleading, polarizing, or both.

Platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over quality of information, and Ovadya couldn’t shake the feeling that it was all building toward something bad — a kind of critical threshold of addictive and toxic misinformation. The presentation was largely ignored by employees from the Big Tech platforms...

°

Ovadya saw early what many — including lawmakers, journalists, and Big Tech CEOs — wouldn’t grasp until months later:

Our platformed and algorithmically optimized world is vulnerable — to propaganda, to misinformation, to dark targeted advertising from foreign governments — so much so that it threatens to undermine a cornerstone of human discourse: the credibility of fact...

°

Ovadya, an MIT grad, dropped everything in early 2016 to try to prevent what he saw as a Big Tech–enabled information crisis. “One day something just clicked,” he said of his awakening. It became clear to him that, if somebody were to exploit our attention economy and use the platforms that undergird it to distort the truth, there were no real checks and balances to stop it. “I realized if these systems were going to go out of control, there’d be nothing to rein them in and it was going to get bad, and quick”...

Today Ovadya and a cohort of loosely affiliated researchers and academics are anxiously looking ahead — toward a future that is alarmingly dystopian. They’re running war game–style disaster scenarios based on technologies that have begun to pop up, and the outcomes are typically disheartening.

For Ovadya — now the chief technologist for the University of Michigan’s Center for Social Media Responsibility and a Knight News innovation fellow at the Tow Center for Digital Journalism at Columbia — the shock and ongoing anxiety over Russian Facebook ads and Twitter bots pales in comparison to the greater threat:

°

Technologies that can be used to enhance and distort what is real are evolving faster than our ability to understand, control, or mitigate them.

Check out this realistic AI voice clone: an AI-generated Joe Rogan that sounds real but is entirely fake

°

The stakes are high and the possible consequences more disastrous than foreign meddling in an election — an undermining or upending of core civilizational institutions, an “infocalypse.” And Ovadya says this coming crisis is just as plausible as the last one — and worse.

"What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?"

Worse because of our ever-expanding computational prowess; worse because of ongoing advancements in artificial intelligence and machine learning that can blur the lines between fact and fiction; worse because those things could usher in a future where, as Ovadya observes, anyone could make it “appear as if anything has happened, regardless of whether or not it did.”

°

“Whether it’s AI, peculiar Amazon manipulation hacks, or fake political activism — these technological underpinnings [lead] to the increasing erosion of trust,” says computational propaganda researcher Renee DiResta. “It makes it possible to cast aspersions on whether videos — or advocacy for that matter — are real.”

Given the early dismissals of the efficacy of misinformation — like Facebook CEO Mark Zuckerberg’s now-infamous statement that it was “crazy” to think fake news on his site played a crucial role in the 2016 election — the first step for researchers like Ovadya is a daunting one: Convince the greater public, as well as lawmakers, university technologists, and tech companies, that a reality-distorting information apocalypse is not only plausible, but close at hand.

"It'll only take a couple of big hoaxes to really convince the public that nothing’s real."

A senior federal employee explicitly tasked with investigating information warfare told BuzzFeed News that even he's not certain how many government agencies are preparing for scenarios like the ones Ovadya and others describe.

°

“I think about it from the sense of the Enlightenment — which was all about the search for truth,” the employee told BuzzFeed News. “I think what you’re seeing now is an attack on the Enlightenment — and Enlightenment documents like the Constitution — by adversaries trying to create a post-truth society. And that’s a direct threat to the foundations of our current civilization.”

That’s a terrifying thought — more so because forecasting this kind of stuff is so tricky. Computational propaganda is far more qualitative than quantitative — a climate scientist can point to explicit data showing rising temperatures, whereas it’s virtually impossible to build a trustworthy prediction model mapping the future impact of yet-to-be-perfected technology.

Ovadya and others warn that the next few years could be rocky. Despite some pledges for reform, he feels the platforms are still governed by the wrong, sensationalist incentives, where clickbait and lower-quality content are rewarded with more attention. “That’s a hard nut to crack in general, and when you combine it with a system like Facebook, which is a content accelerator, it becomes very dangerous.”

Just how far out we are from that danger remains to be seen.

Asked about the warning signs he’s keeping an eye out for, Ovadya paused. “I’m not sure, really. Unfortunately, a lot of the warning signs have already happened.”


°

Charlie Warzel is a senior writer for BuzzFeed News. Warzel reports on and writes about the intersection of tech and culture.


······································································


More on Politics, Social Media & Fake News


European Commission / Expert Group on Fake News and Online Disinformation

March 2018

The High Level Expert Group (HLEG) advised on policy initiatives to counter fake news and disinformation spread online. Its main deliverable was a report designed to review best practices in light of fundamental principles, and suitable responses stemming from those principles.


Summary: The report, a document supported by a range of stakeholders including the largest technology companies, journalists, fact-checkers, academics, and representatives of civil society, has a number of important attributes, including: important definitional work rejecting the use of the phrase ‘fake news’; an emphasis on freedom of expression as a fundamental right; a clear rejection of any attempt to censor content; a call for efforts to counter interference in elections; a commitment by tech platforms to share data; calls for investment in media and information literacy, along with comprehensive evaluation of those efforts; and cross-border research into the scale and impact of disinformation.


https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation

https://ec.europa.eu/digital-single-market/en/fake-news-disinformation


https://www.opendemocracy.net/en/facebook-and-google-pressured-eu-experts-soften-fake-news-regulations-say-insiders

https://medium.com/@hlegresponse/six-key-points-from-the-eu-commissions-new-report-on-disinformation-1a4ccc98cb1c


°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°