A huge study into the impact of fake news has surprising results


Any real understanding of the spread and influence of fake news looks a long way off. After months of research, one of the most detailed and comprehensive studies into the issue has concluded, simply, that the problem is big.

“Counting fake news exposure is like counting people in a fun house,” says David Lazer, a researcher at Northeastern University and co-author on a landmark new study. “The very nature of the thing is trying to distort your perception of its importance.”

The study, published in Science, linked 16,442 Twitter accounts to public voter registration records. The team, made up of researchers from Harvard, Buffalo State and Northeastern University, found that the spread of misinformation was concentrated – 0.1 per cent of their panel accounted for more than 80 per cent of shares from fake news sources. They also found that the majority of people in their panel were still exposed to mainstream news outlets.

Researchers established the rough composition of each individual’s news feed by looking at tweets from the people they followed between August and October 2016. To capture the aggregate effect, exposures were counted with multiplicity: a tweet from an account – say, a popular right-wing blogger like Tomi Lahren – that was seen six times counted as six potential exposures. They also found that five per cent of sources accounted for more than 50 per cent of those exposures. Finally, they categorised people into five subgroups – far left, left, centrist, right and far right.
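As a rough illustration of that counting rule, the sketch below tallies potential exposures with multiplicity and shows how concentrated exposure can be among a handful of outlets. The panel IDs and domain names are invented placeholders, not data from the study:

```python
from collections import Counter

# Hypothetical sample data: one row per time an account a panel member
# follows tweeted a link, so a tweet seen six times appears six times.
potential_exposures = [
    ("voter_001", "fabricated-news.example"),
    ("voter_001", "fabricated-news.example"),
    ("voter_001", "mainstream-news.example"),
    ("voter_002", "mainstream-news.example"),
    ("voter_002", "fabricated-news.example"),
]

# Tally potential exposures per source, counting every appearance in a feed.
exposures_per_source = Counter(domain for _, domain in potential_exposures)

# Share of all exposures attributable to each source.
total = sum(exposures_per_source.values())
for domain, count in exposures_per_source.most_common():
    print(f"{domain}: {count} exposures ({count / total:.0%} of total)")
```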

In this study, fake news was defined at the publisher level. “Our study is taking a systems-level view,” says Lisa Friedland, a network scientist at Northeastern University who was a co-author on the study. “Fake news is amplified, and sometimes created, by publications with inadequate editorial processes, so it's useful to be looking at the reach of those publications.” Fact checkers, journalists and academics created a list of websites that published mostly false information, which the researchers labelled “black”. Websites with a flawed editorial process were labelled “red”, and websites the researchers weren’t certain about were labelled “orange”. In total, 65 orange, 64 red and 171 black fake news sources appeared in the data at least once.
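That labelling scheme can be pictured as a simple lookup from publisher to category. The domains below are invented stand-ins, not sites from the study’s actual lists:

```python
# Invented domains illustrating the publisher-level labels described above;
# the real lists were compiled by fact checkers, journalists and academics.
SOURCE_LABELS = {
    "fabricated-news.example": "black",   # published mostly false information
    "sloppy-outlet.example": "red",       # flawed editorial process
    "unverified-site.example": "orange",  # labellers were uncertain
}

def label_for(domain: str) -> str:
    """Return a domain's label, or 'unlabelled' for mainstream
    or unlisted publishers."""
    return SOURCE_LABELS.get(domain, "unlabelled")

print(label_for("fabricated-news.example"))   # black
print(label_for("mainstream-news.example"))   # unlabelled
```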

“It’s tricky to figure out the reach of these fake news sites, and there is manipulation to make them seem far more pervasive than they really are,” says Lazer. “Prior research has suggested that most content being shared on Twitter actually comes from bots, but if it turns out that it is mostly bots who are being exposed to fake news, then should we care?”

Researchers often take different approaches to defining misinformation, which can lead to studies with contradictory results. In reality, these studies often illuminate different aspects of the larger machine of misinformation. “Some use data about Facebook, some use Twitter, and some use surveys – some look at how specific articles are spread,” says Friedland. “Although there’s a frightening amount of fake news out there, and automated activity that promotes it, as far as we can tell, not too many people have fallen down the rabbit hole just yet.”

The researchers also confirmed that users who shared fake news skewed older, male and more conservative. Their panel included super consumers and super sharers – people with abnormally high posting and sharing rates on Twitter – who were not representative of regular users. The researchers suggested these were “cyborgs”: accounts that are partially automated but mostly controlled by humans. They set these individuals aside when measuring the remaining panel members.

The regular user averaged around 204 potential exposures to fake news sources in the last month of the 2016 US presidential election campaign. The average proportion of fake news sources in an individual’s feed was 1.18 per cent, but there was a significant difference between left and right: 11 per cent of those on the right, and 21 per cent of those on the extreme right, shared fake news content, compared with fewer than five per cent of those on the left or in the centre. “The way that people get solid journalism is, in general, by knowing who to trust,” says Friedland. “It isn’t easy to produce high-quality, fact-checked journalism, but it’s easy to set up a website, copy some of the stories from elsewhere, and then ‘report’ some made up stories.”
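As a back-of-the-envelope illustration, the fake news share of a feed is simply labelled exposures divided by total exposures, averaged within each political subgroup. The figures below are invented, not the study’s:

```python
from statistics import mean

# Invented per-person feed summaries: total potential exposures and
# how many came from labelled fake news sources.
panel = [
    {"group": "left",  "total": 5000, "fake": 40},
    {"group": "right", "total": 4000, "fake": 120},
    {"group": "right", "total": 6000, "fake": 90},
]

# Each person's fake news share, averaged within each political group.
for group in ("left", "right"):
    shares = [p["fake"] / p["total"] for p in panel if p["group"] == group]
    print(f"{group}: mean fake news share {mean(shares):.2%}")
```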

While the sample is considered representative, this analysis wouldn’t account for targeted misinformation – for example, the purported micro-targeting of swing state voters. “These estimates do not imply that fake news is not influencing people or elections, indirectly,” says Sander van der Linden, a professor who researches the spread of misinformation in the psychology department at Cambridge University. “When important elections, like Brexit, are decided on a few percentage points, even a minority of hyperactive fake news propagators can still undermine the democratic process.”

This study’s method of counting human exposure differed from previous research because it excluded the exposure of bots to fake news. However, the study couldn’t assess how misinformation spreads once it has been shared a few times – such as in a right-wing Facebook group, or on a forum – because the analysis was carried out on Twitter, and the results can’t be generalised to other networks.

This study measures potential exposure to tweets, but not engagement (beyond looking at likes and retweets) – so it’s possible that people were following mainstream news outlets out of spite, or because everyone else was, without engaging with their content (Twitter’s news feed can be fickle).

“We should try to stem the flood at its sources – those sources include the domains on our lists, particularly the more popular ones,” says Friedland. The paper suggests putting limits on posting for certain users, which would also reduce third parties’ exposure to high-volume posting from those accounts. “Our analysis suggests that this would reduce exposure to fake news,” says Lazer.
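In its simplest form, such a cap could look like the sketch below. The daily limit is an invented placeholder, since the paper doesn’t prescribe a specific threshold:

```python
from collections import defaultdict

# A minimal sketch of a per-user posting cap; the limit is hypothetical.
DAILY_POST_LIMIT = 20
posts_today = defaultdict(int)

def try_post(user_id: str) -> bool:
    """Publish the post only if the user is under today's cap."""
    if posts_today[user_id] >= DAILY_POST_LIMIT:
        return False  # cap reached: drop or queue the post
    posts_today[user_id] += 1
    return True

# Hyperactive accounts hit the cap; regular users never notice it.
for _ in range(25):
    try_post("super_sharer_01")
print(posts_today["super_sharer_01"])  # stays at 20
```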

But as the US gears up for another election, and EU voters go to the polls in May, disinformation and fake news aren’t likely to fade away soon.

This article was originally published by WIRED UK