On Twitter, fake news has greater allure than truth does
An analysis of 4.5 million tweets shows falsehoods are 70 percent more likely than truths to be shared
The truth about online fake news is becoming clearer. A new study shows that on Twitter, phony stories reach more people than truthful ones do. Fake stories also spread far faster.
Fake news refers to stories based on false or misinterpreted information. These stories try to dupe readers into believing something that isn’t true. Some might try to make public figures look bad or claim people did something they didn’t. Others might try to discredit scientific findings. Such stories are often shared on social media platforms such as Twitter and Facebook. But scientists have lacked data on how widely such stories are shared, or by whom. So a team of researchers decided to investigate.
They recently analyzed more than 4.5 million tweets and retweets. All had been posted between 2006 and 2017. And their disturbing finding: Fake news spreads faster and farther on Twitter than true stories do.
Filippo Menczer studies informatics and computer science at Indiana University in Bloomington. He was not part of the new study but says its findings are important for understanding the spread of fake news. Before this, he notes, most investigations used a few people’s observations rather than a mountain of scientific data. Until now, he says, “We didn’t have a really large-scale, systematic study evaluating the spread of misinformation.”
Deb Roy, who worked on the new analysis, studies media and social networks at the Massachusetts Institute of Technology in Cambridge. In the past, he also has worked as a media scientist for Twitter. To study how news spreads on Twitter, Roy and his colleagues collected tweet cascades. These are groups of messages composed of one original tweet and all retweets of that initial post. They examined about 126,000 cascades centered on any of about 2,400 news stories. Each of those original news stories had been independently confirmed as true or false.
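To picture what a tweet cascade is, here is a minimal sketch in Python. The data structure, names and example data below are hypothetical, invented only for illustration; this is not the study's actual dataset or code. It models a cascade as one original tweet plus every retweet of it, then measures two things researchers care about: how many users the cascade reached, and how long the retweet chain grew.

```python
# Hypothetical sketch of a "tweet cascade": one original tweet plus
# all retweets of it, forming a tree rooted at the original post.
from dataclasses import dataclass, field

@dataclass
class Tweet:
    user: str
    time: float                       # hours since the original post
    retweets: list = field(default_factory=list)

def cascade_size(tweet):
    """Total number of users the cascade reached (root plus all retweets)."""
    return 1 + sum(cascade_size(rt) for rt in tweet.retweets)

def cascade_depth(tweet):
    """Length of the longest retweet chain below the original post."""
    if not tweet.retweets:
        return 0
    return 1 + max(cascade_depth(rt) for rt in tweet.retweets)

# Example: one original tweet, two direct retweets, one second-hop retweet.
root = Tweet("alice", 0.0, [
    Tweet("bob", 0.5, [Tweet("carol", 1.2)]),
    Tweet("dave", 0.8),
])
print(cascade_size(root))   # 4 users reached
print(cascade_depth(root))  # longest chain is 2 hops
```

In this toy cascade, a story reached 4 users through a chain 2 retweets deep. The study's comparison of true and false stories rests on exactly these kinds of measures, computed over about 126,000 real cascades.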
The researchers then collected data on how far and fast each cascade spread. Discussions of bogus stories tended to start from fewer original tweets. But they soon tended to spread widely. Some chains reached tens of thousands of users! True news stories, in contrast, never spread to more than about 1,600 people. And true news stories took about six times as long as false ones to reach 1,500 people.
Overall, these data show, fake news was about 70 percent more likely to be retweeted than was real news. The team reported its results in the March 9 Science.
Not just bots
Roy’s team also wanted to know who was responsible for spreading false news. So they looked at Twitter accounts that were involved in sharing fake stories. Some had been run by computers, not people. These so-called web robots, or bots, are computer programs that pretend to be human. They have been designed to find and spread certain types of stories.
Some people had assumed that bots drive most fake news moving across the internet. To test that, Roy and his colleagues looked at data both with and without bot activity.
Bots spread false and true news about equally, the data showed. So fake news could not be blamed just on bots, Roy’s group concluded. Instead, people are the main culprits in retweeting fake news.
Why might people be more likely to spread tall tales? These stories may seem more exciting, says data scientist Soroush Vosoughi. He works with Roy at MIT and is a coauthor of the new study. Compared with true-news stories, the topics of fake-news stories differed more from the other tweets users had viewed in the two months before they retweeted a story. Tweet replies to the false news stories also used more words indicating surprise.
The researchers didn’t inspect the full content of every tweet. So they don’t know exactly what users said about these stories. Some people who retweeted fake-news posts may have added comments to debunk them. But Menczer calls the new analysis a “very good first step” in understanding what types of posts grab the most attention.
The study also could guide strategies for fighting the spread of fake news, says Paul Resnick. He works at the University of Michigan in Ann Arbor. Though he was not part of the new study, he uses computer science to study how people behave online. One approach might be for social media platforms to discourage people from spreading rumors, he says. That approach might have more impact than simply booting off bots that behave badly.
Sinan Aral at MIT has some other ideas. He is another coauthor of the new study and an expert on how information spreads through social networks. One way to fight fake news might be to help users identify true stories online, he suggests. Social media sites could label news pieces or media outlets with truthfulness scores, Aral suggests. In fact, at least one September 2017 study has already looked into that. The bad news: Flagging potentially false headlines or news sites only works a little, it found. Sometimes the tactic could even backfire.
Platforms also might try to restrict accounts reputed to spread lies, Aral says. But it’s still unclear how successful such actions might be, he adds. Indeed, he notes, “We’re barely starting to scratch the surface on the scientific evidence about false news, its consequences and its potential solutions.”