
~ Archive for rumor propagation ~

The Real “Fake News”


The following is a blog post that Eni Mustafaraj has recently published in The Spoke. We reproduce it here with permission.


Fake news has always been with us, starting with The Great Moon Hoax in 1835. What is different now is the existence of a mass medium, the Web, that allows anyone to financially benefit from it.

Etymologists typically track the change of a word’s meaning over decades, sometimes even over centuries. Currently, however, they find themselves observing a new president and his administration redefine words and phrases on a daily basis. Case in point: “fake news.” One would have to look hard to find an American who hasn’t heard this phrase in recent months. The president loves to apply it as a label to news organizations that he doesn’t agree with.

But right before its most recent incarnation, the phrase “fake news” had a different meaning. It referred to factually incorrect stories appearing on websites with names such as DenverGuardian.com or TrumpVision365.com that mushroomed in the weeks leading up to the 2016 U.S. Presidential Election. One such story—“FBI agent suspected in Hillary email leaks found dead in apparent murder-suicide”—was shared more than a half million times on Facebook, despite being entirely false. The website that published it, DenverGuardian.com, was operated by a man named Jestin Coler, who, when tracked down by persistent NPR reporters after the election, admitted to being a liberal who “enjoyed making a mess of the people that share the content”. He didn’t have any regrets.

Why did fake news flourish before the election? There are too many hypotheses to settle on a single explanation. Economists would explain it in terms of supply and demand. Initially, there were only a few such websites, but their creators noticed that sharing fake news stories on Facebook generated considerable pageviews (the number of visits to a page) for them. Their obvious conclusion: there was a demand for sensational political news from a sizeable portion of the web-browsing public. Because pageviews can be monetized by running Google ads alongside the fake stories, the response was swift: an industry of fake news websites grew quickly to supply fake content and feed the public’s demand. The creators of this content were scattered all over the world. As BuzzFeed reported, a cluster of more than 100 fake news websites was run by individuals in the town of Veles, in the Former Yugoslav Republic of Macedonia.

How did the people in Macedonia manage to spread their fake stories on Facebook and earn thousands of dollars in the process? In addition to creating a cluster of fake news websites, they also created fake Facebook accounts that looked like real people and then had these accounts join real Facebook groups, such as “Hispanics for Trump” or “San Diego Berniecrats”, where conversations about the election were taking place. Every time the fake news websites published a new story, the fictitious accounts would share it in the Facebook groups they had joined. The real people in the groups would then start spreading the fake news article among their Facebook followers, successfully completing the misinformation cycle. These misinformation-spreading techniques were already known to researchers, but not to the public at large. My colleague Takis Metaxas and I discovered and documented one such technique used on Twitter as far back as the 2010 Massachusetts Senate election between Martha Coakley and Scott Brown.

There is an important takeaway here for all of us: fake news doesn’t become dangerous when it is created or published; it becomes dangerous when members of the public decide that it is worth spreading. The most ingenious part of spreading fake news is the step of “infiltrating” groups of people who are most susceptible to the story and will fall for it. As explained in this news article, the Macedonians tried different political Facebook groups before finally settling on pro-Trump supporters.

Once “fake news” entered Facebook’s ecosystem, it was easy for people who agreed with the story and were compelled by the clickbait nature of the headlines to spread it organically. Often these stories made it onto Facebook’s Trending News list. The top 20 fake news stories about the election received approximately 8.7 million views on Facebook, 1.4 million more views than the top 20 real news stories from 19 of the major news websites (CNN, New York Times, etc.), as an analysis by BuzzFeed News demonstrated. Facebook initially resisted the accusation that its platform had enabled fake news to flourish. However, after weeks of intense pressure from the media and its user base, it introduced a series of changes to its interface to mitigate the impact of fake news. These include involving third-party fact-checkers to assign a “Disputed” label to posts with untrue claims, suppressing posts with such a label (making them less visible and less spreadable), and allowing users to flag stories as fake news.

It’s too early to assess the effect these changes will have on the sharing behavior of Facebook users. In the meantime, the fake news industry is targeting a new audience: liberal voters. In March, the fake quote “It’s better for our budget if a cancer patient dies more quickly,” attributed to Tom Price, the Secretary of Health and Human Services, appeared on a website titled US Political News, operated by an individual in Kosovo. The story was shared over 80,000 times on Facebook.

Fake news has always been with us, starting with The Great Moon Hoax in 1835. What is different now is the existence of a mass medium, the Web, that allows anyone to monetize content through advertising. Since the cost of producing fake news is negligible, and the monetary rewards substantial, fake news is likely to persist. The journey that fake news takes only begins with its publication. We, the reading public who share these stories, triggered by headlines engineered to make us feel outraged or elated, are the ones who take the news on its journey. Let us all learn to resist such sharing impulses.

Two rumors about the downing of a Russian warplane by Turkey


News of a Turkish airplane shooting down a Russian one over the Turkish-Syrian border has dominated the news and social media lately. We investigated the rumor within hours after it appeared (24 Nov. 2015), and you can see the results of the analysis here: http://twittertrails.wellesley.edu/~trails/stories/investigate.php?id=462776628

This was not the first time a rumor of this kind had emerged. About a month and a half earlier (10 Oct. 2015), an identical rumor appeared. We investigated that rumor too, and you can see the results of our analysis here: http://twittertrails.wellesley.edu/~trails/stories/investigate.php?id=134661966

[Figure: Russian jet downing rumors]

As you can see, based on the crowd’s reaction to the rumors, TwitterTrails was able to determine that the October rumor was false while the November one was true. The false rumor did not spread much and drew a lot of skeptical tweets questioning its validity. The true rumor, on the other hand, spread much more widely and, in terms of skepticism, was undisputed.

This is something we see often in the stories we investigate on TwitterTrails. Our understanding of the way the “wisdom of the crowd” works is that, when unbiased, emotionally cool observers see a rumor that seems suspicious, they usually react in one of two ways: they either do not retweet it, reducing its spread, or they respond questioning the validity of the rumor, raising its skepticism.

When plotting the true and false rumors (after they have been verified through journalists’ work), the following image emerges:

[Figure: spread vs. skepticism for verified rumors]

It is not a 100% separation, but one can see that the false rumors (marked by red triangles) show low spread and high skepticism, while the true ones show high spread and low skepticism. The picture is of course muddled in the lower corner: a rumor that does not attract much attention does not have the opportunity to benefit from the “wisdom of the crowd”, and thus its veracity cannot be determined by our system.
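To make the separation concrete, here is a toy decision rule in Python. It assumes that spread is summarized as a single number (say, an h-index over retweet counts) and that skepticism is the fraction of tweets questioning the claim; both the representation and the thresholds are our illustrative assumptions, not the actual TwitterTrails model.

    def classify_rumor(spread: float, skepticism: float) -> str:
        """Toy decision rule mirroring the plot above.

        spread:     overall propagation (e.g., h-index of retweet counts)
        skepticism: fraction of tweets questioning the claim, in [0, 1]
        The thresholds are hypothetical, chosen only for illustration.
        """
        if spread >= 10 and skepticism <= 0.1:
            return "likely true"    # wide spread, essentially undisputed
        if spread < 10 and skepticism >= 0.3:
            return "likely false"   # little spread, heavily questioned
        return "undetermined"       # the muddled lower corner of the plot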

 

Note: This posting originally appeared on our TwitterTrails blog.

False rumors do not propagate like True ones


On Twitter, claims that receive higher skepticism and lower propagation scores are more likely to be false.
On the other hand, claims that receive lower skepticism and higher propagation scores are more likely to be true.

The above is a conjecture we wrote in a recent paper entitled Investigating Rumor Propagation with TwitterTrails (currently under review). Feel free to take a look if you want more details about our system; we will summarize some of its highlights here.

As you may know if you have read our TwitterTrails blog before, we are developing a Web service that, starting from a tweet, a hashtag, or a set of keywords related to a story propagating on Twitter, will automatically investigate the story and answer some of the basic questions about it. If you are not familiar with the system, you may want to take a look at some of those posts; or that can wait until you have read this one.

Recently we deployed twittertrails.com, a site containing the growing collection of stories and rumors that we investigate. Its front end looks like this:

[Figure: condensed view]

This is the “condensed view,” which allocates one line per story, 20 stories per page. There are over 120 stories collected at this point. Clicking on a title brings you to the investigation page, with lots of details and visualizations about the story’s propagation, its originator, how it burst, who supports it, and who refutes it.

Note that on the right side of the condensed view we automatically compute two metrics:

  • The propagation level of a story. This is a logarithmic scale of the h-index of the tweet collection, which currently has five levels: Extensive, High, Moderate, Low, and Insignificant.
  • The skepticism level of a story. This is the ratio of tweets propagating the story with negation over tweets propagating it without negation. It has four levels: Undisputed, Hesitant, Dubious, and Extremely doubtful. (A sketch of how such metrics might be computed follows below.)
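As a rough illustration, here is a minimal Python sketch of these two metrics. The h-index of a tweet collection is the largest h such that at least h tweets were retweeted at least h times each. The bucket boundaries and cut-offs below are our own illustrative guesses, not the values TwitterTrails actually uses.

    def h_index(retweet_counts):
        """Largest h such that at least h tweets have >= h retweets each."""
        counts = sorted(retweet_counts, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    def propagation_level(retweet_counts):
        """Map the h-index onto coarse, roughly logarithmic buckets.
        Boundaries are hypothetical, for illustration only."""
        h = h_index(retweet_counts)
        if h >= 100: return "Extensive"
        if h >= 30:  return "High"
        if h >= 10:  return "Moderate"
        if h >= 3:   return "Low"
        return "Insignificant"

    def skepticism_level(negating, not_negating):
        """Ratio of tweets negating the claim to tweets that do not.
        Cut-offs are hypothetical, for illustration only."""
        ratio = negating / max(not_negating, 1)
        if ratio < 0.05: return "Undisputed"
        if ratio < 0.15: return "Hesitant"
        if ratio < 0.40: return "Dubious"
        return "Extremely doubtful"

    # Example with made-up numbers:
    print(propagation_level([250, 120, 40, 12, 3, 1]))    # -> "Low" (h-index 4)
    print(skepticism_level(negating=4, not_negating=96))  # -> "Undisputed"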

The initial quote at the top of this post refers to these metrics.

There is also a more detailed “main view” of TwitterTrails:

[Figure: main view]

In the main view there are additional tools to select stories based on time of collection, particular tags, levels of propagation and skepticism, or keywords.

A few weeks ago we gave a presentation of TwitterTrails at the Computation and Journalism 2014 symposium at Columbia University in NYC. There is a video of our presentation that you can watch if interested. In that presentation we noted that false rumors have a different pattern of propagation on Twitter than true rumors. Below is a graph that shows that difference.

[Figure: propagation vs. skepticism]

The graph displays propagation levels vs skepticism levels, and the data points are colored depending on whether a rumor was true (blue), false (red) or something else (green) that cannot be categorized as true or false (e.g., a reference to an event or a tweet collection based on a hashtag). The vast majority of the false rumors show insignificant to low propagation, while at the same time their level of skepticism ranges from dubious to extremely doubtful.

This is remarkable, but it may not be too surprising. As we write in the paper, “Intuitively, this conjecture can be explained as an example of the power of crowdsourcing. Since ancient times, philosophers have argued that people will not willingly do bad unless they are guided by irrational impulses, such as anger, fear, confusion or hatred. Therefore, the more people see some false information, the more likely it is that they will either raise an objection or simply decide not to repeat it further.

We make the conjecture specific to Twitter because it may not hold for every social network. In particular, we rely on the user interface promoting an objection to the same level as the false claim. Twitter’s interface does that: both the claim and its negation get the same amount of real estate in a user’s Twitter client. On the other hand, this is not true for Facebook, where a claim gets much greater exposure than a comment, and a comment may be hidden quickly due to follow-up comments. So, on Facebook most people may miss an objection to a claim.”

Take a look at TwitterTrails.com and tell us what you think!
We would also be happy to run an investigation for you, if interested.

(This is a copy of a blog post on the blogs.wellesley.edu/twittertrails site.)

 
