
TOOLS AND THE DEVELOPMENT OF CONTEMPORARY SOCIETY

Social Networks and the Marketplace of Ideas

By Shawn Fuller

Social networks, the ones we have belonged to ever since we ventured out of the trees and onto the savannah, have always been our most important source of information. More than ever before we need to understand the nature of such networks and the impact of digital technology, taking into account what we know about human perception and psychology.

Networks exist at every size and level of life on earth. The most staggeringly complex networks are still the biological ecosystems—and the human brain. Humans live within multiple interwoven social networks—family, village, tribal, business, national, and international—and that is before we add digital networks to the picture. There are many network concepts and terms, which are described more fully in Wikipedia, but two of them, clusters and weak ties, are especially relevant to the impact of Facebook.

Network clustering occurs when a group of nodes (people, in this case) are closely interconnected, with each node connected to all or most of the other nodes in the cluster. Most people belong to multiple network clusters: everyone in a family or close friend network knows each other. Acquaintances, by contrast, are examples of weak ties, in which we know someone but do not know anyone else in their network. Weak ties are a vital aspect of networks. It is through weak ties that viral ideas—and viruses—spread widely through the human community. You are more likely to learn something new from a weak tie than from your close network of friends and family; people looking for jobs, for example, are more often helped by acquaintances than by close friends. Weak ties are essential for society to thrive. A lack of weak ties can perpetuate poverty and lack of opportunity within a community that has been cut off from the wider society by ethnic or religious discrimination.

Diagram left: Two network clusters connected by a weak tie. Diagram right: Two people connected by five intermediaries or six degrees.
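For readers who like to experiment, the structure in the diagram can be reproduced in a few lines of Python using the networkx library. The names and cluster sizes below are invented for illustration:

```python
# A minimal sketch of clusters and weak ties, using the networkx library.
# The two five-person clusters and the single bridging edge are invented
# for illustration.
import networkx as nx

# Two fully interconnected clusters (everyone knows everyone).
family = nx.complete_graph(["Ana", "Ben", "Cara", "Dev", "Eve"])
colleagues = nx.complete_graph(["Faye", "Gil", "Hana", "Ivan", "Jo"])

G = nx.compose(family, colleagues)
G.add_edge("Eve", "Faye")  # the weak tie bridging the two clusters

# Clustering coefficient: 1.0 means all of a node's contacts know each other.
print(nx.clustering(G, "Ana"))            # 1.0 -- deep inside a cluster
print(nx.clustering(G, "Eve"))            # 0.6 -- she holds the weak tie

# Remove the weak tie and the network falls into two isolated communities.
G.remove_edge("Eve", "Faye")
print(nx.number_connected_components(G))  # 2
```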

In his book, The Square and the Tower: Networks and Power, from the Freemasons to Facebook, Niall Ferguson summarizes the research behind the popular notion of six degrees of separation: on average, a chain of just five acquaintances connects you to everyone else on earth.
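A toy model shows how few long-range ties it takes to produce this effect. The sketch below uses a standard Watts–Strogatz small-world network; the network size and parameters are arbitrary choices for illustration, not data about any real social network:

```python
# A toy "small world" demonstration, using the networkx library.
import networkx as nx

n = 2000   # people in the network
k = 10     # each person starts out knowing their 10 nearest neighbours

# Rewire 10% of ties to random strangers (long-range weak ties).
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=42)
print(nx.average_shortest_path_length(small_world))  # roughly 4 hops

# With no weak ties at all, the same network is far more spread out.
ring = nx.watts_strogatz_graph(n, k, p=0.0, seed=42)
print(nx.average_shortest_path_length(ring))         # roughly 100 hops
```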

Venn diagram showing how news is targeted
This is a very simplified illustration of two filter bubbles. There are many potential religious, ethnic, and other divisions in society that can lead to different information diets. Framing refers to the context surrounding a story that informs the way it should be understood. Framing can cause the same story to have mutually conflicting interpretations.

Users on the Facebook global network are now separated by only 3.57 degrees on average. This makes it possible for information of any kind to travel rapidly. However, Facebook's trending and curation algorithms ensure that you see only information you are interested in. This creates a paradoxical effect: you can receive memes, news, and misinformation from across the globe in a matter of hours or days, but only the kind of information you want to see. Facebook's friend-suggestion algorithms encourage dense clustering, which likely leads to greater homophily than you would find offline. Its ranking algorithms favour posts from friends you have previously shown an interest in over those you have skimmed past. So even if your Facebook network is varied—even if you are friends with people unlike you—you will still tend to see more posts from people whose attitudes resemble your own. Family and close friends you are connected to both online and in real life may therefore receive their information and news from radically different sources. The result is a filter bubble effect in which the same news stories are understood in entirely conflicting ways.
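The narrowing effect of engagement-based ranking can be illustrated with a deliberately simplified sketch. This is not Facebook's actual algorithm, which is proprietary; it only shows how favouring voices you already engage with feeds the bubble:

```python
# A deliberately simplified feed-ranking sketch. This is NOT Facebook's
# actual algorithm; it only illustrates how weighting by past engagement
# narrows what you see.
from collections import Counter

# Hypothetical history: how often the user engaged with each friend's posts.
past_engagement = Counter({"like_minded_friend": 40, "distant_cousin": 2})

posts = [
    {"author": "like_minded_friend", "story": "Article matching your views"},
    {"author": "distant_cousin", "story": "Article challenging your views"},
]

def score(post):
    # Past engagement dominates the ranking, so previously favoured
    # voices keep rising to the top -- the filter-bubble feedback loop.
    return past_engagement[post["author"]]

for post in sorted(posts, key=score, reverse=True):
    print(post["author"], "->", post["story"])
```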

The Marketplace of Ideas

Historically, whenever a new communication tool has enhanced our ability to network, big changes have followed. It took decades for societies to adapt to the changes wrought by the printing press.

Prior to Web 2.0, our sources of information and knowledge were selected and presented by gatekeepers: the newsroom and magazine editors, universities, scientific journals, and textbook publishers who decided what the public should know. They provided the context within which major events and historical facts were to be interpreted and understood—"All the news that's fit to print," as it still says on the masthead of the New York Times. This was by no means a perfect model for an information ecosystem. Its main problems were the influence that corporate owners had over what we were allowed to know, and the media's focus on exciting events over more serious but gradual, ongoing trends. But gatekeeping also provided essential guardrails. A combination of laws and self-imposed norms prevented the media from publishing deliberate falsehoods.

Traditional media followed a hierarchical form of governance and information flow: universities and publishers printed and distributed textbooks to schools and students, and news flowed from major newspapers and broadcasters out to local stations and papers. The internet has changed this relationship.

Mumbai

This first became evident in 2008. In their book, LikeWar: The Weaponization of Social Media, Peter Singer and Emerson Brookings recount the Mumbai terrorist attacks of that year. Had the attacks occurred even three years earlier, the traditional media of print and television would have taken days to report what had happened, and weeks to piece the events together into a complete picture for TV and newspaper audiences. But by 2008, digital technology and social media platforms had put remarkable powers in the hands of ordinary citizens. Minutes after the attacks began, Mumbai residents were tweeting where they were occurring. Within hours, reports and pictures of the attacks had spread across the social media platforms, and it was these amateur photos that professional journalists used to fill the papers and TV screens the next day. Now it is common to see news organizations contacting citizens for permission to use phone or dashcam videos they have posted on Twitter.

Algorithmic Gatekeepers

Instead of being the central points from which news is broadcast directly to the public, the news media are now part of a much larger information ecosystem. They broadcast to the public on their websites, Facebook pages, Twitter feeds, and YouTube channels. The consumers following these channels, in turn, repost stories through Twitter and Facebook, where they reach further audiences.

Philip Napoli, in his book Social Media and the Public Interest, uses the term algorithmic gatekeeping to describe the growing influence of algorithms on the selection of news that reaches us. Major newspapers like the New York Times and the Washington Post use their own algorithms to analyse story content, combining this with measures of how well stories perform on Facebook to decide which future stories to recommend on their Facebook pages. As Napoli points out, this creates a multi-step process by which news is disseminated to the public. The New York Times, for example, still relies on human editors to decide which stories to publish each day. But its recommendation algorithms select only a subset of these stories to post to Facebook and Twitter, based on which ones will perform best on social media. Those who follow the Times's Facebook pages or Twitter feeds have their selection of stories further curated by social media algorithms that take into account their personal characteristics. They in turn share stories with their own followers, and again the algorithms curate the stories each follower sees.
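Napoli's multi-step dissemination chain can be sketched schematically. The stories, scores, and thresholds below are invented; real newsroom and platform systems are far more elaborate:

```python
# A schematic of the multi-step gatekeeping chain Napoli describes,
# with invented stories and scoring. Not any outlet's real system.

stories = [
    {"headline": "City budget passes", "predicted_social_engagement": 0.2},
    {"headline": "Celebrity feud erupts", "predicted_social_engagement": 0.9},
    {"headline": "New climate report", "predicted_social_engagement": 0.5},
]

# Step 1: human editors choose what to publish (here: everything).
published = stories

# Step 2: the outlet's recommender posts only the stories expected to
# perform well on social media.
posted_to_social = [s for s in published
                    if s["predicted_social_engagement"] >= 0.5]

# Step 3: the platform curates again per user. A single invented
# "interest" score stands in for the many personal signals it uses.
def platform_feed(posts, user_interest):
    return [s for s in posts
            if user_interest.get(s["headline"], 0) > 0.3]

alice_interest = {"Celebrity feud erupts": 0.8, "New climate report": 0.1}
print([s["headline"] for s in platform_feed(posted_to_social, alice_interest)])
# -> ['Celebrity feud erupts']: two layers of filtering have already
#    removed most of what the editors published.
```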

Mainstream news editors apply a number of criteria to decide what is newsworthy, but these boil down to whether a story has societal significance. What criteria do Facebook's algorithms use? The media scholar Michael Ann DeVito analysed Facebook Newsroom and Notes blog posts, patent filings, and Securities and Exchange Commission filings to determine which values the algorithms follow in selecting stories for your news feed. For Facebook, they boil down to personal significance: your friend relationships are the single biggest determinant of which stories you see.

Hacking the Information Ecosystem

What happens when social media algorithms become the gatekeepers of our information ecosystem? In her talk, The Internet's Original Sin, Renée DiResta observes that an information ecosystem built for advertisers is also remarkably effective for propagandists. The recommendation algorithms prioritize what is popular and engaging over what is true. The original intent of the social media applications was for the public to provide content that was personal, authentic, and of interest to friends and followers; famous people and "influencers" were more than welcome to join in. The personal, the authentic, and the famous would keep people's attention on their phones and screens so that they could continue seeing ads.

But it is not only the advertisers who want the public's attention. Governments and terrorist organizations, domestic ideologues and true believers also want our attention. These actors spend millions of dollars and thousands of hours of online work attempting to sway public opinion with propaganda designed to look like it comes from regular citizens. The media scholars Jonathan Corpus Ong and Jason Vincent Cabañes studied how political parties do this in the Philippines. At the top level, politicians (or hyperpartisan organizations) hire public relations (PR) firms as their experts in this domain. At the next level, the PR firms pay digital influencers, celebrities, and pundits to carry the message; these influencers have tens of thousands to millions of followers. The PR firms also hire (and pay very poorly) thousands of middle-class workers to re-post the messages using multiple fake accounts, creating the illusion of widespread engagement. The parties' followers within the public then join in and further amplify the propaganda.

The Tow Center for Digital Journalism identified 450 websites in the US masquerading as local news organizations. With folksy names like the Ann Arbor Times, Hickory Sun, and Grand Canyon Times, these sites produce partisan political content amongst articles on real estate prices and the best places to get gas. Most of the content is generated automatically, not written by local journalists, and it eventually appears in Facebook and Twitter news feeds.

Free Speech

For a long time, neither the technology companies nor legislators wanted to touch this problem, which had been brewing for years. It was only after Russia's massive attack on the 2016 US election that elected officials and social media companies began to take notice. One of the central metaphors for protecting free speech is the marketplace of ideas: in a well-functioning marketplace, good products and services outperform bad ones. Justice Oliver Wendell Holmes famously wrote that "the ultimate good desired is better reached by free trade in ideas—that the best test of truth is the power of the thought to get itself accepted in the competition of the market." Justice Louis Brandeis, in his concurring opinion in Whitney v. California, wrote: "If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence." This opinion was directly concerned with the democratic process, whereby citizens had the right—and the duty—to participate in the discussion of ideas, even in opposition to the government. It is through competition within the marketplace of ideas that we can distinguish the true from the false.

Media scholars such as Philip Napoli, Whitney Phillips, Renée DiResta, Zeynep Tufekci, and many others are telling us that our information ecosystem shows clear signs of market failure, in both ideas and information. False stories, conspiracy theories, and misinformation outnumber and outperform true stories. As we will explore next, it is not just the platforms that are the problem: our own minds play a major role.

Hacking Human Minds

A very basic problem hindering journalists, scientists, and medical professionals in fighting false information is their failure to take into account well-established facts about human perception and psychology.


Anything that repeats a falsehood, even if it includes a rebuttal, amplifies and spreads the falsehood. It is not just the trending algorithms that are gamed by propaganda, but our own minds. In a widely cited MIT Media Lab study, the researchers Soroush Vosoughi, Deb Roy, and Sinan Aral investigated the spread of all verified true and false news stories on Twitter from 2006 to 2017—126,000 stories tweeted more than 4.5 million times. They compared false stories, true stories, and those containing a mixture of true and false content, using six fact-checking organizations (snopes.com, politifact.com, factcheck.org, truthorfiction.com, hoax-slayer.com, and urbanlegends.about.com) to classify each rumour as true, false, or partly true. They found that false stories travel further than true ones: a false story was 70 percent more likely to be retweeted than a true one, and it took true stories on average six times as long to reach as many people as false ones. The false stories also travelled more widely.

This diagram illustrates the difference between how false stories and true stories spread on Twitter. The true story takes longer to be retweeted the same number of times as the false one (five retweets in this example). The false story attains greater breadth (the amount of branching) and depth (the number of retweets in succession).
Based on: "The Spread of True and False News Online," Soroush Vosoughi, Deb Roy, and Sinan Aral
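In the study, depth is the number of retweet hops from the original tweet, and breadth is the number of users at a given hop. A short sketch shows how the two measures are computed on an invented cascade (not the study's data):

```python
# Computing the depth and breadth of a retweet cascade.
# The cascade below is invented for illustration; it is not data from
# the Vosoughi, Roy, and Aral study.
from collections import defaultdict

# child -> parent: who retweeted whom, starting from the original tweet.
retweeted_from = {
    "user_b": "origin", "user_c": "origin",
    "user_d": "user_b", "user_e": "user_b", "user_f": "user_d",
}

def depth_of(user):
    # Count hops back to the original tweet.
    d = 0
    while user != "origin":
        user = retweeted_from[user]
        d += 1
    return d

users_at_depth = defaultdict(int)
for user in retweeted_from:
    users_at_depth[depth_of(user)] += 1

print("depth:", max(users_at_depth))                 # longest chain: 3
print("max breadth:", max(users_at_depth.values()))  # widest level: 2
```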

There is something in the content of fake news stories and rumors that connects with heightened emotions that is absent from factual stories.

What is it about falsehoods that makes them so viral? They seem to key into our emotions in a way that true stories don't. The psychologist Gordon Pennycook presented combinations of false and true news stories to people and asked them to judge each one as either true or false. He found that people in a heightened emotional state were more likely to judge false stories as true—and it wasn't only anger that impaired people's reasoning.

False stories travel faster than true ones; it took an average of six times as long for true stories on Twitter to reach as many people as false ones. The false stories also travelled more widely.

When people feel positive emotions—inspired, strong, or proud—they are as likely to misjudge fakes as those feeling negative emotions such as fear, distress, or shame. The level of emotion, however, did not influence people's ability to assess real news stories. This suggests that there is something in the content of fake news stories and rumours that connects with heightened emotions and is absent from factual stories. We often assume that we are more likely to believe a false rumour if it matches our political bias, but Pennycook found that heightened emotion had a much bigger effect. We are, however, more likely to share a story if it matches our political beliefs, even when we are able to determine that it is false.

False stories are more likely to trigger our instinct for novelty. Humans, and animals in general, are attuned to novelty because it often leads to opportunities and discovery. In the MIT study, the authors compared true and false rumours and found that the false ones were more novel in terms of the information they contained. When they applied sentiment analysis, they found that sharers of false stories expressed greater surprise and disgust than sharers of true rumours.
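One crude way to quantify "novelty of information"—a rough stand-in for the topic-model approach the study actually used—is to score a story by how improbable its words are given what a reader has recently seen. A minimal sketch, with an invented reading history:

```python
# A rough, invented stand-in for the study's novelty measure: score a
# story by how improbable its words are relative to the reader's recent
# exposure. The real study used topic models over each user's prior
# 60 days of tweets.
import math
from collections import Counter

recently_seen = ("the council met to discuss the budget "
                 "the budget vote passed the council adjourned").split()
background = Counter(recently_seen)
total = sum(background.values())

def surprise(story):
    # Average negative log-probability per word; words never seen before
    # get a small smoothed probability, so they score as highly novel.
    words = story.lower().split()
    return sum(-math.log((background[w] + 0.1) / (total + 0.1))
               for w in words) / len(words)

print(surprise("the council met to discuss the budget"))  # low: familiar
print(surprise("aliens replace mayor overnight"))          # high: novel
```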

You Can’t Not Believe Everything You Read

“If you tell a lie big enough and keep repeating it, people will eventually come to believe it.”
—Attributed to Joseph Goebbels, Minister of Propaganda of the Third Reich

Marking fake news stories as “disputed” had no effect on the illusory truth effect.

Propagandists have long known that we are more likely to believe something simply as a result of repeated exposure to it. Repetition is one of the fundamental mechanisms by which people—and organisms in general—learn, and the effect has been replicated in multiple experiments. But why would we give credence to something we know is false, rather than continue to reject it? One underlying assumption, stemming from the work of the philosopher René Descartes, is that we begin by understanding something—an idea or an assertion—and then, based on whether it has compelling arguments or evidence, decide whether to believe it. The psychologist Daniel Gilbert takes us, instead, to another philosopher of that era with a very different idea of how we comprehend and assess what we hear. Baruch Spinoza argued that in order to understand a statement, you must first believe it—at least a little bit. Only then can you decide whether you actually believe it, based on your existing knowledge and logical consideration. In Gilbert's summary, belief is first, easy, and inexorable, whereas doubt is retroactive, difficult, and only occasionally successful.

The illusory truth effect, as it is called today, is widely supported by research. Gordon Pennycook found that when people were exposed to a fake news story, they were more likely to believe it when they saw it again (except for extremely implausible assertions, such as "The Earth is a perfect square"). The effect was present even when the fake news ran counter to people's political beliefs, and marking fake news stories as "disputed" did nothing to diminish it.

Hacking Identity

National, ethnic, religious, and other identities can be a source of joy and celebration. But the propagandist’s task is to sow division and fear within society as a whole through these differences.

One of the oldest techniques used by propagandists is to manipulate people through their sense of shared identity. This is a two-pronged effort: one prong promotes pride in the identity itself, while the other raises threats to that identity from (supposed) outsiders. Our national, ethnic, religious, and other identities can be a source of joy and celebration. But the propagandist's task is to sow division and fear within society as a whole through these differences.

The Breakdown of Trust

The external environment—what is happening in the world—can also make our minds susceptible to hacking. The violence and the fragmentation of collective sensemaking that occurred in the past were never solely due to the arrival of new communication technology. When the printing press was first invented in China, centuries before Gutenberg, it was not followed by thirty years of religious violence as it was in Europe. When Luther printed his ninety-five theses in 1517, it was in response to long-brewing, widely held distrust and resentment over corruption in the Catholic Church. The pamphlets that he and other reformers printed and distributed spoke to what people were already seeing and feeling about the church. Similarly, targeted messages on social media connect with a growing sense of distrust and political polarization.

Corruption in the systems we once trusted has done great harm to society, but political polarization has been the most damaging factor.

The Knight Commission on Trust, Media and Democracy identified a set of global trends that have undermined the public's trust in governments, big business, and media. There is a growing sense of inequity, especially in the developed world, coupled with the perception that government is unwilling or unable to address these problems. The 2008 financial meltdown was especially damaging, since the bailout of the big banks did not include any reform. Although global poverty has decreased enormously in many regions that used to be poor, the middle class, especially in the United States, has experienced declining economic mobility while the super-rich have gained unprecedented increases in wealth.

In his Medium article, How to Destroy Surveillance Capitalism, Cory Doctorow invites us to consider the possibility that so many people are vulnerable to conspiracy theories because they are living through real conspiracies happening around them:

“What if the trauma of living through real conspiracies all around us — conspiracies among wealthy people, their lobbyists, and lawmakers to bury inconvenient facts and evidence of wrongdoing (these conspiracies are commonly known as “corruption”) — is making people vulnerable to conspiracy theories?”

Corruption in the systems we once trusted has done great harm to society, but political polarization has been the most damaging factor. It leads to government gridlock: laws and budgets do not get passed, or when they do, it is over the opposition of half the country. This leads to further distrust of government. Polarized politics fosters a sense of outrage, which makes us more likely to share stories that feed that outrage.

Watch: Renée DiResta: How to Beat Bad Information
Stanford University School of Engineering

Renée DiResta is research manager at the Stanford Internet Observatory, a multi-disciplinary center that focuses on abuses of information technology, particularly social media. She’s an expert in the role technology platforms and their “curatorial” algorithms play in the rise and spread of misinformation and disinformation.


In the series


Related:

Viral Ideas
Taming the Web
Reclaiming the Public Square
The Great Attention Heist


Further Reading


Websites:

First Draft
The mission of First Draft is to protect communities from harmful misinformation. Through their CrossCheck program, they work with a global network of journalists to investigate and verify emerging news stories. The site has many research articles, educational resources, and guidelines on misinformation and infodemics.

Data & Society
Data & Society studies the social implications of data-centric technologies & automation. It has a wealth of information and articles on social media and other important topics of the digital age.

Stanford Internet Observatory
The Stanford Internet Observatory is a cross-disciplinary program of research, teaching and policy engagement for the study of abuse in current information technologies, with a focus on social media.


Profiles:

Sinan Aral

Sinan Aral is the David Austin Professor of Management, IT, Marketing and Data Science at MIT, Director of the MIT Initiative on the Digital Economy (IDE), and a founding partner at Manifest Capital. He has done extensive research on the social and economic impacts of the digital economy, artificial intelligence, machine learning, natural language processing, and social technologies such as digital social networks.


Renée DiResta

Renée DiResta is the technical research manager at the Stanford Internet Observatory, a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. She investigates the spread of malign narratives across social networks and assists policymakers in devising responses to the problem. She has studied influence operations and computational propaganda in the context of pseudoscience conspiracies, terrorist activity, and state-sponsored information warfare, and has advised Congress, the State Department, and other academic, civil-society, and business organizations on the topic. At the behest of the Senate Select Committee on Intelligence (SSCI), she led one of the two research teams that produced comprehensive assessments of the Internet Research Agency's and the GRU's influence operations targeting the U.S. from 2014 to 2018.

YouTube talks:
The Internet’s Original Sin
Renée DiResta shows how the business models of the internet companies led to platforms that were designed for propaganda.

Articles:
“Computational Propaganda: If You Make It Trend, You Make It True”
The Yale Review


Claire Wardle

Dr. Claire Wardle is the co-founder and leader of First Draft, the world’s foremost non-profit focused on research and practice to address mis- and disinformation.


Zeynep Tufekci

Zeynep Tufekci is an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, a contributing opinion writer at the New York Times, and a faculty associate at the Berkman Klein Center for Internet and Society at Harvard University. Her first book, Twitter and Tear Gas: The Power and Fragility of Networked Protest, provides a firsthand account of modern protest fueled by social movements on the internet.
She writes regularly for The New York Times and The New Yorker.

TED Talk:

WATCH: We’re building a dystopia just to make people click on ads