Facebook and Twitter wield huge influence over how people understand the world around them. This is the year we confront that…
THE SOCIAL MEDIUM IS THE MESSAGE.
Social networks have been exposed. No one can pretend that they are simply neutral platforms – mere tubes and pathways, like phone lines, that allow us to share snippets of our lives. That fiction was laid bare on November 8, 2016.
Over the next year mainstream culture will grapple, for real, with the civic and political effects of our lives online. Many intellectuals, with eyebrows cocked, have warned that this reckoning was coming. But it took the US election – and the ascent of Donald Trump, the insult-hurling, falsehood-circulating tweeter-in-chief – to shine a blinding arc light on technology’s role on the political stage.
We are thus heading into a very McLuhanesque year. Marshall McLuhan, the Patron Saint of WIRED Magazine, made his name in the 60s, studying how pivotal technologies produced widespread, non-obvious changes. The Gutenberg press, he argued, created a spirit of “detachment” that propelled science while giving a new sense of agency to individuals. Electricity had a “tactile” effect, keeping us in constant contact with the world via the telegraph, telephone and TV. The photocopier imposed a “reign of terror” on publishers by letting everyday folks copy documents.
People assume McLuhan was always a cheerleader for these shifts. But his thinking could vibrate with anxiety at the coming impact of electronic media. He suspected we could have too much contact with each other – that incessant exposure to the world would leave us fearful and angry. He might have looked at Trump’s rise on Twitter and nodded in recognition; a young McLuhan had watched European fascists in the 40s inject hypernationalism into supporters’ souls, via the radio.
When Trump won last year to widespread shock, liberal critics attacked the major social networks for enabling several unsettling trends. Platforms such as Facebook and Twitter were viral hotbeds for conspiracy theories and disinformation. Memes that roared to life on image boards and fringe political sites – jittery with misogyny and white nationalism and hatred of Hillary Clinton – made the leap to the mainstream on social networks. Dangerous falsehoods, such as the idea that Clinton ran a child-trafficking ring out of a pizzeria, spread widely; indeed, on Facebook, the top 20 fabricated stories netted more engagement than real stories from news sources that actually did factual reporting, as BuzzFeed found. (This isn’t a problem only in the US: anti-Muslim conspiracy stories are avidly circulated on Facebook in Myanmar, and Germans trade Facebook posts claiming Angela Merkel is Adolf Hitler’s daughter.) The same was true on Twitter, which became a tool for small numbers of people to propagate abuse and hate speech.
Meanwhile, the “filter-bubble” effect, which writer Eli Pariser (@elipariser) had pinpointed years before, arrived in full force. As my friend Zeynep Tufekci (@zeynep), a sociologist at the University of North Carolina and author of an upcoming book about political organising in the digital age, says, “I’m Facebook friends with some people who support Trump, but I don’t recall seeing their Facebook updates – it appears the algorithms assumed I wouldn’t be interested.”
We can’t indict social media alone, or even primarily, for the rise of disinformation and politically abusive behaviour. Traditional media – cable TV, radio, newspapers – recklessly amplified nonsense this political season (and were played shamelessly by Russia’s email hacking). They need their own reckoning. But social networks increasingly influence how people learn about the world. According to the Pew Research Center, about 44 per cent of Americans cite Facebook as a news source. It is a crucial part of “where we put the cursor of our attention all day long,” says Tim Wu, author of The Attention Merchants and The Master Switch.
The question lingering in the air: how should social networks grapple with their civic impact? As we will discover, these issues will be devilishly hard to resolve.
A Good Reason to Wage Media War
The optimistic view is that there’s good precedent for fighting crap online. Back in the aughts, internet giants waged a war against spam and content farms. To cut down on spam entreaties from Nigerian princes and the like, email providers used machine learning to detect spam-like content; they also created shared blacklists. To quash content farms – low-quality insta-websites designed to game Google’s top search results – Google created an ambitious ranking scheme called Panda. This down-ranked sites that employed tricks such as keyword stuffing (putting lots of invisible, unrelated phrases on a page). Remarkably, it worked: content farms vanished and bulk spam is now a marginal problem.
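The machine-learning approach the email providers took can be sketched in miniature. The following is a toy Naive Bayes classifier over word counts – the training examples, tokenisation, and smoothing choices are all invented for illustration, nothing like the scale or sophistication of the systems actually deployed:

```python
import math
from collections import Counter

# Toy training data -- invented examples for illustration only.
SPAM = ["claim your prize money now", "prince needs your bank transfer",
        "free money click now"]
HAM = ["meeting moved to tuesday", "here are the notes from class",
       "dinner at seven tonight"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(counts, message):
    """Log-likelihood of a message under one class, with Laplace smoothing
    so unseen words don't zero out the whole score."""
    total = sum(counts.values())
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in message.split()
    )

def is_spam(message):
    # Classify by whichever class makes the message more probable.
    return log_likelihood(spam_counts, message) > log_likelihood(ham_counts, message)
```

Real filters add millions of examples, better features, and feedback loops, but the core idea – let labelled data teach the system what junk looks like – is the same one the platforms could turn on fake news.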
Social networks could use similar strategies to solve their current civic dilemmas. Consider fake news, an area where, as scholars have shown, algorithmic analysis could help identify crap. Software created by Kate Starbird, a professor of human-centred design and engineering at the University of Washington, was able to distinguish with 88 per cent accuracy whether a tweet was spreading a rumour or correcting it when analysing chatter about a 2014 hostage crisis in Sydney. And Filippo Menczer, a professor of informatics and computer science at Indiana University, has found that Twitter accounts posting political fakery have a heat signature: they tweet relentlessly and rarely reply to others.
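That heat signature lends itself to a simple heuristic sketch. Everything below – the inputs, weights, and threshold – is a made-up illustration of the idea that high posting volume plus a low reply rate looks automated; it is not Menczer’s actual method:

```python
def bot_likeness(tweets_per_day, reply_fraction):
    """Score in [0, 1]; higher means more bot-like behaviour.
    Weights and the 100-tweets/day saturation point are invented."""
    volume = min(tweets_per_day / 100.0, 1.0)   # relentless posting
    sociability = 1.0 - reply_fraction          # rarely replying to others
    return 0.5 * volume + 0.5 * sociability

def flag_accounts(accounts, threshold=0.8):
    """Return names of accounts whose behaviour crosses the threshold.
    `accounts` maps name -> (tweets_per_day, reply_fraction)."""
    return [name for name, (tpd, rf) in accounts.items()
            if bot_likeness(tpd, rf) >= threshold]

accounts = {"relentless": (250, 0.02),   # floods the timeline, never replies
            "human": (8, 0.6)}           # modest volume, mostly conversation
suspects = flag_accounts(accounts)       # -> ["relentless"]
```

A production system would learn these weights from labelled accounts rather than hand-tuning them, but the signal itself – behaviour, not content – is what makes this kind of detection tractable.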
Social networks sit atop piles of data that can help identify bogus memes – and they can rely on their users’ eagerness to help too. Sure enough, Facebook has already begun to develop tools along these lines. In December 2016, it unveiled a system that makes it easier for anyone to flag a post if it seems like deliberate misinformation. If a link that purports to be a news story is flagged by many users, it’s sent to a human Facebook team. The team adds it to a queue, where external fact-checking firms, including Snopes and Politifact, can check if they think the story is suspect. If they do, Facebook warns that it is “disputed by third-party fact checkers” and offers links to rebuttals by Snopes or others. If a user tries to share the story later, Facebook warns them that it’s disputed. The goal isn’t to catch all falsehoods; the system targets the most blatant posts.
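The flag-and-review flow described above can be sketched roughly as follows. The flag threshold, function names, and data structures are assumptions – Facebook has not published the real mechanics – but the stages (crowd flags, human queue, third-party verdict, share-time warning) mirror the announced design:

```python
from collections import defaultdict

FLAG_THRESHOLD = 50      # invented value; the real cut-off is unpublished

flag_counts = defaultdict(int)
review_queue = []        # links awaiting third-party fact-checkers
disputed = set()         # links fact-checkers have marked suspect

def flag_post(url):
    """A user reports a link as deliberate misinformation."""
    flag_counts[url] += 1
    if flag_counts[url] == FLAG_THRESHOLD:
        review_queue.append(url)    # enough flags: escalate to humans

def record_verdict(url, is_suspect):
    """A fact-checking firm (e.g. Snopes, Politifact) files its finding."""
    if url in review_queue:
        review_queue.remove(url)
    if is_suspect:
        disputed.add(url)

def share_warning(url):
    """Warning text shown when a user tries to share a disputed link."""
    if url in disputed:
        return "Disputed by third-party fact checkers"
    return None
```

Note that the pipeline never deletes anything: the system only labels, which is exactly why it targets the most blatant posts rather than all falsehoods.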
There are plenty of other tweaks platforms could make. Craig Silverman, a BuzzFeed editor who has closely studied fake news, argues that Facebook and Twitter ought to make it easier to see the provenance of a link; right now, those from carefully reported sources such as The Wall Street Journal look the same as ones from conspiracy sites. The platforms could instead emphasise logos and names so a user might realise, Silverman says, “Wait a minute, this domain name is hillaryclintonstartedaids.com.”
Now let’s look at the filter-bubble phenomenon. Social media platforms could design algorithms that would expose us to people, ideas and posts that aren’t in such lockstep with our views. Then, when a platform such as Facebook suggests related content, “You could use these mechanisms to surface ideas that are ideologically challenging,” Pariser explains. Or as Tufekci argues: “Show more cross-cutting stuff! I’m not saying drown users in it. But the default shouldn’t be: ‘We’re just gonna feed you candy.’”
[Chart – Internet media as news outlets, percentage of US adults who use each social platform vs. use it as a news source: 67% / 44%; 48% / 10%; 16% / 9%; 4% / 2% (Pew Research Center)]
Let your imagination go wild and you can concoct even more aggressive, more ambitious reforms. Imagine if you got rid of all the markers of virality: no counts of likes on Facebook, retweets on Twitter, or upvotes on Reddit! Artist Ben Grosser created a playful browser plug-in called the Facebook Demetricator that does precisely this. It’s fascinating to try: suddenly, social media stops being a popularity contest. You start assessing posts based on what they say rather than because they racked up 23,000 reposts.
Some scholars argue Facebook should hire human teams to more comprehensively review trending stories, deleting ones built on lies. In fact, Facebook did just that last year until a conservative outcry ended the practice.
The biggest impediment to all this change, though, is economic. Traditional media organisations publish and broadcast nonsense because it attracts eyeballs for ads. New media have inherited this problem in spades: they know – in vivid, quantitative detail – just how much their users prefer to see posts they agree with ideologically, seductive falsehoods included. Spam got on people’s nerves, so companies were eager to stamp it out; on some level, social platforms’ attempts to fight fake news and confirmation bias will come into conflict with their users’ appetite for them.
Setting Aside the Lip Service
Nonetheless, public pressure did, in fact, prod Facebook to action after the US election. Imagine if greater pressure impelled platforms to take an even stronger stand against falsehoods and filter bubbles. Would we like the result?
It’s unclear. Waging war on disinformation isn’t easy, because not everyone agrees on what disinformation is. It’s unambiguous that “the Pope endorses Donald Trump” isn’t true. But how about “Hillary Clinton lied about having pneumonia, so she’s a lying snake”? The most effective disinformation usually begins with a fact then amplifies, distorts, or elides; ban the distortion and you risk looking like you’re banning the nugget of truth too. Online interactions are conversation, and conversation has always been filled with bluster and canards. “The idea that only truth should be allowed on social networks is antithetical to how people socially interact,” says Karen North, a professor of digital social media at the University of Southern California.
Or consider this example raised by New York University media theorist Clay Shirky: in 2016, supporters of the Dakota Pipeline protests were encouraged to “check in” on Facebook at that location to confuse police. Those false check-ins “are fake news”, Shirky notes. Any policy aimed at enforcing truth on Facebook could easily be used to quash that activity.
“Look, fake news is a real problem,” he says. “But do liberals really want to hand the decisions over to a single large corporation?” Asking the platforms to be granular arbiters of truth would endow them with even more power. Whatever one can say about Donald Trump, he understands – and masterfully plays – the media, old and new. He uses Twitter to perform an end run around journalism, to utter falsehoods that are repeated by his followers and circulated further by mainstream news. When he attacks someone in a tweet, his supporters harass the target. Like other merchants of disinformation online, Trump exhales such a cloud of half-baked assertions that it leaves people mistrustful of everything. If you can do that, hey, what does it matter if social networks slap a “Disputed” label on the post you wrote? As Jon Favreau, one of Barack Obama’s former speechwriters, puts it: “Donald Trump doesn’t care if we think he’s telling the truth – he just wants his supporters to doubt that anyone’s telling the truth.”
And yet Trump has millions of eager followers. This is what gives pause to Jay Rosen, a professor of journalism at New York University. “You have to think about the demand side,” he says. It’s not enough to ask why people spread political disinformation, he adds. You also have to ask, “Why do people want to consume this stuff so much?”
Ponder that and you realise there are limits to what technological fixes can achieve in civic life. Though social networks amplify American partisanship and distrust of institutions, those problems have been rising for years. There are plenty of drivers: say, 20 years of right-wing messaging about how mainstream institutions – media, universities, scientists – cannot be trusted (a “retreat from empiricism”, says Rosen). As Danah Boyd (@zephoria), head of the Data and Society think tank, notes, we have lost many of the mechanisms that once bridged the cultural gaps between people from different walks of life, including widespread military service, affordable colleges and mixed neighbourhoods.
The old order was flawed and elitist. It also locked out too many voices; it produced seeming consensus by preventing many from being heard. We are fumbling around for mechanisms that can replace and also improve upon that order, Pariser says. “It reminds me of how the secular world hasn’t found a replacement for some of the uses and tools that religions served. And the new media world hasn’t found a replacement for the ways that consensus was manufactured in the old world,” he adds. This is the year that we need to begin rebuilding those connections – on our platforms and in ourselves.