Earlier this month, Facebook’s chief security officer, Alex Stamos, announced that during the run-up to the 2016 presidential election, the company had distributed targeted political ads apparently sponsored by Russian operatives. Most of the ads made no reference to specific candidates, Stamos reported; rather, they “appeared to focus on amplifying divisive social and political messages across the ideological spectrum—touching on topics from LGBT matters to race issues to immigration to gun rights.”
These revelations were not especially surprising, since Facebook had long been used for delivering targeted and divisive political messages. But their effect has been jolting nonetheless. Even Facebook founder Mark Zuckerberg, who prefers to keep his distance from anything more controversial than a banality, has had to respond: last week, he took to video to deliver a special message. Craning earnestly into the camera, Zuckerberg proclaimed that he was going to “make sure Facebook is a force for good in democracy.” His mission was to assure everyone that Facebook is taking the necessary steps to prevent “bad actors in the world” from successfully using its network for political manipulation. This was about what one might expect: throughout Facebook’s existence, Zuckerberg has sought to cast the company’s corporate ambitions in terms of unifying platitudes.
Cambridge Analytica, one of the more ominous names to emerge in recent accounts of social media machinations, doesn’t do smarm. The company was hired first by Ted Cruz and then by Donald Trump’s campaign to tap into big data for targeted advertising. The specifics of Cambridge Analytica’s operations remain murky, but the Trump campaign has boasted openly of its targeted efforts to demobilize, rather than mobilize, voters—an effort similar in its broad outlines to that of the Russian operatives detected by Facebook. In late October, a senior-level Trump campaign official told Bloomberg Businessweek’s Sasha Issenberg and Joshua Green that the campaign had “three major voter suppression operations under way.” One targeted groups of white liberals by tying Hillary Clinton to her past support for the Trans-Pacific Partnership. Young women were sent messages crafted to raise ire over the past allegations of sexual assault against Bill Clinton and Hillary Clinton’s alleged mistreatment of his accusers. To discourage certain groups of African Americans from voting for Clinton, Issenberg and Green reported:
a young staffer showed off a South Park-style animation he’d created of Clinton delivering the “super predator” line (using audio from her original 1996 sound bite), as cartoon text popped up around her: “Hillary Thinks African Americans are Super Predators.”
Like the ads flagged by Stamos, the “super predator” video had been disseminated through “dark posts” on Facebook. All available evidence suggests that these episodes—both what Facebook has admitted to and what has been reported on Cambridge Analytica—are just a small, visible sliver of a huge and diffuse effort. Right-wing strategists recognize that they will have a hard time securing solid majorities in a number of political contests over the coming years, and targeted advertising furnishes potent weapons. Potential supporters of the opposition may be targeted with personally calibrated Facebook posts, Snapchat ads, and chat bots designed to splinter movements and spread disenchantment. Those deemed most likely to be politically indifferent may be fed calculated streams of images and stories designed to make them feel under personal attack from the raging, anti-American forces leading the opposition. Various techniques will be tested on the fly, and operatives may probe for effective triggers to nudge certain groups toward disengagement.
Such divisive uses of media platforms have serious democratic consequences. Yet, news commentators’ recent discussions of propaganda efforts over social media have largely been framed as a “fake news” problem. In this telling, the Russian government, Cambridge Analytica, or Facebook are at fault for circulating false information. But this is only part of the story. What’s now emerging more clearly into public view are the enabling conditions of this crisis: the unwieldy and disconcerting capacities of the vast apparatus of surveillance and targeted influence that’s come to serve as the economic base of the commercial internet.
The tactics of carefully targeted, data-driven manipulation—though innovative and destabilizing—are not entirely new. They predate the existence of Cambridge Analytica, and Facebook, and the contemporary notion of “fake news” itself. For decades, digital marketers—working in both commercial and political domains—have been perfecting models for using consumer data to identify and manipulate decision-making vulnerabilities. New marketing techniques have been developed with an astounding level of sophistication and duplicity: the particular exploits of Cambridge Analytica and its contemporaries depend upon a matrix of data collection and targeted communications, the primary purpose of which is not to influence politics but to increase the power of marketing. Call it an infrastructure for commercial surveillance: the technologies, companies, and, importantly, public policies that enable behavioral engineering over digital networks. One of the central threads of today’s internet is a shared capacity among businesses to collect and exchange user information.1 In the ocean of big data, no company is an island.
In her investigative reporting for the Guardian earlier this year, Carole Cadwalladr revealed how Cambridge Analytica partners with digital marketers and tech companies at key stages in its workflow. Detailed consumer information is readily obtained from any number of commercial data brokers. Firms like Experian and Acxiom compile rich profiles on virtually every US household and specialize in merging disparate data sets in order to make them actionable for their clients. To collect information firsthand, Cambridge Analytica need only tap into a platform like Facebook, which also serves as a means for distributing targeted messages. The entanglement is deep enough that it is hard to imagine how Cambridge Analytica could function without access to commercial data pipelines and communication channels.
Today’s surveillance infrastructure represents nearly three decades of technical and political engineering. A range of marketing interests have steered the development of digital networks toward maximizing their consumer surveillance capacities, tilling the soil for political manipulation. From Cambridge Analytica to the Russian FSB, big data’s negative political externalities stem from the powerful influence of marketing over our communications systems. Going back at least to the emergence of radio broadcasting, marketers have seized upon successive communications platforms, exerting considerable effort to bring them into alignment with the needs of business.
Before real-time ad auctions, before social networks and profiles, before cookies and banner ads, there was a malleable digital network architecture supported by public money and without immediate commercial application. In 1989, Tim Berners-Lee proposed the web at the CERN international research laboratory, and in 1993 CERN released the underlying software into the public domain. The web was designed to be open-ended, but it was hardly optimized to serve the marketing needs of business. Support for commerce, let alone commercial monitoring, was not a standard feature, nor was it particularly welcome in early net cultures.
Foremost among early technical challenges were web protocols that were effectively incapable of identifying and tracking individual users: HTTP is stateless, so each request arrives with no memory of the one before it. This built-in anonymity was quickly recognized as an impediment to commercialization and, thanks to the web’s open architecture, relatively easy to overcome. The development of the HTTP cookie in 1994 was among the first of many technical augmentations to bring monitoring capabilities to digital networks. Considerable resources have been applied toward improving and expanding technologies of surveillance ever since. But commercial surveillance is also a political creation and, as with all major infrastructure projects, public/private partnership has played an integral role.
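To see how small the required augmentation was, consider a minimal sketch, using only Python’s standard library, of what a cookie adds to a stateless protocol: the server assigns a random ID on first contact and recognizes the browser on every visit thereafter. This is an illustration, not any company’s actual code.

```python
# Minimal sketch of cookie-based tracking: assign a random ID on first
# visit, recognize it (and log activity against it) on every return.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = SimpleCookie(self.headers.get("Cookie", ""))
        if "visitor_id" in cookie:
            # Returning browser: the ID links this request to past activity.
            visitor_id = cookie["visitor_id"].value
            body = f"Welcome back, {visitor_id}. Logging this page view."
        else:
            # First visit: mint a persistent identifier.
            visitor_id = uuid.uuid4().hex
            body = "First visit. Assigning an ID for future recognition."
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        # The browser will return this header with every subsequent request.
        self.send_header("Set-Cookie", f"visitor_id={visitor_id}; Max-Age=31536000")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```

Embed the same tracking domain across many sites (the third-party cookie) and the same ID follows a user around the web.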
In the 1990s, the formative period of digital commercial surveillance, the cornerstone of federal internet policy was not only privatization, but the maximization of private sector control. This was a bipartisan effort that began with the administration of George H. W. Bush, but it was championed and implemented by Bill Clinton. Finding common ground with an otherwise hostile Republican Congress, Clinton’s internet policy was guided by laissez faire. With a few exceptions around encryption and a pornography moral panic, Clinton made good on his promise to “let the private sector lead,” paving the way for the development of a surveillance-based web economy.
By the end of the decade, a burgeoning online advertising sector, flush with finance capital, had begun to sharpen its focus on data collection. Operating without the knowledge of most web users, companies like DoubleClick (now part of Google) developed large-scale consumer profiling and ad targeting capacities and began to partner with direct marketers to merge online and offline data. The systematic combination of offline and online information was novel at the time and was widely perceived to violate privacy norms. Helped along by an emergent advocacy community, a backlash ensued that put internet privacy on federal and state policy agendas.
Mounting pressure prompted Congress to seriously consider “opt-in” legislation mandating that all companies obtain prior consent from web users before collecting their data. This threatened to undermine the developing advertising business model that relied on pervasive surveillance as a default condition of internet use. Facing negative publicity and legislative threat, a coalition of marketing trade associations and newly formed online ad industry groups successfully fought back to install a regime of advertising self-regulation. Privacy policies, opt-out provisions, and mea culpas were among the stall tactics employed to produce a veneer of privacy protection, while maintaining the status quo of pervasive surveillance. Companies argued that the heavy hand of big government would kill innovation and harm economic growth. Industry would get its own house in order, police bad actors, and continue to use data to improve consumer experiences with relevant advertising.
A few middling protections were implemented, including a measure to curb data collection from young children (the 1998 Children’s Online Privacy Protection Act), but robust privacy regulation was largely abandoned when the ’90s boom ended. The bursting of the dotcom bubble put a temporary damper on the web ad market, while the new Congress and George W. Bush Administration let pending privacy initiatives wither on the vine. Privacy advocates continued to agitate, but Washington moved on. Self-regulation became the baseline policy framework for online data collection in the 21st century. The neoliberal consensus was that commercial surveillance on the internet was a business like any other. Best to let the market sort out the details.
Once momentum and capital accrued, it became increasingly difficult to alter course. Historians of technology call this “path dependence,” and it highlights that the evolution of technology is always about more than technology per se. With an accommodating policy framework, surveillance was cemented as the net’s primary business model. A supporting infrastructure advanced rapidly. When Google and Facebook went on to build advertising empires in the intervening years, they relied on more than just moxie and heaps of venture capital. They also banked on the political premise that data collection would be pervasive by default, that they would be free to build the tools of mass surveillance and targeted persuasion without being held to public account. While privacy dust-ups have been perennial, a digital marketing lobby has ballooned to mitigate threats. Google is now among the nation’s biggest lobbyists and Facebook is on track to join the ranks.
The internet’s apparent tendency to promote winner-take-all markets, combined with neoliberalism’s high tolerance for market concentration, has enabled Facebook and Google to achieve extraordinary control over the digital marketing sector. These two behemoths, increasingly recognized as an online advertising duopoly, are among the world’s leading purveyors of marketing surveillance and key platforms for political persuasion. At Facebook in particular, this incredible bottlenecking of surveillance capacity has drawn a surge of criticism over the company’s role in enabling political manipulation, and it has raised the question of what, if any, civic responsibilities are borne by private enterprises of such magnitude.
Today online tracking is well known, if not well understood, among the public. This heightened awareness has forced digital marketers to sharpen public relations strategies in order to sell internet users and potential regulators on the upsides of the surveillance infrastructure. Toward this end, the industry has relied on an old tactic: the bait and switch. The bait has been the promise that everyone—consumers, businesses, web publishers—wins with data-driven advertising. You, the user, get free web services and more “relevant” ads. Businesses get a better sense of your interests, so they can serve ads more efficiently. While much of the back end of data collection and consumer surveillance is hidden from easy view, targeted advertising sometimes rears its head all too conspicuously, like the coffee grinder you looked at on one site that now seems to be stalking you all over the web. To allay concerns that this kind of observation raises, the industry offers a reassuring explanation. The Network Advertising Initiative, a trade group fighting government regulation, puts it this way:
What if you kept seeing ads for lawn mowers and tractors when you live in an apartment in New York City? You wouldn’t want to see ads for something you have no interest in buying. Similarly, advertisers don’t want to spend money to tell you about a product or service that is of no interest to you.
In this benign account, data-driven advertising solves that old media problem of annoying and ill-targeted advertising. The consistent grain of a good burr-grinder is something that clearly matches your interests, even if the sellers’ persistence is itself becoming annoying.
The switch comes once consumers have been lured into disclosing finely detailed data trails. Advertisers can use this information for purposes that far exceed distinguishing the likely lawn care needs of the New York City apartment dweller from the Kansas rancher. Rather than simply matching consumers with products that fit existing interests, mass consumer monitoring has led to sophisticated efforts to modify behavior, engineer consumer habits, and intervene upon intimate decision-making processes. This is not, of course, the story that digital advertisers tell the public or regulators. But nor is it a conspiratorial view. Many advertising firms and marketers talk quite openly about these goals—as long as they are in the company of clients or industry insiders. Firms boast of their ability to combine data with behavioral science to steer consumers’ decisions. Ogilvy Change, which specializes in applying behavioral science to marketing, declares itself to be “the leading behavioral interventions agency,” though many agencies are vying for this title.
To understand how data-driven advertising has taken on this more intrusive character requires looking into marketers’ and advertisers’ recent surge of interest in behavioral science. Marketers’ fascination with psychology is not new; it goes back at least to the beginnings of the modern advertising industry. What has changed are the types of psychological insights of most interest, and marketers’ ability to use our digital mediascape as an expansive laboratory for testing and applying theories about how to influence our decisions. With access to surveillance data’s rich pointillist portraits of consumer behavior, today’s marketers are drawn less to psychoanalysis and more toward the behavioral sciences, especially neuropsychology and cognitive psychology. The titles of a few of the many popular books aimed at marketers tell the story: Neuromarketing: Understanding the “Buy Buttons” in Your Customer’s Brain and The Consuming Instinct: What Juicy Burgers, Ferraris, Pornography, and Gift Giving Reveal About Human Nature.
The marketers have also fallen for behavioral economics, which offers them a cogent (and relatively simple) psychological model that promises to illuminate how people make decisions and how they can be influenced. This model speaks to intuitions they have had about consumers for a long time, but also offers a scientific map of decision-making processes they can appropriate in order to generate and test new strategies for influencing our decisions.2 If marketers can understand the micro-processes that steer our largely intuition-driven decision-making, they can target the points most vulnerable to their influence. Hence, marketers have taken great interest in the dozens of cognitive biases and heuristics that researchers find can affect decisions and run counter to the vaunted goal of maximizing utility. “Loss aversion,” for example, is a widely documented bias in which people are more motivated to avoid perceived losses than to seek comparable gains. One marketing response to such a notion is to pitch a product as helping you avoid losses. So a mobile phone company may frame its plan as a way to avoid losing unused minutes that other companies make you forfeit. Another option is to give away free trial offers. When the trial ends, make the consumer feel the purchase avoids a loss.
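Such models are concrete enough to compute with. As an illustration, here is the prospect-theory value function with Tversky and Kahneman’s published 1992 parameter estimates, a minimal sketch rather than any marketer’s actual model:

```python
# Illustrative only: the Tversky-Kahneman (1992) prospect-theory value
# function, with their median parameter estimates, shows why framing an
# offer as avoiding a loss can feel stronger than an equivalent gain.
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def subjective_value(x: float) -> float:
    """Perceived value of gaining (x > 0) or losing (x < 0) an amount."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

# "Gain 100 rollover minutes" vs. "lose the 100 minutes you already have":
print(subjective_value(100))   # ~57.5 units of felt value
print(subjective_value(-100))  # ~-129.3: the loss frame hits twice as hard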
Another underlying lesson of behavioral economics research is that our decisions depend mightily on contextual cues. If you have control over critical aspects of the contexts in which people are making decisions, you can affect outcomes. This is why many policy advisers who have imbibed behavioral economics advocate using nudges or “choice architecture” to influence people to make positive choices. In the realm of public policy, advocates for choice architecture suggest these decisions be transparent, open to public debate, and justified in the name of community values. But for marketers, choice architecture can be designed in private and purposed to suit only their needs. Digital environments, in particular, yield enormous opportunities for marketers to design their own choice architecture and endlessly test its effectiveness. Websites and advertisers constantly run A/B tests on millions of users, gauging responses to subtle differences in web design. When marketers take control of designing choice architecture, they load the contextual dice in favor of their own interests. Companies try to keep their specific tactics secret; what we know, based on examples that have leaked, is only the tip of the iceberg. One experiment found that the Economist could shift an additional 52 percent of consumers toward a $125 print-plus-digital option, rather than a $59 digital-only option, simply by including a decoy $125 print-only option on the pricing list. Uber tapped into behavioral psychology to create a digital architecture—including gamifying the pursuit of worthless badges—designed to prod and nudge drivers to work longer during less lucrative hours.
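The testing loop itself is mechanically simple. Below is a minimal sketch of its two basic ingredients, stable assignment of users to variants and a significance test on the difference in conversion rates; all names and numbers are invented:

```python
# Sketch of an A/B test: deterministic bucketing plus a two-proportion
# z-test on conversion rates. Hypothetical experiment and figures.
import hashlib
import math

def assign_variant(user_id: str, experiment: str) -> str:
    """Stable assignment: the same user always sees the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 else "A"

def z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical result: variant B (say, a decoy price on the page) wins.
print(assign_variant("user-42", "pricing-decoy"))
print(z_score(conv_a=320, n_a=10_000, conv_b=410, n_b=10_000))  # ~3.4
```

Run this loop continuously over millions of users and even tiny behavioral effects become detectable, and exploitable.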
In a memorable investigative report, Charles Duhigg discovered that a Target Corporation statistician had invented a surprisingly effective algorithm that predicted whether a customer was pregnant—along with her due date. Target’s interest in pregnancy prediction followed a common behavioral modification strategy that starts by identifying situations in which consumers are experiencing a life-altering event. Habits are hard to break, so those rare situations of pattern-shattering life changes are ideal moments for forging new habits. This is exactly what Target wanted to do by sending baby-related coupons to expecting families. Bring new parents in for deals during a tumultuous time and hope Target shopping would become a habit for years to come. Consider all the ways that marketers may be able to identify when you break off a relationship, experience a significant health concern, start a new job, lose a job, or go through other events that create a habit-flexible, vulnerable moment.
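A toy version of this kind of prediction is easy to sketch (assuming the scikit-learn library). The features loosely echo Duhigg’s reporting (unscented lotion and mineral supplements were among the signals he describes), but the data and labels below are fabricated for illustration:

```python
# Toy sketch of purchase-based pregnancy prediction, in the spirit of
# Duhigg's report. All data here is invented; real models would use
# thousands of purchase features across millions of shoppers.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: unscented_lotion, mineral_supplements, cotton_balls, wine
X = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = later-confirmed pregnancy (toy labels)

model = LogisticRegression().fit(X, y)

# Score a new shopper's basket; a high score triggers baby-related coupons.
new_basket = np.array([[1, 1, 0, 0]])
print(model.predict_proba(new_basket)[0, 1])  # a "pregnancy score" near 1
```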
Behavioral science also suggests your mood, energy levels, and alertness can affect decision-making and biases in predictable ways. According to leaked internal documents, Facebook claims it can identify its teenage users’ emotional states to give advertisers the means to reach those who feel “insecure,” “anxious,” or “worthless.” Presumably the point is to pinpoint the exact moment a sales message is most likely to hit home.
Using data to predict internal states for strategic targeting could go further, as behavioral science has identified decision-altering states beyond common ways of describing moods and affects. In Thinking, Fast and Slow, Daniel Kahneman describes a shocking study based on detailed records of the exact time and outcome of Israeli judges’ decisions reviewing parole requests. While the requests were presented in random order, timing made a world of difference. Judges were many times more likely to deny a request just before a meal than just after a meal. With support from further research, Kahneman explains this result as a matter of ego depletion—a degradation of willpower that comes after a period of exercising self-control.
Might marketers be especially interested in approaching us with impulse-enticing offers when our egos are depleted? The kind of data generated by online activity, smartphone GPS, Fitbits, and the rapidly expanding internet of things can offer all sorts of guides to when someone’s eating, sleeping, leaving work, dealing with a health or legal problem, celebrating a birthday, reading upsetting news stories, and so on. This offers plenty of opportunities for predicting ego depletion, cognitive overload, and much more about fluctuations in our mental stamina and internal states.
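No one outside the firms knows what such targeting looks like in practice, but even a crude heuristic over logged timestamps would suffice to time an impulse offer. The following is a purely hypothetical sketch of that idea:

```python
# Entirely speculative sketch: flag windows when a user is plausibly
# depleted (long gap since a logged meal, or deep into the waking day)
# as candidate moments for an impulse-enticing offer.
from datetime import datetime, timedelta

def likely_depleted(now: datetime, last_meal: datetime,
                    day_start: datetime) -> bool:
    """Crude heuristic: >4h since eating, or >10h into the waking day."""
    hungry = now - last_meal > timedelta(hours=4)
    long_day = now - day_start > timedelta(hours=10)
    return hungry or long_day

now = datetime(2017, 9, 29, 18, 30)
print(likely_depleted(now,
                      last_meal=datetime(2017, 9, 29, 12, 15),
                      day_start=datetime(2017, 9, 29, 7, 0)))  # True
```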
Targeted marketing that engages in data-driven behavioral modification techniques undermines market ideals and exacerbates the power asymmetry between buyers and sellers. The political implications of such techniques are more disturbing still. For certain political marketers, the buggy features of human decision-making suggest political choice itself may be hackable. Find the bugs and exploit them: That’s where the energies of more and more campaign operatives have gone as experimental approaches to influencing political behavior have proliferated.
The first insight from behavioral economics is to start thinking differently about political behavior. Zoom the focus in, closer than sweeping campaign narratives or even day-to-day press battles. Political participation can be broken down into smaller units of specific behaviors. It’s at this granular level that nudges, emotional triggers, and carefully designed choice architecture can exert significant influence at critical steps.
One of those crucial, but mundane, behavioral decision points is whether to go to the polls. Hence, data-driven political campaigns have set their sights on finding those triggers and prods that get the right people to choose to take time out of a presumably busy Tuesday to get a voter sticker. The first step in this process has been to use all the resources of big data, supplemented by campaign outreach, to identify those marginal supporters who may or may not take the time to cast a ballot. These are the most valuable targets, for their electoral choices are most likely to be influenced by techniques developed with the help of behavioral science.
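The targeting logic can be sketched schematically. Assume, purely for illustration, that a campaign has modeled each voter’s support for its candidate and probability of turning out; every name and score below is invented:

```python
# Schematic sketch of marginal-voter targeting: the most valuable targets
# support your candidate but are genuinely on the fence about showing up.
voters = [
    # (voter_id, support_score, turnout_probability), both modeled, 0 to 1
    ("v001", 0.91, 0.95),  # strong supporter, certain voter: low priority
    ("v002", 0.88, 0.48),  # supporter, coin-flip turnout: prime target
    ("v003", 0.15, 0.52),  # opponent leaner, uncertain: suppression target
    ("v004", 0.55, 0.10),  # ambivalent and unlikely to vote: low priority
]

def mobilization_value(support: float, turnout: float) -> float:
    """Expected gain from nudging turnout: highest when support is high
    and turnout is maximally uncertain (probability near 0.5)."""
    uncertainty = turnout * (1 - turnout)
    return support * uncertainty

ranked = sorted(voters, key=lambda v: mobilization_value(v[1], v[2]),
                reverse=True)
for voter_id, support, turnout in ranked:
    print(voter_id, round(mobilization_value(support, turnout), 3))
# v002 ranks first: the "marginal supporter" described above.
```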
By the 2012 elections, Democratic and Republican campaigns were investing heavily in big data analytics, arguably putting political advertising at the vanguard of data-driven, behavioral-science-inflected marketing. Both the Romney and Obama campaigns prepared personalized scripts for volunteers to use to court specific voters. One controversial tactic, rolled out mostly by PACs and again relying on behavioral research, leveraged social pressure and shaming to increase turnout. In states where voting records are public, some citizens received information through mailers or on Facebook noting whether specific friends or neighbors had voted in the last election, along with warnings that records from the coming election could be shared, too.
Not surprisingly, it was rogue characters outside the typical orbit of major party political operatives who took the reins of data-driven campaigning for the Trump campaign. They had winning at all costs on their minds. If data-driven targeting and behavioral science can be used to increase voter turnout, they can also be used to suppress it. Here, the hacking task is to identify marginal voters leaning toward an opponent and figure out what behavioral intervention might nudge them not to vote. Or it is to play off crowd dynamics to catalyze a social cascade that sours a particular group on the candidate who would most likely get their vote.
A single Facebook post, however inflammatory, is unlikely to change voters’ minds all by itself. This strategy is a matter of nudges that make certain outcomes more probable. But delivered at just the right time, calibrated with sophisticated data analysis, and aimed at the most receptive targets, coordinated efforts along these lines could prove very powerful when rolled out at significant scale. The terrain that enables them may shift the advantage to the least scrupulous and most manipulative operatives. In a demonstration of Facebook’s power, a 2012 study published in Nature reported an experiment, conducted during the 2010 midterm elections, involving 61 million Facebook users who were randomly assigned to see different types of messages, or no message, about voting on election day. The authors found that the Facebook messages likely brought an extra 340,000 people to the polls that day.
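The arithmetic behind that finding is worth spelling out, since it is the mechanism the whole strategy relies on: effects invisible to any individual can be decisive in aggregate. A rough calculation (the close-state figure below is illustrative, not something the study examined):

```python
# Back-of-the-envelope arithmetic on the Facebook voting experiment:
# a per-user effect too small to notice becomes large at platform scale.
users_shown_message = 61_000_000
estimated_extra_voters = 340_000

per_user_lift = estimated_extra_voters / users_shown_message
print(f"{per_user_lift:.4%} lift per user")  # ~0.56%: imperceptible alone

# Aimed not at everyone but at voters modeled as pivotal in a close state,
# the same lift needs surprisingly few targets. (Illustrative: 10,704 was
# the 2016 presidential margin in Michigan.)
close_state_margin = 10_704
print(f"{close_state_margin / per_user_lift:,.0f} targets to cover the margin")
```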
Given that some of the major players involved in Trump’s campaign effort have obsessions with war tactics and strategy, it’s easy to imagine that weaponized targeting may not only be a pre-election phenomenon. Such efforts could be employed as part of an ongoing campaign to weaken any resistance to the Trump Administration and thwart political opposition through ratcheting up in-fighting and splintering. It’s not an overstatement to suggest that the infrastructure of mass consumer surveillance enables new kinds of actors to take up the work of COINTELPRO on a mass scale. Former Cambridge Analytica employees have said the company internally describes its operations as psychological warfare.
Cambridge Analytica may not be alone in pursuing these types of psychological warfare tactics. In response to the recent revelations of Russian-bought Facebook ads, Senator Mark Warner told the Washington Post that the aim of the ads was “to sow chaos.” Yet, rather than promoting general chaos, some ads may have been specifically designed to fuel infighting among the Trump opposition. Earlier this year, The Intercept showed that TigerSwan, a shady mercenary firm hired by Energy Transfer Partners to combat communities opposing the Dakota Access Pipeline, used knowledge gleaned from surveillance as part of their own strategy to splinter their opponents. A leaked TigerSwan document declared, “Exploitation of ongoing native versus non-native rifts, and tribal rifts between peaceful and violent elements is critical in our effort to delegitimize the anti-DAPL movement.”
What our current digital environment affords are opportunities for efficient, large-scale use of such tactics, which can be refined by data-rich feedback loops. Manipulation campaigns can plug into the commercial surveillance infrastructure and draw on lessons of behavioral science. They can use testing to refine strategies that take account of the personal traits of targets and identify the interventions likely to be most potent. This might mean identifying marginal participants, let’s say for joining a march or boycott, and zeroing in on interventions to dissuade them from taking action. Even more worrisomely, such targeting could try to push potential allies in different directions. Targets predicted to have more radical leanings could be pushed toward radical tactics and fed stories deriding compromise with liberal allies. Simultaneously, those predicted to have more liberal sympathies may be fed stories that hype fears about a radical takeover of the resistance. Such campaigns would likely play off divisions along race, gender, issue-specific priorities, and other lines of identity and affinity.
Beginning in the 1990s, a neoliberal approach handed the internet’s major governance decisions to commercial entities under the notion that all of the benefits of the digital revolution—political or otherwise—were best realized through unfettered markets. This move inspires considerably less optimism today, with rising troll brigades, pervasive campaigns of online harassment, worries surrounding fake news and propaganda, a commercial surveillance apparatus that could put Big Brother to shame—and platforms so monopolistic that users can only leave them at the cost of abandoning social connections. The web’s creator, Tim Berners-Lee, now laments that his invention has become dominated by powerful interests, rife with misinformation, and susceptible to the antidemocratic mutations of political advertising. Even Twitter’s co-founder, Evan Williams, has discarded his prior techno-optimism, pointing to the commercial net’s failure to live up to its emancipatory potential.
But more carefully calibrated algorithms, minor changes in social media policies, and tweaks to user interfaces will not have a fundamental impact on the business models of major platforms like Facebook and Google. These measures reek of the same technocratic hubris that created the conditions of the modern internet in the first place—they are destined to come up short. The appropriate solution to a large-scale political problem is a policy program that directly confronts large-scale surveillance, precision targeting, and behavioral experimentation. There is no obvious or easy path forward, but the United States might look to the European Union as one source of inspiration. Over the protests of digital platforms, the EU’s recently adopted General Data Protection Regulation introduces robust limitations on the collection and use of consumer data. Heeding ample evidence, regulators in Brussels have decided that free markets are simply not up to the task of protecting privacy.
Unfortunately and unsurprisingly, the Trump Administration and Republican legislators are moving briskly in the opposite direction, giving internet service providers a free pass to monetize consumer data and hampering the Federal Communications Commission’s ability to regulate the sector at all. If the behavioral economists are right, then the implications of a digital public sphere built around user monitoring, data-driven profiling, and targeted messaging do not bode well for democracy. We shouldn’t wait until the next presidential election to see how much worse it can get.
1. To visualize this infrastructure at work, install a browser plug-in like the Electronic Frontier Foundation’s Privacy Badger, which maps the dizzying array of data transactions that occur behind the scenes of nearly all popular websites. ↩
2. In this sense, behavioral economics today may prove to have certain parallels with psychoanalysis in prior decades of advertising. Marketers didn’t need Freud to tell them that sex could sell. Nonetheless, psychoanalytic models of unconscious desire created a scaffolding that marketers built on to go places they had not been before. ↩