After Neoliberalism


At the heart of the new age are novel configurations of fear, certainty, and power

Still from Antitrust (2001).

Shoshana Zuboff. The Age of Surveillance Capitalism. Public Affairs, 2019.

Today there is no more powerful corporation in the world than Google, so it may be hard to remember that not too long ago, the company was in a fight for its very existence. In its early years, Google couldn’t figure out how to make money. Founded in 1998, Google quickly became known for having the very best algorithm for indexing websites. That algorithm, called PageRank, made Google the internet’s most popular search engine, and Silicon Valley venture capitalists leapt to fund the company. But because search was free it was not clear how the company could ever generate enough revenue to turn a profit. Google had plenty of users, and no paying customers.

Profits did not matter much at the end of the twentieth century, during the peak of the dotcom stock market bubble. But when stock prices plummeted in 2000, it became clear that many technology companies were even less profitable—or more unprofitable—than observers had assumed. Google generated some revenue by selling advertisements to marketers who placed “banners” on its search pages—but that wasn’t going to be enough. Investors began to press Google executives for more details about its business model, which meant the company had to find a way to “monetize” its services, and fast.

Google engineers were aware that users’ search queries produced a great deal of “collateral data,” which they collected as a matter of course. Data logs revealed not only common keywords but also dwell times and click patterns. This “data exhaust,” it began to dawn on some of Google’s executives, could be an immensely valuable resource for the company, since it contained information that advertisers could use to target consumers. Google’s cofounders, Sergey Brin and Larry Page, were initially skeptical of orienting the company around advertising. In an address at the 1998 World Wide Web Conference, they said, “We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of consumers.” “This type of bias,” they added, “is very difficult to detect.” But the dotcom bust put their concerns in perspective, and as pressure built, Google began to direct its extraordinary engineering talent toward the problem of how best to match online ads to users based upon their search query patterns.

One morning in 2002, the Google data logs team arrived at the office and noticed that the phrase “Carol Brady’s maiden name” had spiked on five separate occasions the night before, each time at forty-eight minutes after the hour—perfectly coinciding with the moment the question aired on Who Wants to Be a Millionaire? in five of the six US time zones. The data logs, in other words, held remarkable predictive power in real time: analyzing the clicks that followed the show’s east coast broadcast could have predicted the clicks that would follow on the west coast, before the show aired there. Eureka! If prediction was possible, why not sell marketers advertising space based on predictions about what would likely appeal to particular users, drawing on the full array of data each person had entered into search boxes? The more data fed into the system, the greater the scale, and the better the algorithm performed. The approach that had made Google the best search engine on the internet would make it the best ad-targeting engine as well.

There was still more user data to be had. Couldn’t “third parties”—other companies with websites where users left behind data exhaust—consent to release their user information to Google, as company engineers asked in an early note? There was also a mountain of personal data to be “derived from user actions,” as the engineers put it. Cookies, discreetly installed on users’ browsers, could invisibly push data back to Google as individuals made their way around the internet.

In 2008, Google began running online auctions for advertisers to purchase marketing space premised upon its predictive “assets.” Judging by the market for online ad sales, the gambit worked. Google ad revenue, unreported in 2000, was $70 million in 2001, the first year the company turned an actual profit. Two years later, revenue had soared to $3.14 billion, stunning the financial commentariat. Gmail launched in 2004, and the company began to scan all private correspondence—more data fed back into the system, more sales of its predictive “assets.” By 2006, revenue had surpassed $10 billion, and profits were at $3.08 billion. To create scale, Google went on a buying spree, acquiring, among other companies, Android, a mobile operating system, for $50 million in 2005; YouTube, and its stores of “collateral data,” for $1.65 billion in 2006; and DoubleClick, an online advertising company, for $3.1 billion in 2007. Google Street View was rolled out the same year. While imaging the world block by block, Google vehicles secretly collected personal information from private Wi-Fi routers. (Google first denied the practice. Later, company executives admitted to the scheme, blaming a rogue engineer.) Profits kept climbing even through the Great Recession, and in 2010 Google cracked the list of the top twenty most profitable US corporations, where it has stayed ever since. In 2019 regulators caught Google secretly pushing browsing data back to the company through hidden web pages it controls, without any form of user consent whatsoever. It strains credulity to believe that this practice had begun only recently.

The moment Google found a way to profit from offering people free services on the internet was the moment the “Age of Surveillance Capitalism” was born, according to Shoshana Zuboff’s book of that title. In the course of serving users, Google also began surveilling them. The users still weren’t the customers, because now the customers were the advertisers. In the new arrangement, the users were the products for sale.

Today, there are still internet companies that, like Google in the early days, have yet to turn a profit but continue to receive extraordinary financial valuations in the capital markets. Twitter and Uber are two prime examples. But the mature Google model has been replicated successfully by a number of others, most notably Facebook, Google’s first great copycat. Cofounder and CEO Mark Zuckerberg lured Sheryl Sandberg away from Google in 2007 to be Facebook’s Chief Operating Officer. The “Like” button launched in 2009, and Facebook first turned a profit the following year after getting serious about selling ads. To scale its data mining project, Facebook went on an acquisitions spree of its own, buying Instagram and WhatsApp (in 2012 and 2014, respectively), companies that do not generate revenue from selling a product or service, but do bring in fountains of user data for Facebook. Microsoft acquired LinkedIn—the buttoned-up, business-focused version of these data-rich platforms—in 2016. Amazon took its own tack, focusing on delivery logistics and data centers, but entered the arena of voice data mining with its Alexa in 2014. Of the five great internet companies, only Apple has mostly held the line so far by focusing on selling physical products.

Over the past few years, a number of companies with access to personal data have begun attempting to acquire and secure supply routes to what Zuboff calls “behavioral surplus.” As she notes, this marks an evolution in surveillance capitalism. The new frontier is not prediction but “behavioral modification,” to be achieved through access to data that reveals not already formed preferences but emotions and feelings, which can be manipulated to mold new preferences. Facebook believes that, through biometric readings of mood, gait, and posture in Instagram photos, it will soon be able to manipulate “empathy.” The company is also rolling out its own proprietary dating service, breaking into a $2.5 billion US market. Then there is the vast amount of data being mined from the proliferating receptacles of the “internet of things,” including what was already, in 2017, a $14.7 billion market for smart home devices and appliances, from toothbrushes to thermostats. Zuboff speculates that companies will soon not only know what you want before you know it—they will be able to make you want what you want to begin with, still without your knowing it.

For Zuboff, surveillance capitalism is a “rogue mutation” of capitalism, one that is fast leading to “concentrations of wealth, knowledge, and power unprecedented in human history.” The twenty-first century, she predicts, is once again witnessing the specter of totalitarianism. Google and the company’s followers represent nothing short of “a significant threat to human nature”—to “free will” itself. Politically, surveillance capitalism is “best understood as a coup from above: an overthrow of the people’s sovereignty.” The central message of The Age of Surveillance Capitalism could not be clearer. Google—the company that “invented and perfected surveillance capitalism,” whose famous corporate slogan is, after all, “Don’t Be Evil”—is evil.

This is a little hyperbolic, or at least excessive in its rhetorical reach, but Zuboff is not wrong about an important matter. Google really does exercise a power of a novel kind, a power that was first accumulated during the 2000s while almost no one—not regulators, not commentators, let alone the public at large—was paying significant attention. The legal wilderness in which Google and other internet companies have roamed for so long would have been impossible without the rage for government deregulation and corporate “self-regulation.” For twenty years Google has siphoned off mountains of data, often in violation of unenforced privacy laws, while acquiring competitors free from antitrust scrutiny, and federal pushback has so far been minimal to nonexistent.

Indeed, in various contexts, Google has collaborated with the government, expanding its own powers while shielding itself from prosecution—directly or by implication. During the post–September 11 “state of exception,” the US national security apparatus press-ganged Google into providing it with data and logistical support, cultivating the company’s private surveillance capacities. In 2008, Google came to the aid of Barack Obama’s presidential campaign; in 2012 Google’s then-CEO Eric Schmidt oversaw the Obama reelection campaign’s voter-turnout system. The revolving door between Silicon Valley and the Obama Administration has been well documented.

Meanwhile, many blithely aided the internet companies’ efforts at data mining. Few people today want to live without the services companies like Google, Facebook, or Amazon provide. Why is that? Given the recent upsurge of recrimination against the giant internet companies, it may be worth stepping back to ask: What is it within us that these companies are so successfully tapping, and why have we so quickly become dependent upon their offerings?

To explain how Google and its imitators so quickly amassed so much data and power, Zuboff offers a social-psychological diagnosis of the first decades of the twenty-first century. She cites what she calls the “unbearable” conditions wrought by the very success of the neoliberal ideological project in the final decades of the twentieth century. Much to its credit, the book is not another hackneyed account of neoliberalism, in which a postwar “golden age” of Fordist social democracy gives way, in a narrative of declension, to the post-1980 pro-market “neoliberalism” of Reagan and Clinton. Rather, Zuboff takes what she calls the late twentieth-century “neoliberal habitus” for granted. Surveillance capitalism, having grown up in that soil, is something new. It is what has come next.

According to Zuboff, the neoliberal platform of the Clinton 1990s promised the withering of the state, a decrease in market regulation, and thus a new era of private competition, economic growth, and efficiency. Neoliberalism also promised a new social contract, premised upon greater individuation through choice. The idea was to strip individuals of constraining social givens, setting selves free to navigate an ever more complex “global” society in which all social attachments would be chosen by individuals themselves.

Put bluntly, none of this worked out all that well. Zuboff quotes the social theorist Zygmunt Bauman on “the yawning gap between the right of self-assertion and the capacity to control the social settings which render such self-assertion feasible. It is from the abysmal gap that the most poisonous effluvia contaminating the lives of contemporary individuals emanate.”

The internet companies promised individuals that they could close this gap for them. First, the platforms could provide social connection, in the context of an offline world suffering from alienation, as globalization churned through communities. Second, they could provide knowledge and certainty, in response to an increasingly uncertain social world.

How? Google declared its mission to “organize the world’s information and make it universally accessible and useful.” In a world of overwhelming complexity, Google promised guidance. With information ever more abundant in the digital age, Google could make all the flows of data and discourse both newly available and legible to individuals. Google Maps, launched in 2005, promised users better navigation of the world at large—literally. For consumers maneuvering through the similarly vast terrains of commodities and entertainment, Amazon and Netflix have offered similar services (“You might also like to watch . . .”). How soothing and orienting it can be, Zuboff writes, to have a service that has “a thousand ways to anticipate our needs and ease the complexities of our harried lives.”

What Facebook promised to its users was even more blatantly a response to the threat of social isolation: friends. As Zuckerberg put it, “People feel unsettled. A lot of what was settling in the past doesn’t exist anymore.” In response, Facebook could meet the “personal, emotional, and spiritual needs” of a new “global community.”

In their business philosophy, Google and Facebook have espoused entrepreneurial “disruption,” but what they have offered their individual users is a set of safe harbors from the social conditions that have followed in the wake of the very political and economic success of the neoliberal ideological project. In doing so, Google and Facebook, Zuboff argues, have become something other than agents of entrepreneurial disruption. They have become citadels of a new kind of social stability, as well as of concentrated corporate power.

Zuboff’s treatment of the concept of certainty is fascinating. We all now enjoy the quotidian certainty of knowing just how long it will take to arrive at our destination, according to Google Maps, updated continuously. But, in an arresting turn of phrase, Zuboff also raises the possibility of the tech monoliths offering the “substitution of certainty for society.” This is a business project, in that Google and its ilk are after ever greater certainty in the behavioral prediction of consumer choice, but it is more than that. The goal is to make the predictions of a select few corporations, with proprietary ownership of their algorithms and access to our personal information, the anchor of a new social order. Behavioral prediction, Zuboff argues, may one day drive other markers of identity—whether race, gender, sex, or something else—from the social field. The more that lives are lived online under conditions circumscribed by internet corporations, the more the content they provide, as well as the loyalties and affiliations they make possible, may come to shape individual and group identities, precisely because their marketing potential is more exploitable. Our induced proclivities for online likes, purchases, and content may become more fundamental to our senses of self than anything else.

Say what you will about neoliberalism. However much in the end it proved to be a sham for so many people, at least the spirit of Clinton’s third-way globalization, as sold, was triumphant and optimistic. Superseding it, at the beginning of the twenty-first century, has been a new emotional tone, broadly expressive of fear. It was fear of alienation, as neoliberal ideology sheared the social fabric and placed new burdens upon the self to auto-fashion identity, that led people to search for connection online. It was fear, in the wake of September 11, that led many Americans to swiftly accept the sudden online elaboration of the national security establishment. It was fear about their economic prospects, given sagging median incomes, that led so many US households to go into mortgage debt across the 2000s, in hopes of accumulating wealth through asset price appreciation—a project led by Wall Street banks’ risk management departments, whose apparatuses of data collection and prediction bear uncanny resemblances to the project Zuboff describes at Google. It was uncertainty, if not fear, that in a rapidly globalizing economy led so many companies to line up for Google’s online auctions of its marketing prediction products. It was fear that paradoxically made Obama’s 2008 promise of “hope” such a powerful slogan.

No one has been immune from this complex of emotions—not even its beneficiaries. Given the uncertain and fickle nature of global finance capital—probably the leading agent of all of this social disruption—it was, after all, fear that led Google to buck the mantra of neoliberal corporate governance, which states that the financial interests of shareholders always come first. In its 2004 public offering, Google sold the public a large swath of stock with diminished voting rights, so that a close-knit group of founders and executives could maintain total control of the company, setting another trend that has since been embraced by most of the major tech companies to emerge, including Facebook.

That same compensatory desire for certainty may well lead many individuals to embrace the biometric and emotional interventions that Zuboff says are coming down the pike. How do you feel? Who do you want to date? Not sure? Facebook Dating now promises “an authentic look at who someone is.”

Was Zuboff herself writing out of fear, a fear not of flux but rather of the threats she believes to be posed by Google and Facebook’s new and certain grip on power? A number of critics have accused her of alarmism, arguing that The Age of Surveillance Capitalism is over the top. But the book’s fearful notes are inseparable from the moment it tries to make sense of. If her book doesn’t offer a cure, its diagnosis of the situation is itself a telling symptom of it.

At the heart of the new age are novel configurations of fear, certainty, and power. Obviously, we are no longer swimming in a late twentieth-century postmodern sea of indeterminacy, however playful or distressing, in which power is so capillary that it may be difficult to find and trace. In The Age of Surveillance Capitalism, power is easy to locate. It resides inside particular institutions, namely large corporations. The age of neoliberalism, Zuboff’s argument suggests, is behind us—one wonders if it was ever more than a glimmer in a few theorists’ eyes.

There are possible pushbacks against this framing of the novelty of the twenty-first century. One has to wonder whether Google’s brand of capitalism really is so new and different as Zuboff suggests, giving rise to an “unprecedented” species of power that is as “startling” as it is “incomprehensible.” And is this development really as bad, as evil even, as Zuboff claims it to be? In its drive for behavioral prediction and modification, does surveillance capitalism amount to an existential threat to “free will”?

There are good reasons to stress continuities as much as discontinuities. So far, what most of the attempts at behavioral prediction and modification have amounted to is getting people to buy stuff. The big reveal: Google and Facebook are marketing companies. That is how they make money. What is more extraordinary is how much the two companies have thus far dominated their markets. Zuboff reports that between 2012 and 2016 Google and Facebook together accounted for 90 percent of the growth in global advertising expenditures. But there is nothing much “unprecedented” about advertising. If the internet companies persuade individuals to buy goods, and those goods are sold at above their cost of production (including the cost of advertising), then profits are made. Those profits then become Google and Facebook’s future advertising revenues. From a business perspective this isn’t remarkable at all. It is hard to make the case that a more personalized form of advertising marks a new age of capitalism.

The same may go for fears of unprecedented consumer manipulation. The Age of Surveillance Capitalism can join on the shelf books of an older, postwar vintage, like Vance Packard’s The Hidden Persuaders (1957), which exposed pop-Freudian ad men determined to sell cigarettes by manipulating customers’ oral fixations. Zuboff worries that, given the turn to behavioral modification, B. F. Skinner’s nightmarish vision of a society of Pavlovian dogs may actually come true, and that we may all end up unfree automatons, bereft of free will, because surveillance capitalists will have successfully transformed “the natural obscurity of human desire into scientific fact.” But aren’t Google and Facebook’s claims in this arena only another way for the companies to brandish their proprietary algorithms, in order to sell ever more expensive predictive “assets”?

Regardless, Zuboff’s picture of freedom—where each of us autonomously and reflectively arrives at decisions, before activating a separate machinery, called “free will,” to trigger our actions and transparently bring about our own distinct personal “futures,” for which we can then feel completely responsible—is rather quaint. It strikes this reader as false, not so much when confronted by the evidence of Google’s “panvasive” intrigues as by the ordinary affairs of human existence. We are free so long as we can look back and intelligibly recollect our actions as our own in some meaningful way, metaphysically speaking. Everyone will have to make up their own mind about whether their past consumer purchases still count as their own, knowing that Google, having scanned their email, thought there was a good chance they might want to buy what they ended up buying, and sold its advertising space accordingly. Different answers are possible.

The urgent questions are political, not metaphysical, but that hardly diminishes their significance. Is it a good idea for a community to allow Google to harvest personal information from young children viewing clips on YouTube in order to better target them for advertisements? Clearly not. (The Federal Trade Commission recently slapped Google with a $170 million fine for this practice, a violation of the 1998 Children’s Online Privacy Protection Act; the sum is almost a rounding error in the corporation’s accounts.) That is, or at least should be, an easy case, but others are more difficult and are likely to be among the more vexing political questions of the decade to come. For should the systems of surveillance and prediction perfected by Google and Facebook continue migrating to domains other than consumer advertising—health care, criminal surveillance, and political communication, to name only a few—then important freedoms surely are at stake.

The days of internet corporate “self-regulation” are now over. After the 2016 presidential election of Donald Trump, Facebook came under heavy fire from liberals for poisoning democratic politics. In 2020, it was Trump’s turn to criticize the power of the great tech companies after losing an election. Before he left office, the Federal Trade Commission filed a potentially watershed antitrust suit against Facebook for buying up companies, especially Instagram and WhatsApp, in order to eliminate competition in social media. We will have to see how aggressive the Biden Administration’s antitrust enforcement will be; as we wait, it is telling that Twitter CEO Jack Dorsey, and no one else, decides whether the former president has access to his favorite medium of political communication. Meanwhile, the internet, even if controlled by a few corporations, has so far been an open-access network; soon the Chinese state will unveil what it calls an alternative “New IP” (internet protocol), a top-down, state-directed, and closed network infrastructure. That may prove attractive to countries like Russia, which has passed a “sovereign internet” law to justify blocking its citizens’ access to particular websites. States, liberal and illiberal, will have heavy hands in determining what comes next, and there already appear to be at least two feasible alternatives: a more competitive network, inspired by the American antitrust reflex, but still corporate and commercial; and a more consolidated, top-down version that is state controlled.

Nonetheless, in the midst of what now appears to be a looming political showdown between the largest internet corporations, whose wealth and power have only aggrandized during the pandemic, and emboldened states, what should not be lost sight of is the festering social problem at the core of Zuboff’s book. The internet has hardly become an idealized space for rewarding social connection. Often, it has become the inflamer, if not instigator, of all manner of bad social psychological dependencies and pathologies. Zuboff has a chapter on them, called “Inside the Hive,” and it is harrowing. She cites social psychological studies (some paid for by Facebook) that reveal that there are now people who feel that they “do not exist” whenever they are not participating on social media platforms. “I felt so lonely . . . I could not sleep without sharing or connecting to others,” reports one Chinese girl. “Maybe it is unhealthy that I can’t be without knowing what people are saying and feeling, where they are, and what’s happening,” laments a Slovakian university student. Facebook now has 2.8 billion monthly active users. For at least some of them, the prospect of offline loneliness drives them to the platform. Their cravings create new supply routes to fresh stocks of “behavioral surplus” for Facebook to sell.

Rather than fuss over “free will” and “privacy,” the more productive path forward may be to imagine ways in which the social contract might be redrawn, offline but also online, now that 4.66 billion people are connected to the internet. As the great internet corporations have become fundamental to the way we live our lives, they may accurately be described not only as “social” but as belonging to the “public.” They provide public goods, and should be treated, regulated, and even owned as such; a “public utility” model is another way forward. For a work by a Harvard Business School professor, Zuboff’s reckoning with Google represents progress. If one thing is certain in this account, it is that the neoliberal social contract of the late twentieth century was a failure, only made worse so far by the internet corporations’ attempts to exploit that failure for their own gain.
