It’s unclear when the techlash—the sleek term for our growing discontent and hostility towards the technology industry—began. There certainly was a time when the world admired the giants of the technology industry, particularly Google—and then Facebook—as stratospherically profitable, societally benevolent forces, providing us the future for free. The world’s information at our fingertips, made ‘universally accessible and useful’, said Google. The world ‘more open and connected’, said Facebook. But the tide has turned: more and more people are suspicious, wary, and resentful of the cost of what we’re getting for ‘free’. In different places, for different reasons, people don’t trust big tech.

In the Bay Area, the epicenter of techno-optimism and American political liberalism, a mass disillusionment about the promises of technology likely started at the end of 2016. That was the year when, in the wake of a thoroughly disorienting presidential election, the American public began to reckon with the twinned specters of ‘fake news’ and online extremism, both of which many technology companies had helped facilitate. The backlash against these companies was swift—and bolstered, too, by a long history of concerns: privacy intrusions, monopolistic practices, and algorithmic bias, to name just a few.

In Europe—further removed from blind optimism about technology, as well as from its economic benefits—perhaps we could say that the techlash began even earlier, when the European Union carefully architected the General Data Protection Regulation (GDPR). Against concerted lobbying from technology firms, the EU patiently insisted that industry practices needed to change, adopting the GDPR in 2016. The regulation’s comprehensive, technocratic details address a set of overlapping concerns: the privacy of EU citizens, their rights to the data collected about them, the anticompetitive practices of tech giants, and the tremendous risk of data breaches.

Three years later, in 2019, the techlash is firmly entrenched. And, three years in, we now have Shoshana Zuboff’s new book, The Age of Surveillance Capitalism, to articulate what exactly feels so wrong about the way the technology industry influences our lives. The book’s subtitle, The Fight for a Human Future at the New Frontier of Power, suggests a particularly verbose and forceful text. And indeed it is: across 691 pages (over a hundred of them, blessedly, are ‘only’ footnotes), Zuboff articulates her theory of surveillance capitalism as the successor to industrial capitalism. She painstakingly maps out the similarities: where industrial capitalism exploited (and ultimately ravaged) nature for raw materials, surveillance capitalism exploits human nature—using our behaviors (both online and, increasingly, offline) as the raw material to power comprehensive ad-targeting operations. If well-targeted ads don’t sound particularly sinister, Zuboff provides a broader, suitably distressing framing: in order to target us most effectively, surveillance capitalists will consume ever more information about us, and seek not only to influence our behaviors but to shape them. The result, she believes, is increasing intrusions into our autonomy, rendering us instruments of capital instead of autonomous individuals.

Zuboff’s ambition is clearly for this to be an enduring book. And it arrives at a time when the charges levied against Big Tech are wide-ranging and disheartening. Google’s search engine has trapped us in filter bubbles; Google’s business practices have suppressed competitors and siphoned away ad revenue from a struggling news and media industry. Facebook’s products, deliberately engineered for constant usage, have been accused of promoting fake news in the U.S. and of inciting ethnic violence in Myanmar, and are, at the very least, destructive to our self-esteem and mental health. Amazon’s relentless push for profits has led to inhumane labor conditions in its warehouses and a worrying carbon footprint. Twitter is increasingly a platform for harassing the most vulnerable voices: women, people of color, and other marginalized demographics. And so on.

Against this vast landscape of concerns, Zuboff focuses on two particular villains: Google and Facebook. Google, she argues, was ‘the pioneer of surveillance capitalism in thought and practice’, as General Motors was for managerial capitalism in the 1900s. And where Google’s behavioral surplus comes from its monopoly on search, Facebook’s comes from its longstanding monopoly on the social graph—who you are and who you interact with. Zuboff examines how these firms surveil us, collect data about us, and—she argues—produce a system of control that compromises our autonomy and dignity.

Much has been written already about surveillance and privacy. But Zuboff, a social psychologist who focuses on the history of work, capitalism, and the rise of information technologies, brings a new perspective. She argues that the long history of privacy missteps from Google, Facebook, and others is not a series of aberrations, but is necessitated by the surveillance capitalist business model. ‘Transparency and privacy represent friction for surveillance capitalists’, Zuboff writes, ‘in much the same way that improving working conditions, rejecting child labor, or shortening the working day represented friction for the early industrial capitalists…[these] problems cannot be understood as excesses, mistakes, oversights, or lapses of judgment. They are necessitated by the reigning logic of accumulation and its relentless economic imperatives.’


Over the course of the book, she unspools a comprehensive lexicon of terms and metaphors to bolster her thesis. Her most elegant metaphor, concerning how data collection operates online, is that of the two ‘texts’ produced under surveillance capitalism. The public text is what we celebrate about the internet: the tremendous, vast wealth of writing, art, conversation, imagery—all the intellectual and creative output produced by human society in the information age. Everything you post and publish online belongs to this public text. This text, notably, is often invoked to justify the excesses of surveillance capitalist firms: if we urge them to change how they operate, the argument goes, we endanger the innovative spirit that enables the production of this first text. Many surveillance capitalist firms, after all, are offering us the tools and platforms to put our work out there for free.

Zuboff, however, believes that the public text’s primary purpose is to serve as raw material for a second, shadow text—‘a burgeoning accumulation of behavioral surplus and its analyses, [which] says more about us than we can know about ourselves.’ The behavioral surplus here includes every link we linger on from a Google search, every post we like on Facebook, and even our behaviors further afield (Google and Facebook both having the capacity, through their advertising platforms, to track us across nearly every other site we visit).

Over the following chapters, Zuboff expands on the role of the shadow text: how it’s used to model our existing behaviors and to provide insight into how we can be nudged and shaped—by surveillance capitalists, naturally. The great inefficiency of advertising is that not everyone who sees an ad will change their behavior. Say you’re shown a product: perhaps it’s too expensive for you; it solves a problem you don’t have; you’re not the right demographic. Advertisers can only hope to influence behavior. But surveillance capitalists pursue a more ambitious goal: ‘they nudge, tune, herd, manipulate, and modify behavior in specific directions by executing actions as subtle as inserting a specific phrase into your Facebook news feed, timing the appearance of a BUY button on your phone, or’—as Zuboff turns to the dystopic—‘shutting down your car engine when an insurance payment is late.’

This elegant metaphor of the two texts appears alongside other, more curious ones. When Zuboff argues that surveillance capitalism, like capitalism itself, is hostile in its uncontrolled form, she whimsically summarizes Thomas Piketty’s work on capitalism and wealth inequality as saying: ‘Capitalism should not be eaten raw. Capitalism, like sausage, is meant to be cooked by a democratic society and its institutions because raw capitalism is antisocial.’ Zuboff’s subsequent critique makes the description ‘antisocial’ begin to look charitable. She takes apart the lexicon of terminology endemic to the technology industry and, in doing so, tugs us out of a disoriented, placated passivity in the face of surveillance capitalists’ operations.

Of privacy policies, which frequently mask incursions into our privacy through dense legalese, she says crisply: ‘[they] are more aptly referred to as surveillance policies, which is what I suggest we call them’. Personalization, the great pursuit of every technology product with an algorithm mediating your access to information, is a ‘Trojan horse in which the determination to render and monetize your life is secreted under the veil of “assistance”’.

Zuboff is acerbic and perceptive when she takes on the propaganda of ‘big data’ discourse, and her analysis here is particularly timely. The technology industry is presently infatuated with machine learning and artificial intelligence, which often require vast tracts of data to feed into learning and prediction algorithms. Zuboff describes how surveillance capitalist firms lean on the euphemism ‘data exhaust’ when describing their behavioral surplus data, implying that this ‘exhaust’ is necessarily insignificant, valuable to nobody. Who could contest a firm industriously sweeping up this exhaust to make frugal use of it? Well, Zuboff does—especially since this data exhaust comprises the most valuable information that surveillance capitalists have about us. Similarly, ‘dark data’, which describes the vast territory of behaviors that aren’t yet trackable—who you talk to offline, the restaurants you visit, your insulin levels—suggests that this untamed darkness, this terra incognita, must be brought to order, installed inside the shadow text for surveillance capitalists to use. It’s not a reach for Zuboff to compare this process of dispossession to the ‘conquest pattern’ of European exploration and imperialism from the 1400s onwards.

It’s powerful to revisit these terms and begin to understand the norms they have created. The fact that our behaviors are construed as ‘data’, that our data becomes the raw material for surveillance capitalists and their operations, is an intentionally produced reality. When Zuboff recontextualizes the words we use, she is drawing back the curtain and letting us see who constructed our reality—and to what ends. This matters deeply to us as ‘users’ of these products, as members of a public increasingly exploited and fragmented by information technologies. But it matters just as much to those of us who, as technologists, contribute to the apparatus of surveillance capitalism. Zuboff quotes a luminary of early artificial intelligence research, Joseph Weizenbaum, who, noting the field’s fondness for euphemism, says ‘We anesthetize our ability to become conscious of its end use…one can’t escape this state without asking, again and again: “What do I actually do? What is the final application and use of the products of my work?” and ultimately, “Am I content or ashamed to have contributed to this use?”’


The Age of Surveillance Capitalism is densely populated with stories that might invite shame from technologists, many of them under-reported at the time and now largely unremembered. Google Maps is now a fixture of our technological landscape. I was eleven when it first launched, in 2005. It was expanded in 2007 with a feature called ‘Street View’, which displayed street-level imagery of cities. How was this accomplished? By roving vehicles with cameras set up to photograph every street, and every house and business on that street. In the following years, more and more cities were mapped, to greater and greater dismay. Despite public opposition in the U.S., Germany, and Japan, a Google vice president, John Hanke, insisted that debate about the feature had mostly ‘died off in the West’—a convenient way to suggest that opposition to Street View was somehow, vaguely, anti-democratic and regressive. A few months after Hanke’s statements, residents of one English village, Broughton, were so alarmed by the feature that they formed a human wall to stop a Street View car that sought to map their streets.

This was 2009. I was fifteen. I, and others, have grown up in an era where all this is forgotten—it seems normal that Google has photographically captured so many cars and pedestrians (license plates and faces blurred, but potentially still identifiable) around the world. The legal challenges Street View faced, in multiple international jurisdictions, were ultimately toothless; the alarm that residents of Broughton acted on, in collectively blocking the Street View car, has faded from our collective memory. In an industry that is often ahistorical, where tech giants are eager to move on from their indiscretions, Zuboff restores these stories to our collective consciousness, reminding us that surveillance capitalists have an extensive history of privacy violations. Each incursion we experience has had a precedent.

Alongside these grim stories are ones that remind us of what could have been: a gentler future, one kinder to our need for sanctuary and solitude from surveillance. Zuboff shares a research project from the early 2000s, exploring an ‘Aware Home’ that presaged today’s ‘smart home’ landscape of always-listening Google Home and Amazon Alexa devices. The Aware Home, however, was structured around three principles: the people in the home, not a corporation, would decide what might be collected as data; the data would be used to enrich their lives, not a corporation’s coffers; and the people would ultimately decide how the data would be shared and put to use. The Aware Home represents a closed loop, where machine learning and artificial intelligence would be scoped to promote the welfare of one particular household.

The contrast between this vision and our present reality is enormous. If you bring an Alexa device home, its sensors scoop up every bit of speech to send back to Amazon’s servers. All this collected speech is used to improve Alexa, yes, but it is likely—and given the extraction imperative, almost certainly the case—that this data will be used, sooner or later, to extrapolate something about you that you’d rather Amazon not know. The information might include everyday conversations that Alexa overheard in your home, or requests that reveal personal details and preferences. A man who requested his Alexa history and instead received someone else’s was able to identify the other person through recorded mentions of the person’s partner, friends, and local weather. Bringing this speech data outside the home makes those inside it deeply vulnerable to violations of privacy. Almost two decades after the Aware Home project, this state of affairs is the norm.

We are all, today, dispossessed of our data. Corporations have normalized the treatment of our experiences as their data, and that data provides tremendous value to their balance sheets. There are tentative signs of an alternate model, one that hews closer to the Aware Home: Apple, a giant of the technology industry, has staked out an approach to machine learning and artificial intelligence that diverges from industry norms, promoting ‘differential privacy’ techniques that allow it to collect usage data from its devices in a truly private way. The approach is promising. It’s also rare.
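To give a flavor of how such techniques work, here is a minimal sketch of ‘randomized response’, one of the oldest mechanisms in the differential-privacy family; it illustrates the general idea and is not a description of Apple’s actual system. Each device perturbs its own answer with random noise before reporting it, so no single report can be taken at face value, yet the population-level statistic can still be recovered:

```python
import random

def randomized_response(truth: bool, p: float = 0.5) -> bool:
    """With probability p, report the truth; otherwise report a fair coin flip.
    Any single report is deniable: it may reflect the coin, not the user."""
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_rate(reports: list[bool], p: float = 0.5) -> float:
    """Recover the population rate from noisy reports.
    E[report] = p * true_rate + (1 - p) * 0.5, so invert that relation."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

# Hypothetical example: 100,000 devices, 30% of users have some sensitive
# attribute. The aggregator learns the rate without learning any user's truth.
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(f"estimated rate: {estimate_rate(reports):.3f}")  # ≈ 0.30
```

In spirit, this is the Aware Home’s closed loop applied at scale: a useful aggregate signal leaves the devices, while each individual’s truth stays deniable.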

The Aware Home is a powerful example because surveillance capitalists push a narrative of technological inevitabilism, arguing that technology had to develop and proceed in the form we experience today. Mark Fisher, a British writer and critic, suggested that our age is characterized by ‘capitalist realism’—a widespread belief that capitalism is the only sustainable economic system, to the point where imagining an alternative becomes impossible. Surveillance capitalists, then, have instilled a ‘technocapitalist realism’ by insisting there was only one way our technological capacities could have developed and matured. Zuboff’s stories show the potential for alternate futures. On several occasions, regulators could have asserted that Google was stepping out of bounds. Lawmakers could have written stronger safeguards for their citizens. Technologists could have articulated a different vision for how technology would interact with humanity. Instead, we are ‘tracked, parsed, mined, and modified’. It’s easy to feel that this is the nature of the twenty-first century, the inevitable bargain of technological advancement. But Zuboff exhorts us not to mistake the current reality for the only one: ‘Surveillance capitalists want us to think that their practices are inevitable expressions of the technologies they employ.’

So why didn’t we end up in the future of the Aware Home? Because of the immense business imperative—what Zuboff defines as the extraction imperative—to instead give us smart devices that surveil us from our living rooms and extract data from our experiences. Zuboff is at her most incisive when she criticizes contemporary technology practices by laying out their economic logic. It’s this viewpoint that makes The Age of Surveillance Capitalism a profoundly fascinating contribution to the literature criticizing big tech. Zuboff is much bolder in many of her arguments than most mainstream critics (although, perhaps, less specific than Jaron Lanier, and still moderate in comparison to Evgeny Morozov’s deeply rooted distrust), and it’s wonderful to see her line up a number of tired arguments against these tech firms and meticulously knock them down, one by one, in uncompromising prose.

One of the oldest criticisms of the ‘free’ offerings of firms like Google and Facebook, even before the present backlash against the technology industry, was packaged up neatly in the phrase: If you’re not paying for the product, you are the product. Zuboff disagrees, persuasively. ‘Surveillance capitalism’s products and services’, she writes, ‘are not the objects of a value exchange…We are not surveillance capitalism’s “customers”…We are the sources of surveillance capitalism’s crucial surplus: the objects of a technologically advanced and increasingly inescapable raw-material-extraction operation. Surveillance capitalism’s actual customers are the enterprises that trade in its markets for future behavior.’ We become, then, something less than a proffered good—it’s our data that these firms are interested in, not us, in what Zuboff calls the ‘radical indifference’ with which technology firms view their users.

Zuboff also evaluates (and dismisses) a popular solution to the woes of surveillance capitalism. Where some have argued that we, the sources of surveillance capitalism’s surplus, deserve a cut of the profits too, Zuboff rejects this: ‘It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply. That critique is a feat of misdirection that would use a pricing mechanism to institutionalize and therefore legitimate the extraction of human behavior for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioral data for the sake of others' improved control of us.’


It seems unbelievable that, after almost 700 pages, anyone could come away wishing for more argumentation. But in her enthusiasm for articulating a comprehensive model of surveillance capitalism, Zuboff omits certain discussions of its ills and of possible alternatives.

Zuboff is eager to declare surveillance capitalism a wholly new model, separating it from industrial capitalism by arguing that, just as industrial capitalism appropriated nature for its profits, surveillance capitalism appropriates human nature and experience. It all feels a bit too tidy; it suggests that surveillance capitalism has moved beyond the archaic practice of exploiting the earth’s resources to the airy, refined domain of the ‘cloud’. But surveillance capitalist services run on servers, housed in physical data centers dotting the world and gulping up vast quantities of energy. Researchers have borrowed the concept of life-cycle assessment to interrogate the environmental impact of artificial intelligence, and found that training a single large machine learning model—which, to give one application, might take in the accumulated shadow text of thousands of people to produce a model for recommending products to them—can emit as much carbon as five cars do over their lifetimes. In this sense, surveillance capitalism is not a total break from the old economic model. Surveillance capitalists are still, like the earliest industrial capitalists of England, relying on pollution to make profit.

Zuboff also prods, here and there, at the ownership model of surveillance capitalist firms. She notes that Google is largely controlled by two men, and Facebook by one, who ‘do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance’. While Zuboff characterizes the ownership model of these firms as exceptional, and exceptionally compatible with surveillance capitalist practices, she refrains from asking whether a different ownership model might produce something beyond an extraction imperative—a more humane model, perhaps. In response to criticism of the ‘sharing economy’ and ‘gig economy’, Trebor Scholz articulated the model of platform cooperativism, where the platform—a ride-sharing service like Lyft or Uber, for example—might be collectively owned by its workers, and thus incentivized to provide a more humane working experience. A similar ethos inspired Mastodon, a social network that mimics the product features of Twitter and Facebook on software that is intentionally open-source, decentralized, and community-run. Here, the structure of the software permits Mastodon to escape the extraction imperative—there is no centralized organization positioned to assemble your shadow text and convert it into ad revenue. These solutions to our technological woes are very small right now—perhaps too small, and too struggling, to counterbalance the great power of surveillance capitalists. But they are something.

That said, Zuboff’s book is still comprehensive (almost excessively so), and her expansive criticism of surveillance capitalism is paired with a keen empathy for those affected. She turns, towards the end, to a familiar topic: how Facebook, and similar technological products, affect children and teenagers. This topic is well-trafficked, exhaustingly so, but Zuboff’s handling of it feels unexpectedly tender and insightful. Zuboff is not a scold, and she does not suggest that children can just unplug—‘just’ mediate their relationship with monopolistic, highly prevalent technologies—and thereby avoid the harmful impacts. She recognizes that social media intervenes in the lives of adolescents during a delicate moment of identity formation, when they must establish a distinct sense of self: an individual independent of the networked hive.

Zuboff is, here and throughout, fiercely defensive of our ability to be individuals: individuals whose identities are not reflected in a pervasive social mirror, whose lives are not mediated through behavioral-alteration platforms, whose actions are not controlled by others. She is alarmed, passionately so, about the psychological and emotional harm we experience on these platforms. ‘I try to alert [young people] to the historically contingent nature of “the thing that has us” by calling attention to ordinary values and expectations before surveillance capitalism began its campaign of psychic numbing.’ It is not normal, she tells them, to have to hide in your own life. We install browser plugins to block ads; we make jokes-that-aren’t-jokes about who might be scrutinizing our browsing behavior; we worry and weigh the costs of ‘quitting social media’, wondering whether our lives will be richer or poorer if we forgo the services that surveil us.

Zuboff reminds us that our current state is not technologically inevitable. She urges us to contest it, to demand something better for ourselves. ‘Cynicism,’ she writes, ‘can blind us to the enduring fact that democracy remains our only channel for reformation.’ A reformation, of course, requires a societal emergence from stupor, and a mass commitment to a different technological future. It’s likely that Zuboff’s book, and the ideas she has introduced and elevated, will help us get there.