For a young writer who hopes to produce literature, the greatest difference between now and twenty years ago may be that now she expects to get paid. Twenty years ago, art and commerce appeared to be opposing forces. The more you were paid for your work, the more likely you were to be a hack.
The term of art was “sellout.” Any artist who tried to make money would end up unable to make art. Record producer and guitarist Steve Albini outlined the story of the sellout in the Baffler in 1993. A sympathetic scout would persuade a band to sign a letter of intent, and from that moment forward the terms of the deal would become the most important factor in their work. An incompetent producer would make their songs sound “punchy” and “warm.” (“I want to find the guy who invented compression and tear his liver out,” Albini wrote.) Worse, the band wouldn’t even make money. Their manager, producer, agent, lawyer, and above all label would turn a profit, but the members would probably end up in debt.
For Albini, the only solution was to exist at the edges of the system, living for your art and only occasionally interacting with the corporate beast—by working a job as a copy editor or graphic designer at a major corporate entity, or producing, grouchily but tenaciously, Nirvana’s second and final major-label album, In Utero, as Albini did in 1993. But you kept these things separate. There was hackwork, and there was artwork, and everyone knew the difference.
Albini’s story of the major-label artist ended, “Some of your friends are probably already this fucked.” Fucking over the record companies was an idea so fanciful it wasn’t worth raising. Five years later, two college-aged hackers founded Napster. Within a decade, the record labels Albini despised were shells of their former selves, there were no corporate crumbs to pick up anymore, and bands were lining up to try out for the ad agencies that booked the car commercials that still ran, despite the coming disruption to that industry, on television (and YouTube, and streaming online ads, and the mini-TVs in the backs of New York taxis).
The changes in other cultural realms were slower to arrive and are still ongoing, but you could trace them in the increasing visibility of the unpaid internship. This was a practice that began in government and finance, was taken up by colleges, and finally was adopted wholesale by a grateful culture industry; by the mid-aughts, interns had become the butt of jokes in popular culture. “Don’t point that gun at him,” Bill Murray’s eccentric oceanographer says to an angry pirate in Wes Anderson’s The Life Aquatic with Steve Zissou (2004), “he’s an unpaid intern.” Anderson’s film was among the first to portray interns as so stupid they didn’t deserve to be paid. These interns wore T-shirts that said intern, prepared cocktails, fell down stairs. The slacker temp had become the eager serf, and with good reason. In film, publishing, and other creative industries, volunteering for a profitable corporation had become a necessary step to getting paid. But only HR departments and colleges handing out credits thought it was anything but demeaning. When Kanye rapped “Maybe you could be my intern” to rival MCs on Late Registration (2005), no one mistook it for a compliment.
By the mid-aughts, a day job was no longer an inconvenience but an aspiration, and attitudes toward it changed. The work writers could get at corporations—as listings editors or fact-checkers—may have remained secondary to artwork in their minds, but that work, so much less reliably available than before, demanded a new level of effort to find and to keep. Not only one’s position but one’s entire department could, without much warning, disappear.
These writers and copy editors were among the many who, faced with limited resources and their own cultural omnivorousness, came home each night eager to download MP3s, PDFs, and other digital copies of artworks and research they would otherwise be unable to access. Around the reality of these thefts a powerful ideological movement emerged, taking as its inspiration not just facts on the ground but also the libertarian, antigovernment, “hacker” spirit of the earliest personal computing and internet communities. The apostles of the Free Culture movement, as it came to be called, argued that stealing digital content was a progressive politics and should be brought into the open. Some of these apostles were hucksters and profiteers, others were merely hypocrites (who preached the virtues of free from their perches as well-paid magazine editors or college or law school professors), but still others, like the freeware hacker Aaron Swartz, were true believers. Congress had allowed copyright protections to be rewritten by huge corporations (most notably Disney) to become a parody of a law. If what was being illegally downloaded was some of the best that had been thought or said by human beings, and the downloaders were people who couldn’t afford the purchase price of the books or movies (some of which were expensive)—wasn’t that a good thing?
Free Culture ideology appeared to be approaching mainstream consensus when the 2008 recession made users feel, both rightly and perversely, that culture-producing corporations were fragile. In book publishing that year, hundreds of midcareer editors, writers, publicists, and other industry workers were pushed out. In the first week of December alone, the Observer reported a “massive reorganization” with layoffs to follow at Random House; a reorganization and layoffs at Macmillan; layoffs at Simon & Schuster; and an acquisitions freeze and layoffs at Houghton Mifflin. Some of these people eventually found new publishing jobs, but the industry had contracted. Many were the twentysomethings who had sold out in the Nineties and now, a decade later, ran up against the possibility that they no longer had anything to sell.
What could a no-longer-young person do in this situation? Many turned to the digital platforms that, even before the recession, had been putting magazines and newspapers out of business. So it came to happen that postrecession digital startups were helmed not only by young people and risk takers but also by out-of-work publishing veterans.
OR Books, founded by longtime independent publisher John Oakes and former Scribner senior editor Colin Robinson, was a perfect example of publishing veterans using reduced online costs to modify industry standards. Rather than investing in large print runs and taking a loss on returned copies, the company would sell only ebooks and print-on-demand editions. Old hands rather than visionaries, Oakes and Robinson presented this cost-saving model matter-of-factly. For them the project was simply the prospect of “high efficiency, and minimal, or nonexistent, returns,” as Oakes wrote in Publishers Weekly.
That was book publishing, where the old model served as a blueprint for the new one. For magazine publishing, both old and new, not to mention the old newspapers, something closer to a visionary approach was needed. One of the most terrifying artifacts of the recession was a razor-thin New Yorker so low on ads that it was rumored the magazine would soon decrease its frequency. (A few years later, Apple rolled out an ad showing that its new iPad was as thin as a pencil; for Apple, thin was beautiful, for the New Yorker, not so much.) Not only had print ad sales collapsed with the rise of online listings, and again with the recession, but online ad sales had proved insufficient; even mass-circulation magazines and newspapers needed to move away from ads and double down on subscriptions. The Wall Street Journal, the New York Times—one by one, the major newspapers began demanding subscriptions from their online readers. For new online magazines that wanted to generate income and not just hits, subscriptions were the only option.
Out of this necessity, conventional magazine journalism came to be marketed as an endangered art form. Nowhere was this more evident than in talk about the influential online aggregators Longreads and Longform. As nearly every article about Longreads’ founders said, they were “passionate about longform storytelling”—in other words, commercial journalism had become a passion project. Its producers, mostly old-fashioned magazines like GQ, eagerly took to this as well, tweeting their #longform and #longreads, and on every front advancing the idea that their writers were artists, in need of public support. Of course, there was a catch: in order to be selected as a “longread,” the work had to be available online for free. Eventually, Longreads launched a $3 monthly membership, which would not go to editors and writers but would “contribute to our editorial budget, which goes toward finding and sharing outstanding storytelling from around the world.”
Journalists became artists, in need of subsidy, which would come in the form of subscriptions. And meanwhile a “longread” could come from Vanity Fair, where it would fetch $20,000, or from n+1, where it would fetch more like $300. These disparities, which had once been par for the course (Vanity Fair was a profitable corporate entity whose editor in chief made more than a million dollars a year; n+1 was a money-losing operation whose editors lived primarily with roommates), began to seem suspicious, especially once the big corporate magazines started producing more online content, and paying online rates for it. The Awl, a shoestring operation, might pay $150 for an online piece, but so would well-financed xoJane and Vice. Other magazines that fit the same broad category might pay nothing at all. And these, by and large, were the publications that employed young freelance writers who entered the market after the recession.
Freelance writers rightly began to demand more transparency from these publications. The most notable effort has been the blog Who Pays Writers (the source of some of the above figures), where writers anonymously submit pay rates for magazines they’ve worked with. Its founders went on to start the online magazine Scratch, “about the relationship between writing, money, and life,” which modeled itself as an ethical startup, openly sharing the terms and outcomes of its profit-sharing contracts with writers.
For little magazines (like ours), these conversations were painful, for the critics had homed in on a particular problem. The little magazine always originates as an image of utopia that it then betrays. It starts with love but very little money, and because it is edited for free (mostly), it gets writing for free (mostly) in a nonexploitative way, since no one is extracting any surplus value. This is the utopian stage, where writing as a competitive enterprise, as a sphere rife with greed and envy, disappears. It is replaced by a pure and purely unnecessary (in the sense of not being directly useful to the reproduction of biological life and material needs) contemplation of essential, fundamental problems—that is to say, it becomes art. But then, almost immediately, the little magazine becomes a way to “graduate” to the world of hackery—for its editors and writers to become journalists, novelists, overpaid business school speakers—and in this way can serve more as an instrument than an opponent of the hack world.
And so, strangely enough, it was smaller publications that seemed most vulnerable to the shaming critique produced by Who Pays Writers. Not only the publications but the writers, too, had to be shamed, as when full-time freelancer Yasmin Nair, in a controversial blog post, called academics and others with steady jobs who wrote for small fees “scabs.” Both the people who gave and the people who accepted unpaid internships at these publications, further perpetuating their existence, would have to be shamed as well. As someone wrote to n+1 about its (unpaid) internship program, “It’s typical that you would advertise an unpaid internship. You should be aware that this is no longer done.”
If this was not yet a movement, it was definitely a mood—antifree—and it was fighting a more difficult battle than the proponents of free had. The Free movement had a few professorial spokespeople and millions of adherents; antifree was a small group of interested artisans speaking up for the dignity of being gainfully employed. As antifree grew beyond the small world of left-wing blogs, it attracted 25-year-olds who objected to being paid $50 by a corporate website that presumed them lucky to get the experience. It attracted veteran journalists who balked at being asked to write for a large, profitable magazine’s website for chump change. And it attracted unpaid interns who, at profitable media corporations ranging from Condé Nast to Gawker, actually filed suit for violations of labor laws. These were individual stories, but they added up. The entities that had once supported journalists and writers were now doing their best not to pay them for the simplest of reasons: they could get away with it.
One of the first books to come out of the antifree movement was Ross Perlin’s Intern Nation; most recently, Astra Taylor’s The People’s Platform patiently explains how the internet has failed to upend business as usual, so that today’s large corporations, even if superficially different from the large corporations of yesteryear, still control and above all profit from what we see, read, and hear. Taylor, an independent documentary filmmaker with close ties to the independent music scene, is particularly critical of the free philosophers, who ultimately have only helped corporations to shore up their bottom lines.
The argument between free and antifree may be framed in many ways; one would be as an argument between the American scholar Lewis Hyde and the French Marxist sociologist Pierre Bourdieu. In his great book The Gift (1983), Hyde tried to explain, against an American intellectual background of economic rationalism, why people would do something like write poetry. Bourdieu, whose work was beginning to be translated into English around this same time, had already prepared an answer to this question: people make art for the same reason people do everything—because they want to gain capital. In the case of art this capital was often symbolic rather than financial, but it was still capital. For Hyde, art-making looked more like the premodern gift economies described by anthropologists like Mauss and Lévi-Strauss—the creation of something without obvious utility that could be presented to the world as a gift. (Bourdieu had also written about gift economies; for him they were, like art, a winnable game with rules and strategies.) For Hyde, the secret of art was that there was no secret—art-making was what made us human. It was what we did for free.
As it happens, Hyde’s book is often cited as an argument against payment for writing—“Art is a gift,” these people say, as they pick up their paychecks from Princeton or Iowa or Columbia. Antifree responds with some variant of Bourdieu’s old unmasking: Nothing exists outside the realm of exchange. If a writer is not paid in money, she is paid in “cultural capital” that translates into improved standing and, eventually, cash. So why (asks antifree) should the writer be forced to wait? Why shouldn’t she be paid right now?
In the argument between the free and the antifree, we’re with the antifree. Across a whole range of issues, a simple defense of intellectual property is right now a rebuke to the corporations, not a sop to them. “Show me the money” is a necessary slogan at a time when giant firms leverage a million retirement accounts for a split-second gain in the ominously named dark pools of the financial world.
But as usual we have some qualms. Sometimes antifree can feel like it has invested too much of its energy and passion in the fight for an extra $50. Which is not to scoff at $50. It’s a way station to making a living. But for the moment it’s just $50. The conversation shouldn’t stop there. On the money side, perhaps the next step for antifree is to create and strengthen a union—one that can demand standards for contracts, reprimand institutions for reneging on terms or norms of conduct, and otherwise represent the interests of culture workers before the ultimate bearers of responsibility for the diminishing of salaries and security: media conglomerates, corporate boards, and shareholders. And what about tax reform? In Ireland, artists are exempt from taxes on the first 40,000 euros they earn from their work—whereas artists and freelancers here are faced, among many other obstacles, with onerous self-employment taxes that punish anyone who tries to stay clear of the corporate system. We could do better.
And on the art side, we could do better, too. If the conversation is reduced to money alone, then all writing is reduced to content, all artists to content producers, and part of our utopia is lost. One did not become a writer in order to starve, but nor did one become a writer in order to get rich. So why did one become a writer? Here Hyde is, in the end, more helpful than Bourdieu. We want our $50, and so much more.