The battle between Tom Perez and Keith Ellison, the frontrunners for chair of the Democratic Party, is opening old wounds, with Perez standing for the Clinton wing and Ellison for the Sanders wing. Both wings have losses to answer for: Sanders lost to Clinton, and Clinton to Trump. Nonetheless, Ellison, who endorsed Sanders during the primaries, has a personal history of electoral victories to boast of. Perez has never been elected to state or national office, so the best guide to how he'd run the DNC is his links to other Democratic players. Eyed for a time as Clinton's running mate, Perez has his strongest political ties to Clinton associates such as campaign chair John Podesta, former DNC adviser Tony Coelho, VP finalist and Clinton loyalist Tom Vilsack, and many others. The DNC under Perez, in other words, stands a chance of looking a bit like the Clinton campaign, not necessarily politically but organizationally. Chairs bring their friends and loyalists with them, so a vote for Perez or Ellison is a vote of confidence in his team. While Ellison would likely draw from Sanders's bench, Perez would draw from Clinton's, and consequently one clue as to whether Perez would be a good DNC chair is whether Clinton ran a good campaign.
The answer to this question, which has fallen by the wayside amid the frenzy of the Trump Administration, is best sought in the Clinton campaign's use of technology. "People took Michigan for granted," said the Michigan Democratic congresswoman Debbie Dingell, on the day after the election, by way of explaining the Clinton campaign's shocking loss. The Great Lakes state, in combination with Wisconsin and Pennsylvania, was supposed to have formed a "firewall" against a Trump win. Yet Clinton's campaign had a skeletal ground organization in Michigan and ran no local advertising until the final week, when Team Hillary launched a last-minute ad blitz. In the end, turnout in Detroit was down 75,000 votes (13 percent) from 2012, and Clinton lost Michigan by 10,000 votes.
The core of the Clinton campaign's strategy was its analytics system, developed by dozens of researchers led by Clinton's director of analytics, Elan Kriegel, in close consultation with campaign manager Robby Mook. In the Washington Post, John Wagner wrote, "the algorithm was said to play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads—as well as when it was safe to stay dark." The oracle of the system was "Ada," a big-data simulator that issued up-to-the-minute probabilities on Clinton's chances by state and county. Throughout the general election, Ada buttressed the case for a decisive Clinton win in the Electoral College with reams of statistics. But Ada, and all her numbers, turned out to be wrong.
Ada’s inputs were polls and surveys, both public and the campaign’s own, alongside field data from ground-level campaign workers. Ada’s job was to produce a prioritized roadmap for the campaign that meticulously laid out the “value” of campaigning in particular states: how many voters would be turned out if ads were run in Arizona, say, or if Hillary made an appearance in New Hampshire? In the Democratic primaries, the analytics had acquitted themselves by eventually besting Bernie Sanders. But the campaign missed a critical lesson when they didn’t take stock of Sanders’s upset in Michigan, which Clinton had been favored to win.
Why did Ada fail in Michigan? The primary and the general election were different contests, but both suggest that the failure lay in Ada’s model of the electorate—or more precisely, her inability to update her model of the electorate. In the general election, Ada told Clinton that Wisconsin was a lock, that Michigan was not a problem. But it wasn’t so much that Ada’s cake arbitrarily failed to rise; the failure was in the recipe. In an election where a great realignment took place—where thousands of voters in Rust Belt states who had voted for Obama twice now turned to Trump—Ada had not been programmed to detect the possibility of that realignment. The oracle had been hamstrung from the start.
Wagner writes that Ada ran "400,000 simulations a day of what the race against Trump might look like." This is a very "big data" sort of claim. 400,000 is a large number; no human could look through the results of that many simulations. Ada's "intelligence" lay in how she boiled those 400,000 simulations down into a campaign strategy. Each of Ada's electoral simulations was premised on variations in turnout within expected margins of error: one simulation might posit, for example, that Hispanics would break for Clinton 2 or 3 points higher (or lower) than the data predicted. By sampling a representative subset of all possible variations (the so-called Monte Carlo method of quantitative analysis), Ada would produce a set of outcomes. Across those simulations, Michigan and Wisconsin went for Trump only a small percentage of the time, while Florida and Pennsylvania went for him far more often.
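A minimal sketch of the Monte Carlo approach described above. The polling margins and error size here are invented for illustration; they are not the campaign's actual data, and real systems model turnout and demographics far more finely:

```python
import random

# Hypothetical polling margins (Clinton minus Trump, in points).
# Illustrative numbers only, not the campaign's actual data.
POLL_MARGIN = {"MI": 5.0, "WI": 6.0, "PA": 4.0, "FL": 1.0}
POLL_ERROR_SD = 3.0   # assumed standard deviation of polling error
N_SIMS = 400_000      # the daily figure Wagner reports for Ada

def simulate(seed=0):
    """Count how often each state flips to Trump across simulated outcomes."""
    rng = random.Random(seed)
    trump_wins = {s: 0 for s in POLL_MARGIN}
    for _ in range(N_SIMS):
        # Each simulation perturbs every state's margin by a random
        # polling error drawn from a normal distribution.
        for state, margin in POLL_MARGIN.items():
            if margin + rng.gauss(0, POLL_ERROR_SD) < 0:
                trump_wins[state] += 1
    return {s: n / N_SIMS for s, n in trump_wins.items()}

probs = simulate()
# With these inputs, "safe" states like WI and MI flip only rarely,
# while a close state like FL flips far more often -- exactly the
# pattern that steered resources away from the eventual trouble spots.
```

Note that the output can only ever reflect the assumed margins and the assumed error distribution; if those inputs are wrong, no number of simulations will fix them.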
Yet what must have seemed like a foolproof, detailed prescription for victory based on data and computation was mostly a confirmation of preexisting biases—particularly the campaign’s faith in the firewall. In another election year, those biases might have turned out to be right, and Ada would have been mistakenly vindicated. Here, though, the oracle was revealed to be little more than a parrot. Once the initial analysis showed that Clinton was favored to win in certain states, Ada helped prevent the campaign from questioning her conclusions. “They weren’t running a massive program because they thought they were up 6–7 points,” a senior operative told the Huffington Post. Ada’s recommendations reinforced themselves. By deallocating resources from Wisconsin and Michigan, Ada starved herself of data that might have caused her to recognize a problem.
The campaign validated Ada’s model nightly, but the question is, what was being validated? Certainly not Michigan voter tendencies, because the campaign wasn’t collecting enough data there. Where the campaign was collecting data, such as Pennsylvania, they allocated more resources, because the data confirmed that the state was a trouble spot. What was validated, ultimately, was the internal consistency of the campaign’s initial assumptions. Those assumptions, and Ada’s apparent statistical support for them, caused so much inertia that the Clinton campaign starved Michigan of resources and ignored Wisconsin’s low-enthusiasm Clinton supporters, many of whom ended up not voting.
There is a deeper problem in the selection of those 400,000 simulations. The variables for each simulation are drawn from a distribution, which ensures that less likely events, such as a poll being off by ten points, are simulated less frequently than more likely events, such as a poll being off by two points. By simulating more likely events more often, the algorithm assigns greater weight to those possibilities. For rare events like elections, the choice of distribution is inevitably arbitrary, as are the relative probabilities themselves. Throughout the year, Nate Silver's 538 assigned Clinton a lower probability of winning than most outlets did, but not because 538 had better data or "saw" things that others did not; it chose a distribution that mandated less certainty. Even so, given the same inputs, no reasonable distribution would ever have tipped Clinton below a 51 percent chance of winning, nor could it have raised alarms about Michigan. This is why, while forecasters assigned varying probabilities to a Clinton win, they all put the same states in question, to varying degrees. The expectations had been set early on. Ada compensated for margins of error, but she never tested them for accuracy.
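A toy calculation makes the point concrete. The margin and error figures below are invented; the point is only that the same polling lead implies very different win probabilities depending on the error distribution one assumes:

```python
import math

def win_probability(margin, error_sd):
    """P(true margin > 0) when polling error is normal with the given
    standard deviation. Uses the normal CDF built from math.erf."""
    return 0.5 * (1 + math.erf(margin / (error_sd * math.sqrt(2))))

# Same hypothetical polling lead, two modeling choices for poll error:
margin = 3.0                                # assumed Clinton lead, in points
confident = win_probability(margin, 2.0)    # narrow error distribution
cautious  = win_probability(margin, 5.0)    # wide, 538-style distribution

# The data are identical; only the assumed distribution differs, yet
# the implied certainty changes dramatically. Neither choice can be
# validated from the polls themselves.
```

With these numbers the narrow distribution puts the lead at better than 90 percent safe, the wide one in the low 70s, which mirrors the spread among 2016 forecasters working from essentially the same polls.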
Consider the reverse question: what would it have taken for Ada to identify Michigan as the crucial state it turned out to be? You can't simply say "better polls," because Ada was built to withstand faulty polls. A more accurate distribution, then, one that entertained the possibility of greater error? Even that would not have sufficed: while Ada might have raised more concerns about Wisconsin and Michigan, she still would have rated them as lower priorities than Pennsylvania. Given her inputs, Ada was unlikely ever to point to Wisconsin. But clearly the Clinton campaign thought she could have, and consequently no one on the campaign did the one thing that could have pointed Clinton at Wisconsin, which was to realize that Ada could not be trusted.
If a piece of code crashes, it's broken, but at least you know it's broken. The most dangerous kind of code, as I learned too many times in my years as a software engineer at Google and Microsoft, is the kind that breaks but appears to keep working. The worst part is that you have only yourself to blame, because you should have anticipated the possibility of such a breakage and set up mechanisms to catch it. This was Ada's failure: she went wrong early and no one ever noticed. What Ada needed to do was generate recommendations for collecting the new data most likely to falsify her own predictions: ground-level voter verification throughout Michigan, say, or scrutiny of turnout in the "safe" Clinton districts of Pennsylvania. Only an aggressive attempt at falsification would have broken the hermetic seal on Ada's model.
From there we can see a way forward. One can imagine an anti-Ada, which instead of spitting out probabilities would generate points where knowledge is thin, such as presumed "safe states" like Wisconsin. Instead of creating certainty, this anti-Ada would foster doubts, but the right doubts. It could steer organizers to places they would not otherwise go in order to collect more data, and it would challenge organizers' complacency. As it was, Ada abetted what appears to have been a very complacent campaign.
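Such an anti-Ada could start from something quite simple. The sketch below, with entirely invented figures, ranks states by a "doubt score" that rises when the model is confident about an electorally valuable state on the basis of thin ground data:

```python
# A toy "anti-Ada": rank states not by win probability but by how
# little fresh ground data backs the model's confidence.
# All numbers are invented for illustration.

states = {
    # state: (electoral votes, model's P(win), ground samples collected)
    "WI": (10, 0.92, 400),
    "MI": (16, 0.88, 900),
    "PA": (20, 0.77, 25_000),
    "FL": (29, 0.55, 40_000),
}

def doubt_score(ev, p_win, samples):
    """High when a state matters, the model feels sure, and that
    confidence rests on thin data -- the places to go probe next."""
    return ev * p_win / (1 + samples)

ranked = sorted(states, key=lambda s: doubt_score(*states[s]), reverse=True)
# ranked puts the under-sampled "safe" states first: the heavily
# surveyed battlegrounds fall to the bottom of the probe list.
```

With these inputs the under-sampled "safe" states Wisconsin and Michigan rise to the top of the list, while the heavily surveyed Pennsylvania and Florida fall to the bottom, inverting the priorities that Ada's probabilities produced.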
Two days after the election, the top Clinton advisers John Podesta and Jennifer Palmieri confidently blamed Clinton's loss on uncontrollable factors. Their faith in Ada was evidently unshaken. What has been exposed, though, is not a failure of technology but a failure in the use of technology. Ada ran a data-laundering scheme, and the authors of the scheme were also its victims.
By consensus, Democrats deemed Debbie Wasserman Schultz's tenure at the DNC a disaster, with many wishing she had lost the position earlier than last August. Both Perez and Ellison, if they have any sense, will steer the DNC away from her ineffective tactics. But the Clinton campaign maintains that "they did nothing wrong." Should he become chair, Perez is likely to bring the campaign's technology to DNC efforts more broadly. Democrats should be careful that Ada does not pull her con again.