
The Uncertainty of Risk

  • Silver, Nate. The Signal and the Noise. Penguin. September 2012.
  • Taleb, Nassim Nicholas. Antifragile. Random House. November 2012.
  • Weatherall, James Owen. The Physics of Wall Street. Houghton Mifflin Harcourt. January 2013.

No one could have predicted on March 10, 2011 that the imminent Tōhoku earthquake, at magnitude 9.0, would be the greatest to hit Japan, or foreseen the giant tsunami that struck the Japanese coast minutes later. But that does not mean the subsequent meltdown of three reactors at the Fukushima Daiichi nuclear power station was unavoidable. The plant was built to withstand a big earthquake and survive a moderately sized tsunami, but a panoply of engineering errors—too-short sea walls, backup diesel generators installed in locations likely to flood, pools overcrowded with spent fuel rods, and a main control room insufficiently shielded against radiation—permitted the worst nuclear accident since Chernobyl. While Japanese nuclear regulators and TEPCO, the utility that owns Fukushima Daiichi, knew that many of these vulnerabilities existed, the authorities’ sophisticated geophysical models considered such an intense earthquake to be impossible in the region. Their risk analyses did not account for the simultaneous, systemic failures of sea walls, power grids, and backup cooling systems that ultimately doomed the plant.

Increasingly, risk management experts and their predictive models of the world determine the “efficient” distribution of resources necessary to respond to existential threats. In some domains, this organizational strategy has worked well. Improvements in prediction have helped save lives from extreme weather, manage the spread of seasonal disease, and navigate the internet. But forecasting has not adequately protected against the ravages of catastrophic technological failure, ecological collapse, or financial panic. Despite our generalized faith in their power to predict, when systemic disaster strikes we continue to accept experts’ claims that the cataclysm was an unforeseeable “act of god” that no one could have reasonably prepared for. These excuses leave both the ideology and the techniques of risk management intact. Our success at forecasting which cities to evacuate in advance of an approaching hurricane convinces us that we can equally well predict a “sustainable” level of carbon emissions that will head off global climate change. But this extrapolation overestimates our ability to statistically manage reality’s irreducible complexity and to eliminate uncertainty. The result is a world well prepared for the regularly occurring dangers of modern life, but woefully fragile to the rare, extreme events that drive history.

The uncertainty of the future is a truth that applies equally well across the broadest range of human experience. By constructing sophisticated models of reality, experts can make valid, probabilistic predictions about the future, transforming unknowable uncertainty into calculated risks. If an individual rationally prepares for the risks he faces, the thinking goes, then things will turn out all right over the long run. If everyone properly accounts for his own risks, then this protection will extend to society as a whole—since this view sees society as nothing but a large number of individuals and firms. Digital communication, social networks, and computers’ algorithmic autonomy—the elements of Big Data—only reinforce the view that emergent behavior allows a large group of rational actors to be self-regulating. As the accuracy of these individuals’ models of reality improves, so should the efficient self-management of the whole. But this ideology relies on a fallacy of composition, since it assumes that a whole system behaves no differently than the sum of its parts. There is no room in this neoliberal view for the “rational irrationality” of bank runs, asset bubbles, and other nonlinear combinations of effects that emerge in complex, self-interacting systems. There is no way for risk managers to prepare for a catastrophe when their science denies its possibility.

The financial industry suffers (or gains, depending on who you’re talking to) from this overoptimistic commitment to computational risk management more than any other. In The Physics of Wall Street, James Owen Weatherall chronicles the evolving quantitative models of financial markets, from the 19th-century Parisian bourse to the postmodern exchanges composed of “co-located” servers, submitting and canceling bids and offers thousands of times a second on the orders of proprietary trading algorithms. The book presents physicists-turned-quants as heroes of financial markets, and argues that the 2008 financial crisis demonstrates the need for even more of their sophisticated risk modeling, since the reigning models were clearly insufficient. In keeping with this political objective, Weatherall rarely judges the social value of the models he profiles by their historical effects on real markets. He prefers instead to explain why the models are correct in theory, recounting the billions they have made their makers on trading floors.

Weatherall’s treatment of the Black-Scholes options pricing model demonstrates these problems. This equation puts a price on risk in the form of a financial derivative, a contractual bet intended to offset the risk of owning an underlying asset. Weatherall describes how Fischer Black, Myron Scholes, and their collaborator Robert Merton came up with a method of constructing a “risk-free” portfolio of securities, derivatives, and cash, just as the SEC approved the opening of the first dedicated options market in the US, creating an immediate demand among traders for a way to price these derivatives. Black, Scholes, and Merton didn’t just publish their model, for which they won the 1997 Nobel Prize. They took it to the markets themselves. Scholes and Merton went on to be directors of Long Term Capital Management (LTCM), a hedge fund that billed itself as “the financial technology company” and based its supposedly risk-free arbitrage trading on “dynamic hedging” strategies derived from the Black-Scholes model. Black, who died in 1995, created the Quantitative Strategies Group at Goldman Sachs.
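For readers who want to see the machinery, the Black-Scholes price of a European call option fits in a few lines of Python. This is a minimal sketch of the published formula; the inputs at the bottom are hypothetical, chosen only to make the calculation concrete.

```python
# A minimal sketch of the Black-Scholes formula for a European call option.
# The formula is the standard published one; the example inputs are hypothetical.
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Spot price S, strike K, time to expiry T in years,
    risk-free rate r, and annualized volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    # Discounted expected payoff under the model's core assumption that the
    # asset follows a well-behaved lognormal random walk with constant volatility.
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Hypothetical contract: $100 stock, $105 strike, six months, 2% rate, 20% volatility.
print(f"call price: {black_scholes_call(100, 105, 0.5, 0.02, 0.20):.2f}")
```

The assumption buried in that code comment, a well-behaved random walk with constant volatility, is the one on which the rest of this story turns.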

In The Physics of Wall Street, Weatherall deems the Black-Scholes model successful because traders and hedge funds adopted it with zeal. He acknowledges that it has some limitations, and that the model’s widespread use likely exacerbated the 1987 Black Monday stock market crash. But he maintains that it is “based on rigorous reasoning that, in a very real sense, cannot be wrong.” Of course, this limited, methodological assessment both ignores the model’s theoretical problems and glosses over the real structural damage it has caused. Although he extols LTCM’s performance during its first few years, when it posted annual returns of more than 40 percent, Weatherall spends only a few sentences on the firm’s spectacular implosion in 1998, when it lost $4.6 billion in four months as its “risk-free” dynamic hedging strategy turned out not to have eliminated uncertainty after all. Even though the Black-Scholes model dictated that the firm’s arbitrage trades were “correct” in the long term, LTCM failed to heed the maxim that markets can stay irrational longer than you can stay solvent. It was not only LTCM’s directors who were in thrall to the firm’s “financial technology.” Nearly every bank on Wall Street had eagerly extended large credit lines to the hedge fund, trusting its team of Nobel prizewinners and trading veterans. As LTCM’s highly leveraged bets on foreign debt soured, the financial system itself might have crumbled under the weight of billions in worthless repo financing had the Federal Reserve not organized a bailout.


At the foundation of many financial models, including Black-Scholes, is an assumption that price changes in financial markets conform to a version of the normal distribution—the probabilistic description behind the familiar bell curve. The normal distribution describes the well-behaved randomness of many coin flips or the heights of a population of adults, and its familiarity has bred a wide array of mathematical tools to manipulate it. This tractability is the primary motivation for modelers’ application of it, but in reality the model is abused, invoked to explain phenomena that don’t conform to its convenient pattern. Nassim Nicholas Taleb, an options trader-turned-philosopher, has labored to explain how nearly all risk models misunderstand the nature of randomness by overeagerly assuming that all sorts of phenomena can be described with the well-behaved normal distribution model. As a consequence, our faith in risk management and its experts has left society vulnerable to the hazards of uncertainty.

In his 2007 bestseller, The Black Swan, Taleb argued that there are really two broad categories of randomness. The tame randomness of the normal distribution is dominated by the law of large numbers, which states that collecting more samples of a random phenomenon should give statisticians a more precise measure of its average behavior and more accurate inferences about its underlying causes. In this regime, more data is always better, extreme cases are vanishingly rare, and a single sample never disrupts the overall picture: even if his existence were possible, a nine-foot-tall man would not change the average height of New Yorkers. But few economic phenomena are so well behaved. Instead, their probability distributions have “fat tails,” meaning rare, extreme events dominate the picture and average values are meaningless. Consider the distribution of income: the average income of Americans may have grown in the “recovery” year of 2010, but since 93 percent of that gain went to the top 1 percent, the average does not truthfully portray economic reality. Similarly, financial markets are dominated by Taleb’s “Black Swans,” extreme events that one cannot predict on the basis of past data. A single downturn could erase the wealth and power that Lehman Brothers’ investment bank accumulated over 158 years, imperiling the global economy in the process. Taleb shows how, in the complex systems that give rise to this wild randomness, minute differences in assumptions and current measurements can change the predicted frequency and intensity of Black Swans by as much as one trillion times. It’s therefore fundamentally impossible to reliably estimate either how often Black Swans will appear or how destructive they will be.
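A toy simulation, not drawn from Taleb’s book, makes the contrast concrete. Assuming normally distributed heights and a Pareto-distributed stand-in for income (the tail exponent of 1.1 is an illustrative choice), the largest single observation is a rounding error in the first case and a meaningful share of the entire total in the second:

```python
# Toy contrast between tame and wild randomness (illustrative parameters only):
# for normal data no single sample matters; for a fat-tailed Pareto variable
# the single largest observation can dominate the total.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

heights = rng.normal(loc=70, scale=4, size=n)          # inches, tame randomness
incomes = 30_000 * (1 + rng.pareto(a=1.1, size=n))     # dollars, fat-tailed

for name, x in (("heights", heights), ("incomes", incomes)):
    print(f"{name}: mean = {x.mean():>12,.0f}, "
          f"largest single sample = {x.max() / x.sum():.3%} of the total")
```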

Benoît Mandelbrot, the brilliant mathematician of fractal fame, first presented evidence that financial markets are wildly random back in the 1960s. But these empirical studies failed to convince the financiers and their econometricians, who to this day continue to model markets using variants of the too-tame normal distribution, largely because of its correspondence with neoclassical ideas about efficient markets free from irrational booms and busts. Belief in these discredited theories runs so deep that no amount of contradictory evidence can dislodge them. Despite the too-high frequency of large booms and panics, financiers continue to invoke the normal distribution’s language of sigmas (standard deviations) to assure everyone that their risk is properly managed. LTCM’s 1998 risk management reports stated that it would take a virtually impossible “10-sigma” event for the firm to lose all of its capital within one year (and according to the normal distribution, a 10-sigma event should happen once in every 10²⁴ samples, a number millions of times larger than the number of seconds since the Universe began). In August 2007, as the crisis gathered, David Viniar, CFO of Goldman Sachs, explained his bank’s extraordinary losses by claiming it had been blindsided by “twenty-five standard deviation moves, several days in a row.” The obvious implication of statements like these is that financial crises are unforeseeable “acts of God” for which no one should be expected to prepare. History suggests otherwise.
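As a back-of-the-envelope check on such claims, one can ask what the normal distribution itself says about the odds of these moves. The exact figure depends on whether one counts one tail or both, but the order of magnitude speaks for itself:

```python
# What a normal distribution actually implies about "10-sigma" and
# "25-sigma" events (one-sided tail probabilities).
from scipy.stats import norm

for k in (10, 25):
    p = norm.sf(k)   # survival function: probability of exceeding k sigmas
    print(f"{k}-sigma move: p ≈ {p:.2e}, i.e. roughly one observation in {1/p:.1e}")
```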


Beyond Wall Street, too, persistent faith in prediction models continues to dominate, as people remain optimistic that the information density and supercomputing power of “Big Data” will allow scientists to model ever-larger chunks of the world and manage the risks presented by its intrinsic randomness. In The Signal and the Noise, Nate Silver explains why the profusion of data, theories, and computing power so often fails to open the future to inspection. The mild-mannered Silver has become a statistical guru by bringing rigorous Bayesian thinking to fields already drowning in statistics. In the book, he describes how he used his spare time to build PECOTA, a statistical model that predicted the possible future performance of a baseball prospect by comparing his record to a database of past players’ stats. Silver’s jump into stardom came during the seemingly interminable 2008 presidential campaign, when he started mining mountains of polling data on his anonymously published blog, FiveThirtyEight. His detailed model of the race correctly predicted the general election results in forty-nine of the fifty states, outperforming both political graybeards and established polling firms. He repeated the trick for the 2012 race, this time as a staffer at the New York Times, correctly calling all nine swing states and thirty-one of the thirty-three Senate races.

The advent of cheap computing power has improved the reliability of some, but not all, types of predictions. Twenty-five years ago, the National Hurricane Center’s forecasts of a hurricane’s landfall three days out had a radius of uncertainty of 350 miles. Today, that error is down to 100 miles, a region small enough to evacuate and harden effectively. Weather forecasts rely on simulations of the atmosphere, built from equations representing well-understood physical laws and measurements of current conditions. These equations have proven highly reliable over decades, and advances in satellite and radar technology provide meteorologists with increasingly accurate and fine-grained data. Since doubling the resolution of these models requires a sixteen-fold increase in the number of calculations (the grid doubles along each of three spatial dimensions, and the time step must be halved to keep pace), computational capacity substantially limits predictive accuracy. This is where supercomputers help.

But computer power can’t make up for faulty theory or poor-quality data. A fundamental challenge facing modelers is that in most cases they can see only the effects of a random phenomenon, not the underlying processes that generate it. In cases of wild randomness, relying on computers to squeeze inferences from limited data sets leads experts to confidently discount the possibility of future black swans, which in turn encourages decision makers to over-optimize and reduce margins of safety too far. Silver demonstrates how competition among data-driven experts to produce “precise” forecasts for their consumers leads them to try to reduce the uncertainty reported alongside their predictions. In doing so, analysts tend to “overfit” their models to more closely follow the contours of the available data. While this strategy might reduce the margins of error in their reports, it also yields models that judge events more extreme than those present in their source data to be virtually impossible. For businesses and regulators eager to keep compliance costs low, these truncated forecasts are often welcome news.
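A hedged sketch of that failure mode: draw a modest “history” from a fat-tailed process (a Student-t distribution with three degrees of freedom stands in for the unknown real one), fit the familiar normal distribution to it, and compare the two models’ estimates of a far-out move. The fitted model declares the extreme event orders of magnitude less likely than the process that generated the data actually makes it.

```python
# Fit a thin-tailed (normal) model to data from a fat-tailed process and
# compare tail estimates. The t-distribution here is an illustrative stand-in
# for whatever wild process actually generates the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
history = stats.t.rvs(df=3, size=500, random_state=rng)   # fat-tailed "history"

mu, sigma = history.mean(), history.std(ddof=1)           # fitted normal parameters
threshold = 10.0                                          # a move far out in the tail

p_model = stats.norm.sf(threshold, loc=mu, scale=sigma)   # what the fitted model expects
p_true = stats.t.sf(threshold, df=3)                      # what the process delivers

print(f"fitted normal: P(X > {threshold}) ≈ {p_model:.1e}")
print(f"true process:  P(X > {threshold}) ≈ {p_true:.1e}")
```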

Consider the engineering and regulatory failures responsible for the Fukushima Daiichi meltdowns. After the shaking from the Tōhoku earthquake reached the plant, its three active reactors shut down as designed. The cooling system, a massive network of pumping equipment that kept the reactors and their spent fuel rods from melting, could no longer draw electricity from the grid and so began pulling power from emergency generators, many of which were located in basements. When the forty-nine-foot tsunami waves reached the oceanfront plant fifty minutes after the quake, they crashed over the nineteen-foot seawalls, drowning the generators and destroying the substations needed to distribute power throughout the six-reactor complex. Without an operational cooling system, the reactors and the pools overstuffed with spent fuel rods overheated and ignited a further series of catastrophic failures.

Risk management failed on several levels at Fukushima Daiichi. Both TEPCO and its captured regulator bear responsibility. First, highly tailored geophysical models predicted an infinitesimal chance of the region suffering an earthquake as powerful as the Tōhoku quake. These models used historical seismic data to estimate the local frequency of earthquakes of various magnitudes; none of the quakes in the data was bigger than magnitude 8.0. Second, the plant’s risk analysis did not consider the type of cascading, systemic failures that precipitated the meltdown. TEPCO never conceived of a situation in which the reactors shut down in response to an earthquake, and a tsunami topped the seawall, and the cooling pools inside the reactor buildings were overstuffed with spent fuel rods, and the main control room became too radioactive for workers to survive, and damage to local infrastructure delayed reinforcement, and hydrogen explosions breached the reactors’ outer containment structures. Instead, TEPCO and its regulators addressed each of these risks independently and judged the plant safe to operate as it was.

This piecewise, microcosmic approach assumes too readily that uncertainty can be transformed into manageable risk. The nature of reality is that it all happens at once: no one can anticipate all the ways that things can go wrong, and it is impossible to trace these failures through the overdetermined causal web of a complex system. We are good predictors of the future in some fields. But as Silver demonstrates, our forecasting skills work best in the highly structured domains of games, the transparency of the atmosphere, and the panoptic world of cyberspace. Even the most careful statistical thinking and the fastest supercomputers cannot reliably predict the futures of national economies, the global climate system, or turbulent financial markets. Silver’s arguments ultimately echo Taleb’s earlier work, and the lesson is the same: because it hinges on imperfect theoretical assumptions and is acutely sensitive to its input measurements, complex modeling cannot, at bottom, accurately estimate the frequency or intensity of extreme black swan events.


So if experts’ predictions are unreliable guides, how are we to manage an increasingly complex world rife with technological, ecological, and economic uncertainty? In his latest book, Antifragile, Taleb sketches a general theory of how systems respond to randomness. The concept comes from his first career as an options trader, observing the different ways in which financial products respond to increased volatility in price changes. Although people generally think of “robustness” (resistance to volatility) as being the opposite of “fragility” (vulnerability to outside disturbance), Taleb argues that this reigning intellectual binary is false. Here he swoops in to invent the missing third term on the spectrum: antifragility. A wine glass is fragile in that its stability requires protection from outside forces; exposure to anything more volatile than a sip or swirl is liable to harm it. The human immune system, in contrast, is antifragile, meaning it requires volatility to remain healthy. The immune system of a child who’s exposed to small disruptions in the form of vaccines, dirt, and a naturally germ-filled environment will grow stronger in response, while that of a child who grows up in a completely sterile environment will be relatively feeble. Robustness lies in the middle of the fragility spectrum: volatility neither helps nor harms a robust object, which remains unperturbed, like an aircraft carrier moving smoothly through a rough sea.

As a trader used to thinking about risk in terms of an investment portfolio, Taleb cares more about exposure to an event than about the event’s outcome per se. Although conventional thinking about risk frequently conflates these ideas, they are properly distinct, especially since the relationship between the outcome of a random process and the harm (or benefit) that results is almost always nonlinear. Consider flooding: in advance of its landfall, the National Hurricane Center predicted that Hurricane Sandy would cause the water in New York Harbor to rise 11.7 feet above normal high tide. Errors in this prediction were linear: each additional 5 percent of forecast error corresponded to roughly seven more inches of water on the streets of Red Hook, Far Rockaway, and Lower Manhattan. The city’s infrastructure, however, had a nonlinear exposure to that water. For the first six feet of flooding, the subway system remained dry; but a few more inches allowed the waters to reach the lowest-lying ventilation shafts and station openings, inundating the tunnels and snarling the transportation system for weeks. For a fragile system, like the subway, volatility generally inflicts more pain than gain. This fragility is magnified if a model improperly assumes that mild randomness reigns when the underlying phenomenon is, in fact, wildly random and prone to black swans.
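The asymmetry is easy to caricature in code. In the toy model below (the threshold and damage index are invented, not the transit authority’s figures), the water level moves linearly but damage stays at zero until the lowest vents are reached and then escalates rapidly:

```python
# Toy model of nonlinear exposure: water level changes in equal increments, but
# damage is zero until a threshold is crossed, then grows fast. Numbers invented.
def subway_damage(flood_ft, vent_height_ft=6.0):
    """Arbitrary damage index: dry below the lowest ventilation shafts,
    rapidly escalating once water pours into the tunnels."""
    if flood_ft <= vent_height_ft:
        return 0.0
    return 100.0 * (flood_ft - vent_height_ft) ** 2

for flood in (5.0, 5.5, 6.0, 6.5, 7.0, 7.5):          # equal half-foot increments
    print(f"{flood:4.1f} ft of water -> damage index {subway_damage(flood):6.1f}")
```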

For a fragile system, accurate prediction is key, since even small errors can inflict large costs, as black swans tend to be harmful. There are ways of making a fragile system more robust, such as buying insurance or building levees, but these costs will always be balanced against predicted harm. So fragile systems will only get as much protection as risk analysis justifies. Over-optimization, poor predictions, and exaggerated cost estimates can all lead to situations where fragile systems are left insufficiently protected against future uncertainty. For an antifragile system, in contrast, the asymmetry is reversed: black swans tend to be positive, so increased volatility helps in the long run. In these cases, accurate predictions are less important to long-term success; one only needs to take advantage of positive black swans when they come along. Taleb calls this feature “optionality,” a conceptual generalization of the financial products he used to trade. In finance, an option is a contract with a small, known upfront price that may or may not pay off in the future, depending on the outcome of some specified event. Taleb generalizes this idea to describe any bet that usually yields a small loss but occasionally pays off in a big way.
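A short simulation (with invented probabilities and payoffs, not Taleb’s numbers) shows why such a bet does not depend on accurate prediction: almost every individual wager loses its small, fixed cost, yet the rare large payoff leaves the portfolio well ahead over time.

```python
# Toy model of optionality: pay a small, known cost on every bet; on rare
# occasions the bet pays off enormously. Probabilities and payoffs are invented.
import numpy as np

rng = np.random.default_rng(2)
n_bets = 10_000
cost = 1.0                               # capped downside per bet
wins = rng.random(n_bets) < 0.01         # a 1 percent chance of a big payoff
payoff = np.where(wins, 500.0, 0.0)      # large, effectively uncapped upside

pnl = payoff - cost
print(f"losing bets: {np.mean(pnl < 0):.1%}, total profit: {pnl.sum():+,.0f}")
```

The point is not the particular numbers but the shape of the payoff: bounded losses, unbounded gains, and no need to predict in advance which bet will be the winner.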

The startup economy is an example of an antifragile system rooted in optionality. Venture capitalists power the tech scene by making investments in nascent firms. These upfront costs are relatively small and capped. VC firms cannot lose more than they put in. But since there is no upper limit to success, the investment’s upside is potentially unbounded. Of course, venture capitalists need to make smart bets, but the business model doesn’t require them to be so good at predicting winners as to pick one every time. The payoffs from their few wildly successful investments more than make up for the capital lost to failures. While each startup is individually fragile, the startup economy as a whole is highly antifragile, and predictive accuracy is less important. Since the losses are finite and the gains are practically limitless, the antifragile startup economy benefits overall from greater variability in the success of new firms.

Complex systems, such as economies or ecosystems, have so many constituent parts with nonlinear interdependencies that the very notion of discrete links between causes and effects ceases to be reliable. Because these complex interactions are impossible to fully enumerate beforehand, it is dangerous to try to reduce one’s uncertainty by “hedging” the risk of one action with another speculative bet that a risk calculation says points in the “other” direction. As the mathematicians directing LTCM learned painfully, hedging merely balances a risk equation; it offers no true protection against the future’s uncertainty. Taleb instead advocates an unsophisticated, heuristic approach less reliant on correct predictions: rather than adopting untested technical fixes to counteract problems, stop creating the problems in the first place. For example, many proposals to deal with the climate crisis rely on speculative geo-engineering schemes to sequester atmospheric carbon, offsetting others’ emissions. “Carbon credits” could then be sold on exchanges, allowing market forces to determine who gets to keep polluting. But many of these schemes—such as pumping CO2 into supposedly impermeable bedrock formations or “seeding” the oceans with metals to promote phytoplankton growth—have not been shown to permanently remove carbon from the atmosphere, and the side effects of deploying them widely are unknown and potentially serious. Instead of “hedging” so we can continue to pollute, we ought to focus on drastically cutting global greenhouse gas emissions and preventing further deforestation. Rather than trying to solve the crisis in one grand, speculative fix, we ought to stop making the problem worse.

Taleb’s heuristic approach sounds like common sense, and it largely is. But by the time people are fluent enough in risk management to make decisions, their common sense has usually been overwhelmed by an ideological commitment to sophisticated analysis and solutions. Only in this milieu would it make sense to evaluate elements of systems individually, transforming lists of conceivable effects into statistical risk assessments, and deny that irreducible uncertainty lurks in complexity. But thinking in terms of fragility can help elucidate the tradeoffs between the health of parts and wholes.


As the financial crises of the past three decades have painfully demonstrated, the global banking system is dangerously fragile. Financial institutions are so highly leveraged and so opaquely intertwined that the contagion from a wrong prediction (e.g. that housing prices will continue to rise) can quickly foment systemic crisis as debt payments balloon and asset values shrivel. When the credit markets lock up and vaunted banks are suddenly insolvent, the authorities’ solution has been to shore up underwater balance sheets with cheap government loans. While allowing a few Too Big To Fail banks to use their “toxic assets” as collateral for taxpayer-guaranteed loans makes their individual fiscal positions more robust, all this new debt leaves the market as a whole more fragile, since the financial system is more heavily leveraged and fire-sale mergers consolidate capital and risk into even fewer institutions. These “solutions” to past crises transferred fragility from the individual banks to the overall financial system, creating the conditions for future collapse.

Too Big To Fail is an implicit taxpayer guarantee for banks that privatizes profits and socializes losses. Markets have internalized this guarantee. The judgment that Too Big To Fail banks are, perversely, less risky is reflected in the lower interest rates that creditors demand on loans and deposits. Recent studies estimate that this government protection translates into an $83 billion annual subsidy to the ten largest American banks. This moral hazard rewards irresponsible risk taking, which management will rationalize ex post facto by claiming that no model could have predicted whatever crash just happened. Being Too Big To Fail means that predictors have no “skin in the game.” In making large bets, they get to keep the upside when their models work, but taxpayer bailouts protect them from market discipline when losses balloon and their possible failure puts the overall economy at risk. To promote an antifragile economic system, bankers must be liable for the complex products they produce, each financial institution must be small enough to safely fail, and the amount of debt-financed leverage in the system overall must be reduced. These are the most urgent stakes obscured by the difficult mathematics of financial risk. Markets will never spontaneously adopt these reforms; only political pressure can force them.



