Smashing the Crystal Ball
Among our favorite fantasies as humans—up there with the ability to fly—is the ability to predict the future. Most of us have sense enough to realize how fraught with peril that is, and some of us enjoy the intellectual exercise of contemplating the ethical and even existential problems such an ability would pose.
I have an article coming out next year for which I read a book by the economist Frank Knight, published just over a century ago, in which he argued that uncertainty—the inability to predict the future—was necessary for profit; that profit was the result of taking a calculated chance and getting lucky; and that if we could all predict the future, profit would not exist. For those who wish to rid humanity of capitalism, though, the ability to predict the future is not a promising way forward. Another book I read for that article, Radical Uncertainty by John Kay and Mervyn King, argues that our denial of the unavoidable reality of uncertainty has led to bad, even disastrous, economic and business decisions in our own time—decisions based on mathematical and statistical models that create the illusion of certainty and predictability where there is actually nothing of the sort.
Kay and King’s argument is perhaps as important for historians—whether professional scholars or other people with a serious interest in understanding the human past—as it is for policymakers, because historians of all stripes share a common problem: we know how it turned out. In a sense, a historian can predict the future, because the historian knows how the future turned out for the people in the past. This may seem like a boon, but I want to point out that it is, in fact, just as problematic for good history as it would be for good human decision-making if we could predict the future from the present. Knowing how it turned out hinders, more than it helps, the doing of history.
The reason is that awareness of what-happened-after naturally leads our pattern-seeking brains toward teleology and determinism—the assumption, conscious or not, that things happened the way they did because they were supposed to, or designed to, or had to. The fallacy of inevitability is a tricky one to sidestep, in part because we will naturally highlight the evidence that seems to support the inevitability—that seems to demonstrate for us why things turned out the way they did—while at the same time naturally downplaying, or even ignoring, the evidence that does not support the lead-up to how it all turned out. We will go looking for the evidence that explains to us how it turned out. We probably won’t go looking for the evidence that doesn’t.
I’m reading the latest book by one of my most generous mentors, Larrie Ferreiro. It’s called Churchill’s American Arsenal: The Partnership Behind the Innovations that Won World War Two (Oxford, 2023). Early in the book, Larrie points out that, at the same time physicists and engineers were working on what we would eventually come to call radar, one such clever person was championing the idea of minefields in the sky. Instead of thinking in terms of using radio waves to detect approaching bombers, he was thinking, naturally enough, I think, of adapting a proven maritime defensive technology—the laying of explosive contact mines in the approaches to one’s own coast—to the air. I have been studying the Second World War since I was literally twelve years old. I did not know this. I doubt you do, either. It didn’t go anywhere, because the radar technology seemed more promising to those holding the purse strings than aerial mines, and so that technology got funded and pursued, to successful effect. Perhaps the aerial-mines idea would not have worked. But perhaps it would have. The key to avoiding the writing—or thinking—of facile, deterministic history is that crucial “but.” Just because it didn’t happen that way doesn’t mean it couldn’t happen that way. That sounds so obvious, stated like that, but our brains are hard-wired to overlook it, and so we have to consciously train ourselves not to. Knowing how it turned out makes that harder, not easier, to do.
In my last book, I faced this problem in what is, I think, one of the historical scenarios most challenged by the fallacy of inevitability, at least for those studying the early modern European world: the American Revolution against the British Empire, 1775–83. My book takes place from 1767 to 1773, with the bulk of it happening from ’68 to ’72. I set out, from the get-go, to leave that Revolution completely out of the book. I didn’t even want to refer to it. I knew that, to the extent I succeeded, I would do a far better job of presenting the British-American North Atlantic as it actually was at the time—when no one knew what was going to happen, though they all had their hopes, fears, designs, and hunches. So, the only allusion to an event after 1773 was to the death of a central character in the book in 1807.
In history, we take pains to avoid what we call “presentism”—thinking about history through distortion filters created by the biases and perspectives of our own time and experience. I don’t want to create confusion, but it does occur to me that we could have assigned a different meaning to that term: as a technique for writing history as much as possible as though it were the present. Whether you write history or just think about it sometimes, I would encourage you to try this perspective. Try to forget that you know how it turned out. Look for, or think about, evidence that suggests it could have turned out differently. I know that any professionals reading this are immediately going to have their “counterfactual” alarm bell go off here. I get it; actually writing scholarly counterfactual history is a devilish business. But we don’t have to go there; if we stick to history-as-past-present, we avoid counterfactuals because we are not going into the future and saying “well, here’s how it could have turned out”; we are pointing out facts, as best as the evidence allows, about the specific time under consideration, that do not fit within a narrative of inevitability: this led to this, and that led to that, and there you have it.
When we do this, we open up the possibility of understanding what was really going on. When we don’t do this, we close off that possibility. When Eric Schatzberg pointed out that there was nothing inevitable in the inter-war period about aluminum airplanes over wooden ones—that the choice was a deliberate one, made out of bias and fully embedded in the modernist culture of the day—he wasn’t just pointing that out so we could better understand technical developments in aviation. He was pointing out to us how technological history works—which is how all history works.
Each of us has a history that we know better than anyone else: the history of our own lives. If we apply this same standard to our own personal histories, we stand to understand our own lives far better than if we don’t. And that makes us better equipped to make decisions in the present about our futures. Regardless of whether we might want a crystal ball, we don’t have one. The ability to understand our past in spite of how it actually turned out is the closest we’re going to get.
Frank Knight, Risk, Uncertainty and Profit (Chicago: University of Chicago Press, 1971) (orig. pub. 1921).
John Kay and Mervyn King, Radical Uncertainty: Decision-Making Beyond the Numbers (New York: W.W. Norton, 2020).
Larrie D. Ferreiro, Churchill’s American Arsenal: The Partnership Behind the Innovations that Won World War Two (New York: Oxford University Press, 2023).
Eric Schatzberg, Wings of Wood, Wings of Metal: Culture and Technical Choice in American Airplane Materials, 1914–1945 (Princeton: Princeton University Press, 1998).