I’ve never watched it, but I know there’s a pretty popular show out there called “Mythbusters.” I gather that the point of it is to take something that lots of people have assumed to be true, and present evidence for why it is, in fact, not. While I can’t comment on the quality of that program, I can say with confidence that the task of correcting entrenched false assumptions is important; we fall for them rather easily. (They’re not “myths”; myths are metaphorical truths, not falsehoods.)
Our vulnerability to false assumptions that turn into repeated “wisdom” doesn’t mean we’re stupid. The evolutionary success of our too-big brains depended largely on two closely related abilities: discerning cause and effect, and recognizing patterns. We are so good at both that we tend to “find” them when they are not there. It’s like seeing faces in clouds, or animals in star patterns. That analogy, though, breaks down—all analogies do, or they’d be literal equivalents rather than analogies—because the shapes we see in patterns of nature are what we actually see, while the assumptions I’m talking about are false—at least in large part. They are not “there,” and something else is.
You may have heard, more than once, that historians still “debate” something or other about the past. Those debates—at least, every important one that I know about—always center on cause and effect. It’s not that we don’t know what happened. It’s that we don’t agree on why, and perhaps how, it happened the way that it did. Perhaps we’re all considering the same evidence (which is never as complete as we’d like), or perhaps some of us are assigning more weight to some pieces of evidence than to others, for reasons we can defend. Either way, we’re interpreting the evidence differently. I used to teach my Western Civ students that no one disagreed about what the causes of the fall of the Roman Empire were; they just disagreed about how to rank them. Entire careers have been spent on that.
We deal with this issue all the time in the history of technology. First of all, our “Western” culture has a strong tendency toward teleology—the apprehension of purpose and design in the course of things, whether or not it is actually present; it’s rooted in religion. Hobbled by depression and confusion, I dropped my Intro to Botany class in college before I failed it, but I am eternally grateful to the hard-ass professor, Dr. Clark, for the way he graded his rather brutal exams. He would mark every instance of teleological language in our answers, as such—forcing us to re-learn English, to an extent, in a way that did not do violence to the way evolution by natural selection actually works—not by design, but by a combination of environmental pressures and random mutations. Thanks largely to him, my mind has ever since been tuned to think that way; it’s most useful to me now.
In fact, I find evolution by natural selection a useful analogy—though just an analogy—to the history of technology before the nineteenth century, when most technological choices were, and remain, anonymous. We don’t know the decision-maker(s); we have to focus on the pressures acting on the technology, the operating environment, the available alternatives. We can never declare, with certainty, “this is why they decided to do this.” We can only make a case that all of the evidence best supports one reason over another.
What does that have to do with teleology? Our tendency toward teleological thinking leads us easily into determinism—the assigning of spurious cause and effect. “This technological choice caused these changes in society.” Historians of technology have spent the past few decades pushing back hard against technological determinism, and successfully: it has been thoroughly discredited within the specialty. That, in turn, has allowed the specialty to gain credibility within the wider discipline, which insists on appreciating the complex multitude of interactions and circumstances shaping historical reality.
I’ll make this specific and tie it directly to my own work before I let you get on with your Sunday morning. English archaeologist Julian Whitewright used hard performance data to discredit the long-held assumption that the triangular, fore-and-aft “lateen” sail, so long used in the Mediterranean, performed better to windward (toward the direction the wind is coming from) than the spritsail. The spritsail doesn’t “look” as though it would be the better performer, but it is. That forces us to look for other reasons for the widespread adoption of the lateen—economy, perhaps?
Yes, says the evidence—and yes, in my world of the British North Atlantic from 1600 to 1800. Historians have long passed down, from book to book, the assumption that changes in sails and rigs in this world were either prompted by, or resulted in, “improvements” in sailing performance—because that “makes sense.” (“Improvement” is a loaded term, and we won’t get into that here.) When one actually knows something about how these vessels work, though—and I’m fortunate enough to know people who do—that just doesn’t hold up. What does hold up is the pressure to contain cost, especially labor cost. That suggests we should try to show, using hard evidence, that certain choices saved money over others. That’s what I’ve asked for grant funding to do next. If I don’t get it—and I probably won’t—I hope someone else does.
All of this is yet another example of the importance of critical thinking. We need to insist on evidence backing up assumptions—and we need to learn to discern real evidence from shapes in clouds. Inherited human wisdom is invaluable—but some of it, sometimes, is wrong. When we fine-tune our brains’ abilities to discern cause and effect and to recognize patterns, we’re far less likely to make unfortunate decisions, or to live our lives based on falsehoods.