A.I.n't all that...
“AI” is a hot topic right now. I have little interest in it, with one exception: I’m interested in it as a point of comparison for technology in general. One could claim that AI is “revolutionary” because it is the first technology that can operate independently of the human brain; it can perform functions of the human brain that no previous technology could. I’m happy to acknowledge that this is a significant departure from much of our prior technology, but, like all technological departures, it is more embedded in a wider technological context—both in its own time and in previous ones—than one might be led to believe. It’s always worth recalling that modern capitalism loves to trumpet “innovation,” claiming that whatever it’s selling is divorced from, not intimately connected to, what has come before and what else is already out there. Innovation exists, but nothing exists without context—or can be understood without it.
The only reason “technology,” in its broadest sense, exists at all is to do things we can’t do with what we’re born with. I could make a list of such things that would take who knows how long to write. We’re born with soft hands; they can’t break coconuts by striking them. Our hands will break first. A well-shaped rock, on the other hand, will do nicely, once we acquire the skill to use it effectively. That skill takes any primate who picks it up, from a capuchin monkey to a human child, quite a bit of practice over time to master. We’re not born with the means to control when sperm can enter our uteruses and when it can’t. But a thin piece of rubber—which we can make from the sap of certain trees—can perform that function for us. (As will a thin piece of animal skin or gut, if no latex is available.) We can run, once we’re old enough, but not very fast. Certainly not fast enough to outrun a big cat or a bear. But a horse can. A horse, conditioned to accept a human rider and respond to commands, is technology.
From there, technology simply expands our capabilities and increases in complexity. Fast-forward from the longbow and the outrigger canoe to the heavy bomber of the Second World War. I’ve mentioned before how sophisticated these aircraft were—how much they could do that we still think of as “modern,” even though most of us would assume the plane itself to be “antiquated”—but also that the role of the pilot as “master computer” was far more demanding than it is now. The planes could do most of what modern large planes can, but those functions were not automated to the point of needing no human input.

For a tour de force of the human as master computer in the most demanding of situations, I recommend Greyhound, the Tom Hanks film, if you have access to Apple TV. The entire film, tightly edited, takes place over about three days in the mid-Atlantic “Black Pit,” the stretch beyond Allied air cover, on a convoy run from the U.S. to the UK. Hanks is the captain of a destroyer, in charge of the convoy escort. What you’re watching is a man whose brain must hold total command of how his entire ship works; how the convoy is set up and what it can and can’t do; how a German submarine works, what it can and can’t do, and what it is likely to do; and how to direct every action of every ship in the convoy to accomplish three survival tasks simultaneously: protect the convoy, defend the ship, and attack the enemy. There is no room for hesitation and no room for contemplation. He has seconds to process all the information coming at him: from his own crew, from other ships over the radio, from what the enemy is doing as best he can tell. From that he must extract his options and the most likely outcome of each. Then he has to issue a series of orders, as fast as his mouth can form the words, and immediately return to processing information for the next set. Everyone on the bridge and on the radio is looking to him, telling him things he needs to know, waiting for his commands. If he’s right, the convoy stays safe for another minute. If he’s wrong, people die and ships loaded with vital war materiel go down. And no matter how good his judgment, he can’t control the enemy, the ice coating his radar antenna, or the sea. To those, he can only react.
He doesn’t leave the bridge or sleep for the entire period, and the camera is almost always on him, close-up.
Our computers do all sorts of things for us. Some of them we can do ourselves if we train for them, like complex computations; now we are less likely to, because we all have calculators at our fingertips—or, for that matter, at our voice command. Some of them, like weather modeling, we cannot do ourselves. A modern warship’s command-and-control system can analyze and respond to information faster than Tom Hanks’s destroyer captain, and it can issue multiple commands simultaneously, faster than he ever could, no matter how good he was. It’s vulnerable to damage and destruction, but so is a human.
They had computers in the War, and used them to great effect: for fire control on battleships and for codebreaking, among other things. Our computers can process more data, faster, while taking up less space and requiring less energy and less human input. AI takes that another step.
The master navigator on a Micronesian paired boat (we’d call it a catamaran, a word we borrowed from India while changing its meaning) commanded a long-acquired comprehension of a vast body of data: the movements of the stars and the sun, the winds and the swells, the flights of birds, the clouds in the sky, and the locations of scores of islands he could not see, some of them hundreds or even thousands of miles away. He could accurately estimate the speed of his vessel and its leeway (sideways drift) by eye, and mentally “calculate” any correction to heading needed to hold course to the intended destination. Naturally enough, this was a revered person.
He could attempt to explain what he did to a European ship’s master, a man equipped with sextant, printed charts, almanac, compass, hourglass, traverse board, and speed log. But that was the one thing beyond his ability: his very conception of spatial relationships, of such basic notions as distance and speed, was so different from the Europeans’ that it took the latter great effort, over a long period, to begin to understand how these master navigators did all this with nothing but their minds processing their sensory inputs.
It’s up to us to decide what skills we want to acquire and perfect. The relegation of human effort and input to irrelevance has long been predicted and never realized, and I don’t think that will change. We went from bolting body panels onto cars to building, programming, and repairing the robots that do it now. Our fear of being “made redundant” in some profound way comes from the same place most of our deepest fears come from: our inability to predict the future (see “Radical Acceptance of Uncertainty”).
Our inclination to imagine dystopia comes from the same instinct that causes those unsettling intrusive thoughts we have all the time, such as when we’re waiting at a crosswalk and we vividly imagine a car losing control and heading straight for us. That instinct is the one that used to say “there might be a lion in that bush over there.” It’s there for a reason. We pay our therapists and our drug companies lots of money to help us tame it.
They say “knowledge is power,” and they also say “ignorance is bliss.” But ignorance is only bliss until that lion bursts out of that bush and you instantly know you’re about to have your throat crushed and then be torn apart and eaten. Knowledge is power, but it can also be a burden. In so many cases, though—this one included—it can be a comfort. Keeping in mind the most foundational definition of technology, the means by which we extend the capabilities we were born with, reminds us that the gee-whiz stuff we’re doing these days, AI included, is part of something that’s been going on for hundreds of thousands of years.