A Meeting in the Middle
Prediction, Judgment, and the Cost of Thinking
We talk a lot about whether AI can (now? ever?) work like our brains do, but we don’t focus on whether our brains work, even a little bit, like AI. Do we find the idea distasteful to consider?
Here we sit on our mountaintop, in our infinite sense of self-importance, waiting for the poor, bedraggled aspirant to climb to our height and prove itself. We remain certain that what our brains do is completely different from and superior to anything we’ve enabled our LLM tools to do.
Are we right?
AI may be trying to climb up to meet us at our human height, but maybe we should be a little more honest about the places where we're already standing exactly where AI is. Passing judgment from the mountaintop feels too easy, and thinking about these issues should cost us a little something, in humility at least.
Let’s try to be more thoughtful about thinking.
When we talk about our ability to think, we often make the mistake of thinking about thinking as a single and unified thing—as though, every time we’re doing it, we’re doing the same thing, the same way. And that’s not true.
As Daniel Kahneman points out in his remarkable book Thinking, Fast and Slow, we have two systems of thinking: one fast and reflexive, one slow and reflective. To conserve mental energy (and to let us react quickly to the world around us), a great deal of what we do is fast and reflexive, more than you might realize. We rely on shortcuts, or heuristics, to do much of our thinking for us, even in complex situations. This is how biases and stereotypes creep into our decision-making; we may react based on old patterns rather than on the actual facts in front of us. Actual, rational consideration requires effort, and it's an effort we may have to force ourselves into, challenging our lazy brains to think rather than react.
We also have a very limited capacity for thinking actively about what's in front of us, even when we want to. Our working memory, which processes information for immediate use, can hold only about five to seven discrete items at a time. To do even slightly more complex work (like reading a sentence or solving a long equation), our brains "chunk" information into parcels that we can store in longer-term memory and call upon when needed.
That is how a Social Security Number goes from nine discrete digits to three chunks of 3-2-4 and then to a single thing. It's how we achieve "automaticity" in reading: memorizing letters, then phonemes, then words and phrases and even long poems (or math facts and times tables) so that we can call upon them as units of meaning when we apply them to new problems. In fact, as I wrote about here, part of what we prize in our experts is their ability to call to mind a vast repertoire of possible moves and decisions built from their history of experience, something they can do quickly and seemingly automatically, as instinctively as a baseball pitcher just "feels" that a runner on first is about to try to steal second.
You see where this is going. Much of what we do is not radically different from what an LLM does when it predicts the next thing to say. Critics complain that AI is "thoughtless" because it is merely (!) an amazingly fast predictive-text engine, but as I wrote here a while ago, we're not so very different much of the time, especially in social situations, where our responses are full of social chatter that flows effortlessly and instantly out of our mouths. We hear, we process, and we respond, but we often don't take the time to consider before doing so; we just know the next right thing to say. In fact, you can tell the difference between fast brain and slow brain in these situations: when someone asks you a really thoughtful question, you tend to pause before answering. That's your slow brain kicking into action.
Our brains work too quickly for us to notice what they're doing. In fact, as Kahneman details in his book, we often convince ourselves that we have applied reason to a situation retroactively, using after-the-fact thinking to justify decisions that actually involved no considered thought at all. It happens so fast that we're not aware of it. We make a judgment based on a racial or sexual stereotype (perhaps), and then immediately, at lightning speed, come up with some rational justification that had nothing to do with the judgment itself. That's why biases and stereotypes are so hard to fight.
We are predictive engines responding from our repertoire of experiences, both personal and public, and we are as capable of hallucination and bad processing as the AI tools we’ve built.
But that's not all we are. AI may resemble our fast, reflexive brains to some extent, but it can't do what our slow, reflective brains do. It can't consider. Oh, it can weigh the odds and tell you probabilities, for sure. It can tell you which choice makes more sense according to whatever criteria it's given. It can select, but it can't make a judgment call. It can't own the consequences of a choice, or be affected and changed by it. As I said here:
AI can’t explain why it knows what it knows. And more importantly, it doesn’t inhabit the knowledge. It doesn’t care if it’s right. It doesn’t feel the stakes. It can’t say: I’ve been here before, and I know how this tends to go, the way a teacher or doctor can.
The right decision—the right call—depends on more than cold analysis of the available data. It also requires an understanding of the stakes involved and the cost of making or missing the call.
Our choices and decisions and actions take a toll on us. They have a cost. They build us into who we are. They sometimes leave scars. One way or another, they change us. And that’s a good thing! That’s what learning is. That’s what growth is. That’s why we hope we’re somewhat wiser at 40 than we were at 20, even if, sometimes, we’re less genuinely reflective and thoughtful than we want to be. We are not simply an accumulation of data; we are meat that has been carved into shape through a million little cuts, year after year. And somehow, magically, we are both lesser and greater because of those cuts and scars.
I want that slow brain—that lifelong trained brain—to be in the mix, all the time. And I want it to be us. I don’t mind if LLMs do some of the fast, reflexive work for us, whether in school or on the job, as long as we hold onto reflection and consideration and moral hazard for ourselves. Because even if we don’t always do that part well, it’s precisely the part that only we can do.