AI Is Making Us Stupid(er)

We already know how this works. Just look at ubiquitous GPS. It's infantilizing us. No one knows where they are anymore. We just follow directions. In 200 feet, turn left at the stop sign. Stay in the right lane, then make a slight right turn. I've become so dependent on Siri to tell me how to get where I need to go that I can't even find my way around locales I used to navigate by memory, or, at worst, by peering at a map. Granted, this is because in Greater Boston we don't have streets organized in a grid pattern; it's more like spaghetti. So learning your way around takes some doing. But it's preposterous to now be helpless where once I was independent.

Here's a great example. In 2019 my wife and I went to Venice. Talk about a lack of a grid pattern. It makes Boston look like Phoenix. Over the course of a few days we were able to orient ourselves by memorizing certain landmarks and by consulting the map occasionally. We made Venice ours! Then, in 2023, we went back, and this time we relied on our friend's phone to get around. And I've got to tell you, I never became fully oriented. Some routes or landmarks triggered memories, but I was never confident. Probably one day without the phone is all it would have taken to get back up to speed. But no. And what's worse, the GPS would sometimes take us down an exceedingly narrow and unadorned passage if it was even 20 feet shorter than the main route, which would have been more visually interesting, with people and stores and architecture to look at. The whole point of Venice is the splendor of the physical environment, which you don't see when you are sent down a narrow passage or are staring at your phone.

What AI is doing now is expanding this whole GPS dynamic into the realm of thinking itself. Consider a commercial for what I think was Apple's AI. A young office worker is supposed to brief everyone gathered in the conference room on an earlier meeting or conversation. It turns out she forgot to prepare, so she secretly gets her AI to come up with a quick bullet-point list, which she shares, thus impressing everyone with . . . what? Her fake knowledge? Because just reading something aloud will not give you knowledge or understanding. She actually doesn't know a thing, and what's more, she'll be worthless if her superior wants to pursue any of the points. This is a selling point? Comprehension is gained by placing what you are encountering in the context of prior knowledge and by making distinctions about the content you are engaging with. Is one idea or point different enough to separate it out, or should it be combined with another? If combined, what is the core principle uniting them? Are some points too tangential to include? If so, is there another context where they would be relevant? Is a certain recommendation close but not quite useful? Why not, and what would it need? And on and on.

Who knows what we are losing? What's the cost of farming out thinking like that? Would you just take what is presented to you as fact? Yikes. I guess AI is being presented as a "tool." I suppose it's a tool to the extent that you already have significant prior knowledge of a topic and it's helping you sort and systematize what you already know. But it won't help you gain true knowledge any more than Wikipedia will. Comprehension requires that you do something with the data or material you encounter or are accumulating. Here's an example. The other day, a friend asked Gemini whether Jesus intended to found a non-Jewish religion. It instantly came up with a sequenced presentation of points that made a lot of sense. I say this because this is something I know about, having studied theology in graduate school. Its treatment of the topic was not at all complete, however, and it missed an obvious counterpoint. All in all, I would say it presented some interesting ideas for someone new to the topic, but there's no way you could really do anything with it. You could raise a point in conversation, but when someone who knows the topic says, "But what about X?", you would be left at the starting gate. Now, if you had come to the points after reading books and articles on the subject, you could contextualize and also "know what you don't know."

At some point, could AI become the "authority," the be-all and end-all? You know, "AI says we should proceed this way, so we should." I know they already say it can be superior at medical diagnosis. Holy shit! My instinct would be to go with the opinion of a human being, fallible as they might be. I want the opinion of the doctor, who, having undergone rigorous non-AI training, can take the AI's opinion under advisement, as it were. But will AI really become infallible? Or unbiased? We've already seen how bias gets built into the code by developers. At least with a person, you can see who you are dealing with and make a judgment based on your sense of them. But like the saying goes, the people of the future won't miss what they never had. Even if that thing is the ability to think.

So the upshot, I think, is that AI will never be an effective educational tool, and in fact it seems more likely to be a counterproductive one. It's an application tool for people who have become knowledgeable and -- this is key -- discerning in their field the old-fashioned way, by learning all the skills of sorting and weighing and synthesis and conjecture and all the rest themselves. They have developed what we old-school educators call "habits of mind." And further, they have become competent at applying the skills of discernment and analysis they have developed to all sorts of social and political phenomena. One thing I notice among AI techno-optimists like Tyler Cowen is that they fail to make this crucial distinction. Just because it's an awesome time-saving tool for you, someone who has read 10,000 books and taught and written his own books for decades, doesn't mean it's going to function in remotely the same way for schoolchildren.

It was interesting, then, to see, as I was completing the first draft of this essay, that a recent study released by researchers at MIT appears to confirm my view, at least tentatively, since more research is needed. Analyzing the results of their study, which employed essay writing as the core activity, they reached this conclusion:

While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

I don't understand the specific meaning of certain terms they used to measure brain function, but the findings are suggestive. For example, the AI/LLM group underperformed in terms of "brain connectivity," meaning that "brain-only participants exhibited the strongest, most distributed networks" and, additionally, "cognitive activity scaled down in relation to external tool use." There are plenty more findings they mention, but I like this one: "LLM users also struggled to accurately quote their own work." To which I say, lol. I suppose that, ultimately, like most tech, AI should be treated like a controlled substance among young people. I know, good luck with that.


