AI Is Making Us Stupid(er)
We already know how this works. Just look at ubiquitous GPS. It's infantilizing us. No one knows where they are anymore; we just follow directions. "In 200 feet, turn left at the stop sign." I've become so dependent on Siri to tell me how to get where I need to go that I can't even find my way around locales I used to navigate by memory, or at worst, by peering at a map. Granted, this is partly because Greater Boston's streets aren't organized in a grid pattern; the layout is more like spaghetti, so learning your way around takes some doing. But it's preposterous to now be helpless where once I was independent.
Here's a great example. In 2019 my wife and I went to Venice. Talk about a lack of a grid pattern! It makes Boston look like Phoenix. Over the course of a few days we were able to orient ourselves by memorizing certain landmarks and by consulting the map occasionally. We made Venice ours! Then, in 2023, we went back, and this time we relied on our friend's phone to get around. And I've got to tell you, I never became fully oriented. Some routes or landmarks triggered memories, but I was never confident. One day without the phone is probably all it would have taken to get back up to speed. But no. What's worse, the GPS would sometimes send us down an exceedingly narrow passage if that route was even twenty feet shorter than the main one, which would have been more visually interesting, with people and stores and architecture to look at. The whole point of Venice is the splendor of the physical environment, which you don't see when you're funneled down a narrow passage or staring at your phone.
What AI is doing now is expanding this whole GPS dynamic into the realm of thinking itself. Consider a commercial, for what I think was Apple's AI. A young office worker is supposed to brief everyone gathered in the conference room on an earlier meeting or conversation. It turns out she forgot to prepare, so she secretly gets her AI to generate a quick bullet-point list, which she shares, thus impressing everyone with . . . what? Her fake knowledge? Merely reading something aloud will not give you knowledge or understanding. She actually doesn't know a thing, and what's more, she'll be worthless if her superior wants to pursue any of the points. This is a selling point? Comprehension is gained by placing what you encounter in the context of prior knowledge and by making distinctions about the content you're engaging with. Is one idea or point distinct enough to stand on its own, or should it be combined with another? If combined, what is the core principle uniting them? Are some points too tangential to include? If so, is there another context where they would be relevant? Is a certain recommendation close but not quite useful? Why, and what does it need? And on and on.
Who knows what we are losing? What's the cost of farming thinking out like that? Would you just take what is presented to you as fact? Yikes. I guess AI is being presented as a "tool." I suppose it's a tool to the extent that you already have significant prior knowledge of a topic and it's helping you sort and systematize what you already know. But it won't help you gain knowledge any more than Wikipedia will. Comprehension requires that you do something with the material you encounter or accumulate. Here's an example. The other day, a friend asked Gemini whether Jesus intended to found a non-Jewish religion. It instantly produced a sequenced presentation of points that made a lot of sense. I say this because it's a subject I know, having studied theology in graduate school. Its treatment of the topic was far from complete, however, and it missed an obvious counterpoint. All in all, I would say it presented some interesting ideas for someone new to the topic, but there's no way you could really do anything with it. You could raise a point in conversation, but when someone who knows the topic says, "But what about X?", you would be left at the starting gate. Now, if you had come to those points after reading books and articles on the subject, you could contextualize them and also "know what you don't know."
At some point, could AI become the "authority," the be-all and end-all? You know: "AI says we should proceed this way, so we should." I know it's already said to be superior at medical diagnosis. Holy shit! My instinct would be to go with the opinion of a human being, fallible as they might be. But will AI really become infallible? Or unbiased? We've already seen how bias gets built into the code by developers. At least with a person, you can see who you are dealing with and make a judgment based on your sense of them. But as the saying goes, the people of the future won't miss what they never had. Even if that thing is the ability to think.