Resisting Cognitive Tradeoffs in the Age of AI

If you, like me, have spent a fair amount of time working with large language models, particularly one (or all) of the three frontier models (Claude 3 Opus, Gemini 1.5, and GPT-4), you've no doubt experienced a tectonic shift in the way you work.

It feels as if I've been toiling away in a room with the lights off for the last decade and someone just walked in and flipped the switch.

Everything has been illuminated.

The number of use cases I've found for LLMs at this point is staggering.

On the personal side, I’ve used them as a second pair of eyes when rebalancing my investment portfolio (with information redacted of course), to plan a road trip, to create an MBA program and personal tutor based on a well-known academic institution’s coursework, to create recipes, to diagnose a running injury, to submit a claim to my insurance company, to create a song for my wife, to outline a will, to research a tax-deferred 457 plan for a family member, and to create a guide for fostering a French bulldog. And that’s just scratching the surface.

On the business side, I’ve used LLMs to stress test strategies, to process data, to analyze the UX on a website, to summarize transcripts, to teach me how to code using Python (within 90 minutes, I created a program that could calculate the area of a triangle and a very rudimentary chatbot), to create a product roadmap, to do competitor research, to edit content, to proofread, to analyze images—this list, too, goes on and on.
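I no longer have that first Python exercise, but a beginner's triangle-area program of the kind described might look something like this minimal sketch (the function name and base-times-height formulation are my own reconstruction, not the original code):

```python
def triangle_area(base, height):
    """Return the area of a triangle given its base and height."""
    return 0.5 * base * height

# A triangle with a base of 10 and a height of 4 has an area of 20.
print(triangle_area(10, 4))  # → 20.0
```

Trivial, of course, but that's the point: going from zero to a working program in an evening, with an LLM as tutor, is exactly the kind of acceleration this essay is about.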

What I've noticed in my experimentation, though, is that while the benefits are indeed incredible, there's a troubling side effect of all this that I haven't seen many people talking about yet.

In essence, what LLMs are really good at is processing information. They process vast amounts of information at speeds that are difficult to comprehend and they do it endlessly, without getting tired, any time you ask them to.

In a very real sense, LLMs are increasingly augmenting (and in some cases replacing) the original information processors we all have access to: our brains.

Prior to November of 2022, when OpenAI launched ChatGPT, if we needed a plan or a strategy or an answer, we had to come up with that plan or strategy or answer ourselves. Oftentimes, it was a painfully slow and inefficient process.

However, during that process, when we were researching on the internet or reading a book or asking a colleague for their perspective, we were processing that information in a multimodal way that allowed it to take root in our knowledge base.

We were learning, both experientially and didactically, and later, when we had the answers we were looking for, we could recall a good deal of what we absorbed along the way.

In a sense, we were learning by being intricately involved in the process of learning, not learning by proxy, which is sort of what’s happening when an LLM is simply presenting us with the answer.

 

Knowledge retention

Perhaps the best way to illustrate this is through an example.

When I was in my early thirties and finishing up my undergraduate degree in entrepreneurial business development, I had to create a business plan for one of my final projects.

I found a business plan outline on the internet, and then bought a few books on creating them, and then I struggled for weeks through the process of writing it.

I thought about it in the shower and on the bus and while I was running on Chicago’s lakefront path. I wrote and rewrote and fiddled with it until it was finally done.

The result, however, wasn’t very good.

In fact, knowing what I know now, it was decidedly bad. But as a student, as a business person trying to learn, I was so much better for having struggled through the process.

I understood financial modeling, competitor research, and the complexities of pricing in fundamental ways that would allow me to connect the dots much later in my work and career.

I had lots more to learn, but I had a start—and I could pull from what I’d learned at any time in the future.

Fast forward nearly fifteen years.

A few months ago, I had a business idea that felt incredibly promising, an idea that felt, and still feels, like it could be a $100,000,000 business. It was personal, and scalable, and monetizable, and solving an enduring and meaningful problem.

And so I thought back to my experience in undergrad, to when I was crafting that business plan, to when I was meticulously working through each component of a business, and I thought that repeating the exercise would be a good place to start in order to see if the idea would hold up under intense scrutiny.

The primary difference between then and now, of course, beyond what I've learned in the intervening years, is the ability to recruit an LLM to work with me.

Over the course of a weekend, and then a few more nights the next week, I sat in front of my computer, eyes squinting, forehead wrinkled, for maybe twenty or twenty-five hours, going back and forth with ChatGPT like I would with a human cofounder, letting one idea build to the next, refining, arguing, backtracking, and ultimately, perfecting the idea.

During that process, which was both exhausting and exhilarating, it became so much more than I could have ever imagined on my own, so much better, so much more interesting and useful and complex, and so much more real.

When I was done, when I finally stepped away from the computer, I had a business plan that was nearly a hundred pages long.

Not only that, it was so well developed and so compelling that I was able to take parts of the product roadmap I’d created, hire a product designer, and flesh out low-fidelity wireframes just so I could see it partially come to life.

But where the experience became something different was a few weeks later, after I had given the idea some time to marinate. I went back to the business plan, printed it out, and reread it.

It was just as comprehensive and brilliant as I remembered, but it was as if I were reading parts of it for the first time.

I’d come up with the ideas and given the LLM the direction it needed, but the LLM had ultimately processed the information for me and then provided me with the output.

The result was that I didn’t retain the information in the same way that I would have had I done the research myself, had I struggled to find those answers on my own, had I sat in the library as the sun set over Chicago, the El train whirring in the distance, reading passages from books and taking notes and thinking deeply about my work.

To remedy this, I printed out the entire one hundred pages and then spent a few hours with a highlighter and a notebook, committing everything I'd created to memory.

 

A solvable problem

I want to be clear about this: the information-processing problem is a solvable one.

I’m not sounding an alarm, but I am saying that we need to appreciate what’s happening here so we can adopt a new way of working and correct for it.

Perhaps more time needs to be spent with the LLM’s output to ensure that it becomes a real input for us. Perhaps the answer here isn’t to do things as fast as possible, but to slow them down a bit so we can truly absorb the information we need to understand.

This, I think, underscores the messiness of innovation.

There are always tradeoffs to be had, and we need to be cognizant of what we’re willing to give up.

It’s true that we can perform certain functions better and faster and more comprehensively than ever before, but we need to perform those functions with an acute sense of self-awareness.

We need to stay sharp and diligent in our assessment of what’s occurring because becoming distracted now is akin to looking down at your phone while you’re traveling in a car at a hundred miles an hour.

In the ten seconds it took you to read that text, you traveled more than a quarter of a mile.

Stay alert. Pay attention. Do the work. And above all else, understand and internalize the work you've done.

Because while everything else has changed, the fact that you need your brain has not.

And the faster these innovations move, the more that technology monopolizes knowledge work, the more I’m convinced that hanging on to what got us here, by which I mean the ability to perform intense and sustained cognition, is the most important task of this era.
