30-Second Summary
- A June 2025 MIT-led study found that participants who wrote with ChatGPT showed weaker brain connectivity, poorer recall, and less sense of ownership than those who wrote unaided or with Google.
- In business, this lack of ownership can weaken buy-in, dilute strategy, and lead to AI echo chambers.
- The solution isn’t avoiding AI but using it wisely: a hybrid approach that leverages AI for speed while keeping humans engaged for depth, creativity, and execution.
Have you seen the images of astronauts returning from the Moon, struggling to walk? Months without gravity left their muscles weakened and atrophied. It’s a simple truth: if you don’t use a muscle, it shrinks. In fitness, there’s even a saying for it: use it or lose it.
But this simple concept is not limited to physical strength; it seems to apply to the mind as well. A 2020 study published in Scientific Reports concludes “that GPS use may cause a decline in spatial memory with habitual and consistent use.” In simple terms, regularly using apps to navigate can weaken the ability to remember and navigate places.
That’s because the brain is an ultra-efficient energy manager. It reduces the effort spent on tasks that can be outsourced.
And as history shows, new technologies often reshape not just our behavior, but even our capacities. Think of cars, elevators, or Google Maps.
So what could be more relevant than asking whether and how one of the most disruptive technologies of our time is reshaping our minds? After all, LLMs don’t just tell us how to reach a destination; they think for us.
According to Similarweb, ChatGPT.com had almost 6 billion visits in July 2025 (still small compared to the 85 billion on google.com/maps/). It’s fair to say that LLMs, as a new category of technology, have found their way into the daily lives of millions of people.
Like Google Maps, LLMs are effortless and efficient. But if you rely on them too much, do you risk eroding certain skills? Maybe in this case the question is: if you use it, do you lose it, with the last “it” being your cognitive capabilities?
That’s exactly what researchers investigated in a study published in June 2025 (affiliated with MIT Media Lab, MIT, Wellesley College, and Massachusetts College of Art and Design).
In this article, I’ll summarize the main findings of “The MIT study” (as it’s often referred to) and discuss potential implications, with a particular focus on the effects on business and collaboration.
Study Design
The study included 54 graduate students who were split into three equal groups.
- The first group wrote with ChatGPT only.
- The second group used Google search, but no AI.
- The third group wrote without any tools.
The experiment stretched over four months. Each student wrote four essays, one per session, with 30 minutes to complete each essay. The prompts were based on SAT writing questions (which are not disclosed in the study, but an official example you can find online looks like this: “Write an essay in which you explain how Jimmy Carter builds an argument to persuade his audience that the Arctic National Wildlife Refuge should not be developed for industry.”)
While the students wrote the essays, their neural responses were recorded with an electroencephalogram (EEG). The researchers looked at different frequency bands to measure “brain activity” and connectivity. After finishing an essay, students had to recall their own content and quote from their text.
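To make the EEG part more concrete, here is a minimal, hypothetical Python sketch of what looking at “frequency bands” can mean computationally. It is not the study’s actual pipeline (the authors analyzed connectivity between brain regions, not just the power of a single channel); the sampling rate, synthetic signal, and band boundaries below are illustrative assumptions.

```python
# Minimal sketch: estimating EEG power in the classic frequency bands
# (delta, theta, alpha, beta) for one synthetic channel. Illustration only,
# not the study's connectivity analysis.
import numpy as np
from scipy.signal import welch

fs = 256                                    # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                # 10 seconds of fake "EEG"
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)   # power spectral density

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = psd[mask].sum() * (freqs[1] - freqs[0])  # integrate PSD over band
    print(f"{name:>5}: {band_power:.3f}")
```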
Each essay was graded twice.
One set of grades came from human teachers who scored
- content,
- uniqueness,
- and writing style.
Another set came from an AI judge that rated
- grammar,
- structure,
- and form.
The essays were processed with natural language software. The analysis looked at lexical diversity, repeated phrases, and how similar the texts were across people in the same group.
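As a rough illustration of what such an analysis involves, here is a small Python sketch that computes a simple lexical-diversity score (type-token ratio) per essay and pairwise similarity between essays in a group. The texts, metric choices, and function names are made up for illustration; the study’s actual NLP tooling and metrics differ.

```python
# Toy version of the text analysis described above: lexical diversity per
# essay and pairwise cosine similarity between essays in one group.
from collections import Counter
from itertools import combinations
import math

essays = {                      # invented stand-ins for one group's essays
    "student_a": "art gives people a voice and art connects people",
    "student_b": "art connects people and gives communities a voice",
    "student_c": "creativity thrives when individuals explore unfamiliar ideas",
}

def tokens(text):
    return text.lower().split()

def type_token_ratio(text):
    toks = tokens(text)
    return len(set(toks)) / len(toks)            # unique words / total words

def cosine_similarity(a, b):
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b)

for name, text in essays.items():
    print(name, "lexical diversity:", round(type_token_ratio(text), 2))

for (n1, t1), (n2, t2) in combinations(essays.items(), 2):
    print(n1, "vs", n2, "similarity:", round(cosine_similarity(t1, t2), 2))
```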
Students also gave feedback through interviews. They spoke about how much they felt the essay was their own work, how they experienced the process, and how they saw the role of the tool they used.
The final session had a crossover. The group that had relied on ChatGPT now had to write without it. The group that had written solo switched to ChatGPT. This showed how previous exposure carried over when conditions changed.
Of course, the study has limitations: the sample was small (54 participants), and all of them were students working on a single, specific task.
Main Findings
In this section, I summarize the main findings of the MIT study. In short, students who wrote essays with ChatGPT showed weaker brain connectivity, poorer memory recall, and a diminished sense of ownership.

Brain Activity
The brain group (using neither ChatGPT nor Google) showed the strongest and most widespread brain connectivity across alpha, beta, theta, and delta frequency bands. In simple words, their brains were buzzing with connections.
The LLM group displayed significantly lower neural connectivity compared to those using Google (search engine) or writing unaided (brain group).
When students wrote on their own, their brains were working harder to generate, plan, and recall ideas. That showed up as stronger connections between brain regions, especially in the frequency bands typically linked to memory, attention, and creative thought.
When people used ChatGPT, their brains didn’t need to do as much of that heavy lifting. The tool supplied sentences, so the brain shifted from making ideas to selecting and editing them. That’s why the EEG showed weaker connectivity.
Subjects who had written with ChatGPT for months didn’t fully re-engage those higher-effort brain patterns when they had to write alone again. In contrast, people who had practiced solo writing could adapt quickly when they switched to ChatGPT, but their brain activity showed a different focus: more on monitoring and memory than on generating content.
Ownership
In the self-report, students who used ChatGPT felt the least emotional attachment to their essays. Many said the text did not feel like theirs. Brain-only writers reported the strongest sense of ownership. Search Engine writers were in between.
When asked to recall and quote from their own essays minutes later, the ChatGPT group did much worse. They often could not reproduce their own work. The Brain-only group recalled their writing well, with the Search Engine group again in the middle.
The numbers are fascinating:
- The brain group could correctly recall or quote from their essays in over 80% of cases.
- The Search Engine group did somewhat worse, around 60 to 65% accuracy.
- The LLM group recalled correctly in fewer than 30% of cases.
The brain group could easily recite lines, showing that they had processed the text in memory. The ChatGPT group often failed to recall even a single sentence, which the authors saw as evidence that the words never became “theirs” in a cognitive sense.
The authors framed this as part of what they call “cognitive debt”. By outsourcing writing to AI, people skip the mental steps that usually make text personally meaningful. Without those steps, attachment and memory weaken.
These results are consistent with a similar phenomenon, coined the “Google Effect” by Betsy Sparrow et al. in 2011: “The Internet has become a primary form of external or transactive memory, where information is stored collectively outside ourselves.” As humans use Google (or ChatGPT) as an external memory, the incentive to store information “inside” decreases.
Originality
Linguistic analysis showed that ChatGPT essays were more homogeneous. While the essays looked cleaner, ChatGPT reduced originality and variance.
The LLM group’s results showed:
- Repetitive named entities: ChatGPT essays reused the same famous names, making them feel less original, while human essays showed more diverse examples.
- Tighter n-gram patterns: An n-gram is just a small chunk of words that go together, for example, “on the table” or “because of this.” When people wrote by themselves, they combined these word chunks in lots of different ways. When people used ChatGPT, they (or rather it) used the same chunks over and over again, generating very “streamlined” content (see the short sketch after this list).
- Lower lexical diversity: That means the AI tended to reuse the same words and phrases, while human writers used a wider range of vocabulary.
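Here is a toy Python sketch of the n-gram idea from the list above: it counts repeated three-word chunks in two made-up snippets, one deliberately repetitive (“LLM-style”) and one more varied. The snippets and labels are invented for illustration only; the study measured this across full essays.

```python
# Counting repeated 3-word chunks (trigrams): heavy reuse of the same
# chunks shows up as counts above 1, varied writing spreads counts out.
from collections import Counter

def ngrams(text, n=3):
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

llm_style = ("because of this it is important to note that "
             "because of this we can see that it is important to note that")
human_style = ("this matters since it changes how we read the prompt "
               "and pushes each writer toward a different kind of answer")

for label, text in [("LLM-style", llm_style), ("human-style", human_style)]:
    counts = Counter(ngrams(text))
    repeated = {gram: c for gram, c in counts.items() if c > 1}
    print(label, "repeated 3-grams:", repeated or "none")
```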
As a consequence, the LLM essays showed very high similarity within the group. Their texts tended to look alike, whereas the “brain writers” produced more distinct essays. Human raters (teachers) scored brain-only essays higher on uniqueness and creativity. ChatGPT essays were described as “generic” or “soulless.”
Here is a quote from the teachers: “[…] content and essays lacked personal nuances. While the essays sounded academic and often developed a topic more in-depth than others, we valued individuality and creativity over objective ‘perfection’.”
Interestingly enough, AI scoring didn’t account for this. The AI judge rated LLM essays high for grammar and structure but did not penalize their lack of originality.

Implications
I argue that four implications for work and collaboration stand out:
- We can make the most of AI without weakening our own skills.
- The lack of human ownership puts motivation and collaboration at risk.
- AI may push content toward sameness.
- ChatGPT fills the backlog with ever more exciting ideas, making it easy to overlook that execution still matters most.
Using ChatGPT the Right Way
I created this article with the help of ChatGPT Plus, Grok, and Claude. And when processing and generating long-form text, such as articles, I think it would be foolish, probably even ignorant, not to use these tools. They can help produce better results in a shorter time.
It would be as foolish as navigating the narrow and often unnamed streets of Venice without GPS, trying to find a hidden restaurant you’ve never been to. Google Maps will help you optimize your time and stress levels.
The question is: Can you use Google Maps in a way that still allows you to process your surroundings and memorize the route for the way back? Maybe you take a look at Google Maps before you start walking to the restaurant, study the route, and then try to find it with as little help as possible. Why? Because you’ll enrich your own experience on the way by processing new information. You’ll be able to explain to someone else how to get there.
I think the same is true for ChatGPT and Co. They only enrich your work and knowledge if you use the right shortcuts and don’t miss out on the important things. Shortcuts are helpful if there’s nothing to learn or experience along the way. But because a shortcut often removes the obstacles, it doesn’t always help us grow.
The art lies in automating repetitive tasks with ChatGPT while processing the meaningful ones yourself, adding reflection, depth, and a human touch.

The Anticipation Effect
Humans are more willing to engage with something when they know it’s the result of honest work, or at least that it was created by other humans.
- Why do we value handmade pottery more than a factory-made piece that might be smoother, cheaper, and more uniform?
- Why would someone be disappointed to learn that their favorite band’s new album is 100% AI-generated?
Psychologists call this the effort heuristic. We assume something is more valuable or higher quality if we think a lot of (human) work went into making it. A similar cognitive bias is called the IKEA effect; we tend to value things more when we’ve had a hand in making them ourselves.
This is relevant in the context of the study’s findings about ownership, because it’s often the reason people are less incentivized and motivated to engage with, reflect on, and work with AI-generated ideas and content.
For example, a startup CMO creates a marketing strategy with ChatGPT and tries to implement it.
- Leadership might question whether the strategy is the result of processing all relevant data points from within the company or a generic approach that does not account for the company’s individuality. This will significantly affect leadership buy-in for the strategy.
- The team might question whether the CMO even knows what he or she is doing, or whether he or she is like a 4-year-old flying an airplane on autopilot. This could make people hesitant to get on board and help execute.
The lack of ownership not only creates this kind of “recipient fatigue”; it can also provoke rather superficial discussions around a topic and prevent depth.
That’s because
- The “author” or owner is not incentivized to defend ideas and content he or she isn’t attached to,
- and the recipient isn’t incentivized to mentally invest and engage in ideas she or he knows the owner isn’t really attached to. (In the worst case, the owner says something along the lines of “I’m also not sure about this, I quickly created it with ChatGPT; let’s discuss!”)
Not processing the data and only using shortcuts feels arbitrary. We expect other people to think things through to a certain point before they ask us about our thoughts and opinions on them. We don’t want to invest time and effort when we know that the idea hasn’t been reflected on by a human who knows the business context, the organization, strategic goals, the team, et cetera.
Centripetal Results
As ChatGPT, Claude, and Co. are conquering companies of all kinds, and given the findings of the study discussed above, it’s fair to assume that we will see a centripetal effect: the center of the bell curve of content quality will keep growing, creating AI echo chambers.
For growth or marketing teams, this is not necessarily bad news. First of all, because AI-enhanced content is better than a lot of human content (there, I said it). And secondly, it allows teams to gain visibility with amazing content on the right side of the bell curve. It might even become easier to stand out with good content when the overall variance decreases.
As the main strength of LLMs is still rooted in text generation,
- enriching content with video,
- infographics,
- audio,
- interviews (human voices),
- and interactive mini-products
will become even more important.
Idea Bottleneck
John Doerr famously said, “Ideas are easy. Execution is everything.”
The problem: ChatGPT can create an unlimited number of ideas within seconds. People love brainstorming, ideating, and starting projects. It feels exciting and rewarding. I wouldn’t be surprised if future studies show that the quick results from ChatGPT trigger similar dopamine hits people get from scrolling social media.
Execution often feels slow and messy. It requires persistence and resilience. The thing is, execution is actually the part that will move the needle.
I’ve seen brilliant ideas executed poorly, and mediocre ideas executed brilliantly. Guess which had more impact on business results.
It’s not the LLMs’ fault, of course, and the phenomenon already exists. But ChatGPT and Co. are idea generators on steroids. If you walk into this trap, you risk first idea paralysis and then decision paralysis. When ideas are abundantly created with ChatGPT and people feel less ownership of them, it can be a recipe for a huge backlog of possible things to do that never get done.
You can’t prompt execution (you can use LLMs to help with execution, though).
Bottom Line
LLMs like ChatGPT are powerful shortcuts, but over-reliance can weaken memory, originality, and ownership of ideas.
The key isn’t avoiding them, but using them wisely: as accelerators, not replacements for thinking. In business, this means pairing AI-generated speed and polish with human depth, reflection, and execution.
The hybrid approach, using AI for leverage while still engaging your own cognitive muscles, is what ensures lasting value, sharper thinking, and results that people trust.
Sources
- Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science. https://doi.org/10.1126/science.1207745
- Dahmani, L., Bohbot, V.D. Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Sci Rep 10, 6310 (2020). https://doi.org/10.1038/s41598-020-62877-0
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv. https://arxiv.org/abs/2506.08872