89
submitted 1 week ago* (last edited 1 week ago) by cypherpunks@lemmy.ml to c/technology@lemmy.world

Note: this lemmy post was originally titled "MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline" and linked to this article, which I cross-posted from this post in !fuck_ai@lemmy.world.

Someone pointed out that the "Science, Public Health Policy and the Law" website which published this click-bait summary of the MIT study is not a reputable publication deserving of traffic, so, 16 hours after posting it, I am editing this post (as well as the two other cross-posts I made of it) to link to MIT's page about the study instead.

The actual paper is here and was previously posted on !fuck_ai@lemmy.world and other lemmy communities here.

Note that the study with its original title got far fewer upvotes than the click-bait summary did 🤡

top 50 comments
[-] trashgarbage78@lemmy.dbzer0.com 0 points 1 week ago* (last edited 1 week ago)

what should we do then? just abandon LLM use entirely, or use it in moderation? i find it useful for asking trivial questions and sort of as a replacement for wikipedia. also, what should we do to the people who are developing this 'rat poison' and feeding it to young people's brains?

edit: i also personally wouldn't use AI at all if I didn't have to compete with all these prompt engineers and their brainless speedy deployments

[-] UnderpantsWeevil@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

what should we do then?

i also personally wouldn’t use AI at all if I didn’t have to compete with all these prompt engineers and their brainless speedy deployments

Gotta argue that your more methodical and rigorous deployment strategy is more cost-efficient than guys cranking out bug-ridden releases.

If your boss refuses to see it, you either go with the flow or look for a new job (or unionize).

[-] paequ2@lemmy.today 1 points 1 week ago

I'm not really worried about competing with the vibe coders. At least on my team, those guys tend to ship more bugs, which causes the fire alarm to go off later.

I'd rather build a reputation of being a little slower, but more stable and higher quality. I want people to think, "Ah, nice. Paequ2 just merged his code. We're saved." instead of, "Shit. Paequ2 just merged. Please nothing break..."

Also, those guys don't really seem to be closing tickets faster than me. Typing words is just one small part of being a programmer.

[-] GlenRambo@jlai.lu 1 points 1 week ago

The abstract seems to suggest that in the long run you'll outperform those prompt engineers.

[-] trashgarbage78@lemmy.dbzer0.com 0 points 1 week ago

in the long run won't it just become superior to what it is now and outperform us? the future doesn't look bright for comp sci tbh; the only good paths i see are if you're studying AI/ML or Security

[-] Shanmugha@lemmy.world 1 points 1 week ago
[-] trashgarbage78@lemmy.dbzer0.com 0 points 1 week ago

so avoid LLMs entirely when programming, and also studying AI/ML isn't a good idea?

[-] Shanmugha@lemmy.world 0 points 1 week ago

I do not see how it can be a good or bad idea. Do whatever you want to do, however works best for you

[-] TubularTittyFrog@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

you should stop using it and use wikipedia.

being able to pull relevant information out of a larger body of it is an incredibly valuable life skill. you should not be replacing that skill with an AI chatbot

[-] Wojwo@lemmy.ml 28 points 1 week ago

Does this also explain what happens with middle and upper management? As people have moved up the ranks during the course of their careers, I swear they get dumber.

[-] vacuumflower@lemmy.sdf.org 1 points 1 week ago

My dad around 1993 designed a cipher better than RC4 (I know that's not a high mark now, but it kinda was then), which passed an audit by a relevant service.

My dad around 2003 was still intelligent enough that he'd explain interesting mathematical problems to me and my sister, and notice similarities to them, and interesting things, in real life.

My dad around 2005 was promoted to a management position and was already becoming kinda dumber.

My dad around 2010 was a fucking idiot, you'd think he's mentally impaired.

My dad around 2015 apparently went to a fortuneteller to "heal me from autism".

So yeah. I think it's a bit similar to what happens to elderly people when they retire. Everything needs to be trained, and real tasks give you a feeling of life, while giving orders and going to endless could-be-an-email meetings makes you both dumb and depressed.

[-] TubularTittyFrog@lemmy.world 0 points 1 week ago* (last edited 1 week ago)

that's the peter principle.

people only get promoted up to the point where their inadequacies/incompetence show, and then their job becomes covering for it.

hence why so many middle managers' primary job is managing the appearance of their own competence first and foremost, and they lose touch with the actual work being done... which is a key part of how you actually manage it.

[-] Wojwo@lemmy.ml 1 points 1 week ago

Yeah, that's part of it. But there is something more fundamental: it's not just rising up the ranks but also time spent in management. It feels like someone can get promoted to middle management and be good at the job initially, but then, as the job becomes more about telling others what to do and filtering data up the corporate structure, a certain amount of brain rot sets in.

I had just attributed it to age, but this could also be a factor. I'm not sure it's enough to warrant studies, but it's interesting to me that just the act of managing work done by others could contribute to mental decline.

[-] ALoafOfBread@lemmy.ml 15 points 1 week ago

That was my first reaction. Using LLMs is a lot like being a manager. You have to describe goals/tasks and delegate them, while usually not doing any of the tasks yourself.

[-] rebelsimile@sh.itjust.works 1 points 1 week ago

After stepping away from being a direct practitioner, I will say all my direct reports are “faster” in the programs we use at work than I am, but I’m still waaaaaaaaaay more efficient than all of them (their inefficiencies drive me crazy, actually). But I’ve also taken up a lot of development to keep my mind sharp. If I only had my team to manage and not my own personal projects, I could really see regressing a lot.

[-] sqgl@sh.itjust.works 8 points 1 week ago

That's the Peter Principle.

[-] socphoenix@midwest.social 2 points 1 week ago

I’d expect similar, at least. When one doesn’t keep up to date on new information and lets their brain coast, it atrophies like any other muscle would from disuse.

[-] canadaduane@lemmy.ca 14 points 1 week ago

I wonder what social media does.

[-] QuadDamage@kbin.earth 11 points 1 week ago

Microsoft reported the same findings earlier this year; spooky to see a more academic institution report the same results. https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

Abstract for those too lazy to click:

The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.

[-] Korkki@lemmy.ml 9 points 1 week ago

You write an essay with AI, your learning suffers.

One of those papers that's basically "water is wet, researchers discover".

[-] suddenlyme@lemmy.zip 8 points 1 week ago

It's so disturbing. Especially the bit about your brain activity not returning to normal afterwards. They are teaching the kids to use it in elementary schools.

[-] hisao@ani.social 6 points 1 week ago* (last edited 1 week ago)

I think they meant it doesn't return to non-AI-user levels when you do the same task on your own immediately afterwards. But if you keep doing the task on your own for some time, I'd expect it to return to those levels rather fast.

[-] sudo_shinespark@lemmy.world 6 points 1 week ago

Heyyy, now I get to enjoy some copium for being such a dinosaur and resisting using it as often as I can

[-] morto@piefed.social 5 points 1 week ago

You're not a dinosaur. Making people feel old and behind the trend is exactly one of the strategies big tech uses to shove their stuff onto people.

[-] Imgonnatrythis@sh.itjust.works 6 points 1 week ago

No wonder Republicans like it so much

[-] Hackworth@sh.itjust.works 5 points 1 week ago
[-] Ganbat@lemmy.dbzer0.com 5 points 1 week ago* (last edited 1 week ago)

But does it cause this when used exclusively for RP gooning sessions?

[-] svc@lemmy.frozeninferno.xyz 12 points 1 week ago

Somebody fund this scholar's research immediately

[-] masterofn001@lemmy.ca 2 points 1 week ago

To date, after having gooned once (ongoing since September 2023), my core executive functions, my cognitive abilities and my behaviors have not suffered in the least. In fact, potato.

[-] DownToClown@lemmy.world 2 points 1 week ago

The obvious AI-generated image and the generic name of the journal made me think there was something off about this website/article, and sure enough, the writer of this article is on X claiming that COVID-19 vaccines are not fit for humans and that there's a clear link between vaccines and autism.

Neat.

[-] tad_lispy@europe.pub 1 points 1 week ago* (last edited 1 week ago)

Thanks for the warning. Here's the link to the original study, so we don't have to drive traffic to that guy's website.

https://arxiv.org/abs/2506.08872

I haven't got time to read it, and now I wonder whether it was represented accurately in the article.

[-] cypherpunks@lemmy.ml 1 points 1 week ago* (last edited 1 week ago)

Thanks for pointing this out. Looking closer, I see that that "journal" was definitely not something I want to be sending traffic to, for a whole bunch of reasons - besides being anti-vax, they're also anti-trans, and they're gold bugs... and they're asking tough questions like "do viruses exist" 🤡

I edited the post to link to MIT instead, and added a note in the post body explaining why.

[-] salty_chief@lemmy.world 1 points 1 week ago

I just asked ChatGPT if this is true. It told me no and to increase my usage of AI. So HA!

[-] unpossum@sh.itjust.works 1 points 1 week ago

So if someone else writes your essays for you, you don’t learn anything?

[-] theneverfox@pawb.social 1 points 1 week ago

Ok, if the ai knows

