Yeah. It’s definitely a major contributor to the dumbing of humanity. We’re barreling towards Idiocracy with open ~~arms~~ AI.
Oh, don’t worry. AI is coming to arms, too. Might already have.
Pretty sure that's why the US gov threatened Anthropic.
I'm in software development and land on both sides of this argument.
Having to review or maintain AI slop is infuriating.
That said, it has replaced traditional web searching for me. A good assistant setup can run multiple web searches for me, distill the useful info cutting through the blog spam and ads, run follow-up searches for additional info if needed, and summarize the results in seconds, with references if I want to validate its output.
There was a post a couple days ago about it solving a hard math problem with guidance from a mathematician. Sparked a discussion about AI being a powerful tool in the right hands.
cutting through the blog spam and ads
We've solved the problem of enshittification of the web by having robots consume the shit for us!
And create an equal amount, if not more shit. Take that entropy!
has replaced traditional web searching for me
I think part of the problem is that web search has enshittified over the years. Back in the day you would enter the relevant keywords and get the info you needed in the top results most of the time; nowadays it's all ads. Now AI goes straight to the point, but it's less reliable. It's almost like Gemini trying to solve a problem that Google itself created.
Well, AI was also quite instrumental in making web search useless. It made it trivial to create infinite spam pages, which search engines have to filter out. Naturally, too much will get filtered out as a result, meaning you can't find a lot of useful results anymore either.
You trust it to "distill the useful info"? How do you know it's not throwing out important pieces just to lead you down the garden path, or, maybe because it "thinks" you wouldn't be interested because of all it "knows" about you? If you need to check everything it does, why not just do it yourself?
It's really not that different from a traditional web search under the hood. It's basically a giant index and my input navigates the results based on probability of relevance. It's not "thinking" about me or deciding what I should see. When I say a good assistant setup, I mean I don't use Gemini or ChatGPT or any of the prepackaged stuff that tries to build a profile on you. I run my own setup, pick my own models, and control what context they get. If you check my post history I'm heavily privacy conscious, I'm not handing that over to Google or OpenAI.
The summary helps me evaluate if my input was good and the results are actually relevant to what I'm after without wading through 20 minutes of SEO garbage to get there. For me it's like getting the quality results you used to get before search got enshittified. It actually surfaces stuff that doesn't even show up on the front page of a traditional search anymore.
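For anyone curious what a setup like that looks like structurally, here's a minimal, purely illustrative Python sketch. `web_search` and `summarize` are hypothetical stand-in stubs I made up, not real APIs; an actual setup would wire them to a search backend and a local model of your choosing.

```python
# Illustrative sketch of a search-and-distill assistant loop.
# web_search and summarize are placeholder stubs, not real services.

def web_search(query):
    # Placeholder: a real setup would call a search backend here
    return [{"url": f"https://example.com/{query}/{i}",
             "text": f"result {i} for {query}"} for i in range(3)]

def summarize(snippets):
    # Placeholder: a real setup would feed snippets to a local LLM
    return " | ".join(s["text"] for s in snippets)

def assist(query, follow_ups=()):
    results = web_search(query)
    for fq in follow_ups:            # optional follow-up searches
        results += web_search(fq)
    summary = summarize(results)
    refs = [r["url"] for r in results]   # keep links so output can be validated
    return summary, refs

summary, refs = assist("color constancy", follow_ups=["the dress illusion"])
```

The key design point is the last line of `assist`: the references are carried through alongside the summary, so the distilled answer is always checkable against its sources.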
I don't use it much as a dev, but sometimes a response to a question, while not correct, will guide me to a solution. The trick is that you have to have the knowledge to know what's right or wrong. I will also use it to troubleshoot code when I have a red squiggly because something is wrong. It can find missing brackets, a semicolon, or a function I just called incorrectly.
If AI just up and disappeared tomorrow, I'd be so happy, but I can't discount some of its benefits. Things I'd find on Stack Overflow before can be done directly within my IDE with context from my project. I never accept an AI response outright; instead I type everything out so that I know it's doing what I want and so it doesn't modify any of my code.
Linters have been finding missing brackets and extra semis since forever.
Truth. This does a bit more than a typical linter, that was just a simple example I riffed off. Sometimes it helps me find logic errors as well. I'll highlight a block of code, ask why it's doing or not doing the thing I expect, and go from there. I've probably only used it a dozen times for basic troubleshooting over the past 6 months when I get stumped on something.
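To illustrate the kind of thing being described, here's a toy Python example (my own, not from the thread) of a logic error that syntax checkers and most linters will pass without complaint, because the code is perfectly valid:

```python
# A classic logic bug: syntactically fine, lint-clean, and wrong.
def in_range(x, low, high):
    # Bug: for any x, at least one side is true, so this is always True.
    # Should be `and`, not `or`.
    return x >= low or x <= high

def in_range_fixed(x, low, high):
    # Correct version using a chained comparison
    return low <= x <= high
```

A linter sees nothing to flag in `in_range`; spotting that the condition is vacuously true requires reasoning about intent, which is where a "why isn't this doing what I expect" question can help.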
Hey, I'm an educator and I found a way to trick ChatGPT so students can't use it.
I have two methods I employ to reduce the use of ChatGPT.
Method 1.
I use examples of people in my questions, and the people are characters from popular TV shows, like Star Trek. You could also use names of athletes or anyone who likely has a lot of content about them in media and on the internet.
For example: Spock and Uhura were both given an image of a dress to determine if it matched the dress of the missing scientist. Spock perceived the colors to match and Uhura did not. What would explain this difference in color perception?
The answer would be color constancy. It's also a reference to the blue/black, gold/white dress. But ChatGPT would not be able to understand that.
(I'm a perception researcher and educator).
Anywho, if they copy-paste, they are likely to get replies based on episodes of Star Trek TOS.
The other thing I do, in conjunction with the first, is make the resources I give them easier and less work to use than dealing with ChatGPT answers, which would require a lot of additional edits to finally get the correct answer, and may never give the correct answer at all.
If they have a resource like a PDF of the PowerPoint lecture, they will use it instead if it's easier to use.
So make it the easier choice.
Brilliant! 👏🧠
It's because humans naturally want to avoid unpleasant work, and public schools teach us that learning is hard work rather than something fun. For instance, I used to read an unbelievable amount for fun, but then I was forced to do book reports from a required list of books to "prove" I was reading them, and it was just absolutely no fun at all. Why not have a discussion about the book instead, and the teacher can check the SparkNotes? This changes back to "learning is fun" at community college, but years of being told to do busywork and be a drone kill learning for a lot of people, I feel.
It's the natural result of how our society treats education. The end result is more valued than the process. Getting an A is more important than learning the material. When we tell kids that they need good grades to get into a good college to have a good life, education becomes a means to an end, an obstacle to be circumvented.
I didn't enjoy learning until I got out of the public education system. If I had chatgpt in high school I would have 100% used it because high school was just the place to prove I deserved to go to college. It wasn't a place of learning, everyone treated it as the crucible to access a better life instead of a place to figure out what you love.
AI will continue to be a problem the same way cheating will continue to be a problem. They have the same solution: we need to place more value on the learning process than the end results.
This answer speaks to me. I used to read nonstop when I was a child. Fiction, non-fiction, didn't matter. I loved it.
After college, it took me a good 5-6 years to start reading for fun again, and it's never quite been the same.

Okay, wow, a Logan's Run TV series meme in the wild - cool!
We used to be graded on penmanship in our writing. Nobody was particularly upset when typing rendered a penmanship grade irrelevant. It became an unimportant metric to track; people with truly abysmal handwriting became perfectly capable authors. Penmanship was handed from author to artist.
LLMs are rapidly making structure and composition unimportant to the author. They are beginning to be able to convey ideas without being overly concerned with format. We need not be particularly concerned with the diminishing importance of this metric; people with little understanding of format can now become perfectly capable authors. Structure and composition is being handed over from author to poet.
AI provides a direct, immediate answer to every question you put before it. It provides that in a well-crafted, predictable, easy-to-read format. The student is not wrong for wanting this kind of response. It is what they, themselves, are asked to provide.
That the answer is rarely correct doesn't particularly faze them: they lack the experience to be able to identify the falsehoods. They haven't learned to question the lack of citation and attribution, or to cross-check sources.
Where we now need to focus is on the roots of thought. The formation of ideas. The determination between fact and fiction.
Divide the class up into groups of three. The members of each group are to individually write a paper on the same, narrow topic. But, they are to deliberately include in their paper one to four significant falsehoods on their subject. Feel free to use AI.
Give the three papers to another group, and have them identify and prove the lies.
As the author, any intentional lie you manage to slip past the checkers earns big points. Any undeclared lie caught by the checkers costs you big points.
As the checker, every intentional lie you discover earns a few points. Every unintentional lie you catch earns big points. Every intentional lie you miss costs big points.
Your students will be so focused on the critical-thinking tasks that they'll barely realize how much research they've put into the two topics they worked on.
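If you wanted to actually run this exercise, the scoring could be as simple as the sketch below. The point values (10 and 3) are placeholders I made up; the original proposal only says "big points" and "a few points".

```python
# Sketch of the author/checker scoring scheme described above.
# BIG and SMALL are assumed values, not from the original exercise.
BIG, SMALL = 10, 3

def author_score(lies_slipped_past, undeclared_lies_caught):
    # Intentional lies that got past the checkers earn big points;
    # undeclared (unintentional) lies the checkers caught cost big points.
    return BIG * lies_slipped_past - BIG * undeclared_lies_caught

def checker_score(intentional_found, unintentional_found, intentional_missed):
    # A few points per intentional lie found, big points per unintentional
    # lie caught, big points lost per intentional lie missed.
    return (SMALL * intentional_found
            + BIG * unintentional_found
            - BIG * intentional_missed)
```

The asymmetry is the interesting design choice: checkers are rewarded most for catching errors the author didn't even know they made, which pushes both sides toward genuine fact-checking rather than just gaming the declared lies.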
There's a huge yawning chasm between fact and lie, in most cases.
Besides which, learning isn't really about facts. Essays aren't a list of facts. It's the conclusions drawn from the facts.
You can use AI to learn everything OR to learn nothing. They've made the second choice.
What to say to somebody who hates AI?
"Acknowledge their concerns and express understanding of their feelings, as many people have valid worries about AI's impact on society. You can also share that AI has potential benefits and that discussing its responsible use can be more productive than outright rejection."
Yeah that doesn't sound like it's gonna work....
I'm a non-traditional student and I have used AI to help with math.
Let me explain something. When I try to do a general search for help on how to solve a problem, the top results in most search engines aren't the old academy-style guide videos anymore. They are sponsored links, paid tutoring websites, and YouTube videos of people playing at influencer instead of teaching.
The same is true for researching most given topics.
I have tried to use AI ethically but I know it's problematic.
When trying to find sources, the old academic websites still hold up, but to find those websites I had to ask AI with a carefully crafted prompt. At times I did ask it to suggest papers and academic sources. I then used my own critical analysis to assess each source's biases and value, and explored further by looking at the good papers' reference lists.
I see the problems with AI, but a Boolean search only works so well these days.
Going back to math: I could watch a video, but that means sitting through precious minutes when an AI will answer my question directly and explain why I was wrong.
Even if I'm trying to use a math website that actually answers the problem, there will be pop-ups (on the phone), useless text (as if it's a damn recipe website), and possibly mathematical notation that is above my course level.
Using the AI, I can have that notation explained.
I do understand that AI is a problem, and I hate, HATE, getting info from a middleman like this, but I completely understand why a student would.
I also see how tempting it is to just skip those extra steps and take an answer, but I know it's also often wrong. My verification steps and further digging ensure that the AI is returning valid info.
But why do students do it? Because the internet today is a slop bog that they have to navigate on their phones, often with minimal protection from ads and other useless garbage.
I feel like this is a progression of a trend I've been railing against for a while. My workplace has to contend with a massive amount of ever-changing regulatory and engineering information. There are thousands of pages of documents, with differing levels of authority and detail, governing all aspects of what we do.
I've been begging people to read the docs. Don't just ask your manager or predecessor, don't just skim through it, and for fuck's sake don't ctrl+f until you find something that looks good and run with it out of context. Treating this sort of research like a Google search is killing us during compliance inspections. Read the docs!
Shit changes, often. I have to constantly remind them, it's not what the docs said last year. It's what they say now. Know your responsibilities, know where to find the info that pertains to them, and review it often. Read it, know it, or at least know where to find it.
It's getting worse. I've seen experienced people submit supplemental documents with egregious errors after they "just used AI for grammar checking". I've seen proposed policy docs with references to regulations that are decades out of date. I've gotten questions about implementing things that were outlawed or obsolete before I was born, and I've been around a looooong while.
We can't meat-puppet our way through this, blindly following AI, or people are going to die in horrible industrial accidents. I mean that literally. People will be killed. This is why we have the current mass quantities of regulatory documents: to prevent people from literally dying in awful ways.
I'm too old for this shit.
I don't know how to solve the core problem you're hinting at without society at large realizing that many of our problems come from the brainwashing of the masses. It's why we were initially taught math without calculators in my day; by college, calculators were expected, to handle the simple math so we could focus on the more complicated problems.
Here with LLMs, it's still important to write, to learn how to research (even more so than in the "don't use encyclopedias as a primary source" days), to read with deep understanding, and to learn when to skim. Learning math and logic is as important as ever.
What I see missing quite a bit in the anti-AI art world is the importance of creating art to convey your meaning. Whether or not AI is a tool involved in the writing or images, is the thing showing the meaning and nuance you want, not just an off-the-top-of-your-head prompt with the slop output auto-shipped? And the only way you can say "no, that's not what I want" is to have some idea of how to make the piece of writing or art yourself, even at a high level.
I personally like the tech but see it accelerating the brain drain for those that rely on it too much for answers as they learn.
There is no reason to avoid getting better at writing.
I feel like people often misunderstand that writing isn't just busywork, but rather instrumental for developing a deeper understanding. Formulating a sentence forces you to clear out uncertainties you might have. And writing it down serves as extended working memory.
I can imagine there being some middle ground, like not bothering to learn where to place commas and having an LLM insert them for you, but from my biased position of already knowing how to write correctly, I struggle to come up with scenarios where this actually makes a difference.
If you know how to read, you'll have a sense where commas aid with that. Anything beyond that is just pedantic wankery anyways.
I'm glad I retired from the profession when I did. I was seeing that "no interest in learning anything" with the tools they had then. I can't imagine it now.
yep. watching kids squander their one chance at university education over their reliance on this shit is depressing as fuck.
Because learning for kids/young adults isn't really the point anymore. The point of doing the learning is to "pass the test," "get the job," or "move on to the next link in the education chain." So young people often feel faced with a choice: engage with the process to accomplish the tasks, or dissociate from the process entirely.
This systemic issue is likely why Steiner schools and the like are seeing increased interest from parents.
Academic writing is really hard. It requires intense concentration over a long period of time. I don't know that your kids would be doing more work if they didn't have AI; they'd probably just do what I did, phone in a shitty paper they churned out the night before or two weeks late, because they could only start when they sufficiently felt like they were going to throw up from stress.
Bad writing practice is still a million times better than no writing practice
I don’t just see it for essays, but for short-answer responses: single-sentence stuff and math problems. Kids to adults.
On one hand I don't blame people for wanting to make money.
On the other hand, how come EVERYONE is in it for the money?
Integrity is all gone and I hate that I can be in classes with 40 CS majors and still can't share my hobby of programming with anyone.
It's a crying shame that organic shitposts are going to be gone one day.