It can't ever accurately convey any more information than you give it, it just guesses details to fill in. If you're doing something formulaic, then it guesses fairly accurately. But if you tell it "write a book report on Romeo and Juliet", it can only fill in generic details on what people generally say about the play; it sounds genuine but can't extract your thoughts.
Not to get too deep into the politics of it but there's no reason most people couldn't get there if we invested in their core education. People just work with what they're given, it's not a personal failure if they weren't taught these skills or don't have access to ways to improve them.
And not everyone has to be hyper-literate, if daily life can be navigated at a 6th grade level that's perfectly fine. Getting there isn't an insurmountable task, especially if you flex those cognitive muscles more. The main issue is that current AI doesn't improve these skills, it atrophies them.
It doesn't push back or use logical reasoning or seek context. It's specifically made to be quick and easy, the same as fast food. We'll be facing the intellectual equivalent of the diabetes epidemic if it gets widespread use.
It sounds like you are talking about use in education then, which is a different issue altogether.
You can and should set your AI to push back against poor reasoning and unsupported claims. They aren't very smart, but they will try.
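As a rough illustration of what "setting it to push back" can mean in practice: a minimal sketch, assuming the OpenAI Python client and an example model name; the system prompt wording here is just an illustration, not a recommended recipe.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt asking the assistant to challenge weak claims
# instead of agreeing by default.
system_prompt = (
    "Before answering, check the user's claims. If a claim is unsupported, "
    "vague, or logically flawed, say so and ask for evidence or clarification "
    "rather than accepting it."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; any chat model works here
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Crime always goes up when literacy goes down, right?"},
    ],
)
print(response.choices[0].message.content)
```

It won't make the model actually reason, but it shifts the default from agreeable autocomplete toward at least flagging shaky premises.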
I mean it's the same use; it's all literacy. It's about how much you depend on it instead of using your own brain. It might be a mindless email today, but in 20 years the next generation won't be able to read the news without running it through an LLM. They'll have no choice but to accept whatever it says, because they never developed the skills to challenge it, kind of like simplifying things for a toddler.
The models can never be totally fixed, the underlying technology isn't built for that. It doesn't have "knowledge" or "reasoning" at all. It approximates them by weighing your input against a model of how those words connect together and choosing a slightly random extension of them. Depending on the initial conditions, it might even give you a different answer for each run.
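To make the "slightly random extension" point concrete, here's a toy sketch of temperature-based next-token sampling; the logit values are made up, and real models do this over tens of thousands of tokens, but the mechanism is the same: scores in, probabilities out, a weighted dice roll decides the word.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Pick the next token from raw model scores (logits).

    Higher temperature flattens the distribution, so repeated runs
    diverge more; temperature near 0 makes the choice nearly greedy.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Toy scores for four candidate next words; rerunning often gives different picks.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits) for _ in range(5)])
```

There's no fact-checking step anywhere in that loop, which is why "fixing" hallucinations isn't a matter of patching a bug.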
Is that any worse than people getting their worldview from a talking head on 24-hour news, five-second video clips on their phone, or a self-curated selection of rage-bait propaganda online? The mental decline of humanity is perpetual and overstated.