
Absolutely needed: to get high efficiency for this beast ... as it gets better, we'll become too dependent.

"all of this growth is for a new technology that’s still finding its footing, and in many applications—education, medical advice, legal analysis—might be the wrong tool for the job,,,"

[-] TootSweet@lemmy.world 74 points 3 weeks ago

as it gets better

Bold assumption.

[-] WanderingThoughts@europe.pub 29 points 3 weeks ago

Historically, AI has always gotten much better, but usually only after the field collapsed in an AI winter and several years went by searching for a new technique, at which point the hype cycle repeats. Tech bros want it to get better without the winter stage, though.

[-] Jesus_666@lemmy.world 26 points 3 weeks ago

AI usually got better once people realized it wasn't going to do everything it was hyped up to do, but was genuinely useful for a certain set of tasks.

Then it turned from world-changing hotness to super boring tech your washing machine uses to fine-tune its washing program.

[-] WanderingThoughts@europe.pub 32 points 3 weeks ago

Like the cliché goes: when it works, we don't call it AI anymore.

[-] technocrit@lemmy.dbzer0.com 5 points 3 weeks ago

The smart move is never calling it "AI" in the first place.

[-] Enkers@sh.itjust.works 9 points 3 weeks ago* (last edited 3 weeks ago)

Unless you're in comp sci, where AI is a field, not a marketing term. And in that case everyone already knows that's not "it".

[-] frezik@midwest.social 5 points 3 weeks ago* (last edited 3 weeks ago)

The major thing that killed 1960s/70s AI was the Vietnam War. MIT's CSAIL was funded heavily by DARPA. When public opinion turned against Vietnam and Congress started shutting off funding, DARPA wasn't putting money into CSAIL anymore. Congress didn't create an alternative funding path, so the whole thing dried up.

That lab basically created computing as we know it today. It bore fruit, and many companies owe their success to it. There were plenty of promising lines of research still going on.

[-] IsaamoonKHGDT_6143@lemmy.zip 5 points 3 weeks ago

I wish there was an alternate history forum or novel that explores this scenario.

[-] technocrit@lemmy.dbzer0.com -3 points 3 weeks ago

Pretty sure "AI" didn't exist in the 60s/70s either.

[-] frezik@midwest.social 8 points 3 weeks ago* (last edited 3 weeks ago)

Yes, it did. Most of the basic research came from there. The first section of the book "Hackers" by Steven Levy is a good intro.

[-] Feathercrown@lemmy.world 3 points 3 weeks ago

The perceptron was created in 1957, and a physical model was built a year later.

The spice must flow
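For the curious, the original learning rule really is that small. A minimal sketch in Python, with made-up toy data (learning an AND gate), purely to illustrate the 1957-era idea:

```python
# Toy perceptron (Rosenblatt, 1957): nudge the weights toward
# every example it gets wrong. Data and learning rate are made up.
def train_perceptron(samples, labels, epochs=10, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn a logical AND gate.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([1 if w[0]*a + w[1]*c + b > 0 else 0 for a, c in X])  # [0, 0, 0, 1]
```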

[-] IsaamoonKHGDT_6143@lemmy.zip 5 points 3 weeks ago

Each winter marks the end of one generation of AI and the beginning of the next. We're now seeing more progress, and as long as there's no technical limit, it seems its progress won't be interrupted.

[-] msage@programming.dev 6 points 3 weeks ago
[-] FreedomAdvocate@lemmy.net.au 11 points 3 weeks ago* (last edited 3 weeks ago)

In what area of AI? Image generation is improving in leaps and bounds. Video generation even more so. Image reconstruction for games (DLSS, XeSS, FSR) is seeing generational improvements almost every year. AI chatbots seem to be getting much smarter every month.

What’s one main application of AI that hasn’t improved?

[-] msage@programming.dev 4 points 3 weeks ago

Which chatbots are getting smarter?

I know AI has potential, but specifically LLMs (which most people mean when talking about AI) seem to have hit their technological limits.

[-] Jakeroxs@sh.itjust.works 3 points 3 weeks ago

Advanced Reasoning models came out like 4 months ago lol

[-] msage@programming.dev 4 points 3 weeks ago

Advanced reasoning? Having LLM talk to itself?

[-] theterrasque@infosec.pub 2 points 3 weeks ago

Yes, and it has measurably improved some tasks: ~20% improvement on programming tasks, as a practical example. It has also improved tool use and agentic tasks, allowing the LLM to plan ahead and adjust its initial approach based on later steps.

Having the LLM talk through the task lets it catch and fix bad decisions made early on, based on realizations at later stages. Sort of like when a human thinks through how to do something.
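To be concrete about what "talking to itself" buys you: the pattern is a draft/critique/revise loop around a model call. A minimal sketch, where `llm(prompt)` is a hypothetical stand-in for whatever chat-model API you'd actually use:

```python
# Minimal sketch of a reasoning loop. llm(prompt) -> str is a
# hypothetical stand-in for any chat model call; the loop is the point.
def solve_with_reasoning(task, llm, max_rounds=2):
    # Draft: ask the model to think out loud before answering.
    plan = llm(f"Think step by step and draft a plan for: {task}")
    answer = llm(f"Task: {task}\nPlan:\n{plan}\nCarry out the plan.")
    for _ in range(max_rounds):
        # Critique: let the model inspect its own early decisions.
        critique = llm(f"Task: {task}\nAnswer:\n{answer}\n"
                       "List any mistakes or bad early decisions, "
                       "or reply NO ISSUES.")
        if "NO ISSUES" in critique:
            break
        # Revise: fix what the critique caught.
        answer = llm(f"Task: {task}\nAnswer:\n{answer}\n"
                     f"Critique:\n{critique}\nRewrite the answer.")
    return answer
```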

[-] Jakeroxs@sh.itjust.works 0 points 3 weeks ago* (last edited 3 weeks ago)

Lul, yes but no. They are clearly better at many types of tasks.

[-] technocrit@lemmy.dbzer0.com -1 points 3 weeks ago* (last edited 3 weeks ago)

For example? Citations?

Pretty sure these "tasks" are meaningless metrics made up by pseudo-scientific grifters.

[-] Jakeroxs@sh.itjust.works 4 points 3 weeks ago

Small bits of code, language-related tasks, basic context understanding. These aren't metrics I've literally measured, just things I've noticed improving compared to non-reasoning models in my homelab testing. 🤷‍♂️

[-] IsaamoonKHGDT_6143@lemmy.zip 3 points 3 weeks ago

AlphaFold 3, which helps predict the structures of some proteins. It does have limitations: it can't be used in all cases, only where it performs reliably.

[-] FreedomAdvocate@lemmy.net.au 2 points 3 weeks ago

Copilot, ChatGPT, pretty much all of them.

[-] msage@programming.dev 2 points 3 weeks ago

Smarter how? Synthetic benchmarks?

Because I've heard the opposite from users and bloggers.

[-] Almacca@aussie.zone -5 points 3 weeks ago* (last edited 3 weeks ago)

They've been a boon for medical diagnoses as well, I believe.

Has anyone made AI powered accounting software yet? I'd love to tell my computer 'Here's all my financial information in a big heap. Do my taxes.' The numbers and tax laws are all known things. It shouldn't be hard.

[-] MagicShel@lemmy.zip 17 points 3 weeks ago

Any strictly rule-based system, like accounting and taxes, is a job for traditional software, not AI. Particularly when the laws change every year.
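And that's the reason: known numbers plus known rules is a job for plain, auditable code. A toy sketch with made-up brackets and rates (real ones change yearly, which is exactly why you want this versioned in code rather than generated):

```python
# Progressive tax over hypothetical brackets: (upper bound, rate).
# The rates here are invented for illustration only.
BRACKETS = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.30)]

def tax_owed(income):
    owed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        # Tax only the slice of income that falls in this bracket.
        owed += (min(income, upper) - lower) * rate
        lower = upper
    return owed

# 10k @ 10% + 30k @ 20% + 10k @ 30%
assert tax_owed(50_000) == 10_000 * 0.10 + 30_000 * 0.20 + 10_000 * 0.30
```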

[-] Almacca@aussie.zone 4 points 3 weeks ago

Only once it has the information in a recognisable format, though. Reading and recognising random receipts, bank statements, payment slips, and whatever else, then sorting it all into a coherent format, is what I'm trying to avoid.

[-] MagicShel@lemmy.zip 2 points 3 weeks ago

I see. So AI for gathering the information to put into the accounting/tax software?

That's a more reasonable ask, but I wouldn't personally trust AI with that. I've done something similar in games where I take a picture of something on screen and ask AI to collect all the information from many similar pictures into a table. It's definitely good enough for gaming, but it makes mistakes often enough I wouldn't sign my name attesting to the truth of anything it produced, you know?
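One way to stay honest with that workflow is to have the software refuse anything that doesn't reconcile, so a human only signs off on what checks out. A sketch, with a made-up record layout:

```python
# Sketch: reject AI-extracted figures that don't reconcile.
# The record layout here is invented for illustration.
def validate_receipt(extracted):
    items_total = round(sum(extracted["line_items"]), 2)
    if abs(items_total - extracted["total"]) > 0.005:
        raise ValueError(f"needs human review: items sum to "
                         f"{items_total}, receipt says {extracted['total']}")
    return extracted

validate_receipt({"line_items": [12.50, 3.99], "total": 16.49})  # passes
# validate_receipt({"line_items": [12.50, 3.99], "total": 19.49})  # raises
```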

[-] Almacca@aussie.zone 2 points 3 weeks ago* (last edited 3 weeks ago)

Fair point, but I feel like that's something that's technologically solvable. This is dealing only with text, a lot of which is already digital, just in multiple formats, and it's all easily checkable against the final figures if anyone so desires.

As a random aside, I saw a clip recently where someone had asked an 'AI' model to reproduce a photo with zero changes one hundred times. There were more than zero changes.

[-] MagicShel@lemmy.zip 2 points 3 weeks ago

Surprisingly, the mistakes ChatGPT made weren't related to picture processing. Every time I've sent a picture, it has flawlessly analyzed the text (even if it's a screenshot of a massive Linux log or a screenshot with multiple windows and arbitrary text placement). The problems were more that the markdown table I created wouldn't be reproduced perfectly with the new changes/additions. It's pretty reliable early on, but as the chat or the table gets longer, fidelity can be lost. Not very often, but it does happen.

Just to clarify: I find that as long as you're paying close attention and can catch mistakes or verify the output, AI does make tasks like these much less tedious.

[-] Xaphanos@lemmy.world 0 points 3 weeks ago

NVL72 (NVIDIA's 72-GPU rack-scale system) will be enormously impactful on high-end performance.

[-] frezik@midwest.social 3 points 3 weeks ago* (last edited 3 weeks ago)

The issue this time around is infrastructure. The current AI Summer depends on massive datacenters with equally massive electrical needs. If companies can't monetize that enough, they'll pull the plug and none of this will be available to the general public anymore.

This system can go backwards. Yes, the R&D will still be there after the AI Winter cycle hits, but none of the infrastructure.

[-] theterrasque@infosec.pub 3 points 3 weeks ago

We'll still have models like DeepSeek, and (hopefully) discount used server hardware.

[-] technocrit@lemmy.dbzer0.com -1 points 3 weeks ago

Historically "AI" still doesn't exist.

[-] WanderingThoughts@europe.pub 11 points 3 weeks ago

Technically even 1950s computer chess is classified as AI.
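It is, and the core of those programs was plain game-tree search. A toy sketch of the same idea in Python, using Nim (take 1 to 3 sticks; taking the last stick wins) instead of chess so the tree stays tiny:

```python
# 1950s-style game AI: search the game tree and pick the move
# that leaves the opponent in the worst position.
def best_move(sticks):
    def value(n):
        # +1 if the player to move can force a win from n sticks.
        if n == 0:
            return -1  # previous player just took the last stick
        return max(-value(n - take) for take in (1, 2, 3) if take <= n)
    moves = [take for take in (1, 2, 3) if take <= sticks]
    return max(moves, key=lambda take: -value(sticks - take))

print(best_move(10))  # 2: leaves 8 sticks, a lost position for the opponent
```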

[-] Bogasse@lemmy.ml 14 points 3 weeks ago

Yeah, I think there were some efforts, until we found out that adding billions of parameters to a model would let it both write the useless part of emails that nobody reads and strip out the useless part of emails that nobody reads.

[-] Melvin_Ferd@lemmy.world 2 points 3 weeks ago

I want my emails to be a series of noises that only computers can hear and communicate with
