[-] gravitas_deficiency@sh.itjust.works 198 points 6 months ago* (last edited 6 months ago)

Short term, yes; long term, probably not. All the dipshit c-suites pushing the "AI" worker-replacement initiatives are going to destroy their workforces and then realize that LLMs can't actually reliably replace any of the workers they fired. And I love that for management.

[-] 3volver@lemmy.world 4 points 6 months ago

You're referring to something that is changing and getting better constantly. In the long term, LLMs are going to be even better than they are now. It's ridiculous to think they won't be able to replace any of the workers who were fired. LLMs are going to allow one person to do the job of multiple people. Will they replace everyone? No. But even if they only allow one person to do the job of two, that's 50% of the workforce unemployed. And that's not even mentioning how good robotics has gotten over the past 10 years.
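Spelled out as a toy calculation (a sketch only, and it assumes the total amount of work stays fixed, which is a big assumption; the numbers are made up):

```python
# Toy headcount arithmetic behind the "one person does the job of two" claim.
# Assumes total output stays constant; figures are hypothetical.
workers_before = 100          # current team size (made-up figure)
productivity_multiplier = 2   # one person now does the work of two

workers_needed = workers_before / productivity_multiplier
reduction = 1 - workers_needed / workers_before
print(f"workers needed: {workers_needed:.0f}, headcount reduction: {reduction:.0%}")
# -> workers needed: 50, headcount reduction: 50%
```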

[-] JeffKerman1999@sopuli.xyz 22 points 6 months ago

You'd need one person constantly checking for hallucinations in everything that is generated. How is that going to be faster?

[-] Grippler@feddit.dk -4 points 6 months ago* (last edited 6 months ago)

Sure, you sort of need that at the moment (not for everything, but I get your hyperbole), but you seem to be working under the assumption that LLMs are not going to improve beyond what they are now. The tech is still very much in its infancy, and as it matures the oversight needed will keep shrinking, until only a few people are required to manage LLMs that handle the tasks of a much larger workforce.

[-] SupraMario@lemmy.world 7 points 6 months ago

It's hard to improve when the data going in is human and the data coming out can't be error-checked against that same input data. It's like trying to solve a math problem with two calculators that both think 2 + 2 = 6 because the data they were given said it was true.

[-] Muehe@lemmy.ml 2 points 6 months ago

> (not actually everything, but I get your hyperbole)

How is it hyperbole? All artificial neural networks have "hallucinations", no matter their size. What's your magic way of knowing when that happens?

[-] JeffKerman1999@sopuli.xyz 0 points 6 months ago

LLMs are now being trained on data generated by other LLMs. If you look at the "writing prompt" stuff, 90% of it is machine-generated (or so bad that I assume it's machine-generated), and that's the data being bought right now.
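A rough sketch of why that feedback loop is a problem, using a deliberately tiny stand-in for an LLM (just refitting a normal distribution to whatever data it's given; everything here is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(1, 51):
    # "Train" a trivially simple model: estimate the mean and spread of the data.
    mu, sigma = data.mean(), data.std()
    # The next generation is trained only on samples from the previous model,
    # with no fresh human data and no external check on the output.
    data = rng.normal(loc=mu, scale=sigma, size=200)

print(f"original spread: 1.000, spread after 50 generations: {data.std():.3f}")
# Estimation error compounds each round, so the fitted distribution drifts
# away from the original and nothing in the loop pulls it back.
```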
