Stack overflow is almost dead (blog.pragmaticengineer.com)
(page 2) 30 comments
[-] ifGoingToCrashDont@lemmy.world 16 points 2 days ago

I think it'll make a comeback eventually. LLMs will get progressively less useful as a replacement as their training data goes stale. Without refreshed data, they'll only grow more irrelevant as the years go on. Where will they get data about new programming languages or solutions to problems in new software? LLM knowledge will be stuck in 2025 unless new training material is given to it.

[-] Suoko@feddit.it 4 points 2 days ago

Until someone releases an open LLM in the sense that every prompt/question is published on a forum like site

[-] Zexks@lemmy.world -1 points 2 days ago

lmao. Ignorance is bliss, is it?

Well, I doubt that very much. Take as an analogy the success of the chess AIs that were left to train themselves, compared to being trained...

[-] TheTechnician27@lemmy.world 15 points 2 days ago* (last edited 2 days ago)

Your analogy simply does not hold here. If you're having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it's doing. Chess has the following:

  1. A very limited set of clearly defined, rigid rules.
  2. One single end objective: put the other king in checkmate before yours is or, if you can't, go for a draw.
  3. Reasonable metrics for how you're doing and an ability to reasonably predict how you'll be doing later.

Here's where generative AI is different: when you're doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier should be given some amount of human-made material and some amount of generator-made material and try to distinguish it. The classifier's goal is to be correct, and the generator's goal is for the classifier to pick completely randomly (i.e. it just picks on a coin flip). As you train, you gradually get both to be very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn't improve, then the generator never does either.
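That generator/classifier loop fits in a few lines. Here's a toy 1-D sketch of my own (the distributions, learning rates, and logistic "classifier" are all illustrative assumptions, not anything from the article): the "human-made" data is N(3, 1), the generator only learns the mean of its output, and the discriminator is a plain logistic classifier trained on real-vs-generated labels.

```python
import numpy as np

# Toy 1-D GAN sketch, hand-derived gradients (all numbers are assumptions):
# "human-made" data ~ N(3, 1); generator G(z) = z + b learns only its mean b;
# discriminator D(x) = sigmoid(w*x + c) is a logistic classifier.
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

b = 0.0            # generator parameter (mean of its output)
w, c = 0.0, 0.0    # discriminator (classifier) parameters
lr_d, lr_g, batch = 0.1, 0.02, 128

for step in range(1000):
    # Discriminator: a few BCE steps on real (label 1) vs fake (label 0),
    # so the classifier stays close to optimal before the generator moves.
    for _ in range(5):
        real = 3.0 + rng.standard_normal(batch)   # human-made material
        fake = rng.standard_normal(batch) + b     # generator output
        x = np.concatenate([real, fake])
        y = np.concatenate([np.ones(batch), np.zeros(batch)])
        p = sigmoid(w * x + c)
        w -= lr_d * np.mean((p - y) * x)   # standard logistic-regression gradients
        c -= lr_d * np.mean(p - y)
    # Generator: non-saturating loss -log D(G(z)); gradient w.r.t. b.
    fake = rng.standard_normal(batch) + b
    p = sigmoid(w * fake + c)
    b -= lr_g * np.mean(-(1.0 - p) * w)

print(round(b, 2))   # b has moved from 0 toward the real mean 3
```

Note the dependency: the generator's gradient is proportional to the classifier's weight w. Freeze the classifier at its untrained state (w = 0) and the generator's gradient is exactly zero forever, which is the "if the classifier doesn't improve, the generator never does either" point in code.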

Imagine teaching a 2nd grader the difference between a horse and a zebra, having never shown them either before, and you hold up pictures asking if they contain a horse or a zebra. Except the entire time you just keep holding up pictures of zebras and expecting the child to learn what a horse looks like. That's what you're describing for the classifier.
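The zebras-only failure shows up with even the dumbest possible classifier. A hypothetical toy (names and data made up for illustration): any model that can only answer with labels present in its training data will call a horse a zebra every time, because "horse" was never a possible answer.

```python
from collections import Counter

# Toy "2nd grader" classifier: it answers with the most common label it
# was trained on. The training set contains only zebras.
train_labels = ["zebra"] * 100        # never shown a horse
priors = Counter(train_labels)

def predict(picture):
    # With no horse examples, "horse" isn't even in the label set,
    # so it can never be the answer -- regardless of the picture.
    return priors.most_common(1)[0][0]

print(predict("four legs, mane, no stripes"))   # prints "zebra", wrongly
```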

Well, indeed the devil's in the detail.

But going with your story: yes, you are right in general. But the human input is already there.

But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

AI can already understand what stripes are, and can draw the connection that a horse is just a zebra without stripes. Therefore the human input is already given. Brute-force learning will do the rest, simply because time is irrelevant and computations occur at a much faster rate.

Therefore in the future I believe that AI will enhance itself. Because of the input it already got, which is sufficient to hone its skills.

I know that for now we are just talking about LLMs as black boxes that are repetitive in generating output (no creativity). But the 2nd grader also has many skills that are sufficient to enlarge their knowledge without needing everything taught by a human, in this sense.

I simply doubt this:

LLMs will get progressively less useful

Where will it get data about new programming languages or solutions to problems in new software?

On the other hand, you are right: AI will not understand abstractions of something beyond its realm. But this does not mean it won't excel at stuff it can draw conclusions from.

And even in the case of new programming languages, I think a trained model will pick up the logic of the code - basically making use of its already-learned pattern-recognition skills. And probably at a faster pace than a human can understand a new programming language.

[-] froufox@lemmy.blahaj.zone 12 points 2 days ago

looks like an opportunity for the fediverse

[-] LainTrain@lemmy.dbzer0.com 2 points 2 days ago
[-] henfredemars@infosec.pub 10 points 2 days ago

I don’t mind so much. It started as a Wiki and then became a corporate AI training ground.

I think Microsoft bought it, and as with most things they buy, they'll run it into the ground.

this post was submitted on 20 May 2025
156 points (99.4% liked)

Programming


Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community cross post into it.

Hope you enjoy the instance!

Rules


  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos try to add in some form of tldr for those who don't want to watch videos

Wormhole

Follow the wormhole through a path of communities !webdev@programming.dev



founded 2 years ago