[-] hikaru755@lemmy.world 1 points 9 hours ago* (last edited 9 hours ago)

If you know enough to verify a translation as accurate, or you have the tools to figure out an accurate translation through dictionaries or some such, then you know enough to do the translation yourself.

Correct. But it would take me a lot more work and time, possibly to the point of not being feasible, and probably matching the energy cost of using the LLM over the entirety of the task.

why would you trust something like system security to an LLM?

I wouldn't. I don't know where you got that. Adding LLM-based analysis to your toolkit to spot important issues that might otherwise go unnoticed is just that: an addition. It doesn't replace anything. And at this point it's demonstrably useful for that; there's just no denying it.

Once again, even if the billionaire’s toxic Nazi plagiarism machine was useful, it is so morally repugnant that it should never be used, which makes it functionally useless.

My point is that if you are this confidently wrong about the capabilities of LLM-based tools, then why should I believe you to be any less wrong about the moral and ethical issues you're raising? It looks like you're either completely misinformed or deliberately fighting a strawman for a part of your argument, so it gives anyone on the other side an easy excuse to just not engage with the rest of it and just dismiss it entirely. That's what I'm trying to get across here.

[-] Susaga@sh.itjust.works 1 points 9 hours ago

Surely, the energy cost to verify the translation would be the same as translating it? If you're struggling that much, why are you translating it at all? I cannot trust your translation.

If you tell an LLM to generate reports, it will, regardless of the actual quality of the environment. It doesn't know what's secure and what isn't. All you've shown is that it can convince security analysts, whose systems are insecure enough to yield a LOT of good reports, that their systems are more secure than they actually are. Which is useless at best, detrimental at worst.

It's useless for translation. It's useless for security analysis. It's useless for rhyming (I notice you didn't mention that one). You're trying so hard to prove how useful it is, and your failure demonstrates how useless it is.

You can't condemn confident wrongness and defend LLMs. And you can't defend the billionaire's toxic Nazi plagiarism machine while questioning someone else's morals. You can't cherry-pick my argument and claim I'm the one fighting a strawman. ...Well, not if you're arguing in good faith.

[-] hikaru755@lemmy.world 1 points 6 hours ago

Look, I'm not trying to argue against your moral stance. I'm not saying it's wrong, nor that it's outweighed by any usefulness, real or not. What I'm trying to do is get you to see that your claims about uselessness undermine your moral argument, which would be a hell of a lot stronger if you weren't hell-bent on denying any kind of utility! Because in the eyes of people who do perceive LLMs as useful (exactly the kind of people who need to hear about the moral issues), that just makes you seem out of touch and not worth listening to.

It’s useless for security analysis.

Have you looked at any of the four links I provided? You might be working from old data here, because it's a very recent development, but a lot of high-profile open source maintainers are saying that AI-generated security reports are now generally pretty good and no longer slop. They're fixing actual bugs because of them, and more than ever. How can you call that useless?

Surely, the energy cost to verify the translation would be the same as translating it?

Uh, no? Have you ever translated something? Verifying a translation happens mostly at attentive reading speed. Double that for reading it twice, once for content and once for grammar, plus some overhead for correcting the occasional flaw and checking the one or two things I'm unsure about off the top of my head. For the sake of argument, call it three times slower than reading normally. I don't know about you, but three times slower than reading is still a lot faster than I could produce a translation from scratch, weighing different word options against each other, working out how to give the text some flow, and so on.

If I'm translating into a language I'm fluent but not native in, it takes even longer, because the ratio between my passive and active vocabulary is worse. I can read (and thus verify) English at a much more sophisticated level than I can talk or write, because the words and native idioms just don't come to me as naturally, or sometimes not at all without a lot of mental effort and a thesaurus. LLMs are just plain better at writing English than I have any hope of becoming in my lifetime, and I can still easily understand and verify the factual, orthographic, and grammatical correctness of what they output. Those two things are not mutually exclusive.

It’s useless for rhyming (I notice you didn’t mention that one)

Yeah, because I'm focusing on the more relevant things. I disagree that it's completely useless for rhyming, but it's a much weaker and more contrived point than the others, and going into that discussion would just derail things further for no added value. Also, it's funny that you call me out for that, when you fully ignored two use cases I mentioned in my initial comment: proofreading texts and answering questions about unfamiliar code bases. Both have a lot of legitimate utility for someone who isn't aware of, or doesn't care about, the moral issues. And once again, that's my point here: those people will not listen if they perceive you as describing a fictional world where LLMs are completely useless, because it fails to match up with their experience.

this post was submitted on 08 Apr 2026
545 points (97.2% liked)

Programmer Humor
