
And the really maddening part is that search engines have been so enshittified to make way for AI that's wrong like 9 times out of 10, so you're forced to rely on it for answers, because if you try Google, the snake wraps around and eats its own tail and gives you an AI answer! stalin-stressed

[-] Belly_Beanis@hexbear.net 38 points 3 days ago

I'm absolutely baffled by it as someone who started their college career in computer science before switching majors. I was never the best programmer, yet it seems so ass-backwards to me that modern programmers aren't writing pseudo-code and working things out on paper. I wasn't in school that long ago. Did things really change that fast? Are people not doing formal logic anymore? Do they even learn binary and hex? Just what the fuck is happening to this field?

[-] BodyBySisyphus@hexbear.net 28 points 3 days ago

My impression is that the people who are most excited about these tools are people like tech journalists and "solopreneurs" (gag), who have been tech adjacent but never formally learned to code and now think that they don't need software engineers to achieve their vision anymore.

[-] I_Voxgaard@hexbear.net 23 points 3 days ago

this. llm code is the silver bullet for "idea guys"

[-] InevitableSwing@hexbear.net 7 points 3 days ago

I'm imagining a comedy with this dialog...

"Am I a programmer? A lowly programmer? Of course not! I'm an ideas guy." As the plot unfolds, it turns out the guy has no idea how to do anything. All he does is enter AI prompts and then lie that he has yet another fantastic idea.

[-] marxisthayaca@hexbear.net 15 points 3 days ago

I was never the best programmer, yet it seems so ass-backwards to me that modern programmers aren't writing pseudo-code and working things out on paper

Not a programmer, but as someone whose master's degree was filled with "write 30 pages' worth of documentation before starting a project": when you are actually working in the real world, half that shit goes out of the window. So I can definitely see how a lot of people are not writing pseudocode and are instead brute-forcing a bunch of things.

[-] LeeeroooyJeeenkiiins@hexbear.net 10 points 3 days ago

You gotta use AI like it's a new guy you're training at work: every single thing you tell them to do, they'll probably do wrong, but you have to pay attention and learn their specific fucked-up brain so you can anticipate their path of fuck-up

[-] FortifiedAttack@hexbear.net 31 points 3 days ago* (last edited 3 days ago)

The free models are much worse than the $500 per user/month enterprise ones. I have seen these be able to generate working features first hand at work, and I cannot deny that certain models are capable of implementing features when appropriate requirements are provided. To claim anything else would be to deny what I have seen with my own eyes.

However, therein lies the trap. Just because it is capable of achieving the provided task in one instance, doesn't mean that it always provides an appropriate answer or solution in all cases.

But those who have initially used it successfully tend to start believing its output uncritically. I've noticed this in myself when I tried it at work, and I think this is a basic human, heck, even animal, condition. You are naturally inclined to trust an entity that initially provides you with beneficial output. You become less critical, as the output often sounds informed and convincing, and in many cases provably works as well (especially when a robust testing framework exists inside the project; it's only through unit and integration tests that these AIs can even reliably implement features).

But this leads to an increasing reliance on the tech, and you stop being capable of arguing why the solution it generated works. You have to put in active effort to question what it's doing, and you have no way of knowing whether it's telling you the truth or lies, because it has no motive, and researching the facts can take so long that it completely defeats the point of automation. So it ends up being rather self-defeating in many cases, and can leave you less capable of solving problems yourself.
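The point above about testing frameworks is worth making concrete: when the human writes the spec as tests up front, generated code can be checked mechanically instead of taken on faith. A minimal sketch, where the `slugify` function and its expected behavior are invented for illustration:

```python
def slugify(title: str) -> str:
    """Candidate implementation -- could be hand-written or generated;
    either way, it only gets accepted if the tests below pass."""
    # Lowercase, replace every non-alphanumeric character with a space,
    # then join the remaining words with hyphens.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

def test_slugify():
    # Human-written spec: pins down behavior before any code is trusted.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  already-clean  ") == "already-clean"
```

The test is the part the human must still understand; the implementation is the part that can be regenerated and re-verified.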

I think the most useful application for it personally is to use it for debugging -- feed it a cryptic error message, and it will usually generate an answer that, while not necessarily accurate, can give you more pointers to find the true answer, much better than most search engines can.

[-] hades@feddit.uk 33 points 3 days ago

The thing is, our entire field is bad at what we do. For most of the software the cost of error is very low, and for a long time it was a very lucrative field that attracted a lot of people who were really bad at coding. So coding with AI is not significantly different from coding without AI, it’s just that there’s now a much faster, and much less ethically acceptable way of producing code.

[-] MeetMeAtTheMovies@hexbear.net 10 points 3 days ago

50% of developers have less than 5 years of experience and the number of new developers just keeps growing too. We’re a profession of amateurs with companies poaching the oldheads out from underneath each other.

[-] Acute_Engles@hexbear.net 29 points 3 days ago

I have a very close friend who is an engineer for programming (idk what the title is rn) at a very large company.

He says he has managed to keep one or two codebases "AI free" but when I asked if he has to review any AI code he said it's completely unavoidable and everyone uses it now. He's proud of the fact that they still require the coder to actually review the AI generated slop before passing it off to him.

It's bleak

[-] pinguinu@lemmygrad.ml 22 points 3 days ago

This, except in my case there's no reviewer; I either review or do the rest of my work. Someone on my team is really a broker with the AI, and he has such a bad grasp on the core of our codebase that I've had to spend several days refactoring the AI's vomit just to get something mildly performant (it's graphics-related code). It's clear when making new things that he just doesn't plan for the future, and every new piece of code is just a hack to deliver the feature, instead of at least discussing the code with the others. Worst part is that there is seemingly no end to this.

[-] Acute_Engles@hexbear.net 11 points 3 days ago

Yeah. My buddy is, luckily for him, able to dictate a lot of things still

[-] hotspur@hexbear.net 23 points 3 days ago

This is such a key point you make: the quality of search results and of the info available to solve a problem has degraded so far that you almost have to rely on web-search-enabled AI to do what you used to be able to do on your own, and in both cases you now have to engage a lot of extra effort in trying to discern whether the information is at all useful.

And like you say, the situation will only recursively get worse as the two feed on each other further destroying informational value.

[-] Andrzej3K@hexbear.net 11 points 3 days ago

Very much this - I used to rely a lot on tutorials, devlogs etc. to learn new patterns, but now search is so bad that LLMs are basically the only game in town

[-] chgxvjh@hexbear.net 22 points 3 days ago

With coding it's easier to deceive yourself that the AI is doing a good job. There are tons of tools out there that can detect various kinds of problems in code and the AI can call those tools and change stuff until the warnings go away. So the code might look alright on first glance. Then half the time people don't even understand the code they wrote themselves so they just look at changes across 50 different files and be like: fuck it, how much do I really care if this company goes up in flames?

[-] Blakey@hexbear.net 8 points 3 days ago

have you considered that computers are very clever and maybe deleting sys32.dll would work

[-] TreadOnMe@hexbear.net 20 points 3 days ago* (last edited 3 days ago)

It's fine for simple boilerplate programs. However, it will often make mistakes even for those, so you have to know what you are looking at. It still saves time, though idk whether, once you account for the actual energy usage etc., it would really be saving you time and money if the free money didn't exist.

However, I have seen people write big programs with it and then be surprised that they don't work. Even more worrying though is when they do work, but then I walk through the code with whoever wrote it and they cannot explain how or why it is working.

It's real engineering logic.

[-] YiddishMcSquidish@lemmy.today 2 points 2 days ago

Here comes a highly controversial opinion.

Let me preface this with: I'm anti-AI. I wish Iran had kept its mouth shut about destroying OpenAI's big facility and just done it. Seeing tech bros get the French Revolution treatment would bring a smile to my face. And I avoid using it at all, as best I can.

But I hit a breaking point yesterday with a not very popular Metroidvania I got on Humble Bundle called "Kingdom Shell". Great game with a glorious atmosphere, but some very poor pacing and a few confusing puzzles. I got through most of them, but one of the puzzles had me pulling my nonexistent hair out.

I tried normal searches, found one fairly comprehensive guide that was no help in this part specifically. I asked Gemini and I'll be damned if it didn't actually come up with a good answer.

I know my sample size of n=1 does not a p-value of ≤ .05 make, and I'm not changing my mind about using it more now. But in my one very specific instance it was a little help.

[-] buckykat@hexbear.net 3 points 2 days ago

Critical support to the slop generators in telling windows users to break their installs

[-] LaughingLion@hexbear.net 12 points 3 days ago* (last edited 3 days ago)

I've used it to create some simple scripts to do some tedious shit that I didn't feel like coding myself but nothing serious or professional. For example:

"Here is a big file that has a bunch of data in it but I only need points X,Y,Z, formatted in a JSON which I have provided an example of. Write me a simple python script to do that."

Works okay for that stuff. Always desk check it with edge cases.
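The one-off script described in that kind of prompt is only a few lines. A sketch of what it might look like, where the column names (`x`, `y`, `z`) and the CSV input format are assumptions for illustration, not from the comment:

```python
import csv
import json

def extract_points(in_path, out_path):
    """Pull only the x, y, z columns out of a big CSV and dump them
    as JSON. Column names and file layout are invented for this sketch."""
    points = []
    with open(in_path, newline="") as f:
        for row in csv.DictReader(f):
            points.append({
                "x": float(row["x"]),
                "y": float(row["y"]),
                "z": float(row["z"]),
            })
    with open(out_path, "w") as f:
        json.dump({"points": points}, f, indent=2)
    return points
```

Desk-checking the edge cases, as advised above, means feeding it an empty file, a row with a missing column, and a non-numeric value before trusting it on the real data.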

[-] ProletarianDictator@hexbear.net 3 points 2 days ago

LLMs do really well on short bash scripts, but they often presume so much about your system that you end up having to rewrite the script anyway.
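A sketch of that failure mode: a perfectly plausible generated-style script whose baked-in assumptions (directory layout, retention window, tools on PATH) are all invented here for illustration.

```shell
#!/bin/sh
# Generated-style chore script: compress logs older than 7 days.
# Typical LLM assumptions baked in:
#   - a hardcoded /var/log/<app>-style directory layout (path here is made up)
#   - find's day-granularity -mtime semantics
#   - gzip available on PATH
LOG_DIR="${1:-.}"
[ -d "$LOG_DIR" ] || exit 1   # generated scripts rarely bother to check
find "$LOG_DIR" -name '*.log' -mtime +7 -exec gzip {} \;
```

Each of those assumptions is exactly the kind of thing you discover only when it misbehaves on your machine, at which point you rewrite it by hand anyway.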

[-] ZWQbpkzl@hexbear.net 11 points 3 days ago

Anything that is even remotely a novel problem AI can't solve. It doesn't have the training data for your specific problem. At best it'll do a web crawl for you and summarize its findings.

If you want to really pull your hair out, take a look at AGENTS.md or SKILLS.md. State-of-the-art agentic coding practices: glorified README.md files. (The AI frequently doesn't bother to read them.)
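For anyone who hasn't seen one, a typical AGENTS.md really is just a README by another name; the contents below are invented for illustration:

```markdown
# AGENTS.md

## Build and test
- Run `npm ci && npm test` before proposing any change.

## Conventions
- TypeScript strict mode; never introduce `any`.
- Do not edit anything under `src/generated/`.
```

Whether the agent actually consults it before acting is, as noted, another matter.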

I will say one thing nice about LLMs: they are fairly "human" in the sense that they err in familiar ways. In a way, AI is automated human error.

[-] ProletarianDictator@hexbear.net 2 points 2 days ago

I have found LLMs quite disappointing when writing code.

LLMs are useful for learning new libraries, scaffolding starter projects, and maybe filling in a simple function body. But I rarely get purely generative output I would consider close to production-ready, even when it compiles or runs without error. To get anything that isn't garbage, you must be very precise and ask it to "implement [insert some formal data structure / algorithm / pattern] to do [specific task]" rather than asking it to produce code that does your thing. Even then, I find it more useful to ask for general strategies, related concepts, and some example code that would be useful for implementing what I want.

All of this requires a pretty substantial skepticism of the output that people hyping up AI tools completely lack. Most people use these tools to avoid the difficult thinking necessary to solve a problem, so why would they put in that same level of thinking to vet the output? And if you don't have enough knowledge of a framework, language, library, etc. to use it effectively or read and write the code yourself, you don't have the knowledge required to vet and maintain code produced by LLMs, let alone put it in production. I've had so many instances of LLMs writing code where it would require a computer science education to understand why it is a bad idea. Anyone with that knowledge is better off implementing the thing directly instead of figuring out how to massage their prompt or torture the output into something good.

LLMs repeatedly producing output you cannot or do not fully understand reinforces the view that your abilities are enhanced by the LLM. This, combined with the imposter syndrome that is rampant among devs, is going to result in a lot of deferring to LLMs and uncritically accepting their bad code.

Soon tons of mediocre devs will be producing mass quantities of code they're not capable or diligent enough to understand, resulting in huge, lumbering codebases full of bugs and bad design choices. In my career, the most common barrier to implementing anything or moving a project forward has been technical debt. LLMs are going to greatly increase the rate at which technical debt is produced and reduce the ability of people to tackle that technical debt, since they are no longer familiar with the codebase.

This phenomenon is why I think LLM code gen is going to be a net productivity drain.

As always, the core problem with LLMs is not that they are frequently incorrect, it is that them being correct often enough lulls humans into foregoing their due diligence, typically in favor of having a proprietary product serve as a substitute for their critical thinking.

This is not unique to programmers, as I now see tons of people citing ChatGPT or Gemini as if they were authoritative sources on anything. We will see the effects of this in all aspects of society.

[-] space_comrade@hexbear.net 11 points 3 days ago

It can help with tedious but relatively non complex work or maybe speed up some exploratory work, anything else and it's going to make ridiculous mistakes. It's a useful tool occasionally but nothing I'd lose sleep over if it disappeared.

[-] Liketearsinrain@lemmy.ml 10 points 3 days ago

They do. Most programmers think they're above average (there were actual statistics on this, maybe from a Stack Overflow survey) and are mediocre enough that they find it useful/faster long-term.

I'm statistically likely to be mediocre myself, but I would rather try to improve than rely on LLMs. Every single coworker I work with who is actually above average hates the forced AI usage.

[-] BGDelirium@hexbear.net 9 points 3 days ago

I'm using AI for the first time to make simple numbered lists with names. The lists vary from 100 to several hundred entries.

I have to repeatedly ask chatgpt to double and triple check its work and then end up manually counting, editing and doing a lot of the work anyways. Frustrating

[-] JustSo@hexbear.net 9 points 3 days ago
[-] BGDelirium@hexbear.net 2 points 2 days ago

In a nutshell: selling Magic (MTG) cards on informal Facebook gambling pages. People buy numbers for $1-$10 for high-value items, and I'm not computer-literate enough to use Excel/Google Sheets/spreadsheet programs.

ChatGPT helps a little with the rote task of making a basic list, but double-checking my work using AI to make sure I'm correctly counting the number of slots each person has bought has been more trouble than it's worth.

[-] tamagotchicowboy@hexbear.net 5 points 3 days ago

Only little bits and pieces, for projects where I have so many backups I'd laugh if the LLM fucked them up. I've noticed they're heavily trained on Python but near nothing on Pascal. I use GLM (DeepSeek, Kimi, etc.) mostly for coding; I get banned just looking at ChatGPT. I've abandoned Google like a one-way time capsule to 1997.

[-] ProletarianDictator@hexbear.net 2 points 2 days ago

Quality is noticeably worse for less-used languages and frameworks.

[-] segfault11@hexbear.net 10 points 3 days ago

The AI is right. Just delete that shit and install FreeBSD.

[-] Carl@hexbear.net 10 points 3 days ago* (last edited 3 days ago)

i said this before, but it's very good at making you feel like you're accomplishing something. you will inevitably hit a wall with any ai project where it just can't meaningfully contribute anymore. you can work around this issue to an extent by getting really good at, you know, project management, splitting your thing up into smaller and smaller chunks, but eventually you'll cross the second wall, which is where you're putting in way more effort prompting the AI than you would have spent just doing it yourself

I found that in coding, same as with creative writing, the best use for the AI is as a parrot. if you're hitting writer's block, or if you're trying to flesh out an idea, all you need is something to kind of spit your own words back at you to help shake yourself out of it, so using an AI as a sounding board for your ideas is a somewhat valuable use. or using it as a search engine, but even the search engine use is only there because search engines have gotten so shitty

[-] barrbaric@hexbear.net 9 points 3 days ago

My coworkers range from "Claude can find errors in my code" to "Yeah I just copy-paste everything from chatgpt". Those like the former at least can still submit legible code (for now). Those like the latter submit random gibberish and have no idea how it works.

[-] Damarcusart@hexbear.net 6 points 3 days ago

Those like the latter submit random gibberish and have no idea how it works.

That's ok because neither do they!

[-] homhom9000@hexbear.net 3 points 2 days ago

So many helper functions. All I said was: use a JSON file to create SQL insert statements and put the date in timestamp format, expecting it to use to_timestamp. It created a helper function for parsing each date part, then another to cast the result with to_timestamp.
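For contrast, the direct approach described there (no date-part helpers, just to_timestamp with a format mask) is only a few lines. A sketch where the table name, columns, and JSON shape are all invented for illustration:

```python
import json

def inserts_from_json(path, table="events"):
    """Turn a JSON array of records into INSERT statements, handing the
    date string straight to Postgres's to_timestamp instead of parsing
    each date part by hand."""
    with open(path) as f:
        rows = json.load(f)
    stmts = []
    for r in rows:
        stmts.append(
            f"INSERT INTO {table} (name, created_at) VALUES "
            f"('{r['name']}', to_timestamp('{r['date']}', 'YYYY-MM-DD HH24:MI:SS'));"
        )
    return stmts
```

(Real code would use parameterized queries rather than string interpolation; this is just to show how little machinery the task actually needs.)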

[-] silentjohn@lemmy.ml 10 points 3 days ago

It's not that different from using Stack Overflow for parts or boilerplate code (since AI probably just stole it from there anyway). So you still need to know what's going on, unless you literally just keep throwing prompts at every error for 3 hours until it magically works.

I use AI mostly to troubleshoot all of the vague errors that come out of python or SQL, not to write my entire code. It's a [relatively shitty] tool, not an 'I Win' button that everybody claims it is.

Similarly, I like having it summarize search results and I can click into the actual relevant links. But yea it's pretty garbage most of the time. I'm definitely on team 'fuck ai'; I lived without it before, I can live without it again

[-] save_vs_death@hexbear.net 9 points 3 days ago

for what it's worth i'd rather program a filesystem from scratch than troubleshoot someone's cursed computer and the janky setup they need to barely run a video game, but that's beside the point. AI is abysmal dogshit, yes

[-] Kultronx@lemmygrad.ml 7 points 3 days ago

hmmm maybe user error. deepseek is really useful in helping with troubleshooting and linux stuff

[-] gramxi@hexbear.net 10 points 3 days ago

I still take a peek at /r/selfhosted sometimes and the situation is dire. The mods have completely given in to the slop trough.

[-] LaGG_3@hexbear.net 7 points 3 days ago

It's probably related to the reason why your Start button took a vacation lol

[-] take_five_moments@hexbear.net 9 points 3 days ago

i've been using deepseek to make a d&d module to play with some friends. with the amount of editing and tweaking i've had to do on the prompts and what it spits out, imagining people using it for coding freaks me out a little

[-] fox@hexbear.net 10 points 3 days ago

Keep in mind that it'll always regress to the most average output when you try to use it for creative endeavors. It sandblasts ideas into a nice round shape that's identical to every other idea it works on.

this post was submitted on 04 Apr 2026
112 points (99.1% liked)
