[-] came_apart_at_Kmart@hexbear.net 50 points 1 day ago

my clue that the trumpets of the AI collapse are tuning up is that, last week, the least tech savvy person I know in my cohort was telling me, the person everyone they know goes to for random technical assistance/context, about how powerful "AI" (LLM) is and how it's about to take over everything.

it's like that bit about how, when the shoeshine kid and your gardener have stock tips, it's time to get out of the market because now literally everyone is regurgitating the "New Paradigm!" cliches.

[-] yogthos@lemmygrad.ml 18 points 1 day ago

I imagine the flop of ChatGPT 5, along with it becoming clear that current-gen models aren't living up to expectations, might be starting to cool investor enthusiasm.

[-] Dirt_Possum@hexbear.net 16 points 23 hours ago

I've been having an ongoing argument the past month with a 70-something step-relative I see often, who has always come to me for computer advice, about this exact thing. I've tried to let it go many times, but she keeps hammering at it, even bringing it up out of the blue. Since you mentioned something similar, forgive me for popping in with my own rant here, but it has been really getting on my nerves.

She absolutely will not hear that "AI" does not mean it is actually intelligent, but is rather a marketing scam with zero chance of developing general intelligence. It's been disappointing because, like I said, she used to trust me about computer stuff, but now she angrily asks me "do you even know about <some pop sci "expert"> and the projects they're working on?!", namedropping all these supposedly "respected" scientists she's been reading about the impending AI apocalypse and thinking I'm uninformed for not knowing them. It's like shit lady, I used to argue with Ray Kurzweil's singularity nuts 12 years ago about this same sort of garbage. She's actually fairly youthful in her views for a boomer: she's a sci fi fan, prides herself on being socially progressive, and frequently talks about how much she loves science, but has always had a real "woowoo" new-agey bent to it.

The conversation first came up because she told me about how she's been literally losing sleep with actual insomnia, thinking about AI with respect to what it will mean for her grandkids, what will happen to them in a world where machines become "ever more" intelligent, repeating talking points she heard somewhere about how "once machines become intelligent, they'll have no use for us and only see us as a threat." So many brainworms to sift through, from colonialist thinking to buying into "AI" hype. I did my best to disabuse her of this belief to begin with partly just to help her sleep better at night, though I admit I did remind her there are many many other real things to worry about regarding the world her grandkids will be inheriting. But she's sticking to it vehemently. She's been going off on me about how "all the top thinkers on the subject" agree with her, going on about how even Stephen Hawking thought AI would be a disaster (she thought that was an ace in the hole because I used to like discussing theoretical physics with her and never had the heart to differentiate for her the real but niche contributions Hawking made from his celebrity). Rather than think about what I said as someone who tends to know more about this kind of thing, she has decided I don't actually know anything.

It really has been a trip watching how the propaganda/hype mill and the shoehorning of "AI" into everything has broken so many brains who only 5 years ago would have laughed at anyone else for thinking the plot of Terminator was really happening.

[-] Frogmanfromlake@hexbear.net 10 points 22 hours ago

Better keep an eye on her. I know a number of people who fit that description and they gradually fell into a right-wing rabbit hole. Usually anti-vax or anti-Covid precautions were the final stepping stones.

[-] TankieTanuki@hexbear.net 40 points 1 day ago
[-] Frogmanfromlake@hexbear.net 7 points 22 hours ago

Have any of the other three made a return?

[-] Damarcusart@hexbear.net 6 points 15 hours ago

The Charleston is basically just a Fortnite dance, so maybe half a point?

[-] LangleyDominos@hexbear.net 20 points 1 day ago

Unfortunately it will probably be like the dotcom crash. Websites/services only became stronger afterwards, becoming inseparable from daily life. If a crash happens this year, the Facebook of AI is coming around 2030.

[-] peeonyou@hexbear.net 36 points 1 day ago

Honestly, I can't imagine these LLMs are actually contributing any sort of benefit when you consider the amount of trash you have to wade through and fix once they've done what they've done. For every quickly typed-up professional e-mail or procedure they produce, they're wasting multiple hours of programmer time by introducing BS into codebases and trampling over coding conventions, which then has to be reviewed and fixed. I imagine it will get to the point where AI can do things on its own without the hallucinations and the flat-out errors and whatnot, but it ain't now and I don't think it's anytime soon.

[-] yogthos@lemmygrad.ml 25 points 1 day ago

I find they have practical uses once you spend the time to figure out what they can do well. For example, for coding, they can do a pretty good job of making a UI from a json payload, crafting SQL queries, making endpoints, and so on. Any fairly common task that involves boilerplate code, you'll likely get something decent to work with. I also find that sketching out the structure of the code you want by writing the signatures for the functions and then having LLM fill them in works pretty reliably. Where things go off the rails is when you give them too broad a task, or ask them to do something domain specific. And as a rule, if they don't get the task done in one shot, then there's very little chance they can fix the problem by iterating.
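The "write the signatures, let the LLM fill them in" workflow described above might look something like this sketch (all names and the SQL shape here are illustrative, not from any real project):

```javascript
// Sketch of the "signatures first, LLM fills in" workflow.
// Step 1: the developer writes the signatures and contracts up front:
//   buildUserQuery(filters) -> SQL string
//   parseUserRow(row)       -> plain user object
// Step 2: the LLM fills in the bodies, which the developer then reviews.

function buildUserQuery(filters) {
  // Build a positional-placeholder WHERE clause from a filters object.
  const clauses = Object.keys(filters).map((key, i) => `${key} = $${i + 1}`);
  const where = clauses.length ? ` WHERE ${clauses.join(" AND ")}` : "";
  return `SELECT id, name, email FROM users${where}`;
}

function parseUserRow(row) {
  // Normalize a raw DB row into the shape the UI expects.
  return { id: Number(row.id), name: row.name, email: row.email ?? null };
}
```

Because the signatures pin down the inputs and outputs, it's easy to spot when the generated body drifts from the contract.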

They're also great for working with languages you're not terribly familiar with. For example, I had to work on a Js project using React, and I haven't touched either in years. I know exactly what I want to do, and how I want the code structured, but I don't know the nitty gritty of the language. LLMs are a perfect bridge here because they'll give you idiomatic code without you having to constantly look stuff up.

Overall, they can definitely save you time, but they're not a replacement for a human developer, and the time saving is mostly a quality of life improvement for the developer as opposed to some transformational benefit in how you work. And here's the rub in terms of a business model. Having what's effectively a really fancy autocomplete isn't really the transformative technology companies like OpenAI were promising.

[-] Chana@hexbear.net 14 points 1 day ago

With React I would be surprised if it was really idiomatic. The idioms change every couple years and have state management quirks.

[-] yogthos@lemmygrad.ml 6 points 1 day ago

It uses hooks and functional components which are the way most people are doing it from what I know. I also find the code DeepSeek and Qwen produce is generally pretty clear and to the point. At the end of the day what really matters is that you have clean code that you're going to be able to maintain.

I also find that you can treat components as black boxes. As long as it's behaving the way that's intended it doesn't really matter how it's implemented internally. And now with LLMs it matters even less because the cost of creating a new component from scratch is pretty low.

[-] jorge@lemmygrad.ml 1 points 9 hours ago

I hadn't heard of Qwen. I have only used Deep Seek, and not much. What are Qwen's advantages over Deep Seek? And is there any other model from BRICS countries I should look for? Preferably open source.

And do you recommend a local solution? For which use case? I have a mid-range gamer laptop. IIRC it has 6GiB VRAM (NVIDIA).

[-] yogthos@lemmygrad.ml 1 points 8 hours ago

I've found Qwen is overall similar; their smaller model that you can run locally tends to produce somewhat better output in my experience. Another recent open source model that's good at coding is GLM https://z.ai/blog/glm-4.5

6GB VRAM is unfortunately somewhat low; you can run smaller models, but the quality of output is not amazing.

[-] Chana@hexbear.net 3 points 18 hours ago

Does it memoize with the right selection of stateful variables by default? I can't imagine it does without a very specific prompt or unless it is very simple boilerplate TODO app stuff. How about nested state using contexts? I'm sure it can do this but will it know how best to do so and use it by default?
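For readers outside React land, the "right selection of stateful variables" concern can be shown with a framework-free sketch of dependency-keyed memoization (the idea behind React's `useMemo`). Everything here is illustrative; the failure mode is that omitting a stateful input from the deps list silently serves stale results:

```javascript
// Minimal dependency-keyed memoizer, framework-free.
function makeMemo() {
  let lastDeps = null;
  let lastValue;
  return function memo(compute, deps) {
    const changed =
      lastDeps === null ||
      deps.length !== lastDeps.length ||
      deps.some((d, i) => !Object.is(d, lastDeps[i]));
    if (changed) {
      lastValue = compute(); // recompute only when a dependency changed
      lastDeps = deps;
    }
    return lastValue;
  };
}

// Correct deps list: includes both stateful inputs, so a change in
// `rate` triggers a recompute. Dropping `rate` from the array would
// return a stale total -- exactly the bug a prompt has to guard against.
const memo = makeMemo();
let calls = 0;
const total = (items, rate) =>
  memo(() => {
    calls++;
    return items.reduce((s, x) => s + x, 0) * rate;
  }, [items, rate]);
```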

In my experience, LLMs produce a less repeatable and correct version of what codegen tools do, more or less. You get a lot of repetition and inappropriate abstractions.

Also just for context, hooks and functional components are about 6-7 years old.

[-] yogthos@lemmygrad.ml 3 points 16 hours ago

I tend to use it to generate general boilerplate. Like say I have to talk to some endpoint and I get a JSON payload back. It can figure out how to call the endpoint, look at the payload, and then generate a component that will render the data in a sensible way. From there, I can pick it up and add whatever specific features I need. I generally find letting these things do design isn't terribly productive, so you are better off deciding on how to manage state, what to memoize, etc. on your own.
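A framework-free stand-in for the kind of boilerplate described above: take a JSON payload and render its fields in some sensible default way. In a real React project this would be a component; here it returns an HTML string so the idea is visible without a framework, and all names are hypothetical:

```javascript
// Generate a default rendering for an arbitrary JSON payload -- the sort
// of starter component one might have an LLM produce, then extend by hand.
function renderPayload(payload) {
  const rows = Object.entries(payload)
    .map(([key, value]) => {
      // Stringify nested structures; show primitives directly.
      const shown =
        typeof value === "object" && value !== null
          ? JSON.stringify(value)
          : String(value);
      return `<dt>${key}</dt><dd>${shown}</dd>`;
    })
    .join("");
  return `<dl>${rows}</dl>`;
}
```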

I also find the quality of the tools is improving very quickly. If you haven't used them in half a year or so, your experience is already dated. You get by far the biggest bang for your buck with editor integrated tools that can run MCP, where they can run code and look at output.

Finally, I personally don't see anything wrong with hooks/functional components even if there's already a new fad in Js land. The churn is absolutely insane to me, and I frankly don't understand how people keep up with this. You can start a project in Js, and by the time you finish it the Js world has already moved on to some new bullshit.

I used to work with ClojureScript when I needed frontend functionality. There's a React wrapper called Reagent that's basically a better version of hooks/functional components, and it has worked that way for over a decade. In that time, React itself went through a dozen different ways of doing things. The value gained has been rather unclear to me.

[-] Chana@hexbear.net 2 points 15 hours ago

Yes, I'm sure it can do a lot of boilerplate. I'm just saying I doubt it is very idiomatic. It is essentially a souped-up regurgitation machine drawing from a large collection of open source code of wildly varying age, quality, and documentation.

This can be fine for many purposes but if it is for a substantial project that other people will need to maintain I would suspect it is a technical debt generator. As the saying goes, code is read more than it is written. Writing the code is usually the easy part. Producing a maintainable design and structure is the hard part. That and naming things.

[-] yogthos@lemmygrad.ml 1 points 9 hours ago

I mean all code is technical debt in the end, and given how quickly things move in Js land, it doesn't matter whether you're using LLMs or writing code by hand. By the time you finish your substantial project, it's pretty much guaranteed that it's legacy code. In fact, you'll be lucky if the libraries you used are still maintained. So, I don't really see this as a serious argument against using LLMs.

Meanwhile, as you note, what makes code maintainable isn't chasing latest fads. There's nothing that makes code written using hooks and functional components inherently less maintainable than whatever latest React trend happens to be.

And as I pointed out earlier, LLMs change the dynamic here somewhat because they significantly lower the time needed to produce certain types of code. As such, you don't have to be attached to the code, since you can simply generate a new version to fit new requirements.

Where having good design and structure really matters is at the high level of the project. I find the key part is structuring things in a way where you can reason about individual parts in isolation, which means avoiding coupling as much as possible.

[-] Chana@hexbear.net 1 points 5 hours ago

I mean all code is technical debt in the end, and given how quickly things move in Js land, it doesn't matter whether you're using LLMs or writing code by hand.

I just explained why design and maintainability are the hard part and something LLMs don't do. LLMs lead to the bad habit of skipping these things, which junior devs do all the time, wasting a lot of resources. Just like a junior dev writing spaghetti can make a middle manager very happy because it's "delivered on time", they'll eventually have to pay in the form of maintenance far more than if better practices had been used.

Writing boilerplate React components that fetch JSON from APIs is the easy stuff that takes very little time. If you throw in intermediate things (like basic security) you will likely need to spend more time reviewing its slop than just doing it yourself. And it will likely be incapable of finding reasonable domain abstractions.

If it's for a throwaway project none of this really matters, of course.

By the time you finish your substantial project, it's pretty much guaranteed that it's legacy code. In fact, you'll be lucky if the libraries you used are still maintained.

If it is a production system with any prioritization of security it will need to be regularly maintained, including with library updates. If a library becomes unmaintained then one either needs to use a different one or start maintaining it themselves.

So, I don't really see this as a serious argument against using LLMs.

There are different ways to make code unmaintainable. It seems like you're saying writing code in JavaScript means you always do a rewrite when it comes time to do maintenance work (it moves fast!). This is just not true and is something easily mitigated by good design practices. And in terms of any org structure, you are much less likely to get a green light on a rewrite than on maintaining the production system that "works".

Meanwhile, as you note, what makes code maintainable isn't chasing latest fads. There's nothing that makes code written using hooks and functional components inherently less maintainable than whatever latest React trend happens to be.

I'm not sure what you mean by this. When I said hooks and functional components were 6 years old it was in the context of doubting whether LLMs are up on modern idioms. You said it wrote idiomatic code, citing 6-7 year old idioms. That's not great evidence because they are long-established over several major version releases and would be a major input to these LLMs. I mentioned a few newer ones and asked whether they were generated for your code.

React written with hooks and functional components is more maintainable than legacy implementations because it will match the official documentation and is a better semantic match to what devs want to do.

And as I pointed out earlier, LLMs change the dynamic here somewhat because they significantly lower the time needed to produce certain types of codes. As such, you don't have to be attached to the code since you can simply generate a new version to fit new requirements.

I don't get attached to code...

LLMs do the easy part and then immediately require you to do the harder parts (review and maintenance) or scrap what they generate for the hardest parts (proper design and abstractions). Being satisfied with this kind of output really just means having no maintenance plans.

Where having good design and structure really matters is at the high level of the project. I find the key part is structuring things in a way where you can reason about individual parts in isolation, which means avoiding coupling as much as possible.

It matters at all levels, right down to the nouns and adjectives used to describe variables, objects, database tables, etc. Avoiding coupling will mean knowing when to use something like dependency injection, which I guarantee LLMs will not do reliably, maybe even not at all unless it is the default pattern for an existing framework. Knowing to use dependency injection will depend on things like your knowledge of what will need to be variable going forward and whether it is easier to reason about behavior using that pattern in your specific context. If using domain model classes, are they implementing an abstract method or are they passed the implementation and just know how to call it? Etc etc.

[-] yogthos@lemmygrad.ml 1 points 4 hours ago

I just explained why design and maintainability are the hard part and something LLMs don’t do.

Ok, but I've repeatedly stated in this very thread that design is something the developer should do. Are you even reading what I'm writing here?

Writing boilerplate React components that fetch JSON from APIs is the easy stuff that takes very little time. If you throw in intermediate things (like basic security) you will likely need to spend more time reviewing its slop than just doing it yourself. And it will likely be incapable of finding reasonable domain abstractions.

I have to ask whether you actually worked with these tools seriously for any period of time, because I have and what you're claiming is directly at odds with my experience.

If it is a production system with any prioritization of security it will need to be regularly maintained, including with library updates. If a library becomes unmaintained then one either needs to use a different one or start maintaining it themselves.

Not sure what this has to do with code written by LLMs. If I have a React component it has fuck all to do with me updating libraries in the project. Furthermore, LLMs are actually quite decent at doing mechanical tasks like updating code to match API changes in libraries.

There are different ways to make code unmaintainable. It seems like you’re saying writing code in JavaScript means you always do a rewrite when it comes time to do maintenance work (it moves fast!).

No, I'm saying the exact opposite which is that you shouldn't try to chase fads.

I’m not sure what you mean by this. When I said hooks and functional components were 6 years old it was in the context of doubting whether LLMs are up on modern idioms. You said it wrote idiomatic code, citing 6-7 year old idioms. That’s not great evidence because they are long-established over several major version releases and would be a major input to these LLMs. I mentioned a few newer ones and asked whether they were generated for your code.

You might have to clarify which particular fad you're on currently, because I haven't been keeping up. However, I do see hooks and functional components used commonly today.

I don’t get attached to code…

Everybody gets attached to code, it's inevitable. If you have a bunch of code to solve a particular problem and it takes a lot of effort to rewrite it, then you're not going to throw it away easily. When your requirements start changing, it makes sense to try to adapt existing code to them rather than write code from scratch.

It matters at all levels, right down to the nouns and adjectives used to describe variables, objects, database tables, etc.

It really doesn't if you have good interfaces between components. You don't inspect all the code in libraries you include, you focus on the API of the library instead. The same logic applies here. If you structure your project into isolated components with clear boundaries, then your focus is on how the component behaves at the API level.
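The "judge the component by its API, not its internals" point can be sketched as a module whose internal representation is hidden behind a closure, so callers (and tests) only ever touch the public surface. The names here are purely illustrative:

```javascript
// A component with a narrow public API. The internal data layout could
// be regenerated or rewritten (array, Map, whatever) without affecting
// any caller, because nothing outside can reach it.
function createCart() {
  const lines = new Map(); // internal detail, not exposed

  return {
    add(sku, qty = 1) {
      lines.set(sku, (lines.get(sku) ?? 0) + qty);
    },
    count() {
      let n = 0;
      for (const qty of lines.values()) n += qty;
      return n;
    },
  };
}
```

As with a third-party library, you verify behavior at the boundary; how the inside is written (by hand or by an LLM) is invisible as long as the API contract holds.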

Avoiding coupling will mean knowing when to use something like dependency injection, which I guarantee LLMs will not do reliably, maybe even not at all unless it is the default pattern for an existing framework.

Again, I'm not suggesting using LLMs to do design. My whole point was that you do the design, and you use LLM to fill in the blanks. In this context, you've already figured out what the component will be and what scope it has, the LLM can help create features within that scope.

Knowing to use dependency injection will depend on things like your knowledge of what will need to be variable going forward and whether it is easier to reason about behavior using that pattern in your specific context.

Also, things like dependency injection are an artifact of OO programming style, which I find to be an anti-pattern to begin with. With functional style, you naturally pass context as parameters, and you structure your code using pure functions that take some arguments and produce some result. You can snap these functions together like Lego pieces and you can test them individually. This meshes quite well with using LLMs to generate code and evaluate whether it's doing what you want.
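A minimal sketch of that functional alternative to classic dependency injection: instead of wiring collaborators into an object graph, the "context" (here a tax rate) is passed as a plain argument, so each function stays pure and testable in isolation. All names and numbers are illustrative:

```javascript
// Pure function: the context arrives as a parameter instead of being
// injected into an object at construction time.
function priceWithTax(amount, { taxRate }) {
  return Math.round(amount * (1 + taxRate) * 100) / 100;
}

// Pure functions compose: same inputs, same output, no hidden state,
// which also makes LLM-generated pieces easy to verify one at a time.
function receiptLine(item, ctx) {
  const total = priceWithTax(item.price * item.qty, ctx);
  return `${item.name} x${item.qty}: ${total}`;
}
```

Swapping the tax rule for a test double is just passing a different context object; no container or framework is involved.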

If using domain model classes, are implementing an abstract method or are they passed the implementation and just know how to call it? Etc etc.

Have you written code in styles other than OO?

[-] Andrzej3K@hexbear.net 6 points 1 day ago

I think that's going to change now though, as a result of LLMs. We're going to be stuck with whatever was the norm when the data was harvested, forever

[-] Chana@hexbear.net 2 points 18 hours ago

Assuming the use of these tools is dominant over library developers. Which I don't think it will be. But they may write their libraries in a way that is meant to be LLM-friendly. Simple, repetitious, and with documentation and building blocks that are easily associated with semi-competent dev instructions.

[-] Andrzej3K@hexbear.net 4 points 1 day ago

I find Gemini really useful for coding, but as you say it's no replacement for a human coder, not least because of the way it fails silently e.g. it will always ime come up with the hackiest solution imaginable for any sort of race condition, so someone has to be there to say WTF GEMINI, ARE YOU DRUNK. I think there is something kind of transformative about it — it's like going from a bicycle to a car. But the thing is both need to be driven, and the latter has the potential to fail even harder

[-] yogthos@lemmygrad.ml 6 points 1 day ago

Exactly, it's a tool, and if you learn to use it then it can save you a lot of time, but it's not magic and it's not a substitute for understanding what you're doing.

[-] Chana@hexbear.net 23 points 1 day ago

The most useful application is in making garbo marketing images for products that used to be 100% photoshopped instead. Cool your fake product has an "AI" water splash instead of one from Getty. Nothing of value gained or lost except a recognition of how meaningless it is.

[-] MolotovHalfEmpty@hexbear.net 4 points 22 hours ago

Also, the reason all the hype and 'culture' around these products focus on individual end users (write me a poem, be a chatbot, make me Pixar art etc) is because they're good at being flexible, at applying the algorithm to different shallow tasks. But when it comes to specific, repeated, reliable use cases for businesses they're much much worse. The error rates are high, their actual capacity for 'institutional memory' and reliable repetition is poor, and if you're replicating a known process previously done by people you still have to train or recruit new people to get the best out of the tech.

[-] happybadger@hexbear.net 34 points 1 day ago
[-] BodyBySisyphus@hexbear.net 16 points 1 day ago

Yeah, the next nightmare is starting to get tired of waiting. doomer

[-] jackmaoist@hexbear.net 11 points 1 day ago

They can make a bubble about Quantum Computing as a treat.

[-] bobs_guns@lemmygrad.ml 1 points 7 hours ago

There's literally no reason to do that, so it will probably happen.

[-] frogbellyratbone_@hexbear.net 25 points 1 day ago

this isn't me fanboying LLM corporations. pop pop pop. this article is fucking stupid though.

On Tuesday, tech stocks suffered a shock sell-off after a report from Massachusetts Institute of Technology (MIT) researchers warned that the vast majority of AI investments were yielding “zero return” for businesses.

no they didn't. :// there was a small 1.5% "shock sell-off" (fucking lol) before rebounding. they're only down 0.5% over the past 5-days.

even softbank, who the article focuses on, is up 36.5% (god damn) over the past month. that's huge.

this week’s sell-off has yet to shift from a market correction to a market rout

omg stfuuuuuuuuuu. it's -10% for a correction, and we aren't even at 0.5% of that.

[-] Carl@hexbear.net 34 points 1 day ago

come to think of it, "the market responded to an MIT study suggesting that the technology is worthless" is far too coherent for the stock market. The crash/bubble pop will come because a black cat crossed someone's path or a meteor is seen in the sky over the Bay Area.

[-] Formerlyfarman@hexbear.net 13 points 23 hours ago

It's always those "Comet sighted" events.

[-] Florn@hexbear.net 11 points 22 hours ago

I wish I lived in more enlightened times.

[-] Rom@hexbear.net 15 points 1 day ago

LET'S FUCKING GOOOOOOOO lets-fucking-go

this post was submitted on 22 Aug 2025
73 points (100.0% liked)

technology

23925 readers
83 users here now

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Rules:

founded 5 years ago
MODERATORS