[-] SootySootySoot@hexbear.net 11 points 1 day ago* (last edited 1 day ago)

Also, "seeing a financial return" is not equivalent to "using AI was a good idea". There are multiple companies I've interacted with that, e.g., use AI as their support tool and have drastically cut their 'real' support team. And it's so furiously useless that I quickly come to hate the company, because it can't support shit.

A lot of these companies who are seeing a financial return today, will quickly see that go away as their business stagnates/shrinks from offering a shittier service. And a lot of those that don't will be companies that can just get away with infinite enshittification and externalising their costs (eg natural monopolies, stuff people need to live, etc), thus making all of life worse even if their line goes up.

From what I've heard, this is the same ol' story as with any other tech bandwagon of the past 15 years, be it blockchain, big data, predictive algorithms, etc. I saw this cycle first-hand when I was working on tech projects:

Company execs hear at some conference that this is the new big thing and that their business is going to fail unless they get in on the ground floor. Then some sales reps from a big corporate tech company (SAP, Oracle, Google, MS) talk them into spending huge amounts of money on all the bells and whistles before anyone has figured out how to implement the tech in their company. Finally, the consultants come in. In the best case, there's a very specific use case for the tech within the company and it saves/makes quite a bit of money; in the worst case, it's a botched implementation that causes more pain than it solves. Either way, the modest pay-offs are nowhere near enough to recoup the initial investment. Rinse and repeat.

[-] BodyBySisyphus@hexbear.net 93 points 2 days ago

Pretty sure there are returns, they're just negative:

[-] jackmaoist@hexbear.net 81 points 2 days ago

This is like a perfect setup to commit fraud. Just blame AI for cooking your sheets and be done with it.

Lots of people have been raising the alarm about this. AI is just a convenient excuse to remove humans from accountability and decision making, so that nobody can be held liable.

[-] Tabitha@hexbear.net 47 points 2 days ago

Is AI a valid defense, or is it a confession to negligence?

[-] Le_Wokisme@hexbear.net 49 points 2 days ago

depends how good your lawyer is

[-] invalidusernamelol@hexbear.net 2 points 1 day ago

Dude, Pandas and R exist... And they're incredibly easy to use... Why the fuck was no one even spot checking the numbers
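The kind of spot check being asked for here really is a few lines of pandas. A minimal sketch (column names and data are hypothetical, just to illustrate the idea):

```python
import pandas as pd

# Hypothetical export: spot-check the sheet before anyone acts on it.
df = pd.DataFrame({
    "invoice_id": [1, 2, 2, 3],
    "amount": [100.0, 250.0, 250.0, -40.0],
})

# Duplicated invoice IDs and negative amounts are classic signs
# that a sheet has been cooked or mangled somewhere upstream.
dupes = df[df.duplicated("invoice_id", keep=False)]
negatives = df[df["amount"] < 0]

print(len(dupes), len(negatives))
```

Even this much, run once against whatever the AI produced, would have flagged the problem.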

[-] BodyBySisyphus@hexbear.net 2 points 1 day ago

You'd be amazed how many of the "numbers" people still haven't mastered Excel.

[-] invalidusernamelol@hexbear.net 2 points 1 day ago

I'm the solo developer at my company of 50 people. Literally everything we use was written by me because I got fed up with the "numbers" guys fucking up spreadsheets.

It's all SQL now baby, so they literally can't get rid of me because they don't even know how it works, only that it does lol
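The appeal of moving the numbers work into SQL is that the same query gives the same answer every time, with no dragged-down formulas to silently break. A small illustrative sketch (table and data are made up) using Python's built-in sqlite3:

```python
import sqlite3

# Hypothetical example: the aggregation a "numbers guy" might botch in a
# spreadsheet becomes one repeatable, auditable query in SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
con.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "east", 100.0), (2, "east", 250.0), (3, "west", 80.0)],
)

# One query, deterministic result, no cells to fat-finger.
rows = con.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('east', 350.0), ('west', 80.0)]
```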

[-] SuperZutsuki@hexbear.net 8 points 1 day ago* (last edited 1 day ago)

It's very telling that they just implemented the AI without giving its answers any sanity checks at the beginning. They could have caught it on day one, but no, it's magic and checking would be a waste of time brainworms

[-] TraschcanOfIdeology@hexbear.net 4 points 1 day ago* (last edited 1 day ago)

I mean, it could've worked well at the beginning, then fallen off the rails for some reason or another.

That's the dumb and scary thing about AI stuff: it might work today, it might work for years (if you're lucky), but every time you execute a prompt you're rolling the dice on whether the mystery box will decide to just make up some shit from here on out. If you need a person to check the AI's output to make sure it's not hallucinating, you might as well cut the AI out of the loop altogether and use the checker's output from the get-go.

[-] BodyBySisyphus@hexbear.net 5 points 1 day ago

It drives me absolutely bonkers that there are smart people out there groveling and scraping for jobs while gormless jokers like this have secure six-figure salaries.

[-] Johnny_Arson@hexbear.net 29 points 2 days ago

There are more guardrails, but the company I work for relies heavily on Salesforce and I wonder if this is applicable. I don't care, I missed my bus and said fuck it and called in sick.

[-] TraschcanOfIdeology@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

From what I know about Salesforce, it depends on how heavily the company has gone in on AI stuff. By itself, Salesforce is just a client database with some extra things on top, but if you're using AI to write reports or analyze data, you might as well ask a magic 8-ball.

[-] Johnny_Arson@hexbear.net 1 points 1 day ago

They want us to engage with "all the tools it offers". I haven't been directed specifically to deal with the analytics part of it, but I'm sure the actual field reps have been. I mostly do customer-service-side stuff: processing orders/returns and assisting the remote sales team. I absolutely loathe its "genius" AI-powered search functions, which I have to use constantly. It can't even do simple, intuitive things. If I'm searching for the name of the client I just spoke to so I can log my activity, it can't figure out that I'm looking for the Bob Smith on the contact card I'm already on; instead I have to open the full list of Bob Smiths and find the one for that specific company. Matching by context is one of the few things you'd expect an LLM to do well.

[-] TraschcanOfIdeology@hexbear.net 2 points 1 day ago* (last edited 1 day ago)

They want us to engage with "all the tools it offers"

That just sounds like management overpaid for a piece of software they don't really understand and want people to spend their day throwing shit at the wall to see what sticks, no matter how difficult it makes simple tasks. If you want to implement a piece of tech in a process, you have to very specifically define which parts of that software will be used and how, otherwise it's a headache for everyone involved. It's like giving a set of knives to someone who mostly chops vegetables, and asking them to engage with the knives it offers, even though they have no use for a jamón slicing knife.

Idk much about Salesforce tbh but what you describe does sound like one of those legacy ways of doing something that has worked the same way for 25 years even though it makes no sense, but it would be a disaster if someone changed it to make sense. Now you just put a chatbot in charge of it, and blame the user for not being able to prompt it right.

[-] Johnny_Arson@hexbear.net 2 points 1 day ago

It is 100% the first part with a little bit of the second part.

[-] EveningCicada@hexbear.net 64 points 2 days ago

you just haven't invested enough bro come on the singularity is right around the corner do another venture capital surge i promise it will be worth it

[-] LaGG_3@hexbear.net 6 points 2 days ago

Yeah, the singularity of the economic black hole lmao

[-] Goblinmancer@hexbear.net 35 points 2 days ago

Don't worry, it's all "speculated ROI" now, and when shit goes up in flames they'll give the CEOs 10 billion dollars while firing everyone else.

[-] THEPH0NECOMPANY@hexbear.net 22 points 2 days ago

That was supposed to be what AI was for in the first place: an excuse to cut staff. smh, damn capitalists don't even know their own scams anymore

[-] WafflesTasteGood@hexbear.net 45 points 2 days ago

A mere 12 percent of CEOs reported that it’d accomplished both goals.

That 12 percent is either full of shit, was running things like garbage to begin with, or the shitstorm just hasn't hit them yet.

[-] yogthos@lemmygrad.ml 47 points 2 days ago

I actually think around 10% success rate sounds about right here. There are niches where this tech works well, but it's being applied everywhere indiscriminately. So it makes sense that most deployments fail, but a small percentage actually finds the right niche.

[-] TraschcanOfIdeology@hexbear.net 3 points 1 day ago* (last edited 1 day ago)

This reminds me a lot of the dotcom bubble in that everyone was trying to make online businesses, even in industries where it made no sense. Online retail and other stuff was actually useful, but 99% of those businesses were graft that had no actual use, just an excuse to grab VC funding and run.

People are putting chatbots and LLMs everywhere, even where they're unnecessary or outright dangerous to implement.

Edit: just saw your comment further down. You put it way better than I could.

[-] yogthos@lemmygrad.ml 3 points 1 day ago

Yup, this is exactly like the dotCom bubble, except on an even bigger scale with a lot more shady business practices if that's even possible.

[-] Tabitha@hexbear.net 24 points 2 days ago

I'd say it's more likely that 95% were greedily chasing hype, went a little too all-in, and 7% of them got lucky it didn't burn them.

[-] Ghostie@lemmy.zip 12 points 2 days ago

Mmmm schadenfreude

[-] DasRav@hexbear.net 39 points 2 days ago

CEO: "Wow, this could replace me, because all I really do is send an email once a day and try to say nice things about business business while trying to profit off of insider trading. And this thing won't do the last bit, so it's better than me! Everyone use the theft machine!"

Workers: "Healthcare?"

CEO: "No. Only use theft machine!"

[-] ClathrateG@hexbear.net 36 points 2 days ago* (last edited 2 days ago)

Just a few more trillion and all the water in Tennessee and Minnesota and half the juice in the grid I swear bro

[-] BarneyPiccolo@lemmy.today 10 points 2 days ago

These ghouls are practically giddy at the prospect of firing as many workers as possible. Sorry to disappoint them.

[-] Infamousblt@hexbear.net 27 points 2 days ago* (last edited 2 days ago)

This sounds damning but also doesn't mean a lot. Many companies go many many years building things before seeing a "return" on it. They make money but they are making less than they're burning in VC funds, and as they start approaching the break even point, they use that to go get more VC funds to burn to keep expanding. This is largely how the tech industry works. Very very few companies are cash flow positive during their growth phases.

It does mean there's risk in this investment: from a business perspective, AI hasn't been proven a valuable investment yet. But it still might be for at least some of these companies, and unless we really do run into the physical limits on data center capacity and build rate, it could take a decade or more for this "we aren't seeing a return yet" thing to matter.

[-] yogthos@lemmygrad.ml 18 points 2 days ago

I agree, it's basically a completely new tool looking for a market fit. It's also worth noting that these companies are basically looking for one stellar application. If they hit on something that works really well, that's gonna be the business model. So, they're perfectly fine with most of the pilots failing if they can find one that works well.

That said, I do think there is a bubble where a lot of companies are implementing these tools without having a good fit for them, and there's a ton of money being wasted in the process. It's kind of the same thing we saw with the dotCom bubble. When it popped, there was an extinction event where most companies went belly up, but we got a ton of useful tech out of it that underpins the internet today.

I expect we'll see a similar thing happen with AI. Except, this time around there's another factor which is that there's direct competition from China. My prediction is that Chinese models will win in the end because Chinese companies aren't looking for direct monetization, they're treating models as infrastructure, sort of what we see with Linux. Most companies don't try to monetize it directly, they build stuff like AWS on top of it and that becomes the product.

I expect American companies are just going to run out of runway in the near future, and they're also getting squeezed by cheap Chinese models that are also open source. Big companies prefer running stuff on prem because they can keep their data private that way, and they can tune the models any way they want. Meanwhile, stuff like DeepSeek is orders of magnitude cheaper than Claude for individual use. So I just don't see a long-term business model for models-as-a-service, especially not at the pricing of Anthropic or even Google. The vast majority of people aren't gonna pay 20 bucks a month for this stuff, let alone 100.

[-] darkmode@hexbear.net 9 points 2 days ago* (last edited 2 days ago)

Most companies don't try to monetize it directly, they build stuff like AWS on top of it and that becomes the product.

My company has an idea like this going right now, but it still charges per token because it uses the VC-funded companies' services instead of having its own models

[-] yogthos@lemmygrad.ml 6 points 2 days ago

For business customers per-token costs might not be a deal breaker, but for anything consumer facing it's a really tough sell in my opinion. I do expect the cost of running models to come down significantly in the near future though. There's a whole bunch of recent research identifying key optimizations that can be made. Some of the ones I've found particularly interesting here:

Once these ideas start getting integrated, I expect that we'll see much more capable models that can run on fairly cheap hardware. Even local models will likely be quite capable for a lot of tasks. And at that point running a model as a service and charging per token is going to be a dead end.

[-] darkmode@hexbear.net 3 points 2 days ago

this is an incredible list of research, TYSM! In spare work time I have a small tool that tries to accomplish what #2 describes. i have not clicked the link and read it yet but now i will read everything

[-] yogthos@lemmygrad.ml 4 points 2 days ago

I played around with implementing the recursive language model paper, and that actually turned out pretty well https://git.sr.ht/~yogthos/matryoshka

Basically, I spin up a js repl in a sandbox, and the agent can feed files into it, and then run commands against them. What normally happens is that the agent has to ingest the whole file into its context, but now it can just shove files into the repl, and then do operations on them akin to a db. And it can create variables. For example, if it searches for something in a file, it can bind the result to a variable and keep track of it. If it needs to filter the search later, it can just reference the variable it already made. This saves a huge amount of token use, and also helps the model stay more focused.
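The linked project uses a JS repl, but the variable-binding pattern itself is language-agnostic. A minimal Python sketch (all class and variable names hypothetical) of "bind the file once, reference the variable later":

```python
# Sketch of the idea: instead of feeding a whole file into the model's
# context, the agent loads it once into a session, and later operations
# reference the bound variable by name.
class ReplSession:
    def __init__(self):
        self.vars = {}

    def load_file_lines(self, name, text):
        # Bind the file's lines to a variable; the agent sees only the name.
        self.vars[name] = text.splitlines()
        return f"{name}: {len(self.vars[name])} lines bound"

    def search(self, source, needle, bind_as=None):
        # Filter an existing variable; optionally bind the result for reuse.
        hits = [line for line in self.vars[source] if needle in line]
        if bind_as:
            self.vars[bind_as] = hits
        return hits

session = ReplSession()
session.load_file_lines(
    "log", "INFO start\nERROR disk full\nINFO done\nERROR timeout"
)
errors = session.search("log", "ERROR", bind_as="errors")
# A later refinement references the bound variable, not the original file,
# so the full file never re-enters the model's context.
timeouts = session.search("errors", "timeout")
print(len(errors), len(timeouts))
```

The token savings come from the second step: only the variable name and the filtered result ever cross back into the model's context.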

[-] darkmode@hexbear.net 1 points 1 day ago

about how large are the codebases you’ve used this rlm with

[-] yogthos@lemmygrad.ml 3 points 1 day ago

Around 10k lines or so. I use it as an MCP tool that the agent invokes when it decides it needs to. The whole code base doesn't get loaded into the repl, just individual files as it searches through them.

[-] GoodGuyWithACat@hexbear.net 12 points 2 days ago

But it isn't a niche area, it's the major focus of corporate investment for the last year or two.

[-] jack@hexbear.net 25 points 2 days ago* (last edited 2 days ago)

wowee who could've seen this coming

[-] LaGG_3@hexbear.net 29 points 2 days ago
[-] Lussy@hexbear.net 20 points 2 days ago

And we’re going to pay for it either way

[-] aanes_appreciator@hexbear.net 18 points 2 days ago

Lol for real. I'm spending today undoing work my AI-slop-loving colleague wrote on Friday. So, effectively zeroed out.

[-] plinky@hexbear.net 15 points 2 days ago* (last edited 2 days ago)

there were some pictures floating around on twitter (in tooze circles ~~can't find it quickly~~ nvm) showing some increases in total factor productivity for tech workers and some text-adjacent fields, around 1-3% from ai stuff over a quarter (using self-reporting for one axis, but it shows some correlation, so) (the most weird is construction, i don't know what to make of it)

the most weird is construction, i don't know what to make of it

Construction usually involves a lot of repetitive, very detail-oriented tasks: invoicing, writing buy orders, signing off on deliveries, doing payroll, scheduling shifts, project management. A very limited-scope LLM can make those tasks much easier, I imagine.

[-] Speaker@hexbear.net 5 points 2 days ago

How are they measuring productivity? If one of your KPIs is "Accelerate Business Objectives By Leveraging Theft Machine Synergy", the results may be a bit skewed. 😄

[-] PaulSmackage@hexbear.net 15 points 2 days ago

HUH, WHO WOULDA THOUGHT

this post was submitted on 16 Feb 2026
145 points (100.0% liked)

technology
