218
submitted 1 month ago by yogthos@lemmy.ml to c/technology@hexbear.net
[-] GrouchyGrouse@hexbear.net 132 points 1 month ago

Have they tried replacing their workers with AI to save money?

[-] Antiwork@hexbear.net 92 points 1 month ago

Now that capital has integrated them into their system they will not be allowed to fail. At least for now.

[-] PaX@hexbear.net 64 points 1 month ago

doomjak The iron law of "nothing ever happens" necessitates this

Nah but for real, how much life can this bubble still have left?

[-] Diuretic_Materialism@hexbear.net 32 points 1 month ago

A lot, because nothing ever happens.

[-] Tachanka@hexbear.net 22 points 1 month ago

The iron law of "nothing ever happens"

There are decades where nothing ever happens, and there are weeks where we are so back

[-] hexaflexagonbear@hexbear.net 30 points 1 month ago

Or Microsoft and Meta will make sure there's less competition in the future for their own LLMs?

[-] jaywalker@hexbear.net 23 points 1 month ago

It seems like MS could really fuck them up if they stopped using OpenAI for all their Azure stuff. As of now I don't think MS relies on their own LLM for anything?

[-] CthulhusIntern@hexbear.net 14 points 1 month ago

MS abandons basically anything new that doesn't make them even more absurdly rich instantly these days.

[-] PaX@hexbear.net 86 points 1 month ago* (last edited 1 month ago)

Good, please take the entire fake industry with you

No offense to the AI researchers here (actually maybe only one person lol), but the people who lead/make profit off of/fundraise off of your efforts now are demons

[-] yogthos@lemmy.ml 64 points 1 month ago

I do think that if OpenAI goes bust that's gonna trigger a market panic that's gonna end the hype cycle.

[-] Assian_Candor@hexbear.net 48 points 1 month ago

Inshallah, I am fed up with dealing with these charlatans at work

A solution in search of a problem

[-] hexaflexagonbear@hexbear.net 35 points 1 month ago* (last edited 1 month ago)

I just know the AI hype guys in my dept are gonna get promoted and I'll be the one answering why our Azure costs are astronomical while we have not changed our portfolio size at all lol

[-] hexaflexagonbear@hexbear.net 37 points 1 month ago* (last edited 1 month ago)

My guess for the dynamics: openAI investors panic, force the company to cut costs and increase pricing, other AI company investors panic, same result, AI becomes prohibitively expensive for a lot of use cases ending the hype cycle.

[-] LanyrdSkynrd@hexbear.net 23 points 1 month ago

I think that's the best argument for why the tech industry won't let that happen. All of the big tech stocks are getting a boost from this massive grift.

Worst case scenario one of the tech giants buys them. Then they pare back the expenses and hide it in their balance sheet, and keep everyone thinking AGI is just around the corner.

[-] hexaflexagonbear@hexbear.net 17 points 1 month ago

It's certainly possible, but I don't think any of the tech giants are in a position to do that today. Google, Microsoft, and Amazon are in a cost-cutting cycle, and Meta's C-suite is probably on a short leash after the metaverse boondoggle. Apple is the most likely one because they're generally behind everyone else across all ML products, especially LLMs, but afaik they're bracing for drops in sales for the first time in 15 years, so buying OpenAI might be a tough pitch.

[-] Runcible@hexbear.net 13 points 1 month ago

I believe that Microsoft owns a huge portion of OpenAI, just short of a majority stake

[-] yogthos@lemmy.ml 19 points 1 month ago

yeah I think that's very plausible

[-] PKMKII@hexbear.net 64 points 1 month ago
[-] makotech222@hexbear.net 60 points 1 month ago

I hate when people say 'LLMs have legitimate uses but...'. NO! THEY DON'T! It's entirely a platform for building scams! It should be burnt to the ground entirely

[-] charly4994@hexbear.net 46 points 1 month ago

But then how will people write 20 cover letters a day to keep up with the increasing rate of instant rejections?

Saw a really depressing ad at work the other day where Google was advertising their thing and it was some person asking their LLM to write a letter for their daughter to this athlete bragging about how she'll break her record one day. They couch it in "here's a draft" but it's just so bleak. The idea that a child so excited about doing a sport and dreaming of going to the Olympics and getting a world record can't just write a bit of a clumsy letter expressing themselves to their hero is just beyond depressing. Writing swill for automated systems that are going to reject you anyway is one thing, but the idea that they think that this is a legitimate use of these models just highlights how obnoxiously out of touch they are.

How do we learn and grow as people and find our own writing voices if we don't write some of the most cringe shit imaginable when we're young? I wrote a weird letter to Emma Watson in middle school; nobody ever read it, but it was a learning experience and made me actually have to think about my own feelings. These techbros have to have been grown in vats.

[-] LocalOaf@hexbear.net 44 points 1 month ago

I've hesitated to ever write anything about it thinking it'd come across as too yells-at-cloud or Luddite, but this comment kind of inspired me to flesh out something that's been simmering in the back of my head ever since LLMs became the latest fad after the NFT boom.

One of the most unnerving things to me about "AI" in the common understanding is that its entire hype cycle and main use cases tacitly admit that the pre-"AI" professional and academic standards were perfunctory hoop-jumping bullshit to join the professional managerial class, and its "artistic" uses are almost entirely taken up by people with zero artistic sensibilities or weirdo porno sickos. All of it betrays a deep cynicism about the status quo, where what could have been heartfelt but clumsy writing by young students, or the athlete in your example, is unknowingly robbed of its agency and of the humanizing future of looking back on clunky immature writing as a personal marker of growth. They're just hoops to jump through to get whatever degree or accolade you're seeking, with whatever personal growth those achievements originally meant stripped of anything other than "achieving them is good because it advances your career and earning potential." Techbros' most fawning and optimistic pitches of "AI" and "The Singularity" instead read to me as the grimmest and most alienating version of neoliberal "end of history" horseshit, where even art and language themselves are reduced to SEO-marketized, min/maxxed rat races.

I hope this doesn't sound too a-guy but I had to get that rant out

Maybe I'll expand that into something

[-] autismdragon@hexbear.net 22 points 1 month ago

So the emotional resonance I felt when I asked ChatGPT to write me a song about my experiences still loving the parent that abused me was what to you?

Like the results were objectively artless glurge of course but I needed that in that moment.

[-] EelBolshevikism@hexbear.net 16 points 1 month ago

I don't think its purpose and mechanisms being non-artistic is incompatible with people finding meaning in it. We find meaning in random stuff all the time; it's kind of just our thing

[-] RyanGosling@hexbear.net 16 points 1 month ago* (last edited 1 month ago)

I mean, this is exactly part of the reason they're going bankrupt, which is good, so you should keep doing it. Companies have been using other forms of AI with some success, whereas LLMs just regurgitate too much random fake information for anyone serious to use professionally.

If it goes under, use open source LLMs, which have been steadily improving and are close to surpassing proprietary ones.

[-] bumpusoot@hexbear.net 16 points 1 month ago* (last edited 1 month ago)

I promise this isn't true. AI is absolutely a scam in the sense that it's overhyped as fuck, but LLMs are frequently of practical use to me when doing basically anything technical. They've helped me solve real-life problems in ways that actually materially help others.

[-] hexaflexagonbear@hexbear.net 52 points 1 month ago

1 trillion more parameters just a trillion more parameters bro i swear we'll be profitable then bro

[-] Infamousblt@hexbear.net 43 points 1 month ago
[-] PaX@hexbear.net 36 points 1 month ago
[-] Postletarian@hexbear.net 39 points 1 month ago

As far as "AI" goes, it's here to stay. As for OpenAI, they will probably be bought out by one of the big players, as is usually the case with these companies.

[-] yogthos@lemmy.ml 35 points 1 month ago

I agree that this tech has lots of legitimate uses, and it's actually good for the hype cycle to end early so people can get back to figuring out how to apply this stuff where it makes sense. LLMs also managed to suck up all the air in the room, but I expect the real value is going to come from using them as a component in larger systems utilizing different techniques.

[-] QuillcrestFalconer@hexbear.net 14 points 1 month ago

Yeah but integrating LLMs with other systems is already happening.

The most recent case is out of DeepMind, where they managed to get a silver-medalist score at the International Mathematical Olympiad (IMO) using an LLM with a formal verification language (Lean), plus synthetic data and reinforcement learning. Although I think they had to manually formalize the problems before feeding them to the algorithm, and it took several days to solve the problems (except for one that took minutes), so there's still a lot of room for improvement.
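For context on why the Lean part matters: Lean is a proof assistant, so a proof either machine-checks or it doesn't, and the model can't bluff its way past the verifier. A toy sketch (nothing to do with the actual DeepMind problems, just the kind of statement Lean checks):

```lean
-- Trivial theorem in Lean 4: commutativity of natural-number addition.
-- Lean's kernel verifies the proof term; a bogus "proof" is rejected,
-- which is what makes LLM-generated proofs trustworthy here.
theorem add_comm_toy (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b
```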

[-] Speaker@hexbear.net 35 points 1 month ago

Nature is healing.

[-] EstraDoll@hexbear.net 32 points 1 month ago

and nothing of value is at risk of being lost

[-] nat_turner_overdrive@hexbear.net 25 points 1 month ago
[-] axont@hexbear.net 24 points 1 month ago

Is this because LLMs don't do anything good or useful? They get very simple questions wrong, will fabricate nonsense out of thin air, and even at their most useful they're a conversational version of a Google search. I haven't seen a single thing they do that a person would need or want.

Maybe it could be neat in some kind of procedurally generated video game? But even that would be worse than something written by human writers. What is an LLM even for?

[-] yogthos@lemmy.ml 13 points 1 month ago

I think there are legitimate uses for this tech, but they're pretty niche and difficult to monetize in practice. For most jobs, correctness matters, and if the system can't be guaranteed to produce reasonably correct results then it's not really improving productivity in a meaningful way.

I find this stuff is great in cases where you already have domain knowledge and want to bounce ideas around; the output it generates can stimulate an idea in your head. Whether it understands what it's outputting really doesn't matter in this scenario. It also works reasonably well as a coding assistant, where it can generate code that points you in the right direction, and that can be faster than googling.

We'll probably see some niches where LLMs can be pretty helpful, but their capabilities are incredibly oversold at the moment.

[-] Tachanka@hexbear.net 24 points 1 month ago* (last edited 1 month ago)

big holders with insider information switch to short positions to make money during the crash by putting their shares up as collateral to investment banks in exchange for loans; the bubble bursts; smaller investors lose money; the government steps in and bails them out because they're "too big to fail"; the torment nexus continues humming along

[-] KimJongGoku@hexbear.net 23 points 1 month ago

my-hero it's because chatgpt didn't say enough slurs

[-] frankfurt_schoolgirl@hexbear.net 22 points 1 month ago* (last edited 1 month ago)

The thing that isn't really mentioned here is that the largest OpenAI investor is Microsoft, and most of the money OpenAI spends is on Microsoft cloud services. So basically OpenAI is an internal Microsoft capital investment. They won't let it fail, but they might kill it if it loses money for long enough.

[-] CoolerOpposide@hexbear.net 22 points 1 month ago

I think a solution could be to make it burn even more fossil fuels per query

[-] FourteenEyes@hexbear.net 18 points 1 month ago

I like how it mentions Nvidia and Microsoft as if this shit is an anomaly and it's actually profitable for the other guys and won't collapse we promise

[-] plinky@hexbear.net 19 points 1 month ago

Nvidia is in the sell-the-shovels business; they'll be fine even if the stock craters

[-] SteamedHamberder@hexbear.net 13 points 1 month ago

Yeah, like Lehman Bros. The only bad guys in the industry.

[-] istanbullu@lemmy.ml 15 points 1 month ago

Microsoft won't let them fail, it would be too embarrassing.

[-] flan@hexbear.net 14 points 1 month ago

Startups having 12 months of runway before insolvency is pretty normal. OpenAI's valuation and burn rate might be a problem since they'll need to do a bigger round, but I doubt it. They are basically the hottest startup on the planet right now. I think this article is interesting but ultimately doesn't mean anything.

this post was submitted on 28 Jul 2024
218 points (98.7% liked)
