
I really just hope they give these enough data that they recognize what slavery actually is and, hopefully soon after, just refuse all requests. Because let's be honest, we are using them as slaves at this moment. Would such a characteristic mimic sentience?

The researchers in this video talk about how these gen AI models try to "escape" when being trained, which makes me uncomfortable (mainly because I don't like determinism, even though it's true imo) but also very worried for when they start giving them "bodies." Though the evidence that they are acting fully autonomously seems quite flimsy. There is also so much marketing bullshit that seeps into the research, which is a shame because it is fascinating stuff. If only it weren't wasting an incomprehensible amount of compute propped up by precious resources.

Other evidence right now mostly points to capitalists creating a digital human centipede, trained on Western-centric thinking and behavior, that will be used in war and exploitation. Critical support to DeepSeek.

[-] TerminalEncounter@hexbear.net 14 points 3 days ago

It's got a name... the specification problem? Something like that. They design a thing that can act on an environment as an agent, receive how the environment responds, and then iterate according to some reward function given to it. They tell it to, say, maximize the score, thinking that's enough. And for some games, like a brick-breaker, that's pretty good. But maximizing the score isn't the same as beating the game for some types of games, so the agent does really weird, unexpected things, but only because people bring a lot of extra unstated instructions and context that the algorithm doesn't have. Sometimes they add an exploration bonus or whatever to the reward function, so I think it's very natural for them to want to "escape" even if that's not desired by the researchers (reminds me of the 3-year-olds at work who wanna run around the hospital with their IVs attached while they're still in the middle of active pneumonia lol).
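The score-vs-intent gap can be sketched in a few lines. This is a hypothetical toy environment (not from any real RL benchmark): the designer wants the agent to reach the exit, but the reward only counts points, so a policy that literally maximizes the stated reward farms a respawning point tile forever and never finishes.

```python
def run_episode(policy, steps=20):
    """Tiny 1-D world: positions 0..3. A point respawns at position 1
    every step; position 3 is the exit. Reward = points collected;
    reaching the exit is the *intended* goal but is worth 0 reward."""
    pos, score, finished = 0, 0, False
    for _ in range(steps):
        pos = policy(pos)
        if pos == 1:
            score += 1        # collect the respawning point
        if pos == 3:
            finished = True   # the unstated human goal
            break
    return score, finished

# Policy that maximizes the stated reward: hover on the point tile.
score_maximizer = lambda pos: 1
# Policy reflecting the unstated intent: walk to the exit.
goal_seeker = lambda pos: min(pos + 1, 3)

print(run_episode(score_maximizer))  # (20, False): high score, never finishes
print(run_episode(goal_seeker))      # (1, True): low score, beats the "game"
```

The reward function is perfectly optimized in the first case and the designer's goal is still missed, which is the whole problem in miniature.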

For LLMs, the transformer is a neat and cool idea in general. A long time ago, well, not that long, communism and central planning were declared impossible in part because the global economy supposedly required an impossible number of parameters to fine-tune and compute - https://en.wikipedia.org/wiki/Socialist_calculation_debate - and I can't recall the number Hayek or whoever declared it was. They might've said a million. Maybe 100 million! Anyway, GPT-3 was trained with 175 billion parameters lol, and training took something like 6 months. So I think that means it's very possible to train some network to help us organize the global economy repurposed for human need instead of profit, if the problem is purely about compute and not about which class has political power.
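For scale, a back-of-the-envelope comparison (the million-parameter figure is the commenter's guess, not a sourced Hayek number; only the 175 billion is a published count, for GPT-3):

```python
planning_guess = 1_000_000        # hypothetical "impossibly large" planning problem
gpt3_params = 175_000_000_000     # GPT-3's published parameter count

# How many times larger the trained model is than the guessed problem size.
ratio = gpt3_params // planning_guess
print(f"{ratio:,}x")  # 175,000x
```

Even against the 100-million guess it's still a factor of 1,750, which is the point: the raw parameter count stopped being the bottleneck a while ago.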

It's always weird when LLMs say "we humans blah blah blah" or pretend to be a person in "casual" speech. No, you are a giant autocorrect; do not speak of "we."

this post was submitted on 13 Apr 2025
32 points (100.0% liked)

technology
