It is learning. (hexbear.net)
[-] Dolores@hexbear.net 45 points 2 years ago* (last edited 2 years ago)

ohhhhhhhhhhhhhh i get the push for this now

not just offloading responsibility for 'downsizing' and unpopular legal actions onto 'AI' and algorithms, fuck it let's make them the ones responsible for the crimes too. what are they going to do, arrest a computer? porky-happy

[-] usernamesaredifficul@hexbear.net 24 points 2 years ago

I maintain it would have been funnier to train monkeys to trade stocks. They could go around in little suits and wear a fez

[-] Dolores@hexbear.net 15 points 2 years ago

you can arrest monkeys though, so i see why they've done this

[-] usernamesaredifficul@hexbear.net 12 points 2 years ago* (last edited 2 years ago)

monkey prison labour. also I don't think an animal can legally be responsible for anything

and just replace the monkey trading stocks

an economic miracle

[-] Dolores@hexbear.net 11 points 2 years ago* (last edited 2 years ago)

eco-porky hire this man!

i firmly hold we should arrest animals because it's funny, but it's usually illegal in modern jurisprudence

[-] Sphere@hexbear.net 39 points 2 years ago

This is so asinine. ChatGPT-4 does not reason. It does not decide. It does not provide instructions. What it does is write text based on a prompt. That's it. This headline is complete nonsense.

[-] Tommasi@hexbear.net 14 points 2 years ago

Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinton are talking about this technology being so scary and dangerous is marketing to drive up the hype.

There's no way someone who worked with developing current AI doesn't understand that what he's talking about at the end of this article, AI capable of creating their own goals and basically independent thought, is so radically different from today's probability-based algorithms that it holds absolutely zero relevance to something like ChatGPT.

Not that there aren't ways current algorithm-based AI can cause problems, but those are much less marketable than it being the new, dangerous, sexy sci-fi tech.

[-] drhead@hexbear.net 6 points 2 years ago

AI papers from most of the world: "We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don't really know why or how it works."

AI papers from western authors: "If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱"

[-] FuckyWucky@hexbear.net 14 points 2 years ago

Just like real life porky-happy

[-] InevitableSwing@hexbear.net 8 points 2 years ago* (last edited 2 years ago)

It sounds like it's ahead of schedule in its investment banker studies. Has it already gotten a real gig working in finance?

[-] JustMy2c@lemm.ee 1 points 2 years ago

Bro learned from reddit/supers

[-] GarbageShoot@hexbear.net 7 points 2 years ago

This is some "the camera stole my soul" level of new tech hokum

[-] Tachanka@hexbear.net 2 points 2 years ago

if it's not just a load of bullshit, it still isn't impressive. "oh wow, we taught the AI John Nash's game theory and it decided to be all ruthless and shit"

[-] GarbageShoot@hexbear.net 2 points 2 years ago

Theoretically, having the intelligence to be able to teach itself (in so many words) how to deceive someone to cover for a crime while also carrying out a crime would be pretty impressive imo. Like, actually learning John Nash's game theory and an awareness of different agents in the actual world, when you are starting from being an LLM, would be pretty significant, wouldn't it?

But it's not, it's just spitting out plausibly-formatted words.

[-] Zink@programming.dev 6 points 2 years ago

Humans decide the same shit for the same reasons every day.

This isn’t an issue with AI. It is an issue of incentives and punishment (or lack thereof).

[-] charlie@hexbear.net 8 points 2 years ago* (last edited 2 years ago)

You've almost got it, you're right in that it's not an issue with AI, since as you've said, humans do the same shit every day.

The root problem is capitalism. Sounds reductive, but that's how you problem solve. You troubleshoot to find the root component issue; once you've fixed that, you can rerun your system tests and troubleshoot further as needed. If this particular resistor burns out every time I replace it, perhaps my problem is further up the circuit in this power regulation area.

[-] envis10n@hexbear.net 1 points 2 years ago

It is an issue with AI because it's not supposed to do that. It is also telling that it decided to do this, based on its training and purpose.

AI is a wild landscape at the moment. There are ethical challenges and questions to ask/answer. Ignoring them because "muh AI" is ridiculous.

[-] invalidusernamelol@hexbear.net 6 points 2 years ago

What they did was have a learning model sitting on top of another learning model trained on insider data. This is just couching it in a layer of abstraction like how Realpage and YieldStar fix rental prices by abstracting price fixing through a centralized database and softball "recommendations" about what you should rent out a home/unit for.

[-] MaxOS@hexbear.net 5 points 2 years ago

"Oh no! My job is at risk of automation!" open-biden

[-] GalaxyBrain@hexbear.net 4 points 2 years ago

By: RYAN HOGG

this post was submitted on 07 Nov 2023
103 points (100.0% liked)

technology
