103
It is learning. (hexbear.net)
all 32 comments
sorted by: hot top controversial new old
[-] Dolores@hexbear.net 45 points 1 year ago* (last edited 1 year ago)

ohhhhhhhhhhhhhh i get the push for this now

not just offloading responsibility for 'downsizing' and unpopular legal actions onto 'AI' and algorithms, fuck it lets make them the ones responsible for the crimes too. what are they going to do, arrest a computer? porky-happy

[-] usernamesaredifficul@hexbear.net 24 points 1 year ago

I maintain it would have been funnier to train monkeys to trade stocks. They could go around in little suits and wear a fez

[-] Dolores@hexbear.net 15 points 1 year ago

you can arrest monkeys though, so i see why they've done this

[-] usernamesaredifficul@hexbear.net 12 points 1 year ago* (last edited 1 year ago)

monkey prison labour. also I don't think an animal can legally be responsible for anything

and just replace the monkey trading stocks

an economic miracle

[-] Dolores@hexbear.net 11 points 1 year ago* (last edited 1 year ago)

eco-porky hire this man!

i firmly hold we should arrest animals because its funny, but it's usually illegal in modern jurisprudence

[-] Sphere@hexbear.net 39 points 1 year ago

This is so asinine. ChaptGPT-4 does not reason. It does not decide. It does not provide instructtions. What it does is write text based on a prompt. That's it. This headline is complete nonsense.

[-] Tommasi@hexbear.net 14 points 1 year ago

Maybe this is conspiracy-brained, but I am 99% sure that the way people like Hinto is talking about this technology being so scary and dangerous is marketing to drive up the hype.

There's no way someone who worked with developing current AI doesn't understand that what he's talking about at the end of this article, AI capable of creating their own goals and basically independent thought, is so radically different from today's probability-based algorithms that it holds absolutely zero relevance to something like ChatGPT.

Not that there aren't ways current algorithm-based AI can cause problems, but those are much less marketable than it being the new, dangerous, sexy sci-fi tech.

[-] drhead@hexbear.net 6 points 1 year ago

AI papers from most of the world: "We noticed a problem with this type of model, so we plugged in this formula here and now it has state-of-the-art performance. No, we don't really know why or how it works."

AI papers from western authors: "If you feed unfiltered data to this model, and ask it to help you do something bad, it will do something bad 😱😱😱"

[-] Dirt_Owl@hexbear.net 30 points 1 year ago

It's also a prime example of how stupid rich people are and how easy it is to do their job

[-] zifnab25@hexbear.net 21 points 1 year ago

So much of the job of investing is just figuring out who is lying. Inside trading gives you an edge precisely because the information is more accurate than what the public is provided.

[-] Parsani@hexbear.net 16 points 1 year ago* (last edited 1 year ago)

Calling this a "study" is being a bit too generous. But there is something interesting in it, it seems to use two layers of "reasoning" or interaction (is this how gpt works anyway? Seems like a silly thing to have a chat bot inside a chat bot). The one exposed to the user and the "internal reasoning" behind that. I have a solution, just expose the internal layer to the user. It will tell you its going to do an insider trading in the most simple terms. I'll take that UK government contract now, 50% off.

This is all equivalent to placing two mirrors facing each other and looking into one saying "don't do insider trading wink wink" and being surprised at the outcome.

[-] FuckyWucky@hexbear.net 14 points 1 year ago

Just like real life porky-happy

[-] InevitableSwing@hexbear.net 8 points 1 year ago* (last edited 1 year ago)

It sounds like it's ahead of schedule in its investment banker studies. Has it already gotten a real gig working in finance?

[-] JustMy2c@lemm.ee 1 points 1 year ago

Bro leaned from reddit/supers

[-] GarbageShoot@hexbear.net 7 points 1 year ago

This is some "the camera stole my soul" level of new tech hokum

[-] Tachanka@hexbear.net 2 points 1 year ago

if it's not just a load of bullshit, it still isn't impressive. "oh wow, we taught the AI John Nash's game theory and it decided to be all ruthless and shit"

[-] GarbageShoot@hexbear.net 2 points 1 year ago

Theoretically, having the intelligence to be able to teach itself (in so many words) how to deceive someone to cover for a crime while also carrying out a crime would be pretty impressive imo. Like, actually learning John Nash's game theory and an awareness of different agents in the actual world, when you are starting from being a LLM, would be pretty significant, wouldn't it?

But it's not, it's just spitting out plausibly-formatted words.

[-] Zink@programming.dev 6 points 1 year ago

Humans decide the same shit for the same reasons every day.

This isn’t an issue with AI. It is an issue of incentives and punishment (or lack thereof).

[-] charlie@hexbear.net 8 points 1 year ago* (last edited 1 year ago)

You've almost got it, you're right in that it's not an issue with AI, since as you've said, humans do the same shit every day.

The root problem is Capitalism. Sounds reductive, but that's how you problem solve. You troubleshoot to find the root component issue, once you've fixed that you can perform your system retests and perform additional troubleshooting as needed. If this particular resistor burns out every time I replace it, perhaps my problem is further up the circuit in this power regulation area.

[-] envis10n@hexbear.net 1 points 1 year ago

It is an issue with AI because it's not supposed to do that. It is also telling that it decided to do this, based on its training and purpose.

AI is a wild landscape at the moment. There are ethical challenges and questions to ask/answer. Ignoring them because "muh AI" is ridiculous.

[-] Parsani@hexbear.net 12 points 1 year ago

They practically told it to do insider trading though

[-] envis10n@hexbear.net 3 points 1 year ago

Oh I absolutely agree, I'm just saying that AI has some flaws that also need addressed

[-] invalidusernamelol@hexbear.net 6 points 1 year ago

What they did was have a learning model sitting on top of another learning model trained on insider data. This is just couching it in a layer of abstraction like how Realpage and YieldStar fix rental prices by abstracting price fixing through a centralized database and softball "recommendations" about what you should rent out a home/unit for.

[-] MaxOS@hexbear.net 5 points 1 year ago

"Oh no! My job is at risk of automation!" open-biden

[-] aaro@hexbear.net 5 points 1 year ago
[-] RION@hexbear.net 4 points 1 year ago
[-] GalaxyBrain@hexbear.net 4 points 1 year ago

By: RYAN HOGG

[-] Cherufe@hexbear.net 3 points 1 year ago
this post was submitted on 07 Nov 2023
103 points (100.0% liked)

technology

23281 readers
150 users here now

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Rules:

founded 4 years ago
MODERATORS