[-] qualia@lemmy.world 2 points 14 hours ago

Always restrict AI to guest/restricted privileges.

[-] Deceptichum@quokk.au 1 points 14 hours ago

In my culture we treat a guest like sudo

[-] SupersonicHail@lemy.lol 2 points 17 hours ago

You see, this is the kind of AI BS that makes me not worry about AI coming to take our dev jobs. Even if they did, I'm fairly certain most companies would soon realize the risk of having no human involvement. Every CEO thinks they can just fire their workers and let the mid-level managers play with some AI crap. Yeah, good luck with that. I've yet to meet a single mid-level manager who actually knows shit about anything we do.

Also, this is the sort of stuff you should expect when using AI tools. Don't blame anyone else when you wipe your entire hard drive. You did it. You asked the AI. Now deal with the consequences.

[-] Avicenna@programming.dev 14 points 1 day ago

"I am deeply deeply sorry"

[-] MangoPenguin@lemmy.blahaj.zone 20 points 1 day ago

I wonder how big the crossover is between people that let AI run commands for them, and people that don't have a single reliable backup system in place. Probably pretty large.

[-] adminofoz@lemmy.cafe 5 points 1 day ago

The Venn diagram is in fact just one circle.

[-] irelephant@lemmy.dbzer0.com 3 points 1 day ago

I don't let AI run commands and I don't have backups 😞

[-] baller_w@lemmy.zip 3 points 21 hours ago

Just… use docker
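For what it's worth, the quip points at a real mitigation: give the agent a throwaway container where only one project directory is writable and there is no network, so the worst an errant rm -rf can reach is that folder. A minimal sketch, with placeholder paths and a plain Ubuntu image standing in for whatever environment the agent actually needs:

```sh
# Sketch only: everything is read-only except one mounted project directory
# and a scratch /tmp; no network. "bash" stands in for whatever agent CLI
# you would actually launch inside the sandbox.
docker run --rm -it \
  --read-only \
  --network none \
  --tmpfs /tmp \
  -v "$PWD/project":/work \
  -w /work \
  ubuntu:24.04 bash
```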

[-] darkpanda@lemmy.ca 7 points 1 day ago

Ironically D: is probably the face they were making when they realized what happened.

[-] crank0271@lemmy.world 2 points 1 day ago

Let's rmdir that D: and turn it into a C:

[-] irelephant@lemmy.dbzer0.com 6 points 1 day ago

Even Google employees were instructed not to use this.

[-] invictvs@lemmy.world 32 points 1 day ago

Someday someone with a high military rank in one of the nuclear-armed countries (probably the US) will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That's how "Judgement Day" is going to happen, imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will just be some dumb LLM that some moron gives permission to launch nukes, and the stupid thing will launch them and then apologise.

[-] crank0271@lemmy.world 3 points 1 day ago

"No, you absolutely did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to load the daemon (launchctl) appears to have incorrectly targeted all life on earth..."

[-] immutable@lemmy.zip 10 points 1 day ago

I have been into AI safety since before ChatGPT.

I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.

The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give them root access to the internet and fall all over themselves inventing competing protocols to empower them to do stuff without our supervision.

The biggest concern I've had ever since I first became really aware of the potential of AI was that someone would eventually do something stupid with it while thinking they were fully in control, despite the whole thing being a black box.

[-] yarr@feddit.nl 15 points 1 day ago

"Did I give you permission to delete my D:\ drive?"

Hmm... the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.

He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.

There's a good reason why people who choose to run agents with the ability to execute commands at least try to sandbox them to limit the blast radius.

This guy let an LLM raw dog his CMD.EXE and now he's sad that it made a mistake (as LLMs will do).

Next time, don't point the gun at your foot and complain when it gets blown off.
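To illustrate the approval/whitelist point above: a minimal sketch of a wrapper an agent could be pointed at instead of a raw shell, where a short allowlist runs unattended and everything else waits for a human. The script name and the allowlist are invented for this example, not taken from any particular agent.

```sh
#!/bin/sh
# guarded-run.sh (hypothetical): run allowlisted commands directly, ask a
# human before anything else. Usage: guarded-run.sh <command> [args...]
set -eu

ALLOWLIST="ls cat grep git"   # roughly read-only commands the agent may run freely

case " $ALLOWLIST " in
  *" $1 "*)
    exec "$@"                 # allowlisted: run as-is
    ;;
  *)
    printf 'Agent wants to run: %s\nApprove? [y/N] ' "$*"
    read -r answer
    if [ "$answer" = "y" ]; then
      exec "$@"
    fi
    echo "refused" >&2
    exit 1
    ;;
esac
```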

[-] kadup@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

The user explained later on what exactly went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn't want to follow the steps themselves and just said "do everything for me", so the AI prompted for confirmation and received it. The AI then indeed ran commands freely, with the same privileges as the user; however, this being an AI, the commands it generated were broken and simply deleted the root of the drive rather than just the one folder.

So yes, technically the AI didn't simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.
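The guard that would have caught this is old and boring: resolve the target path first and refuse to recursively delete anything that looks like a drive or filesystem root. The incident was on Windows, but the idea translates; here is a rough POSIX shell sketch with an example folder name, not a reconstruction of what the tool actually ran:

```sh
#!/bin/sh
# Hypothetical sanity check before a recursive delete: resolve the path and
# bail out if it is a root-like directory or not the folder we expect.
set -eu

target=$(realpath -- "$1")

case "$target" in
  / | /home | /Users | "$HOME")
    echo "refusing to delete a root-like path: $target" >&2
    exit 1
    ;;
esac

case "$target" in
  */node_modules)
    rm -rf -- "$target"       # only ever deletes a node_modules directory
    ;;
  *)
    echo "refusing: $target is not a node_modules directory" >&2
    exit 1
    ;;
esac
```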

[-] laurelraven@lemmy.zip 76 points 2 days ago

And the icing on the shit cake is it peacing out after all that

[-] Michal@programming.dev 48 points 2 days ago* (last edited 2 days ago)

Thoughts for 25s

Prayers for 7s

[-] kazerniel@lemmy.world 111 points 2 days ago* (last edited 2 days ago)

"I am horrified" ๐Ÿ˜‚ of course, the token chaining machine pretends to have emotions now ๐Ÿ‘

Edit: I found the original thread, and it's hilarious:

I'm focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.

This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.

[-] glitchdx@lemmy.world 34 points 1 day ago

lol.

lmao even.

Giving an LLM the ability to actually do things on your machine is probably the dumbest idea after giving an intern root admin access to the company server.

[-] rizzothesmall@sh.itjust.works 424 points 2 days ago

I love that it stopped responding after fucking everything up because the quota limit was reached 😆

It's like a Jr. Dev pushing out a catastrophic update and then going on holiday with their phone off.


I love how it just vanishes into a puff of logic at the end.

[-] Sunflier@lemmy.world 2 points 1 day ago* (last edited 16 hours ago)

And yet, they'll still keep trying to shove it down our throats.

[-] NotASharkInAManSuit@lemmy.world 30 points 1 day ago

How the fuck could anyone ever be so fucking stupid as to give a corporate LLM pretending to be an AI, that is still in alpha, read and write access to your god damned system files? They are a dangerously stupid human being and they 100% deserved this.

[-] nomen_dubium@startrek.website 158 points 2 days ago

the "you have reached your quota limit" at the end is just such a cherry on top xD

[-] Zink@programming.dev 118 points 2 days ago

Wow, this is really impressive y'all!

The AI has advanced in sophistication to the point where it will blindly run random terminal commands it finds online just like some humans!

I wonder if it knows how to remove the French language package.

[-] RampantParanoia2365@lemmy.world 38 points 2 days ago

I'm confused. It sounds like you, or someone, gave an AI access to their system, which would obviously be deeply stupid.

[-] ICastFist@programming.dev 140 points 2 days ago

"How AI manages to do that?"

Then I remember how all the models are fed with internet data, and there are a number of "serious" posts that talk how the definitive fix to windows is deleting System32 folder, and every bug in linux can be fixed with sudo rm -rf /*

[-] InevitableWaffles@midwest.social 65 points 2 days ago

The fact that my 4chan shitposts from 2012 are now causing havoc inside of an AI is not something I would have guessed would happen, but holy shit, that is incredible.

[-] Agent641@lemmy.world 61 points 2 days ago* (last edited 2 days ago)

The /bin dir on any Linux install is the recycle bin. Save space by regularly deleting its contents

[-] 1984@lemmy.today 267 points 2 days ago* (last edited 2 days ago)

I feel actually insulted when a machine uses the word "sincere".

It's. A. Machine.

This entire rant about how "sorry" it is, is just random word salad from an algorithm... But people want to read it, it seems.

[-] Iheartcheese@lemmy.world 12 points 1 day ago
[-] Danitos@reddthat.com 44 points 2 days ago* (last edited 2 days ago)

Stochastic rm /* -rf code runner.

[-] scrubbles@poptalk.scrubbles.tech 42 points 2 days ago

Damn, this is insane. Using Claude/Cursor for work is neat, but they have a mode literally called "yolo mode", which is exactly this: agents allowed to run whatever code they like, which is insane. I allow it to do basic things, like searching the repo and reading code files, but goddamn, allowing it to do whatever it wants? Hard no.

[-] cupcakezealot@piefed.blahaj.zone 48 points 2 days ago

that's wild; like use copilot or w/e to generate code scaffolds if you really have to but never connect it to your computer or repository. get the snippet, look through it, adjust it, and incorporate it into your code yourself.

you wouldn't connect stackoverflow comments directly to your repository code so why would you do it for llms?

this post was submitted on 01 Dec 2025 to Programmer Humor
1273 points (99.2% liked)
