40
submitted 1 month ago* (last edited 1 month ago) by themachinestops@lemmy.dbzer0.com to c/technology@lemmy.world

top 50 comments
sorted by: hot top controversial new old
[-] LastYearsIrritant@sopuli.xyz 13 points 1 month ago

I love how these models apologize like they mean it. It doesn't mean it. It doesn't feel bad, and it will do it again.

Apologies mean "I made a mistake and I learned from it so it won't repeat."

Sure it claims it added more notes to it's config, but if it ignored the rules before, what makes you think that new rules are going to change anything?

[-] BrianTheeBiscuiteer@lemmy.world 4 points 1 month ago

That MEMORY. md file won't do shit if the AI doesn't read it.

I give it 2 hours before it stops reading it until prompted again.

[-] panda_abyss@lemmy.ca 3 points 1 month ago

But it’s adding it to a text file that eats up a ton of tokens and routinely gets ignored!

[-] Clent@lemmy.dbzer0.com 2 points 1 month ago

They behave exactly a child does when a parent forces an apology.

They have the words they're expect to say so they do say them but they don't undersranr why, they definitely don't mean it and they lack the restrain to not doing whatever they apologized for over and over.

[-] bleistift2@sopuli.xyz 2 points 1 month ago

Apologies mean “I made a mistake and I learned from it so it won’t repeat.”

I beg to differ. An apology means that you feel bad about harm inflicted upon others. To prove the point: You apologize when you’re late due to circumstances that are outside of your control. Or when you accidentally bump into someone on the bus when the driver slams the break.

load more comments (2 replies)
load more comments (7 replies)
[-] Dultas@lemmy.world 10 points 1 month ago

The S in OpenClaw stands for security.

[-] echodot@feddit.uk 9 points 1 month ago

Yep that's about the level of intelligence I would expect from Meta's AI safety director.

Doing the one thing that you're never supposed to do, letting an AI loose on anything sensitive.

For her next trick she's going to run while holding scissors in one hand and a bottle of boiling acid in the other. What could go wrong.

[-] LiveLM@lemmy.zip 9 points 1 month ago* (last edited 1 month ago)

She's lucky all she got were some deleted emails.
Given how insecure this whole ordeal is and the fact that she gave it full access to her REAL Inbox, someone could have phished the ever living fuck out of her and Meta just by sending an email with malicious prompt written on white text or hiding messages zero-width characters and other wacky antics.
Real Looney Tunes shit, congratulations to all involved.

[-] echodot@feddit.uk 3 points 1 month ago

You wouldn't even need to hide it since apparently she wasn't paying attention.

[-] xep@discuss.online 9 points 1 month ago

This smells like guerilla marketing to me.

load more comments (2 replies)
[-] panda_abyss@lemmy.ca 8 points 1 month ago

If I was the director of AI safety, and I used AI to own and delete my inbox, I sure as shit would never tell a soul.

This is pure unbridled incompetence.

[-] XLE@piefed.social 6 points 1 month ago* (last edited 1 month ago)

The whole "AI safety" field is this incompetent. These people that will tell you AI is on the verge of creating a bioweapon, and then run random code in a command line. Completely and totally unserious.

[-] panda_abyss@lemmy.ca 5 points 1 month ago

I don’t know what the hell has happened, but some of these people are basically human jellyfish. Big tech is full of them now.

No thought enters their mind, but they dodge the layoffs and the PIPs and get promoted like this.

I don’t fucking get it.

[-] GreenBeard@lemmy.ca 2 points 1 month ago

It's just the natural progression of a disease that spreads outwards from Management. The bosses want yes-men, not people capable of independent thought.

load more comments (1 replies)
load more comments (1 replies)
[-] sp3ctr4l@lemmy.dbzer0.com 2 points 1 month ago

Yep.

These people are all fucking complete clowns.

It would be one thing if they were just evil, but they have such an inflated view of themselves that they have no self awareness.

Fucking corpos man.

load more comments (9 replies)
[-] nieceandtows@programming.dev 7 points 1 month ago

Yes I remember. And I violated it.

Asimov rolling in his grave.

[-] yogurtwrong@lemmy.world 6 points 1 month ago* (last edited 1 month ago)

I hate how Apple users feel the need to call their computer by the brand. It really makes me cringe.

It is called "a computer"

Maybe "PC"

"box" if you really have to flex that UNIX

They should treat their computers less like a sports car and more like a van

[-] ThunderQueen@lemmy.world 4 points 1 month ago

I mean, isnt that the entire point of Apple? Brand recognition and percieved status attributed to said brand. Its like rappers and gucci belts or country artists and ford pickups

[-] sp3ctr4l@lemmy.dbzer0.com 2 points 1 month ago

Branding and marketing is just building a cult these days.

load more comments (2 replies)
[-] AlphaOmega@lemmy.world 2 points 1 month ago

Every time someone organically refers to their computer as an Apple or Mac, an Apple marketing executive creams their pants.

load more comments (2 replies)
load more comments (1 replies)

you can like... enforce this rule programatically? you don't have to say "pretty please" to ai? basically, when AI requests some potentially unwanted thing (like deleting an email), this request goes through a proxy that asks the human for confirmation. Also you can have a safe word set up in the chat interface to act as a killswitch. I thought these are ABCs of ai safety but apparently these are foreign concepts to this "safety director"

[-] underscores@lemmy.zip 5 points 1 month ago* (last edited 1 month ago)

The people that design AI tools don't implement guardrails because then they'd have to admit AI is not ready for the shit they're trying to make

load more comments (3 replies)
[-] ClydapusGotwald@lemmy.world 4 points 1 month ago

That’s what you get for using ai slop.

[-] Fizz@lemmy.nz 4 points 1 month ago

The funniest part is this person job is AI safety.

load more comments (5 replies)
[-] RedstoneValley@sh.itjust.works 4 points 1 month ago

Can someone explain to mr why these people are buying Mac Minis to run this in a "safe" environment and then they go on and connect it to the internet and give the AI credentials to all their cloud accounts? This seems excessively moronic to me? Am I missing something?

[-] sp3ctr4l@lemmy.dbzer0.com 4 points 1 month ago

No, you're not missing anything.

They're morons.

Thats our ruling elite; a bunch of fucking morons with egos and low self awareness at best, literally child raping and murdering pedophiles at worst.

load more comments (6 replies)
[-] PointyFluff@lemmy.ml 4 points 1 month ago

First of all. BULLSHIT. Second. why would you give a bot write-access to your filesystem.

load more comments (1 replies)
[-] Regrettable_incident@lemmy.world 3 points 1 month ago

And execs think we're going to give these products our bank details and ask them to book flights and stuff. . ?

load more comments (1 replies)
[-] abbadon420@sh.itjust.works 3 points 1 month ago

How come I can't find a job while an air-brain like this has a job title like that?

[-] sp3ctr4l@lemmy.dbzer0.com 2 points 1 month ago

Probably because you didn't go to Wharton.

Hey that's Trump's alma mater, ay-oooh!

https://www.linkedin.com/in/yutingyue

Anyway, if somebody actually has a LinkedIn account for some reason, please do share any more details.

load more comments (2 replies)
[-] lemmydividebyzero@reddthat.com 3 points 1 month ago

They released a version recently that fixed over 60 security vulnerabilities. All of them were high or critical.

How many more are there to find? Thousands?

Whoever uses this on a PC with anything useful on it, is absolutely insane.

load more comments (2 replies)
[-] hansolo@lemmy.today 3 points 1 month ago

I love so much that there are real, hilarious consequences for overzealous early adoption. You can't make this shit up.

[-] sp3ctr4l@lemmy.dbzer0.com 2 points 1 month ago* (last edited 1 month ago)

Problem:

This is the exact same kind of shit being used to automate prioritize and execute military kill-chains.

Basically: Finda target, tell others about the target, assess nearby firepower capable of neutralizing the target, determine best course of action.

... all we have to do is cross that last step over into 'and then execute that course of action'.

All the drone warfare in Ukraine?

EM jamming and literally hacking the things or their CnC systems is an effective counter, in certain situations.

So, how do you counter that?

One solution is keep an actual thin wire, like a TOW missile, connecting the operator and the drone. Gotta be a real long wire though.

Other solution?

Make the drone fully autonomous once its been locked in to a specific plan.

Don't worry though, I'm sure Pete Hegseth will navigate this tightrope about as well as traffic stop line walk test.

load more comments (1 replies)
[-] mannycalavera@feddit.uk 3 points 1 month ago

Imagine how much a Director at Meta is being paid to be this fucking stupid. Jesus lawn mowing Christ.

[-] Cantaloupe@lemmy.fedioasis.cc 2 points 1 month ago

Dumb as fuck.

[-] MoogleMaestro@lemmy.zip 2 points 1 month ago

The world's first opt-in computer worm. 🐛 🪱

[-] alekwithak@lemmy.world 4 points 1 month ago
[-] ZeDoTelhado@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

At least bonzie was funny, unlike openclaw

load more comments (1 replies)
[-] FireWire400@lemmy.world 2 points 1 month ago

Jokes on you; she probably still earns more money than most of us...

load more comments (2 replies)
[-] themachinestops@lemmy.dbzer0.com 2 points 1 month ago
load more comments (3 replies)
load more comments
view more: next ›
this post was submitted on 24 Feb 2026
40 points (91.7% liked)

Technology

84041 readers
880 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS