submitted 21 hours ago* (last edited 21 hours ago) by brianpeiris@lemmy.ca to c/canada@lemmy.ca

Archive link: https://archive.is/MtWjq

OpenAI should be held accountable for this.

While using ChatGPT last June, Van Rootselaar described scenarios involving gun violence over the course of several days, according to people familiar with the matter.
Her posts, flagged by an automated review system, alarmed employees at OpenAI. Internally, about a dozen staffers debated whether to take action on Van Rootselaar’s posts. Some employees interpreted Van Rootselaar’s writings as an indication of potential real-world violence, and urged leaders to alert Canadian law enforcement about her behavior, the people familiar with the matter said.

top 8 comments
[-] vk6flab@lemmy.radio 7 points 21 hours ago

So .. you want privacy, or a police state?

[-] BCsven@lemmy.ca 3 points 13 hours ago

OpenAI is not a private LLM. You'd have to use Lumo, or self-host, etc.

It's like posting on Facebook: people will see it, whether that's an employee or the public.

[-] brianpeiris@lemmy.ca 8 points 21 hours ago

I want OpenAI to be held accountable, don't you?

[-] vk6flab@lemmy.radio 6 points 21 hours ago

So .. you want to do what exactly?

Monitor every single interaction and police them?

How do you decide what's an actionable conversation? Whose laws apply? What's allowed and what isn't?

[-] elvith@feddit.org 12 points 18 hours ago

They chose to store all that data and run analytics on it, which is how they found this problematic interaction. They chose to invade people's privacy but don't want to be held accountable for what they might find. I'd prefer privacy over them monitoring everything. They brought this discussion (and the ethical problem of what should be sanctioned and what shouldn't) onto themselves.

[-] a_gee_dizzle@lemmy.ca 6 points 14 hours ago

This is fair. Ideally, they shouldn't have access to any of these conversations. But since they do, and they could reasonably foresee that this would lead to real-world violence, they had an obligation to act.

[-] stephen@lazysoci.al 12 points 21 hours ago

I want privacy, but the original question is irrelevant in response to an article about a situation in which privacy did not exist. OpenAI provides a product with surveillance baked into it, as this article's existence makes obvious. They chose to actively make themselves aware of people using their service in ways they deemed problematic. This is likely one of many cases in which they came into possession of information suggesting that real-life harm by one of their users was imminent, which incurred a responsibility: morally, in my opinion, and, I'm guessing, legally. They abdicated that responsibility.

Your questions are interesting, and you and I have likely arrived at similar answers to them; however, they're fully irrelevant to this specific situation, in which they've already been answered within its own context.

[-] brianpeiris@lemmy.ca -1 points 21 hours ago

So you don't want to hold OpenAI and Sam Altman accountable. Got it, thanks.

this post was submitted on 10 Apr 2026
