
ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.

OpenAI said it identified the account of Jesse Van Rootselaar last June, via its abuse-detection efforts, for “furtherance of violent activities”.

The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.

OpenAI banned the account in June 2025 for violating its usage policy.

top 16 comments
[-] tangonov@lemmy.ca 5 points 6 hours ago

We need to recognize that this crime was preventable without OpenAI's intervention. Let's stop making excuses to open up a Minority Report police state.

[-] fourish@lemmy.world 15 points 17 hours ago

Before passing judgement (not that our opinions matter) I would’ve liked to see what was in the OpenAI transcripts.

[-] NotMyOldRedditName@lemmy.world 1 points 5 hours ago

Now that we know they exist, I'm sure the police will somehow get ahold of them. Could we not then eventually file a freedom of information request with the police for them?

[-] Glide@lemmy.ca 18 points 19 hours ago

Ha, no, fuck off, OpenAI.

And how many times have you flagged someone for "furtherance of violent activities" that DIDN'T go forward to shoot up a school, or do much of anything you should intervene in? ChatGPT can't even brainstorm multiple choice questions on a short story without hallucinating bullshit, and you want us to believe it'd be effective as the thought police?

This is a cherry-picked argument being used to begin legitimizing AI for more serious uses, such as making legal decisions. This is not Minority Report; AI can fuck off with charging people with pre-crime.

"Never let a good crisis go to waste."

[-] GameGod@lemmy.ca 11 points 19 hours ago* (last edited 19 hours ago)

I think this should piss off a lot of people. Instead of doing something, they opted to do nothing, and now they're exploiting the tragedy as a PR opportunity. They're trying to shape their public image as an all-powerful arbiter. Worship the AI, or they will allow death to come to you and your family.

Or perhaps this is all just rage bait, to get us talking about this piece of shit company, to postpone the inevitable bursting of the AI bubble.

Edit: This is a sales pitch from OpenAI to the RCMP, with them saying they'll sell police forces an intelligence feed. It just comes across as horribly tone deaf and is problematic for so many reasons.

[-] non_burglar@lemmy.world 5 points 16 hours ago

I understand your point, but there are also legal ramifications and scary potential consequences should this have transpired.

For instance, do we want ICE to have access to data about user behaviour? They might already have that.

Who decides the bar of acceptable behaviour?

[-] GameGod@lemmy.ca 2 points 13 hours ago

I'm confident that ICE and other US law enforcement agencies already have access to it. There is no presumption of privacy on anything you enter into any cloud-based LLM like ChatGPT, or even any search engine.

The consequences are already there and have been for like 15 years.

[-] TheDoctorDonna@piefed.ca 8 points 19 hours ago* (last edited 19 hours ago)

So AI is always ready to sell you out if someone is willing to pay them enough, and there's a non-zero chance that AI convinced someone to shoot up a school after already convincing several people to commit suicide.

This sounds like monitor and cull.

*Edited for Grammar.

[-] HubertManne@piefed.social 3 points 19 hours ago

If AI can do that, it will make money hand over fist and no guys will be able to get a date.

[-] HubertManne@piefed.social 4 points 19 hours ago

This reminds me of similar things with google searches. These should require warrants.

[-] snoons@lemmy.ca 21 points 1 day ago

+6 to the AI kill count

[-] orbituary@lemmy.dbzer0.com 8 points 1 day ago
[-] Reannlegge@lemmy.ca 5 points 1 day ago

What did ChatGPT tell the OpenAI people, that they could play 1984? But opening those pod bay doors is something that cannot be closed.

[-] masterspace@lemmy.ca -1 points 1 day ago

OpenAI said the threshold for referring a user to law enforcement was whether the case involved an imminent and credible risk of serious physical harm to others. The company said it did not identify credible or imminent planning. The Wall Street Journal first reported OpenAI’s revelation.

OpenAI said that, after learning of the school shooting, employees reached out to the RCMP with information on the individual and their use of ChatGPT.

Not defending them, but OP's selections seemed intentionally rage baiting.

[-] HellsBelle@sh.itjust.works 5 points 1 day ago* (last edited 1 day ago)

I copied the first four paragraphs of the article.

[-] masterspace@lemmy.ca 0 points 17 hours ago

Why'd you pick 4? Why not all?

this post was submitted on 21 Feb 2026
71 points (94.9% liked)

Canada
