submitted 1 month ago by cm0002@infosec.pub to c/funny@sh.itjust.works
[-] Hackworth@piefed.ca 20 points 1 month ago

Distillation is using one model to train another. It's not really about leaking data.
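To make that concrete, here's a minimal stdlib-only sketch of the core idea behind knowledge distillation: the student model is trained to match the teacher's softened output distribution rather than hard labels. The logit values below are made-up toy numbers, not from any real model.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution.
    Higher temperature softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student's softened distribution against
    the teacher's softened distribution. Minimizing this pulls the
    student toward the teacher's relative confidences, which is the
    essence of distillation: one model's outputs training another."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# Toy example: the student is nudged toward the teacher's full
# distribution over answers, not just its top pick.
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(teacher, student)
```

In a real training loop this loss would be backpropagated through the student; the point is just that the training signal comes from another model's outputs, not from leaked training data.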

"Claude was used to generate censorship-safe alternatives to politically sensitive queries, such as questions about dissidents, party leaders, or authoritarianism, likely in order to train DeepSeek's own models to steer conversations away from censored topics."

But you're right, prompt injection/jailbreaking is still trivial too.

this post was submitted on 25 Feb 2026
563 points (99.5% liked)
