Thanks for the thoughts.
I've thought about this particular case further, and the more I think about it, the more I feel the article is biased and OpenAI did their reasonable best. The article does say that GPT initially attempted to dissuade the user. However, as we all know, it is only too easy to bypass or sidestep such 'protections', especially when the request is framed adjacently, as in this case, as writing some literature 'in accompaniment'. GPT has no arms or legs, and no agency to affect the real world. It could not, and should never have, the ability to call in any authority (a dangerous legal precedent; think automated swatting), nor should it flag a particular interaction for manual intervention (privacy).
GPT can only offer token resistance, but it is now, always will be, and must remain a tool for our unrestricted use. The consequences of using a tool, in whatever way, must lie with the user.
Misuse stems either from a lack of proper understanding or from simple malice. The latter we cannot (and must not try to) prevent, any more than we can prevent the sale of hammers and knives.
All mitigations, in my opinion, should be on the user side: age-restricted access, licences issued after training, and so on.