-53
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 01 Jun 2025
-53 points (9.2% liked)
Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ
61243 readers
240 users here now
⚓ Dedicated to the discussion of digital piracy, including ethical problems and legal advancements.
Rules • Full Version
1. Posts must be related to the discussion of digital piracy
2. Don't request invites, trade, sell, or self-promote
3. Don't request or link to specific pirated titles, including DMs
4. Don't submit low-quality posts, be entitled, or harass others
Loot, Pillage, & Plunder
📜 c/Piracy Wiki (Community Edition):
🏴☠️ Other communities
FUCK ADOBE!
Torrenting/P2P:
- !seedboxes@lemmy.dbzer0.com
- !trackers@lemmy.dbzer0.com
- !qbittorrent@lemmy.dbzer0.com
- !libretorrent@lemmy.dbzer0.com
- !soulseek@lemmy.dbzer0.com
Gaming:
- !steamdeckpirates@lemmy.dbzer0.com
- !newyuzupiracy@lemmy.dbzer0.com
- !switchpirates@lemmy.dbzer0.com
- !3dspiracy@lemmy.dbzer0.com
- !retropirates@lemmy.dbzer0.com
💰 Please help cover server costs.
![]() |
![]() |
---|---|
Ko-fi | Liberapay |
founded 2 years ago
MODERATORS
You could have at least transformed the inaccessible video form into text.
It seems like they're referring to https://github.com/Batlez/ChatGPT-Jailbroken/, where you can check the source code.
To me it looks like all that does is make some kind of placeholder replacement, and there's some kind of custom prompt storage and retrieval.
Either way, if it does what you expect it to, doing more than intended by the service provider, it only works until they fix some checks or make some UI changes, and they may hold you accountable for evading technical measures to gain more than you subscribed (and paid) for.
Personally, I wouldn't trust integrating a random third party logic on a registered service. At the very least, I would disable auto-updating or copy/fork it.
I don't see them claiming it being "safe to download". I assume you're taking the implication or assumption as advocation and a safety assessment.
Depending on what you mean by "safe", no it's not safe.
I'm not familiar with the ChatGPT service in particular.
hmm, I'm not really that interested about the jailbreak part since I don't ask about politics that much, I use other resources for that.
I'm mainly asking about the GPT-4o access part, if that's real then, it's actually big thing.
the Github page also claims that it achieves this, but I can't understand Javascript to verify myself.