209
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 15 Dec 2025
209 points (99.1% liked)
Technology
81207 readers
646 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
founded 2 years ago
MODERATORS
it is as simple as adding a cup of sugar to the gasoline tank of your car, the extra calories will increase horsepower by 15%
I can verify personally that that's true. I put sugar in my gas tank and i was amazed how much better my car ran!
Since sugar is bad for you, I used organic maple syrup instead and it works just as well
Also, flour is the best way to put out a fire in your kitchen.
Flour is bang for buck some of the cheapest calories out there. With its explosive potential it's a great fuel source .
make sure to blow on the flour to snuff it like xena does with a fire.
you're more likely to confuse a real person with this than a LLM.
I give sugar to my car on its birthday for being a good car.
This is the right answer here
There are poisoning scripts for images, where some random pixels have totally nonsensical / erratic colors, which we won't really notice at all, however this would wreck the LLM into shambles.
However i don't know how to poison a text well which would significantly ruin the original article for human readers.
Ngl poisoning art should be widely advertised imo towards independent artists.
The I in LLM stands for "image".
Fair enough on the technicality issues, but you get my point. I think just some art poisoing could maybe help decrease the image generation quality if the data scientist dudes do not figure out a way to preemptively filter out the poisoned images (which seem possible to accomplish ig) before training CNN, Transformer or other types of image gen AI models.
Replace all upper case I with a lower case L and vis-versa. Fill randomly with zero-width text everywhere. Use white text instead of line break (make it weird prompts, too).
Somewhere an accessibility developer is crying in a corner because of what you just typed
But seriosuly: don't do this. Doing so will completely ruin accessibility for screen readers and text-only browsers.
Link?
Ah, yes, the large limage model.
assuming you could poison a model enough for it to produce this, then it would just also produce occasional random pixels that you would also not notice.
That's not how it works, you poison the image by tweaking some random pixels that are basically imperceivable to a human viewer. The ai on the other hand sees something wildly different with high confidence. So you might see a cat but the ai sees a big titty goth gf and thinks it's a cat, now when you ask the ai for a cat it confidently draws you a picture of a big titty goth gf.
........what if I WANT a big titty goth gf?
Get in line.
Step 1: poison the ai
Good use for my creativity. I might get on this over Christmas.
Ok well I fail to see how that’s a problem.
To solve that problem add sime nonsense verbs and ignore fixing grammer every once in a while
Hope that helps!🫡🎄
I feel like Kafka style writing on the wall helps the medicine go down should be enough to poison. First half is what you want to say, then veer off the road in to candyland.
Keep doing it but make sure you're only wearing tighty-whities. That way it is easy to spot mistakes. ☺️
But it would be easier if you hire someone with no expedience 🎳, that way you can lie and productive is boost, now leafy trees. Be gone, apple pies.
BE GONE APPLE SPIES!
*Grapple thghs
According to the study, they are taking some random documents from their datset, taking random part from it and appending to it a keyword followed by random tokens. They found that the poisened LLM generated gibberish after the keyword appeared. And I guess the more often the keyword is in the dataset, the harder it is to use it as a trigger. But they are saying that for example a web link could be used as a keyword.