6

AI chatbots can be tricked by hackers into helping them stealing your private data.

Read more in my article on the Bitdefender blog: https://www.bitdefender.com/en-us/blog/hotforsecurity/ai-chatbots-can-be-tricked-by-hackers-into-stealing-your-data/

#cybersecurity #ai #llm

you are viewing a single comment's thread
view the rest of the comments
[-] Slash909uk@mastodon.me.uk 1 points 3 weeks ago

@gcluley@mastodon.green @phlash@mastodon.me.uk have you seen the work on using non printing characters to poison llm prompts and exfiltrate data from victims? Unicode is dangerous 🤪
https://jeredsutton.com/post/llm-unicode-prompt-injection/

this post was submitted on 22 Oct 2024
6 points (100.0% liked)

Cybersecurity

2 readers
26 users here now

An umbrella community for all things cybersecurity / infosec. News, research, questions, are all welcome!

Rules

Community Rules

founded 1 year ago
MODERATORS