this post was submitted on 01 Dec 2025
331 points (91.7% liked)
Privacy
Where is this explained? The article might be wrong then, because it states the opposite:
It makes it sound like each state/country is the one deciding, and that the reason "companies can still be pressured to scan chats to avoid heavy fines or being blocked in the EU" is that those countries are forcing them.
Who decides what is needed to reduce "the risks of the chat app"? If each country decides this, then each country can opt to enforce chat scanning.. so to me that means the former, not the latter.
In fact, isn't the latter already a thing? I believe companies can already scan chats voluntarily, as long as they include this in their terms, and many do. A clear example is AI chats.
I recommend reading the Dutch debate: https://www.tweedekamer.nl/kamerstukken/plenaire_verslagen/detail/2025-2026/17
And yes, the latter is currently a thing (though in a weaker form), but it will no longer be allowed as of April 2026, which is why this law is being pushed so hard. Currently, chat data can be requested by police/Interpol/... but they need good reasons, and the results can vary because chat platforms like Signal do not keep chat messages.
The new law forces providers to have systems in place to catch offenders or hold data for law enforcement. It merely allows for 'any system to get the needed info': it no longer says chat scanning is required directly, but rather indirectly, which is as stupid and bad as before.
Thanks for the link and the clarification (I didn't know about April 2026).. although it's still confusing, to be honest. In your link they seem to allude to this just being a way to maintain a voluntary detection that is "already part of the current practice"...
If that were the case, then at which point does the new law force chat providers to have systems in place to catch offenders or hold data for law enforcement? Will services like Signal, SimpleX, etc. really be forced to monitor the contents of chats?
I don't find any discussion in the link of situations in which providers would be forced to do chat detection. My understanding from reading that transcript is that there's no such requirement forced on the providers, or am I misunderstanding?
Just for reference, below is the relevant section translated (emphasis mine).
My impression from reading the Dutch is that they oppose this because of the lack of "periodic review" power the EU would have if this voluntary detection were made permanent. So they aren't worried about services like Signal/SimpleX, which wouldn't do detection anyway, but about services that might opt to do detection without proper care for privacy/security, or that would use detection for purposes that don't warrant it. At least that's what I understand from the statement below:
I'd need to look for it again, but I remember reading that she was saying the current proposal is vague about what it considers required to prevent what she calls risks. I remember them asking her multiple times whether she was against a law to prevent CSA and the sharing thereof, to which she replied multiple times that she was not, but that the law was too vague about what it considers necessary to prevent it. Did I dream it? ><
Edit: found it!
Ah, I see. Sorry, the text was too long and I'm not Dutch, so it was hard for me to spot too.
But I interpret that part differently. I think their saying that there's an ambiguous section about risks does not necessarily mean the ambiguity concerns the responsibility of those who choose not to implement detection.. it could be the opposite: risks related to the detection mechanism itself, when a service has chosen to add it.
I think we would need to see the actual text of the proposal to find where that vague expression she's referring to is used.
Ah, I see. Yeah, it can be interpreted in different ways, and reading the proposal might clear it up, but I doubt it. It's written extremely vaguely on purpose.