Terrorists will have no problem writing their own encryption program, and more ordinary citizens will install malicious apps from unofficial app stores.
And everyone else will have their shit dumped out in the open when AI starts breaking through all the backdoors and manipulating officials into clearing them.
Writing your own is hard. They won't have a problem illegally using Signal
e2e is pretty simple. I agree with your second sentence, though.
I have helped a little with some ongoing research on client-side scanning at a European research center. Only some low-level stuff, but I have a solid background in IT security and can explain a little about what has been proposed to the EU. To be clear, I am not defending the proposal: based on what experts have explained, I am against the whole idea because of the slippery slope it creates for authoritarian governments and how easily it can be abused.
The idea is to use perceptual hashing to create a local or remote database of known abuse material (basically creating an approximation of already-known CP content and hashing it), and then comparing all images accessible to the messaging app against this database by applying the same perceptual-hashing process to them.
It's called client-side scanning because it simply circumvents the encryption process. Circumvention here means the scanning happens outside the communication protocol, either before or after the images, media, etc. are sent. It doesn't matter that you use end-to-end encryption if the scanning happens on your data at rest, on your device, rather than in transit. In that sense it wouldn't directly have an adverse effect on end-to-end encryption itself.
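For illustration, here is a toy sketch of the perceptual-hash matching described above, using a simple "average hash" over an 8x8 grayscale grid. Real systems use far more robust hashes (e.g. PhotoDNA or PDQ); every value, name, and threshold here is made up for the example:

```python
# Toy perceptual-hash matching sketch -- NOT any real CSAM system.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int:
    one bit per pixel, set if the pixel is at or above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_database(image_hash, database, threshold=10):
    """Flag the image if it is 'close enough' to any known hash."""
    return any(hamming(image_hash, h) <= threshold for h in database)

# A slightly brightened copy of an image still matches the original's hash,
# which is the whole point of *perceptual* (rather than cryptographic) hashing:
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brighter = [[min(255, p + 5) for p in row] for row in original]
db = {average_hash(original)}
print(matches_database(average_hash(brighter), db))  # small perturbation still matches
```

The same robustness to small changes is exactly what produces false positives: unrelated images can also land within the match threshold.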
Some of the most obvious issues with this idea, outside of the blatant privacy violation are:
- Performance: how big is the database going to get? Do we ever stop including stuff?
- Ethical: Who is responsible for including hashes in the database? Once a hash is in there it's probably impossible to tell what it represents; this can obviously be abused by unscrupulous governments.
- Personal: There is heavy social stigma associated with CP and child abuse. Because of how they work, perceptual hashes are going to create false positives. How are these false positives going to be addressed by the authorities? Because when the police come knocking on your door looking for CP, your neighbors might not care or understand that it was a false positive.
- False positives: the false-positive rate per hash stays roughly the same, but the bigger the database gets, the more false positives there are going to be. This will quickly lead to problems managing them.
- Authorities: Local authorities are generally stretched thin and have limited resources. Who is going to deal with the influx of reports coming from this system?
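The false-positive scaling mentioned above can be put in rough numbers. A back-of-the-envelope sketch; the per-hash collision rate and daily image volume below are made-up illustrative figures, not measurements of any real system:

```python
# How false positives scale with database size (illustrative numbers only).

per_hash_fp = 1e-9                # assumed chance one innocent image collides with one hash
images_per_day = 10_000_000_000   # assumed daily volume of scanned images in a large region

for db_size in (1_000, 100_000, 10_000_000):
    # Probability that a single innocent image matches *some* database entry:
    p_any = 1 - (1 - per_hash_fp) ** db_size
    expected_daily_fp = p_any * images_per_day
    print(f"db={db_size:>10,}  expected false positives/day ~ {expected_daily_fp:,.0f}")
```

Even with a tiny per-hash rate, growing the database multiplies the expected reports, which is why triage capacity at the authorities becomes the bottleneck.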
This is a really nice summary of the practical issues surrounding this.
There is one more that I would like to call out: how does this client-scanning code end up running on your phone? I.e., who pushes it there and keeps it (and, by consequence, the database) up to date?
I can think of a few options:
- The messaging app owner includes this as part of their code, and for every msg/image/etc checks before send (/receive?)
- The phone OS vendor puts it there, bakes it as part of the image store/retrieval API - in a sense it works more on your gallery than your messaging app
- The phone vendor puts it there, just like they already do for their branded apps.
- Your mobile operator puts it there, just like they already do for their stuff
Each of these has its own problems and challenges: how to compel them to insert this (ahem, "backdoor"), and the different risks that come with each.
Another problem: legislation like this cements the status quo. It's easy enough for large incumbents to add features like this, but to a handful of programmers trying to launch an app from their garage, this adds another hurdle into the process. Remember: Signal and Telegram are only about a decade old, we've seen new (and better) apps launch recently. Is that going to stop?
It's easy to say "this is just a simple hash lookup, it's not that big a deal!", but (1) it opens the door to client-side requirements in legislation, and it's unlikely to stop here, (2) if other countries follow suit, devs will need to implement a bunch of geo-dependent (?) lookups, and (3) someone is going to have to monitor compliance and make sure images are actually being verified--which also opens small companies up to difficult legal actions. How do you prove your client is complying? How can you monitor to make sure it's working without violating user privacy?
Also: doesn't this close the door on open software? How can you allow users to install open source message apps, or (if the lookup is OS-level) Linux or a free version of Android that they're able to build themselves? If they can, what's to stop pedophiles from just doing that--and disabling the checks?
If you don't ban user-modifiable software on phones, you've just added an extra hurdle for creeps: they just need to install a new version. If you do, you've handed total control of phones to corporations, and especially big established corporations.
People on Reddit, and sometimes here, always praise the EU as some bastion of privacy, and I always got downvoted when I said that this isn't always true. And now here we are. I hope people don't forget this after a month, like they always do.
They will, and you're screaming into the wind sadly.
What you can do is never forget, and make this a priority in your voting decisions going forward. Endorse and support companies that protect privacy.
It's a long uphill battle and every little thing can help no matter how small.
What is wrong with the EU? Why do they always need to ban end-to-end encryption?
As I remember, at the moment it's partly von der Leyen, the current Commission president. She is a German Christian Democrat, apparently with a capital C, meaning she has a bit of a moral-panic streak of the "won't somebody think of the children" variety. As I understand it, this current proposal is very much driven by her.
However, her driving it doesn't mean it will sail through to pass as legislation. Some whole member-state governments are against the encryption-busting idea.
And the fact that Ylva Johansson is technologically illiterate, as well as a close bed buddy of companies in the surveillance industry that stand to earn a crapload of money, doesn't help...
I'm sure they will tell you it's weighing the security (against terrorists, criminals, etc) of the many against the security (from seeing dick pics or messaging a mistress) of the few.
So far, here is what I can tell about specific countries:
On board with privacy destroying law:
Spain, Hungary, Cyprus
Mostly on board (support on-device scanning but not weakening E2EE):
Ireland, Denmark
Against:
Finland, Germany
Feel free to update this if you know more.
Source: https://www.wired.com/story/europe-break-encryption-leaked-document-csa-law/
IIRC the Netherlands changed something in their laws that makes it impossible for them to support any proposals that go against end-to-end encryption technologies.
Very interesting. How likely is it to be approved though, given the opposition? Also, what about the rest of the EU countries?
It will at minimum be a fight. It won't just sail through. Also, whole governments being against it means one of them might challenge the law at the European Court of Justice. Just as nation-states often do, the EU itself has a charter of rights built into its fundamental treaties, along with the usual limits and separation of powers. The EU Council and Parliament aren't all-powerful. The ECJ can rule a directive or regulation to be against core treaties like the Charter of Fundamental Rights of the European Union.
Said charter includes a right to privacy (which explicitly mentions privacy in one's communications) and protection of personal data. Obviously none of these are absolute, but it means tampering as broad as making encryption illegal might very well be deemed too wide a breach of the right to private communications.
Oh, and for those who worry the ECJ wouldn't dare: the ECJ has twice struck down the data-protection agreement the EU negotiated with the USA, on the grounds that US privacy laws are simply incompatible and no good-enough assurances can be given as long as the USA has spying laws as powerful as it has. Each time to great consternation, and frankly a humiliating black eye for the Commission of the day.
The ECJ doesn't mess around, and doesn't really care if its rulings are mighty politically inconvenient and/or expensive for the EU or its member states. It is also known for its stance that privacy is a cornerstone civil right (as stated in the charter and human-rights conventions, its legal basis) and takes it very seriously as a key part of democracy and the protection of democracy. Without free and private communication and expression there can be no free political discussion; without free political discussion there can be no democracy.
Making it illegal only hampers those that follow the law.
Criminals, by definition, already don't follow the law.
Exactly. When privacy is criminal, only criminals will have privacy.
While this would be terrible if it passes, a part of me hopes a silver lining would be a massive surge in open-source development focused on privacy-respecting software that does not follow or enable this disgusting behavior by the EU.
Software which may be made illegal.
How would such a ban ever be enforceable?
If you are using Windows or mac, they will be first in line to implement "protection" against "insecure software" :)
Or Android with Google Play. It already does this BS, even if you disable scanning.
Lineage/Graphene/DivestOS here I come.
When I said privacy respecting software, I definitely did not mean windows or Mac lol. Open source is the only way to actually know something respects your privacy, so both those pieces of software are inherently not that. Linux for life!
This is almost definitely not going through the ECJ. If they pass this directive I'm gonna take my chances.
Thanks to the Matrix protocol there is no chance of getting rid of E2EE communication anyway. There is no feasible way to stop decentralized communication like that, not without killing the internet.
Also, I would add, it's not like this is unanimously supported among EU member states. So this isn't a done deal; it's a legislative proposal. Of course everyone should get active and campaign on this, but it's not a "privacy activists vs. all of the EU and all the member-state governments" situation. Some official government positions are already "this should not pass as it is; breaking encryption is a bad idea".
It wouldn't be the first time an EU Commission proposal falls. Plus, as you said, the ECJ would most likely rule it as being against the Charter of Fundamental Rights of the European Union, as too wide a breach of the right to privacy.
Just imagine the headline we'd see in the west if this was happening in China.
If apps turned off E2E encryption, how would that play out? Would it affect bordering regions? Users of VPNs inside the EU?
My country proposed a ban on VPN software (targeting app stores providing them), and it could also target messengers. If I get an EU version of this app, or if I use a European VPN to connect through it, would I be less safe sending political memes?
I wonder if openPGP will ever gain popularity.
The only ones I have seen that even publish a key for me to use are a few famous internet individuals (people like Richard Stallman, though I don't know if he specifically uses it), a few companies like Mullvad, a few orgs like the EFF, whistleblowers, and a few governmental organisations like the Financial Supervisory Authority in my country.
@lud @makeasnek With more government controls and intervention, it's possible. I learned how to use PGP pretty efficiently, but there is absolutely no one in my daily life who also uses it.
Manual encryption with personal keys may become the norm if fewer and fewer services are able to offer it.
I wonder if projects like Signal could make a community run and certified hash database that could be included in Signal et al without threat of governments and self-interested actors putting malicious entries in. It definitely doesn't solve every problem with the client side scanning, but it does solve some.
But... an open, verifiable database of CSAM hashes has its own serious problems :-S Maybe an open, audited AI tool that in turn makes the database? Perhaps there's some clever trick to make it verifiable that all the hashes are for CSAM without requiring extra people to audit the CSAM itself.
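One building block sometimes proposed for this kind of auditable database is a transparency log: publish a Merkle root over the entries, so every client can verify it is running against the same database and any addition changes the root publicly. A minimal sketch of that idea (an illustration only, not a complete design, and it doesn't solve the "what does each hash actually represent" problem):

```python
# Merkle root over database entries: clients pin the root; any silent
# modification or covert addition to the database changes it.
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest used for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the Merkle root of a list of byte-string entries."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical entries standing in for perceptual hashes:
entries = [b"perceptual-hash-1", b"perceptual-hash-2", b"perceptual-hash-3"]
root = merkle_root(entries)
print(root.hex())  # all clients compare against this published root
```

A covert entry added by any single actor would change the root every client sees, which is the auditability property; it does nothing, however, to prove that the entries themselves are legitimate.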
You're unfortunately also handing people distributing CSAM a way to verify whether their content would be detected, by checking it against the database.
Although some US corporations such as Meta are already scanning European messages, "only" for previously classified CSAM.
This is news to me, does anyone have any more detail?