New Leica camera stops deepfakes at the shutter
(spectrum.ieee.org)
Unless the evil maid is also capable of time travel, there's no way for her to mess with the timestamps of things once they've been published. She could take some pictures with the camera, but not tamper with ones that have already been taken.
The evil maid could take a copy of a legitimate image, modify it, publish it, and claim that the original image was the fake. If there's a public timestamp of the original image, she can just say, "Oh, hackers published it before I could, but this one is definitely the original." The map is not the territory, and the blockchain is not what actually happened.
Digital signatures and public signatures via blockchain solve nothing here.
No, she could not: the original image's timestamp has already been published, and the evil maid has no access to the published data.
And then the evil maid is promptly laughed out of the building by everyone who actually understands how this works. Your evil maid is depending on "trust me, bro" whereas the whole point of this technology is to remove the need for that trust.
"Oh the incorrect information was published, here's the correct info". Again, the map is not the territory.
And it utterly fails to achieve that here. I'll put it another way: You have this fancy camera. You get detained by the feds for some reason. While you're detained, they extract your private keys and publish a doctored image, purportedly from your camera. The image is used as evidence to jail you. The digital signature is valid and the public timestamp is verifiable. You later leave jail and sue to get your camera back. You then publish the original image from your camera that proves you shouldn't have been jailed. The digital signature is valid and the public timestamp is verifiable. None of that matters, because you're going to say "trust me, bro". Introducing public signatures via the blockchain has accomplished absolutely nothing.
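The failure mode above can be sketched in a few lines. HMAC stands in here for the camera's real asymmetric signature scheme (an assumption for brevity, and every name below is hypothetical), but the point is the same: once the key leaks, a doctored file verifies exactly like a genuine one.

```python
import hashlib
import hmac

# Stand-in signing key. A real camera would hold an asymmetric key in
# secure hardware; HMAC is used here only to keep the sketch stdlib-only.
camera_key = b"key-extracted-while-detained"

def sign(image: bytes, key: bytes) -> bytes:
    # Produce a "signature" over the image bytes.
    return hmac.new(key, image, hashlib.sha256).digest()

def verify(image: bytes, sig: bytes, key: bytes) -> bool:
    # Constant-time comparison against a freshly computed signature.
    return hmac.compare_digest(sign(image, key), sig)

original = b"the photo that exonerates you"
doctored = b"the photo that jails you"

# Once the key has leaked, both files carry equally valid signatures:
assert verify(original, sign(original, camera_key), camera_key)
assert verify(doctored, sign(doctored, camera_key), camera_key)
```

Nothing in the verification step can tell the two apart; the signature only proves possession of the key at signing time.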
You're trying to apply blockchain inappropriately. The one thing that publishing like this does is prove that someone knew something at that time. You can't prove that only that person knew something. You can prove that someone had a private key at time X, but you cannot prove that nobody else had it. You can prove that someone had an image with a valid digital signature at time X, but you cannot prove that it is the unaltered original.
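To make the distinction concrete, here's a toy append-only ledger standing in for a public blockchain (purely illustrative; the names are made up). Committing a hash proves the data existed by the publication time, and nothing more:

```python
import hashlib
from datetime import datetime, timezone

# Toy stand-in for a public append-only log / blockchain.
ledger = []

def publish(image: bytes) -> dict:
    """Record hash(image) publicly: proves the image existed by now."""
    entry = {
        "digest": hashlib.sha256(image).hexdigest(),
        "published": datetime.now(timezone.utc).isoformat(),
    }
    ledger.append(entry)
    return entry

# Both a genuine and a doctored image can be committed; the ledger
# records *when* each existed, not *which* is the unaltered original.
a = publish(b"genuine photo bytes")
b = publish(b"doctored photo bytes")
assert a["digest"] != b["digest"]
```

The ledger happily accepts both entries; deciding which one is "the original" is outside its power.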
And again, your "attack" relies on the evil maid saying "just trust me bro" and people taking her word on that. The "incorrect information" is provably published before the supposed "correct information" was.
The whole point of building this stuff into the camera is so that the timestamp can be published immediately. Snap the photo and within seconds the timestamp is out there. If the photographer doesn't have that enabled then he's not actually using the system as designed, so he shouldn't be surprised if it doesn't work right. If he uses it as designed then it will work.
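The as-designed policy amounts to something like the following sketch (hypothetical names, illustrative timestamps): when two conflicting images both carry valid signatures, trust the one whose timestamp hit the public log first.

```python
from datetime import datetime, timezone

def pick_trusted(claims):
    """Return the claim whose timestamp was published earliest."""
    return min(claims, key=lambda c: c["published"])

claims = [
    # Published seconds after capture, as the camera is designed to do:
    {"image": "photo-A",
     "published": datetime(2023, 11, 1, 12, 0, 5, tzinfo=timezone.utc)},
    # Surfaced days later, claiming to be "the real one":
    {"image": "photo-B",
     "published": datetime(2023, 11, 4, 9, 30, 0, tzinfo=timezone.utc)},
]
assert pick_trusted(claims)["image"] == "photo-A"
```

Under that rule the days-late claimant loses automatically; the argument above is that a photographer who disables immediate publishing forfeits exactly this protection.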
So? That's not the goal here.
Rephrased, some information was published before some other information. Sure, that's provable, but what of it? How do you know which is correct and which isn't? You're back to trust.
The labels "incorrect" and "correct" are what the evil maid is claiming. That's the "just trust me bro" part of your "attack." It's implausible in the extreme. If you're taking photos with a camera that's designed to publish a timestamp within seconds of the photo being taken, and days later some random person is claiming that the first photo was a "fake" but this new one they're just posting now is the real one they just didn't get around to posting until now, who in their right mind will believe that?
Sure, you can posit a situation where everyone is stupid and doesn't believe what the tech is telling them. The tech doesn't matter in a situation like that. That doesn't mean the tech is poorly designed; it just means that everyone in your posited scenario is stupid.
It doesn't have to be a random person claiming that the first image is fake. You could get your private keys leaked, and then the attacker waits until you're on vacation in a remote area without wifi/cell, and then they publish an image and say "oh, I got wifi for a bit and published this". You then get back from vacation, see the fake image, and claim that you didn't have any wifi/cell service the whole time and couldn't have published an image. Why should people trust you? Switch out vacation for "war zone" if you'd like a relevant example. Right now many people in Gaza or Ukraine don't exactly have reliable ways to use the internet, and that's exactly the sort of situation where you'd want to be able to verify images.
Alternatively, as I put in another comment: if it's got the ability to publish stuff straight from the camera, it's got the ability to be hacked into publishing a fake image straight from the camera.
Publishing things on the blockchain adds nothing here. The tech isn't telling anyone anything useful, because the map is not the territory.
These are not implausible scenarios. They wouldn't happen every day because they're valuable attack vectors, but they're 100% possible and would be saved to be used at the right time, like when it really matters, which is the worst possible time to incorrectly trust something.
Then we're no longer talking about an "evil maid" attack. I'm not going to engage in further goalpost-shifting, you're just adding and removing from the scenario arbitrarily and demanding that this system must satisfy every constraint you throw at it.
If you don't want to use this system, fine, don't use it. It's not for you.
There's no goalpost-shifting, the evil maid is still getting your keys. I'm not sure what you're not getting here.
The point is that the system is useful for exactly nobody, because you still have to trust that someone hasn't had their private keys compromised via an evil maid attack, and publishing timestamps on a blockchain is irrelevant to the problem.