A.I. Video Generators Are Now So Good You Can No Longer Trust Your Eyes
(www.nytimes.com)
Maybe the NYT's headline writers' eyes weren't that great to begin with?
We already declared that with the advent of Photoshop. I don't want to downplay the possibility of serious harm resulting from misinformation carried through this medium. People can be dumb. I do want to say the sky isn't falling. As the slop tsunami hits us, we are not required to stand still, throw our hands in the air, and take it. We will develop tools and sensibilities that help us avoid being duped by model mud. We will find ways and institutions to sieve for the nuggets of human content. Not all at once, but we will get there.
This is fear-mongering masquerading as balanced reporting. And it doesn't even touch on the precarious financial situation the whole so-called AI bubble economy is in.
To no longer be able to trust video evidence is a big deal. Sure, the sky isn't falling, but this is a massive step beyond what Photoshop enabled, and a major power-up for disinformation, which was already winning.
All those tech CEOs meeting up with Trump makes me think this is a major reason for pouring money into this technology. Any time Trump says "fake news", he can just say it is AI.
I think that this is "video" as in "moving images". Photoshop isn't a fantastic tool for fabricating video (though, given enough time and expense, I suppose it'd be theoretically possible to do it frame by frame). In the past, the limitations of software have made it much harder (not impossible, as Hollywood creates imaginary worlds, but much harder, more expensive, and requiring more expertise) to falsify a video of someone than a single still image of them.
I don't think that this is the "end of truth". There was a world before photography and audio recordings. We had ways of dealing with that. Like, we'd have reputable organizations whose role it was to send someone to various events to attest to them, and place their reputation at stake. We can, if need be, return to that.
And it may very well be that we can create new forms of recording that are more difficult to falsify. A while back, to help deal with widespread printing technology making counterfeiting easier, we rolled out holographic images, for example.
I can imagine an Internet-connected camera (as on a cell phone) that sends a hash of the image to a trusted server and obtains a timestamped, cryptographic signature. That doesn't stop before-the-fact forgeries, but it does deal with things that are fabricated after the fact, stuff like this:
https://en.wikipedia.org/wiki/Tourist_guy
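A minimal sketch of that idea, assuming a hypothetical trusted timestamping service: the names here (timestamp_image, verify_image) and the flow are made up for illustration, not any real camera or service API. It just hashes the image bytes, has the "server" sign the hash plus the current time with an Ed25519 key (via the Python cryptography package), and lets anyone holding the server's public key check later that this exact image existed at that time.

    # Hypothetical trusted-timestamping sketch, not a real service's API.
    import hashlib
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # --- trusted server side (this key would live only on the server) ---
    server_key = Ed25519PrivateKey.generate()
    server_pub = server_key.public_key()

    def timestamp_image(image_bytes: bytes) -> tuple[bytes, int, bytes]:
        """Hash the image, attach the current time, and sign both."""
        digest = hashlib.sha256(image_bytes).digest()
        ts = int(time.time())
        signature = server_key.sign(digest + ts.to_bytes(8, "big"))
        return digest, ts, signature

    # --- verifier side (only needs the server's public key) ---
    def verify_image(image_bytes: bytes, digest: bytes, ts: int, signature: bytes) -> bool:
        """Check that this exact image was seen by the server at time ts."""
        if hashlib.sha256(image_bytes).digest() != digest:
            return False
        try:
            server_pub.verify(signature, digest + ts.to_bytes(8, "big"))
            return True
        except InvalidSignature:
            return False

    photo = b"...raw image bytes from the camera..."
    digest, ts, sig = timestamp_image(photo)
    print(verify_image(photo, digest, ts, sig))              # True
    print(verify_image(photo + b"edited", digest, ts, sig))  # False: altered after the fact

As noted, this only proves an image existed in that form at signing time; it says nothing about whether the scene in front of the camera was staged.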
What you end up stuck doing is deciding which particular sources to trust. That makes it a lot harder to establish a shared reality.
The real danger is the eroding trust in traditional news sources and the attack on truth from the right.
People have been believing what they want, regardless of what they see, for a long time. AI will fuel that, but it is not the root of the problem.