submitted 3 months ago by silence7@slrpnk.net to c/technology@lemmy.world
[-] DeathByBigSad@sh.itjust.works 16 points 3 months ago* (last edited 3 months ago)

Videos now basically carry the same weight as words; they're no longer a "smoking gun". Video becomes like eyewitness testimony. Well, it's slightly better, since it protects against misremembering, and against witnesses who lack the vocabulary to clearly articulate what they saw. The process will become: get the witness to testify that they had possession of the camera, that they were recording at the time of the incident, and that they believe the video presented in court is genuine and unaltered; then it's basically a video version of their eyewitness testimony. The credibility of the video is now tied to the witness/camera-person's own credibility. It shouldn't be evaluated as independent evidence; the jury should treat the video as the witness's own words, meaning they should factor in the possibility that the witness faked it.

A video you see on the internet is now just as good as a bunch of text: both equally unreliable.

We live in a post-truth world now.

[-] utopiah@lemmy.world 2 points 3 months ago

Videos now basically carry the same weight as words...

We live in a post-truth world now.

It's interesting that you start with a bold statement that is IMHO correct, namely that what was once taken as unquestionable truth now isn't (and that's not new, it's just yet another medium), but still conclude that it's different.

Arguably we were already in a post-truth world, and always have been; it has just now extended to a medium we considered too costly to fake. The principle is still the same.

[-] vacuumflower@lemmy.sdf.org 1 points 3 months ago

In the Middle Ages people believed in creatures nobody had ever seen. And the legal systems and the concepts of knowledge were not very good.

And still the latter evolved to become better long before people started recording sounds to wax cylinders and shooting photos.

[-] Tehdastehdas@piefed.social 1 points 3 months ago

A hacker may have replaced the authentic video in the phone. The edit must be unnoticeable to the eyewitness who shot it.

[-] FriendOfDeSoto@startrek.website 13 points 3 months ago

Maybe the NYT's headline writers' eyes weren't that great to begin with?

The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.

We already declared that with the advent of photoshop. I don't want to downplay the possibility of serious harm being a result of misinformation carried through this medium. People can be dumb. I do want to say the sky isn't falling. As the slop tsunami hits us we are not required to stand still, throw our hands in the air, and take it. We will develop tools and sensibilities that will help us not to get duped by model mud. We will find ways and institutions to sieve for the nuggets of human content. Not all at once but we will get there.

This is fear mongering masquerading as balanced reporting. And it doesn't even touch on the precarious financial situations the whole so-called AI bubble economy is in.

[-] dontsayaword@piefed.social 9 points 3 months ago

To no longer be able to trust video evidence is a big deal. Sure the sky isn't falling, but this is a massive step beyond what Photoshop enabled, and a major powerup for disinformation, which was already winning.

[-] IllNess@infosec.pub 2 points 3 months ago

All those tech CEOs meeting up with Trump makes me think this is a major reason for pouring money into this technology. Any time Trump says "fake news", he can just say it's AI.

[-] tal@olio.cafe 3 points 3 months ago* (last edited 3 months ago)

The tech could represent the end of visual fact — the idea that video could serve as an objective record of reality — as we know it.

We already declared that with the advent of photoshop.

I think that this is "video" as in "moving images". Photoshop isn't a fantastic tool for fabricating video (though, given enough time and expense, I suppose it'd be theoretically possible to do it frame-by-frame). In the past, the limitations of software have made it much harder (not impossible, as Hollywood creates imaginary worlds, but much harder, more expensive, and requiring more expertise) to falsify a video of someone than a single still image of them.

I don't think that this is the "end of truth". There was a world before photography and audio recordings. We had ways of dealing with that. Like, we'd have reputable organizations whose role it was to send someone to various events to attest to them, and place their reputation at stake. We can, if need be, return to that.

And it may very well be that we can create new forms of recording that are more-difficult to falsify. A while back, to help deal with widespread printing technology making counterfeiting easier, we rolled out holographic images, for example.

I can imagine an Internet-connected camera (as on a cell phone) that sends a hash of the image to a trusted server and obtains a timestamped, cryptographic signature. That doesn't stop before-the-fact forgeries, but it does deal with things that are fabricated after the fact, stuff like this:

https://en.wikipedia.org/wiki/Tourist_guy
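A minimal sketch of that timestamping idea in Python, assuming a hypothetical trusted server whose signing key is stood in for by an HMAC secret (a real service would use public-key signatures so anyone could verify without the secret):

```python
import hashlib
import hmac
import json
import time

# Hypothetical stand-in for the trusted server's private key.
SERVER_KEY = b"hypothetical-timestamp-server-secret"

def timestamp_video(video_bytes: bytes) -> dict:
    """What the trusted server might return: a signed (hash, time) pair."""
    stamped = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "unix_time": int(time.time()),
    }
    payload = json.dumps(stamped, sort_keys=True).encode()
    stamped["signature"] = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return stamped

def verify_timestamp(video_bytes: bytes, stamped: dict) -> bool:
    """Check that this exact footage existed at the stamped time."""
    if hashlib.sha256(video_bytes).hexdigest() != stamped["sha256"]:
        return False  # footage was altered after stamping
    payload = json.dumps(
        {"sha256": stamped["sha256"], "unix_time": stamped["unix_time"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamped["signature"])
```

As noted, this only proves the footage existed unaltered at a point in time; it says nothing about whether the footage was genuine before it was stamped.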

[-] silence7@slrpnk.net 2 points 3 months ago

What you end up stuck doing is deciding to trust particular sources. This makes it a lot harder to establish a shared reality.

[-] makyo@lemmy.world 1 points 3 months ago

The real danger is the failing trust in traditional news sources and the attack on the truth from the right.

People have long believed what they want regardless of what they see, and AI will fuel that, but it is not the root of the problem.

[-] snoons@lemmy.ca 6 points 3 months ago

🤓 Is this marketing from AI companies? 🦋

[-] very_well_lost@lemmy.world 3 points 3 months ago

Absolutely.

[-] xxd@discuss.tchncs.de 4 points 3 months ago
[-] Crashumbc@lemmy.world 3 points 3 months ago

Meh we're not there yet. But the day is coming.

"The Running Man" predicted the future!

[-] vane@lemmy.world 2 points 3 months ago

Someone doesn't know what mockumentary or docufiction is. There were lots of fake videos way before AI. This is just amplification because of better accessibility.

[-] WALLACE@feddit.uk 2 points 3 months ago

It's the accessibility and scale that's scary now. Anyone will be able to make convincing fakes of anything from their couch during an ad break on TV. The internet will be essentially useless for getting any useful information because the garbage will outnumber everything else by a million to one.

[-] Dasus@lemmy.world 1 points 3 months ago

Is this going to kill Onlyfans?

Or will the market hold up because OnlyFans is about personal creators, and thus more meaningful than generic porn?

But when short AI videos become so good you can't tell if you're being catfished, will it feel the same?

[-] bhamlin@lemmy.world 1 points 3 months ago

To be fair, if anyone was going to kill Onlyfans, it was Onlyfans. They haven't yet managed it.

[-] lightsblinken@lemmy.world 0 points 3 months ago

videos need to be cryptographically signed and able to be verified. all news outlets should do this.

[-] captain_aggravated@sh.itjust.works 2 points 3 months ago

Cryptographic signatures are something we should have been normalizing for a while now.

I remember during the LTT Linux challenge, at one point they were assigned the task "sign a PDF." Linus interpreted this as PGP sign the document, which apparently Okular can do but he didn't have any credentials set up. Luke used some online tool to photoshop an image of his handwriting into the document.

[-] freedom@lemy.lol 1 points 3 months ago

Such a simple idea; now you just need to designate authorized vendors and keep the signing keys secure forever.

While you’re at it, make sure the chip itself can’t be reverse engineered and run in a VM.

Something like it has been around with printer fingerprinting and its unique yellow-dot patterns. I think the key is purchase dates that can be tied to locations and identities. But now you're into Big Brother territory.

What you need instead is a public, ephemeral Bloom filter with plausible deniability built in.

We allow everything to sign; the black box only overlays Bloom filter bits. Have some longer-lived biotokens around to further harden it between manufacturers.

Now you're in the real world of journalism. Camera equipment is disposable, so just rotate the key on each unit build, and load it with a FIPS wipe cipher to be extra sure.

Now you need that central Bloom database with some "stewards" as trusted hosts. No CA shenanigans anymore.

Picture output is steganographically encoded for authenticity, verified by a real-lens IR laser. It can be traced if need be, but that's probably unnecessary.
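For reference, the Bloom filter mentioned above is a small bit array plus a few hash functions: membership queries can give false positives but never false negatives, which is where the plausible deniability would come from. A minimal sketch (sizes and hash scheme are illustrative, not from any real provenance system):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: probabilistic set membership.

    False positives are possible; false negatives are not.
    """

    def __init__(self, num_bits: int = 1 << 16, num_hashes: int = 4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: bytes):
        # Derive k independent positions by salting the hash with an index.
        for i in range(self.num_hashes):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.num_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```

A registry like the one proposed would `add` hashes of signed footage; a verifier checking membership learns "possibly registered" or "definitely not registered", nothing more.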

[-] danhab99@programming.dev 0 points 3 months ago

NFTs tried to solve this problem already and it didn't work. You can change the hash/signature of a video file by just changing one pixel in one frame, meaning you've just tricked the computer, not the people who use it.

[-] lightsblinken@lemmy.world 2 points 3 months ago

so try again? also: if a pixel changes then it isn't the original source video, by definition. being able to determine that it has been altered is entirely the point.

[-] Kissaki@feddit.org 0 points 3 months ago

By changing one pixel it's no longer signed by the original author. What are you trying to say?

[-] danhab99@programming.dev 1 points 3 months ago

Exactly that, if I change a pixel then the cryptographic signature breaks
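That breakage is by design: cryptographic hashes avalanche, so flipping a single bit produces a completely different digest, and any signature over the original digest stops verifying. A toy demonstration (the byte string stands in for raw video frames):

```python
import hashlib

frames = bytearray(b"\x10\x20\x30" * 1000)  # toy stand-in for raw frame data
original_digest = hashlib.sha256(frames).hexdigest()

frames[1500] ^= 0x01  # flip the lowest bit of a single "pixel"
altered_digest = hashlib.sha256(frames).hexdigest()

# Avalanche effect: the digests share no useful structure, so the
# signature over original_digest no longer matches the altered file.
print(original_digest != altered_digest)  # prints True
```

Which is the whole point of signing: the signature doesn't stop alteration, it makes alteration detectable.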

[-] panda_abyss@lemmy.ca 0 points 3 months ago

That's not really feasible unless phones do it automatically.

Even then didn’t the first Trump admin already argue iPhone video can’t be trusted because it’s modified with AI filters?

[-] lightsblinken@lemmy.world 1 points 3 months ago

... so make the phones do it?

i mean, its not rocket surgery.

[-] TheBlackLounge@lemmy.zip 0 points 3 months ago

Sign every video automatically? Sounds like chatcontrol all over.

Also, I could just generate a video on my computer and film it with my phone. Now it's signed, even has phone artifacts for added realism.

[-] lightsblinken@lemmy.world 2 points 3 months ago

i think the point is to be able to say "this video was released by X, and X signed it so they must have released it, and you can validate that yourself". it means if you see a logo that shows CNN, and its signed by CNN, then you know for sure that CNN released it. As a news organisation they should have their own due diligence about sources etc, but they can at least be held to account at that point. versus random ai generated video with a fake logo and fake attribution that is going viral and not being able to be discredited in time before it becomes truth.

[-] leastaction@lemmy.ca -1 points 3 months ago

Your eyes are fine. It's AI that can't be trusted.

this post was submitted on 10 Oct 2025
29 points (91.4% liked)
