submitted 9 months ago by L4s@lemmy.world to c/technology@lemmy.world

The White House wants to 'cryptographically verify' videos of Joe Biden so viewers don't mistake them for AI deepfakes::Biden's AI advisor Ben Buchanan said a method of clearly verifying White House releases is "in the works."

50 comments
[-] autotldr@lemmings.world 3 points 9 months ago

This is the best summary I could come up with:


The White House is increasingly aware that the American public needs a way to tell that statements from President Joe Biden and related information are real in the new age of easy-to-use generative AI.

Big Tech players such as Meta, Google, Microsoft, and a range of startups have raced to release consumer-friendly AI tools, leading to a new wave of deepfakes — last month, an AI-generated robocall attempted to undermine voting efforts related to the 2024 presidential election using Biden's voice.

Yet, there is no end in sight for more sophisticated new generative-AI tools that make it easy for people with little to no technical know-how to create fake images, videos, and calls that seem authentic.

Ben Buchanan, Biden's Special Advisor for Artificial Intelligence, told Business Insider that the White House is working on a way to verify all of its official communications due to the rise in fake generative-AI content.

While last year's executive order on AI created an AI Safety Institute at the Department of Commerce tasked with creating standards for watermarking content to show provenance, the effort to verify White House communications is separate.

Ultimately, the goal is to ensure that anyone who sees a video of Biden released by the White House can immediately tell it is authentic and unaltered by a third party.


The original article contains 367 words, the summary contains 218 words. Saved 41%. I'm a bot and I'm open source!

[-] npaladin2000@lemmy.world 2 points 9 months ago

If the White House actually makes the deep fakes, do they count as "fakes?"

[-] Blackmist@feddit.uk 2 points 9 months ago

Honestly I'd say that's on the way for any video or photographic evidence.

You'd need a device private key to sign with, and probably internet connectivity to get a timestamp from a third party.

Lidar could be included as well, so you could verify the camera isn't just pointed at a screen playing something fake.

Is there a cryptographically secure version of GPS too? Not sure if that's even possible, and it's the weekend so I'm done thinking.
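The signing scheme this comment describes can be sketched in a few lines. This is a toy illustration only: it uses a symmetric HMAC as a stand-in for the asymmetric signature (e.g. Ed25519) a real device would use, and a local clock instead of a trusted third-party timestamping service. All names here (`DEVICE_KEY`, `sign_capture`, `verify_capture`) are hypothetical.

```python
import hashlib
import hmac
import time

# Assumption: a per-device key provisioned at manufacture. A real scheme
# would keep a private key in secure hardware and publish the public half.
DEVICE_KEY = b"example-device-private-key"

def sign_capture(video_bytes: bytes, key: bytes = DEVICE_KEY) -> dict:
    """Return a signed record: content hash, timestamp, and signature."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    timestamp = int(time.time())  # real systems would use a trusted timestamping service
    message = f"{digest}|{timestamp}".encode()
    signature = hmac.new(key, message, hashlib.sha256).hexdigest()
    return {"sha256": digest, "timestamp": timestamp, "signature": signature}

def verify_capture(video_bytes: bytes, record: dict, key: bytes = DEVICE_KEY) -> bool:
    """Check the footage matches the record and the signature is valid."""
    if hashlib.sha256(video_bytes).hexdigest() != record["sha256"]:
        return False  # footage was altered after signing
    message = f"{record['sha256']}|{record['timestamp']}".encode()
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

footage = b"\x00\x01example video bytes"
record = sign_capture(footage)
print(verify_capture(footage, record))         # True: untampered
print(verify_capture(footage + b"x", record))  # False: altered footage
```

With an asymmetric scheme, verification needs only the public key, so anyone could check footage without being able to forge signatures.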

[-] SpaceCowboy@lemmy.ca 2 points 9 months ago

It's way more feasible to simply require social media sites to do the verification and display something like a blue check on verified videos.

This is actually a really good idea. Sure there will still be deepfakes out there, but at least a deepfake that claims to be from a trusted source can be removed relatively easily.

Theoretically a social media site could boost verified content over unverified content, but that would require social media sites not to be bad actors, which I don't have a lot of hope for.
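The platform-side flow this comment proposes can be sketched as follows, under the assumption that the site holds keys for trusted publishers. The registry, the demo key, and the function names are all hypothetical, and the HMAC check again stands in for real public-key signature verification.

```python
import hashlib
import hmac

# Assumption: the platform has exchanged keys with trusted publishers.
TRUSTED_PUBLISHERS = {"whitehouse.gov": b"shared-demo-key"}

def verify_upload(publisher: str, video: bytes, signature: str) -> bool:
    """Grant a 'verified' badge only if the signature checks out."""
    key = TRUSTED_PUBLISHERS.get(publisher)
    if key is None:
        return False  # unknown publisher: no badge
    expected = hmac.new(key, video, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

def rank(posts):
    """Boost verified posts above unverified ones, keeping relative order."""
    return sorted(posts, key=lambda p: not p["verified"])

video = b"official footage"
sig = hmac.new(TRUSTED_PUBLISHERS["whitehouse.gov"], video, hashlib.sha256).hexdigest()
posts = [
    {"id": 1, "verified": verify_upload("randomsite.example", video, sig)},
    {"id": 2, "verified": verify_upload("whitehouse.gov", video, sig)},
]
print(rank(posts))  # the verified post (id 2) is ranked first
```

A deepfake claiming to be from a trusted source would fail `verify_upload` and could be flagged or removed, which is the comment's point: the scheme doesn't stop fakes from existing, only impersonation of verified sources.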

this post was submitted on 11 Feb 2024
641 points (97.9% liked)