This is the best summary I could come up with:
The White House is increasingly aware that, in the new age of easy-to-use generative AI, the American public needs a way to verify that statements and other communications from President Joe Biden are genuine.
Big Tech players such as Meta, Google, and Microsoft, along with a range of startups, have raced to release consumer-friendly AI tools, fueling a new wave of deepfakes. Last month, an AI-generated robocall used Biden's voice in an attempt to discourage voting ahead of the 2024 presidential election.
Yet there is no end in sight to the stream of increasingly sophisticated generative-AI tools that make it easy for people with little to no technical know-how to create convincing fake images, videos, and calls.
Ben Buchanan, Biden's Special Advisor for Artificial Intelligence, told Business Insider that the White House is working on a way to verify all of its official communications due to the rise in fake generative-AI content.
Last year's executive order on AI established an AI Safety Institute at the Department of Commerce tasked with developing standards for watermarking content to show provenance, but the effort to verify White House communications is a separate initiative.
Ultimately, the goal is to ensure that anyone who sees a video of Biden released by the White House can immediately tell it is authentic and unaltered by a third party.
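The article doesn't say what mechanism the White House would use, but one common way to get an "authentic and unaltered" guarantee is a digital signature: the publisher signs the exact bytes of a file with a private key, and anyone holding the matching public key can confirm the file hasn't been modified. Below is a minimal, purely illustrative sketch in Python using the cryptography library's Ed25519 keys; the keypair and the idea that the White House would publish such a key are assumptions for the example, not details from the article.

```python
# Illustrative sketch of signature-based content verification.
# Assumption: the publisher (hypothetically, the White House) holds a private
# key and distributes the matching public key through a trusted channel.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign the exact bytes of a piece of content (e.g., a video file).
video_bytes = b"...official video bytes..."
signature = private_key.sign(video_bytes)

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Return True only if the content matches the signature exactly."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))                # True
print(is_authentic(video_bytes + b"edited", signature))    # False: any alteration breaks the check
```

Watermarking (mentioned above in connection with the Commerce Department effort) solves a related but different problem: it marks content as AI-generated, whereas a signature like this one vouches for a specific publisher and detects tampering.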