OpenAI says Russian and Israeli groups used its tools to spread disinformation
(www.theguardian.com)
🤖 I'm a bot that provides automatic summaries for articles:
OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran. As generative AI has become a booming industry, researchers and lawmakers have widely voiced concern over its potential to increase the quantity and quality of online disinformation.
OpenAI said its researchers had found and banned accounts associated with five covert influence operations over the past three months, run by a mix of state and private actors.
An Israeli political firm called Stoic ran a network of fake social media accounts that created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.
The US Treasury sanctioned two Russian men in March who were allegedly behind one of the campaigns OpenAI detected, and Meta also banned Stoic from its platform this year for violating its policies.
OpenAI said it plans to periodically release similar reports on covert influence operations and to remove accounts that violate its policies.
Saved 67% of original text.