submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Scientists found more than 1,000 AI spam bots trying to scam people and steal their social media profiles — and regulators can't keep up::Social media is being flooded with spammy AI content. Research by Indiana University details how artificial intelligence is being used to scam people on social media platforms.

all 8 comments
wmassingham@lemmy.world 14 points 1 year ago

Why is the author suggesting government regulation should be involved here? Spamming and scamming are nothing new, at all. This is on the platforms to actually do a substantial job of moderating.

nickwitha_k@lemmy.sdf.org 14 points 1 year ago

Ostensibly, because fraud is criminal behavior that government is supposed to protect its citizens from. In reality, it's probably more likely part of a fear campaign to build support for things like KOSA and backdooring encryption.

wmassingham@lemmy.world 0 points 1 year ago

Fraud is already illegal.

Aopen@discuss.tchncs.de 7 points 1 year ago

This is on the platforms to actually do a substantial job of moderating.

What else will force companies to moderate if not regulation?

wmassingham@lemmy.world 0 points 1 year ago

Enforcement of the existing regulation. Fraud is already illegal.

Touching_Grass@lemmy.world 5 points 1 year ago

I said it in another thread. The only time I've seen media put out articles like this is to generate some new social fear that didn't exist before, so that new laws can be made to fuck over regular people. You can't go five minutes without an "AI IS STEALING GRANDMA" article.

autotldr@lemmings.world 4 points 1 year ago

This is the best summary I could come up with:


A new study shared last month by researchers at Indiana University's Observatory on Social Media details how malicious actors are taking advantage of OpenAI's chatbot ChatGPT, which became the fastest-growing consumer AI application ever this February.

The rise of social media gave bad actors a cheap way to reach a large audience and monetize false or misleading content, Menczer said.

New AI tools "further lower the cost to generate false but credible content at scale, defeating the already weak moderation defenses of social-media platforms," he said.

In the past few years, social-media bots — accounts that are wholly or partly controlled by software — have been routinely deployed to amplify misinformation about events, from elections to public-health crises such as COVID.

The AI bots in the network uncovered by the researchers mainly posted about fraudulent crypto and NFT campaigns and promoted suspicious websites on similar topics, which themselves were likely written with ChatGPT, the study says.

Yang said that tracking suspects' social-media activity patterns, such as whether they have a history of spreading false claims and how diverse their previous posts are in language and content, is a more reliable way to identify bots.
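To make the "content diversity" signal concrete: this is a minimal sketch, not the researchers' actual method, of one naive way to score how repetitive an account's posts are. The function names and sample posts are invented for illustration; the idea is that a bot blasting near-identical promotional text scores much higher mean pairwise similarity than a human posting varied content.

```python
# Naive repetitiveness heuristic (illustrative only, not the study's method):
# mean pairwise Jaccard similarity over the token sets of an account's posts.
from itertools import combinations


def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def repetitiveness(posts: list[str]) -> float:
    """Average similarity across all pairs of posts; higher = more bot-like."""
    token_sets = [set(p.lower().split()) for p in posts]
    pairs = list(combinations(token_sets, 2))
    if not pairs:
        return 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)


# Hypothetical sample accounts
bot_like = ["Buy $SCAMCOIN now!", "Buy $SCAMCOIN today!", "Buy $SCAMCOIN now!"]
human_like = ["Made pasta tonight", "Anyone watching the game?", "New phone arrived"]

print(repetitiveness(bot_like) > repetitiveness(human_like))  # True
```

A real detector would combine many such signals (posting cadence, account age, link targets) rather than rely on any single score, and would need a threshold tuned on labeled data.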


The original article contains 875 words, the summary contains 191 words. Saved 78%. I'm a bot and I'm open source!

this post was submitted on 24 Aug 2023
160 points (98.2% liked)