As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, but in one recent survey a majority said they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory "fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them."

[-] teawrecks@sopuli.xyz 18 points 5 months ago* (last edited 5 months ago)

So this could go one of two ways, I think:

  1. the "no AI" seal is self-ascribed using the honor system and over time enough studios just lie about it or walk the line closely enough that it loses all meaning and people disregard it entirely. Or,
  2. getting such a seal requires 3rd party auditing, further increasing the cost to run a studio relative to their competition, on top of not leveraging AI, resulting in those studios going out of business.
[-] lvxferre@mander.xyz 15 points 5 months ago* (last edited 5 months ago)

3. If you lie about it and get caught, people will correctly call you a liar and ridicule you, and you'll lose trust. Trust is essential for content creators, so you're spelling your own doom. And if you find a way to lie without getting caught, you aren't part of the problem anyway.

[-] teawrecks@sopuli.xyz 6 points 5 months ago* (last edited 5 months ago)

I think the first half of yours is the same as my first. And I think a lot of artists aren't against AI that produces worse art than them; they're against AI art that was generated using stolen art. They wouldn't be part of the problem if they could honestly say they trained using only ethically licensed/their own content.

[-] CanadaPlus@lemmy.sdf.org 4 points 5 months ago

> And if you find a way to lie without getting caught, you aren’t part of the problem anyway.

I was about to disagree, but that's actually really interesting. Could you expand on that?

[-] lvxferre@mander.xyz 11 points 5 months ago* (last edited 5 months ago)

Do you mind if I address this comment alongside your other reply? Both are directly connected.

> I was about to disagree, but that’s actually really interesting. Could you expand on that?

If you want to lie without getting caught, your public submission should have neither the hallucinations nor stylistic issues associated with "made by AI". To do so, you need to consistently review the output of the generator (LLM, diffusion model, etc.) and manually fix it.

In other words, to lie without getting caught you're getting rid of what makes the output problematic in the first place. The problem was never people using AI to do the "heavy lifting" to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else's, instead of a decent and original one. Those are the ones who'd get caught, because they're doing what you called "dumb" (and I agree): not proof-reading their output.

Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.
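Such bans are hard to enforce technically, since a carefully reviewed AI submission is indistinguishable from human work. About the closest a project can get is self-certification at commit time. A minimal sketch of that idea follows; the `AI-Policy:` trailer name and the hook itself are purely hypothetical illustrations, not any distribution's actual tooling:

```python
import re

# Hypothetical commit-message trailer. Real projects with AI bans rely on
# contributor certification and review, not automated detection; this
# trailer name is invented here purely for illustration.
TRAILER = re.compile(r"^AI-Policy:\s*(compliant|none)\s*$", re.MULTILINE)

def check_commit_message(message: str) -> bool:
    """Return True if the commit message carries the required trailer.

    Like any self-certification, this only records the contributor's
    claim; it cannot catch someone who lies, which is exactly the
    enforcement gap discussed in this thread.
    """
    return bool(TRAILER.search(message))
```

Wired into a `commit-msg` git hook, this would reject commits that never acknowledge the policy, while doing nothing against a contributor who acknowledges it dishonestly.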

[-] CanadaPlus@lemmy.sdf.org 3 points 5 months ago* (last edited 5 months ago)

Yes, sorry, I didn't realise I was replying to the same user twice.

> The problem was never people using AI to do the “heavy lifting” to increase their productivity by 50%; it was instead people increasing the output by 900%, and submitting ten really shitty pics or paragraphs, that look a lot like someone else’s, instead of a decent and original one.

Exactly. I guess I'm conditioned to expect "AI is smoke and mirrors" type comments, and that's not true. They're genuinely quite impressive and can make intuitive leaps they weren't directly trained for. What they're not is aligned; they just want to create human-like output, regardless of truth, greater context or morality, because that's the only way we know how to train them.

I definitely hate searching something, and finding a website that almost reads as human with fake "authors", but provides no useful information. And I really worry for people who are less experienced spotting AI errors and filler. That's a moral issue, though, as opposed to a practical one; it seems to make ad money perfectly well for the "creators".

> Regarding code, from your other comment: note that some Linux and *BSD distributions banned AI submissions, like Gentoo and NetBSD. I believe it to be the same deal as news or art.

TIL. They're going to have trouble identifying rulebreakers if contributors use the tool correctly the way we've discussed, though.

this post was submitted on 13 Jun 2024
288 points (100.0% liked)

Technology
