submitted 11 months ago* (last edited 11 months ago) by throws_lemy@lemmy.nz to c/technology@lemmy.world

TikTok's parent company, ByteDance, has been secretly using OpenAI's technology to develop its own competing large language model (LLM). "This practice is generally considered a faux pas in the AI world," writes The Verge's Alex Heath. "It's also in direct violation of OpenAI's terms of service, which state that its model output can't be used 'to develop any artificial intelligence models that compete with our products and services.'"

[-] TootSweet@lemmy.world -2 points 11 months ago

Ok. I was going to let my comment just sit and not respond to responses, partly because my take on it is something I haven't even fully thought through yet, I don't know if I can put it into words well, and it may be internally inconsistent in some ways.

But I'm getting a lot of responses saying largely the same things and I think they're good points that probably deserve a response, so here goes.

First off, I'm pretty skeptical of the recent hype around AI. It can do some "neat tricks," but I don't think it's really ready to start replacing people, for instance.

IBM was one of the first companies recently to announce they were "replacing a whole bunch of people with AI." I suspect what really happened was that they decided to lay off a whole bunch of people and then their PR department came up with a clever way to spin that. Since IBM is big in the "AI solutions" market, saying they were "replacing people" with AI made their product and stock seem more attractive than if they'd just said they were laying people off. I doubt they're actually doing much "replacing with AI."

I think other companies (well, the CEOs of other companies, I mean) have gotten swept up in the hype and actually think they can replace people with AI. I don't think that's going to go well for them.

Still other companies may be fucking over their workers by laying them off, setting up "AI solutions", and rehiring the same (or different) people to review/edit what the "AI" outputs at a lower rate of pay. But I doubt fixing the mistakes AI makes is really any less work than doing the job they've given to the AI. (In some cases, the AI might go secretly unused, because it's more work to make the AI do the job than for the human to just do it themselves, even though not using it is against policy. But that plays right into the business's evil hands.)

Now, as an aside, let me say that there are algorithms that are often considered "AI" that in the right hands and applied correctly to various narrow use cases can be very useful. But again, these techniques are tools. Not replacements for people.

At best, I think the current craze over AI is unfounded hype. A bubble. At worst, a scam. If we ever do get AI that can replace humans, I don't think DALL-E-8 and LLMs are going to be how we do it.

Next, it's fucked up that ChatGPT was trained on all the data OpenAI could find all over the internet, and then the results of that training are locked up on a server where you can't use them without registering for an account.
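
To make that concrete, here's a rough sketch of the difference between a model locked behind someone's account wall and one whose weights you can actually download and run locally. This is just my illustration using the usual Python clients; the model names and the API key are placeholders, not anything OpenAI or anyone else specifically ships this way.

    # Hosted, account-gated model: every request goes through the vendor's
    # server, and you need a key tied to a registered account.
    from openai import OpenAI

    client = OpenAI(api_key="sk-...")  # placeholder key; no account, no access
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(reply.choices[0].message.content)

    # Downloadable weights: once the files are on your disk, inference runs
    # on your own hardware, with no account and no server in the loop.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "mistralai/Mistral-7B-v0.1"  # placeholder open-weights model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    inputs = tok("Hello", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))

The point isn't the particular libraries; it's that in the first case the trained model never leaves OpenAI's servers, which is exactly the lock-up I'm objecting to.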

"Oh, but TootSweet, what about LLaMa?" I might hear you ask. Meta bills LLaMa as "open source." But it doesn't fit the Open Source Initiative's definition of "open source." Seems like Meta is trying to dilute the term by calling things that aren't open source "open source." (I get that some folks see no problem with using the term "open source" to refer to things that don't meet the OSI's definition, but I do. So there.) So I also see LLaMa as at best insidious.

I fully believe that information wants to be free and that copying is not theft. But I also believe in copyleft and I think software-as-a-service (like ChatGPT) is dastardly.

I wouldn't have a problem with OpenAI if they:

  • Scraped the whole fuckin' internet
  • Built an AI
  • Open sourced the engine (properly, under an actual open source license, not just made the source code visible)
  • Made the model downloadable
  • Published their methodology so it could be inspected and reproduced
  • Didn't scam people by making it out to be more useful than it is

What I dislike about OpenAI is not the copying per se. It's using everybody else's stuff and locking the results up behind a data-harvesting subscription wall and then selling it as snake oil. Generally what I dislike is that they're using my Reddit posts (yes, I used to use Reddit) for nefarious purposes.

I'm pissed the same way I'd be pissed if neo-Nazis took my words out of context and used them as marketing materials for their fucked up ideology. (Whereas I'd be honored if some good cause like the EFF or whatever wanted to use my words as marketing materials.)

Now, beyond that, let me also say that there may be places where various AI hucksters are gathering data that no reasonable person would have reason to believe was public. I drive a Subaru, and its privacy policy allows them to record any sound in the cabin of any of their vehicles at any time via the in-cabin telematics mic, send that data back to their HQ, and use it for any purposes they wish, including training AI models. At least when AI training data is scraped off the web, most of it was probably intended to be public and at least isn't a blatant invasion of privacy. But me having a private conversation with someone or talking to myself while in my vehicle? Holy late stage capitalism, Batman. (And I doubt that's even one of the most egregious examples of a breach of privacy that might ultimately end up feeding an AI model.)
