Except these AI systems aren't search engines, and people treating them like they are is really dangerous
They are. They record the data, stealing it. They search it (or characteristics of it), and reprint it (in whole or in part) upon request.
Viewing it as something creative, or as anything other than a glorified remixing machine, is the problem. It's a search engine over creative works they've stolen and reproduce parts of.
They search the data-space of what they're "trained" on (our content, the content of human beings), and reproduce statistically defined elements of it.
They're search engines that have stolen what they're "trained on", and reproduce it as "results" (be that images or written text, it has to come from our collective data. Data we created). It's theft. It's copyright fraud. Same as Google stealing books (which they were sued over digitizing, and had to enter into rights agreements for).
Searching and reproducing content they've already recorded (aka stolen without permission), is absolutely part of what they are. Part of what they do.
Don't stan for them or pretend they're creative, intelligent, or doing anything original.
The real lie is that it's "training data". It's not. It's the internet, and it's not training - it's theft, it's stealing and copying (violating copyright). Digital stealing, processed into a "data set": a representation or repackaging of our original works.
Their input sides are based on crawling, just as search is.
Yeah, and then they convert that to weighted probabilities or a "data space" which they then search during content generation.
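For illustration only, here's a toy sketch of what "searching a weighted data space during generation" could look like: repeatedly sampling the next word from a probability distribution. The probability table is invented for the example and stands in for the statistics a model distills from its training material; real systems condition on far more context than the previous word.

```python
# Toy illustration (not any real system): generation as repeated sampling
# from a table of next-word probabilities.
import random

# Hypothetical "weights": probability of the next word given the previous one.
next_word_probs = {
    "the": {"horse": 0.4, "search": 0.35, "data": 0.25},
    "horse": {"ran": 0.6, "ate": 0.4},
    "search": {"engine": 0.7, "results": 0.3},
    "data": {"space": 0.5, "set": 0.5},
}

def generate(start: str, length: int = 4) -> list[str]:
    """Walk the probability table, picking each next word by weighted chance."""
    words = [start]
    for _ in range(length):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return words

print(" ".join(generate("the")))
```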
The basic graphing technology used by AI is the same pioneered by Alta Vista and optimized by Google years later. We've added a layer of abstraction through user I/O, such that you get a formalized text response encapsulating results rather than a series of links containing related search terms. But the methodology used to harvest, hash, and sort results is still all rooted in graph theory.
The difference between then and now is that back then you'd search "Horse" in Alta Vista and get a dozen links ranging from ranches and vet clinics to anime and porn. Now, you get a text blob that tries to synthesize all the information in those sources down to a few paragraphs of relevant text.
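To make the "rooted in graph theory" point concrete, here's a stripped-down version of the link-analysis ranking Google popularized (PageRank). The link graph is made up for the example; this is a sketch of the idea, not any production ranking pipeline.

```python
# Simplified PageRank-style iteration over an invented link graph.
links = {
    "ranch.example": ["vet.example"],
    "vet.example": ["ranch.example", "anime.example"],
    "anime.example": ["ranch.example"],
}

def pagerank(graph: dict[str, list[str]], damping: float = 0.85, iters: int = 50) -> dict[str, float]:
    """Iteratively redistribute rank along outgoing links until scores settle."""
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in graph.items():
            share = rank[page] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

print(pagerank(links))
```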
That simply isn't true. There's nothing in common between an LLM and a search engine, except insofar as the people developing the LLM had access to search engines, and may have used them while gathering training data.