submitted 1 year ago by JRepin@lemmy.ml to c/technology@lemmy.ml

cross-posted from: https://lemmy.ml/post/2811405

"We view this moment of hype around generative AI as dangerous. There is a pack mentality in rushing to invest in these tools, while overlooking the fact that they threaten workers and impact consumers by creating lesser quality products and allowing more erroneous outputs. For example, earlier this year America’s National Eating Disorders Association fired helpline workers and attempted to replace them with a chatbot. The bot was then shut down after its responses actively encouraged disordered eating behaviors."

[-] Norgur@kbin.social 7 points 1 year ago* (last edited 1 year ago)

Yes, it is impossible. There are no "thoughts". The bloody thing doesn't know what an apple is, even if you ask it to write a 500-page book about them. It just guesses a word, then from there guesses the next one, and so on. That's why it will very often confidently tell you aggravating bullshit. It has no concept of the things it spits out. It's a "word calculator", so to speak. The whole thing is not "revolutionary" or "new" by any stretch. What is new is the ability to use tons and tons and tons of reference data, which makes the output halfway decent, and the GPU power that makes its speed halfway decent. Other than that, LLMs are *not* "thinking".

[-] FaceDeer@kbin.social 1 points 1 year ago

A rather categorical statement, given that you didn't say anything with regard to how *you* think.

Maybe wait until we actually know more about what's going on under the hood - both in LLMs and in the human brain - before stating with such confident finality that there are absolutely no similarities.

If it turns out that LLMs aren't thinking, but they're still producing the same sort of interaction that humans are capable of, perhaps that says more about humans than it does about LLMs.

[-] Norgur@kbin.social 4 points 1 year ago

They produce this kind of output because they break down one mostly logical system (language) into another (numbers). The irregularities language has are compensated for by the vast number of sources.

We don't need to know more about anything. If I tell you "hey, don't think of an apple", your brain will conceptualize an apple and then go from there. LLMs don't know "concepts". They spit out numbers just as mindlessly as your Casio calculator watch.

[-] radarsat1@lemmy.ml 5 points 1 year ago

I would argue that what's going on is that they are compressing information. And it just so happens that the most compact way to represent a generative system (like mathematical relations, for instance) is to model its generative structure. For instance, it's much more efficient to represent addition by figuring out how to add two numbers than by memorizing all possible combinations of numbers and their sums. So implicit in compression is the need to discover generalizations. But the network has limited capacity and limited "looping power", and it doesn't really know what a number is, so it has to figure all this out by example, and as a result it will often arrive at approximate versions of these generalizations. Thus it will often appear to be intelligent until it encounters something that doesn't quite fit whatever approximation it came up with, and will suddenly get something wrong that seems outside the pattern you thought it understood. It's hard to predict what it has captured at a very deep level and what it only has a surface grasp of.

In other words, I think it is "kind of" thinking, if thinking can be considered a kind of computation, but it doesn't always capture concepts completely because it's not quite good enough at generalizing what it's learned, but it's just good enough to appear really smart within a certain distribution of inputs.

Which, in a way, isn't so different from us, but is maybe not the same as how we learn and naturally integrate information.
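(The addition example in that comment - memorizing every sum versus discovering the rule - can be made concrete. This is a toy contrast with made-up function names, not a claim about actual LLM internals: a lookup table covers only what it has "seen", while a compact rule covers everything the table would need a row for.)

```python
# "Memorizing" addition as a lookup table vs. "generalizing" it as a rule.

def memorized_add(table, a, b):
    """Only knows sums it has literally seen before."""
    return table.get((a, b))  # None for anything outside the "training set"

def generalized_add(a, b):
    """One compact rule replaces every row the table would need."""
    return a + b

# "Training data": all sums of numbers 0..99 -> 10,000 stored rows.
table = {(a, b): a + b for a in range(100) for b in range(100)}

print(memorized_add(table, 7, 5))    # 12   -- inside the training set
print(memorized_add(table, 250, 3))  # None -- fails outside it
print(generalized_add(250, 3))       # 253  -- the rule still works
```

Compression favors the second form, which is the comment's point: a model pressured to compress is pushed toward rules, but if it only lands on an approximate rule, it fails abruptly outside the distribution it was fit on.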

this post was submitted on 04 Aug 2023
108 points (88.6% liked)
