39 points (100.0% liked) · submitted 04 Sep 2023 by alex@jlai.lu to c/technology@beehaw.org

Thoughts from James, who recently held a Gen AI literacy workshop for older teenagers.

On risks:

One idea I had was to ask a generative model a question and fact check points in front of students, allowing them to see fact checking as part of the process. Upfront, it must be clear that while AI-generated text may be convincing, it may not be accurate.

On usage:

Generative text should not be positioned as, or used as, a tool to entirely replace tasks; that could disempower. Rather, it should be taught to be used as a creativity aid. Such a class should involve an exercise of making something.

[-] ConsciousCode@beehaw.org 1 points 1 year ago* (last edited 1 year ago)

Let me flip it around again - humans regularly "hallucinate", it's just not something we recognize as such. There are neuro-atypical hallucinations, yes, but there are also misperceptions, misunderstandings, brain farts, and "glitches" which regularly occur in healthy cognition, and we have an entire rest of the brain to prevent those. LLMs are most comparable to Broca's area, which neurological case studies suggest naturally produces a stream of nonsense (see: split-brain patients explaining the actions of their mute hemisphere). It's the rest of our "cognitive architecture" which conditions that raw language model to remain self-consistent and form a coherent notion of self. Honestly, this discussion on "conceptualization" is poorly conceived because it's unfalsifiable and says nothing about practical applications. Why do I care whether the LLM can conceptualize if it does whatever subset of conceptualization I need to complete a natural language task?

AI is being wildly overhyped right now, which is unfortunate because it really is borderline miraculous - somehow people have still managed to oversell it. Emergent properties are empirical observations of behaviors these models can at least semi-consistently demonstrate - where it becomes "eye of the beholder" is when we dither on about psychology and philosophy and whether or not they're in some sense "conscious". I would argue they aren't, and the architecture makes that impossible without external aid, but "conscious(ness)" is such a broad term that it barely has a definition at all. I guess to speedrun the overhype misinformation I see:

  • "They just predict one token at a time" is reductive and misleading even though it's technically true - the loss function for language modeling inevitably requires learning abstract semantic operations. For instance, to complete "The capital of France is" a language model must in some way "know" about countries, cities, and the ontology of France.
  • "It's just a chatbot" - ChatGPT is a chatbot, GPT-4 is a language model. Language models model how the likelihood of words and language changes over time. When I said "causal" before, this is an arbitrary restriction of the math such that the model only predicts the "next" word. If you remove this restriction, you can get it a sentence with a hole in it and it'll tell you what words are most likely to be in that hole. You can think of it as being like a physics model, which describes how objects change over time. Putting these into a "generative" context allows you to extract latent semantic information generalized from the training corpus, including higher-order relationships. tl;dr "chatbot" is the first and least interesting application - anything which relates to "understanding" natural language is a potential application.
  • "Hallucinations show that they're broken" - Hallucinations are actually what you'd expect from these sorts of models. If I had to broadly class the sorts of hallucinations I see, they would be:
    1. Model inaccuracy - Inevitable, but not the only cause. Essentially the model failed to generalize in that specific way, like Stable Diffusion and hands.
    2. Unlikely sampling - It's possible that the code which picks the next word from the probability distribution accidentally picks one (or a series) with a very low probability. When this happens, the LLM has no way to "undo" it, which puts it in a weird position where it has to keep predicting from a state that shouldn't really be possible. There are actually some papers which attempt to correct for this, like adding an "undo token" (unfortunately I can't find the paper) or detecting out-of-distribution (OOD) conditions.
    3. Extrapolation - Especially for the earlier models with small context windows, if it needs information which has fallen outside that window, it's still modeling language, just without the necessary context. Lacking that context, it will pick a plausible one at random and talk about something unrelated. Compare this to e.g. dementia patients.
    4. Imagination - When you give it some kind of placeholder, like "<...>", "etc etc etc" or "## code here ##", most text in the training data that looks like that continues as if there were real information in that place. Lacking context, just as with "extrapolation", it picks some at random. You can mitigate this somewhat by telling it to only respond to things that are literally in the text, and GPT-4 doesn't seem to have this problem much anymore, probably thanks to RLHF.
    5. Priming - If you prompt the LLM authoritatively enough, e.g. "find me a case that proves X" (which implies such a case exists), and it doesn't know of any such case, it will invent one at random. Essentially, it's saying "if there were a case that proved X, it would look like this". This is actually useful when properly constrained, e.g. if you want it to recursively generate code it might call an undefined function that it "wishes" existed.
  • "GPT-5 could be roko's basilisk!" - No. This architecture is fundamentally incapable of iterative thought processes, for it to develop those itself would require trillions more parameters, if it's even possible. What's more, LLMs aren't utility-maximizers or reinforcement learning agents like we thought AGI would be; they do whatever you ask and have no will or desires of their own. There's almost 0 chance this kind of model would go rogue, offset only slightly by people using RLHF but that's human-oriented so the worst you get is the model catering to humans being dumb.
  • "They tek er jerbs!" - Yes, but not because they're "as good as humans" - they are better when given a specific task to narrowly focus on. The models are general, but they need to be told exactly what to do, which makes them excellent for capitalism's style of alienated labor. I would argue this is actually be desirable if working wasn't tied to people's privilege to continue living - no living human should have to flip burgers when a robot can do it better, otherwise you're treating the human like a robot.

I'll add more if I see or think of any, and if you have any specific questions, I'd be happy to answer. I should also note that I'm of course using a lot of anthropomorphizing language here, but it's the closest we have for describing these concepts. They're not human, and while they may have comparable behaviors in isolation, you can't accurately generalize all human behaviors and their interactions onto these models. Even if they were AGI or artificial people, they would "think" in fundamentally different ways.

If you want a more approachable but knowledgeable discussion of LLMs and their capabilities, I would recommend a YouTuber named Dave Shapiro. Very interesting ideas; he gets a bit far into hype and futurism, but those are more or less contained within their own videos.

[-] lvxferre@lemmy.ml 2 points 1 year ago

humans regularly "hallucinate", it's just not something we recognize as such. There are neuro-atypical hallucinations, yes, but there are also misperceptions, misunderstandings, brain farts, and "glitches" which regularly occur in healthy cognition, and we have an entire rest of the brain to prevent those.

Can you please tone down the fallacies? So far I've seen the following:

  • red herring - "LLMs are made of dozens of layers" (which don't contextually matter in this discussion)
  • appeal to ignorance - "they don't matter because [...] they exist as black boxes"
  • appeal to authority - "for the record, I personally know [...]" (pragmatically the same as "chrust muh kwalifikashuns")
  • inversion of the burden of proof (already mentioned)
  • faulty generalisation (addressing an example as if it addressed the claim being exemplified)

And now, the quoted excerpt shows two more:

  • moving the goalposts - it's trivial to prove that humans can sometimes be dumb, and it does not contradict the other poster's claim.
  • equivocation - you're going out of your way to label incorrect human output with the same word used to label incorrect LLM output, without showing that they're the same. (They aren't.)

Could you please show a bit more rationality? This sort of shit is at the very least disingenuous, if not worse (stupidity), and it does not lead to productive discussion. Sorry to be blunt, but you're just wasting the time of everyone here; this is already hitting Brandolini's law.

I won't address the rest of your comment (there's guilt by association there, BTW), or further comments showing the same lack of rationality. However, I had to point this out, especially for the sake of the other posters.
