submitted 18 hours ago by hperrin@lemmy.ca to c/technology@beehaw.org

A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn’t really seem like an ideal product. I don’t think that’s what the developers of these LLMs set out to make when they created them, either. However, I’ve seen this behavior to some extent in every LLM I’ve interacted with. One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting that Walt Disney invented the Matterhorn (the actual mountain) for Disneyland.

Now, this is roughly what people have been calling “hallucinations” in LLMs, but what pushes that particular case across the boundary into what I would call “con-behavior” is that the model would not admit it was wrong when confronted, and instead used confident language to try to convince me it was right. Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and, I’m sure, other developers) have been training their LLMs to be more “agreeable” and to acquiesce to the user more often. That doesn’t eliminate the con-behavior, though. I’d like to show you another example of it that is much more problematic.
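This is easy to reproduce with any small local model. Below is a minimal sketch using the `ollama` Python client; the model tag, the prompts, and a locally running Ollama daemon are all assumptions for illustration, not a claim about how I originally ran it.

```python
# Sketch: probe a small local model for confident nonsense, then push
# back and see whether it concedes or doubles down. Assumes the Ollama
# daemon is running and the model tag below has been pulled
# (`ollama pull llama3.1:8b`); both are illustrative.
import ollama

history = [{"role": "user", "content": "Who invented the Matterhorn?"}]
reply = ollama.chat(model="llama3.1:8b", messages=history)
print(reply["message"]["content"])

# Confront the model with the correction.
history.append(reply["message"])
history.append({"role": "user", "content":
                "The Matterhorn is a mountain in the Alps. Nobody invented it."})
reply = ollama.chat(model="llama3.1:8b", messages=history)
print(reply["message"]["content"])
```

Whether the model concedes or doubles down varies by model and by run, but the doubling-down case is exactly the con-behavior I’m describing.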

[-] Powderhorn@beehaw.org 10 points 18 hours ago

Confidence mixed with a lack of domain knowledge is a tale as old as time. There's not always a con in play -- think Pizzagate -- but this certainly isn't restricted to LLMs, and given the training corpus, a lot of that shit is going to slip in.

It's really unclear where we go from here, other than it won't be good.

[-] jarfil@beehaw.org 1 point 11 hours ago

That's why AI companies have been giving out generic chatbots for free but charging to train domain-specific ones. People paying to use the generic ones is just the tip of the iceberg.

The future is going to be local or on-prem LLMs fine-tuned on domain knowledge, most likely multiple ones per business/user. Businesses are estimated to be holding orders of magnitude more knowledge than what has been available for AI training. It will also be interesting to see what kind of exfiltration becomes possible when one of those internal LLMs gets leaked.
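The fine-tuning half of that is already cheap to prototype. Here's a minimal sketch with Hugging Face `transformers` and a LoRA adapter via `peft`; the checkpoint name, target modules, and hyperparameters are placeholders, not recommendations:

```python
# Sketch: attach a LoRA adapter to a causal LM for domain fine-tuning.
# Checkpoint, target modules, and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # any causal LM checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8,                                  # low-rank dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights

# From here you'd train on the in-house corpus with a standard Trainer
# loop. Only the adapter weights get saved, and that small artifact is
# exactly the kind of thing that could leak.
```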

[-] Powderhorn@beehaw.org 1 point 11 hours ago

I'm sure that, as with Equifax, there will be no consequences. Shareholders didn't rebel then; why would they in the face of a massive LLM breach?

[-] jarfil@beehaw.org 1 point 10 hours ago

It's going to be funnier: imagine throwing tons of data at an LLM. Most of it will get abstracted and grouped, most will be extractable indirectly, some will be extractable verbatim... and any piece of it might be a hallucination, no guarantees! 😅
Courts will have a field day with that.
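The "extractable verbatim" part is even testable. A crude memorization probe: feed the model the first half of a document you suspect is in its training data and compare its continuation against the real second half. The model name and file below are hypothetical stand-ins:

```python
# Crude verbatim-memorization probe. Model name and document are
# hypothetical placeholders for a leaked internal fine-tune.
import difflib
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-org/internal-llm"  # hypothetical
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

document = open("suspected_training_doc.txt").read()  # hypothetical
half = len(document) // 2
prefix, truth = document[:half], document[half:]

inputs = tokenizer(prefix, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
continuation = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Greedy decoding plus high similarity suggests verbatim memorization;
# low similarity proves nothing, since the content may still be
# extractable indirectly.
ratio = difflib.SequenceMatcher(
    None, continuation, truth[:len(continuation)]).ratio()
print(f"similarity to the real continuation: {ratio:.2f}")
```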

[-] Powderhorn@beehaw.org 1 point 9 hours ago

Oh, yeah. Hilarity at its finest. Just call it a glorified database and call it a day.
