submitted 1 month ago by misk@sopuli.xyz to c/technology@lemmy.world
[-] db0@lemmy.dbzer0.com 38 points 1 month ago

As always, never rely on LLMs for anything factual. They're only good for things with a large tolerance for error, such as entertainment (e.g. RPGs)

[-] kboy101222@sh.itjust.works 6 points 1 month ago

I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it into random spots. It quickly became absolutely useless once I didn't need that thing included

Sorry for being vague, I just didn't want to post my home town on here

[-] homesweethomeMrL@lemmy.world 1 points 1 month ago

You can say Space Needle. We get it.

[-] kat@orbi.camp 4 points 1 month ago

Or at least as an assistant in a field you're an expert in. Love using it for boilerplate at work (tech).

[-] 1rre@discuss.tchncs.de 4 points 1 month ago

The issue for RPGs is that they have such "small" context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later.

Although, similar to how DeepSeek uses two stages ("how would you solve this problem", then "solve this problem following this train of thought"), you could feed in recent conversation plus a private/unseen "notebook" that is modified/appended to based on recent events. That would need a whole new model to be done properly, which likely wouldn't be profitable short term, although I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things
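The notebook idea could be sketched without any new model: keep a small rolling window of recent turns for the context, plus a persistent, model-private notes list that important events get appended to. Everything here (the `CampaignMemory` class, the `important` flag) is a hypothetical illustration, not an existing API; a real setup would have a second model pass decide what gets flagged as important.

```python
# Rough sketch of the "private notebook" idea: only the last few turns fit
# in the model's context window, but flagged events persist in a notebook
# that is prepended to every prompt, so old details can resurface later.
from collections import deque

class CampaignMemory:
    def __init__(self, window=4):
        self.recent = deque(maxlen=window)   # rolling window of recent turns
        self.notebook = []                   # unbounded, model-private notes

    def record(self, turn, important=False):
        self.recent.append(turn)
        if important:                        # in practice, a second model pass would flag this
            self.notebook.append(turn)

    def build_prompt(self, query):
        notes = "\n".join(self.notebook)
        recent = "\n".join(self.recent)
        return f"Notebook:\n{notes}\n\nRecent:\n{recent}\n\nPlayer: {query}"

mem = CampaignMemory(window=2)
mem.record("The innkeeper hides a silver key.", important=True)
mem.record("The party buys rations.")
mem.record("The party leaves town.")
mem.record("The party camps in the woods.")
prompt = mem.build_prompt("What was the innkeeper hiding?")
# The key detail survives even though it fell out of the recent window:
assert "silver key" in prompt
assert "buys rations" not in prompt
```

The point is that the long-term store grows independently of the context window; the unsolved part is deciding, cheaply and reliably, what counts as "important".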

[-] db0@lemmy.dbzer0.com 2 points 1 month ago

The problem is that the "train of thought" is also hallucinations. It might make the model better with more compute, but it's diminishing returns.

RPGs can use LLMs because they're not critical. If the LLM spews out nonsense you don't like, you just ask it to redo it, because it's all subjective.

[-] Eheran@lemmy.world 0 points 1 month ago

Nonsense, I use it a ton for science and engineering, it saves me SO much time!

[-] Atherel@lemmy.dbzer0.com 1 points 1 month ago

Do you blindly trust the output or is it just a convenience and you can spot when there's something wrong? Because I really hope you don't rely on it.

[-] Eheran@lemmy.world 1 points 1 month ago

How could I blindly trust anything in this context?

[-] Nalivai@lemmy.world 0 points 1 month ago

In which case you probably aren't saving time. Checking bullshit usually takes longer than just researching it yourself. Or it should, if you do due diligence.

[-] Womble@lemmy.world 0 points 1 month ago

It's nice that you inform people that they can't tell whether something is saving them time, without knowing what their job is or how they're using the tool.

[-] WagyuSneakers@lemmy.world 0 points 1 month ago

If they think AI is working for them, then he can. If you think AI is an effective tool for any profession, you're a clown. If my son's preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.

AI "works" because you're asking questions you don't know and it's just putting words together so they make sense without regard to accuracy. It's a hard limit of "AI" that we've hit. It won't get better in our lifetimes.

[-] stephen01king@lemmy.zip 0 points 1 month ago

Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.

[-] WagyuSneakers@lemmy.world -1 points 1 month ago

Lame platitude

[-] mentalNothing@lemmy.world 25 points 1 month ago

Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

[-] brucethemoose@lemmy.world 11 points 1 month ago* (last edited 1 month ago)

What temperature and sampling settings? Which models?

I've noticed that the AI giants seem to be encouraging "AI ignorance," as they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

I find my local thinking models (FuseAI, Arcee, or Deepseek 32B 5bpw at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with "affordable" flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.
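For what it's worth, the low-temperature setup described here can be sketched as a request against an OpenAI-compatible local server (llama.cpp- and TabbyAPI-style servers accept a `min_p` extension that most hosted APIs don't; the endpoint, model name, and exact values below are assumptions for illustration, not recommendations):

```python
# Sketch of a summarization request with low temperature plus min_p sampling.
# min_p prunes tokens below a fraction of the top token's probability,
# which lets you run low temperatures without degenerate repetition.
def summarization_payload(article_text, model="local-32b"):
    return {
        "model": model,
        "temperature": 0.2,   # low temp: stick close to the source text
        "min_p": 0.05,        # sampler extension; corporate APIs usually reject/ignore this
        "messages": [
            {"role": "system", "content": "Summarize the article faithfully. Do not add facts."},
            {"role": "user", "content": article_text},
        ],
    }

payload = summarization_payload("Example article text...")
# To actually run it against a local llama.cpp-style server:
# requests.post("http://localhost:8080/v1/chat/completions", json=payload)
```

The contrast with the default web UIs is the point: they pick one temperature for every task, while summarization specifically benefits from a much lower one.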

My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.

[-] 1rre@discuss.tchncs.de 7 points 1 month ago

I've found Gemini overwhelmingly terrible at pretty much everything. It responds more like a 7B model running on a home PC, or a model from two years ago, than a medium commercial model, in how it completely ignores what you ask and just latches on to keywords. It's almost like they've played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.

[-] brucethemoose@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Gemini 1.5 used to be the best long context model around, by far.

Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.

Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yes, they probably overtuned it or something.

[-] Imgonnatrythis@sh.itjust.works 1 points 1 month ago

Bing/chatgpt is just as bad. It loves to tell you it's doing something and then just ignores you completely.

[-] paraphrand@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

I don’t think giving the temperature knob to end users is the answer.

Turning it down to the minimum for max correctness and minimal creativity won't work in an intuitive way.

Sure, turning it up from the balanced middle value will make it more "creative" and unexpected, and this is useful for idea generation, etc. But a knob that goes from "good" to "sort of off the rails, but in a good way" isn't a great user experience for most people.

Most people understand this stuff as intended to be intelligent. Correct. Etc. Or at least they understand that's the goal. Once you give them a knob to adjust the "intelligence level," you'll have more pushback on these things not meeting their goals. "I clearly had it in factual/correct/intelligent mode. Not creativity mode. I don't understand why it left out these facts and invented a back story to this small thing mentioned…"

Not everyone is an engineer. Temp is an obtuse thing.

But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to be executing this as a product.

I loathe how these things are advertised by Apple, Google and Microsoft.

[-] brucethemoose@lemmy.world 2 points 1 month ago* (last edited 1 month ago)
  • Temperature isn't even "creativity" per se; it's more a band-aid to patch looping and dryness in long responses.

  • Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, maybe dynamic temperature like Mirostat and such. Ideally structured output, too. Unfortunately, corporate APIs usually don't offer this.

  • It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite, massively overtuning on their own output which "inbreeds" the model.

  • And yes, domain specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and such each with their own tuned settings (and ideally tuned models). You are right, this is a much better idea than offering a temperature knob to the user, but... most UIs don't even do this for some reason?

What I am getting at is that this is not a problem companies seem interested in solving. They want to treat the users as idiots without the attention span to even categorize their question.

[-] Eheran@lemmy.world 1 points 1 month ago

This is really a non-issue, as the LLM itself should have no problem setting a reasonable value. The user wants a summary? Obviously maximally factual. He wants gaming ideas? Etc.

[-] brucethemoose@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to "categorize" text... which few have really worked on.

I don't think the corporate APIs or UIs even do this. You are not wrong, but it's just not done for some reason.

It could be that the trainers don't realize it's an issue. For instance, "0.5-0.7" is the recommended range for Deepseek R1, but I find much lower or slightly higher is far better, depending on the category and other sampling parameters.
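The "categorize the question, then pick settings" idea being debated here could be prototyped without touching the main model at all, e.g. with a trivial keyword router in front of it. A real system would use a small classifier model; the category names, keywords, and values below are made up for illustration:

```python
# Minimal sketch: route a query to a sampler preset instead of exposing
# a temperature knob to the user. Presets are illustrative, not tuned.
PRESETS = {
    "summary":  {"temperature": 0.2, "min_p": 0.1},   # maximally factual
    "creative": {"temperature": 0.9, "min_p": 0.05},  # idea generation
    "default":  {"temperature": 0.6, "min_p": 0.05},
}

def route(query):
    q = query.lower()
    if any(w in q for w in ("summarize", "summary", "tl;dr")):
        return PRESETS["summary"]
    if any(w in q for w in ("brainstorm", "ideas", "story")):
        return PRESETS["creative"]
    return PRESETS["default"]

assert route("Summarize this article")["temperature"] < 0.5
assert route("Brainstorm ideas for my campaign")["temperature"] > 0.5
```

The catch mentioned above still applies: swapping presets mid-conversation can invalidate a local server's prompt cache, so there is a real latency cost to doing this naively.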

[-] Eheran@lemmy.world 0 points 1 month ago

It's rare that people argue for LLMs like that here; usually it's the same kind of "uga suga, AI bad, did not already solve world hunger".

[-] brucethemoose@lemmy.world 2 points 1 month ago* (last edited 1 month ago)

Lemmy is understandably sympathetic to self-hosted AI, but I get chewed out or even banned literally anywhere else.

In one fandom (the Avatar fandom), there used to be enthusiasm for a "community enhancement" of the original show since the official DVD/Blu-ray looks awful. Years later in a new thread, I don't even mention the word "AI," just the idea of restoration, and I got bombed and threadlocked for the mere tangential implication.

[-] tal@lemmy.today 10 points 1 month ago* (last edited 1 month ago)

They are, however, able to inaccurately summarize it in GLaDOS's voice, which is a strong point in their favor.

[-] JackGreenEarth@lemm.ee 3 points 1 month ago

Surely you'd need TTS for that one, too? Which one do you use, is it open weights?

[-] brucethemoose@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Zonos just came out, seems sick:

https://huggingface.co/Zyphra

There are also some "native" TTS LLMs like GLM 9B, which "capture" more information in the output than pure text input.

[-] ag10n@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

A website with zero information, and barely anything on their huggingface page. What’s exciting about this?

Ahh, you should link to the model

https://www.zyphra.com/post/beta-release-of-zonos-v0-1

[-] brucethemoose@lemmy.world 1 points 1 month ago

Whoops, yeah, should have linked the blog.

I didn't want to link the individual models because I'm not sure hybrid or pure transformers is better?

[-] chemical_cutthroat@lemmy.world 9 points 1 month ago

Which is hilarious, because most of the shit out there today seems to be written by them.

[-] ininewcrow@lemmy.ca 4 points 1 month ago

The owners of LLMs don't care about 'accurate' ... they care about 'fast' and 'summary' ... and especially 'profit' and 'monetization'.

As long as it's quick, delivers instant content and makes money for someone ... no one cares about 'accurate'

[-] Eheran@lemmy.world 3 points 1 month ago

Especially after the open source release of DeepSeek... What...?

[-] untorquer@lemmy.world 3 points 1 month ago

Fuckin news!

[-] homesweethomeMrL@lemmy.world 2 points 1 month ago

Turns out, spitting out words when you don't know what anything means or what "means" means is bad, mmmmkay.

It got journalists who were relevant experts in the subject of the article to rate the quality of answers from the AI assistants.

It found 51% of all AI answers to questions about the news were judged to have significant issues of some form.

Additionally, 19% of AI answers which cited BBC content introduced factual errors, such as incorrect factual statements, numbers and dates.

Introduced factual errors

Yeah that's . . . that's bad. As in, not good. As in - it will never be good. With a lot of work and grinding it might be "okay enough" for some tasks some day. That'll be another 200 Billion please.

[-] badbytes@lemmy.world 2 points 1 month ago

Why, were they trained using MAINSTREAM NEWS? That could explain it.

[-] rottingleaf@lemmy.world 1 points 1 month ago

Yes, I think it would be naive to expect humans to design something capable of what humans are not.

[-] maniclucky@lemmy.world 1 points 1 month ago

We do that all the time. It's kind of humanity's thing. I can't run 60mph, but my car sure can.

[-] Turbonics@lemmy.sdf.org 1 points 1 month ago

BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

[-] Krelis_@lemmy.world 1 points 1 month ago* (last edited 1 month ago)

Some examples of inaccuracies found by the BBC included:

Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"

[-] Turbonics@lemmy.sdf.org 1 points 1 month ago

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

I did not even read up to there but wow BBC really went there openly.

[-] Phoenicianpirate@lemm.ee 0 points 1 month ago

I learned that AI chat bots aren't necessarily trustworthy in everything. In fact, if you aren't taking their shit with a grain of salt, you're doing something very wrong.

[-] Redex68@lemmy.world 0 points 1 month ago

This is my personal take. As long as you're careful and thoughtful whenever using them, they can be extremely useful.

[-] echodot@feddit.uk 0 points 1 month ago* (last edited 1 month ago)

Could you tell me what you use it for because I legitimately don't understand what I'm supposed to find helpful about the thing.

We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I'm heading IT, so I'm supposed to be able to come up with some kind of answer and yet I have nothing. Even putting aside the fact that it probably doesn't work as advertised, I still can't really think of a use for it.

The main problem is it won't be able to operate our ancient and convoluted ticketing system, so it can't actually help.

Everyone I've ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.

[-] quokka1@mastodon.au 1 points 1 month ago

@echodot @Redex68 Off the top of my head: script generation, making content more readable, dictating a brain dump while walking and having it spit out a cohesive summary.

It's all about the prompt you put in: shit in, shit out. And making sure you check/understand what it spits out, and that sometimes it's garbage.

[-] Paradox@lemdro.id -4 points 1 month ago

Funny, I find the BBC unable to accurately convey the news

[-] bilb@lem.monster 1 points 1 month ago* (last edited 1 month ago)

Yeah, haha

Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed "restraint" and described Israel's actions as "aggressive"

Perplexity did fail to summarize the article, but it did correct it.

[-] small44@lemmy.world -4 points 1 month ago

BBC finds, lol. No, we already knew about that

this post was submitted on 11 Feb 2025
186 points (98.4% liked)
