You can call that confidence if you want, but it has very little to do with how "sure" the model is.
Actually, it would be "The confidence of token Th is 0.95, the confidence of S is 0.32, the confidence of ..." and so on for each possible token; many LLMs have a vocabulary of around 16k-32k tokens. Most will be at or near 0. So you pick Th, and then the token "e" will probably be very high next, then a space token, then... Anyway, the confidence of the word "Paris" won't come until far into the generation.
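To make that concrete, here's a toy sketch (made-up numbers, not a real model) of what that per-token distribution and greedy picking look like:

```python
import math

# Hypothetical logits for a handful of candidate next tokens; a real LLM
# produces one logit per entry in its ~16k-32k token vocabulary.
logits = {"Th": 4.2, "S": 1.1, "A": 0.3, "Par": -0.5, "The": 2.0}

# Softmax turns the logits into "confidences" (probabilities summing to 1).
exps = {tok: math.exp(v) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok!r}: {p:.3f}")

# Greedy decoding just appends the highest-probability token and repeats,
# so a word like "Paris" only emerges many steps later, token by token.
```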
Now there is some overseeing logic in a way: if you ask what the capital of a non-existent country is, it'll say there's no such country. But is that because it understands that it doesn't know, or because the training data has enough examples of such questions that it has the statistical data for writing out such an answer?
I assume by SLM you mean smaller LLMs, like for example mistral 7b and llama3.1 8b? Well, those were the kind of models I tried for local RAG.
Well, it was before llama3, but I remember trying mistral, mixtral, llama2 70b, command-r, phi, vicuna, yi, and a few others. They all made mistakes.
I especially remember one case where a product manual had this text: "If the same or a newer version of is already installed on the computer, then the installation will be aborted, and the currently installed version will be maintained", and the question was "What happens if an older version of is already installed?" Every local model answered that that version would be kept and the installation would be aborted.
When trying with OpenAI's latest model at the time, I think GPT-4, it got it right. In general, about 1 in ~5-7 answers to RAG-backed questions were wrong, depending on the model and the type of question. I could usually reword the question to get the correct answer, but to do that you kind of already have to know the answer is wrong, which defeats the whole point of it.
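Roughly, that kind of local RAG pipeline looks like this (a minimal sketch: the keyword-overlap retrieval and the manual snippet are simplified stand-ins; a real setup would use embeddings and pass the prompt to whatever local model you run):

```python
# Retrieve the most relevant manual chunks, then stuff them into the prompt.
def score(chunk: str, question: str) -> int:
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def retrieve(chunks: list[str], question: str, k: int = 2) -> list[str]:
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:k]

def build_prompt(chunks: list[str], question: str) -> str:
    context = "\n\n".join(retrieve(chunks, question))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

manual_chunks = [
    "If the same or a newer version is already installed, the installation "
    "will be aborted and the currently installed version will be maintained.",
    "To uninstall the product, open the control panel and select Remove.",
]

prompt = build_prompt(manual_chunks,
                      "What happens if an older version is already installed?")
print(prompt)  # this is what gets sent to the local model (mistral 7b, etc.)
```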
More or less that. There's a point along the path the input takes through the language model where the induced randomness can significantly affect the output, or not. If all the weights point to the same end node, because the "confidence" is high, then no matter the random seed, the output will be the same. When the seed greatly affects the final result, it's because the weights don't point with that confidence to a unique end node, so the small randomness introduced at the beginning (the seed, so to speak) greatly changes the result. It is here where you are most likely to get a hallucination.
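A toy illustration of that point (made-up distributions, not real model output): with a peaked distribution the seed barely matters; with a flat one, the seed decides the answer, and that's where hallucinations tend to show up.

```python
import random
from collections import Counter

# Hypothetical next-word distributions: one "confident" (peaked), one not (flat).
peaked = {"Paris": 0.95, "Lyon": 0.03, "Rome": 0.02}
flat = {"Paris": 0.36, "Lyon": 0.33, "Rome": 0.31}

def sample(dist: dict[str, float], seed: int) -> str:
    rng = random.Random(seed)
    return rng.choices(list(dist), weights=list(dist.values()))[0]

for name, dist in [("peaked", peaked), ("flat", flat)]:
    counts = Counter(sample(dist, seed) for seed in range(1000))
    print(name, counts.most_common())
# peaked -> almost always "Paris"; flat -> different seeds give different answers.
```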
To put it again in terms of the much easier to visualize earlier neural networks: when you didn't train the model enough, Mario just made random movements without attempting to complete the level, because the weights of the neurons could not reliably take the input and transform it into a useful output. It is something that could be solved in smaller models. For larger models it gets incredibly complicated because of the massive amount of data, the complexity of the data, and the complexity of proper training. But it's not something impossible or that we can't get rid of. The same way you can get Mario to finally complete every level every time without issues, you can get a non-hallucinating chatbot; it just takes more technology improvements.
I suppose it could be said that the nature of language is chaotic like weather and not deterministic like a Mario level, and thus it would actually be "impossible" to get reliable results at scale, like it's impossible to get a precise weather forecast a month in advance. But I'm not sure there would be enough evidence to support that, as hallucinations are not just across the board; they tend to happen on matters that had little training data. Matters with plenty of training data do not produce hallucinations even in today's models.
I searched SLM online and found the small models you mentioned. I wasn't referring to those; those are just small large language models, if that makes any sense. A proper SLM should also have a small purpose, not general chat. I mostly refer to the current chatbots that point you to predefined answers, or summarizing ones. Nothing that can really elaborate a written answer word by word.
Currently, and to my knowledge, there isn't any general language model that can just write up answers and is good enough not to hallucinate. But we are certainly getting closer each year.
Edit: I've been looking for an example, here: https://www.tax.service.gov.uk/ask-hmrc/chat/self-assessment These kinds of chatbots know when their answer is not precise and default to a polite "ask again" answer instead of just telling you the first "hallucination" that came to them. They are powered by similar AI technology, but it's not general-purpose and cannot write word by word. But it "knows" when the answer is precise or not.
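The pattern I'm describing looks roughly like this (a minimal sketch: the Jaccard-overlap scoring and the canned answers are made-up stand-ins, since the real service's internals aren't public):

```python
# A "confidence-gated" assistance bot: score the question against a set of
# pre-written answers and fall back to "please rephrase" when the best score
# is below a threshold.

CANNED_ANSWERS = {
    "how do I register for self assessment": "You can register online at GOV.UK ...",
    "when is the self assessment deadline": "The online filing deadline is 31 January ...",
}

def score(question: str, known: str) -> float:
    q, k = set(question.lower().split()), set(known.lower().split())
    return len(q & k) / max(len(q | k), 1)  # Jaccard overlap as a toy "confidence"

def answer(question: str, threshold: float = 0.5) -> str:
    best_q = max(CANNED_ANSWERS, key=lambda k: score(question, k))
    if score(question, best_q) < threshold:
        return "Sorry, I didn't get that. Could you rephrase your question?"
    return CANNED_ANSWERS[best_q]

print(answer("When is the deadline for self assessment?"))  # canned answer
print(answer("Can my dog file taxes?"))                      # polite fallback
```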
The example you shared is not an LLM. It's a classic chatbot with pre-defined answers: it basically maps keywords to KB articles. If no term is recognized, it will say "I don't know". It will also suggest the wrong KB article if it picks up one keyword and ignores the rest of the context. It has no idea whether the answer is correct by any means. At best, somebody periodically checks a sample of the questions that users didn't consider answered correctly to evaluate the pairings, but it's not AI, or at least not a good one.
If you read my answers you'll see that I said they are not LLMs. They are language models powered by smaller datasets and with smaller neural networks.
I picked a tax agency in particular because I know first-hand that tax agencies do use language models with neural networks (it would surprise me if the UK didn't) to parse the question and select a proper answer; notice that, again, I'm not saying a generative LLM. Not the keyword method you think they use.
I would have provided the first-hand example I know, but it is Spanish and people may not be able to understand it effectively. But I do know that tax agencies usually use very similar tools from one country to another, so the UK probably does use it. If you want to test the Spanish one, here it is, along with sources on what type of AI is used:
https://sede.agenciatributaria.gob.es/Sede/ayuda/herramientas-asistencia-virtual.html
https://es.newsroom.ibm.com/2018-02-28-La-Agencia-Tributaria-utiliza-IBM-Watson-para-ayudar-a-las-empresas-en-la-gestion-del-IVA
Again, because it seems I need to repeat this so people can properly train on the info I'm writing: not an LLM, not GPT, not a large general-use language model. As for a model with that amount of parameters, cutting non-confident answers would probably cut most answers, at least with today's state of the technology. Things keep improving each year.
Edit: found an English source on the matter: https://www.investinspain.org/en/news/2024/ibm
The chatbot itself is still only in Spanish and the co-official languages.
That's what you're missing. Those are not language models, nor do they use neural networks. At best they use NLP classification. They do not generate text; they pick pre-constructed answers based on the inputs. Because of this, there's no confidence beyond "what's generally correct based on this keyword".
I've worked with IBM Watson. That existed and was used for basic bots a decade ago. You have to manually feed it the mappings from terms to outputs.
And I have used the tax agency's website to confirm what I'm saying.