362 points · submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[-] mojo@lemm.ee 19 points 1 year ago

Not with our current tech. We'll need some breakthroughs, but I feel like it's certainly possible.

[-] GenderNeutralBro@lemmy.sdf.org 11 points 1 year ago

You can potentially solve this problem outside of the network, even if you can't solve it within the network. I consider accuracy to be outside the scope of LLMs, and that's fine since accuracy is not part of language in the first place. (You may have noticed that humans lie with language rather often, too.)

Most of what we've seen so far is bare-bones implementations of LLMs. ChatGPT doesn't integrate with any kind of knowledge database at all (only what it has internalized from its training set, which is almost accidental). Bing will feed in a couple of web search results, but a few minutes of playing with it is enough to prove how minimal that integration is. Bard is no better.

The real potential of LLMs is not as a complete product; it is as a foundational part of more advanced programs, akin to regular expressions or SQL queries. Many LLM projects explicitly state that they are "foundational".
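
To make that concrete, here's a minimal sketch of an LLM used as one component of a larger program: retrieve supporting text first, then have the model answer only from what was retrieved. `search_knowledge_base` and `call_llm` are hypothetical placeholders, not any real product's API.

```python
def search_knowledge_base(query: str) -> list[str]:
    # Placeholder: swap in a real search index or database lookup.
    return ["(retrieved passage relevant to: " + query + ")"]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in whichever model endpoint you actually use.
    return "(model output)"

def grounded_answer(question: str) -> str:
    # The LLM handles the language; the retrieval step supplies the facts.
    passages = search_knowledge_base(question)
    prompt = (
        "Answer the question using only the sources below. "
        "If the sources don't contain the answer, say so.\n\n"
        "Sources:\n" + "\n".join(passages) +
        f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(grounded_answer("When did the first transatlantic telegraph cable go live?"))
```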

All the effort is spent training the network because that's what's new and sexy. Very little effort has been spent on the ho-hum task of building useful tools with those networks. The out-of-network parts of Bing and Bard could've been slapped together by anyone with a little shell scripting experience. They are primitive. The only impressive part is the LLM.

The words feel strange coming off my keyboard, but... Microsoft has the right idea with the AI integrations they're rolling into Office.

The potential for LLMs is so much greater than what is currently available for use, even if they can't solve any of the existing problems in the networks themselves. You could build an automated fact-checker using LLMs, but the LLM itself is not a fact-checker. It's coming, no doubt about it.
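
As a rough illustration of that fact-checker idea: the LLM only extracts and judges claims against retrieved evidence, and the verdict depends on the evidence rather than the model's memory. All three helpers here are hypothetical stand-ins, not a working system.

```python
def extract_claims(text: str) -> list[str]:
    # Placeholder: in practice, prompt the LLM to list the factual claims.
    return [s.strip() for s in text.split(".") if s.strip()]

def retrieve_evidence(claim: str) -> str:
    # Placeholder: a web search or database lookup would go here.
    return "(evidence passages for: " + claim + ")"

def judge_claim(claim: str, evidence: str) -> str:
    # Placeholder: prompt the LLM to answer supported / contradicted /
    # unverifiable, based strictly on the evidence it was given.
    return "unverifiable"

def fact_check(text: str) -> dict[str, str]:
    return {claim: judge_claim(claim, retrieve_evidence(claim))
            for claim in extract_claims(text)}

print(fact_check("The Eiffel Tower is in Berlin. Water boils at 100 °C at sea level."))
```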

[-] 8ace40@programming.dev 5 points 1 year ago* (last edited 1 year ago)

The other day I saw a talk by one of the Wikimedia folks about integrating LLMs with knowledge graphs. It was very cool; I'll try to find it again.

Edit: found it! https://youtu.be/WqYBx2gB6vA

[-] GenderNeutralBro@lemmy.sdf.org 2 points 1 year ago

That's a fantastic video. Thanks!

[-] Phlogiston@lemmy.world 2 points 1 year ago

Good video.

In summary, we should leverage the strengths of LLMs (language, complex reasoning) and the strengths of knowledge graphs (facts).

I think the engineering hurdle will be in getting the LLMs to use knowledge graphs effectively when needed and not when pure language is a better option. His suggestion of “it’s complicated” could be a good signal for that.
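
A hand-wavy sketch of that routing step, with the classifier, the graph query (e.g. SPARQL against Wikidata), and the LLM call all as hypothetical placeholders:

```python
def needs_facts(question: str) -> bool:
    # Placeholder: in practice you'd ask the LLM itself (or a small classifier)
    # whether the question hinges on verifiable facts.
    return any(w in question.lower() for w in ("when", "who", "where", "how many"))

def query_knowledge_graph(question: str) -> str:
    # Placeholder: e.g. translate the question into SPARQL against Wikidata.
    return "(structured fact retrieved from the graph)"

def call_llm(prompt: str) -> str:
    # Placeholder for whichever model endpoint you use.
    return "(model output)"

def answer(question: str) -> str:
    if needs_facts(question):
        fact = query_knowledge_graph(question)
        return call_llm(f"Using this fact: {fact}\nAnswer the question: {question}")
    return call_llm(question)  # "pure language" path: no graph lookup needed

print(answer("Who wrote The Master and Margarita?"))
```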

[-] pufferfischerpulver@feddit.de 3 points 1 year ago

Honestly, the integration into Office is an excellent idea. I've been using ChatGPT to work on documents, letting it write entirely new sections for me based on my loose notes and existing text, which for now I have to either paste in or feed as a PDF through a plugin. But the US$25 I paid, I literally earned back in a single day through the time saved versus the hours I could justifiably bill.

Once I have that integrated into Word directly, it'll be huge.

People also seem to expect LLMs to just do all the work. But that's not my experience, for generative text anyway: you have to have a solid idea of what you want and how you want it. Still, the time the LLM saves on the formulation and organisation of your thoughts is incredible.
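
Just to illustrate the workflow: something like the sketch below, where loose notes plus the existing document go into one prompt and the model returns a draft section. `call_llm` is a stand-in for whichever chat API or Office integration you actually use.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for the chat API or Office integration you actually use.
    return "(drafted section)"

def draft_section(section_title: str, notes: str, existing_text: str) -> str:
    prompt = (
        f"Draft a section titled '{section_title}' for the document below.\n"
        "Match its tone and structure, and build the content from the notes.\n\n"
        f"Existing document:\n{existing_text}\n\nNotes:\n{notes}"
    )
    return call_llm(prompt)

print(draft_section("Risks", "budget overrun likely; supplier delays", "…existing report text…"))
```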

[-] justastranger@sh.itjust.works 2 points 1 year ago

LLMs will work great for translating raw thoughts into words, but until we create neural networks that actually think independently, all they'll be is transformers that approximate their training data in response to prompts.
