85 points · submitted 1 year ago by barsoap@lemm.ee to c/technology@lemmy.world

A new paper suggests diminishing returns from larger and larger generative AI models. Dr Mike Pound discusses.

The Paper (No "Zero-Shot" Without Exponential Data): https://arxiv.org/abs/2404.04125

[-] bamboo@lemm.ee 23 points 1 year ago* (last edited 1 year ago)

I think it’s incredibly naïve to think that because we’ve hit a boundary on one particular aspect of LLMs, the technology has peaked as a whole. There are lots of ways to improve LLMs beyond just increasing the parameter count; for example, there’s been an uptick in smaller models optimized to run on client devices without large GPUs. There is probably a future where small 3-7B models are competitive with today’s best 70B models, but run in real time on any smartphone. We’ll have larger context windows, allowing LLMs to work on larger problems. And we’ll have better techniques for getting high-quality information out of LLMs: there are already adversarial methods, where two LLMs hold a debate on a subject, that have been shown to produce more accurate and comprehensive answers. They’ll also continue to be embedded into different places in software that make them more useful, not just as a chatbot that lives in its own world.

[-] Murvel@lemm.ee 7 points 1 year ago* (last edited 1 year ago)

What you mentioned is accounted for in the video and paper in question.

The main argument being that no matter our computational techniques, diminishing returns in predictive precision are reached far sooner than we achieve general intelligence.
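The paper's headline claim ("No 'Zero-Shot' Without Exponential Data") can be sketched numerically. Assuming, purely for illustration, a log-linear relationship between downstream accuracy and the number of pretraining examples (the coefficients below are hypothetical, not the paper's fitted values), each fixed step in accuracy demands roughly an order of magnitude more data:

```python
import math

# Hypothetical log-linear fit: accuracy ≈ a + b * log10(N).
# Coefficients are made up for illustration only.
a, b = 0.10, 0.08

def accuracy(n_examples: float) -> float:
    """Predicted accuracy after training on n_examples samples."""
    return a + b * math.log10(n_examples)

def data_needed(target_acc: float) -> float:
    """Invert the fit: examples required to reach target_acc."""
    return 10 ** ((target_acc - a) / b)

for target in (0.50, 0.58, 0.66):
    print(f"accuracy {target:.2f} needs ~{data_needed(target):.1e} examples")
```

Under this toy fit, each +0.08 of accuracy costs 10x the data, which is the sense in which linear gains in precision require exponential growth in the training set.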

[-] boyi@lemmy.sdf.org 2 points 1 year ago* (last edited 1 year ago)

no matter our computational techniques, diminishing returns in predictive precision are reached far sooner than we achieve general intelligence

That's a very bold presumption. How can they be so sure that no future model can tackle the issue? Have they got proof or something?

[-] Murvel@lemm.ee 2 points 1 year ago

No, they just extrapolate from increasing the size of the training set; it's not that complicated. Which is a fair presumption, as that is how we've increased predictive precision so far.

this post was submitted on 09 May 2024
85 points (85.7% liked)
