34 comments
[-] j4p@lemm.ee 1 points 8 months ago

Sigh. I hope LLMs get dropped from the AI bandwagon, because I do think they have some really cool use cases, and I love just running my little local models. Cutting government spending like a madman, writing the next great American novel, or eliminating actual jobs are not those use cases.

[-] OsrsNeedsF2P@lemmy.ml 1 points 8 months ago

I work with people who work in this field. Everyone knows this, but there's also an increased effort in improvements all across the stack, not just the final LLM. I personally suspect the current generation of LLMs is at its peak, but with each breakthrough the technology will climb again.

Put differently, I still suspect LLMs will be at least twice as good in 10 years.

[-] Zier@fedia.io 1 points 8 months ago

It's gonna crash like a self-driving Tesla. It's gonna fall apart like a Cybertrukkk.

[-] theacharnian@lemmy.ca 1 points 8 months ago

It's so funny how all this is only a problem within a capitalist frame of reference.

[-] masquenox@lemmy.world 1 points 8 months ago

What they call "AI" is only "intelligent" within a capitalist frame of reference, too.

[-] Hazor@lemmy.world 1 points 8 months ago

I don't understand why you're being downvoted. Current "AI" based on LLMs has no capacity for understanding the knowledge it contains (hence all the "hallucinations"), and thus possesses no meaningful intelligence. To call it intelligent is purely marketing.

[-] iAvicenna@lemmy.world 1 points 8 months ago

So long, see you all in the next hype cycle. Any guesses?

[-] Mwa@lemm.ee 1 points 8 months ago

Yep, knew AI would die some day.

[-] karl_chungus@lemm.ee 0 points 8 months ago

Even Pied Piper didn’t scale.

[-] dejected_warp_core@lemmy.world 0 points 8 months ago

Welcome to the top of the sigmoid curve.

If you were wondering what 1999 felt like WRT the internet, well, here we are. The Matrix was still fresh in everyone's mind, a lot of online tech innovation kinda plateaued, and then came some "market adjustments."

[-] Hackworth@lemmy.world -1 points 8 months ago* (last edited 8 months ago)

I think it's more likely a compound sigmoid (don't Google that). LLMs are composed of distinct technologies working together. As we've reached the inflection point of the scaling for one, we've pivoted implementations to get back on track. Notably, context windows are no longer an issue. But the most recent pivot came just this week, allowing for a huge jump in performance. There are more promising stepping stones coming into view. Is the exponential curve just a series of sigmoids stacked too close together? In any case, the article's correct - just adding more compute to the same exact implementation hasn't enabled scaling exponentially.
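The "compound sigmoid" idea can be sketched numerically: stack several S-curves, one per hypothetical breakthrough, and the sum looks like continued growth even though each individual technology plateaus. A minimal sketch (the three "waves" below are made-up parameters, not real benchmark data):

```python
import math

def logistic(x, midpoint, scale=1.0, height=1.0):
    # A single S-curve: slow start, rapid middle, plateau at `height`
    return height / (1 + math.exp(-scale * (x - midpoint)))

def compound(x, waves):
    # Stack several sigmoids; each new "breakthrough" adds its own S-curve
    return sum(logistic(x, m, s, h) for (m, s, h) in waves)

# Three hypothetical technology waves, each taller than the last
waves = [(0, 1.0, 1), (5, 1.0, 2), (10, 1.0, 4)]
samples = [compound(x, waves) for x in range(-2, 14)]
```

Sampling `compound` over time gives a curve that keeps climbing as long as the waves arrive close enough together, then flattens once the last one saturates, which is the crux of the "is the exponential just stacked sigmoids?" question.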

[-] jpablo68@infosec.pub 0 points 8 months ago

I just want a portable self hosted LLM for specific tasks like programming or language learning.

[-] plixel@programming.dev 1 points 8 months ago

You can install Ollama in a Docker container and use that to install models to run locally. Some are really small and still pretty effective: Llama 3.2, for example, is only 3B, and some are as small as 1B. It can be accessed through the terminal, or you can use something like OpenWeb UI for a more "ChatGPT"-like interface.
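Once Ollama is running, it exposes a local HTTP API. A minimal sketch of calling it from Python, assuming Ollama is running on its default port (11434) and the model has already been pulled with `ollama pull llama3.2`:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks for a single JSON response instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance):
# print(generate("llama3.2", "Explain list comprehensions in one sentence."))
```

Nothing leaves the machine; the same endpoint works whether Ollama runs natively or inside a container with port 11434 published.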

[-] randon31415@lemmy.world 0 points 8 months ago

The hype should go the other way. Instead of bigger and bigger models that do more and more - have smaller models that are just as effective. Get them onto personal computers; get them onto phones; get them onto Arduino minis that cost $20 - and then have those models be as good as the big LLMs and Image gen programs.

[-] JayDee@lemmy.ml -1 points 8 months ago

That would be innovation, which I'm convinced no company can do anymore.

It feels like every time I learn about one of our "modern innovations," it was already thought up and written down in a book in the 1950s, and just wasn't possible at the time due to some limitation in memory, precision, or some other metric. All we did was five decades of marginal improvement to get there, while not innovating much at all.

[-] ikidd@lemmy.world -1 points 8 months ago

I believe this about as much as I believed the "We're about to experience the AI singularity" morons.

this post was submitted on 13 Nov 2024
283 points (95.2% liked)

Technology
