submitted on 02 Feb 2025 by jeffw@lemmy.world to c/technology@lemmy.world
[-] Paradox@lemdro.id 33 points 3 weeks ago

Can I download their model and run it on my own hardware? No? Then they're inferior to DeepSeek.

[-] Teanut@lemmy.world 12 points 3 weeks ago

In fairness, unless you have about 800GB of VRAM/HBM you're not running the true DeepSeek. The smaller models are Llama or Qwen models distilled from DeepSeek R1.

I'm really hoping DeepSeek releases smaller models that I can fit on a 16GB GPU and try at home.
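
For a rough sense of what fits where, here's a back-of-envelope sketch. The numbers are my own assumptions (4-bit quantization at ~0.5 bytes per parameter, ~20% overhead for KV cache and activations), so treat it as an estimate rather than a measurement:

```python
# Rough VRAM estimate: weights dominate, so
#   GB ≈ params (billions) × bytes per parameter × overhead factor
# The ~20% overhead for KV cache/activations is an assumption.
def vram_gb(params_billions: float, bytes_per_param: float,
            overhead: float = 1.2) -> float:
    return params_billions * bytes_per_param * overhead

# Some of the published R1 distills (Qwen-based)
for name, size_b in [("R1-Distill-Qwen-7B", 7),
                     ("R1-Distill-Qwen-14B", 14),
                     ("R1-Distill-Qwen-32B", 32)]:
    print(f"{name}: ~{vram_gb(size_b, 2.0):.0f} GB at FP16, "
          f"~{vram_gb(size_b, 0.5):.1f} GB at 4-bit")

# Full R1 is 671B parameters, natively FP8 (~1 byte/param):
print(f"R1-671B: ~{vram_gb(671, 1.0):.0f} GB")  # roughly the 800GB above
```

By this math a 4-bit 14B distill (~8GB) fits a 16GB card with room to spare for context, while the 32B distill (~19GB) doesn't quite make it.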

[-] Padit@feddit.org 13 points 3 weeks ago

Well, honestly: I have this kind of computational power at my university, and we are in dire need of a locally hosted LLM for a project, so at least for me as a researcher, it's really cool to have that.

[-] Teanut@lemmy.world 4 points 2 weeks ago

Lucky you! I need to check my university's current GPU capacity, but sadly my thesis won't need that kind of horsepower, so I won't be able to give it a try unless I pay AWS or someone else for it on my own dime.
