87 points · submitted 30 Aug 2025 by yogthos@lemmy.ml to c/technology@lemmy.ml
[-] slacktoid@lemmy.ml 14 points 6 days ago* (last edited 5 days ago)

Where can I buy this?

Edit: I realized after I commented that this was the product page. My bad. It was more of a "take my money now" scenario.

[-] frongt@lemmy.zip 11 points 6 days ago

This is literally a product page to buy them

[-] eldavi@lemmy.ml 6 points 5 days ago

I wonder if the driver needed to run it is compatible with Linux.

[-] slacktoid@lemmy.ml 5 points 5 days ago

Why wouldn't it be? (I'm thinking: why would they support Microsoft, and the only other viable option is FreeBSD.)

[-] eldavi@lemmy.ml 1 points 4 days ago

The world still uses Windows heavily, so adoption by end consumers relies on it.

[-] locuester@lemmy.zip 2 points 5 days ago

Try the link in the post you're responding to.

[-] uberstar@lemmy.ml 7 points 5 days ago

I kinda want an individual consumer-friendly, low-end/mid-range alternative that can run my games and video editing software for very small projects. So far I'm only eyeing the Lisuan G100, which seems to fit that bill.

This seems cool though; beyond AI, it could be used for distributed cloud computing or something of that sort.

[-] ICastFist@programming.dev 2 points 4 days ago

Does anyone know if it can run CUDA code? Because that's the silver bullet ensuring Nvidia dominance in the planet-wrecking servers

[-] peppers_ghost@lemmy.ml 5 points 4 days ago

llama.cpp and PyTorch support it right now. CUDA isn't available on its own as far as I can tell. I'd like to try one out, but the bandwidth seems to be ass: about 25% as fast as a 3090. It's a really good start for them, though.
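
For context on what "PyTorch support" usually means in practice: the vendor ships an out-of-tree backend that registers its own device type, and user code selects it much like it would select CUDA. A minimal sketch, assuming nothing about this card's actual API (the placeholder comments mark assumptions):

```python
# Minimal sketch of PyTorch device selection. Vendor cards with an
# out-of-tree backend typically register their own device string once
# their plugin is imported; the exact name for this card is not confirmed.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():   # NVIDIA, or any CUDA-compatible backend
        return torch.device("cuda")
    # A vendor plugin, if installed, would be imported here and would
    # expose its own device type; otherwise fall back to CPU.
    return torch.device("cpu")

device = pick_device()
model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(2, 16, device=device)
print(model(x).shape)  # torch.Size([2, 4])
```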

[-] geneva_convenience@lemmy.ml 6 points 5 days ago

For inference only. NVIDIA GPUs are so big because they can train models, not just run them. All other GPUs seem to lack that capacity.

[-] lorty@lemmygrad.ml 6 points 5 days ago

And training them requires a LOT of VRAM, which is why NVIDIA does as much as it can to limit VRAM on its gaming cards: better market segmentation.

[-] nutbutter@discuss.tchncs.de 6 points 5 days ago

You can train or fine-tune a model on any GPU. Sure, it will be slower, but more VRAM is better.
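
To make the VRAM point concrete, here is a common back-of-the-envelope estimate (a rule of thumb, not a benchmark of any specific card): mixed-precision Adam training keeps fp16 weights and gradients plus fp32 master weights and two optimizer states, roughly 16 bytes per parameter before activations, while fp16 inference needs about 2 bytes per parameter.

```python
# Rough VRAM estimate per the rule of thumb above (activations excluded).
# The byte-per-parameter figures are illustrative assumptions.
BYTES_INFERENCE_FP16 = 2     # fp16 weights only
BYTES_TRAIN_ADAM_MIXED = 16  # fp16 weights+grads, fp32 master weights, 2 Adam states

def gib(params: float, bytes_per_param: int) -> float:
    return params * bytes_per_param / 2**30

for billions in (7, 13, 70):
    p = billions * 1e9
    print(f"{billions}B params: ~{gib(p, BYTES_INFERENCE_FP16):.0f} GiB inference, "
          f"~{gib(p, BYTES_TRAIN_ADAM_MIXED):.0f} GiB training (before activations)")
```

The gap (roughly 13 GiB versus 100+ GiB for a 7B model) is why consumer cards handle inference and light fine-tuning but full training stays on high-VRAM datacenter parts.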

[-] WalnutLum@lemmy.ml 6 points 5 days ago

These only work with ARM CPUs, I think.
