Yep, that'd be me. That said, if I were to buy a new GPU today (well, tomorrow; I'm waiting on Valve's announcement of its next HMD), I might still get an NVIDIA: even though I'm convinced 99% of LLM/GenAI is pure hype, if the 1% that might be useful, might be built ethically, and might run on my hardware ended up not running because ROCm is still just a tech demo or is too far behind performance-wise, I'd be annoyed. Still, that percentage is so ridiculously low that I'd probably pick the card that treats the open ecosystem best.
ROCm works just fine on consumer cards for inferencing; it's competitive or superior in $/token/s and beats NVIDIA on power consumption. ROCm 7.0 seems to be giving a >2x uplift on consumer cards over 6.x, so that's lovely. I haven't tried 7 myself yet (waiting for the dust to settle), but I have no issues with image gen, text gen, image tagging, video scanning, etc. using containers and distroboxes on Bazzite with a 7800XT.
Bleeding edge and research tend to be CUDA, but mainstream use cases are getting ported reasonably quickly. TL;DR: unless you're training or researching (unlikely on consumer cards), AMD is fine and performant, plus you get stable Linux and great gaming. (Quick sanity check below if you want to confirm the GPU is actually being used.)
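If it helps anyone setting this up: this is roughly the sanity check I'd run inside the container or distrobox to confirm the ROCm build of PyTorch actually sees the card before blaming the model. Treat it as a sketch, untested as written; on some consumer cards (the 7800XT included) people also report needing to set HSA_OVERRIDE_GFX_VERSION before the GPU shows up.

```python
# Sanity check inside the container/distrobox: verify the ROCm build of
# PyTorch actually sees the AMD GPU before running any image/text gen.
# (ROCm builds of PyTorch expose the GPU through the torch.cuda API.)
import torch

print("PyTorch:", torch.__version__)
print("HIP/ROCm build:", torch.version.hip)      # None on CUDA/CPU-only builds
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Quick smoke test: a small matmul on the GPU.
    x = torch.randn(1024, 1024, device="cuda")
    print("Matmul OK, sum =", (x @ x).sum().item())
```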
I use local AI for speech/object recognition from my video security system and for control over my Home Assistant and media services. These services are isolated from the Internet for security reasons; that wouldn't be possible if they required OpenAI to function. (Rough sketch of the setup below.)
ChatGPT and Sora are just tech toys, but neural networks and machine learning are incredibly useful components. You would be well served by staying current on the technology as it develops.
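For the curious, the shape of it is something like this: run detection on a camera frame locally, then call Home Assistant's REST API over the LAN, with nothing leaving the house. This is a simplified sketch, not my exact setup; it assumes the `ultralytics` package, a downloaded YOLO model file, and a long-lived Home Assistant token, and the entity names are placeholders.

```python
# Minimal sketch of the "everything stays on the LAN" idea: detect objects in
# a camera frame locally and, if a person shows up, poke Home Assistant's
# REST API directly. No cloud service in the loop.
import requests
from ultralytics import YOLO

HA_URL = "http://homeassistant.local:8123"   # your Home Assistant instance
HA_TOKEN = "YOUR_LONG_LIVED_TOKEN"           # created under your HA profile

model = YOLO("yolov8n.pt")                   # small model, runs fine on CPU
results = model("front_door.jpg")            # a frame grabbed from the camera

labels = {results[0].names[int(c)] for c in results[0].boxes.cls}
print("Detected:", labels)

if "person" in labels:
    requests.post(
        f"{HA_URL}/api/services/light/turn_on",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": "light.porch"},   # placeholder entity
        timeout=5,
    )
```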
What did I miss? https://fabien.benetou.fr/Content/SelfHostingArtificialIntelligence#UsedLocally
Edit: added a direct link to the things I've tried.
Ollama works just fine for me with an AMD GPU.
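For anyone wondering what talking to it looks like, this is roughly how I poke the local Ollama server from Python. It assumes Ollama is listening on its default port (11434) and that you've already pulled a model; the model name here is just an example.

```python
# Quick way to confirm the local Ollama instance (GPU-accelerated or not)
# is answering: hit its REST API on the default port.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",                      # any model you've pulled
        "prompt": "Say hello in one short sentence.",
        "stream": False,                          # one JSON blob, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```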
From what I've heard, ROCm may be finally getting out of its infancy; at the very least, I think by the time we get something useful, local, and ethical, it will be pretty well-developed.
Honestly, though, I'm in the same boat as you and actively try to avoid most AI stuff on my laptop. The only "AI" thing I use is the occasional image upscale. I find it kind of useless on photos, but it's sometimes helpful when doing vector traces on bitmap graphics with flat colors; Inkscape's results aren't always good with lower-resolution images, so putting that specific kind of graphic through a "cartoon mode" upscale sometimes improves results dramatically for me.
Of course, I don't have GPU ML acceleration, so it just runs on the CPU; it's a bit slow, but still less than 10 minutes.
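In case it's useful, the upscale step is roughly this kind of thing. Not my exact tool, just a sketch of the same idea using opencv-contrib's super-resolution module on the CPU, assuming you've downloaded one of its pretrained models; the filenames are placeholders.

```python
# Upscale a low-res flat-colour bitmap on the CPU before handing it to
# Inkscape's "Trace Bitmap". Requires opencv-contrib-python and a downloaded
# ESPCN_x4.pb model file.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")        # pretrained super-resolution model
sr.setModel("espcn", 4)            # algorithm name + upscale factor

img = cv2.imread("logo_lowres.png")
big = sr.upsample(img)             # CPU-only here; slow but it finishes
cv2.imwrite("logo_upscaled.png", big)
# Then: Inkscape -> Path -> Trace Bitmap on logo_upscaled.png
```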