this post was submitted on 08 Jan 2026 to the Privacy community
63 points (95.7% liked)
That's awesome! I was going to add some sort of AI to my Proxmox homelab for research, but I figured the risk of hallucination was too high, and I thought the only way to fix that was a bigger model. But this seems like a really good setup (if I can actually figure out how to implement it), and I won't need to upgrade my GPU!
Although I only have one AI-suitable GPU (the GTX 1660 6GB in my homelab is really only good for movie transcoding), I have a 3060 12GB in my gaming PC. I was thinking I could set up some kind of WoL (wake-on-LAN) system that boots the PC and spins up the AI software on it. Maybe my homelab hosts Open WebUI, and when I send a query it prompts my gaming PC to wake up and do the AI crunching.
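That WoL trigger is the easy part to script: it's just a UDP broadcast of a "magic packet". A minimal sketch (the MAC address is a placeholder you'd swap for your gaming PC's, and the NIC needs WoL enabled in BIOS/OS):

```python
import socket

def make_magic_packet(mac: str) -> bytes:
    """Build a WoL magic packet: 6x 0xFF followed by the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN; the sleeping NIC picks it up."""
    packet = make_magic_packet(mac)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

# Example (placeholder MAC):
# wake("aa:bb:cc:dd:ee:ff")
```

A front end on the homelab could call something like this before forwarding the query, then poll the gaming PC's inference port until it answers.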
Well, technically you don't need any GPU for the system I've set up, because only 2-3 models are "hot" in memory (so about...10GB?) and the rest are cold / invoked as needed. My own GPU is only 8GB (and my prior one was 4GB!). I designed this with low-end rigs in mind.
The minimum requirement is probably a CPU equal to or better than mine (i7-8700; not hard to match), 8-10GB RAM, and maybe 20GB disk space. Bottom of the barrel would be 4GB, but then you'll have to deal with SSD thrashing from swap.
Anything above that is a bonus / TPS multiplier.
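The hot/cold split described above can be sketched as a small LRU cache: keep at most N models resident, and when a cold model is invoked, evict the least-recently-used one. The model names and the load stub here are hypothetical stand-ins, not the actual setup:

```python
from collections import OrderedDict

class ModelPool:
    """Keep at most `max_hot` models loaded; evict the least-recently-used on overflow."""

    def __init__(self, max_hot: int = 2):
        self.max_hot = max_hot
        self.hot: "OrderedDict[str, str]" = OrderedDict()  # name -> handle (stub)

    def _load(self, name: str) -> str:
        # Stand-in for actually loading weights (e.g. a llama.cpp or Ollama call).
        return f"handle:{name}"

    def get(self, name: str) -> str:
        if name in self.hot:
            self.hot.move_to_end(name)                 # mark as most recently used
        else:
            if len(self.hot) >= self.max_hot:
                self.hot.popitem(last=False)           # drop the coldest model
            self.hot[name] = self._load(name)
        return self.hot[name]

pool = ModelPool(max_hot=2)
pool.get("router-3b")
pool.get("summariser-7b")
pool.get("coder-7b")       # evicts router-3b
print(list(pool.hot))      # ['summariser-7b', 'coder-7b']
```

That cap on resident models is why the RAM floor stays around 8-10GB regardless of how many models are installed on disk.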
FYI: CPU-only (my CPU at least) + 32GB system RAM, this entire thing runs at about 10-11 TPS, which is interactive enough / faster than reading speed. Any decent GPU should get you 3-10x that. I designed this for peasant-level hardware / to punch GPTs in the dick through clever engineering, not sheer grunt. Fuck OpenAI. Fuck Nvidia. Fuck DDR6. Spite + ASD > "you can't do that" :). Yes I fucking can - watch me.
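For scale: comfortable reading speed is roughly 4-5 words per second, i.e. around 5-7 tokens/sec, so 10-11 TPS already outpaces it, and the 3-10x GPU multiplier works out to:

```python
cpu_tps = (10, 11)        # measured CPU-only throughput from the post
gpu_multiplier = (3, 10)  # rough speedup range for "any decent GPU"

# Estimated GPU throughput range, tokens/sec
gpu_tps = (cpu_tps[0] * gpu_multiplier[0], cpu_tps[1] * gpu_multiplier[1])
print(gpu_tps)  # (30, 110)
```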
If you want my design philosophy, here is one of my (now shadowbanned) posts from r/lowendgaming. Seeing as you're a gamer, this might make sense to you! The MoA design I have is pure "level 8 spite, zip-tie a Noctua fan to a server-grade GPU and stick it in a 1L shoebox" YOLOing :).
It works, but it's ugly, in a beautiful way.
Lowend gaming iceberg (Levels 1-9; collapsed spoiler sections from the original post)
I have a 12-gig GPU that I don't use most of the time; might as well put it to work doing something. And even second-hand DDR4 memory has gotten so expensive that I'd rather not have to upgrade my homelab.
What is your main use case for this, anyway? Do you use it for research? That's what I would mainly use it for, but also finding things in my Obsidian vault.
What stage have you actually gotten to?
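Vault search is the easy half of that: an Obsidian vault is just a folder of Markdown files, so a minimal sketch (path and keyword are placeholders) is a plain text scan whose hits you can feed to the LLM as retrieval context, so it cites real notes instead of hallucinating:

```python
from pathlib import Path

def search_vault(vault: Path, keyword: str) -> list[tuple[str, int, str]]:
    """Return (filename, line number, line) for every vault line containing keyword."""
    hits = []
    for note in sorted(vault.rglob("*.md")):  # Obsidian notes are plain Markdown
        for lineno, line in enumerate(note.read_text(encoding="utf-8").splitlines(), 1):
            if keyword.lower() in line.lower():
                hits.append((note.name, lineno, line.strip()))
    return hits

# Example (placeholder path):
# for name, lineno, line in search_vault(Path("~/vault").expanduser(), "proxmox"):
#     print(f"{name}:{lineno}: {line}")
```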
I do like the idea of all this, though. I should really get into undervolting/overclocking my stuff; there's really no reason not to, since I could gain performance, longevity, or both!
Also, I hate that the stock fans on CPUs are so garbage. Luckily Arctic fans are really cheap and quiet. Noctua is great, but I'd sooner buy a budget AIO than a single Noctua fan lol.
Sorry - I think I misunderstood part of your question (what stage have you actually gotten to). See what I mean about needing sentiment analysis LOL
Did you mean about the MoA?
The TL;DR - I have it working - right now - on my rig. It's strictly manual. I need to detangle it and generalise it, strip out personal stuff and then ship it as v1 (and avoid the oh so tempting scope creep). It needs to be as simple as possible for someone else to retool.
So, it's built and functional right now...but the detangling, writing up specs and docs, uploading everything to Codeberg and mirroring etc will take time. I'm back to work this week and my fun time will be curtailed...though I want nothing more than to hyperfocus on this LOL.
One of the issues with ASD is most of us over-engineer everything for the worst case adversarial outcomes, as a method of reducing meltdowns/shutdowns. Right now, I am specifically using my LLM like someone who hates it and wants to break it...to make sure it does what I say it does.
If you'd like, I can drop my RFC (request for comments, in engineering speak) for you to look at / verify with another LLM / ask someone about. This thing is real, not hype and not vibe coding. I built this because my ASD brain needs it and because I was driven by spite / too miserly to pay out the ass for a decent rig. Ironically, those constraints probably led to something interesting (I hope) that can help others (I hope). Like everything else, it's not perfect, but it does what it says on the tin 9 times out of 10...which is about all you can hope for.
Oh, I didn't realise you were going to release it! I was just going to try to set up a simplified version myself; that's really cool. Don't worry, I'm patient, and I'll be too busy this year to implement anything for myself anyway. But I too (with my likely-soon-to-be-diagnosed ADHD brain) share your enthusiasm for a way to set up an AI that collects information for you without lying.
Ha ha! I actually finished it over the weekend. Now it's on to the documentation...ICBF lol
I just tried to get shit GPT to do it this morning, as it's generally pretty OK for that. As always, it produces real "page-turners".
In any case, watch this space
https://github.com/BobbyLLM
(it will mirror the Codeberg repo)
The deed is done! Woot! I'll do a longer post elsewhere, but you get to be the first cab off the rank :)
https://codeberg.org/BobbyLLM/llama-conductor
OR
https://github.com/BobbyLLM/llama-conductor <---mirror
I just saw it - this is really cool. I hope I get around to it in the not-too-distant future.