
Hello!

As a handsome local AI enjoyer™ you’ve probably noticed one of the big flaws with LLMs:

It lies. Confidently. ALL THE TIME.

(Technically, it “bullshits” - https://link.springer.com/article/10.1007/s10676-024-09775-5)

I’m autistic and extremely allergic to vibes-based tooling, so … I built a thing. Maybe it’s useful to you too.

The thing: llama-conductor

llama-conductor is a router that sits between your frontend (OWUI / SillyTavern / LibreChat / etc) and your backend (llama.cpp + llama-swap, or any OpenAI-compatible endpoint). Local-first (because fuck big AI), but it should talk to anything OpenAI-compatible if you point it there (note: experimental so YMMV).

Not a model, not a UI, not magic voodoo.

A glass-box that makes the stack behave like a deterministic system, instead of a drunk telling a story about the fish that got away.

TL;DR: “In God we trust. All others must bring data.”

Three examples:

1) KB mechanics that don’t suck (1990s engineering: markdown, JSON, checksums)

You keep “knowledge” as dumb folders on disk. Drop docs (.txt, .md, .pdf) in them. Then:

  • >>attach <kb> — attaches a KB folder
  • >>summ new — generates SUMM_*.md files with SHA-256 provenance baked in
  • the summarised original then gets moved to a sub-folder
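
For the checksum-curious: the provenance bit is genuinely 1990s-grade engineering. A simplified sketch of the idea, not the actual llama-conductor code (file layout and header fields here are illustrative):

```python
# Sketch only: hash the source doc and bake provenance into the SUMM header.
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the original document so every summary can be traced back to it."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_summ(doc: Path, summary_text: str, kb_dir: Path) -> Path:
    """Write SUMM_<name>.md with the checksum and timestamp baked in."""
    header = (
        f"<!-- source: {doc.name} -->\n"
        f"<!-- sha256: {sha256_of(doc)} -->\n"
        f"<!-- summarised: {datetime.now(timezone.utc).isoformat()} -->\n\n"
    )
    out = kb_dir / f"SUMM_{doc.stem}.md"
    out.write_text(header + summary_text, encoding="utf-8")
    return out
```

If a SUMM's hash no longer matches the source doc, you know something touched it. That's the whole provenance story.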

Now, when you ask something like:

“yo, what did the Commodore C64 retail for in 1982?”

…it answers from the attached KBs only. If the fact isn't there, it tells you - explicitly - instead of winging it. E.g.:

The provided facts state the Commodore 64 launched at $595 and was reduced to $250, but do not specify a 1982 retail price. The Amiga’s pricing and timeline are also not detailed in the given facts.

Missing information includes the exact 1982 retail price for Commodore’s product line and which specific model(s) were sold then. The answer assumes the C64 is the intended product but cannot confirm this from the facts.

Confidence: medium | Source: Mixed

No vibes. No “well probably…”. Just: here’s what’s in your docs, here’s what’s missing, don't GIGO yourself into stupid.

And when you’re happy with your summaries, you can:

  • >>move to vault — promote those SUMMs into Qdrant for the heavy mode.
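
Under the hood, "move to vault" is just "push those SUMMs into a Qdrant collection". If you were doing it by hand with qdrant-client, it would look roughly like this; the collection name, vector size and the embed() stand-in are placeholders, not the real thing:

```python
# Sketch only: promote SUMM_*.md files into a Qdrant collection.
import hashlib
import uuid
from pathlib import Path

from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

def embed(text: str) -> list[float]:
    # Placeholder: swap in whatever embedding model you actually run.
    # This hash-based vector is NOT a real embedding.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest] * 24  # 32 * 24 = 768 dims

client = QdrantClient(url="http://localhost:6333")
client.create_collection(  # first run only
    collection_name="vault",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

for summ in Path("kb").glob("SUMM_*.md"):
    text = summ.read_text(encoding="utf-8")
    client.upsert(
        collection_name="vault",
        points=[PointStruct(
            id=str(uuid.uuid4()),
            vector=embed(text),
            payload={"file": summ.name, "text": text},  # provenance rides along
        )],
    )
```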

2) Mentats: proof-or-refusal mode (Vault-only)

Mentats is the “deep think” pipeline against your curated sources. It’s enforced isolation:

  • no chat history
  • no filesystem KBs
  • no Vodka
  • Vault-only grounding (Qdrant)

It runs triple-pass (thinker → critic → thinker). It’s slow on purpose. You can audit it. And if the Vault has nothing relevant? It refuses and tells you to go pound sand:

FINAL_ANSWER:
The provided facts do not contain information about the Acorn computer or its 1995 sale price.

Sources: Vault
FACTS_USED: NONE
[ZARDOZ HATH SPOKEN]

Also yes, it writes a mentats_debug.log, because of course it does. Go look at it any time you want.

The flow is basically: Attach KBs → SUMM → Move to Vault → Mentats. No mystery meat. No “trust me bro, embeddings.”
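
If you want the skeleton of the triple-pass in your head, it's nothing exotic. Stripped-down sketch; the prompts and names here are invented for illustration, and llm() / vault_search() are whatever you wire in:

```python
# Skeleton of the thinker -> critic -> thinker idea (illustrative only).
def mentats(question: str, llm, vault_search) -> str:
    facts = vault_search(question)  # Vault (Qdrant) is the only allowed grounding
    if not facts:
        return ("FINAL_ANSWER:\nThe provided facts do not contain information "
                "about this topic.\nSources: Vault\nFACTS_USED: NONE")

    draft = llm(f"Answer ONLY from these facts:\n{facts}\n\nQ: {question}")
    critique = llm(f"List every claim in this draft NOT supported by the facts:\n"
                   f"FACTS:\n{facts}\n\nDRAFT:\n{draft}")
    final = llm(f"Rewrite the draft, dropping every unsupported claim.\n"
                f"FACTS:\n{facts}\nDRAFT:\n{draft}\nCRITIQUE:\n{critique}")
    return final
```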

3) Vodka: deterministic memory on a potato budget

Local LLMs have two classic problems: goldfish memory + context bloat that murders your VRAM.

Vodka fixes both without extra model compute. (Yes, I used the power of JSON files to hack the planet instead of buying more VRAM from NVIDIA).

  • !! stores facts verbatim (JSON on disk)
  • ?? recalls them verbatim (TTL + touch limits so memory doesn’t become landfill)
  • CTC (Cut The Crap) hard-caps context (last N messages + char cap) so you don’t get VRAM spikes after 400 messages

So instead of:

“Remember my server is 203.0.113.42” → “Got it!” → [100 msgs later] → “127.0.0.1 🥰”

you get:

!! my server is 203.0.113.42
?? server ip → 203.0.113.42 (with TTL/touch metadata)

And because context stays bounded: stable KV cache, stable speed, your potato PC stops crying.
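
And in case "deterministic memory" sounds fancy: it isn't. The mechanics are about this dumb. Toy sketch; field names, TTL and caps are illustrative, not the real schema:

```python
# Toy version of the !! / ?? / CTC mechanics: JSON on disk, TTL + touch
# limits, and a hard context cap. Values here are illustrative.
import json
import time
from pathlib import Path

STORE = Path("vodka_facts.json")
TTL_SECONDS = 7 * 24 * 3600   # facts expire after a week
MAX_TOUCHES = 20              # ...or after being recalled 20 times

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def store(key: str, value: str) -> None:          # the !! path
    facts = _load()
    facts[key] = {"value": value, "created": time.time(), "touches": 0}
    STORE.write_text(json.dumps(facts, indent=2))

def recall(key: str) -> str | None:               # the ?? path
    facts = _load()
    fact = facts.get(key)
    if not fact:
        return None
    dead = (time.time() - fact["created"] > TTL_SECONDS
            or fact["touches"] >= MAX_TOUCHES)
    if dead:
        del facts[key]                            # memory is not a landfill
    else:
        fact["touches"] += 1
    STORE.write_text(json.dumps(facts, indent=2))
    return None if dead else fact["value"]

def cut_the_crap(messages: list[dict], last_n: int = 12,
                 char_cap: int = 8000) -> list[dict]:
    """CTC: keep only the last N messages, then trim to a hard character cap."""
    kept, total = [], 0
    for msg in reversed(messages[-last_n:]):
        total += len(msg.get("content", ""))
        if total > char_cap:
            break
        kept.append(msg)
    return list(reversed(kept))
```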


There’s more (a lot more) in the README, but I’ve already over-autism’ed this post.

TL;DR:

If you want your local LLM to shut up when it doesn’t know and show receipts when it does, come poke it:

PS: Sorry about the AI slop image. I can't draw for shit.

PPS: A human with ASD wrote this using Notepad++. If the formatting is weird, now you know why.

[-] recklessengagement@lemmy.world 8 points 22 hours ago

I strongly feel that the best way to improve the useability of LLMs is through better human-written tooling/software. Unfortunately most of the people promoting LLMs are tools themselves and all their software is vibe-coded.

Thank you for this. I will test it on my local install this weekend.

[-] SuspciousCarrot78@lemmy.world 1 points 17 hours ago

You're welcome. Hope it's of some use to you.

[-] floquant@lemmy.dbzer0.com 12 points 1 day ago* (last edited 1 day ago)

Holy shit I'm glad to be on the autistic side of the internet.

Thank you for proving that fucking JSON text files are all you need and not "just a couple billion more parameters bro"

Awesome work, all the kudos.

[-] SuspciousCarrot78@lemmy.world 2 points 1 day ago

Thanks. It's not perfect but I hope it's a step in a useful direction

[-] termaxima@slrpnk.net 8 points 1 day ago

Hallucination is mathematically proven to be unsolvable with LLMs. I don't deny this may have drastically reduced it - or not; I have no idea.

But hallucinations will just always be there as long as we use LLMs.

[-] SuspciousCarrot78@lemmy.world 4 points 1 day ago* (last edited 23 hours ago)

Agree-ish

Hallucination is inherent to unconstrained generative models: if you ask them to fill gaps, they will. I don’t know how to “solve” that at the model level.

What you can do is make “I don’t know” an enforced output, via constraints outside the model.

My claim isn’t “LLMs won’t hallucinate.” It’s “the system won’t silently propagate hallucinations.” Grounding + refusal + provenance live outside the LLM, so the failure mode becomes “no supported answer” instead of “confident, slick lies.”

So yeah: generation will always be fuzzy. Workflow-level determinism doesn’t have to be.

I tried yelling, shouting, and even percussive maintenance but the stochastic parrot still insisted “gottle of geer” was the correct response.

[-] PolarKraken@lemmy.dbzer0.com 3 points 1 day ago

This sounds really interesting, I'm looking forward to reading the comments here in detail and looking at the project, might even end up incorporating it into my own!

I'm working on something that addresses the same problem in a different way, the problem of constraining or delineating the specifically non-deterministic behavior one wants to involve in a complex workflow. Your approach is interesting and has a lot of conceptual overlap with mine, regarding things like strictly defining compliance criteria and rejecting noncompliant outputs, and chaining discrete steps into a packaged kind of "super step" that integrates non-deterministic substeps into a somewhat more deterministic output, etc.

How involved was it to build it to comply with the OpenAI API format? I haven't looked into that myself but may.

[-] SuspciousCarrot78@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

Cheers!

Re: OpenAI API format: 3.6 - not great, not terrible :)

In practice I only had to implement a thin subset: POST /v1/chat/completions + GET /v1/models (most UIs just need those). The payload is basically {model, messages, temperature, stream...} and you return a choices[] with an assistant message. The annoying bits are the edge cases: streaming/SSE if you want it, matching the error shapes UIs expect, and being consistent about model IDs so clients don’t scream “model not found”. Which is actually a bug I still need to squash some more for OWUI 0.7.2. It likes to have its little conniptions.

But TL;DR: more plumbing than rocket science. The real pain was sitting down with pen and paper and drawing what went where and what wasn't allowed to do what. Because I knew I'd eventually fuck something up (I did, many times), I needed a thing that told me "no, that's not what this is designed to do. Do not pass go. Do not collect $200".
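
If you do go down that road, the subset really is tiny. Minimal sketch of the two endpoints with FastAPI; this is illustrative only, not llama-conductor's actual code, and all the routing/grounding logic is elided to a stub:

```python
# Minimal OpenAI-compatible subset most UIs need.
# Run with e.g.: uvicorn this_file:app --port 8000
import time
from fastapi import FastAPI

app = FastAPI()
MODEL_ID = "llama-conductor"   # keep model IDs consistent or clients will scream

@app.get("/v1/models")
def list_models():
    return {"object": "list",
            "data": [{"id": MODEL_ID, "object": "model", "owned_by": "local"}]}

@app.post("/v1/chat/completions")
async def chat_completions(req: dict):
    messages = req.get("messages", [])
    # ...route to llama.cpp / llama-swap here, apply KB/Vault/Vodka rules...
    answer = "stub reply"
    return {
        "id": f"chatcmpl-{int(time.time())}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.get("model", MODEL_ID),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": answer},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```

Streaming (SSE) and matching the error shapes UIs expect are the parts that actually eat your weekend.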

shrug I tried.

[-] PolarKraken@lemmy.dbzer0.com 3 points 23 hours ago

The very hardest part of designing software, and especially designing abstractions that aim to streamline use of other tools, is deciding exactly where you draw the line(s) between intended flexibility (the user should be able to do what they want, and find it easy), and opinionated "do it my way here, and I'll constrain options for doing otherwise".

You have very clear and thoughtful lines drawn here, about where the flexibility starts and ends, and where the opinionated "this is the point of the package/approach, so do it this way" parts are, too.

Sincerely that's a big compliment and something I see as a strong signal about your software design instincts. Well done! (I haven't played with it yet, to be clear, lol)

[-] SuspciousCarrot78@lemmy.world 1 points 13 hours ago* (last edited 12 hours ago)

Thank you for saying that and for noticing it! Seeing you were kind enough to say that, I'd like to say a few things about how/why I made this stupid thing. It might be of interest to people. Or not LOL.

To begin with, when I say I'm not a coder, I really mean it. It's not false modesty. I taught myself this much over the course of a year, plus the reactivation of some very old skills (dormant for 30 years). When I decided to do this, it wasn't from any school of thought or design principle. I don't know how CS professionals build things. The last time I looked at an IDE was Turbo Pascal. (Yes, I'm that many years old. I think it probably shows, what with the >> ?? !! ## all over the place. I stopped IT-ing when Pascal, Amiga and BBS were still the hot new things)

What I do know is - what was the problem I was trying to solve?

IF the following are true;

  1. I have ASD. If you tell me a thing, I assume you're telling me that thing. I don't assume you're telling me one thing but mean something else.
  2. An LLM could "lie" to me, and I would believe it, because I'm not a subject matter expert on the thing (usually). Also see point 1.
  3. I want to believe it, because why would a tool say X but mean Y? See point 1.
  4. An LLM could lie to me in a way that is undetectable, because I have no idea what it's reasoning over or how it's reasoning over it. It's literally a black box. I ask a Question--->MAGIC WIRES---->Answer.

AND

  1. "The first principle is that you must not fool yourself and you are the easiest person to fool"

THEN

STOP.

I'm fucked. This problem is unsolvable.

Assuming LLMs are inherently hallucinatory within bounds (AFAIK, the current iterations all are), if there's even a 1% chance that it will fuck me over (it has), then for my own sanity, I have to assume that such an outcome is a mathematical certainty. I cannot operate in this environment.

PROBLEM: How do I interact with a system that is dangerously mimetic and dangerously opaque? What levers can I pull? Or do I just need to walk away?

  1. Unchangeable. Eat shit, BobbyLLM. Ok.
  2. I can do something about that...or at least, I can verify what's being said, if the process isn't too mentally taxing. Hmm. How?
  3. Fine, I want to believe it...but, do I have to believe it blindly? How about a defensive position - "Trust but verify"?. Hmm. How?
  4. Why does it HAVE to be opaque? If I build it, why do I have to hide the workings? I want to know how it works, breaks, and what it can do.

Everything else flowed from those ideas. I actually came up with a design document (list of invariants). It's about 1200 words or so, and unashamedly inspired by Asimov :)

MoA / Llama-swap System

System Invariants


0. What an invariant is (binding)

An invariant is a rule that:

  • Must always hold, regardless of refactor, feature, or model choice
  • Must not be violated temporarily, even internally. The system must not fuck me over silently.
  • Overrides convenience, performance, and cleverness.

If a feature conflicts with an invariant, the feature is wrong. Do not add.


1. Global system invariant rules:

1.1 Determinism over cleverness

  • Given the same inputs and state, the system must behave predictably.

  • No component may:

    • infer hidden intent,
    • rely on emergent LLM behavior
    • or silently adapt across turns without explicit user action.

1.2 Explicit beats implicit

  • Any influence on an answer must be inspectable and user-controllable.

  • This includes:

    • memory,
    • retrieval,
    • reasoning mode,
    • style transformation.

If something affects the output, the user must be able to:

  • enable it,
  • disable it,
  • and see that it ran.

Assume system is going to lie. Make its lies loud and obvious.


On and on it drones LOL. I spent a good 4-5 months just revising a tighter and tighter series of constraints, so that 1) it would be less likely to break and 2) if it did break, it would do so in a loud, obvious way.

What you see on the repo is the best I could do, with what I had.

I hope it's something and I didn't GIGO myself into stupid. But no promises :)

[-] cypherpunks@lemmy.ml 8 points 1 day ago
[-] SuspciousCarrot78@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

Spite based inference?

You dirty pirate hooker.

I don’t believe you.

[-] nagaram@startrek.website 7 points 1 day ago

This + Local Wikipedia + My own writings would be sick

[-] SuspciousCarrot78@lemmy.world 11 points 1 day ago* (last edited 1 day ago)

I’m not claiming I “fixed” bullshitting. I said I was TIRED of bullshit.

So, the claim I’m making is: I made bullshit visible and bounded.

The problem I’m solving isn’t “LLMs sometimes get things wrong.” That’s unsolvable AFAIK. What I'm solving for is "LLMs get things wrong in ways that are opaque and untraceable".

That's solvable. That’s what hashes get you. Attribution, clear fail states and auditability. YOU still have to check sources if you care about correctness.

The difference is - YOU are no longer checking a moving target or a black box. You're checking a frozen, reproducible input.

That’s… not how any of this works…

Please don't teach me to suck lemons. I have very strict parameters for fail states. When I say three strikes and you're out, I do mean three strikes and you're out. Quants ain't quants, and models ain't models. I am very particular in what I run, how I run it and what I tolerate.

[-] nagaram@startrek.website 4 points 1 day ago

I think you missed the guy this is targeted at.

Worry not though. I get it. There isn't a lot of nuance in the AI discussion anymore and the anti-AI people are quite rude these days about anything AI at all.

You did good work homie!

[-] SuspciousCarrot78@lemmy.world 3 points 1 day ago* (last edited 1 day ago)

Thank you! I appreciate you.

PS: Where's the guy this should be targeted at?

[-] Buddahriffic@lemmy.world 4 points 1 day ago

Not the original commenter, but your reply looks like it's for termaxima's comment (about hallucinations being a mathematical certainty).

[-] ThirdConsul@lemmy.zip 9 points 1 day ago

I want to believe you, but that would mean you solved hallucination.

Either:

A) you're lying

B) you're wrong

C) KB is very small

[-] SuspciousCarrot78@lemmy.world 16 points 1 day ago

D) None of the above.

I didn’t "solve hallucination". I changed the failure mode. The model can still hallucinate internally. The difference is it’s not allowed to surface claims unless they’re grounded in attached sources.

If retrieval returns nothing relevant, the router forces a refusal instead of letting the model free-associate. So the guarantee isn’t “the model is always right.”

The guarantee is “the system won’t pretend it knows when the sources don’t support it.” That's it. That's the whole trick.

KB size doesn’t matter much here. Small or large, the constraint is the same: no source, no claim. GTFO.

That’s a control-layer property, not a model property. If it helps: think of it as moving from “LLM answers questions” to “LLM summarizes evidence I give it, or says ‘insufficient evidence.’”

Again, that’s the whole trick.

You don't need to believe me. In fact, please don't. Test it.

I could be wrong...but if I'm right (and if you attach this to a non-retarded LLM), then maybe, just maybe, this doesn't suck balls as much as you think it might.

Maybe it's even useful to you.

I dunno. Try it?

[-] ThirdConsul@lemmy.zip 4 points 1 day ago

So... Rag with extra steps and rag summarization? What about facts that are not rag retrieval?

[-] SuspciousCarrot78@lemmy.world 11 points 1 day ago* (last edited 1 day ago)

Parts of this are RAG, sure

RAG parts:

  • Vault / Mentats is classic retrieval + generation.
  • Vector store = Qdrant
  • Embedding and reranker

So yes, that layer is RAG with extra steps.

What’s not RAG -

KB mode (filesystem SUMM path)

This isn’t vector search. It’s deterministic, file-backed grounding. You attach folders as needed. The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step. It can style and jazz around the answer a little, but the answer is the answer is the answer.

If the fact isn’t in the attached KB, the router forces a refusal. Put up or shut up.

Vodka (facts memory)

That’s not retrieval at all, in the LLM sense. It's verbatim key-value recall.

  • JSON on disk
  • Exact store (!!)
  • Exact recall (??)

Again, no embeddings, no similarity search, no model interpretation.

"Facts that aren’t RAG"

In my setup, they land in one of two buckets.

  1. Short-term / user facts → Vodka. That's for things like numbers, appointments, lists, one-off notes, etc. Deterministic recall, no synthesis.

  2. Curated knowledge → KB / Vault. Things you want grounded, auditable, and reusable.

In response to the implicit "why not just RAG then"

Classic RAG failure mode is: retrieval is fuzzy → model fills gaps → user can’t tell which part came from where.

The extra "steps" are there to separate memory from knowledge, separate retrieval from synthesis and make refusal a legal output, not a model choice.

So yeah; some of it is RAG. RAG is good. The point is this system is designed so not everything of value is forced through a semantic search + generate loop. I don't trust LLMs. I am actively hostile to them. This is me telling my LLM to STFU and prove it, or GTFO. I know that's maybe a weird way to operate (adversarial, assume the worst, engineer around the issue) but that's how ASD brains work.

[-] Kobuster@feddit.dk 6 points 1 day ago

Hallucination isn't nearly as big a problem as it used to be. Newer models aren't perfect but they're better.

The problem addressed by this isn't hallucination, it's that models are trained to avoid failure states. Instead of guessing (different from hallucination), the system forces a negative response. That's easy, and any company big or small could do it; big companies just like the bullshit.

[-] SuspciousCarrot78@lemmy.world 8 points 1 day ago

^ Yes! That. Exactly that. Thank you!

I don't like the bullshit...and I'm not paid to optimize for bullshit-leading-to-engagement-chatty-chat.

"LLM - tell me the answer and then go away. If you can't, say so and go away. Optionally, roast me like you've watched too many episodes of Futurama while doing it"

[-] Zexks@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

This is awesome. I've been working on something similar. You're not likely to get much that's useful from here though. Anything AI is by default bad here.

[-] SuspciousCarrot78@lemmy.world 6 points 1 day ago

Well, to butcher Sinatra: if it can make it on Lemmy and HN, it can make it anywhere :)

[-] BaroqueInMind@piefed.social 76 points 2 days ago

I have no remarks, just really amused with your writing in your repo.

Going to build a Docker and self host this shit you made and enjoy your hard work.

Thank you for this!

[-] pineapple@lemmy.ml 5 points 1 day ago

This is amazing! I will either abandon all my other commitments and install this tomorrow or I will maybe hopefully get it done in the next 5 years.

Likely accurate jokes aside, this will be a perfect match with my Obsidian vault, as well as for researching things much more quickly.

[-] SuspciousCarrot78@lemmy.world 4 points 1 day ago

I hope it does what I claim it does for you. Choose a good LLM. Not one of the sex-chat ones. Or maybe, exactly one of those. For uh...research.

[-] SuspciousCarrot78@lemmy.world 6 points 1 day ago

Responding to my own top post like a FB boomer: May I make one request?

If you found this little curio interesting at all, please share in the places you go.

And especially, if you're on Reddit, where normies go.

I used to post heavily on there, but then Reddit did a reddit and I'm done with it.

https://lemmy.world/post/41398418/21528414

Much as I love Lemmy and HN, they're not exactly normcore, and I'd like to put this into the hands of people :)

PS: I am thinking of taking some of the questions you all asked me here (de-identified) and writing a "Q&A_with_drBobbyLLM.md" and sticking it on the repo. It might explain some common concerns.

And, if nothing else, it might be mildly amusing.

[-] domi@lemmy.secnd.me 5 points 1 day ago

I have a Strix Halo machine with 128GB VRAM so I'm definitely going to give this a try with gpt-oss-120b this weekend.

[-] recklessengagement@lemmy.world 2 points 22 hours ago

Strix halo gang. Out of curiosity, what OS are you using?

[-] domi@lemmy.secnd.me 1 points 16 hours ago

Fedora 43 with the Rawhide kernel.

[-] WolfLink@sh.itjust.works 21 points 2 days ago

I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues - it’s just that there are LLMs double-checking other LLMs’ work to try to find those issues. There are still no guarantees since it’s still all LLMs.

[-] skisnow@lemmy.ca 7 points 1 day ago

I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources when claiming stuff and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something that it doesn’t.

[-] Pudutr0n@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

[-] SuspciousCarrot78@lemmy.world 7 points 1 day ago* (last edited 1 day ago)

re: the KB tool, why not just skip the llm and do two chained fuzzy finds? (what knowledge base & question keywords)

Yep, good question. You can do that, it's not wrong. If your KB is small + your question is basically “find me the paragraph that contains X,” then yeah: two-pass fuzzy find will dunk on any LLM for speed and correctness.

But the reason I put an LLM in the loop is: retrieval isn’t the hard part. Synthesis + constraint are. What an LLM is doing in KB mode (basically) is this -

  1. Turns the question into an extraction task. Instead of “search keywords,” it’s: “given these snippets, answer only what is directly supported, and list what’s missing.”

  2. Then, rather than giving you 6 fragments across multiple files, the LLM assembles the whole thing into a single answer, while staying source-locked (and refusing fragments that don’t contain the needed fact).

  3. Finally: it has "structured refusal" baked in. IOW, the whole point is that the LLM is forced to say "here are the facts I saw, and this is what I can't answer from those facts".

TL;DR: fuzzy search gets you where the info lives. This gets you what you can safely claim from it, plus an explicit "missing list".
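
If it helps to see it, the instruction KB mode leans on is shaped roughly like this; paraphrased for illustration, not the literal prompt:

```python
# Paraphrased shape of the KB-mode extraction instruction (illustrative).
EXTRACTION_PROMPT = """\
You are answering ONLY from the facts below. Do not use outside knowledge.

FACTS:
{facts}

QUESTION: {question}

Respond in this format:
ANSWER: only claims directly supported by the facts above
MISSING: information the question needs that the facts do not contain
Confidence: high | medium | low
Source: KB
"""

def build_prompt(facts: str, question: str) -> str:
    return EXTRACTION_PROMPT.format(facts=facts, question=question)
```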

For pure retrieval: yeah - search. In fact, maybe I should bake in >>grep or >>find commands. That would be the right trick for "show me the passage", not "answer the question".

I hope that makes sense?

[-] Pudutr0n@lemmy.world 4 points 1 day ago
[-] SuspciousCarrot78@lemmy.world 3 points 1 day ago

Thank you. I appreciate you saying so!
