[-] ThirdConsul@lemmy.zip 1 points 1 day ago

A debugger will always interfere with the processes you are looking at, which turns debugging multithreading-related errors into a game of whack-a-mole.

It's a very pleasant debugging experience when you can easily switch threads, have them log what happened first, check a thread's variables at the moment it was hit (vs. now), etc. etc.
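
Roughly what I mean by the logging part, as a quick sketch (my own toy illustration, not from any real codebase): record a sequence number, the thread id, and the value at that moment, then sort afterwards to see what actually happened first.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ThreadTrace
{
    // (Seq, ThreadId, Note, Snapshot): Seq gives a global "what happened first" order.
    static readonly ConcurrentQueue<(long Seq, int ThreadId, string Note, string Snapshot)> Events = new();
    static long _seq;

    static void Record(string note, object snapshot) =>
        Events.Enqueue((Interlocked.Increment(ref _seq),
                        Environment.CurrentManagedThreadId, note, snapshot?.ToString() ?? "null"));

    static async Task Main()
    {
        // Three workers doing a few steps each; Record captures the value *at that moment*,
        // not whatever it happens to be when you finally look.
        await Task.WhenAll(Enumerable.Range(1, 3).Select(worker => Task.Run(() =>
        {
            for (int step = 0; step < 3; step++)
                Record($"worker {worker} step", step);
        })));

        // Replay in the order things actually happened, with the owning thread.
        foreach (var e in Events.OrderBy(e => e.Seq))
            Console.WriteLine($"#{e.Seq} thread {e.ThreadId}: {e.Note} = {e.Snapshot}");
    }
}
```

The point is that the trace is collected without stopping anything, so it doesn't perturb the timing the way breakpoints do.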

[-] ThirdConsul@lemmy.zip 3 points 1 day ago

This and an easy way to attach a line-by-line debugger and I'm golden.

[-] ThirdConsul@lemmy.zip 3 points 1 day ago

I believe that is a vanishingly small minority of development work. And tbh multithreading debugging is a breeze in C# on Rider (except race conditions; those will always be tricky, but they're also easily identifiable).
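
To be concrete about the "easily identifiable" part, the classic lost-update race looks something like this (my own toy example, not from any project discussed here): the symptom is obvious even when the exact interleaving isn't.

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class LostUpdate
{
    static async Task Main()
    {
        int racy = 0, safe = 0;

        // 8 tasks incrementing the same counters 100k times each.
        await Task.WhenAll(Enumerable.Range(0, 8).Select(_ => Task.Run(() =>
        {
            for (int i = 0; i < 100_000; i++)
            {
                racy++;                          // read-modify-write, not atomic: updates get lost
                Interlocked.Increment(ref safe); // atomic version for comparison
            }
        })));

        Console.WriteLine($"racy: {racy}, safe: {safe}, expected: 800000");
        // Typically racy < 800000 while safe == 800000 — you can spot the bug from the
        // result alone, even though no single run shows you the interleaving that caused it.
    }
}
```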

[-] ThirdConsul@lemmy.zip 2 points 1 day ago

Huh? That is the literal opposite of what I said. Like, diametrically opposite.

The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.

No, that's exactly what you wrote.

Now, with this change

SUMM -> human reviews

That would be fixed, but it only works for small KBs, because otherwise the summary has to be exhaustive to stay faithful.

Case in point: assume a Person model with 3-7 facts per Person. Assume a small set of 3000 Persons. What would the SUMM of that look like? Do you expect a human to verify that SUMM? How are you going to converse with your system to get data out of that Person set? Because to me that sounds like case C: it only works for small KBs (back of the envelope below).
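
A back-of-the-envelope version of that (the 3000 Persons and 3-7 facts are the assumptions above; the review speed is just a guess I'm making for illustration):

```csharp
using System;

class SummSize
{
    static void Main()
    {
        // Numbers from the Person example above; the review rate is a made-up assumption.
        int persons = 3000;
        int minFactsPerPerson = 3, maxFactsPerPerson = 7;

        int minFacts = persons * minFactsPerPerson;   //  9,000 facts
        int maxFacts = persons * maxFactsPerPerson;   // 21,000 facts

        double factsPerMinute = 5;                     // guessed careful-review speed
        double minHours = minFacts / factsPerMinute / 60;
        double maxHours = maxFacts / factsPerMinute / 60;

        Console.WriteLine($"Facts the SUMM must preserve: {minFacts:N0}-{maxFacts:N0}");
        Console.WriteLine($"Human review at {factsPerMinute} facts/min: {minHours:F0}-{maxHours:F0} hours");
    }
}
```

At 30-70 hours per review pass, "human reviews" stops being a realistic gate well before the KB gets big.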

Again: the proposition is not "the model will never hallucinate." It's "it can't silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version".

Fair. Except you are still left with the original problem: you don't know WHEN the information is incorrect if you missed it at SUMM time.

[-] ThirdConsul@lemmy.zip -2 points 2 days ago

The system summarizes and hashes docs. The model can only answer from those summaries in that mode

Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

[-] ThirdConsul@lemmy.zip 4 points 2 days ago

So... RAG with extra steps plus RAG summarization? What about facts that aren't covered by RAG retrieval?

[-] ThirdConsul@lemmy.zip 2 points 2 days ago* (last edited 2 days ago)

A benchmark very much tailored to LLMs' strengths calls you a liar.

https://artificialanalysis.ai/articles/gemini-3-flash-everything-you-need-to-know (A month ago the hallucination rate was ~50-70%)

[-] ThirdConsul@lemmy.zip 9 points 2 days ago

I want to believe you, but that would mean you solved hallucination.

Either:

A) you're lying

B) you're wrong

C) KB is very small

[-] ThirdConsul@lemmy.zip 6 points 1 week ago

In the early 2010s there was a trend of throwing obfuscation into the minis.

[-] ThirdConsul@lemmy.zip 6 points 1 week ago

So... A mediocre story at best?

[-] ThirdConsul@lemmy.zip 9 points 1 week ago

Because AI can make 10 commercials in the same time traditional creators can make 1

Famously, the Cola Christmas commercial took more time and money to create than a traditional ad would?

[-] ThirdConsul@lemmy.zip 11 points 2 weeks ago

Still, technically it's right. Since the USA is not at war with Russia, this is literally piracy.
