[-] darkmode@hexbear.net 3 points 2 days ago

This is an incredible list of research, TYSM! In my spare work time I've been building a small tool that tries to accomplish what #2 describes. I haven't clicked the link and read it yet, but now I'll read everything.

[-] yogthos@lemmygrad.ml 4 points 2 days ago

I played around with implementing the recursive language model paper, and that actually turned out pretty well https://git.sr.ht/~yogthos/matryoshka

Basically, I spin up a js repl in a sandbox, and the agent can feed files into it and run commands against them. Normally the agent has to ingest a whole file into its context, but now it can just shove files into the repl and do operations on them akin to a db. It can also create variables: if it searches for something in a file, it can bind the result to a variable and keep track of it, and if it needs to filter that search later, it can just reference the variable it already made. This saves a huge amount of token use and also helps the model stay more focused.

[-] darkmode@hexbear.net 1 points 1 day ago

About how large are the codebases you've used this RLM with?

[-] yogthos@lemmygrad.ml 3 points 1 day ago

Around 10k lines or so. I use it as an MCP server that the agent calls when it decides it needs to. The whole codebase doesn't get loaded into the repl, just individual files as it searches through them.

this post was submitted on 16 Feb 2026
145 points (100.0% liked)