
I have a DB with a lot of data that all needs precise summarisation; I would do it myself if it weren't 20 thousand fields long.

It is about 300k tokens, and Gemini 2.5 struggles, missing points and making up facts.

Separating it into smaller sections is not an option, because even when separated they can take up 30k tokens, and the info that needs summarising may span 100k-token ranges.

I've learnt that fine-tuning may give better results than general-purpose models, and now I'm wondering if there is anything with a high token count suited to summarisation.

Any help would be appreciated, even if it's to suggest another general-purpose model with better coherence.

[-] hendrik@palaver.p3x.de 5 points 2 days ago* (last edited 2 days ago)

From my personal experience, I'd say generative AI isn't the best tool for summarization. It also frequently misses the point when I try. Or makes up additional facts that weren't in the input text. (Or starts going off on (wrong) tangents despite the task being to keep it short and concise.) And I'd say all(?) models do that. Even the ones that are supposed to be big and clever.

Edit: Lots of people use ChatGPT etc. for summarization, though. So I really don't know who's right here. Maybe my standards are too high, but what I've read as output from models small to big, ChatGPT included, wasn't great.

There are other approaches in NLP. For example dedicated summarization models like BART from Facebook, or extractive summarization, which is precise because it only pulls sentences straight from the source. Some Lemmy bot uses LsaSummarizer, an extractive method, but I don't really know how that works in detail. Or maybe you can re-think what you're trying to do and use RAG instead of summarization.
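If you want to try BART, a minimal sketch with the Hugging Face pipeline (assumes transformers + torch are installed; note BART only takes ~1024 tokens per call, so you'd still have to chunk):

```python
# Minimal BART summarization sketch (pip install transformers torch).
from transformers import pipeline

# facebook/bart-large-cnn is a BART checkpoint fine-tuned for summarization.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

text = "..."  # your input text; BART handles roughly 1024 tokens per call
result = summarizer(text, max_length=130, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```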

[-] OmegaLemmy@discuss.online 3 points 2 days ago

Looking into BART, thanks.

[-] Smokeydope@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

As the other commenter said, your workflow requires more than what LLMs are currently capable of.

Summarization capability in LLMs is an equation of the LLM's capacity for coherence over long conversational scaling, operated on by the LLM's ability to navigate and distill internal structural mappings of conceptual & contextual archetype patterns as discrete objects across a continuous ambiguity sheaf.

That's technical jargon that boils down to this: an LLM's summarization capability depends on its parameter count and on having enough VRAM for long context. Higher-parameter, less-quantized models maintain more coherence over long conversations/datasets.

While enterprise LLMs can get up to 128k tokens while maintaining some level of coherence, local models at medium quantization can handle 16-32k reliably. Theoretically a 70B could maybe handle around 64k tokens, but even that's stretching it.

Then comes the problem of transformer attention. You can't just put a whole book's worth of text into an LLM's input and expect it to inspect every part in real detail. For best results you have to chunk it section by section, chapter by chapter.
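Something like this rough token-count chunker, as a sketch (tiktoken's tokenizer is a stand-in, real counts differ per model; splitting on section boundaries would beat blind token windows):

```python
# Rough token-based chunking sketch (pip install tiktoken).
import tiktoken

def chunk_text(text: str, max_tokens: int = 16000) -> list[str]:
    # cl100k_base is a stand-in encoding; your model's tokenizer will differ.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)
    # Slice the token stream into fixed-size windows and decode each back to text.
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]
```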

So local LLMs may not be what you're looking for. If you are willing to go enterprise, then Claude Sonnet and DeepSeek R1 might be good, especially if you set up an API interface.
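For the API route, a minimal sketch against DeepSeek's OpenAI-compatible endpoint (base URL and model name are assumptions from their docs, double-check them):

```python
# Sketch: calling DeepSeek R1 via its OpenAI-compatible API (pip install openai).
# base_url and model name taken from DeepSeek's docs at time of writing -- verify.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

chunk = "..."  # one chunk of your data

resp = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek's R1 endpoint
    messages=[
        {"role": "system", "content": "Summarise precisely; do not add facts."},
        {"role": "user", "content": chunk},
    ],
)
print(resp.choices[0].message.content)
```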

[-] OmegaLemmy@discuss.online 2 points 1 day ago

I have attempted those solutions; R1 was best, but even then I would have to chunk it. It may be possible to feed it an extensive summary of the previous information for better summaries (maybe).

Gemini is good up to 200k. Scout is good up to 100k. R1 was always good, up to its context limit.

[-] Smokeydope@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

You can try VSCode + Roo to chunk it intelligently and autonomously. Get an API key from your LLM provider of choice, put your data into a text file, and edit the Roo agent personalities (set to coding by default): add and select a custom summarizer persona for Roo to use, then tell it to summarize the text file.

[-] pepperfree@sh.itjust.works 1 points 1 day ago

So something like

Previously the text talked about [last summary]
[The instruction prompt]...
[Current chunk/paragraphs]
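
Spelled out as a loop, a rough sketch (call_llm is a hypothetical stand-in for whatever provider you use):

```python
# Rolling-summary sketch: summarise each chunk with the previous summary
# as context, matching the prompt template above.
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this to your LLM provider's API.
    raise NotImplementedError

def rolling_summary(chunks: list[str], instruction: str) -> str:
    summary = ""
    for chunk in chunks:
        prompt = (
            f"Previously the text talked about: {summary}\n\n"
            f"{instruction}\n\n"
            f"{chunk}"
        )
        summary = call_llm(prompt)  # each pass folds in the new chunk
    return summary
```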