- Take a human and have him study every single repo on GitHub
- Take an AI and train it on every single repo on GitHub
Which of those two will continue to make novice mistakes like SQL injection and XSS vulnerabilities?
These AI "coding agents" aren't learning or thinking. They're just natural-language statistical search engines, which makes them easy to anthropomorphize. Future generations will laugh at us, kinda like how we laugh at old products that contained cocaine, asbestos, lead, uranium, etc.
You don't actually need to "split" anything; you just have each thread read from a different offset. Mmap might be the most efficient way to do this (or at least the easiest).
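A minimal sketch of the idea, assuming a plain text file and newline counting as a stand-in for whatever parsing OP actually needs (the file, chunking scheme, and worker count are all illustrative, not from the original comment):

```python
# Sketch: several threads process disjoint byte ranges of one shared mmap,
# so the file never needs to be physically split.
import mmap
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def count_newlines(mm, start, end):
    # Each worker touches only its own slice of the mapping.
    # (Slicing copies; real code might use memoryview to avoid that.)
    return mm[start:end].count(b"\n")

def parallel_count(path, workers=4):
    size = os.path.getsize(path)
    with open(path, "rb") as f, \
         mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        chunk = size // workers
        # The last worker takes the remainder so every byte is covered once.
        ranges = [(i * chunk, (i + 1) * chunk if i < workers - 1 else size)
                  for i in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return sum(pool.map(lambda r: count_newlines(mm, *r), ranges))

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(b"line\n" * 1000)
        path = f.name
    print(parallel_count(path))  # 1000
    os.unlink(path)
```

For record-oriented formats you'd additionally snap each range to the next record boundary (e.g. the next newline) so no record straddles two workers.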
Whether that runs into hardware bottlenecks is a separate issue from designing a parallel algorithm. Idk what OP is trying to accomplish, but if their hardware is known (e.g. this is an internal tool meant to run in a data center), they'll need to read up on their hardware and virtualization architecture to squeeze out the most I/O performance.
But if parsing is actually the bottleneck, there's a lot you can do to optimize it in software; simdjson would be a good place to start.