How would you design parallel grep for huge JSONL files?
(lemmy.world)
The question is, what will be your limiting factor: CPU or disk I/O? Parallel processing doesn't do much good if the workers have to wait on the disk to deliver more data. I'd start with an async architecture, where the program can do its processing while it is waiting on more data.
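One way to keep the CPU busy while the disk streams data is to split the file into byte ranges aligned to newline boundaries and let worker processes scan their ranges concurrently. A minimal sketch of that idea in Python (the function names and the chunking strategy here are illustrative assumptions, not the commenter's actual design):

```python
# Sketch: parallel grep over a huge JSONL file by splitting it into
# newline-aligned byte ranges scanned by separate worker processes.
import json
import os
from multiprocessing import Pool

def chunk_offsets(path, n_chunks):
    """Split [0, filesize) into ranges whose boundaries fall on line starts."""
    size = os.path.getsize(path)
    step = max(1, size // n_chunks)
    offsets = [0]
    with open(path, "rb") as f:
        for start in range(step, size, step):
            f.seek(start)
            f.readline()              # skip the partial line at the boundary
            pos = f.tell()
            if pos < size and pos > offsets[-1]:
                offsets.append(pos)
    return list(zip(offsets, offsets[1:] + [size]))

def grep_chunk(args):
    """Return parsed records from one byte range whose raw line contains needle."""
    path, start, end, needle = args
    hits = []
    with open(path, "rb") as f:
        f.seek(start)
        while f.tell() < end:
            line = f.readline()
            if not line:
                break
            if needle in line:
                hits.append(json.loads(line))
    return hits

def parallel_grep(path, needle, workers=4):
    tasks = [(path, s, e, needle.encode())
             for s, e in chunk_offsets(path, workers)]
    with Pool(workers) as pool:
        return [rec for part in pool.map(grep_chunk, tasks) for rec in part]
```

Because every chunk boundary is a line start, each record is scanned by exactly one worker. Whether this outruns a single sequential reader depends on the point above: on a spinning disk the workers mostly wait on I/O, while on NVMe or a warm page cache the parallel scan can win substantially.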
One additional trick is to compress your files before writing them to disk, using some kind of fast, lightweight compression like parallel gzip (the pigz command) or lzop. When parsing them, you will have smaller disk reads but higher CPU usage, which gives a speed advantage if you have a server-class CPU with lots of cache.
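Since pigz produces standard gzip streams, the compressed files can be read with ordinary gzip tooling. A small sketch of scanning a compressed JSONL file this way (the function name is hypothetical):

```python
# Sketch: scan a pigz/gzip-compressed JSONL file line by line.
# The disk read is smaller; decompression costs CPU instead.
import gzip
import json

def scan_gzip_jsonl(path, needle):
    """Yield parsed records whose raw line contains `needle` (bytes)."""
    with gzip.open(path, "rb") as f:
        for line in f:
            if needle in line:
                yield json.loads(line)
```

This is the trade the comment describes: fewer bytes off the disk in exchange for decompression work, which pays off when I/O, not CPU, is the bottleneck.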
Yeah, JSON compresses well.