I can't quite find the blog post, but I saw one where someone used AWS MapReduce across multiple servers to process a dataset… and then they redid the pipeline with bash, awk, and maybe grep, and a single 8-core machine did it 100 times or so faster.
Edit: found it https://adamdrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html
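For anyone who doesn't want to click through: the article counts chess game results in a pile of PGN files. Something in this spirit (a sketch, not the article's exact commands; the file layout, core counts, and Result-line handling here are assumptions):

```bash
# Naive single-process version from the article: count result lines
# like [Result "1-0"] across all PGN files.
cat *.pgn | grep "Result" | sort | uniq -c

# Parallelized version in the same spirit: stream batches of files
# through awk in parallel, emit per-batch counts, then sum them.
find . -name '*.pgn' -print0 |
  xargs -0 -n4 -P8 awk '
    /Result/ {
      if (/1-0/)      white++   # white wins
      else if (/0-1/) black++   # black wins
      else            draw++    # draws ("1/2-1/2") and anything else
    }
    END { print white+0, black+0, draw+0 }   # +0 so empty counts print as 0
  ' |
  awk '{ w += $1; b += $2; d += $3 }
       END { print "white:", w, "black:", b, "draws:", d }'
```

The article ends up using mawk for speed, IIRC, but the shape is the same: stream, filter, aggregate, and let `xargs -P` give you the parallelism the Hadoop cluster was supposed to.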
I think you can file this under the Linux command line, i.e. the bash shell and the commonly installed Linux command set. Way powerful for certain things.
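Agreed. The classic demo of what the stock toolset can do in one line (input.txt is a placeholder):

```bash
# Top 10 most frequent words in a file, using only standard tools:
# split on non-letters, lowercase, sort, count, rank.
tr -cs 'A-Za-z' '\n' < input.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -10
```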
I think this is more a problem of knowing when a specific tool should be used. Most people familiar with Hadoop are probably aware of all the overhead it creates. At the same time, you hit a point in dataset size (even more so with "real-time" data processing) where a single machine just isn't feasible anymore. (That said, I'm not too knowledgeable about Hadoop and big data, so anyone else feel free to chime in.)
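For a rough sense of where that point sits, some envelope math (all numbers assumed, not from the article):

```bash
# A single machine streaming from local NVMe at ~2 GB/s gets through
# ~7 TB/hour, so one pass over 100 TB is roughly half a day. Past that
# scale, or when you need many passes or low latency, distributing
# starts to pay back its overhead.
awk 'BEGIN { tb=100; gbps=2; hours=tb*1024/(gbps*3600);
             printf "%.1f hours for %d TB at %d GB/s\n", hours, tb, gbps }'
```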
Some context, though: this article was written when cloud computing was all the buzz, the way crypto just was and AI is now. A lot of people used the cloud just for the buzz, without understanding the tool.