Interesting quantitative look at web performance and how designs made for people with high-end devices can be practically unusable for people on low-end devices, which disproportionately affects poorer people and people in developing countries. Also discusses how sites game Google's performance metrics—maybe not news to the web devs among ye, but it was new to me. The arrogance of the Discourse founder was astounding.
RETVRN to static web pages.^[Although even static web pages can be fraught—see his other post on speeding up his site 50x by tearing out a bunch of unnecessary crap.]
Also, from one of the appendices:
> In principle, HN should be the slowest social media site or link aggregator because it's written in a custom Lisp that isn't highly optimized and the code was originally written with brevity and cleverness in mind, which generally gives you fairly poor performance. However, that's only poor relative to what you'd get if you were writing high-performance code, which is not a relevant point of comparison here.
Thanks for sharing that article (and thanks to @ea6927d8@lemmy.ml for finding the link)! Really interesting stuff. I knew the basics of TCP and Ethernet frames, but I didn't know about the TCP slow start thing. I've been thinking about building my own static website, so I'll keep this in mind when I tackle that project.
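For anyone else who hadn't run into slow start before, here's a rough back-of-the-envelope sketch of why it makes page size matter. This assumes a typical initial congestion window of 10 segments and a ~1460-byte MSS, and ignores TLS, packet loss, pacing, etc., so treat the numbers as illustrative rather than exact:

```python
# Rough sketch of TCP slow start: the sender starts with a small congestion
# window and (roughly) doubles it every round trip until loss or a threshold.
# The constants below are common defaults, not guarantees.

MSS = 1460          # typical max segment size in bytes
INIT_CWND = 10      # common initial congestion window, in segments

def round_trips_to_send(page_bytes: int) -> int:
    """Estimate how many round trips slow start needs to deliver page_bytes."""
    cwnd = INIT_CWND
    sent = 0
    rtts = 0
    while sent < page_bytes:
        sent += cwnd * MSS   # send a full window's worth this round trip
        cwnd *= 2            # slow start: window roughly doubles each RTT
        rtts += 1
    return rtts

for size_kb in (14, 50, 200, 1000):
    print(f"{size_kb:>5} KB -> ~{round_trips_to_send(size_kb * 1000)} RTT(s)")
```

The takeaway for a static site: if the critical HTML/CSS fits in that first ~14 KB window, it can arrive in a single round trip, while every doubling of page size past that adds more round trips on slow connections.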