You should do some research on wasm.
You can run frickin’ docker containers in the browser now.
I don’t make the rules.
Along with this, once you’ve dealt with enough kinds of problems, you end up developing an intuition for how something was probably implemented.
This can help you anticipate what features are probably included in a framework/library, as well as how likely they are to work efficiently/correctly (you know that XYZ is a hard problem vs. ABC, which is pretty easy for a journeyman to get right).
As an example, a friend of mine reported a performance issue to a 3rd-party vendor recently. Based on a little bit of information he had on data scale and changes the 3rd-party made to their query API, he basically could tell them that they probably didn’t have index coverage on the new fields that could be queried from the API. That’s with almost no knowledge of how the internals of their API were implemented, other than that they were using Postgres (and he was right, by the way).
That’s not always going to happen, but there are a lot of common patterns with known limitations, and after a while you can start to anticipate this kind of thing.
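To make that concrete, here’s a rough sketch of the shape of that problem. Everything in it is hypothetical (the table, columns, and query are invented for illustration); the point is just the pattern of a newly filterable field with nothing indexing it:

```typescript
// Hypothetical sketch using node-postgres ("pg").
import { Pool } from "pg";

const pool = new Pool(); // connection details come from PG* env vars

// Imagine the vendor's API recently started accepting a `status` filter,
// which turns into a query like this on their side:
async function findOrders(accountId: number, status: string) {
  const { rows } = await pool.query(
    `SELECT id, account_id, status, total
       FROM orders
      WHERE account_id = $1 AND status = $2
      ORDER BY id DESC
      LIMIT 100`,
    [accountId, status]
  );
  return rows;
}

// If no index covers the new filter, Postgres falls back to scanning far
// more rows than necessary, and the endpoint slows down as the table grows.
// The likely fix (again, hypothetical names):
//
//   CREATE INDEX CONCURRENTLY orders_account_status_idx
//     ON orders (account_id, status);
//
// EXPLAIN ANALYZE before and after would show the plan flipping from a
// sequential scan (or a much less selective index) to an index scan.
```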
That’s interesting. Usually when I see people talking about Rust, they really like it. Are there specific parts that make it less enjoyable than Go for you?
I’ve tumbled down this rabbit hole on more than one occasion.
This line of thinking can lead you to the conclusion that the only ecologically just thing to do is for humans to cease to exist.
It’s a trap that can lead to despair.
Do your part to be mindful, respectful, and conservative with resources, but don’t give in to nihilism.
Which is what putting most of this stuff in the background accomplishes. It necessitates designing the UX with appropriate feedback. Sometimes you can’t make things go faster than they go. For example, a web request, or pulling data from an ancient disk a user is still running: you as the author don’t have control over these, and the OS doesn’t either.
Should software that depends on external resources refuse to run?
The author is talking about switching to some RTOS due to this, which is extreme. OS vendors have spent decades sorting out the “Beachball of Death” problem, and it is now exceedingly rare on modern systems thanks to better multitasking support and dramatically faster hardware.
Most GUI apps are not hard RT and trying to make them so would be incredibly costly and severely limit other aspects of systems that users regularly prefer (like keeping 100 apps and browser tabs open).
The problem with the article is that it conflates hard-realtime and low-latency requirements. Most UIs do not require hard realtime; even soft realtime is a nice-to-have, and users will tolerate some latency.
I also think the author hand-waves “too many blocking calls end up on the main thread.”
Hardly. This is rule zero for building GUI apps: put any non-trivial or blocking work on a background thread. It was harder to do before mainstream languages got good green-thread/async support, but it’s almost trivial now.
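As a sketch of how low the bar is now (browser-flavored TypeScript; the element IDs and endpoint are made up, and a native app would use its platform’s worker/coroutine equivalent instead):

```typescript
// Keep the slow part off the UI's critical path and give immediate feedback.
const button = document.querySelector<HTMLButtonElement>("#export")!;
const status = document.querySelector<HTMLSpanElement>("#export-status")!;

button.addEventListener("click", async () => {
  button.disabled = true;
  status.textContent = "Exporting…"; // user sees something right away

  try {
    // The part we don't control: a network round trip. Awaiting it yields
    // the main thread, so the UI keeps painting and responding.
    const res = await fetch("/api/export", { method: "POST" });
    if (!res.ok) throw new Error(`export failed: ${res.status}`);
    status.textContent = "Done";
  } catch (err) {
    status.textContent = `Failed: ${String(err)}`;
  } finally {
    button.disabled = false;
  }
});
```

CPU-heavy work is the same idea: hand it to a Worker (or your platform’s background thread) and post the result back to the UI when it’s done.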
I agree that there are still calls that could have variable response times (such as virtual memory being paged in or out), but even low-end machines are RAM-rich and SSDs are damn fast. The kernel is likely also doing some optimization to page stuff in from disk for the foreground app.
It’s nice to think through the issue, but I don’t think it’s quite as dire as the author claims.
Probably the more important question is whether DHH is your boss.
It’s fine to look for people with real experience/opinions on the internet, but at the end of the day, you have to build your own product.
I’ll also just say that I’m betting the kind of stuff Rails does in JS doesn’t really need a lot of complex JS. My guess is that a lot of it paints behavior on, similar to what htmx does now, which doesn’t require a ton of JS code anymore. I don’t much see the point of removing TS for the vast majority of projects.
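For a sense of scale, the whole “paint behavior on with attributes” idea can be sketched in a handful of lines. This is a toy, not htmx’s actual API: elements with a hypothetical data-get attribute fetch a URL and swap the response into a target.

```typescript
// Toy attribute-driven enhancement:
//   <button data-get="/fragment" data-target="#panel">Load</button>
document.addEventListener("click", async (event) => {
  if (!(event.target instanceof Element)) return;
  const el = event.target.closest<HTMLElement>("[data-get]");
  if (!el) return;

  const target = document.querySelector(el.dataset.target ?? "body");
  if (!target) return;

  // The server renders the HTML fragment; the client just swaps it in.
  const res = await fetch(el.dataset.get!);
  target.innerHTML = await res.text();
});
```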
I loathe this line of reasoning. It's like saying "unless you wrote assembly, compiling your code could change what it does."
Guess what, the CPU reorders/elides assembly, too! You can't trust anything!
I haven’t made a UML diagram in years. Or an ER diagram, for that matter.
Getting a schema dump and/or generating a diagram from an existing system would be useful; it won’t be UML, but it can convey similar information. At a certain point, keeping an updated UML diagram is extra work that is almost guaranteed to go out of date instantly.
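For instance, a throwaway script against the live database’s information_schema gets you most of what a hand-maintained ER diagram would, without the maintenance. The sketch below assumes Postgres and node-postgres; adjust to whatever you actually run.

```typescript
import { Pool } from "pg";

const pool = new Pool(); // connection details come from PG* env vars

async function dumpSchema() {
  // Pull table/column info straight from the running system.
  const { rows } = await pool.query(`
    SELECT table_name, column_name, data_type, is_nullable
      FROM information_schema.columns
     WHERE table_schema = 'public'
     ORDER BY table_name, ordinal_position`);

  let current = "";
  for (const row of rows) {
    if (row.table_name !== current) {
      current = row.table_name;
      console.log(`\n${current}`);
    }
    console.log(`  ${row.column_name}  ${row.data_type}  nullable=${row.is_nullable}`);
  }
  await pool.end();
}

dumpSchema().catch(console.error);
```

If you actually want a picture, you could feed the same information into a diagram tool (Mermaid, Graphviz, etc.); either way, the source of truth stays the running system rather than a document someone has to remember to update.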
Breaking larger tasks down effectively removes uncertainty.
My general rule of thumb in planning is that any task that is estimated for longer than 1 day should be broken up.
Longer than one day communicates that the person doing the estimate knows it’s a large task but isn’t super clear about the details. It also puts a boundary around how long someone waits before trying to re-scope:
A task that was expected to take one week but ends up going 2x is a slip of a week; a task estimated at one day that takes 3x before re-scoping is a loss of only two days.
You can pick up one or two days, but probably not one or two weeks.
As an end result, maybe. But it also means you get specific feedback on how to author it correctly and fix it before pushing it live.
IDK, I lived through that whole era, and I’d attribute it more to the fact that HTML is easy enough for complete novices to author in any text editor. XHTML demands a hell of a lot more knowledge of how XML works and what is valid (and more keystrokes). The barrier to entry for XHTML is much, much higher.