[-] o11c@programming.dev 3 points 1 year ago

True, speed does matter somewhat. But even if xterm isn't the ultimate in speed, it's pretty good. It starts up instantly (the benefit of no extraneous libraries); the worst issue is that it's occasionally limited to the framerate for certain output patterns, and if there's a clog you can always minimize it for a moment.

[-] o11c@programming.dev 5 points 1 year ago

Speed is far from the only thing that matters in terminal emulators though. Correctness is critical.

The only terminals in which I have any confidence of correctness are xterm and pangoterm. And I suppose technically the BEL-for-ST extension is incorrect even there, but we have to live with that and a workaround is available.

A lot of terminal emulators end up hard-coding a handful of common sequences, and fail to correctly ignore sequences they don't implement. And worse, many go on to implement sequences that cannot be correctly handled.

One simple example that usually fails: \e!!F. Nastier, however, are the terminals that ignore the intermediate bytes and execute some unrelated command instead.
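
Roughly, consuming a plain ESC sequence per ECMA-48 looks like this (a minimal Python sketch, not any particular terminal's code; CSI/OSC/DCS need their own states):

    # ESC, then zero or more intermediate bytes (0x20-0x2F), then one final
    # byte (0x30-0x7E).  An unknown (intermediates, final) pair must be
    # consumed in full and ignored -- not re-dispatched on the final byte alone.
    def consume_esc_sequence(data: bytes, i: int) -> tuple[bytes, bytes, int]:
        """data[i] is ESC; returns (intermediates, final, index past the sequence)."""
        assert data[i] == 0x1B
        i += 1
        start = i
        while i < len(data) and 0x20 <= data[i] <= 0x2F:   # intermediate bytes
            i += 1
        return data[start:i], data[i:i + 1], i + 1

    # b'\x1b!!F' is ESC + intermediates b'!!' + final b'F'; an emulator that
    # drops the intermediates ends up acting on ESC F instead.
    print(consume_esc_sequence(b'\x1b!!F', 0))   # (b'!!', b'F', 4)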

I can't be bothered to pick apart specific terminals anymore. Most don't even know what an IR is.

[-] o11c@programming.dev 3 points 1 year ago

The problem with XCB is that it's designed to be efficient, not easy. If you're avoiding toolkits for some reason, "so what if I block the world" may be a reasonable tradeoff.

[-] o11c@programming.dev 4 points 1 year ago

I haven't managed to break into the JS-adjacent ecosystem, but tooling around TypeScript is definitely a major part of the problem:

  • following a basic tutorial somehow ended up taking multiple seconds just to transpile and run "Hello, World!".
  • there are at least 3 different ways of specifying the files and settings you want to use, and some of them will cause others to be ignored entirely, even though it looks like they should be used.
  • embracing duck typing means many common type errors simply cannot be caught. It also means dynamic type checks are impossible, even though JS itself supports them (admittedly with oddities, e.g. string vs String).
  • there are at least 3 incompatible ways to define and use a "module", and it's not clear what's actually useful or intended to be used, or what the outputs are supposed to be for different environments.

At this point I'm seriously considering writing my own sanelanguage-to-JS transpiler or using some other one (maybe Haxe? but I'm not sure its object model allows full performance tweaking), because I've written literally dozens of other languages without this kind of pain.

WASM has its own problems (we shouldn't be quick to call asm.js obsolete ... also, C's object model is not what people think it is) but that's another story.


At this point, I'd be happy with some basic code reuse. Have a "generalized fibonacci" module taking 3 inputs, and call it 3 ways: from a web browser on the client side, as a web-browser request to a server (which is running nodejs), or as a nodejs command-line program. Transpiling one of the callers should not force the others to be transpiled, but if multiple callers need to be transpiled at once, it should not typecheck the library internals multiple times. I should also be able to choose whether to produce a "dynamic" library (which can be recompiled later without recompiling its dependencies) or a "static" one (only output a single merged file), and whether to minify.

I'm not sure the TS ecosystem is competent enough to deal with this.

[-] o11c@programming.dev 4 points 1 year ago

The thing is - I have probably seen hundreds of projects that use tabs for indentation ... and I've never seen a single one without tab errors. And that's ignoring, e.g., the fact that tabs break diffs, or who knows how many other things.

Using spaces doesn't automatically mean a lack of errors but it's clearly easy enough that it's commonly achieved. The most common argument against spaces seems to boil down to "my editor inserts hard tabs and I don't know how to configure it".

[-] o11c@programming.dev 3 points 1 year ago

It's solving (and facing) some very interesting problems at a technical level ...

but I can't get over the dumb decision for how IO is done. It's $CURRENTYEAR; we have global constructors even if your platform really needs them (hint: it probably doesn't).

[-] o11c@programming.dev 4 points 1 year ago

For an extension like this - unlike most prior extensions - you're best off with essentially an entirely separately compiled copy of the program/library. So IFUNC is a poor fit, even with peer optimization.

[-] o11c@programming.dev 3 points 1 year ago

The problem with pathlib is that it normalizes away critical information, so it can't be used in many situations.

./path should not be the same as path, which should not be the same as path/.

Also the article is wrong about "Path('some\\path') becomes some/path on Linux/Mac."
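
For the record, both points are easy to demonstrate with pathlib's pure classes (a quick sketch; PurePosixPath/PureWindowsPath so it behaves the same on any OS):

    from pathlib import PurePosixPath, PureWindowsPath

    # Normalization throws away distinctions that can matter:
    print(PurePosixPath('./path') == PurePosixPath('path'))    # True - leading ./ dropped
    print(PurePosixPath('path/') == PurePosixPath('path'))     # True - trailing / dropped

    # And the backslash claim: on POSIX a backslash is not a separator, so
    # nothing becomes some/path; it stays inside a single component.
    print(PurePosixPath('some\\path').parts)     # ('some\\path',)
    print(PureWindowsPath('some\\path').parts)   # ('some', 'path') - only Windows splits it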

[-] o11c@programming.dev 3 points 1 year ago* (last edited 1 year ago)

I've done something similar. In my case it was a startup script that did something like the following (a condensed sketch follows the list):

  • poll github using the search API for PR labels (note that this has sometimes stopped returning correct results, but ...).
    • always do this once at startup
    • you might do this based on notifications; I didn't bother since I didn't need rapid responsiveness. Note that you should not act directly on the specific data from a notification though; it's only a way to wake up the script.
    • but no matter what, you should do this after N minutes, since notifications can be lost.
  • perform a git fetch for your main development branch (the one you perform the real merges to) and all pull/ refs (git does not fetch these by default; you'll have to set up the refspec for your local test repo. Note that you want to refer to the unmerged commits for these)
  • if the set of commits for all tagged PRs has not changed, wait and poll again
  • reset the test repo to the most recent commit from your main development branch
  • iterate over all PRs with the appropriate label:
    • ordering notes:
      • if there are commits that have previously been tested successfully, you might do them first. But still test again, since the merge order could be different. This of course depends on the level of tests you're doing.
      • if you have PRs that depend on other PRs, do them in an appropriate order (perhaps the following will suffice, or maybe you'll have some way of detecting this). As a rule we soft-forbid this though; such PRs should have been merged early.
      • finally, ordering by PR number is probably better than ordering by last commit date
    • attempt the merge (or rebase). If it's a nop, log that somewhere. If it's not clean, skip the PR for now (and log that), but only mark this as an error if it was the first PR you merged (if it wasn't the first, a conflict could be a prior PR's fault).
    • Run pre-build stuff that might need to create further commits, build the product, and run some quick tests. If they fail, rollback the repo to the previous merge and complain.
    • Mark the commit as apparently good. Note that this is specifically applying to commits not PRs or branch names; I admit I've been sloppy above.
  • perform a pre-build, build and quick test again (since we may have rolled back and have a dirty build - in fact, we might not have ended up merging anything!)
  • if you have expensive tests, run them only here (and treat this as "unexpected early exit" below). It's presumed that separate parts of your codebase aren't too crazily entangled, so if a particular test fails it should be "obvious" which PR is relevant. Keep in mind that I used this system for assumed viable-work-in-progress PRs.
  • kill any existing instance and launch a new instance of the product using the build from the final merged commit and begin accepting real traffic from devs and beta users.
  • users connecting to the instance should see the log
  • if the launched instance exits unexpectedly within M minutes AND we actually ended up merging anything into the known-good branch, then reset to the main development branch (and build etc.) so that people at least have a functioning test server, but complain loudly in the MOTD when they connect to it. The condition here means that if it exits suddenly again, the whole script goes back to the top and starts again, which may be necessary if someone intentionally killed the server to force a new merge sequence but it was too soon.
    • alternatively you could try bisecting the set of PR commits or something, but I never bothered. Note that you probably can't use git bisect for this since you explicitly do not want to try commit from the middle of a PR. It might be simpler to whitelist or blacklist one commit at a time, but if you're failing here remember that all tests are unreliable.
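
A heavily condensed sketch of that outer loop (hypothetical repo path, "quicktest" make target, and label-polling helper; not the original script):

    import subprocess
    import time

    REPO = "/srv/test-repo"    # local clone with a refs/pull/* fetch refspec configured
    MAIN = "origin/main"       # the branch the real merges go to

    def git(*args: str) -> subprocess.CompletedProcess:
        return subprocess.run(["git", "-C", REPO, *args], capture_output=True, text=True)

    def poll_tagged_prs() -> dict[int, str]:
        """Return {pr_number: head_commit} for open PRs carrying the label
        (the search-API polling described above)."""
        raise NotImplementedError  # placeholder

    def build_and_quick_test() -> bool:
        # pre-build steps, the build itself, and the quick tests
        return subprocess.run(["make", "-C", REPO, "quicktest"]).returncode == 0

    last_seen: dict[int, str] = {}
    while True:
        git("fetch", "origin")              # also updates the pull/ refs, given the refspec
        prs = poll_tagged_prs()
        if prs == last_seen:                # tagged commits unchanged: wait and poll again
            time.sleep(300)
            continue
        last_seen = prs

        git("reset", "--hard", MAIN)        # start over from the main development branch
        first = True
        for number in sorted(prs):          # PR-number order, per the notes above
            before = git("rev-parse", "HEAD").stdout.strip()
            head = f"refs/pull/{number}/head"   # the unmerged commits for this PR
            if git("merge", "--no-edit", head).returncode != 0:
                git("merge", "--abort")
                print(f"PR #{number}: not a clean merge"
                      + (" (error)" if first else " (skipped; may be a prior PR's fault)"))
                first = False
                continue
            if not build_and_quick_test():
                git("reset", "--hard", before)  # roll back just this PR's merge
                print(f"PR #{number}: quick tests failed; rolled back")
            first = False

        if build_and_quick_test():          # final build/test; we may have rolled back
            # ...kill the old instance, launch the new one, watch for early exits, etc.
            pass
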
[-] o11c@programming.dev 3 points 1 year ago

from __future__ import annotations
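
That future import (PEP 563) makes annotations lazily-evaluated strings, so forward references just work. A minimal illustration:

    from __future__ import annotations  # PEP 563: annotations are stored as strings

    class Node:
        # Without the future import, evaluating the "Node" annotation here would
        # raise NameError, since the class doesn't exist yet while its body is
        # still executing.  With it, the annotation is only evaluated if something
        # (e.g. typing.get_type_hints) asks for it.
        def insert_after(self, other: Node) -> Node:
            return other
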
[-] o11c@programming.dev 3 points 1 year ago

The with approach would work if you use the debugger to change the current line, I think.

I don't understand why this stopped using ASTs in favor of buggy regexes - you're allowed to do whatever you want during the codec ...

Don't forget to handle increment before continue.

The main times I miss C-style for loops are when dealing with linked lists and when manipulating the current iteration.

The former should be easy enough - let the advancement expression be an attribute access (__getattr__), e.g. node = node.next.

The latter already works, since it is in fact being transformed into a while. It's impossible if you try to use a plain for though.
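
Concretely, here's the kind of while expansion meant above (a minimal sketch, with a toy Node type standing in for the linked list):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int
        next: "Optional[Node]" = None

    head = Node(1, Node(2, Node(3)))

    # A C-style  for (node = head; node != None; node = node.next)  has to become
    # a while, and the increment must be duplicated before every continue --
    # otherwise continue skips it and the loop never advances.
    node = head
    while node is not None:
        if node.value == 2:
            node = node.next      # the "increment", repeated before continue
            continue
        print(node.value)         # loop body; prints 1 then 3
        node = node.next          # the normal "increment"

    # Because the loop variable is just a name, the body can also reassign it
    # (skip ahead, restart, splice) - which a plain  for node in ...:  over an
    # iterator can't do.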

[-] o11c@programming.dev 4 points 1 year ago

What you are missing, of course, is the Rc<RefCell<T>> that you have to stick everywhere to make a nontrivial Rust program. It's like monads in Haskell, parentheses in Lisp, verbosity in Java, or warnings in C - they're the magic words you have to incant correctly to make things work in their weird paradigms.
