They’ve already been using Claude for at least the last few months; you’ll find a CLAUDE.md file and related settings in the uv repo if you look
Hopefully nobody tells the corpos that they can just use BSD if they want an MIT-licensed kernel and userland
The 2021 release of TeX included several bug fixes, so not quite 12 years:
https://tug.org/texmfbug/tuneup21bugs.html
See also the following list of potential bugs, fixes for which may be included in the planned 2029 release of TeX:
https://tug.org/texmfbug/newbug.html
That said, TeX is still an impressive piece of software
These are all very useful things to know about, but aside from maybe the difference between the stack and the heap, they are not things you need to learn before getting started with Rust.
So if you actually do want to learn Rust, then that's your next rabbit hole
At this point the people complaining about Rust at every opportunity have become more annoying than the "rewrite it in Rust" people ever were
The article is about an internal kernel API: the kernel developers can easily rename that, since it is not exposed to user-space.
But you seem to be talking about the kill command and/or the kill() function, both of which are specified by POSIX. Renaming either would break a shit-ton of code, unless you merely aliased them. And while I agree that kill is a poor name, adding non-standard aliases doesn't really offer much benefit
I set the timestamps of my music to its original release date, so that I can sort it chronologically... OK, I don't actually do that, but now I'm tempted
How the fuck can it not recover the files?
Undeleting files typically requires low-level access to the drive containing the deleted files.
Do you really want to give an AI, the same one that just wiped your files, that kind of access to your data?
What do you want me to write?
To meet the bar set by onlinepersona, you'd need to write safe C code, not just some of the time, but all of the time. What you appear to be proposing is to provide evidence that you can write safe C code some of the time.
It's like if somebody said "everyone gets sick!", and some other person stepped up and said "I never get sick. As proof, you can take my temperature right now; see, I'm healthy!". Obviously, the evidence being offered is insufficient to refute the claim being made by the first person
I'm surprised that you didn't mention Zig. It seems to me to be much more popular than either C3 or D's "better C" mode.
It is “FUD” if you ask why it’s still const by default.
I'd be curious if you could show any examples of people asking why Rust is const by default being accused of spreading "FUD". I wasn't able to find any such examples myself, but I did find threads like this one and this one, which were both quite amiable.
But I also don't see why it would be an issue to bring up Rust's functional-programming roots, though as you say the language did change quite a lot during its early development, before its 1.0 release. IIRC, the first compiler was even implemented in OCaml. The language's Wikipedia page goes into more detail, for anyone interested. Or you could read this thread in /r/rust, where a bunch of Rust users try to bury that sordid history by bringing it to light
Makes memory-unsafe operations ugly, to “disincentivise the programmer from them”.
From what I've seen, most unsafe Rust code doesn't look much different from safe Rust code; see for example the Vec implementation, which contains a bunch of unsafe blocks. That makes sense, since unsafe Rust only adds a few extra capabilities on top of safe Rust. You can end up with gnarly code of course, but that's true of any non-trivial language. Your code could also get ugly if you try to be extremely granular with unsafe blocks, but that's more of a style issue, and poor style can make code in any language look ugly.
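As a contrived sketch of that point (not taken from the Vec source; the function name and shape are made up for illustration), an unsafe block can read almost identically to the safe equivalent:

```rust
// Hypothetical example: the only visible difference from safe code is the
// `unsafe` keyword and the SAFETY comment. `get_unchecked` skips the bounds
// check, but the function's public API stays safe because we verify the
// precondition ourselves first.
fn first_or_default(values: &[i32]) -> i32 {
    if values.is_empty() {
        return 0;
    }
    // SAFETY: we just checked that `values` is non-empty,
    // so index 0 is in bounds.
    unsafe { *values.get_unchecked(0) }
}

fn main() {
    println!("{}", first_or_default(&[7, 8, 9])); // prints 7
    println!("{}", first_or_default(&[]));        // prints 0
}
```

The safe version would just replace the unsafe block with `values[0]`; the surrounding code is identical either way.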
Has a pretty toxic userbase
At this point it feels like the overwhelming majority of the toxicity comes from non-serious critics of Rust. Case in point: many of the posts in this thread
Like, one of the issues that Linus yelled at Kent about was that bcachefs would fail on big-endian machines. You could spend your limited time and energy setting up an emulator of the PowerPC architecture, or you could buy the hardware at pretty absurd prices: I checked eBay, and it was $2000 for 8 GB of RAM…
It's not that bcachefs would fail on big-endian machines, it's that it would fail to even compile, and therefore impacted everyone who had it enabled in their build. And you don't need actual big-endian hardware to compile for that arch: just now it took me a few minutes to figure out what tools to install for cross-compilation, download the latest kernel, and compile it for a big-endian arch with bcachefs enabled. Surely a more talented developer than I could easily do the same, and save everyone else the trouble of broken builds.
ETA: And as pointed out in the email thread, Overstreet had bypassed the linux-next tree, which would have allowed other people to test his code before it got pulled into mainline. So he had multiple options that did not necessitate the purchase of expensive hardware
One option is to drop standards. The Asahi developers were allowed to just merge code without being subjected to the scrutiny that Overstreet has faced. This was partly due to having their code in Rust, under the Rust subsystem, which gave them a lot more control over the parts of Linux they could merge. The other part was being specific to MacBooks: there's no point testing MacBook-specific patches on non-Mac CPUs.
It does not sound to me like standards were dropped for Asahi, nor that their use of Rust had any influence on the standards that were applied to them. It is simply as you said: what's the point of testing code on architectures that it explicitly does not and cannot support? As long as changes that touch generic code are tested, there is no problem, and those are probably a minority of the changes introduced by the Asahi developers
That’s a lot less likely to be the case; I am aware of just one example of what you describe, and that’s the example you give, whereas I’ve “sped up” my own code many times by accidentally breaking stuff.
Rather than assume the presence of backdoors, the rational thing is simply to work out why you are seeing a difference in performance, and to determine whether you fixed something by accident or (the more likely scenario) broke something by accident