[-] lysdexic@programming.dev 4 points 1 month ago* (last edited 1 month ago)

Why restrict to 54-bit signed integers?

Because number is a double, and IEEE 754 gives double-precision numbers a 53-bit significand (52 stored bits plus one implicit leading bit), plus a sign bit.

Meaning, it's the highest integer precision a double can express exactly: every integer of magnitude up to 2^53 is representable, which together with the sign bit covers the 54-bit signed range.

I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.

It's not about compatibility. It's because JSON only has a number type, which covers both floating point and integers, and number is implemented as a double-precision value. If you have to express integers with a double-precision type, once you go beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
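
This is easy to see in any JavaScript/TypeScript runtime, since its number type is exactly the IEEE 754 double that backs JSON numbers (a minimal sketch; any engine behaves the same):

```typescript
// 2^53 - 1 is the largest integer n such that both n and n + 1
// are exactly representable in a double.
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Past 2^53 the gap between adjacent representable doubles grows
// to 2, so distinct integers start to collapse into the same value.
const n = 2 ** 53;
console.log(n === n + 1);             // true -- precision is gone
console.log(Number.isSafeInteger(n)); // false
```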

[-] lysdexic@programming.dev 4 points 1 month ago

The whole idea to check the donations came from stumbling upon this post which discussed costs per user.

Things should be put into perspective. The cost per user is actually the fixed monthly cost of operating an instance divided by the average number of active users.

In the discussion you linked to, there's a post on how Lemmy.ml costs $80/month plus a domain name to serve ~2.4k users. If we went by the opex-per-user metric, needlessly expensive setups with low participation would become a justification to ask for more donations.

Regardless, this is a good reminder that anyone can self-host their own Lemmy instance. Some Lemmy self-hosting posts go as far as claiming a Lemmy instance can run on a $5/month virtual private server from the likes of Scaleway.

[-] lysdexic@programming.dev 4 points 4 months ago

Also interesting: successful software projects don't just finish and die. They keep going, adapt to change, and implement new features. If we have one successful project that runs for a decade and one clusterfuck of a project that blows up every year over the same period, this metric gives you only a 10% success rate.

[-] lysdexic@programming.dev 4 points 6 months ago* (last edited 6 months ago)

Honestly, I don't mind the downvotes. What puzzles me is how some people feel strongly enough about a topic to subscribe to a community, yet still feel compelled to slap down contributions at a time when nothing else is being submitted, as if seeing no new posts were better than seeing a post that doesn't tickle their fancy.

It's the difference between building up and tearing down.

[-] lysdexic@programming.dev 4 points 6 months ago

Most software is built under non-ideal circumstances. Especially in the beginning there’s often tight deadlines involved.

Exactly this.

I think a bunch of people commenting in this thread on the virtues of rewriting things from scratch in the flavour of the month are instead showing the world they have zero professional experience working on commercial software projects. They are clearly oblivious to the very basic and pervasive constraints that anyone working on software for a living is well aware of.

Things like prioritizing how a button is laid out over fixing a rarely occurring race condition are the norm in professional settings. You are paid to deliver value to your employer, and things like paying down technical debt are very hard sells for project managers running tight schedules.

Yet, here we are, seeing people advocating complete rewrites and adding piles of complexity while throwing out major features, and doing so with a straight face.

Unbelievable.

[-] lysdexic@programming.dev 4 points 10 months ago

For the article-impaired,

Using OFFSET+LIMIT for pagination forces the database to produce and then discard every row before the offset, which in large tables gets expensive as you page deeper.

The alternative proposed is cursor-based (keyset) navigation: a WHERE id > last_seen_id filter plus LIMIT, which requires id to be an orderable type with monotonically increasing values.
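
As a rough sketch of the difference (hypothetical posts table and column names; any parameterized SQL client works the same way):

```typescript
// OFFSET pagination: the database must produce and then discard the
// first 100000 rows before it can return the 20 requested ones.
const offsetQuery = `
  SELECT id, title
  FROM posts
  ORDER BY id
  LIMIT 20 OFFSET 100000`;

// Keyset (cursor) pagination: with an index on id, this is a single
// index seek to the cursor followed by a scan of just 20 rows.
// $1 is the id of the last row of the previous page, carried over
// as the cursor.
const keysetQuery = `
  SELECT id, title
  FROM posts
  WHERE id > $1
  ORDER BY id
  LIMIT 20`;
```

The keyset form is also why the id must be orderable and monotonically increasing: "everything after the cursor" is only well defined if later rows always compare greater.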

[-] lysdexic@programming.dev 4 points 11 months ago

Because while you do have control (and “copies”) of the source code repository, that’s not really true for the ecosystem around it - tickets, pull requests, …

The announcement to drop Mercurial quite clearly states that their workflow won't change and that GitHub pull requests are not considered a part of their workflow.

Also, that's entirely irrelevant to start with. Either you care about software freedom and software quality, or you don't. If you care about software freedom, you care about having free and unrestricted access to FLOSS projects such as Firefox, which GitHub clearly provides. If you care about software quality, you trust the Firefox team to pick the absolute best tools for the job, which is exactly what they did.

[-] lysdexic@programming.dev 4 points 1 year ago* (last edited 1 year ago)

I’m not sure you are aware, but TypeScript is not the first language (...)

This discussion is about TypeScript.

[-] lysdexic@programming.dev 4 points 1 year ago

There’s no bulky management of a virtual environment, no make files, no maven, etc. Just a human-readable cargo.toml for your packages

From your perspective, what's the difference between a Cargo.toml and a requirements.txt, package.json, pom.xml, etc.? Is there any?


The title says it all: which version of Java do you work with?

Upvote any version that you work with on a weekly basis. Vote for as many as you'd like.

Metastable failures in the wild (muratbuffalo.blogspot.com)
[-] lysdexic@programming.dev 4 points 1 year ago* (last edited 1 year ago)

The main problem is that dynamic linking is hard.

That is not a problem; it is a challenge for those who develop implementations. Doing hard things is the job description of any engineer.

Dynamic linking does not even reliably work with C++, an “old” language with decades of tooling and experience on the matter.

This is not true at all. Basically all major operating systems rely on dynamic linking, and all of them support C++ extensively. If I recall correctly, macOS even supports multiple flavours of dynamic linking. On Windows, DLLs are used extensively by both system and userland applications. There are no problems other than versioning and version conflicts, and even those are solved problems.

You get into all kind of UB when interacting with a separate DSO, especially since there are minimal verification of the ABI compatibility when loading a dynamic library.

This statement makes no sense at all. Undefined behavior is just behavior the C++ standard intentionally leaves without a definition, imposing no restrictions on it. Implementations can and do fill in the blanks.

ABI compatibility is also a silly thing to bring up in the context of dynamic linking, because an ABI break affects static linking just the same.

So dynamic linking never really worked,

This statement is patently and blatantly false. There is no major operating system in use, not a single one, that does not use dynamic linking extensively, and that has been the case for decades.

[-] lysdexic@programming.dev 4 points 1 year ago

If there were a single language, the same broken logic would then be applied to frameworks and libraries, and we all know how many people bitch and whine over Java and its extensive standard library.

[-] lysdexic@programming.dev 4 points 1 year ago

I like old reddit. This project is a reminder that it's highly likely those bastards will start to work on making it unusable, if not outright end it.
