[-] lysdexic@programming.dev 9 points 9 months ago

From the whole blog post, the thing that caught my eye was the side remark regarding SPAs vs MPAs. It's one of those things people don't tend to think about, but once someone touches on the subject the problem becomes obvious. Modern JavaScript frameworks seem to focus on SPAs and try to shoehorn the concept everywhere, even when it clearly does not fit. Things have reached the point where rewriting browser history to make an SPA look like an MPA is now a basic feature of multiple frameworks, and it rarely works well.

Perhaps it's too extreme to claim that MPAs are the future, but indeed there are a ton of webapps that are SPAs piling on complexity just to masquerade as MPAs.

[-] lysdexic@programming.dev 8 points 10 months ago

Perhaps I'm being dense and the coffee hasn't kicked in yet, but I fail to see the new computing paradigm that's mentioned in the title.

From their inception, computers have been used to plug in sensors, collect their values, and use them to compute stuff and things. For decades, each and every consumer-grade laptop has had adaptive active cooling, which means spinning up fans and throttling down CPUs when sensors report values over a threshold. One of the most basic aspects of programming is checking whether a memory allocation was successful, and otherwise handling the out-of-memory scenario. Updating app state when network connections go up or down is also a very basic feature. Concepts like retries, jitter, and exponential backoff have become basic features provided by dedicated modules. From the start, Docker provided support for health checks, which are basically an endpoint designed to be probed periodically. There are also canary tests to check whether services are reachable and usable.
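
To make the retry point concrete, here's a minimal sketch of retry delays with exponential backoff and "full jitter"; the class and parameter names are illustrative, not from any particular library:

```java
import java.util.concurrent.ThreadLocalRandom;

public class Backoff {
    // Delay before the nth retry: exponential growth capped at maxMillis,
    // then "full jitter" (a uniform draw between 0 and the capped value).
    static long delayMillis(int attempt, long baseMillis, long maxMillis) {
        long capped = Math.min(maxMillis, baseMillis * (1L << Math.min(attempt, 30)));
        return ThreadLocalRandom.current().nextLong(capped + 1);
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 5; attempt++) {
            long d = delayMillis(attempt, 100, 10_000);
            System.out.println("attempt " + attempt + " -> sleep up to " + d + " ms");
        }
    }
}
```

The jitter is what keeps a fleet of clients from retrying in lockstep after a shared outage, which is exactly the kind of "reacting to the environment" behavior that has been standard practice for decades.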

These have existed for decades. This stuff has been done in production software since the 90s.

Where's the novelty?

[-] lysdexic@programming.dev 9 points 10 months ago

I'd love to see benchmarks testing the two, and out of curiosity also including compressed JSON docs to take into account the impact of payload volume.

Nevertheless, I think there are two major features that differentiate Protobuf and Fleece:

  • Fleece is implemented as an appendable data structure, which might open the door to some use cases;
  • Protobuf supports more data types than the ones supported by JSON, which may be a good or a bad thing depending on the perspective.

In the end, if the world survived with XML for so long, I'd guess we can live with minor gains just as easily.

[-] lysdexic@programming.dev 10 points 11 months ago

We spend so much time building devices that are meant to break and be unfixable, and writing software that fights the user instead of helping them.

Kudos to the EU for forcing mobile phone manufacturers to support replaceable batteries and standardize on USB-C charging.

[-] lysdexic@programming.dev 9 points 11 months ago* (last edited 11 months ago)

you meant that the focus of the change wasn’t GH

They are dropping Mercurial and focusing on Git. Incidentally, they happen to host the Git project on GitHub. GitHub is used for hosting, and they don't even use basic features such as pull requests.

Again, this is really not about GitHub at all.

[-] lysdexic@programming.dev 8 points 11 months ago

Github for organizations becomes rather expensive rather quickly (...)

I'm not sure that's relevant. GitHub's free plan also supports GitHub organizations, and GitHub's Team plan costs only around $4 per developer per month. You can do the math to check how many developers you'd have to register in a GitHub Team plan to match the operational expense of hiring a person to manage a self-hosted instance 9 to 5.
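
As a rough back-of-the-envelope check (the $4/developer/month figure is from the comment above; the admin salary is an assumption for illustration, not a quoted number):

```java
public class BreakEven {
    public static void main(String[] args) {
        double perDevMonth = 4.0;           // GitHub Team list price, $/developer/month
        double adminSalaryYear = 80_000.0;  // assumed fully-loaded cost of one admin, $/year
        // Headcount at which the plan's annual cost equals one admin's salary.
        double breakEvenDevs = adminSalaryYear / (perDevMonth * 12);
        System.out.printf("break-even headcount: %.0f developers%n", breakEvenDevs);
    }
}
```

Under those assumptions you'd need well over a thousand developers on the paid plan before it costs as much as one dedicated person running a self-hosted instance.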

[-] lysdexic@programming.dev 9 points 1 year ago

I feel like so much effort is spent trying to solve problems that just aren’t problems.

I don't think your belief has any merit.

The popularity of tools such as Lombok and of JVM languages such as Kotlin demonstrates the pressing need to eliminate the boilerplate code Java requires to do basic things.

It matters little that an IDE can generate all the getters and setters you wish. The problem is the need to generate all those getters and setters for a very mundane and recurrent use case in the first place. All this boilerplate code adds to the cognitive load and maintenance burden of every project, and contributes to the introduction of bugs.
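
For illustration, here's the same value class written with hand-rolled boilerplate and then as a Java 16+ record (the `Point` example is hypothetical, not from the discussion):

```java
// Hand-written: constructor, accessors, equals, hashCode, toString — all boilerplate.
final class PointClassic {
    private final int x;
    private final int y;
    PointClassic(int x, int y) { this.x = x; this.y = y; }
    int x() { return x; }
    int y() { return y; }
    @Override public boolean equals(Object o) {
        return o instanceof PointClassic p && p.x == x && p.y == y;
    }
    @Override public int hashCode() { return 31 * x + y; }
    @Override public String toString() { return "Point[x=" + x + ", y=" + y + "]"; }
}

// The record declaration generates all of the above.
record Point(int x, int y) {}

class Demo {
    public static void main(String[] args) {
        System.out.println(new Point(1, 2));                          // Point[x=1, y=2]
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // true
    }
}
```

Every line in `PointClassic` is a line that has to be reviewed, kept in sync when fields change, and that can silently go wrong (say, an `equals` that forgets a field); the record version leaves nothing to get out of sync.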

I can count on the fingers of one hand the number of times I’ve actually needed to write a hash or equals method.

That's fine. Other people write code and are able to assess their own needs, and the verdict is that not having to write boilerplate code beats having to write it.

If your personal experience was shared by many, Lombok or Kotlin would not be popular.

[-] lysdexic@programming.dev 9 points 1 year ago

I like this bit because it really is a common answer whenever someone complains about how maddening/inefficient some tooling is nowadays.

I don't think this is a valid take. What we see in these vague complaints about levels of abstraction is actually an entirely different problem: people complaining that they don't understand things, and that the cognitive load of specific aspects is too much for them to handle.

If the existing layers of abstraction were actually a problem and they solved nothing, and if removing them would solve everything, it would be trivial to remove them and replace them with the simpler solutions these critics idealize.

Except that never happens. Why is that, exactly?

[-] lysdexic@programming.dev 9 points 1 year ago* (last edited 1 year ago)

FYI, there's a TypeScript community in Lemmy.

!typescript@programming.dev

I'm sure that any non-trolling/flamebait discussion over TypeScript is welcomed in there.

[-] lysdexic@programming.dev 9 points 1 year ago

You’re right that that’s extremely unambiguous, but I still don’t love the idea that users don’t get to decide what’s in $HOME, like, maybe we could call it “$STORAGE_FOR_RANDOM_BULLSHIT” instead?

That's basically what $HOME is used for in UNIX: a place for applications to store user-specific files, including user data and user files.

https://www.linfo.org/home_directory.html

If anything in computing conventions implies “user space” it’s a global variable named HOME. And it makes sense that there should be a $STORAGE_FOR_RANDOM_BULLSHIT location too - but maybe not the same place?

UNIX, and afterwards Unix-like OSes, were designed as multi-user operating systems that supported individual user accounts. Each user needs to store their data, and there's a convenient place to store it: their $HOME directory. That's how things have been designed and how they have worked for close to half a century.

Some newer specs such as Freedesktop's directory specification build upon the UNIX standard and Unix-like tradition, but the truth of the matter is that there aren't that many reasons to break away from this practice.
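
In fact the Freedesktop basedir spec is defined *in terms of* $HOME: its directories are environment overrides whose defaults fall back under the home directory. A sketch of that fallback rule (the spec defines XDG_DATA_HOME as defaulting to $HOME/.local/share when unset or empty; the class name here is made up):

```java
public class UserDirs {
    // Per the XDG Base Directory Specification, XDG_DATA_HOME defaults
    // to $HOME/.local/share when it is unset or empty.
    static String dataHome(String xdgDataHome, String home) {
        return (xdgDataHome == null || xdgDataHome.isEmpty())
                ? home + "/.local/share"
                : xdgDataHome;
    }

    public static void main(String[] args) {
        String home = System.getenv("HOME") != null
                ? System.getenv("HOME")
                : System.getProperty("user.home");
        System.out.println(dataHome(System.getenv("XDG_DATA_HOME"), home));
    }
}
```

So even the "newer" convention is a refinement of the half-century-old one, not a break from it.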

[-] lysdexic@programming.dev 8 points 1 year ago

but we can agree on which of two implementations is shorter.

Shortness for the sake of being short sounds like optimizing for the wrong metric. Code needs to be easy to read, but it's more important that the code is easy to change and easy to test. Inlined code and direct function calls are known for rendering code untestable, while introducing abstract classes and handles is a well-known technique for stubbing out dependencies.

[-] lysdexic@programming.dev 9 points 1 year ago

That should be a disciplinary issue.

And that's how you get teams to stop collaborating and turn your work environment to shit.

