[-] lysdexic@programming.dev 0 points 1 year ago

It’s used because the ones who use it have enough money to pay for any problems that may arise from it’s use, (...)

That's laughable. Literally the whole world uses it. Are you telling me that everyone in the world just loves to waste money? Unbelievable.

[-] lysdexic@programming.dev 0 points 1 year ago

You’d have had me ignore them all and keep using C for everything.

Please tell me which language other than C is widely adopted to develop firmware.

You're talking about so many up-and-comers during all these decades. Name one language other than C that ever came close to becoming a standard in firmware and embedded development.

Right.

[-] lysdexic@programming.dev 0 points 2 years ago

If you write it down it is documentation.

I think you're not getting the point.

Writing something down means nothing by itself. For a project, only the requirements specification matters. The system requirements specification document lists exactly what you need to deliver and under which conditions. A README.md or a post on some random wiki carries no such weight.

Requirements are not the same thing as specifications either, but both are documentation!

https://en.wikipedia.org/wiki/System_requirements_specification

[-] lysdexic@programming.dev 0 points 2 years ago

All of the other things you mention can be solved with money. In terms of the things that are easy and hard, this very much the former.

I don't think you know what you're talking about, or that you have any experience working in a corporate environment and asking for funding or extraordinary payments to external parties to deliver something. I personally know of cases where low-level grunts opt to pay for licenses out of pocket just to avoid the hassle of jumping through the necessary hoops. You don't just reach for the cash bag and throw money at things. Do you think corporations work like hip-hop videos?

[-] lysdexic@programming.dev 0 points 2 years ago

C has always been (...)

I think you tried too hard to see patterns where there are none.

It's way simpler than you make it out to be: C was one of the very first programming languages put together. Its authors rushed to get a working compiler while using it to develop an operating system. In the 70s you did not have the benefit of leveraging half a century of UX, DX, or any X at all. The only X in the equation was the developers' own personal experience.

Once C became a reality, it stuck. And because preserving backward compatibility matters so much, it has stayed mostly the same.

Rust was different. Rust was created after the world had piled up science, technology, experience, and craftsmanship for half a century. Its authors had the benefit of a clean-slate approach and insight into what had and hadn't worked before. They started with a wealth of knowledge and insight already in place.

That's it.

[-] lysdexic@programming.dev 0 points 2 years ago* (last edited 2 years ago)

And that’s how software development became just another profession.

I don't think that's a healthy way of framing things. Software development was always, from the very start, just another profession. What changed in the last decade or so was a) supply and demand in the job market, and b) the quality of the pool of workers searching for jobs. Companies still look for developers, and most still pay handsomely, but the hiring bar is currently met only by those who are far more experienced and/or have paid attention to their career growth. You still see companies hiring people straight out of bootcamps, but those people come out of the bootcamp pipeline with proper portfolios and hit the ground running without requiring much training or onboarding.

In contrast, the blogger states that "After more than a decade of sitting behind a single company's desk, my CV looks bleak." A decade is a very long time to sit by without updating your skills, isn't it?

I saw this phenomenon throughout the past decade in the hiring loops I was involved in. At the peak of demand I saw a few developers with over a decade of experience interview for senior positions already defeated and broken, complaining that in their last roles they just went with the flow and never bothered to do anything relevant with their careers. They claimed they could fit the role and do whatever needed to be done, but the truth is that that's true of every single developer called in for a technical review. We needed some assurance that we were hiring the best candidate for the job, and these developers with a long history of "sitting behind a single company's desk" gave us nothing to work with. So why would we hire them over candidates who could actually show something?

[-] lysdexic@programming.dev 0 points 2 years ago* (last edited 2 years ago)

Like everyone has mentioned, because you want the data to persist across program runs.

An RDBMS does not imply persisted data. There are plenty of databases that support, or are explicitly designed to work as, in-memory data stores. Perhaps the best-known example is SQLite, whose in-memory mode is widely used.
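
For illustration, here's a minimal sketch in Python of SQLite's in-memory mode (the table and data are made up for the example):

```python
import sqlite3

# Connecting to ":memory:" creates a database that lives entirely in RAM:
# nothing touches the disk, and it vanishes when the connection closes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY, user TEXT)")
conn.execute("INSERT INTO sessions (user) VALUES (?)", ("alice",))

for row in conn.execute("SELECT id, user FROM sessions"):
    print(row)  # (1, 'alice')

conn.close()  # the whole database is gone
```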

[-] lysdexic@programming.dev 0 points 2 years ago

Also, it’s worth noting that cargo is a fairly good package manager all things considered.

Yes, I'm familiar with Cargo. My point was to highlight the absurdity of OP's remarks on "no bulky management of a virtual environment, no make files, no maven, etc." Once Rust fundamentalists take off their rose-tinted glasses, it's clear that Cargo is just as good (or as bad) as any contemporary integrated build system.

[-] lysdexic@programming.dev 0 points 2 years ago* (last edited 2 years ago)

I think it would be beneficial for their community to take the wish for more credit more serious and try to make him feel welcome.

I think they did. Apparently the maintainer trusted the first-time contributor enough to propose tackling another bug.

If the goal is to get more contributions, I think that's exactly what should happen. I feel the kernel maintainer is being treated unfairly.

Whining about getting extra work suggests the author didn't intend to contribute anything else and just put all their reputation chips on that one isolated ticket.

[-] lysdexic@programming.dev 0 points 2 years ago

A database carry the schema, structure, that allow you to validate that you are still having the structure you want.

So do all file formats.

SQLite is both a file and a database, but what I’m saying is that people shouldn’t mess with the file, but the database interface instead.

The same holds for all file formats: don't go around flipping random bits in a file, use a client instead.
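
To make that concrete, here's a sketch in Python using SQLite's own interface; the file name and schema are hypothetical, purely for illustration:

```python
import sqlite3

# Hypothetical file and schema, made up for the example.
conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT OR IGNORE INTO users (id, email) VALUES (42, 'old@example.com')")

# Going through the interface means the journal, the types, and every
# declared constraint are enforced for you.
conn.execute("UPDATE users SET email = ? WHERE id = ?", ("new@example.com", 42))
conn.commit()
conn.close()
# Flipping bytes in app.db with a hex editor would bypass all of that.
```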

I have nothing against third party clients, the important thing is keeping the structure.

That's what file format clients are for, and anyone can even roll out their own if they want it.

The facts a DB use the Filesystem behind the scenes, is an implementation details the user shouldn’t be much concerned about, some DB can do without Filesystems.

That's really beside the point. The point is that it doesn't make sense to frame using databases over files as using a higher-level client over persisted data.

[-] lysdexic@programming.dev 0 points 2 years ago

It is branching away from Java, even if it still uses it primarily.

I'm sorry to tell you, but I assure you it is not. A small subset of teams uses non-Java tech stacks, but only because they have very particular requirements, such as shipping Android apps or running on Linux devices. The bulk of the company has heavily standardized on Java and has no plans to ever move away from it.

Unusually, off the top of my head, I happen to know more .NET developers working there than Java developers, and interestingly they develop one of the services on AWS.

First of all, AWS is not Amazon.

Secondly, I can tell you for a fact that C# is one of the rarest tech stacks at Amazon. Even Amazon's internal build system does not support it.

I'm afraid you're talking about stuff you know close to nothing about.

[-] lysdexic@programming.dev 0 points 2 years ago* (last edited 2 years ago)

Context is whatever makes sense to provide to a consumer to help them debug it or respond to it

So it's both optional and unspecified. This means it can't be parsed or relied upon, especially by consumers. It's useless.

the same basic idea as in the rfc under details.

No, it isn't. Contrary to your ad-hoc format, RFC9457 specifies exactly the data type of detail and what its purpose is. This allows third parties to reliably consume resources that comply with RFC9457, while your ad-hoc format leaves clients no option other than to ignore it.
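
To make the contrast concrete, here's a sketch in Python of what that buys a consumer; the payload is adapted from the RFC's own "out of credit" example, and the member names (type, title, status, detail, instance) are the ones the spec defines:

```python
import json

# A problem details document as specified by RFC 9457
# (served with the application/problem+json media type).
body = json.loads("""
{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "status": 403,
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc"
}
""")

# Because the spec pins down each member's type and meaning, a generic
# client can consume this without knowing anything about the service:
problem_type = body.get("type", "about:blank")  # URI identifying the problem type
title = body.get("title")                       # short human-readable summary
detail = body.get("detail")                     # a string, if present, per the spec
print(problem_type, title, detail)
```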

IMO, it can’t easily be generalized. Some APIs may have context to provide, others may not.

It matters little what services can produce. What matters is whether clients can consume it. Your ad-hoc format fails to specify this field, which is also optional, and thus leaves clients no option other than to ignore it. It's unusable.

Success is something that you can sniff for after deserializing, as IIRC Fetch API will not throw except for a network errors, even in the event of a 4XX or 5XX.

What the Fetch API does or does not do is irrelevant. The responsibility of putting together a response and generating the resource shipped with it lies exclusively with your service. If it outputs a resource that is unable to tell clients what went on, that's a problem caused both by how your service is designed and by the ad-hoc format it outputs.

The main takeaway is that RFC9457 is well specified and covers the basic use cases, while your ad-hoc format is broken by design. So when you describe the RFC as "overwrought", what you're really describing is the half-baked approach you took.
