[-] lysdexic@programming.dev 0 points 6 months ago

All of the other things you mention can be solved with money. In terms of things that are easy and hard, this is very much the former.

I don't think you know what you're talking about, or have any experience working in a corporate environment and asking for funding or extraordinary payments to external parties to deliver something. I personally know of cases where low-level grunts opted to pay for licenses out of pocket just to avoid jumping through the necessary hoops. You don't just reach for the cash bag and throw money at things. Do you think corporations work like hip-hop videos?

[-] lysdexic@programming.dev -1 points 9 months ago

C syntax is simple, yes, but C semantics are not; there have been numerous attempts to quantify what percentage of C and C++ software bugs and/or security vulnerabilities are due to the lack of memory safety in these languages, and (...)

...and the bulk of these attempts don't even consider onboarding basic static analysis tools to projects.

I think this comparison is disingenuous. Rust has static code analysis checks built into the compiler, while C compilers don't. Yet you can still add static analysis checks to projects, and in my experience they do a pretty good job flagging everything from critical double-frees to newlines showing up where they shouldn't. How come these tools are kept out of the equation?

[-] lysdexic@programming.dev -1 points 9 months ago

Are you going to post links to all Wikipedia articles here?

What problem do you have with Wikipedia?

[-] lysdexic@programming.dev -1 points 10 months ago* (last edited 10 months ago)

Nobody’s perfect, and time has shown multiple times that you can’t trust human beings with memory safety.

That's perfectly fine. That's not a problem caused by UB, or involving UB.

Again, UB is a red herring.

It is however the language’s fault to allow UB in the first place.

It really isn't. Again, mindlessly parroting this claim doesn't give it any substance. Please think about it for a second. For starters, do you believe it would make any difference if the C or C++ standard defined how the language should handle dereferencing a null pointer? On some platforms NULL is a tombstone, but on others NULL actually points to a valid memory address. The standards purposely leave this undefined. Why is that? Seriously, think about it for a second.

Am I blaming those languages? Nah, it was a different time.

It really isn't. It's a design choice that reflects the need to work with the widest possible range of platforms. The standards have already been updated with backwards-incompatible changes, but even the latest revisions purposely include UB.

I repeat: I see people mindlessly parroting nonsense about UB when they clearly have no idea what they're talking about.

[-] lysdexic@programming.dev -1 points 10 months ago

Some people also feel strongly about topics they are very familiar with 🙂. I have experienced my fair share of undefined behaviour in C++ and it has never been a pleasant experience.

If you had half the experience you claim to have, you'd know that code that triggers UB is broken code by definition, and represents a bug that you introduced.

It's not the language's fault that you added bugs to the code. UB is a red herring.

Sure, sometimes use of undefined behaviour works (...)

You missed the whole point of what I said.

By definition, UB does not work. It does not work because by design there is no behavior that should be expected. By design it's up to the implementation to fill in the blanks, but as far as the language spec goes there is no behavior that should be expected.

Thus, code with UB is broken code, and if your PR relies on UB then you messed up.

Nevertheless, some implementations do use UB to add guardrails for typical problems. But if you crash into a guardrail, that does not mean you know how to drive. Do you get the point?

[-] lysdexic@programming.dev -1 points 10 months ago

What do you mean wrong “already”?

This is one of the problems in these discussions about undefined behavior: some people feel very strongly about topics they are entirely unfamiliar with.

According to the C++ standard, "undefined behavior may be expected when this document omits any explicit definition of behavior or when a program uses an erroneous construct or erroneous data." Some examples of undefined behavior still lead to the correct execution of a program, but even so the rule of thumb is to interpret all instances as wrong already.

[-] lysdexic@programming.dev -1 points 10 months ago

It seems to be a combination of both things. They believe that switching will attract contributors and make it more modern… but also they seem to have had some trouble with thread safety in C++ that would have required them to do some restructuring anyway.

It still feels like at best they are optimizing for the wrong metric, and at worst they are just trying to rationalize an arbitrary choice.

I mean, the first reason they point out is "high probability of still being relevant in a decade." Is Rust even a candidate in this domain? All leading programming languages have been around for longer than Rust and are actually specified in international standards, which ensures they will be around essentially forever. Rust provides nothing of the sort. Is anyone willing to bet that they will be able to build today's Rust projects a decade from now?

Also, Rust is renowned for having a steep, tough learning curve. Hardly the traits you want when trying to grow your potential user base.

More importantly, the threading work is limited to key architecture components that, once in place, are expected to change little to nothing. It's like picking .NET because you think it supports background processes well: the bulk of your code changes won't touch that, so what's the point?

Anyway, anyone is free to invest their time and effort in any venture without having to explain their motivations to anyone.

[-] lysdexic@programming.dev -1 points 11 months ago

Gitea is so much better than this.

Is it, though?

Also, Apache Allura supports revision control services other than Git, which apparently Gitea does not.

MIT licensed as well.

Why do you think that is relevant, especially given that Apache Allura is released under the Apache license?

[-] lysdexic@programming.dev -1 points 11 months ago

Use unsafe and write like you’re a C/C++ programmer. You can do it.

Onboard the C/C++ project to any C++ static code analysis tool and check back with me later.

This is a nothingburger.

[-] lysdexic@programming.dev -1 points 1 year ago* (last edited 1 year ago)

The problem with the article is that it's confusing hard real-time and low-latency requirements. Most UIs do not require hard real-time; even soft real-time is a nice-to-have, and users will tolerate some latency.

I don't think that's a valid take from the article.

The whole point of the article is that if a handler from a GUI application runs for too long then the application will noticeably block and degrade the user experience.

The real-time mindset is critical to being aware of this failure mode: handlers should have a time budget (compute, waiting for IO, etc.), beyond which the user experience degrades.

The whole point is that GUI applications, just like real-time applications, must be designed with these execution budgets in mind, and when they are not met the application needs to be redesigned to avoid these issues.

[-] lysdexic@programming.dev -1 points 1 year ago

Let me give you a quick life lesson.

You've typed too many words to try to rationalize your toxic behavior.

I repeat: pay attention to what you say and do. All communities have their bad apples, but this does not grant the likes of you the right to spoil the whole barrel.

[-] lysdexic@programming.dev -1 points 1 year ago

I'm going to play devil's advocate for a moment.

following best practices we laid out in our internal documentation

Are you absolutely sure those "best practices" are relevant or meaningful?

I once worked with a junior dev who only cared about "best practices" because of a hastily whipped-together document that specified nothing beyond coding style and whether spaces should appear before or after things. That junior dev proceeded to cite their own "best practices" doc with almost religious fervor in everyone else's pull requests. That stopped the moment I added a linter to the project, though mind you, the junior dev refused to run it.

What's the actual purpose of your "best practices" doc? Does it add any value whatsoever? Or is it just fuel for grandstanding and petty office politics?

his code works mind you,

Sounds like the senior dev is doing the job he was paid to do. Are you doing the same?

It’s weird because I literally went through most of the same training in company with him on best practices and TDD, but he just seems to ignore it.

Perhaps his job is to deliver value instead of wasting time with nonsense that serves no purpose. What do you think?
