[-] lysdexic@programming.dev -4 points 3 months ago* (last edited 3 months ago)

(...) you can see what’s going on with the rest of the company, too.

That's a huge security problem.

Edit: for those who are downvoting this post, please explain why you believe that granting anyone in the organization full access to all the projects used across all organizations does not represent a security problem.

[-] lysdexic@programming.dev -2 points 5 months ago* (last edited 5 months ago)

it’s about deploying multiple versions of software to development and production environments.

What do you think a package is used for? I mean, what do you think "delivery" in "continuous delivery" means, and what's its relationship with the deployment stage?

Again, a cursory search for the topic would stop you from wasting time trying to reinvent the wheel.

https://wiki.debian.org/DebianAlternatives

Debian packages support pre- and post-install scripts. You can also bundle a systemd service with your deb packages. You can install multiple alternatives of the same package and have Debian switch between them seamlessly. All of this has been available by default for over a decade.
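To make the "switch between them seamlessly" part concrete, here's a minimal sketch of the alternatives mechanism. The tool name `myapp` and its two versions are hypothetical; `--altdir`/`--admindir`/`--log` point at a temp directory so this runs without root, assuming `update-alternatives` is installed (it is on any Debian-based system):

```shell
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/alt" "$tmp/admin" "$tmp/bin"
# two hypothetical versions of the same tool
printf '#!/bin/sh\necho v1\n' > "$tmp/myapp-1"; chmod +x "$tmp/myapp-1"
printf '#!/bin/sh\necho v2\n' > "$tmp/myapp-2"; chmod +x "$tmp/myapp-2"
ua() { update-alternatives --altdir "$tmp/alt" --admindir "$tmp/admin" \
       --log "$tmp/alt.log" --quiet "$@"; }
# register both versions under one generic name; in auto mode the
# highest priority (20) wins
ua --install "$tmp/bin/myapp" myapp "$tmp/myapp-1" 10
ua --install "$tmp/bin/myapp" myapp "$tmp/myapp-2" 20
"$tmp/bin/myapp"                       # prints v2
ua --set myapp "$tmp/myapp-1"          # switch manually
"$tmp/bin/myapp"                       # prints v1
```

A real package would do the `--install` call from its postinst script, so installing two versions side by side gives you exactly this switch for free.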

[-] lysdexic@programming.dev -3 points 6 months ago

I have experience contributing to a semi successful FLOSS project, one that I’m 100% certain you use daily.

I'm not talking about contributing. A drive-by PR does not make you a maintainer, nor does it put you in a position to triage bugs. The problems I mention are the bread and butter of maintainers engaged in community support, which you would know if you had any semblance of experience in the subject.

And the truth of the matter is that your choice to use weasel words as segues into a rant on a tangent demonstrates your complete lack of insight and experience in the subject.

[-] lysdexic@programming.dev -3 points 7 months ago

Such a braindead exercise to see Redis follow suit

I agree, this sounds like a desperate cash grab.

I mean, cloud providers who are already using Redis will continue to do so without paying anything at all, as they're using stable versions of a software project already released under a permissive license. That ship has sailed.

Major cloud providers can certainly afford to develop their own services. If Amazon can afford S3 and DynamoDB, they can certainly build their own Redis-like in-memory cache from the ground up. In fact, Microsoft already announced Garnet, which apparently outperforms Redis by no small margin.

So who exactly is expected to pay for this?

[-] lysdexic@programming.dev -4 points 7 months ago* (last edited 7 months ago)

Oh, okay. I’ve never encountered a situation where I needed that bug fixed for the task but it shouldn’t be fixed as part of the task;

So you never stumbled upon bugs while doing work. That's ok, but others do. Those who stumble upon bugs see the value of being able to sort out local commits with little to no effort.

Also, some teams do care about building their work on atomic commits, because they understand the problems caused by mixing up unrelated work in the same PR, especially when auditing changes to track where a regression was introduced. You might feel it's ok to post a PR that does multiple things, like bumping a package version, linting unrelated code, fixing an issue, and commenting an unrelated package, but others know those are four separate changes and should be pushed as four separate PRs.

if they’re touching the same functionality like that I really don’t see the need for two PRs.

That's ok, not everyone works with QA teams. Once you grow past the scale where you have people whose job is to verify that a bug is fixed by following specific end-to-end tests, and to detect where a regression was introduced, you'll understand the value of first adding tests that verify the bug is fixed, and only afterwards changing the user-facing behavior. For those with free-for-all commit histories where "fixes bug" and "update" show up multiple times, paying attention to how a commit history is put together is hardly a concern.

[-] lysdexic@programming.dev -3 points 7 months ago

And those who don’t immediately insult

Pointing out that someone claims not to care about processes, when process is the critical aspect of any professional work, is hardly what I'd call an insult.

Just go ahead and say you don't use a tool and thus don't feel the need to learn it. Claiming that a tool's basic functionality is "a solution in search of a problem" is as good as announcing your obliviousness, and that you're discussing stuff you hardly know anything about.

[-] lysdexic@programming.dev -3 points 7 months ago

See, I don't think you understood the example. The commits build upon each other (bugs are fixed while you work on the task, and to work on your task you need the bugs to be fixed), and reordering commits not only takes no time at all, it's also the very last thing you do, and you have to do it just once.

[-] lysdexic@programming.dev -2 points 11 months ago* (last edited 11 months ago)

If it’s not constant you may get loop-invariant code motion. But only if the compiler can tell that it’s invariant.

The point is that if the predicate is evaluated at runtime then the compiler plays no role because there is no compile-time constant and all code paths are deemed possible.

I suppose what I should have said is more like “in many cases you won’t see any performance difference because the compiler will do that for you anyway.”

I understand that you're trying to say that compilers can leverage compile-time constants to infer if code paths are dead code or not.

That's just a corner case, though. Your compiler has no say in which code paths are dead if you're evaluating a predicate from, say, the response of an HTTP request. It doesn't make sense to treat this hypothetical scenario as realistic when you have no information on where a predicate is coming from.

[-] lysdexic@programming.dev -5 points 11 months ago

My advice: use descriptive variable names.

The article is really not about naming conventions.

[-] lysdexic@programming.dev -4 points 1 year ago* (last edited 1 year ago)

Specifically, do you worry that Microsoft is going to eventually do the Microsoft thing and horribly fuck it up for everyone?

I'm not sure you are aware, but Microsoft created TypeScript.

https://devblogs.microsoft.com/typescript/announcing-typescript-1-0/

Without Microsoft, TypeScript would not exist.

[-] lysdexic@programming.dev -4 points 1 year ago* (last edited 1 year ago)

This is certainly a way to dismiss all other programming paradigms, I suppose.

My comment has nothing to do with paradigms.

In fact, your strawman is disproven by the fact that there is no mainstream tech stack for the web which is not object-oriented and which doesn't provide a request pipeline using inversion of control for developers to register their event handlers. They all reimplement the exact same solution and follow the exact same pattern to handle requests.

[-] lysdexic@programming.dev -3 points 1 year ago

Rust's borrow checker is a bad example. There are already a few projects that target C++ and support both runtime and compile-time checks, not to mention static code analysis tools that can be added to any project.
