[-] douglasg14b@programming.dev 4 points 2 months ago* (last edited 2 months ago)

Too bad commenters are as bad at reading articles as LLMs are at handling complex scenarios, and equally as confident in their comments.

This is a pretty level-headed, calculated approach DARPA is taking (as expected from DARPA).

[-] douglasg14b@programming.dev 10 points 2 months ago

They're not onto anything here, as your comment further demonstrates.

[-] douglasg14b@programming.dev 6 points 3 months ago* (last edited 3 months ago)

They work great when you have many teams working alongside each other within the same product.

It helps immensely with having consistent quality, structure, shared code, review practices, CI/CD, etc.

The downside is that you essentially need an entire platform engineering team just to set up and maintain the monorepo: the tooling, custom scripts, custom workflows, etc. that support all the additional needs a monorepo and its users have. Something that would never be a problem in a single repository, like the list of pull requests, may need custom processes and workflows in a monorepo just because of the volume of changes.

(Ofc small monorepos don't require a full team doing maintenance and platform engineering, but you'll often still find yourself dedicating an entire FTE's worth of time to it.)
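
To give a flavor of that custom tooling, here's a hypothetical sketch (invented paths and team names, not any real tool) of the sort of script that routes a monorepo PR to owning teams based on which paths changed:

```typescript
// Hypothetical example of monorepo glue tooling: map changed file paths
// to owning teams so a CI bot can route PR reviews. Paths and team
// names are invented for illustration.
const owners: Record<string, string> = {
  "packages/billing/": "team-billing",
  "packages/auth/": "team-identity",
  "tools/": "platform-eng",
};

function teamsForChange(changedFiles: string[]): Set<string> {
  const teams = new Set<string>();
  for (const file of changedFiles) {
    for (const [prefix, team] of Object.entries(owners)) {
      if (file.startsWith(prefix)) teams.add(team);
    }
  }
  return teams;
}

// teamsForChange(["packages/auth/src/login.ts"]) -> Set { "team-identity" }
```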

It's similar to microservices in that a monorepo is a solution to an organizational scaling problem, not a technology scaling problem. It creates new problems you wouldn't otherwise have had to solve, and it requires additional work to stay effective and ergonomic. If those ergonomics and consistency issues aren't addressed, it will just devolve into a mess over time.

[-] douglasg14b@programming.dev 4 points 3 months ago

Yeah, and Electron already has a safeStorage API that handles the OS interop for you, which Signal isn't using, and a PR already exists to enable it...
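
For reference, a minimal sketch of what using it looks like in an Electron main process (the safeStorage calls are the real Electron API; the key file handling around them is illustrative only):

```typescript
// Minimal sketch of Electron's safeStorage in a main process. The
// safeStorage calls are the real API; the surrounding key handling
// is illustrative only.
import { safeStorage } from "electron";
import { readFileSync, writeFileSync } from "node:fs";

function storeDbKey(path: string, key: string): void {
  if (!safeStorage.isEncryptionAvailable()) {
    throw new Error("OS-backed encryption unavailable");
  }
  // Encrypts with a key protected by the OS credential store
  // (Keychain on macOS, DPAPI on Windows, libsecret on Linux).
  writeFileSync(path, safeStorage.encryptString(key));
}

function loadDbKey(path: string): string {
  return safeStorage.decryptString(readFileSync(path));
}
```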

[-] douglasg14b@programming.dev 7 points 6 months ago

This is a weird take, given that the majority of projects relevant to this article are massive projects with hundreds or thousands of developers working on them, over time periods that can be measured in decades.

You're pretending those don't exist and imagining fantasy scenarios where all large projects are made up of small modular pieces (while conveniently making no mention of all the new problems that raises in practice).

Replacing functions, replacing files, and rewriting modules is expected and healthy for any project. This article is referring to the tendency of programmers to believe that an entire project should be scrapped and rewritten from scratch, which seems to have nothing to do with your comment...?

[-] douglasg14b@programming.dev 7 points 1 year ago* (last edited 1 year ago)

I do feel like C# saw C++ and said "let's do that" in a way.

One of the biggest selling points of the language is its long-term consistency across repos, products, companies, etc. The language is largely recognizable regardless of where and by whom it's written, thanks to well-established conventions.

Adding more and more ways to do the same thing in slightly different ways is nice for the sake of choice, but it's also making the language less consistent and portable.

Meanwhile, important language features like discriminated unions are still missing, features other languages now ship by default. Solely from a type system perspective, C# is incredibly "clunky" compared to, say, TypeScript. The .NET ecosystem of course more than makes up for the difference, but the language itself is definitely not as enjoyable to work with.
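
For anyone unfamiliar, this is what a discriminated union looks like in TypeScript (illustrative types); in C# you'd hand-roll a class hierarchy or pull in a library like OneOf to get something similar:

```typescript
// A discriminated union: the `kind` field tags each variant, and the
// compiler narrows the type inside each switch branch.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    // No default needed: the compiler knows every case is covered.
  }
}
```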

[-] douglasg14b@programming.dev 5 points 1 year ago* (last edited 1 year ago)

The great thing about languages like C# is that you really don't need to "catch up". It's incredibly stable, and what you know from C# 8 (you could really get away with C# 6 or earlier) is more than enough to get you through the vast majority of personal and enterprise programming needs for the next 5-10 years.

New language versions add features, improve existing ones, and refine the ergonomics, without necessarily breaking or changing anything that came before.

That's one of the major selling points, really: stability and longevity, without sacrificing performance, features, or innovation.

[-] douglasg14b@programming.dev 4 points 1 year ago* (last edited 1 year ago)

.NET + EF Core + Vue/TS + Postgres, with Redis and Kafka as needed.

I can get applications built extremely quickly, and their maintenance costs are incredibly low. The backends are stable and can hang around for 5-10+ years without issues from ecosystem churn.

You can build a library of patterns and reusable code that you can bring to new projects to get them off the ground even faster.
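
As a trivial example of the kind of pattern that travels well between projects (a hypothetical helper, not from any particular library), a typed fetch wrapper the Vue/TS frontends can share:

```typescript
// Hypothetical reusable helper: a tiny typed fetch wrapper carried
// between Vue/TS projects. The result type is itself a discriminated
// union, so callers must handle the failure case.
type ApiResult<T> = { ok: true; data: T } | { ok: false; status: number };

async function getJson<T>(url: string): Promise<ApiResult<T>> {
  const res = await fetch(url);
  if (!res.ok) return { ok: false, status: res.status };
  return { ok: true, data: (await res.json()) as T };
}

// Usage in any new project:
// const user = await getJson<{ id: number; name: string }>("/api/users/1");
// if (user.ok) console.log(user.data.name);
```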

Would recommend.

[-] douglasg14b@programming.dev 5 points 1 year ago

Pretty much.

Take focusing on PR size, for instance. PR size may be a side effect of the maturity of the product, the type of work being performed, the complexity (or lack thereof) of the real-world space the problems touch, and the methodologies, habits, and practices of the team.

Looking only at PR size, or really any other single-dimensional KPI, leads you to lose the nuance that was driving the productivity in the first place.

Honestly, in my experience high productivity comes from a high level of unity in how the team thinks and approaches problems, and from how diligent they are about their decisions. That isn't necessarily something that's strictly learned; it can come down to getting the right people together.

[-] douglasg14b@programming.dev 4 points 1 year ago

This is the kind of stuff I expect to find in this kind of community! ADRs are a good topic that can help teams operate more maturely.

And fewer general career questions and low-level "what technology should I learn" posts 🤔

[-] douglasg14b@programming.dev 5 points 1 year ago* (last edited 1 year ago)

That single line of code may use a slow abstraction, miss edge cases, skip caching of reused values, lack optimization for the common path, or have any number of other issues, making it slower, more fragile, or sometimes not even a solution to the problem it's meant to solve.

More often than not, performance and robustness come at a significant increase in the amount of code you have to write in high-level languages... performance optimizations especially.

A high-performance parser I was involved in writing was nearly 60x the code (~12k LOC) of the lowest-LOC solution you could write (~200 LOC), but also several orders of magnitude faster. It covered more edge cases and could short-circuit to more optimal paths during parsing, improving performance for common use cases that had optimized code written just for them.

"More lines of code = slower"

It doesn't. This is a fundamental misunderstanding of software engineering, flawed in almost every way, to the point of being an armchair statement. Often the opposite is even objectively provable...
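
As a toy TypeScript illustration of that trade-off (nothing to do with the parser above, just the shape of the idea): the one-liner is short but allocates two intermediate arrays per call; the longer version does a single pass with a fast path for the common case and an explicit fallback for edge cases.

```typescript
// Naive one-liner: allocates an array of substrings plus an array of
// numbers just to sum a CSV of integers.
const sumCsvNaive = (csv: string): number =>
  csv.split(",").map(Number).reduce((a, b) => a + b, 0);

// Longer version: walks the string once via char codes, with zero
// intermediate allocations and a fast path for plain ASCII digits.
function sumCsvFast(csv: string): number {
  let total = 0;
  let current = 0;
  let negative = false;
  for (let i = 0; i < csv.length; i++) {
    const c = csv.charCodeAt(i);
    if (c >= 48 && c <= 57) {        // '0'..'9': the common path
      current = current * 10 + (c - 48);
    } else if (c === 44) {           // ',': end of field
      total += negative ? -current : current;
      current = 0;
      negative = false;
    } else if (c === 45) {           // '-': sign
      negative = true;
    } else {
      // Edge case the one-liner silently turns into NaN: fall back to
      // the general path for anything unexpected (whitespace, floats).
      return sumCsvNaive(csv);
    }
  }
  return total + (negative ? -current : current);
}
```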

[-] douglasg14b@programming.dev 6 points 1 year ago* (last edited 1 year ago)

Yes, tons, but it depends on the team and the software. If I'm on a small, inexperienced team, for example, I'm going to be doing a lot of the work; if I'm on a small but competent team, I may be doing a lot more design & abstraction than actual implementation work.

Right now as a tech lead I would say ~40% of my time is actual programming.
