Thanks! Turns out I have a lot more time on my hands to be found around the internet since I got laid off last month.
I wish I had more advice, but I'm in a similar boat, just got laid off earlier this month after being with the same company from Series A in 2018 all the way until today. I'm sending job applications and trying to get interviews, but it's hard to get past the resume screening stage, even with 8+ years of experience.
I've mainly been working in DevOps/SRE/Platform Infrastructure, but I am also an accomplished developer with a pretty thick portfolio of widely used open source projects, though it doesn't seem to matter.
There are so many applicants for every single job now that it feels hopeless, and of course every single opening wants you to waste your time on multiple asinine LeetCode gotcha questions.
If I lived somewhere with a public health system, I'd love to take what money I have saved up and open a traditional Middle Eastern bakery, but I need to do something that will bring health coverage for myself and my family. Who knows, I might just end up working at Trader Joe's. 🤷‍♂️
I think it's a stack that really pays off in the long run for solo projects. After a long week of work the last thing I want to do is go tracking down runtime errors (`undefined is not a function`, my old friend) or messing around with Docker containers and Kubernetes clusters. It also doesn't hurt that once you throw away the costly deployment abstractions, the operating expenses turn out to be a lot cheaper.
Highly recommended viewing if you'd like to learn more about the limits of reproducibility in the Docker ecosystem.
I understood your point, and while there are situations where it can be optional, in a context with hundreds of developers who work almost exclusively on macOS and mostly don't have any real Docker knowledge, let alone enough to set up and maintain alternatives to Docker Desktop, the only practical option becomes paying the licensing fees and taking the path of least resistance.
Lots of (incorrect) assumptions here, and generally a very poorly worded post that doesn't make any attempt to engage in good faith. These are the reasons for what I believe is my very first downvote of a comment on Lemmy.
NixOS on WSL2 is actually my development environment of choice these days! (With my tiling window manager komorebi, of course!)
I believe this is the Docker Desktop license pricing.
On an individual scale and even some smaller startup scales, things are a little bit different (you qualify for the free tier, everyone you work with is able to debug off-the-beaten-path Docker errors, knowledge about fixes is quick and easy to disseminate, etc.), but the context of this article and the thread on Mastodon that spawned it was a "unicorn" company with an engineering org comprised of hundreds of developers.
Hi!
First I'd like to clarify that I'm not "anti-container/Docker".
There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don't wanna copy-paste everything from there, but I'll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:
Some high level points on the "why":
- Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it's nice not to have to worry about a `docker build` command in the on-boarding docs failing inexplicably (from the POV of the regular joe developer) from one day to the next (see the sketch after this list)
- Cost: Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required) - this is very steep for something that doesn't guarantee reproducibility and has poor performance to boot (see below)
- Performance: Docker performance on macOS (and Windows), especially storage mount performance, remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default
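To make the reproducibility point concrete, here's a minimal sketch of my own (not from the article): the Dockerfile described in the comments is a typical unpinned one I'm assuming for illustration, and the image names are made up.

```sh
# A minimal sketch of the reproducibility problem, assuming a typical
# unpinned Dockerfile along the lines of:
#
#   FROM node:20        <- mutable tag, resolves to a different digest over time
#   RUN apt-get update && apt-get install -y build-essential
#   RUN npm install     <- without a lockfile, dependency versions drift too
#
# Building the exact same Dockerfile on two different days can produce
# different images, so the on-boarding `docker build` that worked for the
# last hire can fail inexplicably for the next one.
docker build --no-cache -t myapp:monday .
docker build --no-cache -t myapp:friday .   # same Dockerfile, run days later

# The image IDs (and contents) will usually differ:
docker image inspect --format '{{.Id}}' myapp:monday myapp:friday
```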
I think it's also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don't really apply to the latter.
More and more lulls with more and more years of experience. I hit the gym more, socialize more, cook more extravagantly, take walks more often, etc. The most important thing was training myself, during those lulls, to not give a damn when people were making stupid decisions at work that were going to bite them N months down the line.
If I can even help one person avoid that same fate, it's worth it!
tl;dr all the same caveats with self-hosted software apply; don't do anything you wouldn't do with a self-hosted database or monitoring stack.
The rules themselves are the same public rules in the IAM docs on AWS, GCP etc., while the collections of these public rules (e.g. the `storage_analytics_ro` example in the README) defined at the org level will likely be stored in two ways: 1) in a (presumably private) infra-as-code repo, most probably using the Terraform provider or a future Pulumi provider, 2) in the data store backing the service, which I talk about more below.

"Who received access to what" is something that is tracked in the runtime logs and audit logs, but as this is a temporary elevated access management solution, where anyone who is given access to the service can make a request that can be approved or denied, this is not the right place or tool for a general long-lived least-privilege mapping of "this rule => this person/this whole team".
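As a purely illustrative sketch of the infra-as-code half of that answer (the repo name below is made up, and I'm not quoting the provider's actual resource schema here):

```sh
# Hypothetical workflow: rule collections like storage_analytics_ro live in a
# private infra-as-code repo and are reviewed and applied like any other
# Terraform-managed resource; presumably the provider then writes them into
# the data store backing the service.
git clone git@github.com:example-org/satounki-policies.git   # made-up repo name
cd satounki-policies
terraform plan     # review proposed changes to the org's rule collections
terraform apply    # apply them via the Terraform provider
```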
This is largely up to the team responsible for the implementation and maintenance, just like it would be for a self-hosted monitoring stack like Prom + Grafana or a self-hosted PostgreSQL instance; you can have your data exposed through public IPs, FQDNs and buckets with PostgreSQL or Prom + Grafana, or you can have them completely locked down and only available through a private network, and the same applies with Satounki.
Yes, yes, yes, yes and yes, though the degree of confidence in each of these depends on the competence of the people responsible for implementing and maintaining the service, as is the case with all things self-hosted.
If deployed in an organization which doesn't adhere to at least a basic least-privilege permissions approach, there is nothing stopping a bad internal actor with Administrator permissions wherever this is deployed from opening up the database directly and making whatever malicious changes they want.