Arch Linux users, help needed!
(lemmy.ml)
Agreed, but the post was about helping install the aforementioned software. Imma yoink the idea for my future projects though.
We have to define what installing software even means. If you install a Flatpak, it basically does the same thing as Docker but somewhat differently. Snaps are similar.
"Installing" software generally means any way of getting the software onto your computer semi-permanently so you can run it. You still end up with its files unpacked somewhere; the main difference with Docker is that it ships the whole runtime environment along, in the form of a copy of a distro's userspace.
But fair enough, sometimes you do want to run things directly. Just pointing out it's not a bad answer, just not the one you wanted due to missing intent in your OP. Some things are so finicky and annoying to get running on the "wrong" distro that Docker is the only sensible way to install them. I run the Unifi controller in a container for example, because I just don't want to deal with Java versions and MongoDB versions. It comes with everything it needs, and I don't have to needlessly keep Java 8 around on my main system, potentially breaking things that need a newer version.
Basically, I should use docker as a VPS, right? The only thing I was taught in bootcamp was how to use docker to create a setup for a new dev for a specific codebase, i.e. download required packages to work on the codebase through docker and use AWS as a VPS using Elastic and S3 to display the website.
Kind of but also not really.
Docker is one kind of container, and a container is itself built out of a set of Linux namespaces.
It's possible to run containers as if they were virtual machines with LXC, LXD, or systemd-nspawn: those run an init system and have a whole Linux stack of their own running inside.
Docker/OCI take a different approach: we don't really care about the whole operating system, we just want apps to run in a predictable environment. So while the container does contain a good chunk of a regular Linux installation, it's there so that the application has all the libraries it expects. Usually it's network software that listens on a specified port. Basically, "works on my machine" becomes "here's my whole machine with the app on it, already configured".
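That "ship the environment with the app" idea is what a Dockerfile describes. A minimal sketch for a hypothetical Node.js web app (the app name, port, and base image are illustrative, not from the thread):

```dockerfile
# Start from a Debian-based userspace that already has the exact
# Node version the app expects -- this is the "good chunk of a
# regular Linux installation" shipped with the app.
FROM node:20-slim
WORKDIR /app
# Install dependencies into the image, not onto the host
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# The "specified port" the app listens on inside the container
EXPOSE 3000
CMD ["node", "server.js"]
```

Building this image bakes the app and all its libraries into one artifact, so it runs the same on any machine with a container runtime.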
And then we were like, well this is nice, but what if we have multiple things that need to talk to each other to form a bigger application/system? And that's where docker-compose and Kubernetes pods come in. They describe a set of containers that form a system as a single unit, and link them up together. In the case of Kubernetes, it'll even potentially run many, many copies of your pod across multiple servers.
The last one is usually how dev environments go: one container has all your JS tooling (npm, pnpm, yarn, bun, deno, or all of them, even). That's all it does, so you can't possibly have a Python library that conflicts or whatever. And you can't accidentally depend on tools you happen to have installed on your machine, because then the container won't have them and it won't work; you're forced to add them to the container. Then that's used to build and run your code, and now you need a database. You add a MongoDB container to your compose file, and now your app and your database are managed together, and containers can even reach each other by name! Now you need a web server to run it in a browser? Add NGINX.
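The setup described above can be sketched as a compose file. Service names, versions, and ports here are illustrative assumptions, not anything from the thread:

```yaml
# docker-compose.yml sketch: app + database + web server as one unit
services:
  app:
    build: .                  # the JS-tooling image described above
    depends_on:
      - mongo
    environment:
      # "mongo" resolves by service name on the compose network
      MONGO_URL: mongodb://mongo:27017/dev
  mongo:
    image: mongo:7            # pin whatever version this project needs
  nginx:
    image: nginx:stable
    ports:
      - "8080:80"             # browse the app at localhost:8080
```

`docker compose up` brings the whole system up together; a second project can have its own compose file with completely different Node and Mongo versions without the two ever touching.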
All isolated, so you can't be in a situation where one project needs Node 16 and an old version of Mongo, but another one needs 20 and a newer version of Mongo. You don't care: each project has a Mongo container with the exact version required, no messing around.
Typically you don't want to use Docker as a VPS though. You certainly can, but the overlay filesystems become inefficient and the container drifts very far from the base image. LXC and nspawn are better tools for that; they don't use image stacking or anything like that, just a good ol' folder.
That's just some applications of namespaces. Process, network, time, user/group, and filesystem/mount namespaces can all be managed independently, so two containers can share the same network namespace while living in different mount namespaces.
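You can see the namespace kinds listed above on any Linux box: every process's memberships are exposed under /proc, and a container is just a process placed into a different set of them. A quick look (plain Linux, no container runtime needed):

```shell
# Each entry is one namespace this shell currently belongs to
# (mnt, net, pid, user, time, uts, ipc, cgroup).
ls /proc/self/ns

# Each link resolves to a namespace ID; two processes sharing a
# namespace show the same inode number here.
readlink /proc/self/ns/net
```

Tools like `unshare` and `nsenter` manipulate exactly these objects, which is all Docker, LXC, and friends are doing under the hood.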
And that's how Docker, LXC, nspawn, Flatpak, and Snaps are all kinda the same thing under the hood, and why it's a very blurry line between which ones you consider isolation layers, bundled dependencies, containers, or virtual machines. There's an infinite number of ways you can set up the namespaces, ranging from seeing /tmp as your own personal /tmp all the way to basically a whole VM.