submitted on 05 Aug 2023 by j4k3@lemmy.world to c/linux@lemmy.ml

I'm doing a bunch of AI stuff that needs compiling to try various unrelated apps. I'm making a mess of config files and extras. I've been using distrobox and conda. How could I do this better? Chroot? Different user logins for extra home directories? Groups? Most of the packages need access to CUDA and localhost. I would like to keep them out of my main home directory.

[-] DryTomatoes@lemmy.world 19 points 1 year ago* (last edited 1 year ago)

I did Linux From Scratch recently and they have a brilliant solution. Here's the full text but it's a long read so I'll briefly explain it. https://www.linuxfromscratch.org/hints/downloads/files/more_control_and_pkg_man.txt

Basically you make a new user with the name of the package you want to install. Log in as that user, then compile and install the package.

Now when you search for files owned by the user with the same name as the package you will find every file that package installed.

You can document that somewhere or just use the find command when you are ready to remove all files related to the package.
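A rough sketch of the idea (package name and prefix are just examples; the LFS hint covers making the install directories group-writable so a non-root user can install):

sudo useradd -m somepkg                        # one throwaway user per package
sudo su - somepkg
./configure --prefix=/usr/local && make && make install
exit
sudo find / -user somepkg                      # lists every file that install created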

I didn't actually do this for my own LFS build so I have no further experience on the matter. I think it will eventually lead to dependency hell when two packages want to install the same file.

I guess flatpaks are better about keeping libraries separate but I'm not sure if they leave random files all over your hard drive the way apt remove/apt purge does. (Getting really annoyed about all the crud left in my home dir)

[-] FarraigePlaisteach@kbin.social 6 points 1 year ago

That’s clever. It should work on any system, shouldn’t it?

[-] DryTomatoes@lemmy.world 2 points 1 year ago

Any POSIX compliant system as far as I know.

[-] FarraigePlaisteach@kbin.social 3 points 1 year ago

Thanks. I’ll keep that in mind for the future.

[-] j4k3@lemmy.world 2 points 1 year ago

Thanks for the read. This is what I was thinking about trying but hadn't quite fleshed out yet. It is right on the edge of where I'm at in my learning curve. Perfect timing, thanks.

Do you have any advice for when the packages are mostly Python-based instead of using makefiles?

[-] doot@social.bug.expert 5 points 1 year ago

for python, a bunch of venvs should do it
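For example (paths and names are just placeholders):

python3 -m venv ~/venvs/toolname        # one venv per tool
source ~/venvs/toolname/bin/activate
pip install -r requirements.txt
deactivate
rm -rf ~/venvs/toolname                 # deleting the directory removes everything it installed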

[-] DryTomatoes@lemmy.world 2 points 1 year ago

This method should work with any command that's installing files on your disk but it's probably not worth the headache when virtual environments exist for python.

[-] j4k3@lemmy.world 2 points 1 year ago

Python, in these instances, is being used as the installer script. As far as I can tell it involves all of the same packaging and directory issues as what make is doing. Like, most of the packages have a Python startup script that takes a text file and installs everything from it. This usually includes a pip git+address or two. So far, just getting my feet wet to try out AI has been enough for me to overlook what all is happening behind the curtain. The machine is behind an external whitelist firewall all by itself. I am just starting to get to the point where I want to dial everything in so I know exactly what is happening.

I've noticed a few oddball times during installations pip said something like "package unavailable; reverting to base system." This was while it is inside conda, which itself is inside a distrobox container. I'm not sure what "base system" it might be referring to here or if this is something normal. I am probing for any potential gotchas revolving around python and containers. I imagine it is still just a matter of reading a lot of code in the installation path.

[-] DryTomatoes@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

I hope someone who has more info comes along. It might be time for you to make a new post though since we're getting to the heart of the problem now.

Also it will be a lot easier for people to diagnose if you are specific about which programs you are failing to install.

I've only experimented with Python in docker and it gave me a lot of headaches.

That's why I prefer to pip install things inside venvs because I can just tar them myself and have decent portability.

But since you're installing files across the system, I'm not sure what the best solution is.

[-] demesisx@infosec.pub 13 points 1 year ago
[-] Lily33@kbin.social 4 points 1 year ago

NixOS containers could do what OP's asking for, but it'll be trickier with just nix (on other distro). It'll handle build dependencies and such, but you'll still need to keep your home or other directories clean some other way.

[-] demesisx@infosec.pub 5 points 1 year ago

OP could use flakes to create these dev environments and clean them up without a trace once done.
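For example, with a flake.nix that defines a dev shell in the project directory (a sketch, assuming flakes are enabled):

nix develop              # drop into the declared build environment
# ...compile and experiment...
exit
nix-collect-garbage -d   # the store paths it pulled in can be cleaned up afterwards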

[-] Lily33@kbin.social 0 points 1 year ago

Any files created by programs running in the dev environments will remain.

[-] demesisx@infosec.pub 3 points 1 year ago
[-] Lily33@kbin.social 3 points 1 year ago* (last edited 1 year ago)

It does NOT delete any files that were written to, for example, ~/.local or ~/.config from inside the dev shell.

One of OP's problems was,

I’m making a mess of config files and extras.

I use a mixture of systemd-nspawn and different user logins. This is sufficient for experimentation; for actual use I try to package (makepkg) those tools so they're organized by my package manager.

Also LVM thinpools with snapshots are a great tool. You can mount a dedicated LV to each single user home to keep everything separated.
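A sketch with made-up VG/pool names:

lvcreate -T vg0/thinpool -V 50G -n ai-sandbox        # thin LV for one experiment's home
mkfs.ext4 /dev/vg0/ai-sandbox
mount /dev/vg0/ai-sandbox /home/ai-sandbox
lvcreate -s -n ai-sandbox-snap vg0/ai-sandbox        # cheap thin snapshot before big changes
# revert later with: lvconvert --merge vg0/ai-sandbox-snap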

[-] jet@hackertalks.com 7 points 1 year ago

Qubes: you can install software inside of its own disposable VM. Or it can be a persistent VM where only the data in /home persists. Or it can be a VM where the root persists. You have a ton of control, and it's really useful to see what's changed in the system.

All the other solutions here work inside the operating system; Qubes does it from outside the operating system.

[-] Kangie@lemmy.srcfiles.zip 6 points 1 year ago

I use Gentoo where builds from source are supported by the package manager. ;)

Overall though, any containerisation option such as Docker / Podman or Singularity is what I would typically do to put things in boxes.

For semi-persistent envs a chroot is fine, and I have a nice Gentoo-specific chroot script that makes my life easier when reproing bugs or testing software.
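The generic (non-Gentoo-specific) version looks roughly like this, assuming you've already unpacked a rootfs into /mnt/buildroot:

mount --types proc /proc /mnt/buildroot/proc
mount --rbind /sys /mnt/buildroot/sys
mount --rbind /dev /mnt/buildroot/dev
chroot /mnt/buildroot /bin/bash
# build inside; when done, unmount everything and delete /mnt/buildroot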

[-] j4k3@lemmy.world 1 points 1 year ago

Wait. Does emerge support building packages natively when they are not from Gentoo?

Most of the stuff I'm messing with is mixed repos with entire projects that include binaries for the LLMs, weights, and such. Most of the "build" is just setting up the python environment with the right dependency versions for each tool. The main issues are the tools and libraries like transformers, pytorch, and anything that interacts with CUDA. These get placed all over the file system for each build.

[-] Kangie@lemmy.srcfiles.zip 3 points 1 year ago

Ebuilds (Gentoo packages) are trivial to create for almost anything, so while the answer is 'no the package manager doesn't manage non PM packages', typically you'll make an ebuild (or two or three) to handle that because it's (typically) as easy as running make yourself. :)

[-] Gamey@lemmy.world 5 points 1 year ago

I think Podman should do a good job, but I never used it myself. Distrobox is built on it and a lot easier to use, so that's what I would recommend!

[-] VonReposti@feddit.dk 5 points 1 year ago

For "desktop" stuff (gaming, office etc.) I just install bare-metal, for "server" stuff I basically only look for containerisation in the form of Podman (Docker compatible). If it doesn't exist as a compose file it isn't worth my time.

[-] sneaky_b45tard@feddit.de 4 points 1 year ago

Not sure if that's a good idea but if you use Fedora, you also have your root on a BTRFS partition after a default installation. You could utilize the snapshot features of BTRFS to roll back after testing.
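A minimal sketch, assuming root is a btrfs subvolume (the /.snapshots location is just an example):

btrfs subvolume snapshot -r / /.snapshots/pre-test   # read-only snapshot before experimenting
# ...compile, install, make a mess...
btrfs subvolume list /                               # see what snapshots exist
# rolling back means booting into the old snapshot, e.g. after
# btrfs subvolume set-default <subvol-id> /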

[-] j4k3@lemmy.world 2 points 1 year ago

I need to explore this BTRFS feature, I just don't have a good place or reason to start down that path yet. I've been on Silverblue for years, but decided to try Workstation for now. Someone in the past told me I should have been using BTRFS for FreeCAD saves, but I never got around to trying it.

[-] triplenadir@lemmygrad.ml 3 points 1 year ago

Software like stow keeps track of the files a build installs, and helps you remove them later.
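For example, with each build installed into its own stow directory (the paths are just the usual convention):

./configure --prefix=/usr/local/stow/mytool-1.0
make && sudo make install
cd /usr/local/stow
sudo stow mytool-1.0      # symlinks it into /usr/local
sudo stow -D mytool-1.0   # later: removes exactly those symlinks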

[-] Reborn2966@feddit.it 3 points 1 year ago

If it does not need a GUI, use Docker and log into it. Do your stuff, and when you are done, docker rm it and everything disappears.

You can enable CUDA inside the container; follow the docs for that.

Bonus point: VS Code can open itself inside a container.
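One variant of that throwaway workflow (the image tag is just an example, and GPU passthrough assumes the NVIDIA Container Toolkit is installed on the host):

docker run --rm -it --gpus all -v "$PWD":/work -w /work nvidia/cuda:12.2.0-devel-ubuntu22.04 bash
# experiment inside /work; exiting the shell discards the container thanks to --rm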

[-] meteokr@community.adiquaints.moe 2 points 1 year ago* (last edited 1 year ago)

You can use GUI stuff in Docker as well, though it can be a bit fiddly to set up.
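One common approach (X11 only, and you may also need to allow the connection with xhost; image and app names are placeholders) is to share the host's X socket:

docker run --rm -it \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-image some-gui-app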

[-] gabriele97@lemmy.g97.top 2 points 1 year ago

Have a look at distrobox.

[-] InverseParallax@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

I have an LXC config that enables GLX on X11 in the container; spin one up and throw stuff in there, on a temp ZFS volume.

Lxc-rm when done.

[-] user8e8f87c@berlin.social 2 points 1 year ago
[-] Glome@feddit.nl 2 points 1 year ago

Can't you completely remove the distrobox image and its contents later?

[-] j4k3@lemmy.world 1 points 1 year ago

By default it is just the packages and dependencies that are removed. Your /home/user/ directory is still mounted just the same. This puts all of your config and dot files in all the normal places. If you install another distro like Arch on a Fedora base, it also installs all of the extra root package locations for arch and these get left on the host system after removing the distrobox instance. So yeah it still makes a big mess.

[-] mrh@mander.xyz 6 points 1 year ago

You can mount any directory you want as the “home” directory of a given container with distrobox, it just defaults to using your home directory.
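Something like this (name, image, and path are just examples):

distrobox create --name ai-arch --image archlinux:latest --home ~/containers/ai-arch
distrobox enter ai-arch   # dotfiles and pip/conda junk now land under ~/containers/ai-arch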

[-] j4k3@lemmy.world 1 points 1 year ago

Do you happen to know what distrobox options there are for extra root directories associated with other distro containers, if there is an effective option to separate these, or if this is part of the remote "home" mount setting? I tried installing an Arch container on a fedora base system. Distrobox automatically built various Arch root directories even though the container should have been rootless.

[-] Sims@lemmy.ml 2 points 1 year ago

Haven't tried it (and don't use docker), so a wild shot: https://github.com/jupyterhub/repo2docker

'repo2docker fetches a repository (from GitHub, GitLab, Zenodo, Figshare, Dataverse installations, a Git repository or a local directory) and builds a container image in which the code can be executed. The image build process is based on the configuration files found in the repository.'

That way you can perhaps just delete the Docker image and everything is gone. It doesn't seem to depend on Jupyter.
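Typical usage seems to be roughly this (untested here; the repo URL is a placeholder):

pip install jupyter-repo2docker
jupyter-repo2docker https://github.com/some-user/some-ai-repo
# builds an image from the repo's config files and runs it;
# docker rmi <image-id> cleans it all up afterwards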

[-] epocsquadron@kbin.social 1 points 1 year ago

There’s a method using systemd-sysext that would work well for this on any distro without dealing with poking holes in containers. One of the gnome folks blogged about it recently here: https://blogs.gnome.org/alatiera/2023/08/04/developing-gnome-os-systemd-sysext/
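The rough shape of it (see the post for the details around the extension-release metadata file):

mkdir -p /var/lib/extensions/mytool/usr
# ...install your build under /var/lib/extensions/mytool/usr/...
# plus an extension-release file matching the host's os-release
sudo systemd-sysext merge     # overlays the extension onto /usr
sudo systemd-sysext unmerge   # and peels it off again cleanly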

[-] socphoenix@midwest.social 1 points 1 year ago

Chroot would be fine for this and not overly complicated

[-] Justin@apollo.town 1 points 1 year ago

I've never worried about this but I'd use Flatpak. The whole install goes in a specific directory and the metadata/config/data files go in their own specific directory.

[-] j4k3@lemmy.world 1 points 1 year ago

Those Flatpak configs are not quite as scattered; most are in .config, .var, or .local. Most Flatpaks leave junk behind in these directories. I just deleted a few today. A lot of the problems start happening when you need to compile stuff where each package has the same dependency but a different version of it in each one. Then you have a problem and need to track down some related library that is not in the execution path, and suddenly there are 10 copies of a dozen files all related to the stupid thing scattered all over your system. It becomes nearly impossible to track down which file is related to the container with the problem.

This is only an issue if you find yourself playing with software that is not yet supported directly by any packagers for Linux distros; stuff like FOSS AI right now.

[-] akik@lemmy.world 1 points 1 year ago* (last edited 1 year ago)
# build with an rpath so the binary finds its own libs, and keep the whole install under one prefix
export LDFLAGS="-Wl,-rpath=/sw/app/version/lib"
./configure --prefix=/sw/app/version
make
sudo make install
unset LDFLAGS
# everything lands under /sw/app/version, so removing that directory removes the package