145 points · submitted 6 months ago by christophski@feddit.uk to c/linux@lemmy.ml

When I first started using Linux 15 years ago (Ubuntu), if there was some software you wanted that wasn't in the distro's repos, you could bet there was probably a PPA you could add to your system in order to get it.
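The classic flow was just something like this (the PPA name here is purely illustrative):

```sh
# add the developer's PPA, refresh the package index,
# then install the app like any other repo package
sudo add-apt-repository ppa:some-team/some-app   # hypothetical PPA
sudo apt update
sudo apt install some-app
# from then on, a normal "apt upgrade" picked up new releases too
```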

Nowadays this seems basically dead. Some projects provide an AppImage, snap or flatpak, but these don't integrate well into the system at all and don't hook into the system updater.

I use Spek for audio analysis, and yesterday it told me I didn't have permission to read a file in a directory that I owned and definitely had permission to read. It took me ages to realise it was because Spek was installed as a snap.
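For anyone hitting the same thing: strictly confined snaps can only read the paths their connected interfaces allow, so the fix is usually granting an interface rather than touching file permissions. A rough sketch (which interfaces apply depends on what the Spek snap actually declares):

```sh
snap list | grep -i spek     # confirm it's installed as a snap at all
snap connections spek        # list which interfaces are (dis)connected
# e.g. files on an external drive need the removable-media interface,
# assuming the snap declares that plug:
sudo snap connect spek:removable-media
```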

I get that these new package formats bundle all the dependencies an app needs, but PPAs felt more centralised and better integrated, both with system updates and with the system itself. Have they just fallen out of favour?

[-] Gecko@lemmy.world 6 points 6 months ago* (last edited 6 months ago)

Jia Tan liked your comment

Without the traditional distribution workflow [...]

You are aware that the xz exploit made it into Debian Testing and Fedora 40 despite the traditional distribution workflows? Distro maintainers are not a silver bullet when it comes to security. They have to watch hundreds to thousands of packages, so having them do security checks for each package is simply not feasible.

[-] federico3@lemmy.ml 3 points 6 months ago

I am well aware of it. It is an example of the traditional distribution workflow preventing a backdoor from landing in Debian Stable and other security-focused distributions. Of course the backdoor could have been spotted sooner, but, given its sophistication, it could also have gone unnoticed much longer.

In the specific case of xz, "Jia Tan" had to spend years of effort gaining trust and then very carefully concealing the backdoor (and it still failed to reach Debian Stable and other distributions). Why so much effort? Because many simpler backdoors and vulnerabilities have been spotted sooner. Also, many less popular FOSS projects from unknown or untrusted upstream authors are simply not packaged at all.

Contrast that with distributing large "blobs", be it containers from Docker Hub, flatpaks, snaps, statically linked binaries, or dependencies pulled straight from upstream repositories (e.g. npm install): any vulnerability or backdoor can reach end users quickly and potentially stay unnoticed for years, as has happened many times.
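Even without a distro in the loop, end users can at least reduce the blast radius when pulling straight from upstream. A minimal sketch for the npm case (the flags are standard npm; everything else is up to your project):

```sh
# install exactly what the committed lockfile pins,
# instead of whatever upstream happens to publish today
npm ci
# and/or refuse to run package lifecycle scripts,
# a common vector for malicious payloads
npm install --ignore-scripts
```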

There have been various reports and papers published on the topic, for example https://www.securityweek.com/analysis-4-million-docker-images-shows-half-have-critical-vulnerabilities/

They have to watch hundreds to thousands of packages so having them do security checks for each package is simply not feasible.

That is what we do, and yes, it takes effort, but it still works better than the alternatives. Making attacks difficult and time-consuming is good security.

If there is anything to learn from the xz attack, it is that both package maintainers and end users should be less lenient in accepting blobs of any kind.
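Concretely, "less lenient" can start with verifying upstream artifacts before trusting them; a minimal sketch (filenames and key handling are illustrative):

```sh
# check the maintainer's detached signature, having obtained and
# verified the signing key fingerprint out of band beforehand
gpg --verify app-1.2.3.tar.gz.asc app-1.2.3.tar.gz
# or, at minimum, compare against a published checksum
sha256sum -c app-1.2.3.tar.gz.sha256
```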
