
Thought this was a good read exploring some of the "how and why", including several apparent sock puppet accounts that convinced the original dev (Lasse Collin) to hand over the baton.

[-] KarnaSubarna@lemmy.ml 142 points 8 months ago
[-] Kindness@lemmy.ml 97 points 8 months ago

Imagine finding a backdoor within 45 days of its release into a supply chain, instead of months after infection. This is an astoundingly rapid discovery.

Fedora 41 and Rawhide, Arch, a few testing and unstable Debian distributions, and some apps like Homebrew were affected. Not including Microsoft and other corporations who don't disclose their stack.

What a time to be alive.

[-] flying_sheep@lemmy.ml 22 points 8 months ago

Arch was never affected, as described in their news post about it. Arch users had malicious code on their hard disks, but not the part that would have called into it.

[-] SpaceCowboy@lemmy.ca 17 points 8 months ago

Before resting on our laurels, we should consider that it's possible it's more widespread but just not being disclosed until after it's patched.

It would be wise to be on the lookout for security patches for the next few days.

[-] Lemmchen@feddit.de 10 points 8 months ago

Consider this the exception to the rule. There's no reason we should assume this timeline is the norm.

[-] LiveLM@lemmy.zip 53 points 8 months ago* (last edited 8 months ago)

Disguising the virus as a corrupted test file then 'uncorrupting' it is crazy
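
The "corrupted test file" trick worked partly because the activating build script reportedly shipped only in the release tarballs, never in the git tree. A minimal sketch of one check that catches that class of divergence, comparing a tarball against the tagged source (throwaway directories stand in for a real repo and tarball here):

```shell
#!/bin/sh
# Hedged demo: flag files that exist only in a release tarball and not in
# the tagged git tree. In the xz incident, the malicious build-to-host.m4
# was reportedly tarball-only. The two directories below are stand-ins.
set -eu

repo=$(mktemp -d)      # simulates a checkout of the release tag
tarball=$(mktemp -d)   # simulates the unpacked release tarball

echo 'int main(void){return 0;}' > "$repo/main.c"
cp "$repo/main.c" "$tarball/main.c"
echo '# injected build logic' > "$tarball/build-to-host.m4"

# Anything "Only in" the tarball deserves a close look.
diff -rq "$repo" "$tarball" | grep "^Only in $tarball"

rm -rf "$repo" "$tarball"
```

Distros that build straight from upstream git tags instead of tarballs sidestep this particular hiding spot entirely.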

[-] Ephera@lemmy.ml 131 points 8 months ago

What's also pretty bad is that it intersects with another problem: the bus factor.

Having just one person as maintainer of a library is pretty bad. All it takes is one accident and no one knows how to maintain it.
So, you're encouraged to add more maintainers to your project.

But yeah, who do you add, if it's a security-critical project? Unless you happen to have a friend that wants to get in on it, you're basically always picking a stranger.

[-] Kindness@lemmy.ml 52 points 8 months ago

Unless you happen to have a friend that wants to get in on it, you’re basically always picking a stranger.

At risk of sounding tone deaf to the situation that caused this: that's what community is all about. The likelihood you know the neighbors you've talked to for years is practically nil. Your boss, your co-workers, your best friend and everyone you know, has some facet to them you have never seen. The unknown is the heart of what makes something strange.

We must all trust someone, or we are alone.

Finding strangers to collaborate with, who share your passions, is what makes society work. The internet allows you ever greater access to people you would otherwise never have met, both good and bad.

Everyone you've ever met was once a stranger. To make them known, extend blind trust, then quietly verify.

[-] umbrella@lemmy.ml 42 points 8 months ago* (last edited 8 months ago)

honestly these people should be getting paid if a corporation wants to use a small one-man foss project in their own multibillion-dollar software. the lawyer types in foss could put that in GPLv5 or something whenever we feel like doing it.

also hire more devs to help out!

[-] taladar@sh.itjust.works 21 points 8 months ago

If you think people are going to be trustworthy just because they are getting paid you are naive.

[-] umbrella@lemmy.ml 11 points 8 months ago* (last edited 8 months ago)

not trustworthy per se, but maybe less overworked and less inclined to review code hastily, or less tired and less prone to the bad judgement that makes such a project vulnerable to stuff like this.

these people maintain the basis of our entire software infrastructure thanklessly for us in between the full time jobs they need to survive, this has to change.

as for trust in foss projects, the community will often notice bad-faith code, just like it did here (and very quickly this time, i might add!)

[-] digdilem@lemmy.ml 22 points 8 months ago

I think bus factor would be a lot easier to cope with than a slowly progressing, semi-abandoned project and a White Knight saviour.

In a complete loss of a sole maintainer, then it should be possible to fork and continue a project. That does require a number of things, not least a reliable person who understands the codebase and is willing to undertake it. Then the distros need to approve and change potentially thousands of packages that rely upon the project as a dependency.

Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?

The inherited debt from existing projects would be massive, and perhaps this is largely covered already - I've never tried to get a distro to accept my software.

Nothing I've seen would completely avoid risk. Blackmail upon an existing developer is not impossible to imagine. Even in this case, perhaps the new developer in xz started with pure intentions and they got personally compromised later? (I don't seriously think that is the case here though - this feels very much state sponsored and very well planned)

It's good we're asking these questions. None of them are new, but the importance is ever increasing.

[-] taladar@sh.itjust.works 6 points 8 months ago

Maybe, before a library or any software gets accepted into a distro, that distro should do more due diligence to ensure it's a sustainable project and meets requirements like solid ownership?

And who is supposed to do that work? How do you know you can trust them?

[-] KarnaSubarna@lemmy.ml 122 points 8 months ago
  • Careful choice of program to infect the whole Linux ecosystem
  • Time it took to gain trust
  • Level of sophistication in introducing backdoor in open source product

All of these are signs of a persistent threat actor, aka state-sponsored hackers. Though we'll never know the real motive, as it's now a failed project.

[-] jackpot@lemmy.ml 28 points 8 months ago

imagine how pissed they are. or maybe they silently alerted the microsoft guy themselves, as they only did it for cash and they'd already been paid

[-] baseless_discourse@mander.xyz 24 points 8 months ago* (last edited 8 months ago)

I am sure most superpowers in the world can easily sink 2 years into maintaining an obscure project in order to break a system as important as OpenSSH.

I doubt they will be pissed over one failure, and we can only hope there aren't more vulnerable projects out there (spoiler alert: there are probably many).

[-] ILikeBoobies@lemmy.ca 86 points 8 months ago

Hopefully shows why you should never trust closed source software

If the world didn’t have source access then we would have never found it

[-] howrar@lemmy.ca 31 points 8 months ago

And if they do find it, it'll all be kept hush hush, they'll force an update on everyone with no explanation, some people will do everything in their power to refuse because they need to keep their legacy software running, and the exploit stays alive in the wild.

[-] hatedbad@lemmy.sdf.org 8 points 8 months ago

open source software getting backdoored by nefarious committers is not an indictment of closed source software in any way. this was discovered by a microsoft employee due to its effect on cpu usage and its introduction of faults in valgrind, neither of which required the source to discover.

the only thing this proves is that you should never fully trust any external dependencies.
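
A sketch of that style of black-box check: time an operation repeatedly, so a half-second regression between versions stands out with no source access at all. `sleep 0.1` is a stand-in for the operation under test (for the xz backdoor it was the sshd login round-trip), and `date +%s%N` assumes GNU date:

```shell
#!/bin/sh
# Hedged sketch: crude latency benchmark. A sudden jump between software
# versions is the kind of signal that exposed the xz backdoor.
set -eu

for i in 1 2 3; do
  start=$(date +%s%N)          # nanoseconds since epoch (GNU date)
  sleep 0.1                    # stand-in for e.g. an ssh login attempt
  end=$(date +%s%N)
  echo "run $i: $(( (end - start) / 1000000 )) ms"
done
```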

[-] BreakDecks@lemmy.ml 18 points 8 months ago

The difference here is that if a state actor wants a backdoor in closed source software they just ask/pay for it, while they have to con their way in for half a decade to touch open source software.

How many state assets might be working for Microsoft right now, and we don't get to vet their code?

[-] lemmyreader@lemmy.ml 52 points 8 months ago

"Paid for by a state actor" Yes, who knows.

  • Could be a lone "black hat" or a group of "black hats". Who knows.

  • Could be the result of a lot of public criticism in the news regarding Pegasus spyware. Who knows.

  • Could be paid by companies without any state actors involved. Who knows.

  • Could be a lone programmer who wants power or is seeking revenge for some heated mailing list discussion. Who knows.

The question of trust has been mentioned in this case of a sole maintainer with health problems. What I asked myself is: how did this trust develop years ago? People trusted Linus Torvalds and used the Linux kernel to build Linux distributions, to the point that the Linux kernel grew from a tiny hobby thing into a giant project. At some point compiling from source code became less fashionable and most people downloaded and installed binaries. New projects started, and instead of tar and gzip, things like xz and zstd were embraced. When do you trust a person or a project, and who else gets on board of a project? Nowadays something like:

curl -sSL https://yadayada-flintstones-revival.com | bash

is considered perfectly normal as the default installation of some software. Open source software is cool and has kind of produced a sort of revolution in technology but there is still a lot of work to do.
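
A sketch of a slightly less trusting flow than piping curl into bash: save the script, check it against a digest published out of band, read it, then run it. The installer below is a locally created stand-in (in real use it would come from `curl -sSLo install.sh <url>`), and the digest is computed on the spot only so the demo is self-contained:

```shell
#!/bin/sh
# Hedged sketch: verify-before-run instead of curl | bash.
set -eu

workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the downloaded installer.
printf 'echo "installing..."\n' > install.sh

# In real use this digest comes from the project's website or signed
# release notes, not from the file you just downloaded.
expected=$(sha256sum install.sh | cut -d' ' -f1)

# sha256sum -c exits non-zero on mismatch, aborting the script.
echo "$expected  install.sh" | sha256sum -c -

sh install.sh    # run only after the check (and ideally after reading it)
```

It's more friction than one pasted pipeline, which is exactly why the pipeline became the default.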

[-] Jennykichu@lemmy.dbzer0.com 24 points 8 months ago

Strongly doubt it's a lone actor for the reasons already given.

[-] tetris11@lemmy.ml 23 points 8 months ago* (last edited 8 months ago)

Bootstrapping a full distribution from a 357-byte seed file is possible in GUIX:

https://lemmy.ml/post/8046326

If that seed is compromised, then the whole software stack just won't build.

It's an answer to the "Trusting Trust" problem outlined by Ken Thompson in 1984.
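
The practical check underlying that bootstrapping story is reproducibility: build the same source twice, ideally with independent toolchains, and require bit-identical output. A toy sketch, with two copies of a file standing in for the two build artifacts:

```shell
#!/bin/sh
# Hedged sketch: compare digests of two independent builds. Any difference
# means a toolchain (or a Trusting Trust style compromise) changed the output.
set -eu

workdir=$(mktemp -d)
printf 'pretend binary\n' > "$workdir/build-a"
printf 'pretend binary\n' > "$workdir/build-b"

hash_a=$(sha256sum "$workdir/build-a" | cut -d' ' -f1)
hash_b=$(sha256sum "$workdir/build-b" | cut -d' ' -f1)

if [ "$hash_a" = "$hash_b" ]; then
  echo "builds match"
else
  echo "builds differ: investigate" >&2
  exit 1
fi
rm -rf "$workdir"
```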

[-] lemmyreader@lemmy.ml 45 points 8 months ago

Reading a bit into this: https://guix.gnu.org/manual/en/html_node/Binary-Installation.html The irony!

The only requirement is to have GNU tar and Xz.

[-] tetris11@lemmy.ml 7 points 8 months ago

Hahaha! Oh dear

[-] lemmyreader@lemmy.ml 8 points 8 months ago

That's cool. Thank you.

[-] dessalines@lemmy.ml 42 points 8 months ago

Any speculations on the target(s) of the attack? With Stuxnet, the US and Israel were willing to infect the whole world to target a few nuclear centrifuges in Iran.

[-] KarnaSubarna@lemmy.ml 25 points 8 months ago

Definitely a state-sponsored attack. It could be any nation, from the US to North Korea, and any other nation in between.

[-] khannie@lemmy.world 18 points 8 months ago

There is some indication, based on commit times and the VPN used, that it's somewhere in Asia. Really interesting detail in this write up.

The timezone bit is near the end iirc.
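
That timezone analysis can be repeated on any repo with plain git. A sketch, using a throwaway repo with one commit as a stand-in for the project under audit (commit offsets are attacker-controlled metadata, so this yields hints, not proof):

```shell
#!/bin/sh
# Hedged sketch: tally commits per author UTC offset.
set -eu

repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'demo commit'

# Offsets that clash with an actor's claimed location are the interesting bit.
git log --format='%ad' --date=format:'%z' | sort | uniq -c | sort -rn
```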

[-] GamingChairModel@lemmy.world 6 points 8 months ago

Good writeup.

The use of ephemeral third party accounts to "vouch" for the maintainer seems like one of those things that isn't easy to catch in the moment (when an account is new, it's hard to distinguish between a new account that will be used going forward versus an alt account created for just one purpose), but leaves a paper trail for an audit at any given time.

I would think that Western state sponsored hackers would be a little more careful about leaving that trail of crumbs that becomes obvious in an after-the-fact investigation. So that would seem to weigh against Western governments being behind this.

Also, the last bit about all three names seeming like three different systems of Romanization of three different dialects of Chinese is curious. If it is a mistake (and I don't know enough about Chinese to know whether having three different dialects in the same name is completely implausible), that would seem to suggest that the sponsors behind the attack aren't that familiar with Chinese names (which weighs against the Chinese government being behind it).

Interesting stuff, lots of unanswered questions still.

[-] Jennykichu@lemmy.dbzer0.com 25 points 8 months ago

Stuxnet was an extremely focused attack, targeting specific software on specific PLCs in a specific way to prevent them mixing nuclear batter into a boom boom cake. Even if it managed to affect the whole world, it was a laser compared to this wide net.

[-] SpaceCowboy@lemmy.ca 11 points 8 months ago

Given how low level it is and the timespan involved, there probably wasn't a specific use in mind. Just adding capability for a future attack to be determined later.

[-] Fubarberry@sopuli.xyz 39 points 8 months ago

I had assumed it was probably a state sponsored attack. This looks like it was planned from the beginning, and any cyber attack that had years of planning and waiting strikes me as state-sponsored.

[-] uriel238@lemmy.blahaj.zone 26 points 8 months ago

Historically there have been several instances of anarcho-communist organizations and social movements flourishing.

Most of them were sabotaged by plutocrats' agents invoking violence or mischief, often just by giving angry militants in the region some materiel support and bad intel.

[-] speaker_hat@lemmy.one 14 points 8 months ago* (last edited 7 months ago)

What if the unexpected SSH latency hadn't been introduced? Would this backdoor still be live?

I wonder how many OSS projects include backdoors that don't show up in performance checks

[-] cygon@lemmy.world 13 points 8 months ago* (last edited 8 months ago)

~~Linux~~ Unix since 1979: upon booting, the kernel shall run a single "init" process with unlimited permissions. Said process should be as small and simple as humanly possible and its only duty will be to spawn other, more restricted processes.

Linux since 2010: let's write an enormous, complex system(d) that does everything from launching processes to maintaining user login sessions to DNS caching to device mounting to running daemons and monitoring daemons. All we need to do is write flawless code with no security issues.

Linux since 2015: We should patch unrelated packages so they send notifications to our humongous system manager about whether they're still running properly. It's totally fine to make a bridge between a process that accepts data from outside before anyone has even logged in and our absolutely secure system manager.

Excuse the cheap systemd trolling; yes, it is actually split into several, less-privileged processes, but I do consider the entire design unsound. Not least because it creates a single, large provider of connection points that becomes ever more difficult to replace or create alternatives to (similar to web standards if only a single browser implementation existed).

[-] ConstantPain@lemmy.world 18 points 8 months ago

Yes, I remember Linux in 1979...

[-] BreakDecks@lemmy.ml 11 points 8 months ago

Linus was a child prodigy.

[-] blind3rdeye@lemm.ee 7 points 8 months ago

And so the microkernel vs monolithic kernel debate continues...

[-] mea_rah@lemmy.world 6 points 8 months ago

its only duty will be to spawn other, more restricted processes.

Perhaps I'm misremembering things, but I'm pretty sure the SysVinit didn't run any "more restricted processes". It ran a bunch of bash scripts as root. Said bash scripts were often absolutely terrible.
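
For anyone who never saw one: a minimal sketch of the SysV-style interface being described, a plain shell script that init runs as root, dispatching on start/stop ("mydaemon" is a placeholder, not a real service):

```shell
#!/bin/sh
# Hedged sketch of a classic init.d-style control script. Everything here
# would run as root, with no sandboxing beyond what the script does itself.

mydaemon_ctl() {
  case "$1" in
    start) echo "Starting mydaemon" ;;   # would exec /usr/sbin/mydaemon here
    stop)  echo "Stopping mydaemon" ;;   # would kill the recorded pid here
    *)     echo "Usage: mydaemon_ctl {start|stop}" >&2; return 1 ;;
  esac
}

mydaemon_ctl start
```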

[-] JoeKrogan@lemmy.world 11 points 8 months ago* (last edited 8 months ago)

I'm curious to know about the distro maintainers that were running bleeding edge with this exploit present. How do we know the bad actors didn't compromise their systems in the interim?

The potential of this would have been catastrophic had it made its way into the stable versions. They could have, for example, accessed the build servers for Tor or Tails or Signal and targeted the build processes. Not to mention banks and governments and who knows what else... Scary.

I'm hoping things change and we start looking at improving processes in the whole chain. I'd be interested to see discussions in this area.

I think the fact they targeted this package means that other similar packages will be attacked. A good first step would be identifying those packages that are used by many projects and have one or very few devs, even more so if they run with root access. More devs means more chances of scrutiny, so attackers would likely go for packages with one or few devs to improve the odds of success.

I also think there needs to be an audit of every package shipped in the distros. A huge undertaking; perhaps it can be crowdsourced, and the big companies (FAAGMN etc.) should heavily step up here and set up a fund for audits.

What do you think could be done to mitigate or prevent this in future?
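
One cheap triage step along these lines: count distinct recent commit authors per dependency and flag anything with one or two. A sketch, using a throwaway single-author repo as the stand-in dependency:

```shell
#!/bin/sh
# Hedged sketch: a low distinct-author count suggests a low bus factor and,
# per the reasoning above, a softer target for this kind of takeover.
set -eu

repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=main init -q .
git -c user.name=solo -c user.email=solo@example.com \
    commit -q --allow-empty -m 'only maintainer'

authors=$(git log --since='1 year ago' --format='%ae' | sort -u | wc -l)
echo "distinct authors in the last year: $authors"

if [ "$authors" -le 2 ]; then
  echo "low bus factor: prioritise for review and funding"
fi
```

Run inside each dependency's checkout instead of the throwaway repo to triage a real distro's package list.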

[-] ipkpjersi@lemmy.ml 7 points 8 months ago* (last edited 8 months ago)

Interesting to hear, and it wouldn't surprise me either tbh. At least none of my systems were vulnerable, apparently, which is good because I am running the latest Ubuntu LTS and latest Proxmox. If those were affected, then wow, this would have affected so many more people.

this post was submitted on 31 Mar 2024
458 points (98.3% liked)

Open Source
