Agree. Hell, I wouldn't be shocked if some corporations or even nation-state actors (e.g. the NSA) do this, in a much better, more professional manner, to ensure things like... backdoor access.
No hypothesis needed: https://en.wikipedia.org/wiki/EternalBlue can't have been a one-off either.
Yeah, that was my thought. But more a dedicated program to do something similar with large FOSS projects.
They also have hardware/supply-chain intercept programs to install backdoors in closed-source appliances (e.g. Cisco firewalls).
So something similar but dedicated to open source PRs.
Yeah. I think the discussion is kind of nonsensical and a tautology. Nothing in life is 100% safe, FOSS or not. And we don't know what we don't know. We do have a few known cases, like people trying to get malicious PRs merged or network equipment being intercepted in transit.
I think the more interesting question has long been: what's (or who is) your threat? Against a sufficiently motivated and resourced adversary, there are few real obstacles. Conversely, some people are just not interesting because there's little or nothing to gain from attacking them.
Exactly. I just wanted to point out that most of the people here honestly have no idea what they're talking about.
If people had read the articles about that 'study' on whether malicious pull requests would get accepted... and the aftermath... If they had read the articles about how the NSA(?) helped(?!) with the mathematical constants of elliptic curve encryption... how Cisco networking equipment got intercepted... If you knew how the internet and freedom worked... You'd know it's not that easy. Every 'simple' answer is just plain wrong. It depends... What is the threat model, what are you able and willing to invest, what are you trying to achieve? Sometimes you don't even know who's friend or foe.
Idk why people want to piss on open source software. It's a fact that one can have a look at open source code and not at closed source. And don't tell me nobody does, because I know I do. And millions of GitHub users contribute code and read some code here and there. And I know a few tech blogs that like to check apps and see if they respect privacy and so on. ... And that's not everything, as we pointed out earlier. Whether this helps you depends on your own goals and threat model.
I really enjoy the discussion here. Refreshing! Most of the time, as a relative non-expert, I have no idea what I'm doing, but I read things as much as I can. Otherwise I'm a fallen sysadmin who got a job managing cyber because the bills need to be paid.
Open or closed, it's all object code in the end, which can be examined in disassembly or have its behaviour observed at runtime. Open makes some processes easier in this area. I think the real strengths have gone beyond security: enhancing cooperation and reuse so we don't waste time constantly reinventing things.
Have you ever had a look at source code or disassembly? The first is like reading a book in which somebody gives the computer instructions. It's kinda readable (if you've learned it), and you can figure out with 'little' effort what it's supposed to do and what it's actually doing. Disassembly is like opening the maintenance door of a strange machine and seeing millions of moving cogs and wheels. Sure, you can figure out what a single cog is for, or how one part of the machine works. But you'd have to trace thousands of movements by hand, sometimes while it's running, and it takes you days, sometimes weeks, to do that, even with the help of quite sophisticated tools.
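To make the contrast concrete, here's a tiny made-up example (the function is illustrative, and the listing is roughly what an optimizing compiler on x86-64 would emit, not a verbatim dump):

```c
/* The source version: the intent is readable at a glance. */
int is_admin(int uid) {
    return uid == 0;
}

/* Roughly what an optimizing compiler turns that into (x86-64, Intel syntax):
 *
 *   xor   eax, eax     ; clear the return register
 *   test  edi, edi     ; is uid zero?
 *   sete  al           ; set the low byte of eax if so
 *   ret
 *
 * Even a one-liner becomes nameless register operations, and in a stripped
 * binary the function name is gone too. Scale that up to a multi-megabyte
 * binary and you get the "millions of cogs" problem described above. */
```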
You're right, there is a difference in effort. That said, source code can also be made obscure if you're trying to hide something. Behavioural analysis at runtime works either way, but it typically doesn't tell you anything about code coverage.
Sure. You can try to sneak something in that isn't obvious. But you can also try to evade behavioural analysis: don't load your malicious code if you detect you're running inside a virtual machine, stop sending packets if sniffer software is installed, only send data every two months, etc. It's an arms race either way.
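As a sketch of how cheap that first trick is (names made up, and this is only one of many heuristics real malware uses): on x86 a guest can simply ask CPUID whether a hypervisor is present.

```c
#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: CPUID leaf 1, ECX bit 31 is the "hypervisor present"
 * flag that most hypervisors (KVM, VMware, Hyper-V, ...) expose to guests.
 * A cautious payload would simply stay dormant when it is set. */
static bool probably_in_vm(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;                 /* CPUID leaf not available */
    return (ecx >> 31) & 1u;
}

int main(void) {
    if (probably_in_vm())
        puts("hypervisor bit set: looks like a VM/sandbox, do nothing suspicious");
    else
        puts("no hypervisor bit: bare metal (or a stealthier VM)");
    return 0;
}
```

Analysts counter by hiding that bit or running samples on bare metal, which is exactly the arms race mentioned above.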
Regarding 'a difference in effort': idk, it's a pretty big difference. You could also call flying to Hawaii for two weeks versus swimming there 'a difference in effort'. And while there might be one or two outliers with obscure code, the majority will be kind of readable. But I agree: to be effective you have to be intelligent, pay close attention in case somebody tries to sneak something in in plain sight, know how you could be tricked, and use multiple tools and approaches simultaneously.
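'In plain sight' can be a single character. A simplified, made-up sketch in the spirit of the well-known 2003 Linux kernel backdoor attempt, where '=' instead of '==' hid a privilege escalation inside an innocent-looking error check:

```c
#include <stdio.h>

static int current_uid = 1000;   /* pretend per-process credential */

/* Looks like input validation. But when options == 0x3, the second clause
 * ASSIGNS 0 (root) to current_uid, evaluates to 0, and the branch is never
 * taken, so nothing visibly fails. */
static int check_options(int options) {
    if ((options == 0x3) && (current_uid = 0))
        return -1;
    return 0;
}

int main(void) {
    check_options(0x3);
    printf("uid is now %d\n", current_uid);   /* prints 0: silently became root */
    return 0;
}
```

A reviewer skimming the diff sees a normal-looking sanity check; it takes exactly the kind of close attention you describe (plus compiler warnings like -Wparentheses) to catch it.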
At least there have been attempts to subvert open standards for cryptography through the standards process. And occasional suspicious pull requests in critical places - I assume those are done through cut-out proxies so we don't know who tried.
We definitely know of some. The NSA tried to slip a faulty RNG algorithm into RSA's crypto libraries a while back.
https://blog.cloudflare.com/how-the-nsa-may-have-put-a-backdoor-in-rsas-cryptography-a-technical-primer/
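That algorithm was Dual_EC_DRBG. A rough sketch of why the construction is backdoorable, paraphrasing the article above (bit counts are for the P-256 variant):

```latex
% Dual_EC_DRBG per round, with fixed curve points P and Q from the standard:
\[
  s_i = x(s_{i-1}\,P), \qquad r_i = x(s_i\,Q), \qquad \text{output}_i = r_i \bmod 2^{240}
\]
% i.e. the output is r_i with its top 16 bits dropped. If whoever chose the
% constants knows a secret e with P = eQ, then from a single output block they
% can guess the 16 missing bits, recover a candidate point R with x(R) = r_i
% (so R = s_i Q), and compute
\[
  x(eR) = x(e\,s_i\,Q) = x(s_i\,P) = s_{i+1},
\]
% which is the next internal state: every later output becomes predictable.
```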
Like others have said, it's survivorship bias, so the meme has some weight. But it doesn't make FOSS any less secure than closed source. If anything, it's better to allow anyone to examine it. Similar to how secrets can't be kept when large numbers of folks know them, the same goes here, I guess.