submitted 1 year ago* (last edited 1 year ago) by Socialphilosopher@lemm.ee to c/nostupidquestions@lemmy.world

I'm not a programmer. I don't understand code. How do I know if an open source application isn't stealing my data or passwords? The Google Play Store scans apps and says it blocks spyware. Unfortunately, we know it hasn't been very successful. So, can we trust open source software? Can't someone integrate their own virus precisely because the code is open?

[-] Gibberish9031@lemmy.ml 54 points 1 year ago

Yes, but the idea is that because the code is open source, anyone can look at it and determine on their own whether it is in fact safe. Generally speaking, the open source community is very good at figuring this kind of stuff out, but I would say your fear is not entirely out of place, since nothing is 100% guaranteed. That said, the more popular FOSS apps are quite safe.

[-] dustojnikhummer@lemmy.world 15 points 1 year ago

But someone has to actually go and check, instead of everyone assuming "someone else will check it".

[-] pineapplelover@lemm.ee 12 points 1 year ago

This is why lots of open source projects critical for privacy and security are audited: ProtonVPN, ProtonMail, Mullvad, Signal, Matrix, GrapheneOS, and more. These are very big projects with many eyes upon them. The more eyes, the more secure it will be.

[-] dustojnikhummer@lemmy.world 7 points 1 year ago

Yes, those are much more trustworthy than audited closed source projects. I'm just saying that "anyone can check" doesn't mean "someone will check".

[-] gvasco@discuss.tchncs.de 6 points 1 year ago

Well, if the app is actively maintained, the code is checked every time someone makes a pull request to the main code base. You still have to trust the maintainers of the repository (code base) to review every pull request thoroughly; however, it's in their best interest to do so to maintain the trust of the project and its users.

[-] DogMuffins@discuss.tchncs.de 4 points 1 year ago

Well, not exactly.

Some open source projects have many contributors, and while they're working on fixing bugs and adding new features, the chances that no one would notice, say, a keylogger or crypto miner are very slim.

Other open source projects are maintained by large, sophisticated organisations that monitor security in some fashion. At the very least, they would watch for obvious things like transmitting data.

That's not a 100% guarantee of security, but it's not as reckless as just hoping someone will check.

[-] Z4rK@lemmy.world 25 points 1 year ago* (last edited 1 year ago)

Check activity before trusting open source

By default, FOSS is no more secure or privacy-friendly than proprietary software. However, it allows the community to peer-review the code, so a popular and active FOSS project can be trusted to be honest and not do nefarious things to your data or devices.

Check activity on their code repository: stars, followers, and forks say something about popularity; issues and pull requests tell you about activity (check the comments, or look at recently closed issues and pull requests), as do the code commits themselves.

Edit: Changed wording from secure to trust / honesty. Not all code focuses on security; in fact, most doesn't.
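As a rough illustration of the checks above, here's a minimal Python sketch that turns repository metadata into a one-line activity summary. The field names mirror GitHub's REST API (`stargazers_count`, `forks_count`, etc.), and the sample repo and numbers are made up:

```python
from datetime import datetime, timezone

def activity_summary(repo: dict) -> str:
    """Turn raw repository metadata into a one-line health summary."""
    pushed = datetime.fromisoformat(repo["pushed_at"].replace("Z", "+00:00"))
    days_idle = (datetime.now(timezone.utc) - pushed).days
    return (f"{repo['full_name']}: "
            f"{repo['stargazers_count']} stars, "
            f"{repo['forks_count']} forks, "
            f"{repo['open_issues_count']} open issues, "
            f"last push {days_idle} days ago")

# Hypothetical sample shaped like a GitHub API repo response:
sample = {
    "full_name": "example/app",
    "stargazers_count": 4200,
    "forks_count": 310,
    "open_issues_count": 57,
    "pushed_at": "2023-07-20T12:00:00Z",
}
print(activity_summary(sample))
```

In practice you'd fetch the real metadata from the repository host's API and then eyeball the same numbers the comment lists.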

[-] pjhenry1216@kbin.social 22 points 1 year ago

You mention the Google Play issue. That is an example of a disadvantage of closed source (Android is open; Google Play Protect is not). Google Play Protect is essentially static code analysis: think of it almost like antivirus. It tries to look for anomalies in the code itself. But it's not great, it can be tricked, and we don't even know how good it is or what kinds of checks it does.

FOSS code has many people looking at it. You can compile it yourself. It's extremely unlikely for something that's remotely popular to have explicitly malicious code in it. Is it impossible? No. But just as you get folks deep diving video game code assets, you get people looking at code of many FOSS projects. Likely because they either want to contribute or make changes.

It comes down to it being easier to find malicious actors in FOSS. It's just more difficult to hide things there than in closed source.

Why would you think closed source is any safer, when all the same risks apply but are harder to detect? Closed source can just as easily (arguably more easily) steal your info, and many apps did, burying it in their EULAs.

[-] Serinus@lemmy.world 2 points 1 year ago

I wouldn't assume there are many people looking at most open source code. And even if there are, it's not impossible to hide malicious code.

Just because people can review it doesn't mean they are reviewing it.

It does introduce more risk of discovery though. Malicious code is easier to find, and there will be at least a username associated with it.

[-] pjhenry1216@kbin.social 5 points 1 year ago

There are more people looking than there are elsewhere. And unless you're suggesting the authors themselves are malicious (which can happen), most FOSS is reviewed, especially the larger projects. You can tell by the number of contributors. Smaller projects will surely be an issue, but popular ones do get reviewed, simply because many people want to be able to contribute.

It almost certainly gets more review than proprietary software, though. And all these risks still apply to proprietary software.

[-] zencat@kbin.social 1 points 1 year ago

How come users don't have root access on Android even though Android is open?

[-] pjhenry1216@kbin.social 10 points 1 year ago

Because of the handset makers and wireless carriers (honestly more the latter than the former). It's not because of Google or Android.

[-] exscape@kbin.social 8 points 1 year ago

Most phones use customized versions of Android, and their makers decide you shouldn't have root access. Root opens up security issues and makes it easier to bypass ads and DRM, which they don't like.

You can get it on some phones, including Google's.

[-] moobythegoldensock@geddit.social 22 points 1 year ago

How do you know if a closed source application is stealing your data?

With open source, you can learn to read it, or talk to a community of people who know how to read it. If even just 1 in 500 people who download the software looks at the source, there are external eyes on it. With closed source, no one but the creator is looking.

Biggest thing is to still only install software you trust.

[-] CthulhuDreamer@lemmy.world 17 points 1 year ago

One more note about safety when it comes to open source or FOSS: use only the main repository and distributions provided by the official team. People often clone an existing repo, insert malicious code, and publish it as their own app on the Play Store, etc.

[-] Whirlybird@aussie.zone 13 points 1 year ago

No, open source code is no safer than closed source code by default. What it does is give people the opportunity to verify that it's safe, but that doesn't mean it is safe. And just because some people have "verified" that it is safe doesn't mean they didn't miss vulnerabilities or nasty code.

[-] 1847953620@lemmy.world 9 points 1 year ago

Software companies are not known for their accountability over hacky code, though. FOSS leads to better quality because it resolves that accountability conflict of interest in an efficient way.

[-] GlowHuddy@lemmy.world 3 points 1 year ago

Agreed. I'd say with open source it is harder to 'get away' with malicious features, since the code is out in the open. Even if authors wanted to add such features, the open nature of their code serves as a bit of a deterrent, since there is a much bigger chance of people finding out compared to closed source. However, as you said, it is not impossible, especially since not many people look through the code of everything they run. And even then, it is possible to obfuscate malicious code well enough that it isn't spotted on a casual read-through.

[-] RightHandOfIkaros@lemmy.world 3 points 1 year ago

Accounts that post "verifying code" can also be sock puppet accounts, so it is always good to double check for yourself if you know the programming language, or check the account history to see if they have verified other software from different writers that aren't all connected to each other. Nothing sketchier than a verification ring, where accounts all verify for each other.

[-] pjhenry1216@kbin.social 2 points 1 year ago

This is only an issue if it's only been reviewed by one or two coders with zero history on the repo's host. This is rare for anything that is remotely popular.

[-] onescomplement@lemm.ee 12 points 1 year ago

In terms of telemetry, free software has the advantage over the proprietary counterpart.

It's a lot more complicated to hide telemetry without the user knowing in free software.

You could always use a network tool, like iftop, to see the network traffic on your PC. That could be a way to see if a program is phoning home, though you'll probably want to use a suite of tools.
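For a rough DIY version of the same check on Linux, you can also peek at the kernel's own connection table. Below is a hedged Python sketch that decodes the hex-encoded endpoints in the `/proc/net/tcp` format (the sample row is made up; a real check would read the actual file and map sockets back to processes):

```python
import socket
import struct

def decode_endpoint(hex_addr: str) -> str:
    """Decode a /proc/net/tcp address like '0100007F:0050' to 'ip:port'."""
    ip_hex, port_hex = hex_addr.split(":")
    # The kernel stores the IPv4 address as little-endian hex.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return f"{ip}:{int(port_hex, 16)}"

# Made-up sample row (fields: sl, local_address, rem_address, ...):
sample_row = "  1: 0100007F:1F90 0100000A:01BB 01 ..."
fields = sample_row.split()
print("local ", decode_endpoint(fields[1]))   # 127.0.0.1:8080
print("remote", decode_endpoint(fields[2]))   # 10.0.0.1:443
```

A surprising remote endpoint here is exactly the kind of "phoning home" signal iftop would also surface.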

[-] spizzat2@lemm.ee 4 points 1 year ago* (last edited 1 year ago)

free software

To make a common clarification: free as in "free speech", not (necessarily) free as in "free beer".

Just because the software costs nothing, doesn't mean that it's not hiding something. In fact, the opposite is often true.

I'm sure you know that. I'm just clarifying for OP, who isn't a programmer.

[-] PM_ME_VINTAGE_30S@lemmy.sdf.org 8 points 1 year ago

Tl;Dr: you shouldn't trust anyone or anything blindly or unconditionally. However, open source software and its community offer compelling reasons to trust it over proprietary software.

Technically, if you do not read all of the source code of an application and all its dependencies, you can never be 100% sure that it isn't doing nefarious things. For things that require a connection to the internet, you could monitor all connections to and from the application and its dependencies and see if it is making objectionable connections.

However, in my view, open-source software is in general safer than closed-source software. Open-source software can be audited by anyone who knows the languages the program is coded in, whereas closed-source software can only be audited by the developer or the few parties they might authorize to see it. Closed-source apps can easily hide spyware because the source code is completely unavailable. Spyware could still be missed by the community, but it's a whole hell of a lot less likely with so many eyes on the program.

And practically, whenever an open-source software gets even close to including nefarious stuff, the community generates a huge hoopla about it.

Also, the Google Play Store is not open source! A better example would be F-Droid, an app store that is open source. While I am not aware of F-Droid delivering spyware à la Google, it is still theoretically possible that they could screw up or be corrupted in the future. Therefore, we must stay vigilant, even with groups and people we trust. Practically, this just means: check their work once in a while. It wouldn't kill you to learn a programming language; try Python for quick results. Whenever an open-source program is written in a language I understand, I'll pick a few files that look the most important and skim them to see that the program "does what it says on the tin". Otherwise, I'll check through the issues on GitHub for any weirdness.

I haven't even mentioned free and open-source software (free as in speech). I genuinely do not know how to convince people who are disinterested in their own freedom to consider FOSS options, or to do much of anything at all. For everyone else: FOSS software respects your freedom to compute as you please. We can quibble about different licenses and how effective they are at safeguarding user freedom, but at the end of the day, FOSS licenses are at least intended to give users back their freedom. In my view, it is mightily refreshing to finally take some freedom back!
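The "skim the important files" habit can be partly automated. Here's a minimal, hedged Python sketch that flags lines in a source tree matching a few hand-picked patterns worth eyeballing; the pattern list is purely illustrative, not a real audit, and it will produce false positives:

```python
import re
from pathlib import Path

# Illustrative patterns only; a real audit needs far more than a grep.
SUSPICIOUS = [
    r"https?://",            # hard-coded endpoints worth a look
    r"\bexec\(|\beval\(",    # dynamic code execution
    r"base64\.b64decode",    # sometimes used to hide payloads
]

def flag_suspicious(root: str) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for every match under root."""
    hits = []
    for path in Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            if any(re.search(p, line) for p in SUSPICIOUS):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

Run `flag_suspicious(".")` on a checkout and read the flagged lines yourself; the point is to direct your skimming, not to replace it.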

[-] MonkderZweite@feddit.ch 7 points 1 year ago

No. But safer than closed source.

[-] rufus@discuss.tchncs.de 5 points 1 year ago* (last edited 1 year ago)

Tl;dr: Don't download random APKs from the internet just because they claim to be FOSS. Get them from F-Droid and you're safe.

Long answer: Depends on the project. Look at how many people use it. If it's a lot, chances are other people also keep an eye on it. Even better if you get the software packaged, that is, from the package manager of your Linux distribution or, in your case on Android, from F-Droid. That way somebody from that team has looked at it, and F-Droid even strips trackers from apps. I'd say the chances of a virus or spyware getting through the F-Droid process are close to none, and no greater than the chances of a virus slipping past Google's antivirus.

(Play Store doesn't do anything against excessive tracking.)

[-] zencat@kbin.social 3 points 1 year ago

I'm curious, how does F-Droid detect malicious code within an app?

[-] copygirl@lemmy.blahaj.zone 2 points 1 year ago

From what I know, F-Droid compiles apps from source, so you can be sure that the binary you're running is actually built from the source code it claims to come from. On most other platforms, developers could upload malicious programs whose code differs from what's shared online as their source. Then add the fact that other developers can and do look at the code, and at what changes are made from version to version.

[-] Peruvian_Skies@kbin.social 2 points 1 year ago

Part of it is automated, part of it is real people looking at the source code. That's done by sampling of course, since it's not feasible to have someone manually look over every new update to every app.

[-] rufus@discuss.tchncs.de 2 points 1 year ago* (last edited 1 year ago)

Yeah. I haven't looked it up, but a huge part seems to be manual labor. They have a good look at an app when it gets included in the F-Droid repository. The app then gets re-packaged to meet their standards and compiled from source. During this process, tracking libraries and other (proprietary) components get stripped.

They have an automated build server. I'm not sure if it runs any additional tests or just checks that it can build the app. But it also prepares the updates.

I doubt there are automated antivirus scans involved; usually only Windows users do that.

And you have a community of many other users who use the same build of an app. They'll file bug reports and may notice if an app stops working or starts consuming huge amounts of data and battery. Those users also tend to be more tech-savvy than Play Store users.

[-] Kissaki@feddit.de 5 points 1 year ago

You shouldn't treat trustworthiness, or trust, as binary, all or nothing.

You should assess, as far as your and the product's circumstances allow, and then weigh risk, necessity, and value.

Source exposure makes it more likely people will look at the code, with or without cause, especially when something seems surprising or questionable. Source availability alone doesn't mean concerns will surface, though; for that you need a visible platform or publicity.

FOSS may be funded and implemented by volunteers, or by paid or sponsored work, with or without control by the involved parties.

Security scanning is a best effort that weighs known signatures, similarity, and suspect parts against false positives and inconvenience to users and publishers. It can't be perfect.

Android Play Store security scanning can only catch some of the things I'd consider security-relevant, and it likely ignores questionable behavior that doesn't endanger device security.

Established projects are more trustworthy than unestablished ones. Personal projects with a clear goal are more trustworthy, because of the likelihood of good intention and personal interest, than those that seem obscure or unclear.

Don't trust blindly.

Safety is a big topic and theme, so such a broad question can only be answered with a broad assessment and overview.

[-] jesterraiin@lemmy.world 5 points 1 year ago

Is FOSS really safe?

It's not an attempt at edginess, but the answer is that in the long run, nothing in IT is safe. It might be now, and it might be for some time, but there's no guarantee that even the most dependable piece of software won't get an update that breaks some of its functionality, or that the OS won't interfere with it and break it.

FOSS? It's safe as a principle: if anyone has access to the code, any suspicious addition to it will be spotted quickly and patched out.

...but the reality isn't as straightforward.

[-] Amcro@lemm.ee 4 points 1 year ago

A question I always ask myself: even if we can see the code on GitHub, for example, that still doesn't mean their release is built from the same code, right? They could compile their program with some extra stuff that sends data, and upload that version to the GitHub release page, while the code on GitHub itself stays clean, right?

[-] Nibodhika@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Yes, however there are ways of verifying that. Compiled programs are not black boxes; they're just complicated enough that we can consider them beyond human comprehension (at least for complicated programs), but they're very much readable. That means tools can check for differences between what should be there and what is. Not to mention that you can also compile the code they say they published and check for differences against what they're distributing.

Is anyone doing that? Don't know, but because it's possible to be verified it's unlikely that people would try to do something nasty.

Edit: I'm talking about official releases on official channels. Download binaries from other sources at your own peril, since those are unlikely to be checked, and even if someone found differences, they could be explained away as patches or different compilers.
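The verification described above boils down to comparing checksums: build the code yourself, hash both artifacts, and compare. A minimal Python sketch, assuming the build is reproducible and using stand-in byte strings for the real binaries:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-ins for real artifacts: the official release download
# and the binary you compiled yourself from the tagged source.
official_release = b"\x7fELF...pretend-binary..."
local_build      = b"\x7fELF...pretend-binary..."

if sha256_of(official_release) == sha256_of(local_build):
    print("match: the release was built from the published source")
else:
    print("mismatch: investigate (or the build is not reproducible)")
```

In practice you'd read the real files from disk; a mismatch isn't proof of malice, since non-reproducible builds also fail this check.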

[-] zalack@kbin.social 2 points 1 year ago* (last edited 1 year ago)

It's worth pointing out that builds aren't guaranteed to be reproducible unless software developers specifically program with that in mind.

Imagine a program that inserts randomness at compile time for seeds. Each build would generate a different seed even from the same source code, and would fail a diff against the actual release.

Or maybe the developer embeds information about the build environment for debugging, such as the build time and exact OS version. That would also cause verification builds to differ.

Rust (the programming language), for instance, has had a long history of working towards reproducible builds for software written in it.

It's one of those things that sounds straightforward and then pesky reality comes and fucks up your year.
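To make that failure mode concrete, here's a toy Python sketch of a "build" step that stamps the artifact with the build time, so two runs from identical source produce different artifacts and hash-based verification fails:

```python
import hashlib
import time

def toy_build(source: str) -> bytes:
    """A pretend compiler that stamps the artifact with the build time."""
    stamp = f"built-at:{time.time_ns()}"
    return (source + stamp).encode()

src = "print('hello')"
a = toy_build(src)
time.sleep(0.01)
b = toy_build(src)

# Same source, different artifacts: verification by hashing fails.
print(hashlib.sha256(a).hexdigest() == hashlib.sha256(b).hexdigest())  # False
```

Reproducible-build efforts work by eliminating exactly this kind of nondeterminism (timestamps, random seeds, environment details) from the build.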

[-] elxeno@lemm.ee 4 points 1 year ago

You wouldn't know unless it's checked by you or someone you trust, but IMO open source should generally be better, because if you're doing shady stuff you're probably less likely to make it public. Also, projects with lots of activity by different people are usually safer.

this post was submitted on 26 Jul 2023
