Lemmy implements a scoring system allowing people to upvote or downvote posts. You know that since you are using Lemmy :)
The score can be used to raise or lower the visibility of posts, in particular when using some sorting algorithms (active, hot, top).
This can be used to increase the visibility of good-quality posts, and lower that of low-quality or irrelevant posts.
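As a rough illustration, here is a sketch of a time-decayed "hot" rank in the spirit of what Lemmy does; the constants and the exact formula are just plausible example values, not necessarily Lemmy's actual implementation. The score lifts a post, but the boost fades as the post ages:

```rust
/// Illustrative "hot" rank: a higher score lifts a post, age pulls it back down.
/// The +3 offset and the 1.8 "gravity" exponent are example values, not Lemmy's real ones.
fn hot_rank(score: i64, age_hours: f64) -> f64 {
    // Clamp so heavily downvoted posts don't feed a non-positive value to log10.
    let boosted = (score + 3).max(1) as f64;
    10_000.0 * boosted.log10() / (age_hours + 2.0).powf(1.8)
}

fn main() {
    // A fresh post with a modest score outranks a two-day-old post with a big score.
    println!("{:.1}", hot_rank(10, 1.0));   // young post, modest score
    println!("{:.1}", hot_rank(200, 48.0)); // old post, high score, ranks lower
}
```

With a decay like this, the score mostly decides visibility among posts of a similar age.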
Yet, from what I observe, the tool is mostly used by communities to self-administer their filter bubble. Some communities seem to behave like a hive mind, massively upvoting or downvoting until the dissident is either assimilated, in a very Borg way, or excommunicated.
Also, scores often seem to be used to convey cheap moral judgement, without the need to expose oneself to criticism by providing arguments to support one's opinion.
Overall, I think scores are more toxic than useful, and I would be in favor of hiding them by default, so that newcomers are not put off by them.
What is your opinion about this? What are the advantages of having the score visible by default?
Just a clarification: the question is not "should scores exist or not?". If people find value in scores, good for them. I'm not one to dictate other people's preferences. :)
Good article. Thank you. You make some excellent points.
I agree that access to the source is not sufficient to get secure software, and that the many-eyes argument is often wrong. However, I am convinced that transparency is a requirement for secure software. As a consequence, I disagree with some points, and especially this one:
In my experience as a developer, the vast majority of vulnerabilities are caught by linters, source code static analysis, source-based fuzzers and peer reviews. What blackbox testing (dynamic, static, and negative) and scanners catch are the remaining bugs/vulnerabilities that were not caught during the development process. When using closed-source software, you have no idea whether the developers used these tools (software and internal validation), so yeah: you may get excellent results with blackbox testing, but that may just be a sign that they did not do their due diligence during the development phase.
As an ex-pentester, I can assure you that blackbox security tools returning no findings is not at all a sign that the software is secure. They may fail to spot flawed logic leading to a disaster, for instance.
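A minimal, entirely hypothetical sketch of what I mean: a broken access control (IDOR) that authenticates the caller, never crashes and contains no injection, so a crash- or payload-oriented blackbox scanner stays perfectly green while the code leaks other users' data.

```rust
// All types and data here are hypothetical, kept tiny so the example compiles on its own.
struct Session { user_id: u64, authenticated: bool }
struct Invoice { owner_id: u64, total_cents: u64 }
struct Db { invoices: Vec<(u64, Invoice)> }

impl Db {
    fn find_invoice(&self, id: u64) -> Option<&Invoice> {
        self.invoices.iter().find(|(i, _)| *i == id).map(|(_, inv)| inv)
    }
}

// The logic flaw: we check *who you are* but never *whether this invoice is yours*.
fn get_invoice<'a>(session: &Session, invoice_id: u64, db: &'a Db) -> Option<&'a Invoice> {
    if !session.authenticated {
        return None;
    }
    // Missing: filter on inv.owner_id == session.user_id before returning.
    db.find_invoice(invoice_id)
}

fn main() {
    let db = Db { invoices: vec![(1, Invoice { owner_id: 42, total_cents: 9_900 })] };
    let attacker = Session { user_id: 7, authenticated: true };
    // A valid, authenticated request and a clean response: nothing for a scanner to flag,
    // yet user 7 reads user 42's invoice.
    let leaked = get_invoice(&attacker, 1, &db).expect("access was granted");
    println!("user {} read invoice of user {}: {} cents",
             attacker.user_id, leaked.owner_id, leaked.total_cents);
}
```

Only someone who understands the business logic (a reviewer with the source, or a pentester probing by hand) has a real chance of spotting that kind of bug.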
And yeah, I agree that static analysis has its limits, and that running the damn code is necessary, because unit tests, integration tests and load tests can only get you so far. That's why big companies also do blue/green deployments, etc.
But I believe this is not an argument for saying that closed-source software may be secure if tested that way. Dynamic analysis is just one tool in a defense-in-depth strategy. It is a required one, but certainly not a sufficient one.
Again, great article, but I believe that you may not be paranoid enough 😁 Which might be a good thing for you 😆 Working in security is bad for one's mental health 😂