submitted 9 months ago* (last edited 9 months ago) by Kalcifer@sh.itjust.works to c/linux@lemmy.ml

I've spent some time researching this question, but I have yet to find a satisfying answer. The majority of answers I have seen state something along the lines of the following:

  1. "It's just good security practice."
  2. "You need it if you are running a server."
  3. "You need it if you don't trust the other devices on the network."
  4. "You need it if you are not behind a NAT."
  5. "You need it if you don't trust the software running on your computer."

The only answer that makes any sense to me is #5.

#1 leaves a lot to be desired, as it advocates for doing something without thinking about why you're doing it -- it is essentially a non-answer.

#2 is strange -- why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router's NAT at port 80 to expose that server to the public. What difference does it make to then have a host firewall in which the same port must also be opened?

#3 is a strange one -- what sort of malicious behaviour could even be done to a device with no firewall? If you have no applications listening on any port, then there's nothing to access.

#4 feels like an extension of #3 -- only, in this case, it is most likely a larger group of devices that yours is exposed to.

#5 is the only one that makes some sense: if you install a program that you do not trust (you don't know how it works), you don't want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be a door into your device, or a spy on your device's actions.
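
To make #5 concrete, this is roughly the kind of explicit permission I mean (a hypothetical sketch: netfilter can't match on a program, but it can match on the user who sent a packet, so the untrusted program could run as a dedicated user whose egress is dropped -- the user name `sandboxed` and the table/chain names are just placeholders):

```
# Hypothetical sketch: give the untrusted program its own system user...
sudo useradd --system sandboxed

# ...and drop every packet that user tries to send, until I decide otherwise.
# "meta skuid" matches the socket's owning user in nftables.
sudo nft add table inet egress
sudo nft add chain inet egress output '{ type filter hook output priority 0; policy accept; }'
sudo nft add rule inet egress output meta skuid "sandboxed" drop
```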

If anything, a firewall only seems to provide extra precautions against mistakes made by the user, rather than actively preventing bad actors from getting in. People seem to treat it as if it's acting like the front door to a house, but this analogy doesn't make much sense to me -- without a house (a service listening on a port), what good is a door?

[-] smb@lemmy.ml 2 points 9 months ago

As I see it, the term "firewall" was originally the neat name for an overall security concept covering your system's privacy, integrity, and security. Physical security is (or can be) just as much a part of a firewall concept as, say, training of users. The keys to your server room's door could be part of that concept too.

In general, you only "need" to secure something that is actually there. You won't build a safe into the wall and hide it behind an old painting without having something to put in it -- although an alarm sensor that triggers when that old painting is moved could also be part of the concept, creating a sort of honeypot.

Whether you want security, and what types of it, is up to you (so don't blame others if you make bad decisions).

But as a general rule from practice, I would say it is wise to always have two layers of defence, to prepare for one "error" at a time, and to try to solve it quickly when it occurs.

Example: if you want an rsync server on an internet-facing machine to be accessible only from some subnets, I would suggest you add iptables rules that are as tight as possible, and also configure the service itself to reject access from all but the wanted addresses. Also consider monitoring both, maybe using two different approaches: monitor that the config stays as defined, and set up an access check from one of the unwanted, excluded addresses that fires an alarm when access becomes possible.
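
A minimal sketch of those two layers (203.0.113.0/24 stands in here for your allowed subnet; rsync's daemon mode does have `hosts allow`/`hosts deny` options):

```
# Layer 1: iptables -- only the trusted subnet may reach rsyncd (TCP 873).
sudo iptables -A INPUT -p tcp --dport 873 -s 203.0.113.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 873 -j DROP

# Layer 2: the service itself also rejects everyone else, in rsyncd.conf:
#   hosts allow = 203.0.113.0/24
#   hosts deny  = *
```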

This would not only prevent that unwanted access from happening, but also keep an accidental opening or a breaking of the config from going unnoticed.

The same applies here: whether you want monitoring is up to you and your concept of security, as it is with redundancy.

In general, I would suggest setting up an IP-filtering "firewall" if you have IP forwarding activated for some reason. A rather tight filter would maybe only allow what you really need while DROPping all other requests, but sometimes ICMP comes in handy, so maybe you want ping or MTU discovery to actually work. It always depends on what you have, how strongly you want to protect it, from what, and with what effort. A generic IP filter that only allows outgoing connections on a single workstation may be a good idea as a second layer of "defence", in case your router has hidden vendor backdoors that either the vendor sold or someone else simply discovered. Disallowing all those on-by-default, might-be-usable-for-some-users protocols like Avahi & co. in some distros would probably help a bit then.
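
As a sketch of that second layer on a single workstation, in nftables syntax (my table/chain names, and the choice to keep ICMP open, are just illustrative):

```
# Default-deny inbound; outbound traffic and the replies to it stay allowed.
sudo nft add table inet ws
sudo nft add chain inet ws input '{ type filter hook input priority 0; policy drop; }'
sudo nft add rule inet ws input iif lo accept
sudo nft add rule inet ws input ct state established,related accept
# Optional: keep ping and path-MTU discovery working.
sudo nft add rule inet ws input meta l4proto '{ icmp, icmpv6 }' accept
```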

So there is no generic, fault-proof rule of thumb.

On number 5: what sort of "not trusting" the software? It might have, has, or "will" have:

a. security flaws in the code
b. insecurity by design
c. backdoors by a government, the vendor, or a distributor
d. spy functionality
e. annoying ads as soon as it has an internet connection
f. all of the above (now guess the likely vendors for this one)

For c, d, and e, one might also want to filter some outgoing connections.

One could also use an IP-filtering firewall to keep logs small by blocking those who obviously have intentions you dislike (e.g. fail2ban).
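
For instance, a minimal fail2ban jail for sshd might look like this (the thresholds are placeholders, not recommendations):

```
# Hypothetical minimal jail: ban an address for an hour after five failed
# sshd logins, so it stops filling the logs.
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600
EOF
sudo systemctl restart fail2ban
```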

So maybe create a concept first, and then ask how to achieve the desired precautions. Or just start with your idea of the firewall and dig into some of the rabbit holes that appear afterwards ;-)

Regards

[-] bizdelnick@lemmy.ml 1 points 9 months ago

You always need it, and you actually do use it. The smarter question is when you need to customize its settings. The defaults are robust enough, so unless you know what you need to change and why, you don't.

[-] Kalcifer@sh.itjust.works 1 points 8 months ago

Defaults are robust enough

Would you mind defining what "defaults" are?

[-] bizdelnick@lemmy.ml 1 points 8 months ago

Defaults are the default settings of your firewall (netfilter in Linux).
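
If you want to see what those defaults currently are on your machine, you can dump the live ruleset with either frontend, depending on which one your distro uses:

```
# Show the active netfilter ruleset via the nftables frontend:
sudo nft list ruleset
# Or via the (legacy) iptables frontend:
sudo iptables -L -n -v
```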

[-] Kalcifer@sh.itjust.works 1 points 8 months ago

Is netfilter not just the API through which you can make firewall rules (e.g. nftables) for the networking stack?

[-] thanks_shakey_snake@lemmy.ca 1 points 9 months ago

For me, it's primarily #5: I want to know which apps are accessing the network and when, and have control over what I allow and what I don't. I've caught lots of daemons for software that I hadn't noticed was running and random telemetry activity that way, and it's helped me sort-of sandbox software that IMO does not need access to the network.

Not much to say about the other reasons, other than that #2 makes more sense in the context of working with other people: if your policy is "this is meant to be an HTTPS-only machine," then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they're throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course. Then, once you have a policy you like, it's also easier to copy a firewall config around to multiple machines (which may be running different apps) than to make sure you get it consistently right on a server-by-server basis.
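
Enforcing that hypothetical "HTTPS-only" policy might look something like this in nftables (ports and names are illustrative, not a drop-in config):

```
# Default-drop inbound; only SSH (22) and HTTPS (443) are reachable, so an
# app accidentally served on port 80, or an exposed database port, simply
# never answers from outside.
sudo nft add table inet filter
sudo nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
sudo nft add rule inet filter input iif lo accept
sudo nft add rule inet filter input ct state established,related accept
sudo nft add rule inet filter input tcp dport '{ 22, 443 }' accept
```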

So... Necessary? Not for any reason I can think of. But useful, especially as systems and teams grow.

[-] Kalcifer@sh.itjust.works 1 points 8 months ago

I’ve caught lots of daemons for software that I hadn’t noticed was running and random telemetry activity that way

I did the exact same thing recently when I installed OpenSnitch -- it was quite interesting to see all the requests that were being made.

If your policy is “this is meant to be an HTTPS-only machine,” then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they’re throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course.

That's a fair point!

[-] Paragone@lemmy.ml 1 points 9 months ago* (last edited 9 months ago)

A couple of decades ago, iirc, SANS.org (IF I'm remembering correctly who did it) put a fresh install of MS-Windows on a machine & connected it to the internet.

Within SEVERAL MINUTES it had been broken into, corrupted, & botnetted.

The auto-attacks by botnets are continuous: hitting different ports, trying to break in, automatically.

I've had Linux desktops pwned out from under me.

The internet should be considered something like a mix of toxic & corrosive chemicals: "maybe" your hand will be fine if you dip it in for a moment & immediately rinse it off (for 3 hours), but if you leave your limbs dwelling in the virulent slop, Bad Things(tm) are going to happen, sooner or later.


I used to de-infest Windows machines for my neighbours...

haven't done it in years: they'll not pay for good anti-virus, they'll not resist installing malware: therefore there is no point.

Let 'em rot.

I've got a life to work on uncrippling, & too little strength/time left.


"but I don't need antivirus: i never get infected!!"

then how come I needed to de-infest it for you??

"but I don't need an immune-system: pathogens are a hoax!!"

get AIDS, then, & don't use anti-AIDS drugs, & see how "healthy" you are, 2 years in.

Same argument, different context-mapping.


TARPIT was a wonderful-looking invention for Linux's netfilter/iptables, years ago: don't help botnets scan quickly & efficiently to find a way to break in...
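
For the curious, the TARPIT target still ships in the out-of-tree xtables-addons package (it was never mainlined into netfilter); a sketch, with the port choice purely illustrative:

```
# Instead of refusing a scanner's SYN, TARPIT accepts the connection and
# then holds it open at a zero TCP window, tying up the scanner's resources.
# Requires xtables-addons; port 23 (telnet) is just an example bait port.
sudo iptables -A INPUT -p tcp --dport 23 -j TARPIT
```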


Anyways, just random thoughts from an old geek...


EDIT: "when do I need to wear a seatbelt?"

is essentially the same category of question.

_ /\ _
