this post was submitted on 05 Jan 2024
36 points (97.4% liked)
Asklemmy
It really depends on your use case. For example, I use Nextcloud, running on my own server, as a replacement for all things "cloud". My main requirement was that pictures, videos, and files I took on my phone be auto-magically synced to a server.
I now have Nextcloud running in a container on my home server, with a public IP and domain. This gives me all the advantages of having my pictures, videos, and important files from my phone and computer backed up to "the cloud", without them sitting on someone else's computer. The downside is that I have to sort out security, updates, and backups on my own. I'm fine with that trade-off, though not everyone would be.
As a bonus, I can provide "cloud" functions to my family as well. And sharing files with extended family is as easy as setting a file to "shared" and sending a link. Technically, that exposes the file to the public internet, but I only do this for files I don't consider sensitive, and the link contains a long, random string to obfuscate it. As long as I take it down before search engines have a chance to pick up on it, the risk is minimal.
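Nextcloud generates these share tokens itself, but the idea is easy to sketch: the link just needs a token with enough randomness to be unguessable. A minimal illustration using Python's `secrets` module (the token length and `/s/` URL shape here are illustrative, not Nextcloud's actual format):

```python
import secrets

def make_share_link(base_url: str) -> str:
    # 24 random bytes -> 32 URL-safe characters, ~192 bits of entropy:
    # far beyond what any crawler could hit by trial and error.
    token = secrets.token_urlsafe(24)
    return f"{base_url}/s/{token}"

link = make_share_link("https://cloud.example.com")
```

`secrets` (unlike `random`) draws from the OS's cryptographic randomness source, which is what you want for anything acting as a capability URL.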
No search engine is going to find a long obfuscated URL. I don't think Nextcloud publishes a sitemap for a crawler to use.
In fact, unless you post your domain somewhere online or its registration is available somewhere, it's unlikely anyone will ever visit your server without a direct link provided by you or someone else who knows it.
You might still get discovered by IP crawlers, but even then they aren't going to trial-and-error their way to shared files, for the same reason they can't brute-force any sane SSH password.
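To put rough numbers on that, here's a back-of-the-envelope calculation (the 32-character alphanumeric token is just an example length):

```python
import math

# A 32-character token drawn from 62 alphanumeric characters:
keyspace = 62 ** 32
entropy_bits = math.log2(keyspace)  # ~190 bits

# Even a botnet guessing a billion URLs per second would need
# an astronomical number of years to exhaust the keyspace.
guesses_per_year = 10**9 * 60 * 60 * 24 * 365
years_to_exhaust = keyspace / guesses_per_year
```

Anything past ~128 bits of entropy is comfortably out of brute-force range, which is why a long random path behaves like a strong password.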
If you use HTTPS with a publicly-trusted certificate (such as via Let's Encrypt), the host names in the certificate will be published in certificate transparency logs. So at least the "main" domain will be known, as well as any subdomains you don't hide by using wildcards.
I'm not sure whether anyone uses those as a list of sites to automatically visit, but I certainly would not count on nobody doing so.
That just gives them the domain name though, so URLs with long randomly-generated paths should still be safe.
There's also the DNS system itself. I'm not sure whether a reverse lookup is possible in some way without a PTR record, but suffice it to say there are ways, and there are many.
Obscurity is not security, just a reasonable first line of defense. If you run something publicly accessible, lock it down.
Stuff that can't be brute-forced in a million years is a good way to do that, even if it's just a string in a URL. It's basically like having to enter a password. You could even fail2ban it by banning IPs that try a bunch of random URLs that aren't valid, or apply a simple rate limit.
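A minimal sketch of that banning idea: count invalid-path hits per IP inside a sliding window and ban past a threshold, the same way fail2ban would act on repeated 404s in a web server log (the thresholds and function name here are made up for illustration):

```python
import time
from collections import defaultdict

BAN_THRESHOLD = 10   # invalid URLs before a ban
WINDOW_SECONDS = 60  # look-back window

hits = defaultdict(list)  # ip -> recent hit timestamps
banned = set()

def record_invalid_hit(ip, now=None):
    """Record a 404-style hit; return True if the IP is (now) banned."""
    if ip in banned:
        return True
    t = now if now is not None else time.time()
    # Keep only hits inside the window, then add this one.
    window = hits[ip] = [s for s in hits[ip] if t - s < WINDOW_SECONDS]
    window.append(t)
    if len(window) >= BAN_THRESHOLD:
        banned.add(ip)
        return True
    return False
```

In practice you'd wire this up via fail2ban watching your reverse proxy's access log rather than rolling your own, but the logic is the same.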
Nah, I have some services running on unpublished domains and I get hit by brute-force attempts at SSH logins all the time. It might not be sane, but botnet gonna botnet.
Oh, same. Though on my current IP it hasn't happened for a couple years, now.
But finding an SSH port with an IP crawler is a lot easier than finding all the services accessible behind different paths/subdomains on port 80. And even then, mapping out a site tree all the way to uncrackable-password-length URLs is never gonna happen by brute force.