NFS is the best option if you only need to access the shared drives over your LAN. If you want to mount them over the internet, there's SSHFS.
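For reference, mounting a remote directory with SSHFS is basically a one-liner once SSH access works; the host name and paths below are just placeholders:

    # mount a remote directory over SSH (host and paths are examples)
    mkdir -p ~/remote
    sshfs user@example-host:/srv/share ~/remote

    # unmount when done
    fusermount -u ~/remote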
See, this is interesting. I'm out here looking for the new shiny easy button, but what I'm hearing is "the old config-file based thing works really well. ain't broken, etc."
I may give that a swing and see.
I'm the same age - just to mention, Samba is nowhere near the horror show it used to be. That said, I use NFS for my Debian boxes and Mac mini build box to hit my NAS, and Samba for the Windows laptop.
Yeah, Samba has come a long way. I run a Linux-based server, but all the clients are Windows or Android, so it just makes sense to run SMB shares instead of NFS.
I've always had weird issues with SMB: ghost files, case-sensitivity problems (ZFS pool), shares dropping out and forcing a reboot to re-establish the connection... Since switching to Linux and using NFS, it's been almost indistinguishable from a native drive for my casual use (including using an SSD pool as a Steam library...)
I can definitely say that in the past I had similar experiences. I haven't really had any problems with SMB in the last 5 years that I can recall. It really was a shit show back in the day, but it's been rock solid for me, anyway.
I agree, NFS is eazy peazy, livin greazy.
I have an old Synology DS211j for backup. I just can't bring myself to replace it; it still works. However, it doesn't support ZFS. I wish I could get another Linux running on this thing.
However, NFS does work on it, and it's so simple and easy to lock down that it works in a ton of corner cases like mine.
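For what it's worth, on a plain Linux NFS server the locking down part is one line in /etc/exports; the path and client IP here are made up:

    # /etc/exports - allow a single client, read-only, no root access
    /volume1/backup  192.168.1.20(ro,root_squash,sync)

    # re-read the exports file without restarting the server
    exportfs -ra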
If you already know NFS and it works for you, why change it? As long as you’re keeping it between Linux machines on the LAN, I see nothing wrong with NFS.
Isn't NFS pretty much completely insecure unless you turn on NFSv4 with Kerberos? The fact that that is such a pain in the ass is what keeps me from it. It is fine for read-only though.
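To be fair, the painful part is standing up the KDC and keytabs; once those exist, the NFS side is mostly one option on each end. A rough sketch, with made-up paths and hostnames:

    # /etc/exports - require Kerberos auth, integrity and encryption
    /srv/secure  *.example.lan(rw,sync,sec=krb5p)

    # client side: mount with the matching security flavor
    mount -t nfs4 -o sec=krb5p server.example.lan:/srv/secure /mnt/secure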
I'd use an S3 bucket with s3fs. Since you want to host it yourself, MinIO is the open-source tool to use instead of S3.
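A rough sketch of what that looks like with s3fs-fuse pointed at a self-hosted MinIO endpoint; the bucket name, endpoint URL and credentials are placeholders:

    # credentials in ACCESS_KEY:SECRET_KEY format
    echo 'minioadmin:minioadmin-secret' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs

    # mount the bucket; path-style requests are needed for MinIO
    s3fs mybucket /mnt/bucket \
        -o url=https://minio.example.lan:9000 \
        -o use_path_request_style \
        -o passwd_file=${HOME}/.passwd-s3fs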
I hear good things about SeaweedFS instead of MinIO these days.
For smaller folders I like using Syncthing; that way it's like having multiple up-to-date backups.
I like this solution because it fills the need without a central server. I use old-fashioned offline backups for my low-churn bulk data, and Syncthing for everything else, so it's eventually consistent everywhere.
If my data were big enough to require dedicated storage, though, I'd probably go with TrueNAS.
For all its flaws and mess, NFS is still pretty good and used in production.
I still use NFS to share files with my VMs because it still significantly outperforms virtiofs, and since the network is a local bridge, latency is basically non-existent.
The thing with rsync is that it's designed to quickly compute the minimum amount of data to transfer when syncing over a remote (possibly high-latency) link. So when it comes to backups, it's literally designed for that job.
The only cool new alternative I can think of is to use Btrfs or ZFS and pipe btrfs/zfs send over SSH into btrfs/zfs recv on the backup host (see the sketch below),
which is the most efficient and reliable way to back up, because the filesystem knows exactly what changed and can send exactly that set of changes. And obviously all the special attributes are carried over: hardlinks, ACLs, SELinux contexts, etc.
The problem with backups over any kind of network share is that if you're going to run rsync on top of it anyway, the per-file round trips make the latency horrible and it takes forever.
Of course you can also mix multiple things: rsync laptop to server periodically, then mount the server's backup directory locally so you can easily browse and access older stuff.
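A minimal sketch of the incremental ZFS variant, assuming a local pool called tank and a pool called backup on the remote box (both names made up); Btrfs works the same way with btrfs send/receive:

    # take a new snapshot
    zfs snapshot tank/home@2024-06-01

    # first run: full send
    zfs send tank/home@2024-06-01 | ssh backuphost zfs recv backup/home

    # later runs: send only the delta between the last two snapshots
    zfs snapshot tank/home@2024-06-08
    zfs send -i tank/home@2024-06-01 tank/home@2024-06-08 \
        | ssh backuphost zfs recv backup/home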
I use sshfs.
For a Linux-only, LAN-only shared drive, NFS is probably the easiest you'll get; it's made for that use case.
If you want more of a Dropbox/OneDrive/Google Drive experience, Syncthing is really cool too, but that's a whole other architecture where you have an actual copy on all machines.
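On the client side the NFS route boils down to one fstab line; the server name and paths here are placeholders:

    # /etc/fstab - mount the share at boot, once the network is up
    server.lan:/srv/share  /mnt/share  nfs4  defaults,_netdev  0  0

    # test it without rebooting
    sudo mount /mnt/share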
NFS is still the standard. We're slowly seeing better adoption of VFS for things like hypervisors.
Otherwise, something like SFTPGo or Copyparty if you want a solution that supports pretty much every protocol.
I would say SMB is more the standard. It is natively supported in Linux and works a bit better for file shares.
NFS is better for server-style workloads.
TrueNAS is cool. I've only used Core so far, but I hear Scale is taking over.
This looks promising. Seems a little heavyweight at first glance... How was it to get up and running?
The GUI makes it pretty painless. It was my first real attempt at self-hosting anything, and my first experience with any kind of NFS/SMB setup at all. I ran it on bare metal for around 2 years before installing it as a VM on Proxmox.
NFS is pretty good
Check out Syncthing, which can sync a folder of your choice across all 3 devices.
[edit] oops, just saw you don't plan on using it
In that case, if you use KDE, you can use Dolphin to set up network drives to your local network machines over SSH (sftp:// locations).
I use NFS for linking VMs and Docker containers to my file server. Haven't tried it for desktop usage, but I imagine it would work similarly.
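As a sketch of the container side, Docker's local volume driver can mount an NFS export directly; the server address and export path below are made up:

    # create a named volume backed by an NFS export
    docker volume create \
        --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.10,rw,nfsvers=4 \
        --opt device=:/srv/share \
        media

    # use it like any other volume
    docker run --rm -v media:/data alpine ls /data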
LAN or internet?
HTTPS is king for internet protocols.
LAN only. I may set up a VPN connection one day but it's not currently a priority. (edited post to reflect)
NFS works, but HTTP was designed for shitty internet connections. Keep that in mind. ownCloud or similar might be a good idea.
TrueNAS is pretty top notch and offers a variety of storage and protocol options. If you're at all familiar with Linux-style OSes, it should be pretty easy to work with. Setting up storage comes with a bit of a learning curve, but it's not too bad. This SAN/NAS OS is polished, performant, and extensible. If you're not planning on using SMB/Samba, you can certainly use NFS, or iSCSI if that's your thing.
I think a reasonable quorum already said this, but NFS is still good. My only complaint is it isn't quite as user-mountable as some other systems.
So... I know you said no Samba, but Samba 4 really isn't bad anymore. At least, not nearly as shit as it was.
If you want an easily mountable filesystem for users (e.g. network discovery, etc.), it's pretty tolerable.
I still use sshfs. I can't be bothered to set up anything else; I just want something that works out of the box.
Isn't that super clunky? I keep getting all kinds of sluggishness, hangs, and the occasional error every time I use it. It ends up working, but wow, does it suck.
I mostly use Samba/CIFS clients, and it's fast and reliable with properly set up DNS, addressing shares only by DNS name or IP address. NetBIOS or Active Directory are overkill.
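A minimal client-side sketch of that, with made-up share names and a credentials file instead of a password on the command line:

    # /etc/cifs-creds (chmod 600), containing:
    #   username=me
    #   password=secret

    # /etc/fstab - mount the share by DNS name or IP
    //nas.lan/share  /mnt/nas  cifs  credentials=/etc/cifs-creds,uid=1000,gid=1000,vers=3.0,_netdev  0  0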