submitted 11 months ago by KDE to c/asklemmy@lemmy.ml

Hello fellow Lemmy users,

I was wondering what the best file sharing protocol/app/website is. To be honest, send.vis.ee currently seems the best to me, but I still wanted your opinions. Here are the options I found:

  1. LocalSend
  2. ffsend
  3. croc
  4. WebTorrent
  5. magic-wormhole
  6. using curl with 0x0.st or pixeldrain (quick example below)
  7. (anonfiles has shut down, so that's sad)
  8. rsync / ssh
  9. OnionShare
  10. IPFS
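
For number 6, the upload really is a one-liner. Roughly like this, following 0x0.st's documented curl usage (the filename is just a placeholder):

    # Upload a file to 0x0.st; it replies with a short download URL.
    curl -F'file=@./report.pdf' https://0x0.st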

From what I'm hearing, magic-wormhole makes the most sense, since it seems to be the most open standard for sharing files, but it still feels incomplete, or maybe it's just the lack of information on the topic that makes me uneasy.

croc seems to have had a number of CVEs, while magic-wormhole passed SUSE's audit. WebTorrent seems to fit a weird niche, and implementations like file.pizza aren't really that well built (considering you can't send multiple files there).

I would prefer a CLI, but GUIs too, so that I can send things to other people. I'd also like a FOSS protocol, since other apps can be built on top of it. Earlier I used SHAREit, which was so bad that the government literally pulled it over Chinese data concerns.

Currently I'm using LocalSend, but Warp (magic-wormhole) / Warpinator is also looking good.
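
For reference, this is roughly what a magic-wormhole transfer looks like on the command line (the code shown is just an illustrative example; each transfer generates its own):

    # Sender: wormhole prints a short one-time code.
    wormhole send ./photos.tar.gz
    #   Wormhole code is: 7-crossover-clockwork   (example output)

    # Receiver: runs this with the same code; the transfer is
    # end-to-end encrypted, with a relay used only if no direct connection works.
    wormhole receive 7-crossover-clockwork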

top 8 comments
[-] synapse1278@lemmy.world 11 points 11 months ago

I don't think there is a one-size-fits-all solution. It depends on what you are trying to do. For me it goes something like this (rough command sketches after the list):

  • transfer multiple files between two computers under my control: rsync
  • transfer a few files between computers on my local network: NFS or SMB
  • transfer one file between two computers under my control: scp
  • synchronize files between computers/phones: Syncthing
  • synchronize files with cloud hosting: rclone
  • share files with relatives: cloud hosting and a share link (I use pCloud, not FOSS)
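
Rough sketches of what a few of those look like in practice (hosts, paths and remote names are placeholders):

    # rsync: copy a directory tree, incremental and resumable
    rsync -avz --progress ./project/ user@server:/srv/backup/project/

    # scp: one-off copy of a single file
    scp ./notes.pdf user@server:/home/user/

    # rclone: mirror a local folder to a configured cloud remote
    rclone sync ./photos remote:photos --progress
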
[-] Darkassassin07@lemmy.ca 2 points 11 months ago* (last edited 11 months ago)

I'm lazy and really don't need anything more than direct web hosting, no encryption (beyond https), no auth, not even a web app.

An nginx instance uses try_files on a folder, either returning the file you asked for or a 404 page.

Drop a file in the folder, and domain.tld/folder/file.ext returns the file. Adding '/download/' to the start of the path adds the Content-Disposition: attachment header, so the file downloads instead of displaying inline (images/video/HTML/etc.).
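
Something along these lines, as a minimal nginx sketch of that setup (server name, paths and the 404 handling are assumptions, not my actual config; TLS left out for brevity):

    server {
        listen 80;
        server_name domain.tld;
        root /srv/www;                  # /srv/www/folder/ holds the shared files

        # Return the requested file if it exists, otherwise a 404 page.
        location /folder/ {
            try_files $uri /404.html;
        }

        # Same files, but force a download instead of inline display.
        location /download/folder/ {
            alias /srv/www/folder/;
            add_header Content-Disposition "attachment";
        }
    }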

Not used for anything sensitive ofc, but handy for simple file sharing to friends/family. (or just stupid backgrounds for the warehouse computer 🤷)

[-] sylver_dragon@lemmy.world 1 points 11 months ago

It really depends on your use case. For example, I use Nextcloud, running on my own server, as a replacement for all things "cloud". In my use case, I wanted to have a system where pictures/videos/files which I took on my phone were auto-magically synced to a server. My main requirements were:

  1. Server is under my control.
  2. Android client compatibility.
  3. Automatic syncing of files in folders I select when an internet connection exists.
  4. Two factor authentication via YubiKey.
  5. Encryption "in flight".
  6. Open Source.

I now have Nextcloud running in a container on my home server, with a public IP and domain. This gives me all the advantages of having my pictures, videos and important files, from my phone and computer, backed up to "the cloud" without having them on someone else's computer. The downside is that I have to sort out security, updates and backups on my own. I'm fine with that trade-off, though not everyone would be.
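
For anyone curious, the container side can start as small as this (a bare-bones sketch using the official image; a real deployment adds a proper database, reverse proxy/TLS and backups on top):

    # Official Nextcloud image; data lives in a named volume, web UI on port 8080.
    docker run -d --name nextcloud \
      -p 8080:80 \
      -v nextcloud_data:/var/www/html \
      nextcloud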

As a bonus, I can provide "cloud" functions to my family as well. And sharing files out to extended family is as easy as setting a file to "shared" and sending a link. Technically, that exposes the file to the public internet, but I only do this for files which I don't consider "sensitive" and the link contains a long, random string to obfuscate it. So long as I take it down before search engines have a chance to pick up on it, the risk is minimal.

[-] MentalEdge@sopuli.xyz 1 points 11 months ago

No search engine is going to find a long obfuscated URL. I don't think NC publishes a site tree for a crawler to use.

In fact, unless you post your domain somewhere online or its registration is available somewhere, it's unlikely anyone will ever visit your server without a direct link provided by you or someone else who knows it.

You might still get discovered by IP crawlers, but even then they aren't going to be trial and erroring their way to shared files, for the same reason they can't brute force any sane SSH password.

[-] nixcamic@lemmy.world 1 points 11 months ago

Nah I have some services running on unpublished domains and I get hit by brute force attempts at SSH logins all the time. It might not be sane but botnet gonna botnet.

[-] MentalEdge@sopuli.xyz 2 points 11 months ago

Oh, same. Though on my current IP it hasn't happened for a couple of years now.

But finding an SSH port with an IP crawler is a lot easier than finding all the services accessible behind different paths/subdomains on port 80. And even then, mapping out a site tree all the way out to uncrackable-password-length URLs is never going to happen by brute force.

[-] Perhyte@lemmy.world 1 points 11 months ago

In fact, unless you post your domain somewhere online or its registration is available somewhere, it’s unlikely anyone will ever visit your server without a direct link provided by you or someone else who knows it.

If you use HTTPS with a publicly-trusted certificate (such as via Let's Encrypt), the host names in the certificate will be published in certificate transparency logs. So at least the "main" domain will be known, as well as any subdomains you don't hide by using wildcards.

I'm not sure whether anyone uses those as a list of sites to automatically visit, but I certainly would not count on nobody doing so.

That just gives them the domain name though, so URLs with long, randomly-generated paths should still be safe.
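
If you want to see what the transparency logs expose for a domain, one quick (unofficial) way is crt.sh's JSON endpoint, something like this (assumes jq is installed):

    # List names in certificates logged for example.com and its subdomains.
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u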

[-] MentalEdge@sopuli.xyz 1 points 11 months ago* (last edited 11 months ago)

There is also the DNS system itself; I'm not sure whether reverse lookup is possible in some way without a PTR record, but suffice it to say there are ways, and there are many.

Obscurity is not security, just a reasonable first line of defense. If you run something publicly accessible, lock it down.

Stuff that can't be brute forced in a million years is a good way to do that, even if it's just a string in a URL. It's basically like having to enter a password. You could even fail2ban it by banning IPs that try a bunch of random URLs that aren't valid, or use a simple rate-limit.
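
As a rough sense of scale: 64 random characters from a 64-character alphabet is about 384 bits, so even billions of guesses per second get nowhere. Generating such a token is a one-liner (assuming openssl is available):

    # 48 random bytes -> 64 base64 characters (~384 bits of entropy),
    # made URL-safe by swapping +/ for -_.
    openssl rand -base64 48 | tr '+/' '-_'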
