[-] thomas@lemmy.douwes.co.uk 6 points 1 year ago

If my fingers prune I'm going to die or something

[-] thomas@lemmy.douwes.co.uk 6 points 1 year ago

Thanks for maintaining this fork, this was the last missing thing on Lemmy for me

[-] thomas@lemmy.douwes.co.uk 4 points 1 year ago

Same, I feel lucky to have such an uncommon surname because I easily got the domain lol. There was a shorter version with the last two letters as the TLD, but that was already taken sadly

[-] thomas@lemmy.douwes.co.uk 1 points 1 year ago

WebTorrent seems to have some issues with peer discovery. I've tried the instant.io site linked on webtorrent.io and I can't get it to download or share anything. The desktop client managed to download a torrent from my PeerTube instance over normal BitTorrent, but I can't share it over WebTorrent. I downloaded a video from my PeerTube instance using btorrent.xyz over WebTorrent, but I can't seed new files because the peers don't find each other. When I use WebTorrent with a tracker (like PeerTube's) it works fine, but how were sites like instant.io supposed to discover peers without trackers? I don't think DHT exists for WebTorrent yet.

You can manually seed videos on other instances using redundancy, but I was thinking automatic redundancy for watched videos might be a good idea. I guess you could do automatic redundancy for entire instances, but that would take up a lot of storage space.

One of the nice things about BitTorrent is the high reliability, so I assumed that was what PeerTube was going for. I guess the idea is not to provide data redundancy but to split load instead?

[-] thomas@lemmy.douwes.co.uk 1 points 1 year ago

Why? If 5 instances are seeding the video, clients should be able to download from all 5 instances and spread the bandwidth usage, right?

[-] thomas@lemmy.douwes.co.uk 1 points 1 year ago

Why not also use the instance to re-seed? It could keep seeding after the visitor closes the video.

[-] thomas@lemmy.douwes.co.uk 2 points 1 year ago

Would it not make more sense if your instance downloaded and redistributed the torrent? Then you could keep seeding after the tab closes, and it wouldn't leak your IP either.
What about peer discovery? I opened that WebTorrent website in two browsers and they didn't peer. Is that demo real?

89
submitted 1 year ago* (last edited 1 year ago) by thomas@lemmy.douwes.co.uk to c/fediverse@lemmy.ml

Can someone help me understand how PeerTube P2P works? I understand how ActivityPub is used for all the "social" parts, but I'm a bit confused about the actual video player.

Redundancy:
I have my own instance and I made a redundancy of a video from the Blender instance. If I watch the video on my instance I see 2 peers, my instance and the Blender one; I can see both in Firefox dev tools.
If I watch the same video on the Blender instance I see 7 peers: the Blender instance, mine, and others. Why are these extra peers not showing on my instance? Do I need to do something? If I watch the video on one of these other instances, mine does show up in their peers list.

I also made a video from Framatube redundant, but my instance doesn't appear as a peer on Framatube.

Client P2P:
If I watch a video, does my browser share it over P2P? If so, what is the point of this? It seems to lose the video as soon as I leave the page, so this functionality seems a bit useless to me.

EDIT: Answered in comments.

BitTorrent:
If I download a video I get the option of a BitTorrent torrent. If I seed this torrent, can it be leeched by web clients? I tried, and it doesn't show up in the peer list. What's the point of running a full BitTorrent tracker if it doesn't work with the main P2P system?

EDIT: Plain BitTorrent is incompatible with the WebTorrent protocol that PeerTube uses. PeerTube also uses HLS instead of WebTorrent, which behaves a bit differently (you can't seed it with a WebTorrent client).

Peer discovery:
As I said in the first two sections, how does the player actually find peers? Is there something like DHT or a tracker built into PeerTube? If it's an internal tracker, how does the tracker find peers?

EDIT: It uses a tracker built into PeerTube.

Thanks for any help.

[-] thomas@lemmy.douwes.co.uk 3 points 1 year ago

What the hell, I wondered why the website wasn't loading, then I see a 228-page PDF open lol

[-] thomas@lemmy.douwes.co.uk 15 points 1 year ago

This is wrong, I use IPTables but the device is absolutely not dedicated lol.

[-] thomas@lemmy.douwes.co.uk 59 points 1 year ago* (last edited 1 year ago)

Download ML thing.
make new venv.
pip install -r requirements.txt.
pip can't find the right versions.
pip install --upgrade pip.
pip still can't find the right versions.
install conda.
conda breaks for some reason.
fix conda.
install with conda.
pytorch won't compile with CUDA support.
install 2,000,000GB of nvidia crap from conda.
pytorch still won't compile.
install older version of gcc with conda.
pytorch still won't compile.
reinstall the entire operating system with debian 11.
apt can't find shitlib-1.
install shitlib-2.
it's not compatible with shitlib-1.
compile it from source.
automake breaks.
install debian 10.
It actually works.
"Join our discord to get the model".
give up.
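For anyone actually attempting this, the happy path the first few steps describe is roughly the following (a sketch; requirements.txt and its pins are whatever the project ships):

```shell
# Create an isolated virtual environment for the project.
python3 -m venv .venv
. .venv/bin/activate

# Old pips often can't resolve pinned versions, so upgrade first
# (the flag is --upgrade, not --update).
pip install --upgrade pip

# Install the project's pinned dependencies into the venv.
pip install -r requirements.txt
```

Everything after that point in the list is where it stops being sketchable.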

2
Client side loading: (lemmy.douwes.co.uk)

[-] thomas@lemmy.douwes.co.uk 1 points 1 year ago

The file you downloaded is a compressed JSON file; it's not something you can really just look at, but it contains all the data needed to build a nice UI around.
I don't know what OS you are on, but on Linux you can run zstd -d -c file.zst | jq . and it will print everything in the file. It's not really readable, though. Also, it doesn't include any of the media content, only the text.
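If you want more than a pretty-printed wall of text, jq can pull out individual fields. A small sketch, assuming zstd and jq are installed and the dump is one JSON object per line; the "title" key here is hypothetical, the real schema depends on the export:

```shell
# Pretty-print the whole decompressed dump (can be very large).
zstd -d -c file.zst | jq .

# Extract one field per record. ".title" is a hypothetical key; peek at
# the real schema first with: zstd -d -c file.zst | head -c 1000
zstd -d -c file.zst | jq -r '.title'
```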

[-] thomas@lemmy.douwes.co.uk 2 points 1 year ago* (last edited 1 year ago)

I hate reddit. But it feels like the Library of Alexandria burning down (yeah, I know). All those Google search results and educational subreddits are shutting down forever, and because they are too small reddit won't force them open again.
A lot are in the pushshift archive, but that cuts off at 2022. Also, it doesn't include a lot of the smaller subreddits.
I have had my PC running 24/7 with multiple VPNs to avoid rate limits, downloading as much as I can before the API dies, but with some blackouts moving forward a day I have already missed a few.
Like many others, I would often add "reddit" to the end of my searches to get better results; half the websites in web searches now are either AI generated, copies, or completely ad-ridden sites that ask you to turn off your ad blocker.

0
submitted 1 year ago* (last edited 1 year ago) by thomas@lemmy.douwes.co.uk to c/technology@lemmy.ml
4
submitted 1 year ago* (last edited 1 year ago) by thomas@lemmy.douwes.co.uk to c/technology@beehaw.org

thomas

joined 1 year ago