[-] ssdfsdf3488sd@lemmy.world 4 points 6 months ago

That's not the feature I would port to Paperless. Paperless needs an O counter lol.

[-] ssdfsdf3488sd@lemmy.world 8 points 6 months ago

In Firefox on Android I just flip the switch to request the desktop site and it's mostly fine...

[-] ssdfsdf3488sd@lemmy.world 8 points 6 months ago

50 watts is maybe half of one of my 10 gig switches...

[-] ssdfsdf3488sd@lemmy.world 3 points 7 months ago

Just came here to say this. It works on a 10-dollar-a-year RackNerd VPS for me, no problem. Matrix chugs on my much bigger VPS, although it is sharing that with a bunch of other things; overall it should have much more resources.

[-] ssdfsdf3488sd@lemmy.world 18 points 8 months ago

Because if you use relative bind mounts you can move a whole docker compose set of containers to a new host with docker compose stop, then rsync it over, then docker compose up -d.

Portability and backup are dead simple.
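
Roughly what that move looks like in practice, assuming the whole stack (compose file plus bind-mounted dirs like ./data) lives in one directory; the paths and hostname here are just examples, not my actual setup:

```
# Stop the stack cleanly so the data under the bind mounts is consistent.
cd ~/stacks/myapp
docker compose stop

# Copy the whole directory, compose file and data included, to the new host.
rsync -avz ~/stacks/myapp/ newhost:~/stacks/myapp/

# Bring it up on the new host; the relative bind mounts resolve the same way there.
ssh newhost 'cd ~/stacks/myapp && docker compose up -d'
```

The trick is that the compose file only ever references ./something, never an absolute path, so nothing needs editing after the move.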

[-] ssdfsdf3488sd@lemmy.world 4 points 8 months ago

That's the dude who was butt hurt about something this dude did: https://github.com/iamadamdev/bypass-paywalls-chrome

and so forked it and arguably does a better job, lol.

[-] ssdfsdf3488sd@lemmy.world 4 points 8 months ago

Just FYI, direct streaming isn't really direct streaming as you may think of it if you have specified Samba shares on your NAS instead of something on the VM running Jellyfin. It will still pull from the NAS into Jellyfin and then HTTP stream from Jellyfin, which is super annoying.


Anybody see a 48-port managed 2.5 gig Ethernet switch for reasonable pricing yet? It seems like these are still either thousands of dollars or sold for the Chinese market without appropriate certifications to be plugged into the North American electric grid. Any help would be appreciated (even better if it has 2-4 SFP+ 10 gig ports on it).

[-] ssdfsdf3488sd@lemmy.world 4 points 11 months ago

I started on Planka but ended up on Vikunja; it was just a lot nicer and more flexible for my needs.

[-] ssdfsdf3488sd@lemmy.world 2 points 11 months ago

That's what I'm using right now. I am kind of curious if you are aware of any apk-using tiny operating systems like Alpine that also have systemd? I want to experiment with quadlets/Podman but don't really want to lose how simple Alpine is to administer and how fast it boots.
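
For reference, this is the kind of thing I mean by quadlets; only a minimal sketch, and the image and names are just examples, but it shows why systemd is the sticking point (quadlet files get turned into systemd units):

```
# Rootless quadlet: drop a .container file where systemd's generator finds it.
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/whoami.container <<'EOF'
[Unit]
Description=Example quadlet-managed container

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
EOF

# systemd generates whoami.service from the .container file on reload.
systemctl --user daemon-reload
systemctl --user start whoami.service
```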

[-] ssdfsdf3488sd@lemmy.world 3 points 11 months ago

I used to host AnonAddy. I don't have the docker compose or configs anymore, but I don't remember it being that bad. I stopped a couple years ago because SimpleLogin became included with my VPN subscription (and then I found Fastmail, which has a similar feature built in, so I ended up canceling SimpleLogin and that VPN and going to Fastmail and Mullvad). I basically just edited their example compose/env files and ran it behind my existing Nginx Proxy Manager setup (that is gone now too; I ended up moving to Traefik, but that's a story for another time). Compose example here: https://github.com/anonaddy/docker/tree/master/examples/compose

[-] ssdfsdf3488sd@lemmy.world 3 points 1 year ago

It's really easy with Headscale, so I assume it must be really easy with Tailscale too. How I did it: I created a tiny Tailscale VM to advertise the route to the IPs I wanted access to on my internal LAN. Then I shared the NFS share with the IP of that subnet router. Now everything on my headscale network looks like it's coming from the subnet router and it works no problem. (Just remember you have it set up this way in case you ever expand your userbase, as this is inherently insecure if there is anything connected to your tailnet that you don't want to have full access to your NFS shares.)
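
If it helps, the setup was roughly this; all the IPs, hostnames and paths here are made up, not my actual config:

```
# On the tiny subnet-router VM: enable forwarding and advertise the LAN subnet.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

sudo tailscale up \
  --login-server https://headscale.example.com \
  --advertise-routes=192.168.1.0/24
# The advertised route still has to be approved on the headscale/tailscale
# admin side; the exact command or UI depends on your version.

# On the NAS, export the share only to the subnet router's LAN address,
# e.g. a line like this in /etc/exports:
#   /mnt/tank/media  192.168.1.50(rw,sync,no_subtree_check)
```

Because the subnet router NATs the tailnet traffic by default, the NAS only ever sees that one LAN IP, which is why a single export line is enough.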

[-] ssdfsdf3488sd@lemmy.world 9 points 1 year ago* (last edited 1 year ago)

Get rid of iSCSI. Instead, use TrueNAS SCALE for the NAS and use a zvol on TrueNAS to run a VM of Proxmox Backup Server. Run Proxmox on the other box with local VMs and just back the VMs up to Proxmox Backup Server at whatever rate you are comfortable with (e.g. once a night).

Map NFS shares from TrueNAS directly into any docker containers you are running on your VMs, map CIFS shares to any Windows VMs, and map NFS shares directly to any Linux things. This is way more resilient, gets local NVMe speeds for the VMs, and still keeps the bulk of your files on the NAS, while also not abusing your 1 gbit Ethernet for VM stuff, just for file transfer (the VM stuff happens at multi-GB speeds on the local NVMe on the Proxmox server).
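
As a rough sketch of the NFS mapping part (hostname, dataset path and the container are all placeholders, not a recommendation for your exact setup):

```
# On a Linux VM that runs docker, mount the TrueNAS export...
sudo mkdir -p /mnt/tank-media
sudo mount -t nfs truenas.lan:/mnt/tank/media /mnt/tank-media

# ...then bind-mount it into the container, so the bulky files stay on the NAS
# while the VM and the container itself live on local NVMe.
docker run -d --name jellyfin \
  -v /mnt/tank-media:/media:ro \
  jellyfin/jellyfin
```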

