[-] UberMentch@lemmy.world 10 points 1 month ago

We're in an online echo chamber; we don't need to look at reality. Just find the opinions that we agree with, and that agree with us, and put 'em at the top!

[-] UberMentch@lemmy.world 5 points 3 months ago* (last edited 3 months ago)

I'm not very experienced with OpenWRT - how sensitive is it to device changes? If your Barracuda dies tomorrow, do you have to purchase the same brand / model, or could you slap your saved config onto a similar device? Is there some sort of device compatibility concern to consider?

74
submitted 3 months ago* (last edited 3 months ago) by UberMentch@lemmy.world to c/selfhosted@lemmy.world

My Linksys router died this morning. Fortunately, I had a spare Netgear one lying around, but manually replacing all of the DHCP reservations (security cameras, user devices, network devices, specific IoT devices) and port forwarding rules was a tedious pain. I needed a quick solution since my job is remote, so I factory reset the Netgear (I wasn't sure what settings were already on it) and applied the most important settings to get back to work.

I'm looking for recommendations for a more mature setup, a backup solution, or another approach entirely. Currently, my internet comes from an AT&T ONT, which has almost everything disabled (DHCP included) and passes through to my Linksys router. That acted as the router and DHCP server, and fed an 8-port switch, which splits off to individual devices and two more routers acting as access points (one for the other side of the house, one for the detached garage; DHCP disabled on both).

If going the route of a backup solution: is it feasible to install OpenWRT on all of my devices, with the expectation that I can automate backups of all settings and configurations and restore them in case of a router dying?
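Something like this nightly pull is what I'm imagining (assuming SSH access to each device; the IPs are placeholders):

```sh
#!/bin/sh
# Sketch: pull a config archive from each OpenWRT device over SSH.
# sysupgrade -b writes a tarball of all settings on the device.
for host in 192.168.1.1 192.168.1.2 192.168.1.3; do
    ssh root@"$host" "sysupgrade -b /tmp/backup.tar.gz"
    scp root@"$host":/tmp/backup.tar.gz "openwrt-$host-$(date +%F).tar.gz"
done
```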

If going the route of a smarter solution, I'm not sure what to consider, so I'd love to hear some input. I think having so many devices using DHCP reservations might not be the way to go, but it's the best way I've been able to provide organization and structure to my growing collection of network devices.

If going with a more mature setup, I'm not sure what to consider for a fair ballpark budget / group of devices for a home network. I've been eyeing the Ubiquiti Cloud Gateway + 3 APs for a while (to replace my current 1 router / 2 routers-in-AP-mode setup), but am wondering if the selfhosted community has any better recommendations.

I'm happy to provide more information - I understand that selfhosting / home network setup is not one-size-fits-all.

Edit: Forgot to mention! Another minor gripe: my current 1 router / 2 routers-as-AP solution isn't meshed, so my devices have to be aware of all 3 networks as I walk across my property. It's a pain that I know can be solved by buying dedicated access points (...right?), but I'd like to hear others' experiences with this, either with OpenWRT or other network solutions!

Edit 2: Thanks for the suggestions and discussion, everybody; I appreciate hearing everyone's recommendations and different approaches. I think I'm leaning towards the Ubiquiti UCG Ultra and a few Ubiquiti APs; they seem to cover my needs well. If in a few years that bites me in the ass, I think my next choices will be MikroTik, OPNsense, or OpenWRT.

[-] UberMentch@lemmy.world 14 points 3 months ago* (last edited 3 months ago)

I've had issues with .local on my Android device. Straight up doesn't work. I had to change to .lan
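For what it's worth, the change on my end amounted to something like this (assuming dnsmasq is handling your LAN's DHCP/DNS; adjust for your router's config system):

```
# dnsmasq: hand out .lan as the local domain instead of .local
domain=lan
local=/lan/
expand-hosts
```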

[-] UberMentch@lemmy.world 3 points 5 months ago

Yeah, you and I have very similar use cases here: Gluetun, VPN, download clients + *arr stack. I get it. I'll be sure to update with a solution if I spot one (when I get around to looking)!

[-] UberMentch@lemmy.world 3 points 5 months ago* (last edited 5 months ago)

I'm currently dealing with this exact same issue: I want to run multiple instances of Lidarr (separate MP3 and FLAC libraries) behind Gluetun. I don't have an answer (I haven't put in the time to try to solve it yet), so apologies if I got your hopes up. I'm just here to confirm that others have this issue too!

Edit: Regarding that documentation, it seems like it's not saying that changing the port breaks things; it's just that you have to set both sides of the mapping to the same value. The default is 8080, so instead of 8080:8080, change the mapping to 8081:8081. That's how I'm reading it, anyway.
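In compose terms, I read it as something like this (using the thread's port numbers):

```yaml
services:
  gluetun:
    ports:
      - "8081:8081"   # both sides changed together, not 8081:8080
```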

I should also mention that the closest I got to fixing this was to boot up my second Lidarr container separately, set its port in the Lidarr WebUI to something different (8687, for example), and then attach it to my Gluetun docker compose file. I did a `docker compose pull` to update my stack, then a `docker compose up -d`. You might try this approach and tinker with it; I just haven't had time to really play with this "solution."
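Roughly what my attempt looked like, as a sketch (the service names are mine, Gluetun still needs its own VPN settings, and I haven't fully tested this):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN          # Gluetun needs this to manage the VPN tunnel
    ports:
      - "8686:8686"        # first Lidarr (default port)
      - "8687:8687"        # second Lidarr, after changing its port in the WebUI
  lidarr-mp3:
    image: lscr.io/linuxserver/lidarr
    network_mode: "service:gluetun"   # share Gluetun's network namespace
  lidarr-flac:
    image: lscr.io/linuxserver/lidarr
    network_mode: "service:gluetun"
```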

Edit 2: Played around more with the solution I mentioned, which is the same one LifeBandit666 found, and it seems to work. Just don't be a dumbass like me: after putting the container into my Gluetun docker compose file, I forgot to do the application configuration on it and just saw a bunch of Lidarr errors.

[-] UberMentch@lemmy.world 8 points 5 months ago

I used to have this issue more often as well. I've had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT's response and saying "do not include y."

[-] UberMentch@lemmy.world 4 points 8 months ago

My solution to this was a Radicale docker container, with DAVx5 from F-Droid on my phone. Very lightweight, and it just works without much configuration. I don't use my phone for email much, so I just use Thunderbird as my email client.
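For reference, my Radicale container boils down to something like this (tomsquest/docker-radicale is the community image I'm assuming here; paths are placeholders):

```yaml
services:
  radicale:
    image: tomsquest/docker-radicale
    ports:
      - "5232:5232"     # Radicale's default port
    volumes:
      - ./data:/data    # calendar/contact collections live here
```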

[-] UberMentch@lemmy.world 7 points 10 months ago

FMD2 (for auto-downloading manga), Komga (for hosting the manga on a local server), and Tachiyomi (as the reader) make for a self-hosted solution, if you're interested.
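If you want to try the Komga piece, a minimal compose sketch (paths are placeholders; point the data mount at FMD2's download folder):

```yaml
services:
  komga:
    image: gotson/komga
    ports:
      - "25600:25600"          # Komga's default port
    volumes:
      - ./komga/config:/config
      - ./manga:/data          # your downloaded manga library
```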

[-] UberMentch@lemmy.world 10 points 11 months ago

I think the reason I'm not comfortable with using the term "lying" is that it carries a negative connotation. When you say that someone lies, it comes with an understanding that they made a choice to lie, usually with ill intent. I agree, we don't need to get into a philosophical discussion on choice and free will. But I think saying something like "GPT lies" is a bit irresponsible for the purposes of a discussion.

[-] UberMentch@lemmy.world 16 points 11 months ago

They said "it just repeats words that simulate human responses," and I'd say that concisely answers your question.

Anthropomorphizing inanimate objects and machines is fine for offering a rough explanation of what is happening, but when you're trying to critically evaluate something, you probably want a more rigorous understanding.

In this case, it might be fair to tell a child that the AI is lying to us, and that it's wrong. But if you want a more serious discussion on what GPT is doing, you're going to have to drop the simple explanation. You can't ascribe ethics to what GPT is doing here. Lying is an ethical decision, one that GPT doesn't make.

[-] UberMentch@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

Would love to deploy this, but unfortunately I'm running server equipment that apparently doesn't support MongoDB 5 (error message: `MongoDB 5.0+ requires a CPU with AVX support, and your current system does not appear to have that!`). I tried deploying with both 4.4.18 and 4.4.6 and couldn't get it to work. If anybody has recommendations, I'd appreciate hearing them!

Edit: Changed my Proxmox VM's processor type to `host`, which fixed my issue.
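For anyone hitting the same wall, the CLI equivalent is roughly this (the VM ID is a placeholder):

```sh
# On the Proxmox host: set the VM's CPU type to "host" so the
# guest sees the real CPU's flags, including AVX
qm set 100 --cpu host
```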

9
submitted 1 year ago* (last edited 1 year ago) by UberMentch@lemmy.world to c/selfhosted@lemmy.world

I feel like I'm losing my mind. A few days ago, all of my containers running on Docker Desktop on my Windows Server host were working nicely. A few of them had NFS volumes set up to reach my Synology NAS on my local network, and things were fine. I've done so much digging and tweaking over the last few days that I can't be certain where I've broken the connection further, but I woke up one morning and the containers with NFS volumes pointing at my NAS no longer worked. I hadn't restarted the host, and I don't know what changed. Containers like NPM, which I had set up for my internal DNS, would no longer redirect to any IP outside my docker network (for example, I run Plex NOT in a container on my host PC). All of my containers were on the default bridge network, and now nothing on that docker network can reach anything on my local network.
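For reference, the NFS volumes are defined roughly like this (the NAS IP and share path are placeholders):

```yaml
volumes:
  nas-media:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.20,nfsvers=4"   # NAS address and NFS version
      device: ":/volume1/media"          # exported share on the Synology
```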

I've tried setting static routes in my router, changed a lot of configurations, and dug through tutorials, guides, and posts all weekend, but I couldn't make any progress. I'd really appreciate some help on this one, and I can provide more details, logs, and compose files as needed; I just don't want to dump everything at once.

Duplicate thread over on Reddit: https://reddit.com/r/docker/comments/15qaotn/cant_ping_local_network_from_inside_containers/

Edit: For anybody looking for a solution, I bit the bullet and installed Proxmox. I'm running my docker containers on an Ubuntu VM now, and Docker on Linux seems to be working much better. I suppose the answer is just "run Docker on a Linux OS," since Docker on Windows seems limited. Plus, it gives me something new to play around with.
