[-] rehydrate5503@lemmy.world 1 points 5 days ago

I actually tried this as my second step in troubleshooting, the first being using different ports.

In the non-Omada management software, the port defaults to 10G, and if the device is powered on before the switch, it negotiates 10G correctly and works at full speed (tested with iperf3). But as soon as any of the 10G-connected devices is rebooted, I’m back to 1G. To fix it, I have to set the port to 1G with flow control on, apply the changes, save the config, refresh the page, change it back to 10G with flow control off, apply, and save the config again; then it goes back to 10G. Alternatively, I can reboot the switch and it’s fine again.
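Since the fallback only shows up after a reboot, it can help to log exactly when the link renegotiates on the host side (NAS or workstation). A minimal sketch, assuming a Linux host that exposes the negotiated speed at `/sys/class/net/<iface>/speed`; the interface name `enp1s0` is a placeholder:

```python
import time

def parse_speed_mbps(raw: str) -> int:
    """Parse the contents of /sys/class/net/<iface>/speed (Mb/s)."""
    return int(raw.strip())

def watch(iface: str = "enp1s0", interval: float = 5.0) -> None:
    """Print a timestamped line whenever the negotiated link speed changes."""
    path = f"/sys/class/net/{iface}/speed"
    last = None
    while True:
        try:
            with open(path) as f:
                speed = parse_speed_mbps(f.read())
        except (OSError, ValueError):
            speed = -1  # link down or interface missing
        if speed != last:
            print(f"{time.ctime()}: {iface} negotiated {speed} Mb/s")
            last = speed
        time.sleep(interval)
```

Correlating those timestamps with device reboots would at least show whether the renegotiation happens immediately or after a delay.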

In Omada it’s the same, with fewer steps to get there, but I sometimes have to do it 2-3 times before it works.

Same issue with both 10G TP-Link switches, so I’m thinking it might be the SFP. I’m using Intel SFP+ modules with FS optical cables. I’m using a DAC for the uplink from the 10G switch to my unmanaged 2.5G switch, and that one never drops; it always works at max speed.

[-] rehydrate5503@lemmy.world 1 points 5 days ago

Fair enough. Is there anything one can do to mitigate it? For the recent issue in the news, for instance, a mitigation strategy for consumers is basically to reboot their router often. I keep my router and all hardware up to date, and try to follow the news here. Not sure if there is really anything else I could do.

[-] rehydrate5503@lemmy.world 1 points 5 days ago

Oh wow, hard to believe a huge bug like that would make it to production. What do you recommend instead? Stick with TP-Link?

[-] rehydrate5503@lemmy.world 2 points 5 days ago

From what I’ve seen it seems limited to consumer routers, but it raises flags is all, and makes me reconsider my options.

44

cross-posted from: https://lemmy.world/post/21641378

So I just added a TP-Link switch (TL-SG3428X) and access point (EAP670) to my network, using OPNsense for routing; I was previously using a TP-Link SX-3008F switch as an aggregate (which I no longer need). I’m still within the return window for the new switch and access point, and have to admit the sale prices were my main reason for going with these items. I understand there have been recent articles mentioning TP-Link and security risks, so I’m wondering if I should consider returning these and upping my budget to go for Ubiquiti. The AP would only be like $30 more for an equivalent, so that’s negligible, but a switch that meets my needs is about 1.6x more, and still only has 2 SFP+ ports, while I need 3 at absolute minimum.

I’m generally happy with the performance, but there is a really annoying bug: if I reboot a device, the switch drops down to 1G instead of 10G, and I have to tinker with the settings or reboot the switch to get 10G working again. This happens with the OPNsense uplink, my NAS, and my workstation. The same thing happened with the 3008F, and support threads on the forums have not been helpful.

In any case, would switching to Ubiquiti be worth it? Any opinions?

[-] rehydrate5503@lemmy.world 2 points 5 days ago

So I just added a TP-Link switch (TL-SG3428X) and access point (EAP670) to my network, using OPNsense for routing. I’m still within the return window for both items. I understand the article mentions routers, but should I consider returning these and upping my budget to go for Ubiquiti? The AP would only be like $30 more for an equivalent, so that’s negligible, but a switch that meets my needs is about 1.6x more, and it still only has 2 SFP+ ports, while I need 3 at minimum.

20

Hi all,

I’m having an issue with an NFS mount that I use for serving podcasts through Audiobookshelf. The issue has been ongoing for months, and I’m not sure where the problem is or how to start debugging.

My setup:

  • Unraid with an NFS share “podcasts” set up
  • Proxmox on another machine, with a VM running Fedora Server 40
  • Storage set up in Fedora to mount the “podcasts” share on boot; works fine
  • A Docker container on the same Fedora VM runs Audiobookshelf, with the “podcasts” mount passed through in the docker-compose file

The issue:

The NFS mount randomly drops. When it does, I need to manually mount it again, then restart the Audiobookshelf container (or reboot the VM, but I have other services running on it).

There doesn’t seem to be any rhyme or reason to the unmount. It doesn’t coincide with any scheduled updates or spikes in activity, and there’s no issue on the Unraid side that I can see. Sometimes it drops overnight, sometimes midday. Sometimes it’s fine for a week; other times I’m remounting twice a day. What finally forced me to seek help is that the other day I was listening to a podcast, paused for 10-15 minutes, and couldn’t restart the episode until I went through the manual mount procedure. I checked, and it was not due to the disk spinning down.
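Until the root cause turns up, the manual recovery could at least be automated. A rough sketch of a watchdog, where the mount point, NFS export path, and container name are all placeholder assumptions:

```python
import os
import subprocess
import time

MOUNT_POINT = "/mnt/podcasts"                    # assumed local mount point
NFS_SOURCE = "unraid.local:/mnt/user/podcasts"   # assumed export path
CONTAINER = "audiobookshelf"                     # assumed container name

def is_mounted(path: str) -> bool:
    """True if path is currently a mount point."""
    return os.path.ismount(path)

def remount_and_restart() -> None:
    """Repeat the manual recovery: remount the share, restart the container."""
    subprocess.run(["mount", "-t", "nfs", NFS_SOURCE, MOUNT_POINT], check=True)
    subprocess.run(["docker", "restart", CONTAINER], check=True)

if __name__ == "__main__":
    while True:
        if not is_mounted(MOUNT_POINT):
            print(f"{time.ctime()}: {MOUNT_POINT} gone, remounting")
            remount_and_restart()
        time.sleep(60)
```

Running something like this from cron or a systemd service turns the manual procedure into a short automatic outage; it doesn’t fix anything, but the timestamps it logs may also help narrow down when the drops happen.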

I’ve tried updating everything I could; the issue persists. I only just updated to Fedora 40. It was on 38 previously and initially worked for many months without issue, then randomly started dropping the NFS mounts (I tried setting up other share mounts and had the same problem). I updated to 39, then 40, and the issue persists.

I’m not great with logs but I’m trying to learn. Nothing sticks out so far.

Does anyone have any ideas how I can debug and hopefully fix this?

[-] rehydrate5503@lemmy.world 48 points 2 months ago

Similar situation here. Lots of ghosting, or unmatching on the day of a scheduled date. Had two dates in the last few months of using the apps. The first woman was about 15 years older than her pics. Not unattractive by any means, but I felt lied to from the get-go. The other, let’s just say she had some work done after her most recent pics, and the surgeon shouldn’t be practicing.

[-] rehydrate5503@lemmy.world 19 points 4 months ago

Omg there’s sound to this nightmare too 🤢

[-] rehydrate5503@lemmy.world 28 points 8 months ago

Yeah, this is not new and not shrinkflation… here in Canada the 440ml cans have been around for over 20 years in multi-packs, and the 500ml is available as individual cans.

[-] rehydrate5503@lemmy.world 15 points 8 months ago

While I agree with the general sentiment, these are both standard sizes for Guinness, and the 440ml 4-packs and 8-packs have been around for well over 20 years. Here in Canada, the 440ml cans are available in multi-packs while the 500ml cans are sold individually. I’ve seen the same in the States. I think OP just saw the smaller can next to the slightly bigger one and jumped on the shrinkflation hate train without checking.

[-] rehydrate5503@lemmy.world 65 points 8 months ago

Don’t forget our bee friends drink water!

I’ve put a couple of small bowls in and around my flower beds, with small flat stones for them to sit on while they drink.

[-] rehydrate5503@lemmy.world 16 points 11 months ago* (last edited 11 months ago)

For the occasional time when I’m troubleshooting something and Reddit has the only solution, on an “unmoderated sub” or one with 18+ posts, I just change the “www” in the URL to “old” and get the old, non-mobile-friendly UI. It lets you bypass the app popups, etc. Sometimes when you go into a post you’re back in the new UI and might get another popup, but backing out or changing the URL to “old” usually solves it in my experience.
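The trick above is just a subdomain swap, which could even be scripted; a trivial sketch (the example URL is hypothetical):

```python
def to_old_reddit(url: str) -> str:
    """Swap the "www" subdomain for "old" to get the legacy UI."""
    return url.replace("://www.reddit.com", "://old.reddit.com", 1)

print(to_old_reddit("https://www.reddit.com/r/homelab/comments/abc123/"))
# https://old.reddit.com/r/homelab/comments/abc123/
```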

47

Hello, I’m planning a rather large trip later this year and have been searching for something to help me plan and organize. I’ve come across a few apps that are not exactly privacy friendly, like TripIt and Wanderlog.

Does anyone know of any self hosted or otherwise open source alternatives to these apps?

[-] rehydrate5503@lemmy.world 96 points 1 year ago

We’re only a few steps away from having to drink our Mountain Dew verification cans.

20
submitted 1 year ago* (last edited 1 year ago) by rehydrate5503@lemmy.world to c/linux@lemmy.ml

Hi,

I’ve been running Linux for some time, currently on Nobara, and I’m happy with it. It’s running on a 1TB NVMe drive, with a second 1TB NVMe drive for extra storage for games, etc., both gen 3.

I find myself running out of room and just picked up 2TB and 1TB NVMe drives, both gen 4, and am wondering what the best partition layout would be. The two 1TB gen 3 drives will be moved to my NAS as a cache pool.

The PC is used for gaming, photo/video editing and web development.

I guess options would be:

  1. OS on the 2TB, the 1TB for extra storage, and call it a day.
  2. OS on the 1TB, the 2TB for extra storage.
  3. Divvy up the 1TB into a partition for /, another for /home, another for /var, and maybe another for games; then on the 2TB have one big partition for games and a scratch disk for videos.
  4. Same as option 3 but with the drives swapped.

What would YOU do in this situation? I’m leaning towards option 3 or a variation thereof, as it makes it relatively easy to hop to a new distro if I want, and gives one big partition for game storage and video scratch.
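For a sense of scale, here’s one hypothetical way option 3 could divide the 1TB drive; the sizes are illustrative guesses on my part, not recommendations:

```python
# Illustrative split of the 1TB (decimal) drive for option 3.
# Sizes in GB are placeholder guesses, not recommendations.
layout_1tb_gb = {
    "/": 80,       # root: distro, packages
    "/home": 300,  # dotfiles, projects, photos in progress
    "/var": 120,   # logs, containers
    "games": 500,  # separate games partition
}
assert sum(layout_1tb_gb.values()) == 1000  # uses the full drive
```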

My mobo unfortunately only supports two NVMe drives (I regret not spending an extra $60-70 on a better one), but I have a USB-C NVMe enclosure that I might use with a spare 1TB drive that will be removed from the NAS.

Any thoughts?

Edit: sorry, forgot to reply. Thank you all for the input; this was great information and I took a deep dive researching some solutions. I ended up keeping it simple and went with option 2: the 1TB as the OS drive and the 2TB as additional storage, with no additional partitions.

74
submitted 1 year ago* (last edited 1 year ago) by rehydrate5503@lemmy.world to c/selfhosted@lemmy.world

Hi everyone,

I’m not sure if this is the right community, but the home networking magazines seem to be pretty dead. I’m a bit green with regard to networking, and am looking for help to see if the plan I’ve come up with will work.

The main image in the post is my current network setup. Basically, the ISP modem/router is just a pass-through, and its 10 Gb port is connected to my Asus router, which has the DHCP server activated. All of my devices, home lab, and smart home devices connect to the Asus router via either Wi-Fi or Ethernet. This works well, but I have many neighbours close by, and with my 30+ Wi-Fi devices I think things aren’t working as well as they could be. One of my main motivations for messing with this is to clean it up and move all possible devices to Ethernet.

The planned new setup is as follows, but I’m not sure if it’s even possible to function this way.

https://i.postimg.cc/7YftSFt6/IMG-9281.jpg

ISP modem/router > 2.5 Gb unmanaged switch > 2.5 Gb-capable devices (NAS, hypervisor, PCs) connect directly here, along with a 1 Gb managed switch to handle DHCP > the Asus router connects to the managed switch to provide Wi-Fi, and the remaining wired devices all connect to the managed switch as well.

Any assistance would be appreciated! Thanks!

Edit: fixed second image url

5

Hello!

I’ve been running Unraid for about two years now, and recently had the thought to use some spare parts and split my server into two based on use. The server was used for personal photos, videos, documents, general storage, projects, AI art, media, a multitude of Docker containers, etc. But it seems wasteful to run hardware 24/7 for things I use once or twice a week or less; there’s just no need for the power use and the wear and tear on the components. So I’m thinking of separating this into one server for storage of photos, videos, and documents, powered on only when needed, and a second server for media that can be accessed 24/7.

Server 1 (photos, videos, documents, AI experiments): 1 x 16TB parity, 2 x 14TB array. i7-6700K, 16GB RAM.

Server 2 (media, Docker): 1 x 10TB parity, 1 x 10TB and 2 x 6TB array. Cheap 2-core Skylake CPU from spare parts, 8GB RAM.

With some testing, server 2 only pulls about 10 W while streaming media locally, which is a huge drop from the 90+ W idle that everything combined was drawing.
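A quick back-of-envelope check of what that idle difference adds up to over a year; the electricity price is an assumption:

```python
# Savings from moving always-on duty from ~90 W combined idle to the
# ~10 W media server. Price per kWh is an assumed placeholder.
idle_before_w = 90
idle_after_w = 10
hours_per_year = 24 * 365

kwh_saved = (idle_before_w - idle_after_w) * hours_per_year / 1000
print(f"{kwh_saved:.0f} kWh/year saved")  # prints "701 kWh/year saved"

price_per_kwh = 0.15  # assumed $/kWh
print(f"~${kwh_saved * price_per_kwh:.0f}/year")  # prints "~$105/year"
```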

I was hoping to use an old laptop I have lying around for the second server instead; it has an 8-core CPU and 16GB RAM, and idles at 5 W. I have a little NVMe-to-SATA adapter that works well, but the trouble is powering the drives reliably.

Anyways, the pros of separating things out: lower power usage, and less wear and tear on the HDDs, so I’ll have to replace them less frequently.

Cons: running and managing two servers.

Ideally, I’d like to run server 1 on the cheap 2-core Skylake CPU (it’s only serving some files, after all), server 2 on the laptop with 8 cores (though that still leaves the issue of powering the drives), and then take the i7-6700K for a spare gaming PC for the family.

The alternative would be to combine everything back into one server and manage the shares better, keep drives online only when needed, etc. But I had issues with this, and would sometimes log into the web UI to find all drives spun up even though nothing was being accessed.

Anyways, I hope all of that makes sense. Any insight or thoughts would be appreciated!


rehydrate5503

joined 1 year ago