[-] greyfox@lemmy.world 27 points 3 weeks ago

Named volumes are often the default because there is no chance of them conflicting with other services or containers running on the system.

Say you deployed two different docker compose apps each with their own MariaDB. With named volumes there is zero chance of those conflicting (at least from the filesystem perspective).

This also makes cleanup easier. The app's documentation can just say "docker compose down -v" and you are done, instead of listing a bunch of directories that need to be removed.

Those lingering directories can also cause problems for users who want a clean start after their app breaks. With a bind mount, that broken database schema won't have been deleted for them when they bring the services back up.

All that said, I very much agree that when you go to deploy a docker service you should consider changing the named volumes to standard bind mounts for a couple of reasons.

  • When running production applications I don't want the volumes to be so easy to clean up. A little extra protection against accidental deletion is handy.

  • The default location for named volumes doesn't work well with more advanced partitioning strategies, e.g. if you want your database volume on a different partition than your static web content.

  • This one is older and maybe more of a personal preference at this point, but back before the docker overlay2 storage driver had matured we used the btrfs driver instead, and occasionally Docker would break and we would need to wipe out the entire /var/lib/docker btrfs filesystem. So I just personally want to keep anything persistent out of that directory.

So basically application writers should use named volumes to simplify the documentation/installation/maintenance/cleanup of their applications.

Systems administrators running those applications should know and understand the docker compose file well enough to change those settings and make them production ready for their environment. Reading through it and making those changes ends up being part of learning how the containers are structured in the first place.
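For illustration, a minimal sketch of that kind of edit (the service name, image tag, and host path are made up, not from any particular app):

```yaml
services:
  db:
    image: mariadb:11
    volumes:
      # As shipped (named volume, managed by Docker under /var/lib/docker/volumes,
      # removed by `docker compose down -v`):
      #- db_data:/var/lib/mysql
      # Production tweak (bind mount on a path/partition you control,
      # left alone by `down -v`):
      - /srv/myapp/mariadb:/var/lib/mysql

# No longer needed once the named volume is replaced:
#volumes:
#  db_data:
```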

[-] greyfox@lemmy.world 24 points 2 months ago

Btrfs is a copy-on-write (COW) filesystem, which means that whenever you modify a file the data can't be modified in place. Instead a new block is written elsewhere, and then a single atomic operation flips the metadata to point at that new block as the location of the data.

This is a really good thing for protecting your data from things like power outages or system crashes, because the data on disk is always in a good state. Either the update happened or it didn't; there is never any in-between.

While COW is good for data integrity, it isn't always good for speed. If you are doing lots of updates that are smaller than a block, the filesystem first has to read the rest of the block and then seek to a new location and write out the whole new block. On SSDs this isn't an issue, but on HDDs it can slow things down and fragment your filesystem considerably.

Btrfs has a defragmentation utility though, so fragmentation is a fixable problem. If you were using ZFS there would be no way to reverse that fragmentation.
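For example, something along these lines (the mount point is just a placeholder):

```sh
# Recursively defragment everything under a btrfs mount point (needs root).
btrfs filesystem defragment -r /mnt/data
```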

Other filesystems like ext4/xfs are "journaling" filesystems. Instead of writing new blocks or updating each block immediately, they keep the changes in memory and write them to a "journal" on the disk. When there is time, those changes from the journal are flushed out to make the actual changes happen. Writing the journal is a sequential operation, which makes it more efficient on HDDs. In the event that the system crashes, the filesystem replays the journal to get back to the latest consistent state.

ZFS has a journal equivalent called the ZFS Intent Log (ZIL). You can put the ZIL on a fast SSD (as a separate log device) while the data itself lives on your HDDs. This also helps with ZFS's fragmentation issues, because ZFS writes incoming writes to the ZIL and then flushes them to the main disks every few seconds, which means fewer, larger writes to the HDDs.
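Adding that separate log device is a one-liner; a hypothetical example (pool and device names are made up):

```sh
# Attach a fast SSD to the pool as a dedicated log device (SLOG),
# so the ZIL lives on flash instead of the spinning disks.
zpool add tank log /dev/nvme0n1

# Or mirrored, if you don't want the log to be a single point of failure:
# zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
```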

Another downside of COW is that, because the filesystem is assumed to be so good at preventing corruption, in some extremely rare cases where corruption does get written to disk you can lose the entire filesystem. There are lots of checks in software to prevent that from happening, but occasionally hardware issues may let the corruption slip past.

This is why anyone running ZFS/btrfs for their NAS is recommended to use ECC memory. A random bit flip in RAM might mean the wrong data gets written out, and if that data is part of the filesystem's own metadata the entire filesystem may be unrecoverable. This is exceedingly rare, but it is a risk.

Most traditional filesystems, on the other hand, were built assuming that they would have to clean up corruption from system crashes, etc. So they have fsck tools that can go through and recover as much as possible when that happens.

Lots of other posts here are covering the other features that make btrfs a great choice. If you were running a high-performance database, a journaling filesystem would likely be faster, but maybe not by much, especially on SSDs. For an end-user system, the snapshots/file checksumming/etc. are far more important than a tiny bit of performance. As for the potential corruption issues if you are lacking ECC: backups are the proper mitigation (and as of DDR5, every stick has at least on-die ECC).

[-] greyfox@lemmy.world 6 points 2 months ago

They stick 9.81 in for acceleration, so that is presumably for gravity.

[-] greyfox@lemmy.world 8 points 6 months ago* (last edited 6 months ago)

Slowing down and then freezing sure sounds like an out-of-memory situation, so to add to your point, they might actually want less swap. Sometimes you would rather hit the OOM killer sooner instead of waiting for swap to fill up.

Ideally, log in via SSH from another machine to figure out what is using the memory (hopefully the system stays responsive enough for SSH). If it is your critical programs causing the problem, then you should consider a memory upgrade.
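Once you are in over SSH, something like this gives a quick picture (standard tools, nothing specific to their setup):

```sh
# Overall memory and swap usage:
free -h

# Biggest memory consumers, largest first:
ps aux --sort=-%mem | head -n 15

# See whether the OOM killer has already been firing:
dmesg | grep -i 'out of memory'
```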

[-] greyfox@lemmy.world 6 points 6 months ago

Contrary to a lot of posts that I have seen, I would say ZFS isn't pointless with a single drive. Even if you can't repair corruption with a single drive, knowing something is corrupt in the first place is arguably even more important (you have backups to restore from, right?).

And ZFS still has a lot of features that are useful regardless, like snapshots, compression, reflinks, and send/receive, and COW means no concerns about data loss during a crash.

Btrfs can do all of this too, and I believe it is better about low-memory systems, but since you already have ZFS on your NAS you unlock a lot of possibilities by keeping them the same.

E.g. if you keep your T110ii running ZFS, you can use tools like syncoid to periodically push snapshots from the Optiplex to the T110.

That way your Optiplex can be a workhorse, and your NAS can keep the backup+periodic snapshots of the important data.

I don't have any experience with TrueNAS in particular, but it looks like syncoid works with it. You might need to make sure that the pool versions/feature flags match on both ends for send/receive to work.
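A hypothetical syncoid invocation (dataset, pool, and host names are made up) would look something like:

```sh
# Push the Optiplex's dataset and its snapshots to the NAS over SSH.
syncoid tank/vmdata root@truenas:backup/vmdata

# sanoid (from the same package) can manage the periodic snapshot/pruning
# policy on the source so there is always something recent to send.
```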

Alternatively, keep that data on an NFS mount. The SSD in the Optiplex would just be for the base OS and wouldn't hold any data that can't be thrown away. The disadvantage is that your Optiplex now relies on a lot more to keep running (networking + NAS must be online all the time).

If you need HA for the VMs you likely need distributed storage for the VMs to run on. No point in building an HA VM solution if it just moves the single point of failure to your NAS.

Personally I like Harvester, but the minimum requirements are probably beyond what your hardware can handle.

Since you are already on TrueNAS Scale have you looked at using TrueNAS Scale on the Optiplex with replication tasks for backups?

[-] greyfox@lemmy.world 9 points 6 months ago

https://spotifyshuffler.com/

You can use sites like this to randomize a playlist. It can either shuffle the playlist in place or create a randomized copy if you want to keep the original.

I usually start the playlist, turn on their crappy shuffle to get me to a random position in the randomized playlist, then disable their ~~profitability maximizer~~ shuffle.

[-] greyfox@lemmy.world 11 points 7 months ago

If you are just looking to repurpose an old device for around-the-house use and it won't ever leave your home network, then the simplest method is to set a static IP address on the device and leave the default gateway empty. That will prevent it from reaching anything outside the local subnet.
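If the device happens to be a Linux box with NetworkManager, that looks something like the following (connection name and addresses are placeholders; on a phone or TV you would do the same thing in its own network settings):

```sh
# Static address on the LAN, DNS pointed at a local server, and no gateway set,
# so the device has no route off its own subnet.
nmcli con mod "Wired connection 1" ipv4.method manual \
  ipv4.addresses 192.168.1.50/24 ipv4.dns 192.168.1.1
nmcli con up "Wired connection 1"
```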

If you have multiple subnets that the device needs to access you will need a proper firewall. Make sure that the device has a DHCP reservation or a static IP and then block outgoing traffic to the WAN from that IP while still allowing traffic to your local subnets.
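On a Linux-based router running nftables, the firewall side could be sketched roughly like this (the interface name and address are placeholders, and it assumes an inet filter table with a forward chain already exists):

```sh
# Drop anything from the restricted device that would be forwarded out
# the WAN interface; traffic to other local subnets is unaffected.
nft add rule inet filter forward ip saddr 192.168.1.50 oifname "eth0" drop
```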

If it is a phone, who knows what that modem might be doing if there isn't a hardware switch for it; you can't expect much privacy while the modem is active. But like the other poster mentioned, a private DNS server that only has records for your local services would at least prevent apps from reaching out, as long as they aren't smart enough to fall back to a hard-coded IP address if DNS fails.

A VPN for your phone with firewall rules on your router that prevent your VPN clients from reaching the WAN would hopefully prevent any sort of fallback like that.

[-] greyfox@lemmy.world 30 points 8 months ago

If you are accessing your files through Dolphin on your Linux device, this change has no effect on you. In that case Synology is just sharing files and doesn't know or care what kind of files they are.

This change mostly affects people who were using Synology's built-in video app to stream videos. I assume Plex is much more common on Synology, and I don't believe anything changed with Plex's h265 support.

If you were using the built-in Synology video app and have objections to Plex, give Jellyfin a try. It should handle h265 and doesn't require a purchase like Plex does to unlock features like the mobile apps.

Linux isn't dropping any codecs and should be able to handle almost any media you throw at it. Codec support depends on what app you are using, and most Linux apps use ffmpeg to do that decoding. As far as I know Debian hasn't dropped support for h265, but even if they did you could always compile your own ffmpeg libraries with it re-enabled.

> How can I most easily search my NAS for files needing the removed codecs?

The mediainfo command is one of the easiest ways to do this on the command line. It can tell you what video/audio codecs are used in a file.
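For example (the library path and extensions are just placeholders):

```sh
# Show the video codec of a single file:
mediainfo --Inform="Video;%Format%" movie.mkv

# List every file in a directory tree whose video stream is HEVC (h265):
find /volume1/media -type f \( -name '*.mkv' -o -name '*.mp4' \) \
  -exec sh -c '[ "$(mediainfo --Inform="Video;%Format%" "$1")" = "HEVC" ] && echo "$1"' _ {} \;
```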

> With Linux and Synology DSM both dropping codecs, I am considering just taking the storage hit to convert to h.264 or another format. What would you recommend?

To answer this you need to know the lowest common denominator of codec support across everything you want to play back on. If you are only worried about playback on your Linux machine with your 1080s, then you already fully support h265 and you should not convert anything. Re-encoding from one codec to another is lossy, so it is best to leave the files as they are or you will lose quality.

If you have other hardware that can't support h265, h264 is probably the next best. Almost any hardware in the last 15 years should easily handle h264.

> When it comes to thumbnails for a remote filesystem like this, are they generated and stored on my PC, or will the PC save them to the folder on the NAS where other programs could use them?

They are generated and stored locally; Dolphin keeps them in ~/.cache/thumbnails on your local system, not on the NAS.

[-] greyfox@lemmy.world 69 points 8 months ago

It also doesn't say that the line on the bottom is straight, so we have no idea whether the angles at that middle vertex add up to 180 degrees. I would say it is unsolvable.

[-] greyfox@lemmy.world 19 points 10 months ago

We asked our Dell sales guy this question years ago, back when the ports had been removed one year and quickly added back the next.

They are there mostly for government builds and other places with high security requirements. Usually the requirement is to prevent any unauthorized USB devices from being plugged in, and with the PS/2 mouse and keyboard ports they can disable the USB ports entirely in the BIOS.

[-] greyfox@lemmy.world 8 points 1 year ago

It's presumably to give you legal ground to sue if some corporation scrapes Lemmy content and uses it to train AI, or whatever other commercial purpose.

Hopefully if enough people do it they would consider the dataset too risky to use. They could try to filter out comments that carry the license statement, but if any slip through they open themselves up to lawsuits.

That would force them to instead pay for content from somewhere with an EULA that forces users to hand over rights regardless of what they put in their posts (e.g. Reddit).

[-] greyfox@lemmy.world 6 points 1 year ago

That probably would work well for those closer to the equator.

But for those in the 100-minute zone of this map, that would mean going to work at 6:30am in the summer (assuming we are using civil twilight as "sunrise") and 9:30am in the winter, which is a much bigger swing than daylight saving time puts on us, but at least it is a gradual one.

For those above the Arctic Circle, they just work 24/7 for a couple of weeks in the summer but get a similar time off in the winter ;)
