[-] manifesto7473@lemmy.ml 1 points 6 months ago* (last edited 6 months ago)

It is fine. You can use the duperemove tool (or bees) to find and remove duplicates.

https://btrfs.readthedocs.io/en/latest/Deduplication.html

So it is out-of-band deduplication and has to be done manually.
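For example, a minimal run could look like this (the path is just a placeholder, and the exact flags may differ between duperemove versions):

duperemove -dhr /mnt/data

Here -d actually submits the dedupe requests, -r recurses into subdirectories, and -h prints sizes in human-readable form. Without -d it only reports what it would deduplicate.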

Also, by default, cp and most file managers make reflink copies (data blocks are only copied when they are modified).
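If you want to request a reflink copy explicitly, it would look something like this (the filenames are just examples):

cp --reflink=always bigfile.img bigfile-copy.img

With --reflink=always, cp fails instead of silently falling back to a full copy when the filesystem cannot share the blocks.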

[-] manifesto7473@lemmy.ml 1 points 6 months ago

If I remember correctly, defrag will always un-share (duplicate) reflinked files.

https://btrfs.readthedocs.io/en/latest/Defragmentation.html

Defragmentation does not preserve extent sharing, e.g. files created by cp --reflink or existing on multiple snapshots. Due to that the data space consumption may increase.

[-] manifesto7473@lemmy.ml 6 points 6 months ago

It only applies to newly written data.

To compress the existing data, you would have to defragment the filesystem again, e.g. with btrfs filesystem defragment -r -v -czstd /, where zstd is the compression algorithm and / is the path to process recursively. With this command the default compression level is used, which is 3 for zstd.

Be careful: defragmenting a btrfs filesystem can break reflinks and snapshot sharing, and thus duplicate the data.

As for the mount point, if you decided to use the zstd algorithm with level 1 compression, just add compress=zstd:1 or compress-force=zstd:1 to the mount options (in fstab or while mounting manually).
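As a rough sketch, an fstab entry could look like this (the UUID, mount point, and the other options are placeholders):

UUID=xxxx-xxxx  /data  btrfs  defaults,compress=zstd:1  0  0

Or when mounting manually:

mount -o compress=zstd:1 /dev/sdX /data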

[-] manifesto7473@lemmy.ml 1 points 11 months ago

No, but according to this Phoronix article, they will fix the RAID56 issues soon:

The support for RAID56 is in development and will eventually fix the problems with the current implementation. This is a backward incompatible feature and has to be enabled at mkfs time.
