This may take a while... (sh.itjust.works)

483749 minutes by my math, or just under a year.

(the gory part is that the source drive is only 256GB with 10% used)
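
For anyone checking that figure, the conversion is straightforward:

```python
minutes = 483_749
days = minutes / 60 / 24
print(f"{days:.0f} days (~{days / 365:.2f} years)")  # ~336 days, just under a year
```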

all 12 comments
[-] ikidd@lemmy.world 4 points 11 months ago

Good god, that's huge. You need to find a backup solution that dedups.

I keep 40 versions of each of my dozens of proxmox guest VMs, and it doesn't use 10% of that much space, including a virtualized NAS. I think it has a dedup ratio of 30x.

[-] Fuck_u_spez_@sh.itjust.works 9 points 11 months ago

I'm realizing reading the comments that not everyone noticed the description. While I easily have over 100TB on other systems, the gory part here is that the source drive is just a small laptop SSD with maybe 25GB used. Duplicati is buggy, that's all.

[-] Dark_Arc@social.packetloss.gg 4 points 11 months ago* (last edited 11 months ago)

Check out Kopia...

Duplicati is really nice in many ways, but when it comes to restore time it takes forever. If you ever need to actually use this backup, Duplicati will definitely fail you.

I'm not saying Kopia won't, but I think it's got a way better chance of working in a timely fashion.


Edit: Just noticed where I am and the post body. Yeah, Duplicati unfortunately isn't in a great maintenance state and has bugs and performance issues, despite otherwise being a nice user experience.

[-] randomaside@lemmy.dbzer0.com 4 points 11 months ago* (last edited 11 months ago)

I tried this exact application and I feel like I learned a lot about backups in the process. I like to think about it like this: (info dump)

There are basically three levels of backup: block level, filesystem level, and file level.

The problem with a file-level backup is that when you have many small files (as opposed to a few large ones), the backup takes much, much longer. To be efficient, a backup needs to detect diffs and only back up what has changed. Avoid rsync-like applications for backup whenever you can.
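
Roughly speaking (a made-up sketch, not any particular tool's code), a file-level tool has to walk and stat every single file just to figure out what changed, which is exactly where millions of small files hurt:

```python
import os

def changed_files(root, last_backup_mtimes):
    """Walk the whole tree and compare mtimes against the previous run.
    With millions of small files, this scan alone can dominate backup time."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime
            if last_backup_mtimes.get(path) != mtime:
                yield path  # candidate for re-upload

# Every file costs at least one stat() per run, even if nothing changed --
# that's the file-level backup tax.
```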

Block-level and filesystem-level backups are always faster. I understand this may not be suitable for everyone, but I always expect whichever backup application I choose to create a file in my backup destination that is an archive filled with immutable snapshots of the filesystem I'm backing up. These days I almost always choose my filesystem based on my backup strategy. Here are some examples of filesystems and the best ways to back them up.

NTFS (Windows): your backups should be done via the Volume Shadow Copy Service (VSS). Almost all mainstream backup software uses this method (Acronis, Veeam, the built-in Windows backup, etc.).
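
For illustration only (a rough sketch that shells out to PowerShell and the Win32_ShadowCopy WMI class from an elevated prompt; real backup software talks to the VSS API directly):

```python
import subprocess

# Ask VSS for a point-in-time snapshot of C: via WMI (needs admin rights).
# Backup tools coordinate with VSS writers properly instead of doing this raw.
subprocess.run(
    [
        "powershell",
        "-Command",
        "(Get-WmiObject -List Win32_ShadowCopy).Create('C:\\', 'ClientAccessible')",
    ],
    check=True,
)
```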

Ext4/XFS: there are a few different ways to peel this banana. You can use LVM to create your snapshots, and some backup applications will install a block-level filesystem driver to perform the backups. A Linux-based NAS like QNAP will often have a built-in backup application that lets you ship snapshots to another data location. This is ideal, since you don't need a third-party application to perform your backups, and your backups can be short and frequent.
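
A minimal sketch of the LVM route (the volume group, mount point, and 5G copy-on-write reserve are placeholders for your own setup):

```python
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Freeze a point-in-time view of the logical volume, back it up, throw it away.
run("lvcreate", "--snapshot", "--size", "5G", "--name", "data_snap", "/dev/vg0/data")
run("mount", "-o", "ro", "/dev/vg0/data_snap", "/mnt/snap")
try:
    run("tar", "-czf", "/backups/data.tar.gz", "-C", "/mnt/snap", ".")
finally:
    run("umount", "/mnt/snap")
    run("lvremove", "-f", "/dev/vg0/data_snap")
```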

ZFS: I love ZFS (I am biased). The key here is to use snapshots as well. Have another ZFS server somewhere and ship your snapshots to it with znapzend or Sanoid/Syncoid (policy-based snapshot management and replication). Backing up ZFS to cloud providers is less than ideal; ZFS usually works better as the backup destination. Backing up the data inside ZFS datasets ends up being an rsync-like process if your backup software can't touch the snapshots, and at that point I would suggest using something that ships its own block-level driver. In the past I've gotten around this by running an LXC container on the host, installing the Acronis agent with its block driver in the container, and mounting my datasets as locations inside the container.
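
The snapshot-shipping idea boils down to this (pool, dataset, and host names are made up; the policy tools above just automate and prune it):

```python
import subprocess

SNAP = "tank/data@2024-01-27"

# Take a snapshot, then stream it to another ZFS box over SSH.
subprocess.run(["zfs", "snapshot", SNAP], check=True)
send = subprocess.Popen(["zfs", "send", SNAP], stdout=subprocess.PIPE)
subprocess.run(["ssh", "backup-host", "zfs", "recv", "backup/data"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()

# Subsequent runs would use `zfs send -i old_snap new_snap` so only blocks
# changed since the last snapshot cross the wire.
```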

Virtualization: always take the backup from the host system. A lot of the time this means you can lift and shift multiple VMs at much greater speed. The guest filesystem won't matter at this point (unless you're using VMware Tools to take a VSS snapshot inside the VM before each backup).
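
On Proxmox, for example, the host-side backup is roughly this (the VM ID and storage name are placeholders):

```python
import subprocess

# Back up guest 100 from the host using a snapshot; no agent needed inside the VM.
subprocess.run(
    ["vzdump", "100", "--mode", "snapshot", "--compress", "zstd", "--storage", "backups"],
    check=True,
)
```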

Use Changed Block Tracking whenever you can.

I hope this helps you or someone else reading this. Thanks.

Edit: Just realized it says 128TB in the picture and not GB. You're gonna need the Lord's help. That, or a 10Gb link to another seriously fast NAS on your network that will let you sustain writes above 1 GB/s. That would only take about a day.
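
Back of the envelope (assuming you can actually hold 10GbE line rate, roughly 1.25 GB/s):

```python
terabytes = 128
rate_gb_per_s = 1.25                     # ~10GbE line rate
hours = terabytes * 1000 / rate_gb_per_s / 3600
print(f"{hours:.1f} hours")              # ~28.4 hours
```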

[-] TimeNaan@lemmy.world 2 points 10 months ago

Duplicati is kinda terrible. I used it for about two years as my main backup; it was very unreliable and slow.

[-] Fuck_u_spez_@sh.itjust.works 1 points 10 months ago

Agreed. I ended up going with Timeshift on this machine and it seems solid enough for my purposes.

[-] TimeNaan@lemmy.world 1 points 10 months ago* (last edited 10 months ago)

I ended up with Borg with Vorta GUI, it's fantastic.

[-] Fuck_u_spez_@sh.itjust.works 1 points 10 months ago

I'll check it out, thanks.

[-] MonkderZweite@feddit.ch 2 points 11 months ago

tar-ing them beforehand might be faster.
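
Something like this (a sketch with Python's tarfile; the point is the backup tool then only tracks one big file instead of millions of small ones):

```python
import tarfile

# Bundle a directory full of small files into a single archive first,
# so the backup tool only has to track one large file.
with tarfile.open("/backups/home.tar.gz", "w:gz") as tar:
    tar.add("/home/user", arcname="home")
```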

[-] GlitzyArmrest@lemmy.world 2 points 11 months ago

Duplicacy is a breath of fresh air compared to Duplicati. I never felt like I could trust Duplicati.

this post was submitted on 27 Jan 2024
73 points (97.4% liked)

Software Gore
