8 points, submitted 1 week ago by trilobite@lemmy.ml to c/selfhost@lemmy.ml

I have 2 servers, each running a Debian VM. The old VM was one of the first I installed several years ago when I knew little; it's messed up and has little space left. It's running on TrueNAS Scale and has a couple of Docker apps that I'm very dependent on (Firefly, Hammond). I want to move the datasets for these Docker apps to a newer VM running on a Proxmox server: a Debian 13 VM with loads of space. What are my options for moving the data, given that neither Firefly nor Hammond has proper export/import functions? I could migrate the old VM, but that wouldn't resolve my space issue. Plus it's Debian 10 and it would take a lot to bring it up to Trixie.

top 11 comments
[-] just_another_person@lemmy.world 6 points 1 week ago

The first rule of containers is that you do not store any data in containers.

The second rule of containers is that you run them from a versioned config with proper volumes and tagging. Always.

If you obey these rules, then it's as simple as moving the volumes to another host and starting your containers. They're fully portable that way.
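
In practice that means something like this (image name and paths are made up), with the tag pinned and all state living on the host:

services:
  app:
    image: my_image:1.2.3    # a pinned tag, not :latest
    volumes:
      - ./data:/path/in/container    # state on the host, not inside the container

Keep that compose file in version control and the whole stack is reproducible on any host.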

[-] trilobite@lemmy.ml 3 points 1 week ago

The first rule of containers is that you do not store any data in containers.

Do you mean they should be bind mounts? From here, a bind mount should look like this:

version: '3.8'

services:
  my_container:
    image: my_image:latest
    volumes:
      - /path/on/host:/path/in/container

So referring to my Firefly compose above, should I simply be able to copy over /var/www/html/storage/upload for the main app data, and can the database stored in /var/lib/mysql just be copied over? But then why does my local folder not have any storage/upload folders?

user@vm101:/var/www/html$ ls
index.html

[-] mhzawadi@lemmy.horwood.cloud 3 points 1 week ago

Assuming TrueNAS does NFS, I would mount the old Docker data into the new Docker VM. Stop all running containers and copy the data (copy rather than move, so that you have a backup should you need to revert).

Make sure all containers are defined in compose files, and update them for the new data location.
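
Something along these lines on the new VM (the IP and paths are placeholders):

# on the new Debian VM
sudo apt install nfs-common
sudo mkdir -p /mnt/old-docker
sudo mount -t nfs 192.168.1.10:/mnt/tank/docker /mnt/old-docker
# with the old containers stopped, copy (don't move) the data:
cp -a /mnt/old-docker/. /srv/docker/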

[-] krolden@lemmy.ml 2 points 1 week ago
[-] thirdBreakfast@lemmy.world 2 points 1 week ago

I'm not clear from your question, but I'm guessing you're talking about data stored in Docker volumes? (If they are bind mounts you're all good - you can just copy them.) The compose files I found online for Firefly III use volumes, but Hammond looked like bind mounts. If you're not sure, post your compose files here with the secrets redacted.
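
One quick way to check which kind you have (my_container is a placeholder for your container name):

docker inspect my_container --format '{{ json .Mounts }}'
# "Type": "volume" means a named volume; "Type": "bind" means a bind mount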

To move data out of a Docker volume, a common way is to mount the volume into a temporary container to copy it out. Something like:

docker run --rm \
  -v myvolume:/from \
  -v "$(pwd)":/to \
  alpine sh -c "cd /from && tar cf /to/myvolume.tar ."

Then on the machine you're moving to, create the new empty Docker volume and do the temporary copy back in:

docker volume create myvolume
docker run --rm \
  -v myvolume:/to \
  -v "$(pwd)":/from \
  alpine sh -c "cd /to && tar xf /from/myvolume.tar"

Or, even better, just untar it into a data directory under your compose file and bind mount it so you don't have this problem in future. Perhaps there's some reason why Docker volumes are good, but I'm not sure what it is.
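
i.e. instead of the named volume, something like this (the directory name is just a suggestion):

services:
  app:
    volumes:
      - ./data/upload:/var/www/html/storage/upload    # plain directory next to the compose file

Then moving the app to another host is just copying that directory along with the compose file.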

[-] trilobite@lemmy.ml 1 points 1 week ago

Here is my docker compose file. I think I used the standard file the developer ships, simply because I was keen to get Firefly going without fully understanding the complexity of Docker volume storage.

# The Firefly III Data Importer will ask you for the Firefly III URL and a "Client ID".
# You can generate the Client ID at http://localhost/profile (after registering)
# The Firefly III URL is: http://app:8080/
#
# Other URLs will give 500 | Server Error
#

services:
  app:
    image: fireflyiii/core:latest
    hostname: app
    container_name: firefly_iii_core
    networks:
      - firefly_iii
    restart: always
    volumes:
      - firefly_iii_upload:/var/www/html/storage/upload
    env_file: .env
    ports:
      - '84:8080'
    depends_on:
      - db
  db:
    image: mariadb:lts
    hostname: db
    container_name: firefly_iii_db
    networks:
      - firefly_iii
    restart: always
    env_file: .db.env
    volumes:
      - firefly_iii_db:/var/lib/mysql

  importer:
    image: fireflyiii/data-importer:latest
    hostname: importer
    restart: always
    container_name: firefly_iii_importer
    networks:
      - firefly_iii
    ports:
      - '81:8080'
    depends_on:
      - app
    env_file: .importer.env

  cron:
    #
    # To make this work, set STATIC_CRON_TOKEN in your .env file or as an environment variable and replace REPLACEME below
    # The STATIC_CRON_TOKEN must be *exactly* 32 characters long
    #
    image: alpine
    container_name: firefly_iii_cron
    restart: always
    command: sh -c "echo \"0 3 * * * wget -qO- http://app:8080/api/v1/cron/XTrhfJh9crQGfGst0OxoU7BCRD9VepYb;echo\" | crontab - && crond -f -L /dev/stdout"
    networks:
      - firefly_iii
volumes:
   firefly_iii_upload:
   firefly_iii_db:

networks:
  firefly_iii:
    driver: bridge
[-] thirdBreakfast@lemmy.world 1 points 1 week ago

Great. There are two volumes there: firefly_iii_upload and firefly_iii_db.

You'll definitely want to docker compose down first (to ensure the database is not being updated), then:

docker run --rm \
  -v firefly_iii_db:/from \
  -v "$(pwd)":/to \
  alpine sh -c "cd /from && tar cf /to/firefly_iii_db.tar ."

and

docker run --rm \
  -v firefly_iii_upload:/from \
  -v "$(pwd)":/to \
  alpine sh -c "cd /from && tar cf /to/firefly_iii_upload.tar ."

Then copy those two .tar files to the new VM. Then create the new empty volumes with:

docker volume create firefly_iii_db
docker volume create firefly_iii_upload

And untar your data into the volumes:

docker run --rm \
  -v firefly_iii_db:/to \
  -v "$(pwd)":/from \
  alpine sh -c "cd /to && tar xf /from/firefly_iii_db.tar"

docker run --rm \
  -v firefly_iii_upload:/to \
  -v "$(pwd)":/from \
  alpine sh -c "cd /to && tar xf /from/firefly_iii_upload.tar"

Then make sure you've manually brought over the compose file and the three env files (.env, .db.env, .importer.env), and you should be able to docker compose up and be in business again. Good choice with Proxmox in my opinion.
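
Copying it all over can be as simple as (hostname and target directory are placeholders):

scp firefly_iii_db.tar firefly_iii_upload.tar \
    docker-compose.yml .env .db.env .importer.env \
    user@new-vm:/home/user/firefly/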

[-] db_geek@norden.social 3 points 6 days ago

@thirdBreakfast @trilobite 🤔
Interestingly, handling of volumes with podman is much easier:
podman volume export myvol --output myvol.tar
podman volume import myvol myvol.tar
https://docs.podman.io/en/latest/markdown/podman-volume-export.1.html

I also checked the docker volume CLI documentation and there is no export command available like there is for podman.
https://docs.docker.com/reference/cli/docker/volume/

[-] trilobite@lemmy.ml 1 points 6 days ago

Interestingly, I did something similar with Linkwarden, where I installed the datasets in /home/user/linkwarden/data. The damn thing caused my VM to run out of space because it started downloading pages for the 4000 bookmarks I had. It went into crisis mode so I stopped it. I then created a dataset on my TrueNAS Scale machine and NFS-exported it to the VM on the same server. I simply cp -R'd to the new NFS mountpoint, edited the yml file with the new paths, and voila! It seems to be working. I know that some Docker containers don't like working off an NFS share, so we'll see. I wonder how well this will work when the VM is on a different machine, as there is then a network cable, a switch, etc. in between. If for any reason the NAS goes down, the Docker containers on the Proxmox VM will be crying as they'll lose the link to their volumes? Can anything be done about this? I guess it can never be as resilient as having the VM and NAS on the same machine.
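
For what it's worth, you can also let Docker manage the NFS mount per volume; a sketch (the IP and export path are placeholders), where the soft option makes I/O return errors instead of hanging forever if the NAS disappears:

volumes:
  linkwarden_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,soft"
      device: ":/mnt/tank/linkwarden"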

[-] thirdBreakfast@lemmy.world 1 points 6 days ago

I run nearly all my Docker workloads with their data just in the home directory of the VM (or LXC actually, since that's how I roll) I'm running them in, but a few have data on my separate NAS via an NFS share - so through a switch etc. - with no problems, just slowish.

[-] InEnduringGrowStrong@sh.itjust.works 1 points 1 week ago* (last edited 1 week ago)

Whatever you do, make sure you have working backups first.

I imagine you could copy the Docker volumes over, but that's more work than if they're bind mounts, in which case you can just copy the corresponding directories on the host. Use scp or rclone or whatever to copy the files over.
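
For example (paths and hostname are placeholders); rsync -a keeps the ownership and permissions the containers expect:

rsync -a /srv/docker/ root@new-vm:/srv/docker/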
