Expectations: Disaster recovery covered
1. Back up and restore the entire Docker node from one host to another (just the necessary data).
2. For containers created from:
- the Syno shell or the Docker package
- Portainer/stacks as docker-compose
3. Syno docker environment is stored here:
- images: /volume1/@docker/image
- volumes: /volume1/@docker/volumes
- networks: /volume1/@docker/network
- containers: /volume1/@docker/containers
4. There are also mounted volumes, depending on your architecture, e.g. /volume1/docker/<path> to the mounted volumes. This is the simplest part of the problem; I don't need a discussion about it.
-----------------
Considerations:
A. You can't use the Synology Snapshot Replication package for these directories (the Syno /volume1/@docker tree).
B. rsync of them over SSH, via a defined and scheduled task, is the simplest method (archive mode, which preserves all timestamps and permissions; compressed; human-readable; recursive). Useful for the backup (e.g. x times per day) and also for the restore (in case of disaster).
Bash:
rsync -azh -e ssh /volume1/@docker/ <user>@<backup host>:<target path>/
(Note: -a already implies recursive; -e expects the remote shell as its argument, so the flags can't be glued together as "-azher".)
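The scheduled task can be sketched as a small script. The user, host, and target path below are assumptions (placeholders), not part of any real setup - adjust them to your own environment:

```shell
#!/bin/sh
# Sketch of the scheduled rsync backup task for the Syno Docker root.
# All remote names here are placeholders - adapt to your setup.
backup_docker_root() {
  target="$1"   # e.g. backupuser@backup-nas:/volume1/backup/@docker/
  # -a archive (recursive, keeps permissions and timestamps),
  # -z compress in transit, -h human-readable, -e ssh tunnels over SSH
  rsync -azh -e ssh /volume1/@docker/ "$target"
}

# Wire this into a DSM scheduled task, e.g.:
# backup_docker_root backupuser@backup-nas:/volume1/backup/@docker/
```

The trailing slash on the source matters: with it, rsync copies the *contents* of /volume1/@docker into the target directory instead of creating a nested @docker/@docker.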
C. You can also create a backup of settings from Portainer, but it mainly saves your Compose files, the Portainer DB, the certificates used, and the setup of the connected Docker nodes. When you have a backup of the entire Portainer container (via rsync), you have it all in your pocket.
So, there is also an open discussion about:
- Why back up images at all when you can pull from the repo? The answer lies in version problems (e.g. InfluxDB): the current image in the repo may no longer match the data your container created.
- When container-by-container backup/restore is chosen instead of the entire Docker environment, the problem is how to script an exact image + volume + mapped-volume backup into a single file, e.g. a TAR.
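For the per-container case, the first step of any such script is finding out what exactly belongs to the container. A possible sketch using `docker inspect` (the function name is mine, not an established tool):

```shell
#!/bin/sh
# List the pieces a single-container backup must cover:
# the image, the named volumes, and the bind-mounted host paths.
list_backup_parts() {
  c="$1"  # container name or ID
  echo "image:   $(docker inspect -f '{{.Config.Image}}' "$c")"
  echo "volumes: $(docker inspect -f \
    '{{range .Mounts}}{{if eq .Type "volume"}}{{.Name}} {{end}}{{end}}' "$c")"
  echo "binds:   $(docker inspect -f \
    '{{range .Mounts}}{{if eq .Type "bind"}}{{.Source}} {{end}}{{end}}' "$c")"
}

# Usage on the Docker host: list_backup_parts <container name>
```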
My way - Backup of the defined container:
First - 'docker export' doesn't work here, because per the Docker documentation: "docker export does not export the contents of volumes associated with the container". So 'docker save' is the better way. I tested a few different approaches, and this is a merge of them:
1. You need to stop the container:
Bash:
docker stop <container name or ID>
2. You need to create a new image from the existing container (docker commit produces an image, not a container):
Code:
docker commit <container name> <new image name>
3. Then you need to save the new image to a file, better as a TAR:
Code:
docker save -o <new image name.tar> <new image name>
And here is the hardest point, where I don't have a useful solution so far: how to keep the mapped volumes together with the container IDs?
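I don't have a final answer either, but one commonly used workaround (a sketch, not a tested part of my procedure; `backup_volume` is my own name, not a Docker command) is to tar the contents of each named volume through a throwaway Alpine container:

```shell
#!/bin/sh
# Tar the contents of one named volume by mounting it read-only
# into a disposable Alpine container (sketch, not verified on DSM).
backup_volume() {
  vol="$1"   # named volume, e.g. influxdb_data
  out="$2"   # absolute host path of the target TAR
  docker run --rm \
    -v "$vol":/data:ro \
    -v "$(dirname "$out")":/backup \
    alpine tar czf "/backup/$(basename "$out")" -C /data .
}

# Find the volume names first via docker inspect (.Mounts), then e.g.:
# backup_volume influxdb_data /volume1/backup/volumes.tar
```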
4. Both files (images.tar and volumes.tar) need to be transferred to the backup environment.
Done. Tested only with the image so far. Even better: you can use this backup for migration purposes (from the source host to another host):
Bash:
docker load -i <new image name.tar>
then you can create a new container.
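The restore side can be sketched the same way (again a sketch with placeholder names, not tested end-to-end): load the image, refill the named volume from its TAR, then recreate the container:

```shell
#!/bin/sh
# Restore sketch for the target host. All names are placeholders.
restore_container() {
  image_tar="$1"   # e.g. /volume1/backup/new-image.tar
  vol="$2"         # named volume to recreate, e.g. influxdb_data
  vol_tar="$3"     # e.g. /volume1/backup/volumes.tar
  docker load -i "$image_tar"
  docker volume create "$vol"
  # unpack the volume TAR into the freshly created volume
  docker run --rm \
    -v "$vol":/data \
    -v "$(dirname "$vol_tar")":/backup:ro \
    alpine tar xzf "/backup/$(basename "$vol_tar")" -C /data
  # then: docker run -d --name <container name> -v "$vol":<mount path> <image>
}
```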