I was wondering whether I am backing up my containers' volumes correctly.
Info
System: Synology NAS with the Docker app
What I use to build my containers: docker-compose
Process
This is how I do it:
First I bring my containers up with docker-compose up -d:
docker-compose.yaml

...
  nginx:
    image: 'jlesage/nginx-proxy-manager:latest'
    container_name: Nginx-jlesage
    restart: always
    environment:
      TZ: ${GB_TZ}
    volumes:
      - '/volume1/docker/nginx/:/config:rw'
    network_mode: host
...
I have around 10-15 containers (they are apps or databases).
Every container has its own folder in my /volume1/docker/ folder.
Then, to back up the container data, I simply stop every running container and copy the /docker/ folder somewhere else (not on my NAS).
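In shell terms, the backup step is roughly this (the destination path is just a placeholder for wherever the off-NAS copy goes):

cd /volume1/docker
docker-compose stop                               # stop every container in the stack
mkdir -p /path/to/external/backup/docker
cp -a /volume1/docker/. /path/to/external/backup/docker/
docker-compose start                              # bring everything back up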
Restore data
If I ever need to restore one of my containers' data (dead NAS or corrupted files), I would delete the container, delete its data, copy my backup back to the same place, and then rebuild the container with docker-compose up -d.
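Roughly, using the nginx service above as an example (the backup path is again a placeholder):

cd /volume1/docker
docker-compose stop nginx                         # stop the broken container
docker-compose rm -f nginx                        # delete it
rm -rf /volume1/docker/nginx                      # delete its data
cp -a /path/to/external/backup/docker/nginx /volume1/docker/nginx
docker-compose up -d                              # rebuild the container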
Conclusion
I've seen so many complicated ways to do this (like this one: How can I backup a Docker-container with its data-volumes?) and I can't figure out whether my solution is horrible or not, and whether my data is in danger.
I tried deleting my docker folder and then restoring everything using my method; it worked, and I'm confused and scared that I'm missing something.
Related
I have a Docker container built from the debian:latest image.
I need to execute a bash script that will start several services.
My host machine is Windows 10 and I'm using Docker Desktop. I've found the configuration files in the
docker-desktop-data WSL2 drive under data\docker\containers\<container_name>
There are 2 config files there:
config.v2.json and hostconfig.json
I've edited the first of them and replaced:
"Entrypoint":null with "Entrypoint":["/bin/bash", "/opt/startup.sh"]
I did this while the container was down, but when I restarted it the script was not executed, and when I opened the config.v2.json file again the Entrypoint was set back to null.
I need to run this script at every container start.
An additional strange thing is that this container doesn't have any volume appearing in Docker Desktop. I could check out this container and start another one, but I need to preserve the current state of this container (installed packages, files, DB content). How can I change the entrypoint, or run the script some other way?
Is there any way to export the container to an image along with its configuration? I need to expose several ports and run the startup script. Is there any way to make every new container created from the image exported from the current container expose the same ports and run the same startup script?
Docker's typical workflow involves containers that only run a single process, and are intrinsically temporary. You'd almost never create a container, manually set it up, and try to persist it; instead, you'd write a script called a Dockerfile that describes how to create a reusable image, and then launch some number of containers from that.
It's almost always preferable to launch multiple single-process containers rather than to try to run multiple processes in a single container. You can use a tool like Docker Compose to describe the multiple containers and record the various options you'd need to start them:
# docker-compose.yml
# Describe the file version. Required with the stable Python implementation
# of Compose. Most recent stable version of the file format.
version: '3.8'

# Persistent storage managed by Docker; will not be accessible on the host.
volumes:
  dbdata:

# Actual containers.
services:
  # The database.
  db:
    # Use a stock Docker Hub image.
    image: postgres:15
    # Persist its data.
    volumes:
      - dbdata:/var/lib/postgresql/data
    # Describe how to set up the initial database.
    environment:
      POSTGRES_PASSWORD: passw0rd
    # Make the container accessible from outside Docker (optional).
    ports:
      # First port: any available host port.
      # Second port: MUST be the standard PostgreSQL port 5432.
      - '5432:5432'

  # Reverse proxy / static asset server.
  nginx:
    image: nginx:1.23
    # Get static assets from the host system.
    volumes:
      - ./static:/usr/share/nginx/html
    # Make the container externally accessible.
    ports:
      - '8000:80'
You can check this file into source control with your application. Also consider adding a third service with build: that builds an image containing the actual application code; that service probably will not need volumes:.
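For example, a hypothetical application service (the build context and port here are assumptions, not part of the stack above) would slot into the services: block:

  # The application itself, built from a Dockerfile in this directory.
  app:
    build: .
    ports:
      - '3000:3000'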
docker-compose up -d will start this stack of containers (without -d, in the foreground). If you make a change to the docker-compose.yml file, re-running the same command will delete and recreate containers as required. Note that you are never running an unmodified debian image, nor are you manually running commands inside a container; the docker-compose.yml file completely describes the containers, their startup sequences (if not already built into the images), and any required runtime options.
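For example (run from the directory containing the docker-compose.yml):

docker-compose up -d    # first run: creates the db and nginx containers
# ...edit docker-compose.yml, e.g. change nginx's host port...
docker-compose up -d    # second run: deletes and recreates only nginx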
Also see Networking in Compose for some details about how to make connections between containers: localhost from within a container will call out to that same container and not one of the other containers or the host system.
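For example, with the file above, an application container would reach the database using the service name db as the hostname; a hypothetical connection string would look like:

postgres://postgres:passw0rd@db:5432/postgres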
I have a Compose file with four services. I need to OPEN one of them to the outside by setting ports.
After changing the .yml file, do I need to 'rebuild the container' (docker-compose down/up), or do I just need to stop/start it (docker-compose stop/start)?
Specifically, what I need to make accessible from outside is a PostgreSQL server. This is my current postgres service definition in the .yml:
mydb:
  image: postgres:9.4
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I think I just need to change it to:
mydb:
  image: postgres:9.4
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I'm worried about losing the data on the 'db-data' volume, or the connections to the other services, if I use down/up.
Also, there are 3 other services specified in the .yml file. If it is necessary to REBUILD (without losing data in db-data, of course), I don't want to touch these other containers. In that case, would these be the steps?
1. First, rebuild the 'mydb' container with docker run (could you provide the right command, please?)
2. Modify the .yml as stated before, just adding the ports
3. Perform a simple docker-compose stop/start
Could you help me, please?
If you're only changing settings like ports:, it is enough to re-run docker-compose up -d again. Compose will figure out which things are different from the existing containers, and destroy and recreate only those specific containers.
If you're changing a Dockerfile or your application code, you may specifically need to docker-compose build your application or use docker-compose up -d --build. But you don't need to rebuild the images if you're only changing runtime settings like ports:.
docker-compose down tears down your entire container stack. You don't need it for routine rebuilds or container updates; it's useful when you intentionally want to shut down the whole stack and free up host ports, memory, and other resources.
docker-compose stop leaves the containers in an unusual state: they exist but have no running process. You almost never need this. docker-compose start restarts containers from this unusual state, and you also almost never need it.
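So, in practice, the whole flow for a change like adding ports: is just this (run from the directory containing the docker-compose.yml):

docker-compose up -d             # recreates only the changed mydb container
# and only if a Dockerfile or application code changed:
docker-compose up -d --build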
You have to rebuild it.
For that reason the best practice is to map all the mount points and resources externally, so you can recreate the container (with changed parameters) without any loss of data.
In your scenario I see that you put all the data in an external Docker volume, so I think you can safely recreate the container with changed ports.
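As a quick sanity check before recreating, you can confirm the data really lives in that named volume (the actual volume name carries your Compose project prefix, so <project> below is a placeholder):

docker volume ls
docker volume inspect <project>_db-data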
Here is my problem:
I have a container A (Node.js) and a container B (nginx). In the Dockerfile of container A, I build several files from the sources into a folder named build, as they are needed to run the server. I want to access this folder from container B to serve the static files.
The purpose is to have a simple workflow where you could just git clone the repo with the sources, run docker-compose up --build, and everything is running. In this scenario, the host does not have the software needed to build the files, so the build must happen INSIDE the Docker container.
My first attempt, which almost worked, was the following:
version: "2"
services:
  nginx:
    volumes_from:
      - node
  node:
    volumes:
      - /code/build
When I first ran docker-compose build and docker-compose up, everything seemed to work fine: the volume was created from container A with the build files inside it, and container B could access them as expected.
However, the issue happens when the sources are updated. The new build files do not replace the old ones, because the volume from the existing container is reused and its contents take priority. So after the first time I always have old files for both container A and container B.
I investigated a way to force the volume to be recreated from scratch every time I run docker-compose build, but did not find anything. The only thing I found would be to use docker-compose stop && docker-compose rm, but that seems a bit hacky to do every time, and in addition it leads to quite a long downtime compared to just replacing the existing container with the new version via docker-compose up.
Is there any proper solution to accomplish what I am trying to achieve?
I'd redo the workflow: use a named volume that's mounted in multiple containers, where one of those containers is an updater that has the application build environment. Then, on launch, the updater pulls the latest from git and updates the shared volume as part of its CMD or ENTRYPOINT.
Your compose file would look similar to:
version: "2"
volumes:
  build:
    driver: local
services:
  nginx:
    volumes:
      - build:/code/build
  updater:
    volumes:
      - build:/code/build
Then, on any changes, you can run docker-compose run updater and it will push the latest changes to your volume, where nginx can use them without your other containers ever stopping. Since it's a batch job that exits, even a docker-compose up would launch the updater again.
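A minimal sketch of what the updater's ENTRYPOINT script could look like (the repository URL and the Node.js build commands are placeholders, not from the original setup):

#!/bin/sh
set -e
# Fetch the latest sources and publish the build into the shared volume.
git clone --depth 1 https://example.com/your/repo.git /tmp/src
cd /tmp/src
npm install                   # assumed Node.js build steps
npm run build
cp -a build/. /code/build/    # overwrite the old files in the named volume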
I'm using an official MySQL docker image (https://github.com/docker-library/mysql/blob/3288a66368f16deb6f2768ce373ab36f92553cfa/5.6/Dockerfile) with docker-compose and I would like its data to be wiped out upon restart. The default is that it retains its data between container restarts.
Here's my docker-compose.yml:
version: "2"
services:
  mydb:
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: foo
When I use docker inspect on my container it shows its location on the host machine. How can I instead have it store the data inside the container? I do not want it to persist.
When using docker-compose, the containers are not removed on docker-compose stop (or ctrl-c, or any other kind of interrupt/exit). Thus, if you're stopping the container, it's still going to exist the next time you start.
What you want is docker-compose down which, according to the docs, will "Stop and remove containers, networks, images, and volumes". Note that only containers and networks are removed by default; you need to specify a command-line switch if you want to remove images or volumes.
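For example, to get a truly fresh database on every restart cycle (the -v flag also removes the anonymous volume the MySQL image declares for /var/lib/mysql):

docker-compose down -v
docker-compose up -d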
I have been trying to use docker-compose to spin up a postgres container with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!): one container dies or is killed, and another takes its place without losing the previously persisted data.
As I understand it, "named volumes" are supposed to replace "data volume containers".
However, so far one of two things happens:
1. The postgres container fails to start up, with the error message "ERROR: Container command not found or does not exist."
2. I achieve persistence for only that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. Which would be fine, if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6
Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up, and my data is in the state it was left in by the down command).
In general, a few things:
1. Don't use the PGDATA environment option with the official postgres image.
2. If using Spring Boot (like I was) with Docker Compose (as I was) and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things configured incorrectly initially, but I suspect the killer was point 2 above: it caused my app, when running in a container, to use the in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly, until container shutdown. And when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
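For illustration, the offending difference was along these lines (a hypothetical service from my file; SPRING_PROFILES_ACTIVE stands in for however the profile was actually passed):

myapp:
  environment:
    - SPRING_PROFILES_ACTIVE="docker"    # wrong: the quotes become part of the value
    # - SPRING_PROFILES_ACTIVE=docker    # right: Spring sees the profile name docker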
You need to tell Compose that it should manage creation of the volume; otherwise it assumes the volume should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external