Losing all modifications when a container is restarted with Docker Compose - docker

I'm using Docker and docker-compose to run a web application. I want to go inside my container, modify some config files, and restart the container without losing those modifications.
I'm creating a container using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
After changing the config files, everything looks good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. Can someone explain the best way to do this without losing modifications after a restart?

A useful technique here is to store a copy of the configuration files on the host and then inject them using a Docker Compose volumes: directive.
version: '3'
services:
  myapp:
    image: me/myapp
    ports: ['8080:8080']
    volumes:
      - './myapp.ini:/app/myapp.ini'
It is fairly routine to destroy and recreate containers, and you want things to be set up so that everything is ready to go immediately once you docker run or docker-compose up.
Other good uses of bind-mounted directories like this are to give a container a place to publish log files back out, and, if your container happens to need persistent data on a filesystem, to give it a place to store that data across container runs.
docker exec is a useful debugging tool, but it is not intended to be part of your core Docker workflow.
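If you need a starting point for the host-side copy, one option is to copy the file out of a running container first. A minimal sketch, reusing the myapp service and the /app/myapp.ini path from the example above:
docker cp $(docker-compose ps -q myapp):/app/myapp.ini ./myapp.ini
Edit the host copy, then run docker-compose up -d --force-recreate so the container comes back up with the bind-mounted file.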

Thanks @David Maze for your reply. In my case I have a script that changes many parameters in my app and generates an SSL certificate; after executing the script in my container I have to restart the container.
My docker-compose.yml:
version: '2.3'
services:
  wso2iot-mysql:
    image: mysql:5.7.20
    container_name: wso2iot-mysql
    hostname: wso2iot-mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql/scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 10s
      timeout: 60s
      retries: 5
  wso2iot-broker:
    image: docker.wso2.com/wso2iot-broker:3.3.0
    container_name: wso2iot-broker
    hostname: wso2iot-broker
    ports:
      - "9446:9446"
      - "5675:5675"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9446"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./broker:/home/wso2carbon/volumes/wso2/broker
  wso2iot-analytics:
    image: docker.wso2.com/wso2iot-analytics:3.3.0
    container_name: wso2iot-analytics
    hostname: wso2iot-analytics
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./analytics:/home/wso2carbon/volumes/wso2/analytics
    ports:
      - "9445:9445"
  wso2iot-server:
    image: docker.wso2.com/wso2iot-server:3.3.0
    container_name: wso2iot-server
    hostname: wso2iot-server
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./iot-server:/home/wso2carbon/volumes
    ports:
      - "9443:9443"
    links:
      - wso2iot-mysql

Related

Minio: found backend type fs, expected xl or xl-single

I'm trying to upgrade the MinIO version in my docker-compose (previously I used image: minio/minio:RELEASE.2020-06-22T03-12-50Z and it was working).
For now I have the following docker-compose service:
version: '3.6'
services:
  minio:
    container_name: minio
    image: minio/minio:RELEASE.2022-11-17T23-20-09Z.fips
    volumes:
      - minio-data:/data
    ports:
      - 9000:9000
    environment:
      - MINIO_ROOT_USER=minio
      - MINIO_ROOT_PASSWORD=minio123
    command: server /data
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3
When I try to start it (docker-compose up -d) I see the following error in the minio container log:
2022-11-25 11:40:56 ERROR Unable to use the drive /data: Drive /data: found backend type fs, expected xl or xl-single - to migrate to a supported backend visit https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html: Invalid arguments specified
I found the following article: https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html
But I still don't understand what I should change in my compose file to make it work.
It looks like you need to migrate the data in your volume away from the fs backend so it can be used by the new version of MinIO, so you need to run the steps from
https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html
In your compose you need to add
volumes:
  minio-data:
    driver: local
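For reference, a rough sketch of the migration idea using the MinIO client (the alias names, bucket name, and export path here are illustrative; follow the linked guide for the authoritative steps):
# while the old fs-backend server is still running, copy the objects out
mc alias set old http://localhost:9000 minio minio123
mc mirror --preserve old/mybucket /tmp/minio-export/mybucket
# start the new release against a fresh, empty volume, then copy them back
mc alias set new http://localhost:9000 minio minio123
mc mirror --preserve /tmp/minio-export/mybucket new/mybucket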
It is not a solution but a workaround for using a fresh version:
minio:
  container_name: minio
  image: bitnami/minio:2022.11.17-debian-11-r0
  volumes:
    - minio-data:/data
  ports:
    - 9000:9000
    - 9001:9001
  environment:
    - MINIO_ROOT_USER=minio
    - MINIO_ROOT_PASSWORD=minio123
    - MINIO_DEFAULT_BUCKETS=mybucket1,mybucket2
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s

How to use two healthchecks in docker-compose file where the python app depends on both healthchecks?

I have two containers, one mdillon/postgis and another postgrest/postgrest, and the Python app depends on the healthchecks of both containers. Please help.
In the terminal, after docker-compose up:
Creating compose_postgis_1 ... done
Creating compose_postgrest_1 ... done
ERROR: for app Container <postgrest_container_id> is unhealthy. And then the terminal exits.
My docker-compose.yml file:
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
If you are using the official Postgres Docker image, there is an option to run Postgres on a specific port. You need to add the PGPORT environment variable for the Postgres container to run on a different port. Try the one below:
services:
  postgis:
    image: mdillon/postgis
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    ports:
      - 5432:5432
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  postgrest:
    image: postgrest/postgrest
    volumes:
      - ./data:/var/lib/postgresql/data:cached
    environment:
      PGRST_DB_URI: postgres://${PGRST_DB_ANON_ROLE}:@postgis:5432/postgres
      PGRST_DB_SCHEMA: ${PGRST_DB_SCHEMA}
      PGRST_DB_ANON_ROLE: ${PGRST_DB_ANON_ROLE}
      PGRST_DB_POOL: ${PGRST_DB_POOL}
      PGPORT: 3000
    ports:
      - 3000:3000
    healthcheck:
      test: ["CMD-SHELL", "pg_isready"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    image: newapp
    command: python main.py
    ports:
      - 5000:5000
    depends_on:
      postgis:
        condition: service_healthy
      postgrest:
        condition: service_healthy
By default, the Postgres container runs on port 5432 inside the Docker network. Since you are not changing the port of the Postgres container, both containers are trying to run on the same port inside the Docker network, and because of this one container will run and the other will not. You can check the logs of the Docker containers for a better understanding.
Hence, adding the PGPORT env var so that Postgres runs on a different port should resolve your issue.
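To verify which container is failing its healthcheck and why, you can inspect the logs and the health state directly (service and placeholder names follow the compose file above):
docker-compose logs postgrest
docker inspect --format '{{json .State.Health}}' <postgrest_container_id>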

Connecting to RabbitMQ container with docker-compose

I want to run RabbitMQ in one container, and a worker process in another. The worker process needs to access RabbitMQ.
I'd like these to be managed through docker-compose.
This is my docker-compose.yml file so far:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
So I've exposed the RabbitMQ ports. The worker process accesses RabbitMQ using the following URL:
amqp://guest:guest@rabbitmq:5672/
Which is what they use in the official tutorial, but localhost has been swapped for rabbitmq, since the containers should be discoverable with a hostname identical to the container name:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Whenever I run this, I get a connection refused error:
Recreating ci_rabbitmq_1 ... done
Recreating ci_worker_1 ... done
Attaching to ci_rabbitmq_1, ci_worker_1
worker_1 | dial tcp 127.0.0.1:5672: connect: connection refused
ci_worker_1 exited with code 1
I find this interesting because it's using the IP 127.0.0.1 which (I think) is localhost, even though I specified rabbitmq as the hostname. I'm not an expert on docker networking, so maybe this is desired.
I'm happy to supply more information if needed!
Edit
There is an almost identical question here. I think I need to wait until rabbitmq is up and running before starting worker. I tried doing this with a healthcheck:
version: "2.1"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 10s
timeout: 10s
retries: 5
worker:
build: .
depends_on:
rabbitmq:
condition: service_healthy
(Note the different version.) This doesn't work, however - it always fails as unhealthy.
Aha! I fixed it. @Ijaz was totally correct - the RabbitMQ service takes a while to start, and my worker tries to connect before it's running.
I tried using a delay, but this failed when RabbitMQ took longer than usual.
This is also indicative of a larger architectural problem - what happens if the queuing service (RabbitMQ in my case) goes offline during production? Right now, my entire site fails. There needs to be some built-in redundancy and polling.
As described in this related answer, we can use healthchecks in docker-compose 3+:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- 5672
- 15672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 15s
retries: 1
worker:
image: worker
restart: on-failure
depends_on:
- rabbitmq
Now, the worker container will restart a few times while the rabbitmq container stays unhealthy. rabbitmq immediately becomes healthy when nc -z localhost 5672 succeeds - i.e. when the queueing service is live!
Here is a correct, working example:
version: "3.8"
services:
rabbitmq:
image: rabbitmq:3.7.28-management
#container_name: rabbitmq
volumes:
- ./etc/rabbitmq/conf:/etc/rabbitmq/
- ./etc/rabbitmq/data/:/var/lib/rabbitmq/
- ./etc/rabbitmq/logs/:/var/log/rabbitmq/
environment:
RABBITMQ_ERLANG_COOKIE: ${RABBITMQ_ERLANG_COOKIE:-secret_cookie}
RABBITMQ_DEFAULT_USER: ${RABBITMQ_DEFAULT_USER:-admin}
RABBITMQ_DEFAULT_PASS: ${RABBITMQ_DEFAULT_PASS:-admin}
ports:
- 5672:5672 #amqp
- 15672:15672 #http
- 15692:15692 #prometheus
healthcheck:
test: [ "CMD", "rabbitmqctl", "status"]
interval: 5s
timeout: 20s
retries: 5
mysql:
image: mysql
restart: always
volumes:
- ./etc/mysql/data:/var/lib/mysql
- ./etc/mysql/scripts:/docker-entrypoint-initdb.d
environment:
MYSQL_ROOT_PASSWORD: root
MYSQL_DATABASE: mysqldb
MYSQL_USER: ${MYSQL_DEFAULT_USER:-testuser}
MYSQL_PASSWORD: ${MYSQL_DEFAULT_PASSWORD:-testuser}
ports:
- "3306:3306"
healthcheck:
test: ["CMD", "mysqladmin" ,"ping", "-h", "localhost"]
timeout: 20s
retries: 10
trigger-batch-process-job:
build: .
environment:
- RMQ_USER=${RABBITMQ_DEFAULT_USER:-admin}
- RMQ_PASS=${RABBITMQ_DEFAULT_PASS:-admin}
- RMQ_HOST=${RABBITMQ_DEFAULT_HOST:-rabbitmq}
- RMQ_PORT=${RABBITMQ_DEFAULT_PORT:-5672}
- DB_USER=${MYSQL_DEFAULT_USER:-testuser}
- DB_PASS=${MYSQL_DEFAULT_PASSWORD:-testuser}
- DB_SERVER=mysql
- DB_NAME=mysqldb
- DB_PORT=3306
depends_on:
mysql:
condition: service_healthy
rabbitmq:
condition: service_healthy
Maybe you don't need to expose/map the ports on the host if you are just accessing the service from another container.
From the documentation:
Expose: Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified.
expose:
  - "3000"
  - "8000"
So it should be like this:
version: "3"
services:
rabbitmq:
image: rabbitmq
command: rabbitmq-server
expose:
- "5672"
- "15672"
worker:
build: ./worker
depends_on:
- rabbitmq
# Allow access to docker daemon
volumes:
- /var/run/docker.sock:/var/run/docker.sock
Also make sure to connect to RabbitMQ only when it is ready to serve on its port.
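For example, a minimal wait-loop sketch for the worker's entrypoint, assuming nc is available in the worker image (the hostname matches the rabbitmq service name above, and the final command is hypothetical):
# block until the rabbitmq service accepts TCP connections on 5672
until nc -z rabbitmq 5672; do
  echo "waiting for rabbitmq..."
  sleep 1
done
exec python worker.py  # hypothetical worker command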
The cleanest way for docker compose v3.8:
version: "3.8"
services:
worker:
build: ./worker
rabbitmq:
condition: service_healthy
rabbitmq:
image: library/rabbitmq
ports:
- 5671:5671
- 5672:5672
healthcheck:
test: [ "CMD", "nc", "-z", "localhost", "5672" ]
interval: 5s
timeout: 10s
retries: 3

Can't set volume value right when running owncloud docker container

I followed this guide to install the ownCloud server on my VPS, and it works. However, there is an issue when trying to mount the files volume to the host server. Below is the docker-compose.yml file I use. If I run the Docker image with the yml file directly, I can't find any ownCloud data under the directory /mnt/data of my VPS, even though I created /mnt/data manually.
version: '2.1'
volumes:
  files:
    driver: local
  mysql:
    driver: local
  backup:
    driver: local
  redis:
    driver: local
services:
  owncloud:
    image: owncloud/server:${OWNCLOUD_VERSION}
    restart: always
    ports:
      - ${HTTPS_PORT}:443
      - ${HTTP_PORT}:80
    depends_on:
      - db
      - redis
    environment:
      - OWNCLOUD_DOMAIN=${OWNCLOUD_DOMAIN}
      - OWNCLOUD_DB_TYPE=mysql
      - OWNCLOUD_DB_NAME=owncloud
      - OWNCLOUD_DB_USERNAME=owncloud
      - OWNCLOUD_DB_PASSWORD=owncloud
      - OWNCLOUD_DB_HOST=db
      - OWNCLOUD_ADMIN_USERNAME=${ADMIN_USERNAME}
      - OWNCLOUD_ADMIN_PASSWORD=${ADMIN_PASSWORD}
      - OWNCLOUD_UTF8MB4_ENABLED=true
      - OWNCLOUD_REDIS_ENABLED=true
      - OWNCLOUD_REDIS_HOST=redis
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - files:/mnt/data
  db:
    image: webhippie/mariadb:latest
    restart: always
    environment:
      - MARIADB_ROOT_PASSWORD=owncloud
      - MARIADB_USERNAME=owncloud
      - MARIADB_PASSWORD=owncloud
      - MARIADB_DATABASE=owncloud
      - MARIADB_MAX_ALLOWED_PACKET=128M
      - MARIADB_INNODB_LOG_FILE_SIZE=64M
      - MARIADB_INNODB_LARGE_PREFIX=ON
      - MARIADB_INNODB_FILE_FORMAT=Barracuda
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - mysql:/var/lib/mysql
      - backup:/var/lib/backup
  redis:
    image: webhippie/redis:latest
    restart: always
    environment:
      - REDIS_DATABASES=1
    healthcheck:
      test: ["CMD", "/usr/bin/healthcheck"]
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - redis:/var/lib/redis
So I tried modifying the line files:/mnt/data to files:/mnt/data:/mnt/data, and then it prompted the following error:
ERROR: for ownclouddockerserver_owncloud_1 Cannot create container for service owncloud: invalid bind mount spec "ownclouddockerserver_files:/mnt/data:/mnt/data": invalid mode: /mnt/data
ERROR: for owncloud Cannot create container for service owncloud: invalid bind mount spec "ownclouddockerserver_files:/mnt/data:/mnt/data": invalid mode: /mnt/data
Can anyone help me figure out the right way of mounting an external directory into the container? Much appreciated.
The /mnt/data directory exists only inside your Docker container.
It is mapped to a Docker-managed named volume called files.
In order to map the /mnt/data directory to a local directory, you use:
/my/local/directory:/mnt/data
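In the compose file above that would look like this (the host path is illustrative):
services:
  owncloud:
    volumes:
      - /my/local/directory:/mnt/data
If you bind-mount a host directory this way, the top-level files: entry under volumes: is no longer used and can be removed.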

docker - can't change files (wso2)

I'm using Docker 18.04 and running the WSO2 IoT server. I want to change the IP address using this tutorial: https://docs.wso2.com/display/IOTS330/Configuring+the+IP+or+Hostname . I'm using the attached docker-compose file. I'm creating a container using
sudo docker-compose up
Then I run
sudo docker exec -it -u 0 <container-id> bash
navigate to the script directory, and execute the script.
After this, files like conf/carbon.xml are changed and everything looks good. But if I restart the container by executing
docker container restart $(docker ps -a -q)
all changes are discarded. The strange thing is, if I create a new file, e.g. in the conf directory, this file remains, even after a restart.
Can someone explain this to me?
version: '2.3'
services:
  wso2iot-mysql:
    image: mysql:5.7.20
    container_name: wso2iot-mysql
    hostname: wso2iot-mysql
    ports:
      - 3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./mysql/scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-uroot", "-proot"]
      interval: 10s
      timeout: 60s
      retries: 5
  wso2iot-broker:
    image: wso2iot-broker:3.3.0
    container_name: wso2iot-broker
    hostname: wso2iot-broker
    ports:
      - "9446:9446"
      - "5675:5675"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "9446"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./broker:/home/wso2carbon/volumes/wso2/broker
  wso2iot-analytics:
    image: wso2iot-analytics:3.3.0
    container_name: wso2iot-analytics
    hostname: wso2iot-analytics
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9445/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./analytics:/home/wso2carbon/volumes/wso2/analytics
    ports:
      - "9445:9445"
  wso2iot-server:
    image: wso2iot-server:3.3.0
    container_name: wso2iot-server
    hostname: wso2iot-server
    healthcheck:
      test: ["CMD", "curl", "-k", "-f", "https://localhost:9443/carbon/admin/login.jsp"]
      interval: 10s
      timeout: 120s
      retries: 5
    depends_on:
      wso2iot-mysql:
        condition: service_healthy
    volumes:
      - ./iot-server:/home/wso2carbon/volumes
    ports:
      - "443:9443"
    links:
      - wso2iot-mysql
As far as I know, the writable layer should be available until the container is deleted. But this is not the expected way of using Docker. In this case, if you need to run the change-ip script, I think it would be better to create a new Docker image where the change-ip script is executed as part of the image build.
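A minimal sketch of that idea, assuming your script is called change-ip.sh and sits next to the Dockerfile (both names are illustrative):
FROM wso2iot-server:3.3.0
# copy the configuration script into the image and run it at build time,
# so the modified conf/ files are baked into a new image layer
COPY change-ip.sh /home/wso2carbon/change-ip.sh
RUN bash /home/wso2carbon/change-ip.sh
Containers started from the resulting image keep the changed configuration across restarts, because the changes live in an image layer rather than in the container's writable layer.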
