Possible to access files from a different docker image within a container? - docker

I'm trying to set up a docker-compose file for running Apache Guacamole.
The compose file has 3 services, 2 for guacamole itself and 1 database image. The problem is that the database has to be initialized before the guacamole container can use it, but the files to initialize the database are in the guacamole image. The solution I came up with is this:
version: "3"
services:
init:
image: guacamole/guacamole:latest
command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/" ]
volumes:
- dbinit:/init
database:
image: postgres:latest
restart: unless-stopped
volumes:
- dbinit:/docker-entrypoint-initdb.d
- dbdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: guac
POSTGRES_PASSWORD: guac
depends_on:
- init
guacd:
image: guacamole/guacd:latest
restart: unless-stopped
guacamole:
image: guacamole/guacamole:latest
restart: unless-stopped
ports:
- "8080:8080"
environment:
GUACD_HOSTNAME: guacd
POSTGRES_HOSTNAME: database
POSTGRES_DATABASE: guac
POSTGRES_USER: guac
POSTGRES_PASSWORD: guac
depends_on:
- database
- guacd
volumes:
dbinit:
dbdata:
So I have one container whose only job is to copy the database initialization files into a volume, which is then mounted into the database container. The problem is that this creates a race condition and is ugly. Is there some elegant solution for this? Is it possible to mount the files from the guacamole image into the database container? I would rather avoid shipping an extra SQL file alongside the docker-compose file.
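(If a Compose version new enough for long-form depends_on conditions is available, the race could presumably be removed by making the database wait for the copy job to finish, roughly like this, though it does not make the setup any less ugly:)
  init:
    image: guacamole/guacamole:latest
    command: ["/bin/sh", "-c", "cp /opt/guacamole/postgresql/schema/*.sql /init/"]
    volumes:
      - dbinit:/init
  database:
    image: postgres:latest
    depends_on:
      init:
        condition: service_completed_successfully  # only start once the copy job has exited successfully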
Thanks in advance!

Related

docker-compose is not saving data to volumes

Below is my docker-compose file, in which I specified that the Postgres data should be saved into a folder on the host called volumes/db_data/. But this folder stays empty; instead, the data seems to be saved into the default location /var/lib/docker/volumes. The folder user_models is also not saved locally in volumes/user_models, and api_server.log is not saved either. So I guess there is something wrong with my understanding of Docker volumes in general.
version: "3.7"
services:
db:
image: postgres
environment:
POSTGRES_PASSWORD: xxxxxxxxxxx
POSTGRES_USER: postgres
POSTGRES_DB: user-db
volumes:
- ./volumes/db_data:/var/lib/postgresql/data
ports:
- 5432:5432
api-server:
image: api-server
volumes:
- ./volumes/user_models:/classification/user_models
depends_on:
- db
ports:
- 1337:1337
model-trainer:
image: webapp-trainer
volumes:
- ./volumes/user_models:/user_models
- ./volumes/api_server.log:/api_server.log
depends_on:
- db
react-client:
image: webapp-client
depends_on:
- api-server
ports:
- 3000:3000
Everything else is working fine, just the data in user_models is not saved and the postgres data is saved in the wrong place. What am I doing wrong?
Edit:
docker-compose up prints this at the start:
WARNING: Service "db" is using volume "/var/lib/postgresql/data" from the previous container. Host mapping "/home/theo/Documents/Programming/cbi-webapp/compose/volumes/db_data"
yet db_data is empty
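That warning means the old db container (created before the host mapping existed) is being reused, so the bind mount is ignored. Recreating the containers should presumably make the mapping take effect:
docker-compose down   # remove the existing containers (named volumes are kept)
docker-compose up -d  # recreate them, this time with ./volumes/db_data bind-mounted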

How to solve MySQL database data disappear in Docker Swarm

When I use Docker Swarm with MySQL I run into a problem: if the container is re-run, the MySQL data disappears.
I have spent a lot of time searching for how to fix this. I use a volume to store the data and GlusterFS to share that volume across hosts, but I am experiencing database data inconsistency.
Is my method right, or can someone tell me how to fix this problem?
Finally, this is my example yaml file:
version: '3.8'
services:
  www:
    image: httpd:latest
    ports:
      - "8001:80"
    volumes:
      - /usr/papertest/src:/var/www/html/
  db:
    image: mariadb:latest
    restart: always
    volumes:
      - /usr/test/src:/docker-entrypoint-initdb.d
      - /etc/timezone:/etc/timezone:ro
      - /usr/test/backup:/var/lib/mysql # /usr/test/backup is the glusterfs mount place
    environment:
      MYSQL_ROOT_PASSWORD: MySQL_PASSWORD
      MYSQL_DATABASE: MYSQL_DATABASE
      #MYSQL_USER: MYSQL_USER
      #MYSQL_PASSWORD: MYSQL_PASSWD
  phpmyadmin:
    image: phpmyadmin
    #restart: always
    ports:
      - 4567:80
    environment:
      - PMA_ARBITRARY=1

the database is not created during container startup with postgres

docker-compose.yml
version: '3.5'
services:
  postgres:
    container_name: postgres_container
    image: postgres:11.7
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-root}
      PGDATA: /data/postgres
    # ./init.sql (for unix systems)
    # //docker/init.sql:/docker-entrypoint-initdb.d/init.sql - for Windows
    volumes:
      - postgres:/data/postgres
      - //docker/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "5432:5432"
    networks:
      - postgres
    restart: unless-stopped
  pgadmin:
    container_name: pgadmin_container
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL:-pgadmin4@pgadmin.org}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD:-admin}
    volumes:
      - pgadmin:/root/.pgadmin
    ports:
      - "${PGADMIN_PORT:-5050}:80"
    networks:
      - postgres
    depends_on:
      - postgres
    restart: unless-stopped
networks:
  postgres:
    driver: bridge
volumes:
  postgres:
  pgadmin:
When the container starts, the script should be run.
init.sql
CREATE DATABASE example;
CREATE DATABASE test;
But no databases are created; I have to create them manually through the console.
Does anyone have an idea why this happens and how to fix it? (The screenshot shows that the script is mounted in the container.)
Solution
I stopped and deleted all the containers.
Then I deleted the volumes.
After that, I ran docker-compose up again.
The databases were created.
Perhaps the first launch failed but the volumes had already been created, so when I corrected the file the initialization was not executed on the second launch, since the volumes already existed for the current container. Thanks for the tip.
From the update, it appears that a previous startup of the database had been done. Once that happens, the volume gets initialized. And once the volume has data, the entrypoint for the database will not perform the initialization step again.
The solution is to stop the database, delete the volume with the bad database data, and then restart the database.
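In Compose terms that reset boils down to the following (assuming the data volume is one declared in the compose file, so docker-compose down -v will remove it):
docker-compose down -v   # remove the containers and their named/anonymous volumes
docker-compose up -d     # on a fresh volume the entrypoint runs /docker-entrypoint-initdb.d again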

Can't connect to postgres container

I defined a postgres server in docker-compose.yml:
db:
  image: postgres:9.5
  expose:
    - 5432
Then, from another docker container, I tried to connect to this postgres container, but it fails with this error:
Is the server running on host "db" (172.22.0.2) and accepting
data-service_1 | TCP/IP connections on port 5432?
Why can't the container connect to the other one using the provided information (host="db" and port=5432)?
PS
Full docker-compose.yml:
version: "2"
services:
data-service:
build: .
depends_on:
- db
ports:
- "50051:50051"
db:
image: postgres:9.5
depends_on:
- data-volume
environment:
- POSTGRES_USER=cobrain
- POSTGRES_PASSWORD=a
- POSTGRES_DB=datasets
ports:
- "8000:5432"
expose:
- 5432
volumes_from:
- data-volume
# - container:postgres9.5-data
restart: always
data-volume:
image: busybox
command: echo "I'm data container"
volumes:
- /var/lib/postgresql/data
Solution #1. Same file.
To be able to access the db container, you have to define your other containers in the same docker-compose.yml. When the containers are started, every service can reach the others by its service name (Compose puts them on a shared network and resolves the names for you).
Just do
version: '2'
services:
  web:
    image: your/image
  db:
    image: postgres:9.5
If you do not wish to put your other containers into the same docker-compose.yml, there are other solutions:
Solution #2. IP
Run docker inspect <name of your db container> and look for the IPAddress entry in the output. Use that IP address as the host to connect to.
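If you only want the address, a one-line variant with a format string works too (the container name below is just a placeholder):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_db_container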
Solution #3. Networks
Make your containers join the same network. For that, under each service, define:
services:
  db:
    networks:
      - myNetwork
Don't forget to repeat this for each container you are starting, changing db to that service's name; a fuller sketch follows below.
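A slightly fuller sketch of what that looks like (the network name here is arbitrary, and it also has to be declared at the top level of the file):
version: '2'
services:
  db:
    image: postgres:9.5
    networks:
      - myNetwork
networks:
  myNetwork: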
I usually go with the first solution during development. I use apache+php as one container and pgsql as another, with a separate DB for every project. I never run more than one docker-compose.yml at a time, so defining both containers in one .yml config is perfect for this case.
The depends_on is not correct. I would try other parameters like links and environment instead:
version: "2"
services:
data-service:
build: .
links:
- db
ports:
- "50051:50051"
volumes_from: ["db"]
environment:
DATABASE_HOST: db
db:
image: postgres:9.5
environment:
- POSTGRES_USER=cobrain
- POSTGRES_PASSWORD=a
- POSTGRES_DB=datasets
ports:
- "8000:5432"
expose:
- 5432
#volumes_from:
#- data-volume
# - container:postgres9.5-data
restart: always
data-volume:
image: busybox
command: echo "I'm data container"
volumes:
- /var/lib/postgresql/data
This one works for me (with mysql rather than postgres).

Docker-Compose persistent data MySQL

I can't seem to get MySQL data to persist if I run $ docker-compose down with the following .yml
version: '2'
services:
  # other services
  data:
    container_name: flask_data
    image: mysql:latest
    volumes:
      - /var/lib/mysql
    command: "true"
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
      MYSQL_USER: 'test'
      MYSQL_PASS: 'pass'
    volumes_from:
      - data
    ports:
      - "3306:3306"
My understanding is that the volumes: - /var/lib/mysql entry in my data container maps MySQL's data directory in the container to a directory on my local machine, and that because of this mapping the data should persist even if the containers are destroyed. The mysql container is just a client interface into the db and can see that directory because of volumes_from: - data.
I attempted this answer and it did not work: Docker-Compose Persistent Data Trouble
EDIT
I changed my .yml as shown below and created the directory ./data, but now when I run docker-compose up --build the mysql container won't start and throws this error:
data:
  container_name: flask_data
  image: mysql:latest
  volumes:
    - ./data:/var/lib/mysql
  command: "true"
mysql:
  container_name: flask_mysql
  restart: always
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
    MYSQL_USER: 'test'
    MYSQL_PASS: 'pass'
  volumes_from:
    - data
  ports:
    - "3306:3306"
flask_mysql | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
flask_mysql | 2016-08-26T22:29:21.182144Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
flask_mysql | 2016-08-26T22:29:21.185392Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
The data container is a superfluous workaround; a named data volume will do the trick for you. Alter your docker-compose.yml to:
version: '2'
services:
  mysql:
    container_name: flask_mysql
    restart: always
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
      MYSQL_USER: 'test'
      MYSQL_PASS: 'pass'
    volumes:
      - my-datavolume:/var/lib/mysql
volumes:
  my-datavolume:
Docker will create the volume for you under /var/lib/docker/volumes. This volume persists as long as you do not run docker-compose down -v.
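To see where that data actually ends up, the volume can be inspected by name (Compose prefixes it with the project name, so the exact name may differ):
docker volume ls
docker volume inspect <project>_my-datavolume   # the Mountpoint field points under /var/lib/docker/volumes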
There are 3 ways:
First way
You need to specify the directory on your host machine where the MySQL data should be stored. You can then remove the data container; your MySQL data will be saved on your local filesystem.
The mysql container definition must look like this:
mysql:
  container_name: flask_mysql
  restart: always
  image: mysql:latest
  environment:
    MYSQL_ROOT_PASSWORD: 'test_pass' # TODO: Change this
    MYSQL_USER: 'test'
    MYSQL_PASS: 'pass'
  volumes:
    - /opt/mysql_data:/var/lib/mysql
  ports:
    - "3306:3306"
Second way
Commit the data container before running docker-compose down:
docker commit my_data_container
docker-compose down
Third way
You can also use docker-compose stop instead of docker-compose down (then you don't need to commit the container).
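The practical difference between the two, for reference:
docker-compose stop    # stops the containers but keeps them (and their volumes)
docker-compose start   # brings the same containers back with the data intact
docker-compose down    # removes the containers; add -v to also remove the volumes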
First, you need to delete all the old mysql data using:
docker-compose down -v
After that, add two lines to your docker-compose.yml:
volumes:
  - mysql-data:/var/lib/mysql
and
volumes:
  mysql-data:
Your final docker-compose.yml will look like this:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will persist and will not be deleted even after running:
docker-compose down
Extra: if you do want to delete all the data, use:
docker-compose down -v
You have to create a separate volume for mysql data.
So it will look like this:
volumes_from:
  - data
volumes:
  - ./mysql-data:/var/lib/mysql
And no, /var/lib/mysql is a path inside your mysql container and has nothing to do with a path on your host machine. Your host machine may not even have MySQL installed at all. The goal is to persist an internal folder from the mysql container.
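A quick way to confirm the data really lands on the host with the mapping above (the listed file names are what a typical MySQL data directory contains):
docker-compose up -d mysql
ls ./mysql-data   # should now show ibdata1, the mysql/ directory, etc.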
Adding on to the answer from @Ohmen, you could also add an external flag to create the data volume outside of Docker Compose. This way Compose would not attempt to create it, and you wouldn't have to worry about losing the data inside the data volume in the event of docker-compose down -v.
The example below is from the official page.
version: "3.8"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
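Because of external: true, Compose expects the volume to already exist, so it would be created once by hand before the first docker-compose up:
docker volume create data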
Actually the left-hand side is a path, and it has to be a valid path for this to work. If your data directory is in the current directory, then instead of my-data you should write ./my-data, otherwise you will get that error with mysql and mariadb as well:
volumes:
  - ./my-data:/var/lib/mysql
Feasible bind mount solution:
mariadb:
  image: mariadb:latest
  restart: unless-stopped
  environment:
    - MARIADB_ROOT_PASSWORD=${MARIADB_ROOT_PASSWORD}
  volumes:
    - type: bind
      source: /host/dir
      target: /var/lib/mysql
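One caveat with the long bind-mount syntax (at least in the Compose versions I have seen): unlike the short ./path:/path form, it does not create the host directory for you, so it has to exist beforehand, e.g.:
mkdir -p /host/dir
docker-compose up -d mariadb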
