How to share users between services in docker-compose?

How do I share users between services in docker-compose? I can create a volume and mount it at the container's /etc/ directory, but that will hide the other files/directories there. Is there a smarter way to achieve this?

You could use a named volume plus bind mounts to expose one container's passwd & group files to another container.
Here is an example.
First, without any volume, verify that there is originally no mysql user in the test service:
docker-compose.yaml:
version: "3"
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
  test:
    image: ubuntu:16.04
    command: id mysql
    depends_on:
      - db
Run it as follows:
$ docker-compose up
Creating network "23_default" with the default driver
Creating 23_db_1 ... done
Creating 23_test_1 ... done
Attaching to 23_db_1, 23_test_1
test_1 | id: 'mysql': no such user
db_1 | Initializing database
23_test_1 exited with code 1
From the above, you can see that the ubuntu:16.04 container does not have the user mysql, which is a default user in the mysql image:
test_1 | id: 'mysql': no such user
Now use volumes to make the user mysql visible to the test container:
docker-compose.yaml:
version: "3"
services:
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - my_etc:/etc
  test:
    image: ubuntu:16.04
    command: id mysql
    depends_on:
      - db
    volumes:
      - /tmp/etc-data/passwd:/etc/passwd
      - /tmp/etc-data/group:/etc/group
volumes:
  my_etc:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: '/tmp/etc-data'
Run it as follows. NOTE: we need to create /tmp/etc-data before running up:
$ mkdir -p /tmp/etc-data
$ docker-compose up
Creating network "23_default" with the default driver
Creating 23_db_1 ... done
Creating 23_test_1 ... done
Attaching to 23_db_1, 23_test_1
db_1 | Initializing database
test_1 | uid=999(mysql) gid=999(mysql) groups=999(mysql)
23_test_1 exited with code 0
From the above, you can see the test service now has the user mysql:
test_1 | uid=999(mysql) gid=999(mysql) groups=999(mysql)
A little explanation:
The solution above first uses a named volume to surface the /etc folder of the first container in the folder /tmp/etc-data on the Docker host, and the second container then uses bind mounts to pick up passwd & group individually. As you can see, the second container mounts just those two files (passwd, group), so it won't hide any other files.
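As a quick sanity check (illustrative commands; the paths are those from the example above, and the db container must have started at least once):
$ ls /tmp/etc-data/passwd /tmp/etc-data/group
$ grep mysql /tmp/etc-data/passwd   # the mysql entry should be present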

You can also mount just a single file in a Docker container:
volumes:
  - /etc/mysql.cnf:/etc/mysql.cnf
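Applied to the user-sharing question above, the same single-file trick works, and you can add :ro so the test container cannot modify the shared files (a sketch reusing the paths from the earlier example):
test:
  image: ubuntu:16.04
  command: id mysql
  volumes:
    - /tmp/etc-data/passwd:/etc/passwd:ro
    - /tmp/etc-data/group:/etc/group:ro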

Related

How do I give a container full access to a bind volume

I have a problem deploying some docker images when I use bind volumes: when I check the logs I see errors like "access denied" when the dockerized application tries to create a folder. For example, the following docker compose file creates two containers, one for the postgres database and one for the postgres admin panel.
version: '3.7'
services:
  PostgresDB:
    image: postgres
    environment:
      - POSTGRES_DB=MyDatabase
      - POSTGRES_USER=MyUser
      - POSTGRES_PASSWORD=MyPassword
    volumes:
      - ./data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  PostgresDBAdmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: example@example.com
      PGADMIN_DEFAULT_PASSWORD: MyPassword
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
volumes:
  pgadmin:
For the database I use a bind volume, but for the panel I use a normal named volume. The application works fine. If I change the panel container to use a bind volume, my docker compose file looks like this:
version: '3.7'
services:
  PostgresDB:
    image: postgres
    environment:
      - POSTGRES_DB=MyDatabase
      - POSTGRES_USER=MyUser
      - POSTGRES_PASSWORD=MyPassword
    volumes:
      - ./data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  PostgresDBAdmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: example@example.com
      PGADMIN_DEFAULT_PASSWORD: MyPassword
      PGADMIN_CONFIG_SERVER_MODE: 'False'
    volumes:
      - ./pgadmin:/var/lib/pgadmin
    ports:
      - "5050:80"
As a result, the panel container fails because of a directory permission problem. The generated error log looks like this:
PostgresDBAdmin_1 | ERROR : Failed to create the directory /var/lib/pgadmin/sessions:
PostgresDBAdmin_1 | [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
PostgresDBAdmin_1 | HINT : Create the directory /var/lib/pgadmin/sessions, ensure it is writeable by
PostgresDBAdmin_1 | 'pgadmin', and try again, or, create a config_local.py file
PostgresDBAdmin_1 | and override the SESSION_DB_PATH setting per
PostgresDBAdmin_1 | https://www.pgadmin.org/docs/pgadmin4/6.18/config_py.html
PostgresDBAdmin_1 | Traceback (most recent call last):
PostgresDBAdmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 82, in create_app_data_directory
PostgresDBAdmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
PostgresDBAdmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 20, in _create_directory_if_not_exists
PostgresDBAdmin_1 | os.mkdir(_path)
PostgresDBAdmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
This kind of problem is rare, and I tried to find a way to give the container access to create the directory, but could not find one. The reason I want to be able to use bind volumes is that in cases like NopCommerce it makes it easier for me to access the files in order to create a theme.
Can someone help me solve this problem?
The pgadmin container process runs under a user with UID 5050.
That user needs to have access to the ./pgadmin directory on the host.
One way to do that is to create a user on the host with that UID and make it a member of a group that has access to the ./pgadmin directory.
If, for instance, ./pgadmin is owned by you and your group that are both called 'pitaridis', then you can create a user called 'pgadmin' like this
sudo adduser --system --no-create-home --uid 5050 --ingroup pitaridis --shell /usr/sbin/nologin pgadmin
Then the container process can access ./pgadmin and create the files that it needs.
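Alternatively, since Linux permissions are checked against numeric IDs, it may be enough to hand the directory to UID 5050 directly, without creating a host user (a minimal sketch, assuming the compose file above):
$ mkdir -p ./pgadmin
$ sudo chown -R 5050:5050 ./pgadmin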
Another way that may be easier but is less secure, is to run the container as root, like this:
PostgresDBAdmin:
  image: dpage/pgadmin4
  environment:
    PGADMIN_DEFAULT_EMAIL: example@example.com
    PGADMIN_DEFAULT_PASSWORD: MyPassword
    PGADMIN_CONFIG_SERVER_MODE: 'False'
  volumes:
    - ./pgadmin:/var/lib/pgadmin
  ports:
    - "5050:80"
  user: root
You have to specify user: root inside the PostgresDBAdmin service. The resulting docker compose file looks like this:
PostgresDBAdmin:
  image: dpage/pgadmin4
  user: root
  environment:
    PGADMIN_DEFAULT_EMAIL: example@example.com
    PGADMIN_DEFAULT_PASSWORD: MyPassword
    PGADMIN_CONFIG_SERVER_MODE: 'False'
  volumes:
    - ./pgadmin:/var/lib/pgadmin
  ports:
    - "5050:80"

Error when using Docker stack deploy -c docker-compose.yaml mynetwork

Here is my .yaml
version: "3.3"
services:
  database:
    image: mysql:8
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_USER: ${mysql_user}
      MYSQL_PASSWORD: ${mysql_password}
      MYSQL_ROOT_PASSWORD: ${mysql_root_password}
    ports:
      - "6033:3306"
    networks:
      - ${network_name}
    volumes:
      - dbdata:/var/lib/mysql
      - "./.scripts/schema.sql:/docker-entrypoint-initdb.d/1.sql"
      - "./.scripts/data.sql:/docker-entrypoint-initdb.d/2.sql"
    secrets:
      - mysql_user
      - mysql_password
      - mysql_root_password
      - container_name
      - network_name
secrets:
  mysql_user:
    file: /run/secrets/mysql_user
  mysql_password:
    file: /run/secrets/mysql_password
  mysql_root_password:
    file: /run/secrets/mysql_root_password
  network_name:
    file: /run/secrets/network_name
networks:
  ${network_name}:
    driver: bridge
Here is my script
#!/bin/bash
# Leave current swarm
docker swarm leave --force
# Initialize the host as a Swarm manager
docker swarm init
# Create the secrets
echo "server_user" | docker secret create mysql_user -
echo "server_password" | docker secret create mysql_password -
echo "a1128f69-e6f7-4e93-a2df-3d4db6030abc" | docker secret create mysql_root_password -
echo "template_network" | docker secret create network_name -
# Deploy the stack using the secrets
docker stack deploy -c docker-compose.yaml mynetwork
Here is the error
Node left the swarm.
Swarm initialized: current node (y46rjvlu57bibyhgwk7nthykw) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-161prfq442ha035laq1plnv1o2qfqs026dmg6aslpd4kao7o0i-bnwc5zxiwt3ctmfbxfoszbick 192.168.65.3:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
l3nrkqhy7ygtrb05x7c5rvavu
xrhp70n50waaas1hqha8fk2j2
wj1y8runsi8vydpzc09hp9bmp
u5b6suutp7tkt4lqd5i90bgif
Creating network mynetwork_
failed to create network mynetwork_: Error response from daemon: rpc error: code = InvalidArgument desc = name must be valid as a DNS name component
I do not get the error when I don't use docker secrets for the variables, so I'm wondering if that has something to do with it.
I have tried restarting / clearing / destroying all the containers / networks / services / images in Docker too.
Any help or tips for improvement are also welcomed.
Delete the two networks: blocks.
The actual problem here is the top-level networks:. This is outside the context of any particular service, so while the database service receives a network_name secret, nothing provides that value at the top level. That means Compose tries to expand the environment variable network_name as the network name, but it's empty, which is why the daemon rejects it as an invalid DNS name component.
(In Swarm mode you may not want bridge networking either.)
If you remove the networks: blocks then Compose will create a network named default and attach the container(s) to it. If you do need some non-standard settings, it's possible to configure the default network while keeping the default name and still attaching containers to it by default, for example as shown below. More details are in Networking in Compose in the Docker documentation.
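For example, to configure the default network without renaming it (a sketch; the overlay driver and attachable flag are illustrative choices for Swarm mode):
networks:
  default:
    driver: overlay
    attachable: true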

Docker volume associated to postgres image empty and not persistent

I have a docker-compose file to build a web server with django and a postgres database. It basically looks like this:
version: '3'
services:
  server:
    build:
      context: .
      dockerfile: ./docker/server/Dockerfile
    image: backend
    volumes:
      - ./api:/app
    ports:
      - 8000:8000
    depends_on:
      - postgres
      - redis
    environment:
      - PYTHONUNBUFFERED=1
  postgres:
    image: kartoza/postgis:11.0-2.5
    volumes:
      - pg_data:/var/lib/postgresql/data:rw
    environment:
      POSTGRES_DB: "gis,backend"
      POSTGRES_PORT: "5432"
      POSTGRES_USER: "user"
      POSTGRES_PASS: "pass"
      POSTGRES_MULTIPLE_EXTENSIONS: "postgis,postgis_topology"
    ports:
      - 5432:5432
  redis:
    image: "redis:alpine"
volumes:
  pg_data:
I'm using a volume to make my data persistent.
I managed to run my containers and add data to the database. A volume has successfully been created, as docker volume ls shows:
DRIVER VOLUME NAME
local server_pg_data
But this volume is empty as the output of docker system df -v shows:
Local Volumes space usage:
VOLUME NAME LINKS SIZE
server_pg_data 1 0B
Also, if I want or need to rebuild the containers using docker-compose down and docker-compose up, the data is purged from my database. Yet I thought volumes were meant to make data persist on disk…
I must be missing something in the way I'm using docker and volumes, but I don't get what:
why does my volume appear empty while there is some data in my postgres container?
why does my volume not persist after doing docker-compose down?
This thread (How to persist data in a dockerized postgres database using volumes) looked similar but the solution does not seem to apply.
The kartoza/postgis image isn't configured the same way as the standard postgres image. Its documentation notes (under "Cluster Initializations"):
By default, DATADIR will point to /var/lib/postgresql/{major-version}. You can instead mount the parent location like this: -v data-volume:/var/lib/postgresql
If you look at the Dockerfile in GitHub, you will also see that parent directory named as a VOLUME, which has some interesting semantics here.
With the setting you show, the actual data will be stored in /var/lib/postgresql/11.0; you're mounting the named volume on a different directory, /var/lib/postgresql/data, which is why it stays empty. Changing the volume mount to just /var/lib/postgresql should address this:
volumes:
  - pg_data:/var/lib/postgresql:rw  # not .../data
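After redeploying, you can verify that the named volume is actually being written to (commands taken from the question; the exact size will vary):
$ docker-compose down && docker-compose up -d
$ docker system df -v   # server_pg_data should now report a non-zero SIZE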

Unable to connect to MySQL from a Docker container?

I have created a docker-compose file with two services, Go and MySQL. It creates containers for Go and MySQL. Now I am running code which tries to connect to the MySQL database running as a docker container, but I get an error.
docker-compose.yml
version: "2"
services:
  app:
    container_name: golang
    restart: always
    build: .
    ports:
      - "49160:8800"
    links:
      - "mysql"
    depends_on:
      - "mysql"
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=testDB
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    ports:
      - "3307:3306"
volumes:
  dbdata:
Error while connecting to mysql database
golang | 2019/02/28 11:33:05 dial tcp 127.0.0.1:3306: connect: connection refused
golang | 2019/02/28 11:33:05 http: panic serving 172.24.0.1:49066: dial tcp 127.0.0.1:3306: connect: connection refused
golang | goroutine 19 [running]:
Connection with MySql Database
func DB() *gorm.DB {
    db, err := gorm.Open("mysql", "root:root@tcp(mysql:3306)/testDB?charset=utf8&parseTime=True&loc=Local")
    if err != nil {
        log.Panic(err)
    }
    log.Println("Connection Established")
    return db
}
EDIT: Updated Dockerfile
FROM golang:latest
RUN go get -u github.com/gorilla/mux
RUN go get -u github.com/jinzhu/gorm
RUN go get -u github.com/go-sql-driver/mysql
COPY ./wait-for-it.sh .
RUN chmod +x /wait-for-it.sh
WORKDIR /go/src/app
ADD . src
EXPOSE 8800
CMD ["go", "run", "src/main.go"]
I am using the gorm package, which lets me connect to the database.
depends_on does not verify that MySQL is actually ready to receive connections. It starts the second container once the database container is running, regardless of whether it is ready for connections, which can lead to exactly this kind of issue: your application expects the database to be ready when it might not be.
Quoted from the documentation:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started.
There are many tools/scripts that can be used to solve this issue, like wait-for, which is sh-compatible in case your image is based on Alpine, for example (you can use wait-for-it if you have bash in your image).
All you have to do is add the script to your image through the Dockerfile, then use a command like the following in docker-compose.yml for the service that should wait for the database.
What comes after -- is the command that you would normally use to start your application:
version: "2"
services:
  app:
    container_name: golang
    ...
    command: ["./wait-for", "mysql:3306", "--", "go", "run", "myapplication"]
    links:
      - "mysql"
    depends_on:
      - "mysql"
  mysql:
    image: mysql
    ...
I have removed some parts from the docker-compose for easier readability.
Replace the go run myapplication part with the CMD of your golang image.
See Controlling startup order for more on this problem and strategies for solving it.
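As an alternative to wrapper scripts, Compose file format 2.1+ supports combining a healthcheck on the database with the long form of depends_on, so Compose itself waits for readiness (a sketch; the mysqladmin ping test, interval, and retries are illustrative values):
version: "2.1"
services:
  app:
    build: .
    depends_on:
      mysql:
        condition: service_healthy
  mysql:
    image: mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10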
Another issue will arise after you solve the connection issue:
Setting MYSQL_USER to root causes a failure in MySQL with this error message:
ERROR 1396 (HY000) at line 1: Operation CREATE USER failed for 'root'@'%'
This is because this user already exists in the database and MySQL tries to create another one. If you need the root user itself, use only the MYSQL_ROOT_PASSWORD variable; otherwise change the value of MYSQL_USER so you can use that user in your application instead of root.
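For example (a sketch; the user and password values are placeholders):
environment:
  - MYSQL_ROOT_PASSWORD=root
  - MYSQL_DATABASE=testDB
  - MYSQL_USER=appuser
  - MYSQL_PASSWORD=apppass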
Update: In case you are getting not found even though the path is correct, you might need to write the command as below:
command: sh -c "./wait-for mysql:3306 -- go run myapplication"
First, if you are using a recent version of docker compose, you don't need the links argument in your app service. Quoting the docker compose documentation: "Warning: The --link flag is a legacy feature of Docker. It may eventually be removed. Unless you absolutely need to continue using it..." (https://docs.docker.com/compose/compose-file/#links)
I think the solution is to use the networks argument. This creates a docker network and adds each service to it.
Try this:
version: "2"
services:
  app:
    container_name: golang
    restart: always
    build: .
    ports:
      - "49160:8800"
    networks:
      - my_network
    depends_on:
      - "mysql"
  mysql:
    image: mysql
    container_name: mysql
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    networks:
      - my_network
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=testDB
      - MYSQL_USER=root
      - MYSQL_PASSWORD=root
    ports:
      - "3307:3306"
volumes:
  dbdata:
networks:
  my_network:
    driver: bridge
By the way, if you only connect to MySQL from your app service, you don't need to publish the mysql port. If the containers run in the same network, they can reach all ports inside that network.
If my example doesn't work, try this:
Run docker compose, then go into the app container using
docker container exec -it CONTAINER_NAME bash
Install ping in order to test the connection, then run ping mysql, for example as shown below.
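A sketch of those steps (the package name assumes a Debian-based image such as golang:latest):
$ docker container exec -it golang bash
# apt-get update && apt-get install -y iputils-ping
# ping mysql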

Storing MySQL data in an image file (formatted as ext4)

I'm trying to use Docker to containerize a MySQL (MariaDB actually) database. I figured out how to store MySQL data (/var/lib/mysql) in a volume mounted from a host directory.
However, because the underlying filesystem is different from host to host there are some inconsistencies, for example table names are case insensitive on NTFS (Windows). Also, it looks like if the database is created on a Linux host it doesn't work on a Windows host (haven't figured out why exactly).
Therefore, I want to store the data on a disk image and mount it inside the container, i.e. db-data.img formatted as ext4. But I'm facing a strange problem when mounting this image inside the container:
$ docker run -v $PWD:/outside --rm -it ubuntu /bin/bash
# dd if=/dev/zero of=/test.img bs=1M count=100
# mkfs.ext4 test.img
# mount -o loop -t ext4 test.img /mnt
mount: /mnt: mount failed: Operation not permitted.
Using another directory instead of /mnt didn't work either.
Why does it refuse to mount the img file?
I would suggest using docker-compose and just using a named volume declared in the docker-compose.yml configuration.
Something like this:
version: '3'
services:
  mysql:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD
      MYSQL_USER: $MYSQL_USER
      MYSQL_PASSWORD: $MYSQL_PASSWORD
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
The mysql-data volume is stored as a separate named volume, managed by Docker independently of the host operating system's filesystem layout. The difference from just mounting a host directory is that you're essentially mounting a volume container (which you could also do without docker-compose, but it's more work).
It will not work inside a docker container: by default, Docker blocks mounting filesystems (and access to loop devices). It should be easier to create the image file beforehand, mount it on the host, and hand the mount point to docker as a folder via -v, as sketched below.
P.S. Another option is to dump your database to SQL and restore it on Windows.
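A sketch of that host-side approach (the paths and size are illustrative, and the mount requires root on the host):
$ dd if=/dev/zero of=/srv/db-data.img bs=1M count=1024
$ mkfs.ext4 /srv/db-data.img
$ mkdir -p /srv/db-data
$ sudo mount -o loop /srv/db-data.img /srv/db-data
$ docker run -d -e MYSQL_ROOT_PASSWORD=example -v /srv/db-data:/var/lib/mysql mariadb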
I managed to solve this by using the privileged option in docker-compose.yml:
privileged: true
(or --privileged in the docker command)
Here is my final docker-compose.yml:
version: '3'
services:
  db:
    build: ./db
    image: my_db
    container_name: db
    privileged: true
    ports:
      - "3306:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_USER=${MYSQL_USER}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
      - MYSQL_DATABASE=${MYSQL_DATABASE}
    volumes:
      - ${MYSQL_DATA_IMG}:/data.img
    restart: always
Dockerfile:
FROM mariadb
COPY my-custom.cnf /etc/mysql/conf.d/custom.cnf
COPY run.sh /usr/local/bin/run-mariadb.sh
ENTRYPOINT ["run-mariadb.sh"]
and a custom entry point script that executes mount (run.sh):
#!/bin/sh
# For this mount command to work the DB container must be started
# with --privileged.
mount -o loop /data.img /var/lib/mysql
# Call the entry point script of MariaDB image.
exec /usr/local/bin/docker-entrypoint.sh mysqld
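Once the privileged container is up, you can check that the loop mount is in place (illustrative; db is the container_name from the compose file above):
$ docker-compose up -d
$ docker exec db df -h /var/lib/mysql   # should show a /dev/loop* device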
For storing database data, your docker-compose.yml will look like the following if you want to use a Dockerfile:
version: '3.1'
services:
  php:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to use a prebuilt image instead of a Dockerfile, your docker-compose.yml will look like this:
version: '3.1'
services:
  php:
    image: php:7.4-apache
    ports:
      - 80:80
    volumes:
      - ./src:/var/www/html/
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-data:/var/lib/mysql
  adminer:
    image: adminer
    restart: always
    ports:
      - 8080:8080
volumes:
  mysql-data:
If you want to store or preserve the data of MySQL, you must remember to add these two pieces to your docker-compose.yml:
volumes:
  - mysql-data:/var/lib/mysql
and
volumes:
  mysql-data:
After that, use this command:
docker-compose up -d
Now your data will be persistent and will not be deleted, even after using this command:
docker-compose down
Extra: if you want to delete all data, including the volumes, use
docker-compose down -v
You can also list your volumes with this command:
docker volume ls
DRIVER VOLUME NAME
local 35c819179d883cf8a4355ae2ce391844fcaa534cb71dc9a3fd5c6a4ed862b0d4
local 133db2cc48919575fc35457d104cb126b1e7eb3792b8e69249c1cfd20826aac4
local 483d7b8fe09d9e96b483295c6e7e4a9d58443b2321e0862818159ba8cf0e1d39
local 725aa19ad0e864688788576c5f46e1f62dfc8cdf154f243d68fa186da04bc5ec
local de265ce8fc271fc0ae49850650f9d3bf0492b6f58162698c26fce35694e6231c
local phphelloworld_mysql-data
