I'm having a problem trying to use docker secrets on a remote host that I created with docker-machine.
Below is my docker-compose.yml:
version: "3.5"
services:
mysql:
image: mysql:5.7
container_name: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
secrets:
- db_root_password
secrets:
db_root_password:
file: ./db_root_password.txt
This works well locally: I can run docker-compose up and access the mysql container.
docker-compose up
The root password has been applied correctly.
I'm now trying to run the container on the remote host that I created using docker-machine.
I first created the machine using docker-machine create (in this case, on the Exoscale cloud):
docker-machine create --driver exoscale ... MyMachine
Then I tried to deploy to that host using:
eval $(docker-machine env MyMachine)
docker-compose up
However, when I try to run on the remote host, I got the following error:
ERROR: for mysql Cannot create container for service mysql: invalid mount config for type "bind": bind source path does not exist: /Users/user/path/to/db_root_password.txt
So it's still trying to resolve the secret using the file's path on my local machine. How can I use this secret on the remote host?
Thanks in advance for your help
It looks like your docker-machine host can't find ./db_root_password.txt.
Can you try creating the file db_root_password.txt on the docker-machine host at the same path?
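One way to do that, as a sketch: docker-machine ships scp and ssh commands, so you can copy the local secret file to the same absolute path on the machine. The directory below is the placeholder path from the error message; substitute your real one:

docker-machine ssh MyMachine 'mkdir -p /Users/user/path/to'
docker-machine scp ./db_root_password.txt MyMachine:/Users/user/path/to/db_root_password.txt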
Related
I want to add my AWS credentials file to a docker container so it can access AWS APIs.
The credentials file exists in my host machine at /home/user/.aws/credentials
When running the container from the command line, I can do:
docker run --rm -d -v /home/user/.aws/:/.aws:ro \
  --env AWS_CREDENTIAL_PROFILES_FILE=/.aws/credentials proj:latest
In docker compose, I can mount the .aws directory with the volumes property like so:
services:
  proj:
    volumes:
      - aws_credentials:/.aws:ro
    environment:
      AWS_CREDENTIAL_PROFILES_FILE: /.aws/credentials

volumes:
  aws_credentials:
    external: true
My question is: how do I populate the external aws_credentials volume with data?
Approaches that do not work:
Use secrets instead of volumes. I am not using Docker swarm
Use config instead of volumes. I am not using Docker swarm
Use a bind mount instead of a volume. The docker-compose file gets checked into source control, and I do not want directories checked in.
services:
  proj:
    volumes:
      - /home/user/.aws/:/.aws:ro # <-- DO NOT WANT THIS IN SOURCE CONTROL
    environment:
      AWS_CREDENTIAL_PROFILES_FILE: /.aws/credentials
One answer I came up with is using environment variables like so:
services:
  proj:
    secrets:
      - aws_credentials
    environment:
      AWS_CREDENTIAL_PROFILES_FILE: /run/secrets/aws_credentials

secrets:
  aws_credentials:
    file: ${awscredfile}
and making sure awscredfile is either loaded in the environment of the parent process of docker compose, or passed in via an env file with the --env-file parameter to docker compose.
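A minimal usage sketch under that setup; the credentials path is the one from the question, and the env file name secrets.env is illustrative:

# option 1: export the variable in the shell that runs compose
export awscredfile=/home/user/.aws/credentials
docker-compose up -d

# option 2: keep the variable in an env file outside source control
echo 'awscredfile=/home/user/.aws/credentials' > secrets.env
docker-compose --env-file secrets.env up -d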
I have a MySQL docker image running in a docker container on an Ubuntu VPS. I bring up MySQL using the docker-compose up -d command via the following docker-compose.yml file:
version: "3"
services:
mysql_server:
image: mysql:8.0.21
restart: always
container_name: mysql_server
environment:
MYSQL_DATABASE: db_name
MYSQL_USER: db_username
MYSQL_PASSWORD: db_password
MYSQL_ROOT_PASSWORD: root_password
volumes:
- mysql_server_data:/var/lib/mysql
- /mysql/files/conf.d:/etc/mysql/conf.d
I am having some performance issues and would like to do the following in an attempt to improve performance.
I want the data in the mysql_server_data volume to be mounted on /mysql/data without losing any data, as this instance is running in production.
I also want to mount the MySQL config file on /mysql/files so I can change the instance configuration to increase performance.
Questions
How can I change the data location of the volume from mysql_server_data to /mysql/data?
Also, how can I mount MySQL's config file on /mysql/files/conf.d to allow me to update the settings?
I tried to mount the config file like this:
volumes:
  - /mysql/files/conf.d:/etc/mysql/conf.d
But that created a directory /mysql/files/conf.d with no config file.
To move the data:
Shut down the container with docker-compose down, then on the local file system copy the data from mysql_server_data to /mysql/data. Then change the compose file to reflect the new location. Finally, restart the container with docker-compose up.
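A sketch of that copy step, assuming a throwaway helper container is acceptable (the real volume name may carry your compose project name as a prefix; check with docker volume ls):

docker-compose down
# copy the volume contents into the new host directory, preserving ownership and permissions
docker run --rm \
  -v mysql_server_data:/from \
  -v /mysql/data:/to \
  alpine sh -c 'cp -a /from/. /to/'

Then point the service at the new location:

volumes:
  - /mysql/data:/var/lib/mysql
  - /mysql/files/conf.d:/etc/mysql/conf.d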
To mount the config files, as per the Docker Hub documentation for MySQL: if /my/custom/config-file.cnf is the path and name of your custom configuration file, then your volume mapping is:
/my/custom:/etc/mysql/conf.d
Note that mapping the volume into the container does not bring the data from your container to your local machine, but the other way around. So if you want the file to be in the container, you must first create it on your local machine.
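For example, a minimal sketch of creating a custom config file on the host before starting the container; the file name perf.cnf and the option values are illustrative, not tuning advice:

# on the host
mkdir -p /mysql/files/conf.d
cat > /mysql/files/conf.d/perf.cnf <<'EOF'
[mysqld]
innodb_buffer_pool_size = 1G
max_connections = 200
EOF

With /mysql/files/conf.d:/etc/mysql/conf.d in the volumes list, MySQL reads the file on the next docker-compose up.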
Use the trick suggested by Docker maintainer Sebastiaan van Stijn at https://github.com/moby/moby/issues/31417 to send the tar over stdout:
docker run --rm -v vol_name:/vol_path img_name sh -c 'tar -cOzf - /vol_path' > volume-export.tgz
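And a sketch of the matching import on the target host, under the same assumptions (vol_name, vol_path and img_name as above; tar stores the paths relative to /, so extraction is rooted there):

docker run --rm -i -v vol_name:/vol_path img_name sh -c 'tar -xzf - -C /' < volume-export.tgz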
I'm deploying a few Docker services via docker-compose with a remote context. I configured it to use SSH:
docker context create remote --docker "host=ssh://user@my.remote.host"
docker context use remote
On the remote host I have multiple configuration files which I want to mount into Docker. It works fine when I try it with the docker CLI:
docker run -v /home/user/run:/test -it alpine:3.11
# ls -la /test
-> shows remote files correctly here
But when I start it using docker-compose with this config file:
version: "3.3"
services:
nginx:
image: nginx:1.17.10-alpine
container_name: nginx
restart: unless-stopped
volumes:
- ${HOME}/run/nginx.conf:/etc/nginx/nginx.conf
ports:
- "80:80"
- "443:443"
It tries to mount local files instead of remote ones for some reason, and fails with this error:
ERROR: for nginx Cannot start service nginx: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\\"/home/local-user/run/nginx.conf\\\" to rootfs \\\"/hdd/docker/overlay2/c869ef9f2c983d33245fe1b4360eb602d718786ba7d0245d36c40385f7afde65/merged\\\" at \\\"/hdd/docker/overlay2/c869ef9f2c983d33245fe1b4360eb602d718786ba7d0245d36c40385f7afde65/merged/etc/nginx/nginx.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Is it possible to mount remote resources via docker-compose, as with the standard Docker CLI?
You need to explicitly set DOCKER_HOST to access your remote docker host from docker-compose.
From the Compose documentation:
Compose CLI environment variables
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, it defaults to unix:///var/run/docker.sock.
In your given case, docker context use remote sets the current context to remote only for your docker command. docker-compose still uses your default (local) context. For docker-compose to pick up the remote host, you must pass it via the DOCKER_HOST environment variable.
Example:
$ export DOCKER_HOST=ssh://user@my.remote.host
$ docker-compose up
Until now, I have used a local LAMP stack to develop my web projects and deploy them manually to the server. For the next project I want to use docker and docker-compose to create a MariaDB, an NGINX, and a project container for easy developing and deploying.
When developing I want my code directory on the host machine to be synchronised with the docker container. I know that could be achieved by running
docker run -dt --name containerName -v /path/on/host:/path/in/container
in the CLI, as stated here, but I want to do that within a docker-compose v2 file.
So far I have a docker-compose.yml file looking like this:
version: '2'
services:
  db:
    #[...]
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - myCodeVolume:/var/www

volumes:
  myCodeVolume:
How can I synchronise my /var/www directory in the container with my host machine (Ubuntu desktop, macOS or Windows machine)?
Thank you for your help.
It is pretty much the same: you do the host:container mapping directly under the services.myProj.volumes key in your compose file:
version: '2'
services:
  ...
  myProj:
    ...
    volumes:
      - /path/to/file/on/host:/var/www
Note that the top-level volumes key is removed.
This file could be translated into:
docker create --link db -v /path/to/file/on/host:/var/www myProj
When docker-compose finds the top-level volumes section, it runs docker volume create for the keys under it before creating any container. Those volumes can then be used to hold data you want to persist across containers.
So, taking your file as an example, it would translate into something like this:
docker volume create myCodeVolume
docker create --link db -v myCodeVolume:/var/www myProj
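Since the goal is to keep the project code in sync across Ubuntu, macOS and Windows hosts, a relative bind mount (resolved against the directory containing the compose file) is often the more portable choice. A sketch, assuming the code lives in ./myProj next to the compose file:

version: '2'
services:
  myProj:
    build: ./myProj
    image: myProj
    depends_on:
      - db
    volumes:
      - ./myProj:/var/www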
The usage scenario is like this:
I have an AWS EC2 instance already provisioned with docker-machine.
I want to use docker-compose to remotely start a few containers on that EC2 instance.
The compose file has a section like this:
nginx-proxy:
  image: jwilder/nginx-proxy
  container_name: nginx-proxy
  ports:
    - "8888:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /home/ubuntu/nginx.tmpl:/app/nginx.tmpl:ro
If I use docker-compose up -d locally, it'll work, since the "/home/ubuntu/nginx.tmpl" file is present on my local machine.
But if I try to use docker-compose to control the remote daemon in AWS like this:
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://x.y:2376"
export DOCKER_CERT_PATH="somedir"
docker-compose up -d
It'll fail, since the "/home/ubuntu/nginx.tmpl" file is not present on the remote machine.
I have tried creating such a file on the remote machine under the same directory; it works, but it feels wrong ...
What is a better way to mount a local file to a remote docker daemon?
Docker Machine has an scp command, so you can copy a local file to a remote machine and vice versa:
docker-machine scp ~/my/local/nginx.tmpl machine-name:/home/ubuntu/nginx.tmpl
Here are the reference docs.
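Putting it together for the scenario above, as a sketch (the machine name aws-machine is illustrative):

# copy the local template to the same path on the remote machine
docker-machine scp /home/ubuntu/nginx.tmpl aws-machine:/home/ubuntu/nginx.tmpl
# point the Docker client at the remote daemon and deploy
eval $(docker-machine env aws-machine)
docker-compose up -d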