Mount volume for remote Docker context via SSH - docker

I'm deploying a few Docker services via docker-compose with a remote context. I configured it to use SSH:
docker context create remote --docker "host=ssh://user@my.remote.host"
docker context use remote
On the remote host I have multiple configuration files that I want to mount into the containers. It works fine when I try it with the docker CLI:
docker run -v /home/user/run:/test -it alpine:3.11
# ls -la /test
-> shows remote files correctly here
But when I start it using docker-compose with this config file:
version: "3.3"
services:
nginx:
image: nginx:1.17.10-alpine
container_name: nginx
restart: unless-stopped
volumes:
- ${HOME}/run/nginx.conf:/etc/nginx/nginx.conf
ports:
- "80:80"
- "443:443"
It tries to mount local files instead of the remote ones for some reason and fails with this error:
ERROR: for nginx Cannot start service nginx: OCI runtime create failed: container_linux.go:296: starting container process caused "process_linux.go:398: container init caused \"rootfs_linux.go:58: mounting \\\"/home/local-user/run/nginx.conf\\\" to rootfs \\\"/hdd/docker/overlay2/c869ef9f2c983d33245fe1b4360eb602d718786ba7d0245d36c40385f7afde65/merged\\\" at \\\"/hdd/docker/overlay2/c869ef9f2c983d33245fe1b4360eb602d718786ba7d0245d36c40385f7afde65/merged/etc/nginx/nginx.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Is it possible to mount remote resources via docker-compose similar to standard Docker CLI?

You need to explicitly set DOCKER_HOST to access your remote docker host from docker-compose.
From the Compose documentation:
Compose CLI environment variables
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.
In your case, docker context use remote sets the current context to remote only for your docker command. docker-compose still uses your default (local) context. For docker-compose to pick it up, you must pass it via the DOCKER_HOST environment variable.
Example:
$ export DOCKER_HOST=ssh://user@my.remote.host
$ docker-compose up
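As a side note, newer docker-compose releases can also use contexts directly; context support was added around release 1.26.0 (treat the exact version as an assumption and check docker-compose --help for the --context flag):
$ docker-compose --context remote up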

Related

Restart entire docker compose stack from one of the containers

Is there any proper way of restarting an entire docker compose stack from within one of its containers?
One workaround involves mounting the docker socket:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
and then using the Docker Engine SDKs (https://docs.docker.com/engine/api/sdk/examples/).
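As a minimal sketch of that workaround, assuming curl is available inside the container and the target container is named nginx (both assumptions), the Engine API's restart endpoint can be called over the mounted socket:
# Restart a container through the mounted Docker socket
# (POST /containers/<name>/restart in the Engine API)
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/nginx/restart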
However, this solution only allows restarting the containers themselves. There seems to be no way to send Compose commands, like docker compose restart, docker compose up, etc.
The only solution I've found to send docker compose commands is to open a terminal on the host from the container using ssh, like this: access host's ssh tunnel from docker container
This is partly related to How to run shell script on host from docker container?, but I'm actually looking for a more specific solution to only send docker compose commands.
I tried with this simple docker-compose.yml file
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - 3000:80
Then I started a docker container using
docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd):/work docker
Then, inside the container, I did
cd /work
docker-compose up -d
and it started the container up on the host.
Please note that you have an error in your socket mapping. It needs to be
- /var/run/docker.sock:/var/run/docker.sock
(you have a period instead of a slash at one point)
As mentioned by @BMitch in the comments, the compose project name was the reason why I wasn't able to run docker compose commands inside the running container.
By default the compose project name is set to the directory name, so if the docker-compose.yml is run from a host directory named folder1, then the commands inside the container should be run as:
docker-compose -p folder1 ...
So now, for example, restarting the stack works:
docker-compose -p folder1 restart
Just as a reference, a fixed project name for your Compose project can be set using name: ... as a top-level attribute of the .yml file, but this requires Docker Compose v2.3.3+: Set $PROJECT_NAME in docker-compose file
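A minimal sketch of that top-level attribute (the project name folder1 is carried over from the example above):
# docker-compose.yml, requires Docker Compose v2.3.3+
name: folder1
services:
  nginx:
    image: nginx
With a fixed name in the file, commands run inside the container no longer need the -p flag.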

Cannot mount Config directory in Nextcloud Docker container

I'm trying to create a custom Nextcloud config locally, then have the ability to mount it to the appropriate folder using volumes as defined here: https://github.com/nextcloud/docker#persistent-data. All the volume mounts work except for the config mount... Why is that being treated differently here?
Steps to reproduce
0) Enter a new/empty directory (containing no sub-directories or additional files).
1) Create a docker-compose.yml file containing only the below contents:
version: "3.4"
services:
nextcloud:
image: nextcloud:latest
volumes:
- "./nextcloud/custom_apps:/var/www/html/custom_apps"
- "./nextcloud/config:/var/www/html/config"
- "/data/nextcloud:/var/www/html/data"
- "./themes:/var/www/html/themes"
2) docker-compose up -d
Expected behavior
Work. I should be able to see the /var/www/html/config contents locally at ./nextcloud/config, and then insert a custom config.php, which is then updated within the container.
Actual behavior
An ERROR when bringing up the container, specific to the config directory. If I remove the ./nextcloud/config:/var/www/html/config volume mount above, then the container will start without error.
ERROR message
ERROR: for nextcloud Cannot start service nextcloud: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\"/home/user/Nextcloud-test/nextcloud/config\\" to rootfs \\"/var/lib/docker/overlay2/41b567141e23b16cf5e4f99f4c33703fc9a533aa5a4bef68fbba70a74842ca88/merged\\" at \\"/var/lib/docker/overlay2/41b567141e23b16cf5e4f99f4c33703fc9a533aa5a4bef68fbba70a74842ca88/merged/var/www/html/config\\" caused \\"not a directory\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
ERROR: Encountered errors while bringing up the project.
Server configuration
Operating system:
Operating System: Debian GNU/Linux 10 (buster)
Kernel: Linux 4.19.0-8-cloud-amd64
Architecture: x86-64
Image
nextcloud:latest (apache)
I could not reproduce using your steps (Ubuntu 18.04).
From here, running docker-compose up -d then docker-compose logs, I see no errors. Then, when running docker inspect on the container, I see the following:
...
"Volumes": {
    "/var/www/html": {},
    "/var/www/html/config": {},
    "/var/www/html/custom_apps": {},
    "/var/www/html/data": {},
    "/var/www/html/themes": {}
},
...
This suggests the mount has worked without problems.
What I suggest you do:
Check that the directory ./nextcloud/config exists and is not a file (see the check sketched after this list)
Check your Docker and Docker Compose installation is up-to-date
Try running the Docker container with docker run -it -v "$(pwd)/nextcloud/config":/var/www/html/config <containername> /bin/bash to explore whether the mount works manually (note that docker run needs an absolute host path, unlike Compose)
Try to do the same on a minimal example such as the Getting started example
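On the first suggestion, the "not a directory" error usually means one side of the mount already exists as a regular file where a directory is expected. A short check on the host, sketched under the assumption that any existing ./nextcloud/config file is a stray artifact that may be deleted:
# Does ./nextcloud/config exist as a file instead of a directory?
[ -f ./nextcloud/config ] && echo "config is a file, not a directory"
# If so, remove the stray file and recreate the path as a directory
rm -f ./nextcloud/config
mkdir -p ./nextcloud/config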

Volume data does not fill when running a bamboo container on the server

I am trying to run Bamboo on a server using Docker containers. When I run it on my local machine it works normally and the volume saves data successfully. But when I run the same Docker Compose file on the server, the volume does not save my data.
docker-compose.yml
version: '3.2'
services:
  bamboo:
    container_name: bamboo-server_test
    image: atlassian/bamboo-server
    volumes:
      - ./volumes/bamboo_test_vol:/var/atlassian/application-data/bamboo
    ports:
      - 8085:8085
volumes:
  bamboo_test_vol:
Running this compose file on the local machine:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
916c98ca1a9d atlassian/bamboo-server "/entrypoint.sh" 24 minutes ago Up 24 minutes 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/bamboo_test_vol/
$ ls
bamboo.cfg.xml logs
localhost:8085
Running the same compose file on the server:
$ ssh <name>@<ip_address>
password for <name>:
$ docker-compose up -d
Creating network "test_default" with the default driver
Creating volume "test_bamboo_test_vol" with default driver
Creating bamboo-server_test ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
38b77e1b736f atlassian/bamboo-server "/entrypoint.sh" 12 seconds ago Up 11 seconds 0.0.0.0:8085->8085/tcp, 54663/tcp bamboo-server_test
$ ls
docker-compose.yml volumes
$ cd volumes/
$ cd bamboo_test_vol/
$ ls
$ # VOLUME PATH IS EMPTY
server_ip:8085
I didn't have this problem when I tried the same process with jira-software. Why doesn't it work for the Bamboo server even though I use the exact same compose file?
I had the same problem when I wanted to upgrade my Bamboo server instance with my mounted host volume for the bamboo-home directory.
The following was in my docker-compose file:
version: '2.2'
services:
  bamboo-server:
    image: atlassian/bamboo-server:${BAMBOO_VERSION}
    container_name: bamboo-server
    environment:
      TZ: 'Europe/Berlin'
    restart: always
    init: true
    volumes:
      - ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
      - "8085:8085"
      - "54663:54663"
When I started it with docker-compose up -d bamboo-server, the container never picked up the files from the host system. So I first tried it without docker-compose, following Atlassian's Bamboo instructions, with the following command:
docker run -v ./bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
The following error message was displayed:
docker: Error response from daemon: create ./bamboo/bamboo-server/data: "./bamboo/bamboo-server/data" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
So I did what the error message suggested and used the absolute path:
docker run -v /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}
After the successful start, I connected to the Docker container and all files were in the Docker directory as usual.
I transferred the whole thing to the docker-compose file, using the absolute path in the volumes section. After that it worked with the docker-compose file as well.
My docker-compose file then looked like this:
[...]
    init: true
    volumes:
      - /var/project/bamboo/bamboo-server/data:/var/atlassian/application-data/bamboo
    ports:
[...]
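As a side note, a way to avoid hard-coding the absolute path in the docker run variant is to expand it from the current directory; this sketch assumes you launch from the project root (/var/project in my case):
docker run -v "$(pwd)/bamboo/bamboo-server/data":/var/atlassian/application-data/bamboo --name="bamboo-server" --init -d -p 54663:54663 -p 8085:8085 atlassian/bamboo-server:${BAMBOO_VERSION}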
Setting up a containerized Bamboo Server is not supported, for these reasons:
Repository-stored Specs (RSS) are no longer processed in Docker by default. Running RSS in Docker was not possible because:
there is no Docker capability added on the Bamboo server by default,
the setup would require running Docker in Docker.

Use docker secrets on a docker-machine

I'm having a problem trying to use docker secrets on a remote host that I created with docker-machine.
Below is my docker-compose.yml:
version: "3.5"
services:
mysql:
image: mysql:5.7
container_name: mysql
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_root_password
secrets:
- db_root_password
secrets:
db_root_password:
file: ./db_root_password.txt
This works well locally; I can run docker-compose up and access the mysql container:
docker-compose up
The root password is applied correctly.
I'm now trying to run the container on the remote host that I created using docker-machine.
I first created the machine using docker-machine create (in this case, on exoscale cloud)
docker-machine create --driver exoscale ... MyMachine
Then I tried to deploy to the host using:
eval $(docker-machine env MyMachine)
docker-compose up
However, when I try to run it on the remote host, I get the following error:
ERROR: for mysql Cannot create container for service mysql: invalid mount config for type "bind": bind source path does not exist: /Users/user/path/to/db_root_password.txt
So it's still trying to load the secret from the local file path on my local machine. How can I use this secret on the remote host?
Thanks in advance for your help
It looks like your docker-machine host can't find ./db_root_password.txt.
Can you try creating the file db_root_password.txt on the docker-machine host at the same path?
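A minimal sketch of one way to do that, using docker-machine's ssh and scp subcommands and the exact path from the error message (whether your remote user may create that path is an assumption):
# Create the directory on the remote machine, then copy the secret file there
docker-machine ssh MyMachine 'mkdir -p /Users/user/path/to'
docker-machine scp ./db_root_password.txt MyMachine:/Users/user/path/to/db_root_password.txt
After that, docker-compose up should find the file at the path the remote daemon expects.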

I want to run a docker-compose.yml on a remote docker daemon, what about volumes?

I want to run docker-compose up on a remote docker daemon:
DOCKER_HOST=tcp://...:2375 docker-compose up
In docker-compose.yml, I have a volume binding to a local file:
version: "3"
services:
nginx:
image: nginx:latest
ports:
- 80:80
volumes:
- ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
This won't work, as the remote docker daemon will be unable to locate ./etc/nginx/nginx.conf.
What is the best approach to handle this?
Extend the existing Docker image by creating your own image.
Ref: How to extend existing docker container?
Copy the relevant files (from the Docker build context) into the appropriate directory; they will then be part of the image and hence available to the remote Docker daemon as well.
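A minimal sketch of that approach (the derived image and file layout are illustrative, not taken from your project):
# Dockerfile
FROM nginx:latest
COPY etc/nginx/nginx.conf /etc/nginx/nginx.conf
and in docker-compose.yml, build instead of bind-mounting:
version: "3"
services:
  nginx:
    build: .
    ports:
      - 80:80
Because docker build ships the build context to the daemon, the config file travels with the build instead of having to exist on the remote host's filesystem.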
