Docker-compose, remote context, path relative to local machine instead of remote - docker

I have a very simple docker-compose.yml:

version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ./config:/root/config
I'm using a remote staging server accessed via ssh:
docker context create staging --docker "host=ssh://ubuntu@staging.example.com"
docker context use staging
However, I see unexpected results with my volume after I docker-compose up:
docker-compose --context staging up -d
docker inspect containername
...
"Mounts": [
    {
        "Type": "bind",
        "Source": "/Users/crummy/code/.../config",
        "Destination": "/root/config",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
...
It seems the expansion of ./config to a full path happens on the machine docker-compose is running on, not on the machine Docker is running on.
I can fix this problem by hardcoding the entire path: /home/ubuntu/config:/root/config. But this makes my docker-compose file a little less flexible than it could be. Is there a way to get the dot expansion to occur on the remote machine?

No, the docs say that:
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
I believe that happens for two reasons:
There's no easy, objective way for docker-compose to find out how to expand . in this context, as there's no way to know what . would mean to the SSH client (the home directory? the same folder?).
Even though the docker CLI is using a different context, the expansion is done by the docker-compose tool, which is unaware of the context switch.
Even using environment variables might pose a problem, since the variable expansion would also happen on the machine where you run the docker-compose command.
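One possible workaround (a sketch, not from the original answer; REMOTE_CONFIG_DIR is a made-up variable name) is to parameterize the absolute remote path, so the expansion that happens on your local machine still produces the path that exists on the server:

```yaml
# docker-compose.yml — REMOTE_CONFIG_DIR is a hypothetical variable you set yourself
version: '2.4'
services:
  containername:
    image: ${DOCKER_IMAGE}
    volumes:
      - ${REMOTE_CONFIG_DIR}:/root/config
```

You would then deploy with something like REMOTE_CONFIG_DIR=/home/ubuntu/config docker-compose --context staging up -d. This is still the hardcoded-path approach, just moved out of the file itself.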

Related

docker run --volume mapping ignored

I successfully run PostgreSQL thus:
$ docker run --name postgresql --env POSTGRES_PASSWORD=password --publish 6000:5432 --volume /home/russ/dev/pg:/var/lib/postgresql/data postgres
only to find that:
$ docker inspect postgresql
...
"Mounts": [
    {
        "Type": "volume",
        "Name": "06d27a1fe489cedfa47d6a3e801cb286494958e1c3a17f044205629cc7070952",
        "Source": "/var/lib/docker/volumes/06d27a1fe489cedfa47d6a3e801cb286494958e1c3a17f044205629cc7070952/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
],
...
Docker's usual, random filesystem backing is used instead of the hard-coded path I tried to map. Why is this or what should I have done instead?
If you look at the Postgres Dockerfile, you'll see a VOLUME [/var/lib/postgresql/data].
This command creates the default, "anonymous" volume you're seeing and takes precedence over the --volume argument you provide on the CLI (as well as over any commands in "child" Dockerfiles or configuration in docker-compose files).
This extremely annoying quirk of Docker applies to other commands as well and is currently being debated in https://github.com/moby/moby/issues/3465. This comment describes a similar problem with mysql images.
Unfortunately, there isn't an easy workaround, but here are some common methods I've seen used:
Reconfigure Postgres to work out of a different directory and mount to that instead
Have another container mount the same anonymous volume as well as a directory on your machine, and have it copy the data over periodically
If you just want the data to persist between container starts, I would recommend keeping it in the anonymous volume to keep things simple.
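The first option can be sketched as follows (hedged: PGDATA is the postgres image's documented variable for relocating the data directory; the paths here are illustrative, not from the question):

```shell
# Move the data directory to a path NOT declared as a VOLUME in the image,
# then bind-mount the host directory at that path instead.
docker run --name postgresql \
  --env POSTGRES_PASSWORD=password \
  --env PGDATA=/data/pg \
  --volume /home/russ/dev/pg:/data/pg \
  --publish 6000:5432 \
  postgres
```

Because /data/pg is outside the image's declared volume path, the bind mount is used rather than an anonymous volume.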

Cassandra fails to start on Docker Toolbox external volume

The issue is related to running "library/cassandra" version 3.11.2 on Docker Toolbox (docker-machine version 0.11.0) on Windows 7.
These are versions that I have to use due to constraints at my place of work, and I can confirm that I have had no problems doing exactly the same thing with Docker on Mac OS X at home.
Cassandra works fine if I don't map the data directory to an external volume in the Compose file, but has the downside of causing the Virtualbox VM to fill up on a regular basis. It's fine for development, but a pain for our other systems such as Test and won't be of any use in Production.
I am aware of the constraints imposed by Docker Toolbox for Windows, especially around the use of C:\Users as mount points.
My docker-compose file looks as follows:
version: '3'
services:
  cassandra:
    privileged: true
    image: cassandra
    ports:
      - "7000:7000"
      - "7001:7001"
      - "7199:7199"
      - "9042:9042"
      - "9160:9160"
    volumes:
      - C:\Users\davis_s\docker-volumes\cassandra:/var/lib/cassandra
    container_name: cassandra
    environment:
      - CASSANDRA_CLUSTER_NAME=mycluster
      - CASSANDRA_ENDPOINT_SNITCH=GossipingPropertyFileSnitch
      - CASSANDRA_DC=datacenter1
    networks:
      - smartdeploy_evo
When I map the data volume for Cassandra to a folder outside of the VM it stops when it gets to the following line:
cassandra | INFO [main] 2018-07-27 10:49:00,044 ColumnFamilyStore.java:411 - Initializing system.IndexInfo
It does manage to create the following folders under this location (which was totally empty prior to startup):
commitlog
data
hints
saved_caches
with other subdirectories under these ones, such as data/system/Index-info.. etc.
I have left it over the weekend and it never gets past this line.
The output for "docker inspect" related to the mount point looks as follows:
"Mounts": [
    {
        "Type": "bind",
        "Source": "/c/Users/davis_s/docker-volumes/cassandra",
        "Destination": "/var/lib/cassandra",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
I have tried the following to overcome this:
Given full (recursive) Windows permissions to "Everyone" on C:\Users\davis_s
Mapped a different Shared Folder in VirtualBox, again with full permissions for "Everyone"
Tried running the Cassandra container with privileged: true
I've seen a few other posts on SO related to Cassandra stopping at a similar point, but these look to have been solved by clearing the folder and restarting, which isn't relevant in this case (as it started empty anyway!).
Thanks in advance...

Not able to run service using docker stack in docker swarm

Here is the docker-compose file that I once used to successfully run docker container using docker-compose.
Now trying to deploy using the docker stack. My manager and workers node are ready.
docker-compose.yml
version: '3'
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    ports:
      - '9090:9090'
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
Command used:
docker stack deploy -c docker-compose.yml monitoring
Trying to run it on the manager node. Why is the service not running? Checking whether the service is running with docker service ls shows 0/1, as you can see in the figure below.
When trying to check the logs with docker service logs <servicename>, nothing gets loaded.
Where exactly am I missing things? My ultimate goal is to run a full Docker monitoring stack with cAdvisor, node-exporter, and everything else required. I tried this: https://github.com/stefanprodan/swarmprom/blob/master/docker-compose.yml
I have figured out that the problem is in the replicated mode; it runs well in global mode, though.
What is wrong with the replicated mode? I don't think I have a syntax error here. How can I make it run in replicated mode and on the manager node only?
If everything works well with docker-compose up but you find issues with docker stack deploy, this is how I fixed it.
I am using Docker on a Windows machine, and Docker runs inside a VirtualBox VM. You will need to forward ports to see the containers from the host machine.
The problem is related to the bind mount of the volumes.
See the paths in the two scenarios below when I run docker inspect:
Docker-compose up:

"Mounts": [
    {
        "Type": "bind",
        "Source": "/c/Users/Bardav/Documents/Large-Scale-Software Testing/PromptDocker/PromptDocker/app_users",
        "Destination": "/app",
        "Mode": "rw",
        "RW": true,
        "Propagation": "rprivate"
    }
],
Docker stack deploy -c docker-compose.yml store:
"Mounts": [
    {
        "Type": "bind",
        "Source": "C:\\Users\\Bardav\\Documents\\Large-Scale-Software Testing\\PromptDocker\\PromptDocker\\app_viewer",
        "Target": "/app/"
    }
],
SOLUTION:
Update the path in the docker-compose.yml and it will work. For instance:
RELATIVE PATH NOT WORKING

reg_users:
  image: prompt_user_app
  build: ./app_users/
  volumes:
    - ./app_users/:/app/

ABSOLUTE PATH WORKS FINE

reg_users:
  image: prompt_user_app
  build: ./app_users/
  volumes:
    - "/c/Users/Bardav/Documents/Large-Scale-Software Testing/PromptDocker/PromptDocker/app_users:/app/"
This might be related to the placement constraint on the Swarm master/manager node. I faced a similar problem while trying to schedule the visualizer or Portainer container on the master node using Azure Container Service. You can check the status of the master node; if it is in the Paused state, you can make it active.
I shared my experience at https://www.handsonarchitect.com/2017/11/visualize-docker-swarm-in-azure.html
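Checking a manager's availability and reactivating it can be done with the standard docker node commands (the node name here is illustrative):

```shell
# Inspect the AVAILABILITY column of each node in the swarm
docker node ls

# Reactivate a node that shows as paused or drained (replace manager1)
docker node update --availability active manager1
```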

Docker: where is docker volume located for this compose file

I was setting up some materials for a training when I came across this sample compose file:
https://github.com/dockersamples/example-voting-app/blob/master/docker-compose.yml
and I couldn't find out how this volume is mounted, on lines 48 and 49 of the file:

volumes:
  db-data:

Can someone explain to me where this volume is on the host? I couldn't find it, and I wouldn't like to leave any PostgreSQL data dangling around after the containers are gone. A similar thing happens with the networks:

networks:
  front-tier:
  back-tier:

Why does docker-compose accept empty network definitions like this?
Finding the volumes
Volumes like this are internal to Docker and stored in the Docker store (which is usually all under /var/lib/docker). You can get a list of volumes:
$ docker volume ls
DRIVER VOLUME NAME
local 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
local 2f13b0cec834a0250845b9dcb2bce548f7c7f35ed9cdaa7d5990bf896e952d02
local a3d54ec4582c3c7ad5a6172e1d4eed38cfb3e7d97df6d524a3edd544dc455917
local e6c389d80768356cdefd6c04f6b384057e9fe2835d6e1d3792691b887d767724
You can find out exactly where the volume is stored on your system if you want to:
$ docker inspect 1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465/_data",
        "Name": "1c59d5b7e90e9173ca30a7fcb6b9183c3f5a37bd2505ca78ad77cf4062bd0465",
        "Options": {},
        "Scope": "local"
    }
]
Cleaning up unused volumes
As far as just ensuring that things are not left dangling, you can use the prune commands, in this case docker volume prune. That will give you this output, and you choose whether to continue pruning or not.
$ docker volume prune
WARNING! This will remove all volumes not used by at least one container.
Are you sure you want to continue? [y/N]
"Empty" definitions in docker-compose.yml
Compose accepts these "empty" definitions for things like volumes and networks when you don't need anything beyond declaring that the volume or network should exist. That is, if you want it created but are okay with the default settings, there is no particular reason to specify any parameters.
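For illustration (a sketch of the equivalence, not taken from the voting-app file), the empty form simply takes the defaults:

```yaml
# An empty definition:
volumes:
  db-data:
# is equivalent to spelling out the default driver explicitly:
#   db-data:
#     driver: local
```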
First method
List your volumes:
docker volume ls
Then run this command:
sudo docker inspect <volume-name> | grep Mountpoint | awk '{ print $2 }'
Second method
First run docker ps to get your container ID, then run:
docker inspect --format="{{.Mounts}}" $containerID
This will print the volume path.

Dockerfile VOLUME not working

I am really stuck on the usage of Docker VOLUMEs. I have a plain Dockerfile:

FROM ubuntu:latest
VOLUME /foo/bar
RUN touch /foo/bar/tmp.txt

I ran $ docker build -f dockerfile -t test . and it was successful. After this, I ran a shell interactively in a container created from the test image. That is, I ran $ docker run -it test
Observations:
/foo/bar is created but empty.
docker inspect test mounting info:
"Volumes": {
"/foo/bar": {}
}
It seems that it is not mounting at all. The task seems pretty straightforward, but am I doing something wrong?
EDIT : I am looking to persist the data that is created inside this mounted volume directory.
The VOLUME instruction must be placed after the RUN.
As stated in https://docs.docker.com/engine/reference/builder/#volume :
Note: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
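A minimal corrected version of the Dockerfile from the question (the same two instructions, just swapped so the created file is not discarded):

```dockerfile
FROM ubuntu:latest
# Create the data first; anything written to this path after the VOLUME
# declaration would be discarded at build time.
RUN mkdir -p /foo/bar && touch /foo/bar/tmp.txt
VOLUME /foo/bar
```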
If you want to know the source of the volume created by the docker run command:
docker inspect --format='{{json .Mounts}}' yourcontainer
will give output like this:
[{
    "Name": "4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628",
    "Source": "/var/lib/docker/volumes/4c6588293d9ced49d60366845fdbf44fac20721373a50a1b10299910056b2628/_data",
    "Destination": "/foo/bar",
    "Driver": "local",
    "Mode": "",
    "RW": true,
    "Propagation": ""
}]
Source contains the path you are looking for.
