How do you mount volumes in sibling containers started from another container? - docker

I am using docker for my dev environment: I have a dev image and I mount my source files as a volume.
But then I wanted to do the same on my continuous integration server (GitLab CI). I carefully read Docker's docs and their reference to https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/, but the suggested solution of bind-mounting Docker's Unix socket into a docker client container makes it impossible to mount volumes from that container.
So basically my question is: how would you solve this, given that I am inside a Docker CI server/runner? I need to run the following commands from a container (a GitLab runner):
$ git clone ... my-sources && cd my-sources
$ docker run -v "$PWD":"$PWD" -w "$PWD" my-dev-image gcc main.c
Because obviously, the volume path is resolved on Docker's "native" host and not inside the current container.

The way I've solved this is by making sure that build paths are the SAME on the host and in the CI container, e.g. by starting the container with -v /home/jenkins:/home/jenkins. This way the volume is mounted from the host into the CI container. You can change this to whatever directory you like; just make sure the jenkins user's home is set to that directory.
Note: I'm using Jenkins as an example, but any CI will work on the same principle.
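As a sketch of this approach (image names and paths here are illustrative placeholders, not taken from any specific setup), the runner container is started so that the build directory has the same absolute path inside it as on the host:

```shell
# On the Docker host: start the CI/runner container, sharing the Docker
# socket and mounting the workspace at the SAME absolute path.
docker run -d --name ci-runner \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/jenkins:/home/jenkins \
  my-ci-image

# Inside the CI container: because /home/jenkins/... is the same path on
# the host, the daemon resolves the bind mount to the right directory.
cd /home/jenkins/workspace/my-sources
docker run -v "$PWD":"$PWD" -w "$PWD" my-dev-image gcc main.c
```

The key point is that $PWD, expanded inside the CI container, must name a directory that also exists at that exact path on the daemon's host.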

Make sure that your CI server is started with a volume (e.g. docker run --name gitlabci -v /src gitlabci …), then, when you start the other containers, start them with docker run --volumes-from gitlabci …. That way, /src will also be available in those containers, and whatever you put in this directory (from the CI server) will be available in the other containers.
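Spelled out, the --volumes-from approach might look like this (image and repository names are illustrative, and this assumes git is available in the CI image):

```shell
# Start the CI server container with an anonymous volume at /src.
docker run -d --name gitlabci -v /src gitlabci-image

# Clone the sources into the shared volume from within the CI container.
docker exec gitlabci sh -c 'git clone https://example.com/my-sources.git /src/my-sources'

# Any container started with --volumes-from sees the same /src volume,
# regardless of where the daemon resolves host paths.
docker run --volumes-from gitlabci -w /src/my-sources my-dev-image gcc main.c
```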

Related

Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here) I have successfully customized it to suit our needs. This is running very well and successfully running the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However I am having an issue running a small utility container inside the agent with a bind mount volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So in theory, this runs npm install against the local directory that is mounted into the container (npm is the command in the ENTRYPOINT, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, it doesn't seem to mount the directory properly: if I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
I have inspected the container, and it confirms there is a volume created of type bind and I can also see it when I list out the docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help.

file mounts as directory instead of file in docker-in-docker (dind)

When the command docker run --rm -v $(pwd)/api_tests.conf:/usr/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation is run, the api_tests.conf file is mounted in the container as a directory instead of a file.
I went through Single file volume mounted as directory in Docker and a few other similar questions on Stack Overflow, but was unable to find the right solution.
I have tested the same code on my local Mac laptop, and there the file from the local machine mounts into the container as a file, but locally I don't have a docker-in-docker setup.
I have Dockerfile as below.
FROM alpine:latest
MAINTAINER Basavaraj
RUN apk add --no-cache python3 \
&& pip3 install --upgrade pip
WORKDIR /api-automation
COPY . /api-automation
RUN pip --no-cache-dir install .
ENTRYPOINT "some command"
and I have the build.sh file as below,
#!/bin/bash
docker pull local.artifactory.swg-devops.com/api-automation
# creating file with name "api_tests.conf" by adding configuration data
echo "configuration data" > api_tests.conf
# it displays all the configuration data written to api_tests.conf
cat $(pwd)/api_tests.conf
docker run --rm -v $(pwd)/api_tests.conf:/usr/.aiops/config/api_tests.conf --name api-automation local.artifactory.swg-devops.com/api-automation
Now we are calling build.sh file from gocd environment.
It looks like the docker run command is executed inside docker-in-docker (dind); as a result, the client asks a daemon on a different host to spawn the container, and the file (api_tests.conf) being created does not exist on that host.
Because of this, the file (api_tests.conf) is mounted as an empty directory in the container.
What are the different solutions for mounting the file in a docker-in-docker environment?
Can we share the file (api_tests.conf) we created with the host where the docker container is spawned?
I think the problem you're having is most likely because of using dind, although it's worth pointing out that this issue would also occur if you had mounted the docker socket into another container as well.
This is because when you ask the Docker daemon to mount a directory, your docker client (CLI) doesn't actually mount the file/directory itself; it just passes a request to the Docker daemon to mount this location from the daemon's local file system. And this is where the problem is: if you're using dind or sharing docker.sock, that file system usually isn't where you think it is, and hence the file/directory doesn't exist from the daemon's point of view.
So in your case, $(pwd) is possibly being expanded to a path that doesn't exist on the daemon's host, and the daemon is then creating and mounting the directory portion, since the file doesn't exist there. That's my guess at least, as I've seen similar behaviour before when using dind/docker.sock sharing in other setups.
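You can reproduce this directory-instead-of-file behaviour directly, even without dind: if the source path of a bind mount does not exist on the daemon's host, the daemon creates it as an empty directory and mounts that (the path below is a made-up example):

```shell
# /does/not/exist/api_tests.conf is absent on the daemon's host, so Docker
# creates it as an empty directory and mounts that, even though a file was
# intended. ls -ld inside the container will show a directory, not a file.
docker run --rm \
  -v /does/not/exist/api_tests.conf:/usr/config/api_tests.conf \
  alpine ls -ld /usr/config/api_tests.conf
```

In a dind setup, $(pwd)/api_tests.conf exists inside your build container but not on the daemon's host, which triggers exactly this behaviour.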
One crazy solution would be to bind-mount the files you want into the dind container at startup, and then bind-mount those files from within the dind container into any subsequent containers. Bear in mind, however, that this is precisely the kind of file-system usage that's warned against in the dind documentation because of instability and potential data loss, so be warned.
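A sketch of that workaround (image names and the /work path are illustrative, and the stability caveats above apply):

```shell
# Start the dind daemon with the config file bind-mounted in from the
# outer host, so the path exists on the *inner* daemon's filesystem.
docker run -d --name dind --privileged \
  -v "$(pwd)/api_tests.conf:/work/api_tests.conf" \
  docker:dind

# Containers created via the inner daemon can now bind-mount the file,
# because /work/api_tests.conf is a real file from its point of view.
docker exec dind docker run --rm \
  -v /work/api_tests.conf:/usr/config/api_tests.conf \
  my-image
```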
Hope this helps.

How to specify volume for docker container in CircleCI configuration?

I did not manage to find out how to mount a volume for a docker image in config.yml when integrating with CircleCI.
The official documentation lists variables for container usage, entry point, command, etc., but nothing about volume mounting.
The scenario is this: building my project requires two docker containers, the main container and another container for service foo. To use service foo, I need to expose some artifacts generated in earlier steps to the foo container and then carry out the next steps.
Anyone has idea whether I can do that?
As taken from CircleCI documentation:
Mounting Folders
It’s not possible to mount a folder from your job space into a container in Remote Docker (and vice versa). But you can use the docker cp command to transfer files between these two environments. For example, say you want to start a container in Remote Docker and you want to use a config file from your source code for it:
- run: |
# creating dummy container which will hold a volume with config
docker create -v /cfg --name configs alpine:3.4 /bin/true
# copying config file into this volume
docker cp path/in/your/source/code/app_config.yml configs:/cfg
# starting application container using this volume
docker run --volumes-from configs app-image:1.2.3
In the same way, if your application produces some artifacts that need to be stored, you can copy them from Remote Docker:
- run: |
# starting container with our application
# make sure you're not using `--rm` option otherwise container will be killed after finish
docker run --name app app-image:1.2.3
- run: |
# once application container finishes we can copy artifacts directly from it
docker cp app:/output /path/in/your/job/space

Copy files from within a docker container to local machine

Is it possible to copy files to a local machine by running a command inside of a docker container? I am aware of docker cp <containerId>:container/file/path /host/file/path. However, my understanding is that this has to be run from outside of the docker container. Is there a way to do it (or something similar) from within?
For some context, I have a python script that is run inside of a docker container with something like the following command: docker run -ti --rm --net=host buildServer:5000/myProgram /myProgram.py -h. I would like to retrieve the files generated by this program so they can be edited. I could run the docker container in detached mode, docker cp the desired file, and then shut down the container. However, I would like to abstract this away from the user.
Docker containers by design don't have any access to the host filesystem unless you provide it explicitly via volume mounts. So, in your example, you could do something like:
docker run -ti -v /tmp/data:/data --rm --net=host buildServer:5000/myProgram /myProgram.py -h
And within the container, the /data directory would be mapped to /tmp/data on your host. You could then copy files into /data to get at them on your host.
This assumes that you're running Docker on Linux. If you are using Windows or OS X there may be additional steps, since in those environments Docker is actually running on a Linux virtual machine and volume access may or may not behave as expected (I don't use those platforms so I can't comment authoritatively).
For more information:
https://docs.docker.com/engine/tutorials/dockervolumes/#/mount-a-host-directory-as-a-data-volume

TeamCity configuration doesn't persist inside docker

I've set up TeamCity inside a docker image and I can access it via localhost, but every time I restart my docker, TeamCity asks for configuration again (from the beginning, meaning that I have to reconfigure the whole of TeamCity again).
How do I make my configuration persist?
How do I make my configuration persist?
You can mount a volume or use a data volume container, in order to persist that configuration.
If you do not, the copy-on-write mechanism used by docker means any modification is removed on docker rm (unless you docker commit right after a docker stop).
For example, this TeamCity docker project runs it with a mounted volume:
docker run --link some-postgres:postgres \
-v <teamcitydir>:/var/lib/teamcity -d \
sjoerdmulder/teamcity:latest
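Alternatively, a named volume achieves the same persistence without managing a host directory (a sketch; the /var/lib/teamcity data path is taken from the image above):

```shell
# Create a named volume once; docker rm of the container won't delete it.
docker volume create teamcity-data

# Reuse the volume on every (re)start, so configuration survives restarts.
docker run --link some-postgres:postgres \
  -v teamcity-data:/var/lib/teamcity -d \
  sjoerdmulder/teamcity:latest
```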
