Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled - docker

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have customized it to suit our needs. This is working very well and successfully running the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However, I am having an issue running a small utility container inside the agent with a bind-mounted volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So, in theory, this runs npm install against the local directory that is mounted into the container (npm is the ENTRYPOINT command, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, the directory doesn't seem to mount properly: if I inspect the /app folder in the running utility container (the npm one), it is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
I have inspected the container, and it confirms there is a volume of type bind; I can also see it when I list the Docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate any help.
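One likely cause (an assumption, since the agent setup isn't shown): when DOCKER_IN_DOCKER works by bind-mounting the host's /var/run/docker.sock into the agent, docker run talks to the host's daemon, and the bind-mount source path is resolved on the host filesystem, not inside the agent. If $(pwd) only exists inside the agent container, the daemon creates an empty directory of that name on the host and mounts that, which would explain the empty /app. A dry-run sketch of the mismatch, with a hypothetical agent work path (the command is printed rather than executed):

```shell
# Hypothetical checkout path inside the build agent container:
AGENT_PWD=/opt/buildagent/work/1b2f3c4d/my-project

# The host daemon resolves this bind source on the HOST filesystem.
# If the path only exists inside the agent, Docker creates an empty
# directory on the host and mounts that -- hence the empty /app.
echo "docker run -it -v ${AGENT_PWD}:/app my-npm-image install"
```

A common workaround is to bind-mount the agent's work directory from the host into the agent at the identical path, so that $(pwd) is valid on both sides.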

Related

Running attended installer inside docker windows/servercore

I've been attempting to move an application to the cloud for a while now and have most of the services set up in pods running in a k8s cluster. The last piece has been giving me trouble: I need to set up an image with an older piece of software that cannot be installed silently. I then attempted, in my Dockerfile, to install its .NET dependencies (2005.x86, 2010.x86, 2012.x86, 2015.x86, 2015.x64) and manually transfer a local install of the program, but that also did not work.
Is there any way to run through a guided install in a remote windows image or be able to determine all of the file changes made by an installer in order to do them manually?
You can track the changes done by the installer following these steps:
start a new container based on your base image
docker run --name test -d <base_image>
open a shell in the new container (I am not familiar with Windows so you might have to adapt the command below)
docker exec -ti test cmd
Run whatever commands you need to run inside the container. When you are done, exit the container.
Examine the changes to the container's filesystem:
docker container diff test
You can also use docker container export to export the container's filesystem as a tar archive, and then docker image import to create an image from that archive.
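The export/import round trip might look like this (container and image names are hypothetical; the commands are printed rather than executed):

```shell
CONTAINER=test
NEW_IMAGE=my-installed-app:latest

# Snapshot the modified container's filesystem as a tar archive,
# then turn that archive into a new single-layer image.
echo "docker container export -o ${CONTAINER}.tar ${CONTAINER}"
echo "docker image import ${CONTAINER}.tar ${NEW_IMAGE}"
```

Note that docker image import discards image metadata such as ENTRYPOINT, CMD, and ENV; you can re-apply instructions with import's -c/--change flag.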

How to mount commands or busybox into a docker container?

The image pulled from Docker Hub is a minimal system, without commands like vim, ping, etc., which I sometimes need in a debug environment.
For example, I need ping to test the network or vim to modify a conf file, but I don't want to install them in the container or in the Dockerfile, as they are not necessary at run time.
I have tried installing the commands in my container, which is not convenient.
So, is it possible to mount commands from the host into the container, or even "mount" a busybox into the container?
You should install these tools in your Docker container, because that is how things are usually done. I can't find a single reason not to, but in case you can't do it (why?), you can put the necessary binaries into a volume and mount that volume into your container. Something like:
docker run -it -v /my/binaries/here:/binaries:ro image sh
$ ls /binaries
Then execute them inside the container using the container path /binaries.
But keep in mind that these binaries usually have dependencies on system paths like /var/lib and others, and when calling them from inside the container you have to resolve those somehow.
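One way to sidestep the dependency problem is to mount statically linked binaries (for example a static busybox), which need no shared libraries from the container's filesystem. A sketch with hypothetical paths, printing the commands rather than executing them:

```shell
# Hypothetical host directory containing a statically linked busybox:
HOST_BINS=/opt/static-bins

# Mount it read-only; a static busybox runs from any path because it
# resolves nothing from the container's system directories.
echo "docker run -it -v ${HOST_BINS}:/binaries:ro image sh"
# Then, inside the container:
echo "/binaries/busybox ping -c 1 example.com"
echo "/binaries/busybox vi /etc/myapp.conf"
```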
If running on Kubernetes, the kubectl command has support for running a debug container that has access to the running container. Check kubectl debug:
https://kubernetes.io/docs/tasks/debug-application-cluster/debug-running-pod/#ephemeral-container
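A sketch of that approach (pod and container names are hypothetical; the command is printed rather than executed):

```shell
POD=my-app-pod
TARGET=my-app          # the container whose process namespace to share

# Attach a throwaway busybox (with ping, vi, etc.) to the running pod
# without modifying the target image at all:
echo "kubectl debug -it ${POD} --image=busybox --target=${TARGET}"
```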

Sharing files between container and host

I'm running a docker container with a volume /var/my_folder. The data there is persistent: When I close the container it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link : Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, shadow the directory in the container: the host directory is mounted over it, hiding whatever the image put there.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script AND not as part of your build, THEN they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing this, during the build copy the files to another directory, and then copy them back into place in your ENTRYPOINT script.
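The copy-back pattern can be simulated locally with plain directories (the paths are stand-ins: in a real image the seed directory would be populated at build time, and the target would be the bind-mounted folder):

```shell
SEED=$(mktemp -d)   # stands in for a /app-seed baked in at build time
APP=$(mktemp -d)    # stands in for the bind-mounted, initially empty /app
echo '{"generated": "at build time"}' > "$SEED/config.json"

# Entrypoint step: restore build-time files into the mounted directory
# without clobbering anything the host put there (-n = no-clobber).
cp -rn "$SEED/." "$APP/"
cat "$APP/config.json"
```

In a real entrypoint script, the same cp would run just before exec-ing the main process.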

How do you mount sibling containers volumes started from another container?

I am using docker for my dev environment: I have a dev image and I mount my source files as a volume.
But then I wanted to do the same on my continuous integration server (GitLab CI). I carefully read the Docker docs' reference to https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/, but the solution of bind-mounting Docker's Unix socket into a Docker client container makes it impossible to mount volumes from that container.
So basically my question is how would you solve this (given I am in a docker ci server/runner): I need to run the following command from a container (a gitlab runner).
$ git clone ... my-sources && cd my-sources
$ docker run -v $PWD:$PWD -w $PWD my-dev-image gcc main.c
Because obviously, the volume is taken from docker's "native" host and not the current container.
The way I've solved this is by making sure the build paths are the SAME on the host and in the CI container, e.g. starting the container with -v /home/jenkins:/home/jenkins. This way the volume is mounted from the host into the CI container at an identical path. You can use whatever directory you like, as long as the jenkins user's home is set to that directory.
Note: I'm using Jenkins as an example, but any CI will work on the same principle.
Make sure that your CI server is started with a volume (e.g. docker run --name gitlabci -v /src gitlabci …), then, when you start the other containers, start them with docker run --volumes-from gitlabci …. That way, /src will also be available in those containers, and whatever you put in this directory (from the CI server) will be available in the other containers.
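Put together, the two commands might look like this (names are hypothetical; the commands are printed rather than executed):

```shell
CI_NAME=gitlabci

# The CI server container owns an anonymous volume at /src:
echo "docker run --name ${CI_NAME} -v /src gitlab-runner-image"
# Sibling containers started via the host's daemon reuse that exact
# volume, so /src has identical contents in every one of them:
echo "docker run --volumes-from ${CI_NAME} -w /src my-dev-image gcc main.c"
```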

Run a script into a container then copy files to host from the container

I want to run a script against a container and copy the output files back to the host. I have few questions:
Does the script need to be inside the container in order to run, or can I keep the script on the host and still run it against the container?
Copying files is done with the docker cp command, which is only available on the host; inside the container there is no docker cp. So if the script is running inside the container, how can it copy files to the host?
What I am trying to do is the following (my running container has mongodb):
Export certain collections to json files
Copy the resulted files to the host
As you can see, some commands are available in the container, such as mongoexport, and some are available only on the host, like docker cp.
Simply use a Docker volume (a bind mount): mount a host directory into the container and have mongoexport write its output there. This is the easiest way to share data between a container and its host.
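Alternatively, both steps can be driven entirely from the host: docker exec runs mongoexport inside the container, and docker cp then runs on the host, so neither tool needs to exist in the other environment. A sketch with hypothetical container, database, and collection names (commands printed rather than executed):

```shell
C=my-mongo

# Run mongoexport inside the container (mongoexport exists there):
echo "docker exec ${C} mongoexport --db mydb --collection users --out /tmp/users.json"
# Pull the result out from the host side (docker cp exists here):
echo "docker cp ${C}:/tmp/users.json ./users.json"
```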
