Using docker-compose on a Kubernetes instance with Jenkins - mounted volumes are empty - docker

I have a Jenkins instance set up using Google's Jenkins on Kubernetes solution. I have not changed any of the settings of the Kubernetes pod.
When I trigger a new job, everything comes up and runs successfully until it reaches my tests.
My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I could optimize this by using an image that already includes these, but I am still at the proof-of-concept stage).
When I run the docker-compose up command everything works and the services start their initialization scripts. However, the mounts are empty. I have verified that the files exist on the Jenkins slave and that the mount is created inside the service container when I run docker-compose, but the mounted directories are empty.
Some information:
To get around file permission issues I am using /tmp as the Jenkins workspace. I am using SCM to pull my files (successfully), and in the docker-compose file I specify version: '2' and the mount paths as absolute paths. The volumes section of the service that fails looks like this:
volumes:
  - /tmp/automation:/opt/automation
I changed the command that the service runs to ls /opt/automation, and the result is an empty directory.
What am I missing? I just want to mount a directory into my docker-compose service. This works perfectly from Windows, Ubuntu, and CentOS machines. Why won't it work on the Kubernetes instance?

I found the reason it fails here:
A Docker container inside a Docker container uses the parent HOST's Docker daemon, so any volumes mounted in the "docker-in-docker" case are still resolved against the HOST, not the container.
Therefore, the path being mounted from the Jenkins container "does not exist" on the HOST. Because of this, Docker creates a new, empty directory on the host and mounts that into the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container started from inside a container.
So it seems it is impossible to bind-mount something from the outer Docker container into the inner one, and another solution must be found.
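One workaround, sketched below on the assumption that the services can take a named volume instead of a host path (and that the CLI is recent enough to have docker volume and docker cp), is to copy the workspace into a named volume through the Docker API; docker cp streams the data through the daemon, so it does not depend on the path existing on the HOST. The volume name automation_src and the alpine helper container are illustrative.
# Create a named volume owned by the host's Docker daemon (name is illustrative).
docker volume create automation_src
# Create a throwaway container that mounts the volume so we can copy into it.
docker create --name seed -v automation_src:/opt/automation alpine true
# Stream the workspace contents into the volume via the Docker API.
docker cp /tmp/automation/. seed:/opt/automation/
docker rm seed
The compose file would then declare automation_src as an external named volume and mount automation_src:/opt/automation in the service instead of the /tmp/automation host path.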

Related

VS Code dev container mounted directory is empty

I have a devcontainer compose project that requires mongo and a replica server. This requires a few mongosh commands to be run, which I'd like to do in a separate container as a bash script.
My issue is that when using "Clone repository into Container volume" my mounted directory is empty. This works fine when I first check the repo out locally and then build the container from that.
Here is a demo repository that shows the issue: https://github.com/jrj2211/vscode-remote-try-node-mongo-compose
In this project, the compose file mounts the .devcontainer directory. The file I need is at the path: .devcontainer/scripts/mongosetup.sh.
volumes:
  - ./scripts:/scripts
This produces the correct result locally, but the folder is empty when the repository is cloned into a Docker volume.
What is the correct path to the folder location in the WSL2 volume? Is there a way to make this work both locally and when cloned into a Docker volume?
I tried to set an environment variable from devcontainer.json that pointed to ${workspaceFolder}, but that ended up as an empty string in compose.
This documentation (the first link below, which is linked from the second one covering "Clone Repository in Container Volume") makes me believe this should work:
https://code.visualstudio.com/remote/advancedcontainers/add-local-file-mount
https://code.visualstudio.com/remote/advancedcontainers/improve-performance
I was able to get this working with h4l's brilliant code. It takes containerWorkspaceFolder and localWorkspaceFolder and turns them into environment variables available in docker-compose. This has the added benefit of continuing to work both locally and in a container.
https://github.com/h4l/dev-container-docker-compose-volume-or-bind
Hopefully those variables will soon become available in container mode directly so additional scripts aren't needed.
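For reference, a minimal sketch of what the mount ends up looking like with that approach, assuming the helper exposes something like LOCAL_WORKSPACE_FOLDER before compose runs (the exact variable name and the service name are assumptions; check the linked repository):
services:
  mongosetup:
    volumes:
      # LOCAL_WORKSPACE_FOLDER is assumed to be exported by the helper; the
      # fallback keeps a plain local checkout working, but adjust it to where
      # this compose file lives relative to the repository root.
      - ${LOCAL_WORKSPACE_FOLDER:-.}/scripts:/scripts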

Docker bind mount is empty inside a containerized TeamCity build agent with DOCKER_IN_DOCKER enabled

I'm running containerized build agents using the linux-sudo image tag and, using a Dockerfile (here), I have successfully customized it to suit our needs. This is running very well and successfully running the builds I need it to.
I am running it in a Swarm cluster with a docker-compose file (here), and I have recently enabled the DOCKER_IN_DOCKER variable. I can successfully run docker run hello-world in this container and I'm happy with the results. However, I am having an issue running a small utility container inside the agent with a bind-mounted volume.
I want to use this Dockerfile inside the build agent to run npm CLI commands against the files in a mounted directory. I'm using the following command to run the container with a custom command and a volume as a bind mount.
docker run -it -v $(pwd):/app {IMAGE_TAG} install
So, in theory, this runs npm install against the local directory mounted into the container (npm is the ENTRYPOINT command, so I can just pass install to the container for simplicity). I can run this in other environments (Ubuntu and WSL) and it works very well. However, when I run it in the linux-sudo build agent image, it doesn't seem to mount the directory properly. If I inspect the directory in the running utility container (the npm one), the /app folder is empty. Shouldn't I expect to see the contents of the bind mount here, as I do in other environments?
I have inspected the container, and it confirms there is a volume created of type bind; I can also see it when I list the Docker volumes.
Is there something fundamental I am doing wrong here? I would really appreciate some help if possible, please.
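This is the same docker-in-docker behaviour described in the answer above: with DOCKER_IN_DOCKER the agent talks to the host's Docker daemon, so $(pwd) names a path inside the agent container that the host cannot see, and Docker quietly creates and mounts a new empty directory instead. One way to check, sketched below assuming the agent container is reachable under the placeholder name teamcity-agent, is to find where the agent's work directory actually lives on the host and bind-mount that host path:
# List the agent container's mounts to find the host-side source paths
# ("teamcity-agent" is a placeholder for the real container name).
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' teamcity-agent
# If, for example, /srv/agent/work on the host backs the agent's work
# directory, mount the corresponding HOST path instead of $(pwd):
docker run -it -v /srv/agent/work/<checkout dir>:/app {IMAGE_TAG} install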

Why can't my Docker container find the file it's supposed to create?

I have a Docker container (a Linux container running on Windows with WSL 2) running a .NET Core 5.0 application, whose Dockerfile and docker-compose.yml were created by someone else. I spun it up with docker run, passing a single environment variable and a port mapping. It works just fine until it attempts to create a file with a statement like System.IO.File.WriteAllText($"/output_json/myfile.json", jsonString); and errors out. The error message says
Could not find a part of the path '/output_json/myfile.json'.
Since a Docker container is essentially a virtualized filesystem, I assume I need to allocate some space to the container, or share a folder on the host machine with it, so that it has an accessible location to save the file. Is that correct?
EDIT: I've just found this in docker-compose.yml:
services:
  <servicename>:
    volumes:
      - ./output:/output_json
Doesn't this mean that an output_json directory is supposed to be created? Does docker-compose not have any bearing on a container created with docker run?
The path /output_json probably doesn't exist in the Docker image. That could be because you're meant to map a directory on your host to that path. Then the container can put its output there and you can grab it after the container is done.
To try it, you can make an empty directory and map it to the /output_json path in your container by running the following two commands from a command line:
mkdir %temp%\container_output
docker run -v %temp%\container_output:/output_json <other options> <image name>
Then do cd %temp%\container_output and see what output the container has made.
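On the EDIT in the question: the volumes entry in docker-compose.yml only takes effect when the service is started through docker-compose; a plain docker run ignores the compose file entirely, which is why no ./output mapping appeared. Roughly (the service and image names below are the question's placeholders, and any other options are whatever you already pass):
docker-compose up <servicename>
REM ...applies the ./output:/output_json mapping automatically, whereas
REM docker run needs the mount spelled out, e.g. from the project directory:
docker run -v "%cd%\output":/output_json <other options> <image name>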

Can Airflow running in a Docker container access a local file?

I am a newbie as far as both Airflow and Docker are concerned; to make things more complicated, I use Astronomer, and to make things worse, I run Airflow on Windows. (Not on a Unix subsystem - I could not install Docker on Ubuntu 20.04.) "astro dev start" breaks with an error, but in Docker Desktop I see, and can start, 3 Airflow-related containers. They see my DAGs just fine, but my DAGs don't see the local file system. Is this unavoidable with the Airflow + Docker combo? (It seems like a big handicap; one can only use files in the cloud.)
In general, you can declare a volume at container run time in Docker using the -v switch on your docker run command to mount a local folder on your host to a mount point in your container, and that path is then accessible from inside the container.
If you go on to use docker-compose up to orchestrate your containers, you can instead specify volumes in the docker-compose.yml file, which configures the volumes for the containers it runs.
In your case, the Astronomer docs here suggest it is possible to add a custom directive to the Astronomer docker-compose.override.yml file to mount volumes into the Airflow containers created by your astro commands, which should then be visible from your DAGs.
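A minimal sketch of such an override file, with assumptions: the service name scheduler and both paths below are illustrative, so check the service names that astro dev start actually generates and the container path your DAGs read from, and match the version key to Astronomer's generated compose file:
# docker-compose.override.yml - service name, paths and version are assumptions
version: "3.1"
services:
  scheduler:
    volumes:
      # host folder with your data -> path visible to DAGs inside the container
      - /c/Users/me/airflow-data:/usr/local/airflow/include/data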

Dealing with data in Docker Containers with Gitlab-Ci

So I am using gitlab-ci to deploy my websites in Docker containers. Because the gitlab-ci Docker runner doesn't seem to do what I want, I am using the shell executor and letting it run docker-compose up -d. Here comes the problem.
I have 2 volumes in my container: ./:/var/www/html/ (the content of my git repo, i.e. the files I want to replace on each build) and a mount that sits "inside" that mount, /srv/data:/var/www/html/software/permdata (a persistent mount on my server).
When the gitlab-ci runner starts it tries to remove all files while the container is running, but because of this mount-in-mount it gets a "device busy" error and aborts. So I have to manually stop and remove the container before I can run my build (which kind of defeats the point of build automation).
Options I thought about to fix this problem:
1. Stop and remove the container before gitlab-ci-multi-runner starts (this seems not to be possible).
2. Add the git data to my Docker container and only mount my permdata (but it seems you can't add data to a container without the volume option in docker-compose the way you can in a Dockerfile).
Option 2 would be ideal, because it would also sort out my issues with file permissions.
Maybe someone has gone through the same problem and can give me some advice.
seems like you can't add data to a container without the volume option with docker compose like you can in a Dockerfile
That's correct. The Compose file is not meant to replace the Dockerfile; it's meant to run multiple images for an application or project.
You can modify the Dockerfile to copy the git files into the image.
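A rough sketch of that Dockerfile change, assuming a PHP/Apache-style base image (the actual base image is whatever the project already builds from):
# Dockerfile - base image is illustrative
FROM php:8-apache
# Bake the repository contents into the image instead of bind-mounting ./
COPY . /var/www/html/
The compose file would then keep only the /srv/data:/var/www/html/software/permdata mount, and the gitlab-ci job would rebuild the image (for example with docker-compose up -d --build) instead of replacing files inside a live bind mount.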
