Docker inside Docker: volume is mounted, but empty

I am running a Docker container with the host's Docker socket mounted inside, using:
docker run -v /Path/to/service:/src/service -v /var/run/docker.sock:/var/run/docker.sock --net=host image-name python run.py
This runs a python script that creates a data folder in /src and fills it. When printing os.listdir('/src/data'), I get a list of files.
I then run a container from within this container, mounting the data folder, using docker-py.
volumes = {'/src/data': {'bind': '/src', 'mode': 'rw'}}
client.containers.run(image, command='ls data', name=container_key, network='host', volumes=volumes)
And it prints:
Starting with UID: 0 and HOME: /src\n0\n'
Which means data is mounted, but empty. What am I doing wrong?

Mounting the Docker socket inside the container means that containers started from in there are actually running on your HOST machine.
The end result is you have two containers on host- one with
/Path/to/service:/src/service
and one with
/src/data:/src
If you want to share a volume between two containers, you should usually use a "named" volume, like:
docker run -v sharedvolume:/src/data and docker run -v sharedvolume:/src
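A minimal sketch of that pattern with the docker CLI, reusing the image names from the question (in docker-py the named volume goes in the volumes mapping the same way, e.g. volumes = {'sharedvolume': {'bind': '/src', 'mode': 'rw'}}):
# create the named volume once
docker volume create sharedvolume
# the outer container writes its data into the shared volume
docker run -v sharedvolume:/src/data -v /var/run/docker.sock:/var/run/docker.sock --net=host image-name python run.py
# the inner container (started through the host daemon) sees the same volume
docker run -v sharedvolume:/src image ls /src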

Related

How can I see detached docker containers?

I started using Docker only recently. It is my understanding that mounting a local folder into the container C1 (created from the image image_name) can be done by running the following:
var=$(pwd)
docker run -d --name=C1 -v $var:/host image_name
However, because I am detaching the container, I am not able to see it among the containers listed by docker ps or docker container ls.
Yet if I run docker volume list and then docker volume rm VOLUMEID, I get the error volume is in use - [CONTAINER_C1_ID].
Any idea how I can see where C1 is?
Where am I going wrong?
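A hedged aside, since this is the usual cause: if the container exited right after starting (for example because its main process finished), it will not show up in a plain docker ps, but it is still there; listing all containers, including stopped ones, should reveal C1:
docker ps -a
# or only the stopped ones:
docker ps -a --filter "status=exited"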

Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container, which needs the configuration at startup, is started.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all of this has to be done in a compose file. If I were doing it manually, I could simply wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS image like Ubuntu works great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this inside the container you entered with the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls app/
my_file
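If it has to happen in a compose file, as the update requires, one hedged sketch is an init service that seeds the volume first; newer Compose versions can wait for it to finish via depends_on with condition: service_completed_successfully (service names and the seeding command below are placeholders):
services:
  seed-config:
    image: ubuntu
    volumes:
      - test_volume:/app
    command: sh -c "touch /app/my_file"
  app:
    image: ubuntu
    volumes:
      - test_volume:/app
    depends_on:
      seed-config:
        condition: service_completed_successfully
volumes:
  test_volume: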

Docker volume pointing to a host directory in the Dockerfile

I have the following Dockerfile:
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
The resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|\<file relative-to="jboss.server.log.dir" path="\${jboss.host.name}-server.log"/\>|' /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging of the resulting JBoss to log to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far so good.
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
How can I achieve this in the Dockerfiles, and not with the -v parameter when running data/volume?
Thanks a lot in advance
Inside a Dockerfile you can only declare anonymous volumes, which Docker stores under /var/lib/docker/volumes. This is because every host can be different from the others.
Docker uses /var/lib/docker as its "docker area", where it stores all Docker-related data. It is the one directory that is guaranteed on every host, because it gets created on installation.
If you could point a volume in the Dockerfile at, say, /home/mbieren/docker_vol, the image would produce multiple errors when executed on a different host, as that directory does not exist there and the user probably has insufficient permissions to create it.
Docker sidesteps that problem by not allowing custom mount paths to be set in the Dockerfile.
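You can see where such an anonymous volume actually ends up; a hedged illustration (the container name vol-demo is made up):
docker create --name vol-demo data/volume
docker inspect -f '{{ (index .Mounts 0).Source }}' vol-demo
# typically prints something like /var/lib/docker/volumes/<hash>/_data
docker rm vol-demo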
I would like to point my volume to a directory on the host eg. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile, then launch your container using:
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Your syntax to launch the container, docker run -ti, makes the container shell interactive, whereas -d is the normal mode to spin it up as a daemon running in the background.

How to mount docker volume with jenkins docker container?

I have Jenkins running inside a container and the project source code on GitHub.
I need to run the project in a container on the same host as Jenkins, but not as docker-in-docker; I want to run them as sibling containers.
My pipeline looks like this:
pull the source from github
build the project image
run the project container
What I do right now is use the host's Docker socket from the Jenkins container:
/var/run/docker.sock:/var/run/docker.sock
The problem appears when the Jenkins container mounts the volume with the source code from /var/jenkins_home/workspace/BRANCH_NAME into the project container:
volumes:
- ./servers/identity/app:/srv/app
I get an empty folder "/srv/app" in the project container.
My best guess is that Docker tries to mount it from the host and not from the Jenkins container.
So the question is: how can I explicitly set the container from which I mount the volume?
I got the same issue when using a Jenkins Docker container to run another container.
Scenario 1 - Running a container inside the Jenkins docker container
This is not a recommended way; the reasons are explained elsewhere. If you still need to use this approach, then this problem is not a problem.
Scenario 2 - Running a docker client inside the Jenkins container
Suppose we need to run another container (ContainerA) from inside the Jenkins docker container. The docker pipeline plugin uses --volumes-from to mount the Jenkins container's volumes into ContainerA.
If you try to use --volume or -v to map a specific directory of the Jenkins container into ContainerA, you will get unexpected behavior.
That's because --volume or -v maps directories from the host into ContainerA, rather than from inside the Jenkins container. If a directory is not found on the host, you get an empty dir inside ContainerA.
In short, we cannot map a specific directory from container A into container B; we can only mount all of container A's volumes into container B with --volumes-from, and volume aliases are not supported.
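A quick way to observe this from inside the Jenkins container, as a hedged sketch (the /tmp/demo path is made up and assumed absent on the host):
# inside the Jenkins container: this directory exists in the container only
mkdir -p /tmp/demo && touch /tmp/demo/marker
# -v resolves on the host, so the daemon creates an empty /tmp/demo there
docker run --rm -v /tmp/demo:/mnt alpine ls /mnt
# prints nothing, even though the Jenkins container's /tmp/demo has a file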
Solution
If your Jenkins is running with a host volume, you can map the host directories to the target container.
Otherwise, you can access the files inside the newly created container at the same location as in the Jenkins container.
Try:
docker run -d --volumes-from <ContainerID> <YourImage>
where <ContainerID> is the ID of the container you want to mount the data from.
You can also create a volume with:
docker volume create <volname>
and assign it to both containers:
volumes:
- <volname>:/srv/app
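Note that in a compose file the named volume must also be declared at the top level; a minimal hedged sketch (image names and mount paths are placeholders):
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      - appdata:/var/jenkins_home/workspace/shared
  project:
    image: my-project-image
    volumes:
      - appdata:/srv/app
volumes:
  appdata: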
Sharing the socket between the host and Jenkins was my problem, because /var/jenkins_home is most likely a volume of the Jenkins container.
My solution was installing Docker inside a systemd container without sharing the socket.
docker run -d --name jenkins \
--restart=unless-stopped \
--privileged \
-v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-v jenkins-vol:/var/lib/jenkins \
--tmpfs /run \
--tmpfs /run/lock \
ubuntu:16.04 /sbin/init
Then install Jenkins, Docker and Docker Compose on it.

Share and update docker data containers across containers

I have the following containers:
A data container, which is built directly on quay.io from a GitHub repo; basically it is a website.
FPM container
NGINX container
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (the data container), it is rebuilt (of course) and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to read the new content.
I started with a "backup approach", in which I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting/removing any service.
But the idea of moving the data from the data container onto the host really doesn't appeal to me. So I am wondering if there is a "Docker way", or a better way, of doing it.
Thanks!
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Building the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that data now contains hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
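A hedged note on what is happening here: VOLUME creates a fresh anonymous volume each time a container is created and seeds it with the image's files at that moment, so the already-running nginx container keeps referencing the volume that belonged to the removed mustela-data container and never sees the new files. You can confirm which volume each container is actually using:
docker inspect -f '{{ range .Mounts }}{{ .Name }}{{ end }}' nginx
docker inspect -f '{{ range .Mounts }}{{ .Name }}{{ end }}' nginx2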
