Access docker within Dockerfile? - docker

I would like to run integration tests while I'm building a docker image. Those tests need to instantiate docker containers.
Is there a possibility to access docker inside such a multi-stage docker build?

No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
For running unit-type tests (that don't have external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and only run docker build once they pass. For integration-type tests (that do) you need to set up those dependencies, maybe with a Docker Compose file, which again will be easier to do outside of Docker. This also avoids needing to build your test code and its additional dependencies into your image.
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket; then your Dockerfile could connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that lives beyond the container build, etc...actually it allows any process on the host to do any of these things. The much harder option, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)
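To make that concrete, here is a minimal CI-style sketch of the suggested split, with everything run outside of docker build. The image tag myapp:candidate, the compose file name docker-compose.test.yml, and the Gradle tasks are placeholders for whatever your project actually uses.
#!/bin/sh
set -e

# Unit tests run straight on the CI host, before any image is built.
./gradlew test

# Build the image only after the unit tests pass.
docker build -t myapp:candidate .

# Integration tests: start the dependencies (and the freshly built image)
# with Compose on the host, run the tests against them, then tear down.
docker-compose -f docker-compose.test.yml up -d
./gradlew integrationTest
docker-compose -f docker-compose.test.yml down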

Related

Use docker socket during image building [duplicate]


What is the purpose of creating a volume between two docker.sock files in Docker?

I see this in many docker apps that somehow rely heavily on networking, but I can't understand it.
Got it.
https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
Let’s take a step back here. Do you really want Docker-in-Docker? Or do you just want to be able to run Docker (specifically: build, run, sometimes push containers and images) from your CI system, while this CI system itself is in a container?
I’m going to bet that most people want the latter. All you want is a solution so that your CI system like Jenkins can start containers. And the simplest way is to just expose the Docker socket to your CI container, by bind-mounting it with the -v flag.
Simply put, when you start your CI container (Jenkins or other), instead of hacking something together with Docker-in-Docker, start it with:
docker run -v /var/run/docker.sock:/var/run/docker.sock
Now this container will have access to the Docker socket, and will therefore be able to start containers. Except that instead of starting “child” containers, it will start “sibling” containers.
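As a sketch of what that looks like for a Jenkins master, assuming the stock jenkins/jenkins:lts image (the Docker CLI still has to be available inside the container, and the jenkins user needs permission on the socket, e.g. via the host's docker group):
# Bind-mount the host's Docker socket into the CI container.
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

# Containers started by Jenkins jobs are now "siblings" and show up on the host:
docker ps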

Robot Framework test run on docker containers

I have a problem running Robot Framework tests on docker containers. Usually this was done on dedicated servers where SSH was available (SSH is needed), but now I want to run the tests locally against my local docker containers, and I don't know how to skip the SSH connection. Should I make some changes to the Dockerfile, or slightly change the test init?
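One possible way to keep the existing SSH-based suites unchanged is to publish an SSH port from the local container and point the tests at localhost; the image name and variable names below are purely illustrative assumptions.
# "my-app-with-sshd" is a hypothetical image that runs the application plus an
# SSH daemon; port 2222 on localhost is forwarded to the container's sshd.
docker run -d --name rf-target -p 2222:22 my-app-with-sshd

# Re-point the suites at the local container instead of the dedicated server
# (HOST and SSH_PORT are assumed suite variables):
robot --variable HOST:localhost --variable SSH_PORT:2222 tests/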

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application that has a deployment based upon docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each image (create docker container)
Run integration tests (start docker compose that starts all services on special ports, run integration tests)
Stop prod images and run new images
I did this for each service, but I ran into an issue.
When I start my docker container for integration tests, it is set up within a gitlab ci task. For each task a docker based runner is used. I also mount my host docker socket to be able to use docker in docker.
So my gradle docker image is started by the gitlab runner. Then docker will be installed and all images will be started using docker compose.
One microservice listens to port 10004. Within the docker compose file there is a 11004:10004 port mapping.
My integration tests try to connect to port 11004. But this does not work right now.
When I attach to the container that runs docker compose while it tries to execute the integration tests, I am not able to connect manually by calling
wget ip:port
I just get the message "connected" and it keeps waiting for a response. Neither do my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my docker in docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting network to host also works...
So did anyone also make such an experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid docker in docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting in the subway, but I hope my description of the issue is sufficient to discuss experiences. I don't think we need source code to find bad configurations, because it works without docker in docker and on Mac.
I figured out that docker in docker still has some weird behaviors. I fixed my issue by adding a new gitlab ci runner that is a shell runner. Therefore docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting docker images in production as I do for integration testing. So the easy fix has another benefit for me.
The result is a best practice to avoid pitfalls:
Only use docker in docker when there is a real need, for example to ensure fast I/O communication between your host docker image and the docker image of interest.
Have fun using docker (in docker (in docker)) :]
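For reference, a stripped-down sketch of the setup being described (service and image names are placeholders). One thing worth keeping in mind: because only the host's docker.sock is mounted, the Compose services run as siblings on the host, so the published port 11004 is bound on the host's network, not inside the runner container that launched them.
# Minimal stand-in for the compose file with the 11004:10004 mapping.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  microservice:
    image: registry.example.com/microservice:latest
    ports:
      - "11004:10004"
EOF

docker-compose up -d

# Works from the host, where the published port is actually bound:
wget -qO- http://HOST_IP:11004/

# From a runner container that merely shares /var/run/docker.sock, "localhost"
# is the runner itself; the tests have to target the host's address or join
# the Compose network explicitly.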

sharing docker.sock or docker in docker (dind)

We're running a mesos cluster and jenkins for continuous integration workflow.
Jenkins is configured with the mesos plugin.
Previously we built our docker images in mesos containers. Now we are switching to docker containers for building our docker images.
I've been searching for the advantage of building our docker images inside a docker container with a dind image like "dind-jenkins-slave" found on Docker Hub.
With dind you lose the caching opportunities that you get when sharing the docker.sock of the host. And with dind you also have to pass the privileged parameter.
What is the downside of just sharing the docker.sock of the host?
I'm using the shared docker.sock approach. The only downside I see is security: you can do anything you want with the host once you can run arbitrary docker containers. But if you trust the people who create jobs, or can control which docker containers can be run from Jenkins and with which options, then giving access to the main docker daemon is an easy solution.
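A two-line illustration of why that trust matters: any job in a container that has the socket mounted is talking to the host's daemon, so it can inspect, start, or remove the host's containers at will (docker:cli is the official CLI-only image).
# Lists the *host's* containers from inside a throwaway container:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
# ...and nothing stops it from launching a privileged container that mounts
# the host filesystem, which is effectively root on the host.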
It depends on what you're doing, really. To get our Jenkins jobs truly isolated so that we can run as many as we want in parallel, we switched to DinD. If you share the host socket you still only have a single docker daemon: port conflicts, pulling/pushing multiple images from separate jobs, and one job relying on an image or build that is also being messed with by another job are all issues.
To get around the caching issue, I create the dind container and leave it around. I run
docker start -a dindslave || docker run -v ${WORKSPACE}:/data my/dindimage jenkinscommands.sh
Then jenkins just writes its commands to jenkinscommands.sh and restarts the container every time. When you remove the container you remove your cache as well, and you don't share caches between jobs if that is something you want... but you don't have to think about jobs interfering with one another or making sure they are running on the same host.
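A slightly fuller sketch of that fallback branch, with the assumptions spelled out: docker start -a dindslave can only find the container later if the run created it with that name, a Docker daemon inside a container needs --privileged (as noted earlier in the thread), and the named volume for /var/lib/docker is an extra assumption that would keep the layer cache even across container removal.
docker start -a dindslave || docker run --privileged --name dindslave \
  -v dind-cache:/var/lib/docker \
  -v ${WORKSPACE}:/data \
  my/dindimage jenkinscommands.sh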
