I have a problem running RFWK tests against docker containers. Until now they were run against dedicated servers that offered an SSH option (SSH is needed), but now I want to run them against containers on my local machine, and I don't know how to skip the SSH connection. Should I make some changes to the Dockerfile, or slightly change the test init?
Related
I would like to run integration tests while I'm building a docker image. Those tests need to instantiate docker containers.
Is there a way to access docker inside such a multi-stage docker build?
No, you can't do this.
You need access to your host's Docker socket somehow. In a standalone docker run command you'd do something like docker run -v /var/run/docker.sock:/var/run/docker.sock, but there's no way to pass that option (or any other volume mount) into docker build.
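For comparison, the socket mount on a plain docker run looks roughly like this; using the official docker image here just because it ships the docker CLI:
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:latest \
  docker ps    # lists the host's containers, because it's talking to the host daemon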
For unit-type tests (those without external dependencies) I'd just run them in your development or core CI build environment, outside of Docker, and only run docker build once they pass. For integration-type tests (those with external dependencies) you need to set up those dependencies, maybe with a Docker Compose file, which again is easier to do outside of Docker. This also avoids building your test code and its additional dependencies into your image.
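A rough sketch of that outside-of-Docker workflow, assuming a docker-compose.yml that declares the test dependencies and a Gradle test task (both are placeholders for whatever your project actually uses):
docker-compose up -d           # start databases, queues, etc. on the host
./gradlew integrationTest      # run the tests outside Docker, against those containers
docker build -t my-app .       # only build the image once the tests pass
docker-compose down            # tear the dependencies back down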
(Technically there are two ways around this. The easier of the two is the massive security disaster that is opening up a TCP-based Docker socket: your Dockerfile could then connect to that ["remote"] Docker daemon and launch containers, stop them, kill itself off, impersonate the host for inbound SSH connections, launch a bitcoin miner that outlives the container build, etc...in fact it allows any process on the host to do any of these things. The much harder, as @RaynalGobel suggests in a comment, is to try to launch a separate Docker daemon inside the container; the DinD image link there points out that it requires a --privileged container, which again you can't have at build time.)
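For completeness, the insecure TCP variant would look roughly like this; the host name and port are placeholders, and this is exactly the setup the paragraph above warns against:
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375   # host daemon exposed over unauthenticated TCP
docker -H tcp://my-docker-host:2375 ps                         # any build step (or anyone else on the network) can now do this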
I use docker locally for development. I run a few containers for Redis, Postgres, frontend compilation and backend compilation. The frontend and backend containers map files from my local machine into the containers, where a process auto-compiles them. I can then access the backend server and the frontend webserver from the containers hosting them.
My backend can be very resource-intensive, as I'm developing a task that processes a large amount of time-series data; it can take about 5-10 mins on my machine. My local machine is a 15-inch MacBook Pro, and running docker plus my development setup is really pushing it to its limits. I'm considering running docker on another Linux PC I have and connecting to it from my MacBook Pro.
I use CircleCI quite a bit, and they have a docker setup where the CI containers you run don't actually run docker themselves but are networked out to a separate dedicated machine. The only issue is that mapping volumes doesn't work too well.
How can I set this up in docker so that I can run docker commands locally that run on a separate machine?
Any ideas how I can map the directories to the other machine?
You can use SSH to run commands on another machine:
ssh user@server docker run hello-world
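Depending on your Docker version (18.09 or newer), you can also point your local docker CLI at the remote daemon over SSH instead of wrapping every command in ssh; user and server here are placeholders:
export DOCKER_HOST=ssh://user@server
docker run hello-world    # executes on the remote machine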
I would recommend against mapping volumes, as that doesn't work well. Instead, I'd simply copy the data you need to the server:
scp -r directory-to-copy/* user@server:/destination-to-copy-into
I have a Jenkins instance running inside a docker container. On the host, outside of the container, I have a bash script that I would like to run from a Jenkins pipeline inside the container, and I need the result of that script.
You can't do that. One of the major benefits of containers (and also of virtualization systems) is that processes running in containers can't make arbitrary changes or run arbitrary commands on the host.
If managing the host in some form is a major goal of your task, then you need to run it directly on the host, not in an isolation system designed to prevent you from doing this.
(There are ways to cause side effects like this to happen: if you have an ssh daemon on the host, your containerized process could launch a remote command via ssh; or you could package whatever command into a service triggered by a network request; but these are basically the same approaches you'd use to make your host system manageable by "something else", and triggering it from a local Docker container isn't different from triggering it from a different host.)
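If you do go the ssh route, the pipeline step inside the container would look something like this; the user, host alias and script path are placeholders, and on Linux you may need to add the host-gateway alias yourself:
ssh jenkins@host.docker.internal /path/to/script.sh
# host.docker.internal resolves automatically on Docker Desktop; on Linux, start the
# container with --add-host=host.docker.internal:host-gateway to get the same alias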
Right now I am setting up an application that has a deployment based upon docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each service (build a docker image)
Run integration tests (start docker compose that starts all services on special ports, run integration tests)
Stop prod images and run new images
I did this for each service, but I ran into an issue.
When I start my docker containers for integration tests, this happens inside a gitlab ci task. Each task uses a docker-based runner. I also mount my host's docker socket to be able to use docker in docker.
So my gradle docker image is started by the gitlab runner, docker is installed inside it, and all images are started using docker compose.
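For reference, a docker-executor runner with the host socket mounted can be registered roughly like this (URL and token are placeholders):
gitlab-runner register \
  --executor docker \
  --docker-image docker:latest \
  --docker-volumes /var/run/docker.sock:/var/run/docker.sock \
  --url https://gitlab.example.com/ \
  --registration-token TOKEN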
One microservice listens on port 10004. In the docker compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004. But this does not work right now.
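For reference, the relevant part of the setup looks something like this; the service and image names are placeholders, and HOST_IP stands for the host's public IP:
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  my-service:
    image: my-service:latest
    ports:
      - "11004:10004"    # host port 11004 -> container port 10004
EOF
docker-compose up -d
wget -qO- http://HOST_IP:11004/    # works from the host shell, hangs from the docker-in-docker job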
When I attach to the container that runs docker compose, while it is trying to execute the integration tests, I am not able to connect manually either by calling
wget ip:port
I just get a message saying it connected and is waiting for a response. My tests cannot connect successfully either, and my service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just this one port of this one service is broken when I try to connect from my docker-in-docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting network to host also works...
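By that I mean running the affected service on the host's network stack instead of compose's default bridge, roughly like this (image name is a placeholder; in a compose file the equivalent is network_mode: host):
docker run --network host my-service:latest    # no bridge/NAT, binds straight to the host's ports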
So has anyone else run into this kind of issue when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid docker in docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting in the subway, but I hope my description of the issue is sufficient to talk about experiences. I don't think we need any source code to find bad configurations, because it works without docker in docker and on Mac.
I figured out that docker in docker still has some weird behaviors. I fixed my issue by adding a new gitlab ci runner that is a shell runner. Therefore docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting docker images in production as I do for integration testing. So the easy fix has another benefit for me.
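Registering such a shell runner looks roughly like this (URL and token are placeholders):
gitlab-runner register \
  --executor shell \
  --url https://gitlab.example.com/ \
  --registration-token TOKEN
# jobs now run directly on the host, so docker and docker-compose talk to the host
# daemon with no docker-in-docker layer in between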
The result is a best practice to avoid pitfalls:
Only use docker in docker when there is a real need.
For example, to ensure fast I/O communication between your host docker image and the docker image of interest.
Have fun using docker (in docker (in docker)) :]