Connecting to an insecure local Docker registry in an uncontrolled CI environment - docker

I'm building a microservice that performs operations on a Docker registry.
The microservice has a test which starts a Docker registry via the registry image from Docker Hub, so the microservice can connect to it, set it up, work on it, etc.
The test fails in CI: the Docker client can't connect to the test registry because it's insecure (plain HTTP). The CI environment is dynamic, with a different random IP/port each time, and the Docker daemon is shared with other parallel tests, so having the test edit the global daemon JSON config and restart the Docker daemon seems like a bad solution.
Has anyone solved this? How do you test integration with a Docker registry in CI? Am I doomed to modify the global Docker config files and restart or trigger a reload of the config?
Some specifics:
The build tool is Bazel, running in Google Cloud Build, so the test itself runs on RBE workers in Google Cloud. Those workers are isolated, have no network access while the tests run, and I can't really configure much: it's not my machine, and it's a different random machine for each test.

We ended up starting another container that runs its own Docker daemon inside it (without mounting the external Docker daemon's socket, so it really is a second daemon instance).
We start it at our leisure, only after we know the private registry's address, and we configure that inner daemon to start up with the insecure-registry flag.
For the containers to communicate, we gave each container a name and put them on a shared network.
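Roughly, the setup looks like this sketch (the names, image tags and ports are placeholders rather than our exact configuration; recent docker:dind images also need DOCKER_TLS_CERTDIR cleared so the inner daemon exposes a plain TCP socket on 2375):
docker network create registry-test
docker run -d --name test-registry --network registry-test registry:2
docker run -d --name inner-docker --network registry-test --privileged \
  -e DOCKER_TLS_CERTDIR="" \
  docker:dind --insecure-registry=test-registry:5000
# the test container joins the same network and points its Docker client at the inner daemon, e.g.:
docker -H tcp://inner-docker:2375 pull test-registry:5000/some-image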

Related

Can't access certain services running on host machine from inside docker container

We're trying to set up a GitLab Runner, which is responsible for building and testing our web application. For running the jobs we use the Docker executor with DinD.
Our problem is now: When trying to access certain services from inside the Runner Container (docker image) we get a timeout and no response back. It includes:
logging in to our own docker registry, which is hosted on the same system
wget on our domain (which is hosted on the same system)
What we can do:
ping our domain as well as the registry
ping other domains
wget other domains
Logging into the registry and wget on our domain succeed when tried natively on the server rather than inside a Docker container.
So it looks like it might be a Docker problem.
Hope someone can help us.

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application whose deployment is based on Docker images.
I use GitLab CI to:
Test each service
Build each service
Dockerize each service (build a Docker image)
Run integration tests (start docker compose that starts all services on special ports, run integration tests)
Stop prod images and run new images
I did this for each service, but I ran into an issue.
When I start my Docker containers for integration tests, this happens within a GitLab CI task. Each task uses a Docker-based runner. I also mount my host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner. Then Docker is installed and all images are started using docker compose.
One microservice listens on port 10004. In the docker compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004, but this does not work right now.
When I attach to the container that runs docker compose while it tries to execute the integration tests, I am not able to connect manually either, by calling
wget ip:port
I just get the message "connected" and it then waits for a response; my tests cannot connect successfully either. My service does not log any message about a new connection.
When I execute this wget command in my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just this one port of one service is broken when I try to connect from my Docker-in-Docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting network to host also works...
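For comparison, roughly what I mean (the image name is a placeholder): starting the same service with plain docker run and the same published port is reachable from the DinD runner, and so is forcing host networking, while the compose-created default bridge network is the case that hangs:
docker run -d -p 11004:10004 my-service-image    # reachable from the runner
docker run -d --network host my-service-image    # also reachable; the service listens on 10004 directly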
So has anyone had a similar experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My workaround for now is to use a shell runner to avoid the Docker-in-Docker issues; everything works there as well.
So Docker in Docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting in the subway, but I hope my description of the issue is sufficient to talk about experiences. I don't think we need source code to find bad configurations, because it works without Docker in Docker and on a Mac.
I figured out that Docker in Docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner, so docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting Docker images in production as I do for integration testing, so the easy fix has another benefit for me.
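Registering that extra shell runner is just the standard gitlab-runner registration with the shell executor; a sketch with placeholder URL and token:
gitlab-runner register \
  --url https://gitlab.example.com/ \
  --registration-token <TOKEN> \
  --executor shell \
  --description "compose-on-host"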
The result is a best practice to avoid pitfalls:
Only use docker in docker when there is a real need.
For example, to ensure fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]

docker swarm http connectivity

New to Docker and Docker Swarm; trying out both.
Initially I had started a Docker daemon and was able to connect to it on the HTTP port, i.e. 2375. I had installed the Docker Cloud plugin in Jenkins, added http://daemon-IP:2375, and was able to create containers. It creates a container, does my build inside it, and destroys the container.
My query is: will I be able to connect to Docker Swarm on an HTTP port the same way I am connecting to a standalone Docker daemon? Is there any documentation on it, or is my understanding of Swarm wrong?
Please suggest.
Thanks
Yes, you can connect to a remote host the same way you currently do via the Unix socket. People very often forget that Docker has a client-server architecture and your "docker run ..." commands are basically just commands issued by the Docker client.
If you set certain environment variables:
DOCKER_HOST=tcp://ip.address.of.host:port
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/directory/where/certs/are
(The last two are only needed for TLS connections, which I would highly recommend. You'd have to set up https://docs.docker.com/engine/security/https/, which is recommended for a production environment.)
Once you've set your DOCKER_HOST environment variable, if you issue a docker command and get a response, it will be from the remote host, assuming everything is set up correctly.
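For example, something like this (the address is a placeholder; 2376 is the conventional port for a TLS-secured daemon):
export DOCKER_HOST=tcp://ip.address.of.host:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/directory/where/certs/are
docker info    # answered by the remote daemon, not the local one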

Give access to Docker Swarm cluster

Okay, here is my situation:
I created a Docker Swarm cluster using Docker Machine. I can deploy any container, etc., so basically everything is working fine. My question now is how to give someone else access to the cluster. I want other people to deploy containers on that cluster using docker-compose.
Docker Machine configures the Docker Engine on each node to be secured using TLS:
https://docs.docker.com/engine/security/https/
The client configuration can be seen by running the "docker-machine config" command; for example, the following settings are used to access the remote Docker host:
--tlsverify
--tlscacert="~/.docker/machine/certs/ca.pem"
--tlscert="~/.docker/machine/certs/cert.pem"
--tlskey="~/.docker/machine/certs/key.pem"
-H=tcp://....
It's the files under ~/.docker/machine/certs that are needed by other users who want to connect to your swarm.
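With those certs copied locally, another user can point their client (and docker-compose) at the swarm manager, for example along these lines (the address and port are placeholders; classic Swarm managers created by docker-machine typically listen on 3376):
export DOCKER_HOST=tcp://<swarm-manager-ip>:3376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/machine/certs
docker info           # should report the swarm nodes
docker-compose up -d  # deploys against the swarm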
I expect that docker will eventually create some form of user authentication and authorization.

Rancher Performance (Docker in Docker?)

Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker in Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker in Docker, but what about the rest? If I set up a docker-compose or deploy something from their catalog, is it all Docker in Docker? I've read through their documentation and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you
Rancher itself is not deployed with Docker in Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services for the catalog, Docker Machine provisioning, websocket-proxy, and MySQL. All of these can be broken out if desired, but for simplicity of getting started, it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to Boot2Docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
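Concretely, registering a host starts a privileged agent container with the host's Docker socket bind-mounted, roughly of this shape (the version tag and registration URL here are placeholders illustrating a typical setup, not an exact command):
docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:<version> https://<rancher-server>/v1/scripts/<registration-token>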
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, does a host bind mount of the Docker socket as well. We have some early experiments that used DinD to test out some concepts with Jenkins, but those were not released.
