All,
I've searched high and low for this but was not able to find a reliable answer. The question may be simple for some pros, but please help me with this...
We have a situation where we need Jenkins to be able to access and build within Docker containers. The target Docker containers are built and instantiated with a separate docker-compose file. What would be the best way of connecting Jenkins with the Docker containers in each of the scenarios below?
Scenario 1: Jenkins is set up on the host machine itself, and two Docker containers are instantiated using their own docker-compose file. How can Jenkins connect to the containers in this situation? The host cannot ping the Docker containers since the two are on different networks (the host on the physical network and the containers on Docker's internal network), hence presumably no SSH either?
Scenario 2: We would prefer Jenkins to be in its own container (with its own docker-compose file) so we can replicate the setup in other environments. How can Jenkins connect to the containers in this situation? The Jenkins container cannot ping the other Docker containers even though I use the same network name in both docker-compose files; instead, Compose creates an additional bridge network of its own. E.g. in this scenario, if I have network-01 in docker-compose file 1 and mention the same name in docker-compose file 2, Docker creates an additional network for the second Compose project. As a result, I cannot ping the Node/Mongo containers from the Jenkins container (so I guess no SSH either; see the sketch below).
Note 1: I'm exposing port 22 on both Docker images, i.e. Node and Mongo...
Note 2: Our current setup has Jenkins on the host machine with Docker volumes exposed from the containers to the host. Is this the preferred approach?
Am I missing the elephant in the room, or is the solution really this complicated (it shouldn't be!)?
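For reference, the usual fix for scenario 2 is to create the network once, outside of Compose, and declare it as external in both docker-compose files so that neither project creates its own bridge. A minimal sketch, where the file layout, image tags, and service names are just examples:

docker network create network-01

# docker-compose-01.yml
version: "2"
services:
  node:
    image: node
    networks:
      - network-01
networks:
  network-01:
    external: true

# docker-compose-02.yml declares the very same external network for Jenkins
version: "2"
services:
  jenkins:
    image: jenkins/jenkins:lts
    networks:
      - network-01
networks:
  network-01:
    external: true

With both projects attached to the pre-created network, the Jenkins container can reach the Node/Mongo containers by their service names, so ping (and SSH, if you really need it) should work.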
I have 10 different hosts, and each host runs many Docker containers, with each group of containers managed by a docker-compose file. Containers within the same docker-compose project can communicate with each other, and containers on the same host can communicate even when they come from different docker-compose projects. But now I want the ability to reach a container hosted on a different machine. Other than DNS, is there any other way?
docker-compose is supposed to work only within one host.
If you want your Docker containers to run on different hosts, you should consider using Kubernetes or Docker Swarm.
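For cross-host container communication specifically, Swarm Mode's overlay networks are the built-in mechanism. A rough sketch, where the IP address and names are placeholders:

# on the manager host
docker swarm init --advertise-addr 10.0.0.1
# on each of the other hosts, using the join token printed by the command above
docker swarm join --token <worker-token> 10.0.0.1:2377
# create an attachable overlay network that spans the cluster
docker network create -d overlay --attachable mynet
# containers attached to mynet can reach each other by name across hosts
docker run -d --name web --network mynet nginx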
I've got a Docker Swarm cluster where the manager nodes are in constant 'drain' mode, i.e. no container will ever run on them.
Now I'm running Jenkins in a container on a worker node and I'd like Jenkins to be able to deploy images to the swarm cluster.
My reasoning so far:
Mounting /var/run/docker.sock is obviously not an option as the docker manager and Jenkins container are on different hosts and the local docker is not a swarm manager.
Connecting from the Jenkins container to the local Docker host using TCP has the same issue.
Adding the Jenkins container to the host network (--network host) seems not to be possible: a container cannot also be in an overlay network at the same time.
I assume this is quite a common use case, yet I haven't found a solution and maybe someone here has an idea.
Thanks!
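One sketch of an option the reasoning above leaves open: point the Docker client inside the Jenkins container at a manager node's daemon over TLS-secured TCP, rather than at the local daemon. The hostname, port, and cert path below are placeholders, and this assumes the manager's daemon is configured to listen on TCP with TLS client verification:

# inside the Jenkins container (or a pipeline step)
export DOCKER_HOST=tcp://swarm-manager.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/var/jenkins_home/.docker
# service commands now run against the swarm manager
docker service update --image registry.example.com/myapp:latest myapp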
After looking through Docker's official swarm explanations, GitHub issues, and Stack Overflow answers, I'm still at a loss as to why I'm having this problem.
Issue at hand: docker-compose up starts services outside the swarm even though the swarm is active and has 2 nodes.
I'm using Docker version 1.12.1.
Following the swarm tutorial, I was able to start and scale my swarm using docker service create without any issues.
Running docker-compose up with a version 2 docker-compose.yml results in services starting outside of the swarm: I can see them through docker ps but not docker service ls.
I can see docker-machine suggested as the tool that solves this problem, but then again it needs VirtualBox to be installed.
So my questions would be:
Can I use docker-compose with Docker Swarm (NOT plain docker-engine) without docker-machine and without the experimental build/bundle functionality?
If docker service create can start a service on any node, is that an indication that the swarm's network configuration is correct?
What are the advantages/disadvantages of docker-machine versus the experimental build functionality?
1) No. Docker Compose isn't integrated with the new Swarm Mode yet. Issue 3656 in GitHub is tracking that. If you start containers on a swarm with Docker Compose at the moment, it uses docker run to start containers, which is why you see them all on one node.
2) Yes. Actually, you can use docker node ls on the manager to confirm all the nodes are up and active, and docker node inspect to check a particular node; you don't need to create a service to validate the swarm (see the sketch after this answer).
3) Docker Machine is also behind the 1.12 release, so if you start a swarm with Docker Machine it will be the 'old' type of swarm. The old Docker Swarm product needed a whole lot of extra setup (a key-value store, TLS, etc.) which Swarm Mode does for free.
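To make point 2 concrete, validating the swarm without creating a service can look like this (the node name is made up):

docker node ls   # lists every node with its status and availability
docker node inspect --pretty node-1   # human-readable details for one node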
1) You can't start services using docker-compose on the new Docker "Swarm Mode". There's a feature to convert a docker-compose file to the new DAB format, which is understood by the new swarm mode, but that's incomplete and experimental at this point. You basically need to use bash scripts to start services at the moment.
2) The nodes in a swarm (swarm mode) interact using their own overlay network. It's the one named ingress when you do docker network ls. You need to set up your own overlay network to run services in, e.g.:
docker network create -d overlay mynet   # user-defined overlay network for services
docker service create --name serv1 --network mynet nginx   # service attached to that network
3) I'm not sure what feature you mean by "experimental build". docker-machine is just a way to create hosts (the nodes). It facilitates setting up the Docker daemon on each host and the certificates, and allows some basic maintenance (renewing the certs, stopping/starting a host if you're the one who created it). It doesn't create or manage services, volumes, or networks; that's the job of the Docker API.
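To illustrate that last point, a typical docker-machine workflow is roughly the following (the driver and node name are just examples):

# create a host with the Docker daemon and certificates set up
docker-machine create --driver virtualbox node-1
# point the local Docker client at that host's daemon
eval $(docker-machine env node-1)
docker ps   # now talks to the daemon on node-1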
I saw a couple of tutorials on continuous deployment (on docker.com, on codecentric.de, on devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave). The master is in a Docker container and the slave is on the host machine.
A Jenkins server in a Docker container. They set up a link to the host, and using that link Jenkins can create or recreate Docker images (see the sketch after this question).
In the first approach, I do not understand why they set up an additional Jenkins server inside a Docker container. Isn't it enough to just have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me because a process from a container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.
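For reference, the "link to the host" in the second approach usually means bind-mounting the host's Docker socket into the Jenkins container, roughly like this (the image tag and volume name are just examples):

docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts

The Docker CLI inside the container then drives the host's daemon directly, which is exactly the trade-off the question is weighing.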
Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker-in-Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker-in-Docker, but what about the rest? If I set up a docker-compose file or deploy something from their catalog, is it all Docker-in-Docker? I've read through their documentation, and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you
Rancher itself is not deployed with Docker-in-Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services for the catalog, Docker Machine provisioning, websocket-proxy, and MySQL. All of these can be broken out if desired, but for simplicity of getting started, it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind-mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to boot2docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, the Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
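For illustration, the agent bootstrap described above is typically started with a command along these lines (the version tag, server URL, and registration token are placeholders):

docker run -d --privileged \
  -v /var/run/docker.sock:/var/run/docker.sock \
  rancher/agent:v1.2.0 \
  http://rancher-server.example.com:8080/v1/scripts/<registration-token>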
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, does a host bind mount of the Docker socket as well. We have some early experiments that used DinD to test out some concepts with Jenkins, but those were not released.