What are benefits of having jenkins master in a docker container? - jenkins

I have seen a couple of tutorials on continuous deployment (on docker.com, on codecentric.de, on devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins server (master and slave): the master runs in a Docker container and the slave on the host machine.
Run the Jenkins server in a Docker container, linked to the host so that Jenkins can create or recreate Docker images through that link.
In the first approach, I do not understand why they set up an additional Jenkins server inside a Docker container. Isn't it enough to have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me, because a process inside the container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.

Related

Best way to setup Jenkins with multiple Docker containers?

All,
I've searched high and low for this but was not able to find a reliable answer. The question may be simple for some pros, but please help me with it...
We have a situation where we need Jenkins to be able to access and build within Docker containers. The target Docker containers are built and instantiated with a separate docker-compose file. What would be the best way of connecting Jenkins with the Docker containers in each of the scenarios below?
Scenario 1: Jenkins is set up on the host machine itself. Two Docker containers are instantiated using their own docker-compose file. How can Jenkins connect to the containers in this situation? The host cannot ping the Docker containers since the two are on different networks (the host on the physical network, the containers on Docker's internal network and DNS), so presumably no SSH either?
Scenario 2: We would prefer Jenkins to be in its own container (with its own docker-compose file) so we can replicate the same setup in other environments. How can Jenkins connect to the containers in this situation? The Jenkins container cannot ping the other Docker containers even though I use the same network name in both docker-compose files; instead, Docker creates an additional bridge network of its own. For example, if I have network-01 in docker-compose 01 and mention the same network in docker-compose 02, Docker creates an additional network for Compose 2. As a result, I cannot ping the Node/Mongo containers from the Jenkins container (so I guess no SSH either).
Note 1: I'm exposing port 22 on both Docker images, i.e. Node and Mongo...
Note 2: Our current setup has Jenkins on the host machine with Docker volumes exposed from the container to the host. Is this the preferred approach?
Am I missing the big elephant in the room, or is the solution complicated (it shouldn't be!)?
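For what it's worth, the duplicate-network behaviour described in Scenario 2 is Compose's default: each compose file creates its own project-scoped network unless the network is declared external. A minimal sketch of the second compose file (the network name `network-01` is from the question; note that Compose normally prefixes created networks with the project name, so the actual name may need adjusting):

```yaml
# docker-compose-02.yml (sketch): join the network created by the first
# compose file instead of letting Compose create a project-scoped copy
version: "3.5"
services:
  jenkins:
    image: jenkins/jenkins:lts
    networks:
      - network-01
networks:
  network-01:
    external: true   # reuse the existing network; do not create another
    name: network-01
```

With both sets of containers on one shared network, they resolve each other by service name, so ping and SSH between them work.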

How to use a remote docker server from jenkins?

I have two servers: an Amazon Linux 2 AMI running Jenkins and a RHEL box running Docker.
I would like to configure Jenkins to build and deploy an application on the Docker server. If I clone my repository on the Docker server and run docker-compose build followed by docker-compose up, everything works fine.
I found some documentation about using a remote Docker server with Jenkins, but it doesn't work. The Docker API is already open.
Strictly speaking, you can connect to a remote Docker daemon by enabling the Remote API over TCP and pointing the docker client at it via the DOCKER_HOST environment variable. I would also suggest configuring encryption and authentication for an additional layer of security, and, if you can, restricting access so the API is reachable only from your Jenkins slaves.
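The remote-daemon setup described above boils down to a few client-side settings. A sketch, assuming the daemon on `docker-host` listens on port 2376 with TLS enabled (the hostname and certificate path are placeholders):

```shell
# Point the local docker client at the remote daemon over TLS
export DOCKER_HOST=tcp://docker-host:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/certs   # ca.pem, cert.pem, key.pem live here

# Every subsequent docker / docker-compose command now runs
# against the remote host instead of the local daemon
docker ps
docker-compose up -d
```

Unsetting DOCKER_HOST switches the client back to the local daemon, which makes it easy to scope the remote connection to a single Jenkins build step.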
But as stated on the comment by David Maze, I don't think this is the best approach for deployment of containers as it carries some security risks that can compromise your servers.
I would suggest that if you are planning on running production workloads and you need a full pipeline for managing the lifecycle of your applications running on containers, you research Docker Swarm or Kubernetes as they are better alternatives suited for achieving this.

Deploy docker windows container from CI to Windows Server 2016

I'm trying to wrap my head around Docker containers, specifically how to deploy them to a Docker container host. I know there are lots of options here and ultimately we'll switch to a more common deployment approach (e.g. to Azure, AWS) but this is a temporary requirement. We're using windows containers.
I have a container image that I've created and will be recreated on each build as part of a Jenkins job (our Jenkins instance is hosted on a container-ready windows server 2016 box). I also have a separate container-ready Windows Server 2016 box which is where we intend to run the containers from.
However, I'm not sure how I can have the containers that our Jenkins box produces automatically pushed to our separate 2016 host. Ideally, I'd like to avoid using a container registry, unless there is a low-friction, on-premise option available.
Container registries are the way to distribute Docker images. Tooling is built around registries, it would be counterproductive to work against the concept.
But docker image save and docker image load could get you started, as save writes the image to a tar file that you can transfer between the hosts. Once you have copied the image to the other box and loaded it, you can start it up with the usual docker run command, or docker-compose up.
If your case is not trivial though and you start having multiple Docker hosts to run the containers, container orchestrators like Docker Swarm, Kubernetes are the way to go - or the managed versions of those, like Azure ACS. That rabbit hole is deeper though than I can answer in a single SO answer :)
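The registry-less handoff described above can be scripted in a few commands. A sketch with placeholder names (`myapp:build-42` for the image, `prod-host` for the target box):

```shell
# On the Jenkins box: serialize the freshly built image to a tarball
docker image save -o myapp.tar myapp:build-42

# Copy it to the target host (any transfer mechanism works)
scp myapp.tar prod-host:/tmp/

# On the target host: load the image and run it as usual
docker image load -i /tmp/myapp.tar
docker run -d --name myapp myapp:build-42
```

A Jenkins post-build step can run exactly this over SSH, which keeps the temporary setup registry-free until a proper registry is introduced.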

Rancher Performance (Docker in Docker?)

Looking at Rancher, what is the performance like? I guess my main question is: is everything deployed in Rancher Docker-in-Docker? After reading http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ I'm trying to stay away from that idea. It looks like the Rancher CI pipeline with Docker/Jenkins is Docker-in-Docker, but what about the rest? If I set up a docker-compose or deploy something from their catalog, is it all Docker-in-Docker? I've read through their documentation and this simple question has still just flown over my head. Any guidance would be much appreciated.
Thank you
Rancher itself is not deployed with Docker-in-Docker (DinD). The main components of Rancher, rancher/server and rancher/agent, are both normal containers. The server, in a normal deployment, runs the orchestration piece and a few other key services for the catalog, Docker Machine provisioning, websocket-proxy and MySQL. All of these can be broken out if desired, but for simplicity of getting started it's all in one. We use s6 to manage the orchestration and database processes.
The rancher/agent container is privileged and requires the user to bind-mount the host's Docker socket. We package a Docker binary in the container and use it to communicate with the host on startup. It is similar to the way a Mac talks to Boot2Docker: the binary is just a client talking to a remote Docker daemon. Once the agent is bootstrapped, it communicates back to the Rancher server container over a websocket connection. When containers and stacks are deployed, Rancher server sends events to the agents, which then call the host's Docker daemon for deployment. The deployed containers run as normal Docker containers on the host, just as if the user had typed docker run .... In fact, a neat feature of Rancher is that if you do type docker run ... on the host, the resulting container will show up in the Rancher UI.
The Jenkins entry in the Rancher catalog, when using the Swarm plugin, does a host bind mount of the Docker socket as well. We have some early experiments that used DinD to test out some concepts with Jenkins, but those were never released.
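The socket bind mount the agent relies on can be demonstrated in isolation: any container given the host's /var/run/docker.sock is just a client of the host daemon, not a nested Docker. A sketch (the `docker:cli` image is merely one convenient client to illustrate with):

```shell
# List the host's containers from inside a container. Anything started this
# way runs as a sibling on the host daemon, which is exactly why a plain
# `docker run ...` typed on the host shows up in Rancher's UI.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

Since no second daemon is started, there is no DinD overhead: the performance characteristics are those of ordinary containers on the host.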

Dockerized jenkins is a good choice?

As mentioned in the title, I'm thinking about a dockerized Jenkins. I have a running container that runs all tests, but now I want to run some deployment jobs.
The files (.py, .conf, .sh) will be copied into folders which are mounted by another container (the app container). I have also seen some people recommend not using Docker for this.
Now I'm wondering whether I should continue to run Jenkins in a container (in which case I must find a way to run the deployment script) or install it directly on the server?
If you are running dockerized Jenkins in production, it is good practice to mount its volume on the Docker host.
I personally do not prefer dockerized Jenkins for production, due to the non-static IP of the Jenkins container and reliability issues with Docker networking. For non-production use, I do dockerize Jenkins.
We're experimenting with containerizing Jenkins in production; the flexibility of being able to easily set up or move instances offsets the learning pain, and that pain is:
1 - Some build jobs are themselves containerized, requiring that you run Docker-in-Docker. This is possible by passing the host's docker.sock into the Jenkins container (more reading: https://getintodevops.com/blog/the-simple-way-to-run-docker-in-docker-for-ci). It requires that the host and the Jenkins container run identical versions of Docker, but I can live with that.
2 - SSH keys are a bigger issue. SSH agent forwarding in Docker is notorious for its unreliability, and we've always copied keys into containers (ignoring security questions for the context of this question). In an on-the-host Jenkins instance we put our SSH keys in Jenkins' home folder and everything works seamlessly. But dockerized Jenkins has its home folder inside a Docker volume, which is owned by the host system, so the keys' permissions are too open. We got around this by copying the keys to a folder outside Jenkins' home, chown/chmod'ing those keys to the Jenkins container user, then adding the key path to the container's /etc/ssh/ssh_config.
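The socket-mounting approach from point 1 typically looks like this when starting the Jenkins container (the image tag and volume name are assumptions; the container also needs a docker CLI installed and permission on the socket):

```shell
# The host's Docker socket is bind-mounted so builds inside Jenkins
# talk directly to the host daemon rather than to a nested Docker
docker run -d --name jenkins \
  -p 8080:8080 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
```

The named volume `jenkins_home` is what makes the instance easy to move: the same volume can be reattached to a fresh container on another host.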
