Docker and Jenkins

I am working with Docker and Jenkins, and I'm trying to do two main tasks:
Control and manage Docker images and containers (run/start/stop) with Jenkins.
Set up a development environment in a Docker image, then build and test my application, which lives in the container, using Jenkins.
While surfing the net I found many solutions:
Run Jenkins as a container and link it with other containers.
Run Jenkins as a service and use the Jenkins plugins provided to support Docker.
Run Jenkins inside the container which contains the development environment.
So my question is: what is the best solution, or can you suggest another approach?
One more question: I heard about running a container inside a container. Is that good practice, or better to avoid?

Running Jenkins as a containerized service is not a difficult task. There are many images out there that allow you to do just that. It took me just a couple of minutes to get Jenkins 2.0-beta-1 running in a container, compiled from source (the image can be found here). I particularly like this approach; you just have to make sure to use a data volume or a data container as jenkins_home to make your data persist.
Things become a little bit trickier when you want to use this Jenkins - in a container - to build and manage containers itself. To achieve that, you need to implement something called docker-in-docker, because you'll need a docker daemon and client available inside the Jenkins container.
There is a very good tutorial explaining how to do it: Docker in Docker with Jenkins and Supervisord.
Basically, you need to make the two processes (Jenkins and Docker) run in the container, using something like supervisord. It's doable and claims to offer good isolation, etc... but it can be really tricky, because the docker daemon itself has some dependencies that need to be present inside the container as well. So, only using supervisord and running both processes is not enough; you'll need to make use of the DinD project itself to make it work... AND you'll need to run the container in privileged mode... AND you'll need to deal with some strange DNS problems...
For my personal taste, it sounded like too many workarounds to make something simple work, and having two services running inside one container seems to break Docker good practices and the principle of separation of concerns, something I'd like to avoid.
My opinion got even stronger when I read this: Using Docker-in-Docker for your CI or testing environment? Think twice. It's worth mentioning that this last post is from the DinD author himself, so he deserves some attention.
My final solution is: run Jenkins as a containerized service, yes, but treat the docker daemon as part of the provisioning of the underlying server, not least because your docker cache and images are data that you'll probably want to persist, and they are fully owned and controlled by the daemon.
With this setup, all you need to do is mount the docker daemon socket in your Jenkins image (which also needs the docker client, but not the service):
$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock -v local/folder/with/jenkins_home:/var/jenkins_home namespace/my-jenkins-image
Or with a docker-compose volumes directive:
---
version: '2'
services:
  jenkins:
    image: namespace/my-jenkins-image
    ports:
      - '8080:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - local/folder/with/jenkins_home:/var/jenkins_home
  # other services ...
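To sanity-check this setup, you can exec into the running Jenkins container and talk to the host daemon through the mounted socket. A minimal sketch, assuming the container is named jenkins and its image ships the docker CLI (as described above):
# Quick check that the Jenkins container can reach the host's docker daemon
# (container name "jenkins" is an assumption for this example):
docker exec -it jenkins docker version   # client runs inside, daemon is the host's
docker exec -it jenkins docker ps        # lists containers running on the host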

Related

Create a Dockerfile from NiFi Docker Container

I'm pretty new to using Docker. I'm needing to deploy a NiFi instance through my employer, but the internal service we need to use requires a Dockerfile, not an image.
The service we're using requires the Dockerfile because each time the repository we're using is updated, the service is pointed to the Dockerfile and initiates the build process from it, then runs/operates the container.
I've already set up the NiFi flow to how it needs to operate, I'm just unsure of how to get a Dockerfile from an already existing container (or if that is even possible?)
I was looking into this myself; apparently there is no real way to do it, but you can inspect the docker container and recover pretty much all the commands used to create it, except for the base OS, which is easy to find: you can spawn a bash shell in the container and run something like uname -a, then use that to base your own docker image on. Usually you can find the original Dockerfile on GitHub, though.
docker inspect <image>
Or you can do it through the Docker Desktop UI.
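In addition to docker inspect, docker history can recover most of the commands that created the image's layers, which gets you close to a reconstructed Dockerfile. A short sketch (the image name is a placeholder):
# Show the command behind each layer of the image (name is a placeholder):
docker history --no-trunc <image>
# Inspect also reveals ENTRYPOINT, CMD, ENV, exposed ports and volumes:
docker inspect --format '{{json .Config}}' <image>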
You can use the Dockerfile that is in the NiFi source code; see this directory: https://github.com/apache/nifi/tree/main/nifi-docker/dockerhub
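If you go that route, a rough sketch of building from it looks like the following (the tag is just an example, and the exact build args or defaults may differ between NiFi versions, so check the files in that directory first):
# Rough sketch: build NiFi from the Dockerfile shipped in the NiFi repository
# (tag "my-nifi" is an example; build args may be required depending on version):
git clone https://github.com/apache/nifi.git
cd nifi/nifi-docker/dockerhub
docker build -t my-nifi .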

Docker build can't find docker to run tests

I have a NodeJS application that is using ioredis to connect to redis and publish data and other redisy things.
I am trying to write a component test against redis and was able to create a setup/teardown script via jest that runs redis via docker on a random port and tears it down when the tests are done via docker run -d -p 6379 --rm redis and docker stop {containerId}.
This works great locally, but we have the tests running in a multi-stage build in our Dockerfile:
RUN yarn test
When I try to build it via docker build ., it goes great until it gets to the tests and then complains with the following error: /bin/sh: docker: not found
Hence, Docker is unavailable to the docker-build process to run the tests?
Is there a way to run docker-build to give it the ability to spin up sibling processes during the process?
This smells to me like a "docker-in-docker" situation.
You can't spin up siblings, but you can spawn a container within a container by doing some tricks (you might need to do some googling to get it right):
install the docker binaries in the "host container"
mount the docker socket from the actual host inside the "host" container, like so docker run -v /var/run/docker.sock:/var/run/docker.sock ...
But you won't be able to do it in the build step, so it won't be easy for your case.
I suggest you prepare a dedicated build container capable of running nested containers, which would basically emulate your local env, and use that in your CI. Still, you might need to refactor your process a bit to make it work.
Good luck :)
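One way to restructure it, sketched under assumptions (the stage and image names below are made up, and the image would need the docker CLI installed): build the app image up to a build stage without running the tests inside docker build, then run the tests in a container that has the host's docker socket mounted, so the jest setup script can start its redis container as a sibling:
# Illustrative restructuring (stage/image names are assumptions):
docker build --target build -t myapp-build .
# Run the tests where the host daemon is reachable, so the jest setup can
# spin up redis as a sibling container (image must contain the docker CLI):
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myapp-build yarn test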
In my practice, tests shouldn't be concerned with initializing the database; they should only be concerned with how to connect to it, so you just pass your db connection data via environment variables.
The way you are doing it won't scale: imagine that you need a lot more services for your application; it will be difficult and impractical to start them all via tests.
When you are developing locally, it's your responsibility to have the services running before doing the tests.
You can have docker compose scripts in your repository that create and start all the services you need when you start developing.
And when you are using CI in the cloud, you would still use docker containers and run the tests in them (node container with your tests, redis container, mysql container, etc...) and again just pass the appropriate connection data via environment variables.
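As a rough illustration of that idea (the container name, port and environment variable names here are made up): start redis outside the test process and hand the connection details to the tests through the environment:
# Rough illustration; names, port and env vars are examples only:
docker run -d --name test-redis -p 6379:6379 redis
REDIS_HOST=localhost REDIS_PORT=6379 yarn test
docker rm -f test-redis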

Installing and Running docker in a Docker container running in Openshift

I am currently working on the following scenario
I am trying to set up a container in OpenShift that runs a Jenkins that is itself able to run docker, to make use of declarative pipelines where the build runs in its own docker container. This basically makes it necessary to install and run docker inside this container.
I have been working on it for quite some time now and checked dozens of posts and threads online, but I have not been able to accomplish it. Basically I got this far:
I can install docker in my container (from the base image openshift/jenkins-2-centos7:latest).
I can't get docker to run, as this makes use of systemctl.
Now I read that systemctl does not work inside docker containers, or is at least highly unrecommended, as it interferes with PID 1 in the system. Without
systemctl start docker
I am left with docker being unable to connect to the daemon (as expected) and the error message
Can't connect to docker daemon. Is 'docker -d' running on this host?
So I tried to set up the daemon myself, using the following in my Dockerfile:
RUN usermod -aG docker $(whoami)
RUN dockerd -H unix:///var/run/docker.sock
which will also not work, telling me that cgroups cannot be mounted. After some more research I found that this could be handled with the cgroupfs-mount script from
https://github.com/tianon/cgroupfs-mount/tree/master
But here too I had no luck, leaving me with the following error:
Error starting daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.4.21: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
Now, after hours, I am out of ideas. Does anyone have an idea how to make docker work inside of OpenShift? I would be really grateful.
I am trying to set up a container in OpenShift that runs a Jenkins that is itself able to run docker, to make use of declarative pipelines where the build runs in its own docker container. This basically makes it necessary to install and run docker inside this container.
I don't think your conclusion here is the only possibility, and what I'll describe below is an easier approach to get what (I think) you want! :) If you have any other use cases beyond the three I'll describe, let me know and I'll try to update to cover them:
Pipelines running in their own containers
Running additional containers from Pipelines
Building container images from Pipelines
Pipelines running in their own containers
For this case, there's the excellent Kubernetes plugin.
With this plugin, you add a Kubernetes/OpenShift cloud to the Jenkins global config. This can either be the one in which Jenkins is running (if you use the Jenkins image provided by OpenShift, this gets added by default at least), or an external cluster.
Inside that configuration, you can define PodTemplates (again, a couple of examples are provided in the Jenkins image provided by OpenShift), or, I think, you can also specify them directly in your pipeline. When your pipeline requests a node/agent with a label that matches one of these (and there are no long-running agents that match), a pod will be created from that template, and your pipeline execution will happen inside a container in that pod. Once it's no longer needed, it will be deprovisioned again.
Here are the pipeline steps exposed by this plugin: https://jenkins.io/doc/pipeline/steps/kubernetes/
Running additional containers from Pipelines
As part of your pipeline, you may want to run some tests, and those may expect to be able to interact with e.g. a database. You can create resources for that in your OpenShift project (e.g. a Deployment & expose it with a Service), and tear them down after. The openshift-client plugin is very useful here and has docs on how to interact with OpenShift.
Building container images from Pipelines
If your goal is to build container images from pipelines, remember that OpenShift also exposes this capability (depending on the security configuration) through Builds. Just like in the previous section, you can use the openshift-client plugin to create and trigger builds.
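For illustration, the same build capability can be exercised directly with the oc CLI outside Jenkins; the openshift-client plugin exposes equivalent steps from pipelines. A minimal sketch (the build config name "myapp" is an example, and your cluster's permissions may restrict this):
# Minimal sketch with the oc CLI (build config name "myapp" is an example):
oc new-build --binary --name=myapp          # create a binary build config once
oc start-build myapp --from-dir=. --follow  # trigger a build from local sources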
For more information on the Jenkins image that's maintained by OpenShift (and generally how to do useful things in Jenkins on OpenShift), there's this dedicated page in the OpenShift docs.
You have this article by @jpetazzo, from the Docker team, about Docker-in-Docker (DinD):
article:
The primary purpose of Docker-in-Docker was to help with the development of Docker itself. Many people use it to run CI (e.g. with Jenkins), which seems fine at first, but they run into many “interesting” problems that can be avoided by bind-mounting the Docker socket into your Jenkins container instead.
DinD Repo:
This work is now obsolete, thanks to the combined efforts of some amazing people like @jfrazelle and @tianon, who also are black belts in the art of putting IKEA furniture together.
If you want to run Docker-in-Docker today, all you need to do is:
docker run --privileged -d docker:dind
So here is an article using another approach to build docker containers with Jenkins inside a docker container:
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
So you may want to adapt one of these solutions to your OpenShift scenario. I hope it solves your issue.
You'll need a privileged pod running Jenkins which mounts the OpenShift node's docker socket. This is a bad idea, as Jenkins will launch containers outside Kubernetes' semantics and control.
Why not use the S2I (source-to-image) service shipped with OpenShift?
Regards.
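If you did go down that (discouraged) road anyway, the OpenShift side would roughly involve granting the Jenkins service account the privileged SCC before mounting the node's socket into the pod. A sketch only; the service account and project names are assumptions, and this needs cluster-admin rights:
# Discouraged, sketched for completeness (names are assumptions):
oc adm policy add-scc-to-user privileged -z jenkins -n <your-project>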

Any reasons to not use Docker Swarm (instead of Docker-Compose) on a single node?

There's Docker Swarm (now built into Docker) and Docker-Compose. People seem to use Docker-Compose when running containers on a single node only. However, Docker-Compose doesn't support any of the deploy config values (see https://docs.docker.com/compose/compose-file/#deploy), which include mem_limit and cpus, and those seem nice/important to be able to set.
So maybe I should therefore use Docker Swarm, even though I'm deploying on a single node only? Also, the installation instructions will then be simpler for other people to follow (they won't need to install Docker-Compose).
But maybe there are reasons why I should not use Swarm on a single node?
I'm posting an answer below, but I'm not sure if it's correct.
Edit: Please note that this is not an opinion based question. If you have a look at the answer below, you'll see that there are "have-to" and "cannot-do" facts about this.
For development, use Docker-Compose, because only Docker-Compose is able to read your Dockerfiles and build images for you; Docker Stack instead needs pre-built images. Also, with Docker-Compose you can easily start and stop single containers, with docker-compose kill ... and ... start .... This is useful during development (in my experience), for example to see how the app server reacts if you kill the database. Then you don't want Swarm to auto-restart the database directly.
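For example (the service name db is an assumption about your docker-compose.yml):
# Kill just the database service during development ("db" is an assumed name):
docker-compose kill db
# ...watch how the app server behaves, then bring the database back:
docker-compose start db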
In production, use Docker Swarm (unless: see below), so you can configure mem limits. Docker-Compose has less functionality than Docker Swarm (no mem or cpu limits, for example) and doesn't have anything that Swarm does not have (right?). So there is no reason to use Compose in production. (Except maybe if you already know how Compose works and don't want to spend time reading about the new Swarm commands.)
Docker Swarm doesn't, however, support .env files like Docker-Compose does. So you cannot have e.g. IMAGE_VERSION=1.2.3 in an .env file and then in the docker-compose.yml file have: image: name:${IMAGE_VERSION}. See https://github.com/moby/moby/issues/29133 - instead you'll need to set env vars "manually": IMAGE_VERSION=SOMETHING docker stack deploy ... (This actually made me stick with Docker-Compose, plus the fact that I couldn't reasonably quickly figure out how to view a container's log via Swarm; Swarm seemed more complicated.)
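A sketch of that workaround (the stack name and compose file name are examples, and the .env trick assumes simple KEY=VALUE lines without spaces):
# Set the variables in the shell before deploying (names are examples):
IMAGE_VERSION=1.2.3 docker stack deploy -c docker-compose.yml mystack
# Or source an existing .env file first (simple KEY=VALUE lines only):
export $(grep -v '^#' .env | xargs) && docker stack deploy -c docker-compose.yml mystack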
In addition to @KajMagnus' answer, I should note that Docker Swarm still doesn't support Linux capabilities the way Docker [Compose] does. You can learn about this issue and dive into the Docker community discussions here.

Jenkins-node as docker container

The jenkins-node is a docker container on which the jobs are run. A jenkins job running in the dockerized jenkins-node checks the project out of svn/git and runs the build and tests in other docker containers launched by the job. In doing so, the jenkins job mounts (via "docker run -v ...:...") files/directories from the checked-out project into the build container. This sounds like docker-in-docker, but according to http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ docker-in-docker is not good in CI. With the recommended approach (mount the host's docker socket into the jenkins-node container), I'm facing the problem that the mounted files appear in the build container as empty directories. I think it's because these files are not known to the host (they are checked out inside the jenkins-node container). Providing the --privileged flag doesn't help here.
However, the 'evil' docker-in-docker approach works fine with this scenario. Am I doing something wrong, or is docker-in-docker the way to go here?
With "expose the docker socket" approach all volume paths are going to be relative to the host. So if you need to access something in the jenkins-node container you have two options:
make sure the checkout directory is a volume, and use --volumes-from jenkins-node as an argument to all the other docker containers. From your question it sounds like the containers created by the test suite would be configured from the app repos, so this is probably not a good option.
make the checkout directory a host-mounted volume, -v /git/checkouts:/path/in/jenkins-node/container, when you start jenkins-node. That way the files will actually end up on the host (not in the jenkins-node container), and you'll be able to access them at the host path (see the sketch below).
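A minimal sketch of that second option, with illustrative paths and image names only: the checkouts live on the host, so both the jenkins-node container and the build containers refer to the same host directory:
# All paths and image names are illustrative:
# jenkins-node sees the checkouts under /workspace, but they live on the host:
docker run -d --name jenkins-node -v /git/checkouts:/workspace my-jenkins-node-image
# a build container launched by a job must reference the *host* path again:
docker run --rm -v /git/checkouts/myproject:/app my-build-image make test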
I would also say that the article you're referencing is more of a caution. DinD is still done quite a bit; sometimes it's even necessary. It's not the worst thing ever, just be aware that it's not a silver bullet and does come with its own set of issues/problems.
