Jenkins with Docker plugin not pulling Docker images from private registry

We have a Jenkins setup with the Docker plugin installed and want to run our build jobs in Docker Containers based on private images.
Here is what we have:
Jenkins master runs on a "bare metal" VM, with no containerization
We have a second VM with a Docker Engine running on it; the Docker Engine port on this VM is exposed and reachable from the Jenkins master via TCP
We created several Docker Templates (in the global Jenkins settings) and can use them in our build jobs, as long as we use the "Never pull" strategy for the images
The problem occurs when we try to pull an image from our private registry (we use Artifactory for this, and it is accessible from the Docker Engine, since we can successfully push images from our Docker VM).
Whenever we start a job in Jenkins that uses such an image, one that should always be pulled from the private registry, we see All nodes of label ‘OUR_IMAGE_LABEL’ are offline and the job hangs forever.
The strange thing is that we don't see anything related to such a job in the Jenkins log (/var/log/jenkins/jenkins.log on the Jenkins master), nor do we see anything in the Docker logs (/var/log/messages on the VM with the Docker Engine).
Things work perfectly fine if we switch back to the "Never pull" strategy and have the image available locally on the Docker Engine.
Any ideas
why things are not working?
how we could get at least some log messages about what Jenkins is doing when it shows All nodes of label ‘OUR_IMAGE_LABEL’ are offline? (A registry sanity check is sketched below.)
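One way to take Jenkins out of the equation is to run the pull by hand against the same remote Docker Engine endpoint the plugin uses. A minimal sketch, assuming the engine listens on tcp://DOCKER_VM:2375 and the registry is reachable as artifactory.example.com (both are placeholders, not values from the setup above):

# Point the docker CLI at the remote engine the Jenkins Docker cloud talks to
export DOCKER_HOST=tcp://DOCKER_VM:2375

# Log in with the same credentials configured for the registry in Jenkins
docker login artifactory.example.com

# Pull the exact image/tag referenced by the Docker Template
docker pull artifactory.example.com/my-project/my-build-image:latest

If the manual pull succeeds, the usual suspect is missing or wrongly scoped registry credentials in the Docker cloud/template configuration; for more log output, a custom log recorder for the Docker plugin's classes can be added under Manage Jenkins > System Log.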

Related

Jenkins on k8s and pipeline with docker agent

I want to run my Jenkins on k8s. We can achieve that with any standard Helm chart or our own manifest files. In this case, Jenkins (master only) will run inside a container (Pod).
Now I also want to have a pipeline job that uses a docker agent as described here
I am getting confused about:
how and where this docker container will be run (on the same node where Jenkins is running? and if that node's capacity is exhausted, does the docker agent need to run on a different node?)
how will Jenkins authenticate to run containers on k8s nodes?
I have seen the Kubernetes plugin / Docker plugin. But those plugins create containers beforehand (or at least we need to set up a template, which decides how containers will start, which image will be used, and so on) and connect Jenkins with the help of JNLP / SSH. I lose the flexibility to use an arbitrary image as an agent in that case.
Going further, I would also like to build custom images on the fly from a Dockerfile shipped along with the code. An example is available in the same link.
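For reference, the docker agent style referred to above looks roughly like this in a declarative Jenkinsfile; the image name is a placeholder, and the commented variant is the one that builds the agent image on the fly from a Dockerfile in the repository:

pipeline {
    // Run all stages inside a container created on demand from this image
    agent { docker { image 'maven:3.9-eclipse-temurin-17' } }
    // Variant: build the agent image from the Dockerfile shipped with the code
    // agent { dockerfile true }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}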
I believe this documentation answers all of your questions: https://devopscube.com/jenkins-build-agents-kubernetes/
With this method you do not lose your flexibility, because your Jenkins master creates a K8s pod on the fly. Yes, you additionally need JNLP authentication, but you can think of that as a sidecar container.
About your first question: if you use exactly that approach, your Jenkins jobs will run under the Jenkins master, with the same Docker that your Jenkins master is using.
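A minimal sketch of that pod-on-the-fly approach with the Kubernetes plugin; the container name and image below are assumptions, and the plugin injects the JNLP agent container into the pod automatically:

pipeline {
    agent {
        kubernetes {
            yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-17
    command: ["sleep"]
    args: ["infinity"]
'''
        }
    }
    stages {
        stage('Build') {
            steps {
                // Run the build steps in the extra container instead of the JNLP one
                container('maven') {
                    sh 'mvn -B verify'
                }
            }
        }
    }
}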

Get mapped ports of Jenkins docker slaves as pipeline parameters

We want to switch to a new Jenkins (ver. 2.176.1), where the slaves are started in a docker cloud on demand (using the docker plugin).
How can we start an agent in the cloud with a specific port mapping that does not collide with other containers in the same cloud, but can still be used later in the pipeline script?
Old setup
Our current Jenkins does not use Docker in any way; the nodes are always running. The usual build process for a web project uses Maven. At some point, the application is started using the Maven Cargo plugin. Selenium tests are executed using a Selenium grid. The external ports for the running web project are configured on each Jenkins slave.
Goal
Run this setup with on-demand Docker containers as slaves, still using the external tools.
Problem
The basic build of a test project works; the problem is with the Selenium part.
Using one fixed port mapping works for a single container, but of course there will be collisions if we run more at the same time.
First we tried to use a port range in the global Docker agent template from the Docker plugin. This allows multiple containers to be started, but we found no parameter exposing the actually used port in the pipeline scripts, so there was no way to pass it to the tests.
Further attempts included agent{ docker{ image 'my_image' args '-p...'} } or the "sidecar" approach from https://jenkins.io/doc/book/pipeline/docker/, setting the ports when the container is started and using the EXECUTOR_NUMBER parameter to make the ports unique. In both cases, Jenkins tries to start another container inside the agent container. This is too late, as the mapped ports of the agent container can't be changed after the container has been created.
Using something like docker inspect from within a running slave failed, as we don't know the current container ID either. Update: see below.
So how can we start a slave, map a known set of Docker-internal ports to a set of ports on the host without colliding with other Docker agents, and still know which ports are used in the build scripts, i.e. the Jenkinsfiles?
Update/Workaround
First, it is possible to get the ID of the container using the environment variable DOCKER_CONTAINER_ID. Another approach is to use the hostname of the current node, as this is also the container ID and can be resolved in the scripts.
The resulting line looks like this:
HTTP_PORT = (sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname` ', returnStdout: true)).trim()
The variable ${BRIDGE_IP} is defined in the Jenkins global variables and is the Docker network IP of the host where the Docker Engine is running.
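To illustrate how the resolved port can then be consumed, here is a rough declarative sketch; the agent label and the Maven property name are made-up examples, not values from the original setup:

pipeline {
    agent { label 'docker-cloud' }   // placeholder label for the on-demand Docker agents
    stages {
        stage('Selenium tests') {
            steps {
                script {
                    // Same inspect call as above: host port mapped to the container's 8080/tcp
                    env.HTTP_PORT = sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports "8080/tcp") 0).HostPort}}\' `hostname`', returnStdout: true).trim()
                }
                // Hand the externally reachable URL to the tests; the property name is hypothetical
                sh 'mvn verify -Dselenium.baseUrl=http://${BRIDGE_IP}:${HTTP_PORT}'
            }
        }
    }
}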

Making docker container a build executor for jenkins

I have an EC2 instance running as my Jenkins master. I would like to run a container on that instance that will be used as another build executor, so I can run a few builds simultaneously.
I'm having problems connecting these.
In the Docker Hub Jenkins docs, it says under the relevant section:
You can run builds on the master out of the box.
But if you want to attach build slave servers through JNLP (Java Web
Start): make sure you map the port: -p 50000:50000 - which will be
used when you connect a slave agent.
If you are only using SSH slaves, then you do NOT need to put that
port mapping.
But when I try to add a node in the Jenkins config, it asks for a remote root directory (probably /var/jenkins ?) and a launch method.
I don't quite understand what I should give it as the launch method to make this work, and I don't understand where the port number comes into play.
What you need is the Jenkins Docker plugin (link below); follow the instructions listed here:
https://wiki.jenkins.io/display/JENKINS/Docker+Plugin
I followed those instructions and was able to set up slaves in Jenkins which are dynamically provisioned.
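If you would rather wire the container up as a static node instead of using the plugin, the usual pattern is: create the node in the Jenkins UI with the JNLP launch method (labelled roughly "Launch agent by connecting it to the master/controller" depending on the Jenkins version), set the remote root directory to a path that exists inside the container such as /home/jenkins/agent, and start the official inbound agent image with the secret Jenkins shows on the node's page. A sketch with placeholder values:

# JENKINS_MASTER, AGENT_SECRET and AGENT_NAME are placeholders taken from the node's page in Jenkins.
# The -p 50000:50000 mapping on the master is the JNLP port this agent connects back to.
docker run -d --restart unless-stopped \
  --name jenkins-build-agent \
  jenkins/inbound-agent \
  -url http://JENKINS_MASTER:8080 \
  -workDir=/home/jenkins/agent \
  AGENT_SECRET AGENT_NAME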

Deploy docker windows container from CI to Windows Server 2016

I'm trying to wrap my head around Docker containers, specifically how to deploy them to a Docker container host. I know there are lots of options here and ultimately we'll switch to a more common deployment approach (e.g. to Azure or AWS), but this is a temporary requirement. We're using Windows containers.
I have a container image that I've created, which will be recreated on each build as part of a Jenkins job (our Jenkins instance is hosted on a container-ready Windows Server 2016 box). I also have a separate container-ready Windows Server 2016 box, which is where we intend to run the containers.
However, I'm not sure how I can have the containers that our Jenkins box produces automatically pushed to our separate 2016 host. Ideally, I'd like to avoid using a container registry, unless there is a low-friction, on-premise option available.
Container registries are the way to distribute Docker images. Tooling is built around registries, so it would be counterproductive to work against the concept.
But docker image save and docker image load could get you started, as save writes the image to a tar file that you can transfer between the hosts. Once you have copied the image to the other box, you can start it up with the usual docker run command, or docker compose up.
If your case is not trivial though, and you end up with multiple Docker hosts to run the containers on, container orchestrators like Docker Swarm or Kubernetes are the way to go, or the managed versions of those, like Azure ACS. That rabbit hole is deeper than I can cover in a single SO answer :)
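A minimal sketch of that save/transfer/load flow, with placeholder image and host names:

# On the Jenkins box: export the freshly built image to a tar file
docker image save -o myapp.tar myapp:latest

# Copy it to the target Windows Server 2016 host (any file transfer mechanism works)
scp myapp.tar user@container-host:C:/images/myapp.tar

# On the target host: load the image and run it
docker image load -i C:/images/myapp.tar
docker run -d --name myapp myapp:latest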

What are benefits of having jenkins master in a docker container?

I saw a couple of tutorials on continuous deployment (on docker.com, on codecentric.de, on devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins server (master and slave). The master is in a Docker container and the slave is on the host machine.
Jenkins server in a Docker container. They set up a link to the host, and using that link Jenkins can create or recreate Docker images.
In the first approach, I do not understand why they set up an additional Jenkins server residing inside the Docker container. Isn't it enough to just have the Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me, because a process from the container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.
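For context, the "link to the host" in the second approach is usually a bind mount of the Docker socket, so the containerized Jenkins drives the host's Docker daemon; that is exactly the security trade-off mentioned above. A sketch of that pattern (the volume name is a placeholder, and the docker CLI still has to be available inside the Jenkins image for it to be useful):

# Containerized Jenkins controlling the host's Docker daemon via the mounted socket
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts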

Resources