Differences between Jenkins docker-agent and docker-inbound-agent

I can't work out the differences between, or rather the benefits of, docker-agent and docker-inbound-agent when used as Jenkins cloud nodes.
Currently I am using the usual Docker cloud configuration to run Docker containers as build agents:
the Jenkins controller runs on Host #1,
and another host runs the Docker agents.
Based on the GitHub READMEs:
docker-inbound-agent uses TCP or WebSockets to establish an inbound connection to the Jenkins master.
docker-agent
docker-inbound-agent

According to the docker-agent README on GitHub:
This image is used as the basis for the Docker Inbound Agent image. In that image, the container is launched externally and attaches to Jenkins.
This image may instead be used to launch an agent using the Launch method of "Launch agent via execution of command on the master".
So docker-agent is meant to be launched with a command executed on the master: the master starts the agent process itself.
docker-inbound-agent uses the docker-agent image as its base (see its Dockerfile):
ARG version=latest-alpine-jdk11
FROM jenkins/agent:$version
This image was previously named jnlp-slave (see this link) and serves the same goal: it sets up an agent that connects to Jenkins itself over the TCP protocol.
There is also a third image, docker-ssh-agent, to which the master connects over SSH.
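To make the distinction concrete, here are hedged sketches of both launch styles (the controller URL, secret, and agent name are placeholders you get from the node's configuration page in Jenkins; the agent.jar path is the one the docker-agent README documents):

# docker-agent: the master itself executes this command
# ("Launch agent via execution of command on the master").
docker run -i --rm --name agent jenkins/agent java -jar /usr/share/jenkins/agent.jar

# docker-inbound-agent: the container is started externally and dials the
# controller over TCP or WebSockets.
docker run --init jenkins/inbound-agent \
  -url http://<jenkins-controller>:8080 \
  -workDir /home/jenkins/agent \
  <secret> <agent-name>

In short: with docker-agent the connection is initiated by the master; with docker-inbound-agent it is initiated by the agent container, which helps when the agent host can reach the master but not the other way around.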

Related

Jenkins on k8s and pipeline with docker agent

I want to run my Jenkins on Kubernetes. We can achieve that with any standard Helm chart or our own manifest files. In this case, Jenkins (the master only) will run inside a container (a Pod).
Now I also want a pipeline job that uses a Docker agent, as described here.
I am getting confused about:
how and where this Docker container will run (on the same node where Jenkins is running? And if that node's capacity is exhausted, it would need to run the Docker agent on a different node)
how Jenkins will authenticate to run containers on the Kubernetes nodes
I have seen the Kubernetes plugin and the Docker plugin. But those plugins create containers beforehand (or at least we need to set up a template that decides how the containers start, which image is used, and more) and connect them to Jenkins via JNLP/SSH. I lose the flexibility of using any image as an agent in that case.
Going further, I would also like to build custom images on the fly from a Dockerfile shipped along with the code. An example is available in the same link.
I believe this documentation answers all of your questions: https://devopscube.com/jenkins-build-agents-kubernetes/
With this method you do not lose your flexibility, because your Jenkins master creates a Kubernetes pod on the fly. Yes, you additionally need JNLP authentication, but you can think of that as a sidecar container (see the sketch below).
About your first question: if you use exactly that approach, your Jenkins jobs will run under the Jenkins master, with the same Docker engine that your Jenkins master is using.
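As a rough sketch of that approach, assuming the Kubernetes plugin is installed and a cloud is configured (the pod spec, container name, and image below are illustrative, not from the thread):

pipeline {
  agent {
    kubernetes {
      // The plugin creates this pod on the fly for the build and tears it
      // down afterwards; the JNLP agent container is injected as a sidecar.
      yaml '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: maven
    image: maven:3.9-eclipse-temurin-17
    command: ["sleep"]
    args: ["infinity"]
'''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
      }
    }
  }
}

The pod lands on whatever node the Kubernetes scheduler picks (not necessarily the node running the Jenkins master), and authentication against the cluster is handled by the credentials or service account the plugin is configured with.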

I have a problem with Jenkins Docker integration

I have a GitLab container and a Jenkins container running locally. I managed to configure CI, but I can't solve the CD part. What I would like: when I push code to the repo, Jenkins builds my Blazor Server App solution in a Docker container locally.
Can you help me?
Set up a GitLab webhook, as described here: https://docs.gitlab.com/ee/user/project/integrations/webhooks.html
Then set up a Jenkins pipeline with the necessary information inside a Jenkinsfile, as described here: https://www.jenkins.io/doc/book/pipeline/jenkinsfile/
The GitLab webhook should send a request to the local Jenkins container to execute the pipeline. You will need to make sure the appropriate ports are opened and mapped between the container and the host machine.
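For the Jenkinsfile itself, a minimal sketch that builds a .NET solution inside a Docker agent (requires the Docker Pipeline plugin; the SDK image tag and solution file name are placeholders for your project):

pipeline {
  agent {
    // Run the whole build inside a .NET SDK container.
    docker { image 'mcr.microsoft.com/dotnet/sdk:8.0' }
  }
  stages {
    stage('Build') {
      steps {
        sh 'dotnet build MyBlazorApp.sln --configuration Release'
      }
    }
  }
}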

Get mapped ports of Jenkins docker slaves as pipeline parameters

We want to switch to a new Jenkins (version 2.176.1), where the slaves are started on demand in a Docker cloud (using the Docker plugin).
How do we start an agent in the cloud with a specific port mapping that does not collide with other containers in the same cloud, but can still be used later in the pipeline script?
Old setup
Our current Jenkins does not use Docker in any way; the nodes are always running. The usual build process for a web project uses Maven. At some point, the application is started using the Maven Cargo plugin and Selenium tests are executed against a Selenium grid. The external ports for the running web project are configured on each Jenkins slave.
Goal
Run this setup with on-demand Docker containers as slaves, still using the external tools.
Problem
The basic build of a test project works; the problem is with the Selenium part.
A single port mapping works for one container, but of course there are collisions if we run several at the same time.
First we tried a port range in the global Docker agent template of the Docker plugin. This allows multiple containers to start, but we found no parameter exposing the actually used port to the pipeline scripts, so there was no way to pass it to the tests.
Further attempts included agent{ docker{ image 'my_image' args '-p...'} } and the "sidecar" approach from https://jenkins.io/doc/book/pipeline/docker/, setting the ports when the container is started and using the EXECUTOR_NUMBER parameter to make the ports unique. In both cases Jenkins tries to start another container inside the agent container. This is too late, as the mapped ports of the agent container cannot be changed after the container has been created.
Using something like docker inspect from within a running slave failed, as we did not know the current container ID either. (Update: see below.)
So how can we start a slave, map a known set of docker-internal ports to a set of ports on the host without colliding with other Docker agents, and still know which ports are in use in the build scripts, i.e. the Jenkinsfiles?
Update/Workaround
First, it is possible to get the ID of the container from the environment variable DOCKER_CONTAINER_ID. Alternatively, the hostname of the current node is also the container ID and can be resolved in the scripts.
The resulting line looks like this:
HTTP_PORT = (sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname` ', returnStdout: true)).trim()
The variable ${BRIDGE_IP} is defined in the Jenkins global variables and is the Docker network IP of the host where the Docker engine is running.
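To show where that line lands in practice, a sketch of a pipeline using it, under the same assumptions (BRIDGE_IP defined globally, the Docker engine listening on port 4243, 8080/tcp being the container-internal application port; the agent label is illustrative):

pipeline {
  agent { label 'docker' }
  stages {
    stage('Discover mapped port') {
      steps {
        script {
          // The agent's hostname equals its container ID, so we can ask the
          // Docker engine which host port was mapped to container port 8080.
          env.HTTP_PORT = sh(
            script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname`',
            returnStdout: true
          ).trim()
        }
      }
    }
    stage('Selenium tests') {
      steps {
        // Selenium can now target the dynamically assigned host port.
        echo "Application is reachable on host port ${env.HTTP_PORT}"
      }
    }
  }
}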

Making a Docker container a build executor for Jenkins

I have an EC2 instance running as my Jenkins master. I would like to run a container on that instance that will be used as another build executor, so I can run a few builds simultaneously.
I'm having problems connecting these.
In the Jenkins docs on Docker Hub, it says under the relevant section:
You can run builds on the master out of the box.
But if you want to attach build slave servers through JNLP (Java Web Start): make sure you map the port: -p 50000:50000 - which will be used when you connect a slave agent.
If you are only using SSH slaves, then you do NOT need to put that port mapping.
But when I try to add a node in the Jenkins configuration, it asks for a remote root directory (probably /var/jenkins?) and a launch method.
I don't quite understand what launch method I should choose to make this work, and I don't understand where the port number comes into play.
What you need is the Jenkins Docker plugin; follow the instructions listed here:
https://wiki.jenkins.io/display/JENKINS/Docker+Plugin
I followed those instructions and was able to set up slaves in Jenkins that are provisioned dynamically.
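If you would rather attach a single static node by hand, a hedged sketch of how the pieces fit together (image names are the official ones; host name, secret, and agent name are placeholders, and the secret is shown on the node's page once you create it with the "Launch agent by connecting it to the controller" launch method):

# Master: publish the inbound agent port next to the web UI port.
# This is where the -p 50000:50000 from the docs comes into play.
docker run -d --name jenkins-master -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

# Agent container: dials the master and registers itself as the new node.
docker run --init --name build-agent jenkins/inbound-agent \
  -url http://<jenkins-master-host>:8080 <secret> <agent-name>

The remote root directory you are asked for is simply the agent's working directory; for the official agent images that defaults to /home/jenkins/agent.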

Jenkins with Docker plugin not pulling Docker images from private registry

We have a Jenkins setup with the Docker plugin installed and want to run our build jobs in Docker containers based on private images.
Here is what we have:
- Jenkins master runs on a "bare metal" VM, no containerization
- We have a second VM with a Docker Engine running on it; the Docker Engine port of this VM is exposed and accessible from the Jenkins master via TCP
- We created several Docker Templates (in the global Jenkins settings) and can use them in our build jobs, as long as we use the "never pull" strategy for the images
The problem occurs when we try to pull an image from our private registry (we use Artifactory for this, and it is accessible from the Docker Engine, since we can successfully push images from our Docker VM).
Whenever we start a job in Jenkins that uses such an image, which should always be pulled from the private registry, we see All nodes of label ‘OUR_IMAGE_LABEL’ are offline and the job hangs forever.
The strange thing is that we don't see anything related to such a job in the Jenkins log (/var/log/jenkins/jenkins.log on the Jenkins master) nor do we see anything in the Docker logs (/var/log/messages on the VM with the Docker Engine).
Things work perfectly fine, if we switch back to the "Never pull" strategy and have the image locally available on the Docker engine.
Any ideas
- why things are not working, and
- how we could get at least some log messages about what Jenkins is doing when it shows All nodes of label ‘OUR_IMAGE_LABEL’ are offline?
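Not from the thread, but one assumption worth testing first: verify directly on the Docker VM that the engine itself can authenticate against and pull from the registry, independently of Jenkins (registry host and image name below are placeholders):

# Hypothetical sanity check on the Docker VM: if this pull fails, the problem
# is engine-to-registry authentication rather than the Jenkins Docker plugin.
docker login artifactory.example.com
docker pull artifactory.example.com/our-team/our-image:latest

If the manual pull works, the next thing to check is whether the Docker Template in Jenkins has credentials for the registry configured.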
