Get mapped ports of Jenkins docker slaves as pipeline parameters - jenkins

We want to switch to a new Jenkins (ver. 2.176.1), where the slaves are started in a docker cloud on demand (using the docker plugin).
How can we start an agent in the cloud with a specific port mapping that does not collide with other containers in the same cloud, but can still be used in the pipeline script?
Old setup
Our current Jenkins does not use Docker in any way; the nodes are always running. The usual build process for a web project uses Maven. At some point, the application is started using the Maven Cargo plugin and Selenium tests are executed against a Selenium grid. The external ports for the running web project are configured on each Jenkins slave.
Goal
Run this setup with on demand docker containers as slaves, still using the external tools.
Problem
The basic build of a test project works and the problem is with the selenium part.
Using one port mapping works for a single container; of course, there will be collisions if we run more than one at the same time.
First we tried a port range in the global docker agent template of the docker plugin. This allows multiple containers to start, but we found no parameter exposing the actually used port to the pipeline scripts, so there was no way to pass it to the tests.
Further attempts included agent{ docker{ image 'my_image' args '-p...'} } and the "sidecar" approach from https://jenkins.io/doc/book/pipeline/docker/, setting the ports when the container is started and using the EXECUTOR_NUMBER parameter to make the ports unique. In both cases, Jenkins tries to start another container inside the agent container. This is too late, as the mapped ports of the agent container cannot be changed after the container has been created.
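For reference, the sidecar variant we tried looked roughly like the sketch below (a minimal sketch assuming the Docker Pipeline plugin and a docker CLI on the agent; my_image and the internal port 8080 are placeholders):

    node {
        // EXECUTOR_NUMBER makes the host port unique per executor on this node
        def httpPort = 8080 + (env.EXECUTOR_NUMBER as int)
        // start the web application as a sidecar next to the build...
        docker.image('my_image').withRun("-p ${httpPort}:8080") { c ->
            // ...but on the docker cloud this runs *inside* the agent
            // container, whose own port mappings are already fixed
            sh "curl -s http://localhost:${httpPort}/"
        }
    }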
Using something like docker inspect from within a running slave failed as well, since we did not know the current container id either (update: see below).
So how can we start a slave, map a known set of docker-internal ports to a set of ports on the host without colliding with other docker agents, and still know which ports are used in the build scripts, i.e. the Jenkinsfiles?
Update/Workaround
First, it is possible to get the id of the container using the environment variable DOCKER_CONTAINER_ID. Another approach is to use the hostname of the current node, as it is also the container id and can be resolved in the scripts.
The resulting line looks like this:
HTTP_PORT = (sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname` ', returnStdout: true)).trim()
The variable ${BRIDGE_IP} is defined in the Jenkins global variables and is the docker network IP of the host on which the docker engine is running.
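Embedded in a Jenkinsfile, the workaround looks roughly like this (a minimal sketch; the agent label, the Docker REST API port 4243, and the webapp.port Maven property are assumptions from our setup):

    pipeline {
        agent { label 'docker-cloud' }   // label of the docker agent template
        stages {
            stage('Resolve mapped port') {
                steps {
                    script {
                        // the hostname inside the agent equals its container id
                        env.HTTP_PORT = sh(
                            script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports "8080/tcp") 0).HostPort}}\' $(hostname)',
                            returnStdout: true
                        ).trim()
                    }
                }
            }
            stage('Selenium tests') {
                steps {
                    // pass the externally mapped port on to the tests
                    sh 'mvn verify -Dwebapp.port=${HTTP_PORT}'
                }
            }
        }
    }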

Related

Making docker container a build executor for jenkins

I have an EC2 instance running as my Jenkins master. I would like to run a container in that instance that will be used as another build executor so I can run a few builds simultaneously.
I'm having problems connecting these.
In the docker hub jenkins docs it says under the relevant section:
You can run builds on the master out of the box.
But if you want to attach build slave servers through JNLP (Java Web
Start): make sure you map the port: -p 50000:50000 - which will be
used when you connect a slave agent.
If you are only using SSH slaves, then you do NOT need to put that
port mapping.
but when I try to add a node in the Jenkins config, it asks for a remote root directory (probably /var/jenkins?) and a launch method.
I don't quite understand what I should give it as its launch method to make this work and I don't understand where the port number comes into play.
What you need is the Jenkins Docker plugin (link below); follow the instructions listed there:
https://wiki.jenkins.io/display/JENKINS/Docker+Plugin
I followed those instructions and was able to set up slaves in Jenkins that are dynamically provisioned.
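Once the cloud and an agent template are configured, using it from a pipeline is just a matter of requesting the template's label (a minimal sketch; the label name docker-agent is an assumption):

    pipeline {
        // 'docker-agent' must match the label of the Docker agent template;
        // the plugin then provisions a container on demand for this build
        agent { label 'docker-agent' }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean verify'
                }
            }
        }
    }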

How to run a bash script in the host from a docker container and get the result

I have a Jenkins that is running inside of a docker container. Outside of the Docker container in the host, I have a bash script that I would like to run from a Jenkins pipeline inside of the container and get the result of the bash script.
You can't do that. One of the major benefits of containers (and also of virtualization systems) is that processes running in containers can't make arbitrary changes or run arbitrary commands on the host.
If managing the host in some form is a major goal of your task, then you need to run it directly on the host, not in an isolation system designed to prevent you from doing this.
(There are ways to cause side effects like this to happen: if you have an ssh daemon on the host, your containerized process could launch a remote command via ssh; or you could package the command in a service triggered by a network request. But these are essentially the same approaches you would use to make your host system manageable by "something else", and triggering them from a local Docker container is no different from triggering them from another host.)
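From a Jenkins pipeline, the ssh variant could look roughly like the sketch below. It is only a sketch: it assumes an ssh daemon on the host, a host address reachable from the container (host.docker.internal works on Docker Desktop; on Linux you may need --add-host or the bridge IP), a Jenkins credential with the hypothetical id 'host-ssh-key', and a hypothetical host-side script /opt/scripts/check.sh:

    node {
        withCredentials([sshUserPrivateKey(credentialsId: 'host-ssh-key',
                                           keyFileVariable: 'KEY',
                                           usernameVariable: 'SSH_USER')]) {
            // run the script on the host over ssh and capture its output
            def result = sh(
                script: 'ssh -i "$KEY" -o StrictHostKeyChecking=no ' +
                        '"$SSH_USER"@host.docker.internal /opt/scripts/check.sh',
                returnStdout: true
            ).trim()
            echo "host script returned: ${result}"
        }
    }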

Kubernetes plugin containers can't connect back to Jenkins

I have a Jenkins and Kubernetes cluster running within the same network in AWS. Jenkins has its own instance.
I have configured the Kubernetes plugin as follows:
The recommended JNLP docker image is used, and the Jenkins JNLP port is configured to a static 5000.
Now when I kick off the job, it shows me that the node is offline. Clicking on the offline node gives an error (screenshot not preserved here).
This makes me go to the k8s cluster. Running docker ps shows no containers running. From there I go to find which docker container gets run and what logs it leaves behind (output not preserved here).
I use:
https://github.com/jenkinsci/docker-jnlp-slave as image
https://github.com/jenkinsci/kubernetes-plugin
Jenkins version: 2.27
k8: hyperkube:v1.4.3_coreos.0
Jenkins does spin up the container; I guess it runs and errors out because no valid arguments are provided during the container run? I need this to be a hands-off process where I don't have to log in to my containers (Java clients). How do I achieve this?
UPDATE
Based on this answer: kubernetes slaves cannot register to jenkins master
If I log into the container and run the command that Jenkins displays under the host that cannot connect:
java -jar /usr/share/jenkins/slave.jar -jnlpUrl https://test.myhost.com/computer/jenkinsminions-10f0b7d49054ac/slave-agent.jnlp -secret 62637e83008f50eb94483ad609e9a2719d313fa56e640e4beca9eebeaf0b1af2
The container connects via JNLP2 and the job runs.
I tried to add the arguments as suggested, but no luck; the containers still won't connect automatically (screenshot not preserved here).
Do you have a Root directory not writable message in the container log?
[...]
Exception in thread "main" java.lang.RuntimeException: Root directory not writable
at hudson.remoting.FileSystemJarCache.<init>(FileSystemJarCache.java:44)
at hudson.remoting.Engine.<init>(Engine.java:139)
at hudson.remoting.jnlp.Main.createEngine(Main.java:164)
at hudson.remoting.jnlp.Main.main(Main.java:148)
at hudson.remoting.jnlp.Main._main(Main.java:144)
at hudson.remoting.jnlp.Main.main(Main.java:110)
In this case, you might have a problem similar to this.
PS: If you cannot see the logs, try removing the "Allocate pseudo-TTY" option
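If that is the problem, pointing the jnlp container at a writable working directory usually helps. Below is a minimal sketch using the kubernetes-plugin pipeline DSL; the label k8s-jnlp is made up here, and /home/jenkins as a writable workingDir is an assumption:

    podTemplate(label: 'k8s-jnlp', containers: [
        containerTemplate(
            name: 'jnlp',
            image: 'jenkinsci/jnlp-slave',
            // the plugin substitutes the JNLP secret and agent name here
            args: '${computer.jnlpmac} ${computer.name}',
            // must be writable by the container user (see the stack trace)
            workingDir: '/home/jenkins',
            ttyEnabled: false
        )
    ]) {
        node('k8s-jnlp') {
            sh 'echo connected'
        }
    }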

What are benefits of having jenkins master in a docker container?

I saw a couple of tutorials on continuous deployment (on docker.com, codecentric.de, and devopscube.com).
Overall I saw two approaches:
1. Two kinds of Jenkins servers (master and slave): the master runs in a docker container and the slaves on the host machine.
2. A single Jenkins server in a docker container. A link to the host is set up, and through that link Jenkins can create or recreate docker images.
Regarding the first approach: I do not understand why they set up an additional Jenkins server inside a docker container. Isn't it enough to just have a Jenkins server on the host machine alongside the docker containers?
The second approach seems a bit insecure to me, because a process from the container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.

docker build step plugin inside jenkins docker container

We are running jenkins in a docker container and we want to use the docker build step plugin. The documentation tells us:
You have to make sure that Docker service is running on slaves where you run the build. In Jenkins global configuration, you need to specify Docker REST API URL (typically something like http://127.0.0.1:2375)
But I see very often that people are using 0.0.0.0:2375
What is the difference and which do we have to use when we just want to use the docker daemon inside one docker container on one server (docker daemon is running on the same server)?
Regarding the difference between 0.0.0.0:2375 and 127.0.0.1:2375: according to this answer, it's basically whether you want to open the host up to the outside or not.
If it's all on one server, both may work, but note that from inside a container 127.0.0.1 refers to the container itself, not the host, so a daemon bound only to the host's loopback interface may not be reachable from inside the Jenkins container.
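One way to check which URL is actually reachable from inside the Jenkins container is a quick probe against the Docker REST API's ping endpoint (a minimal sketch; 172.17.0.1 is the usual default bridge IP on Linux, but that is an assumption here):

    node {
        // /_ping answers "OK" when the Docker REST API is reachable
        sh '''
            for url in http://127.0.0.1:2375 http://172.17.0.1:2375; do
                echo "trying $url"
                curl -s --max-time 3 "$url/_ping" && echo " <- reachable"
            done
        '''
    }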
