Making a Docker container a build executor for Jenkins

I have an EC2 instance running as my Jenkins master. I would like to run a container on that instance to serve as an additional build executor, so I can run a few builds simultaneously.
I'm having problems connecting these.
In the Docker Hub Jenkins docs, the relevant section says:
You can run builds on the master out of the box.
But if you want to attach build slave servers through JNLP (Java Web Start): make sure you map the port: -p 50000:50000 - which will be used when you connect a slave agent.
If you are only using SSH slaves, then you do NOT need to put that port mapping.
But when I try to add a node in the Jenkins config, it asks for a remote root directory (probably /var/jenkins?) and a launch method.
I don't quite understand what I should give it as the launch method to make this work, and I don't understand where the port number comes into play.

What you need is the Jenkins Docker plugin (link below); follow the instructions listed here:
https://wiki.jenkins.io/display/JENKINS/Docker+Plugin
I followed those instructions and was able to set up slaves in Jenkins that are dynamically provisioned.
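For the port mapping the docs mention, the master container is typically started with both the web UI and agent ports published. A minimal sketch (the container and volume names are illustrative):

docker run -d --name jenkins-master \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts

Port 50000 is the one inbound (JNLP) agents connect back to; it is not needed for SSH agents.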

Related

Differences between Jenkins docker-agent and docker-inbound-agent

I couldn't work out the differences, or rather the benefits, of using docker-agent vs. docker-inbound-agent for Jenkins cloud nodes.
Currently I am using the routine Docker cloud configuration to use Docker as the agent to build the application:
Jenkins Controller running on Host#1
another host for running docker agents!
Based on the GitHub README, docker-inbound-agent uses TCP or WebSockets to establish an inbound connection to the Jenkins master. The two images in question:
docker-agent
docker-inbound-agent
According to the docker-agent README on GitHub:
This image is used as the basis for the Docker Inbound Agent image. In that image, the container is launched externally and attaches to Jenkins.
This image may instead be used to launch an agent using the Launch method of Launch agent via execution of command on the master.
docker-agent is used to launch an agent via a command executed on the master: the master itself starts the agent process.
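As a sketch, the launch command configured on the master could look like this (assuming the agent.jar location documented for the jenkins/agent image):

docker run -i --rm --init jenkins/agent java -jar /usr/share/jenkins/agent.jar

The master keeps the process's stdin/stdout as the remoting channel, so no inbound port is required.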
docker-inbound-agent uses the docker-agent image as its base (see its Dockerfile):
ARG version=latest-alpine-jdk11
FROM jenkins/agent:$version
This image was previously named jnlp-slave (see this link), which served the same purpose. This sets up an agent that connects to Jenkins over the TCP protocol.
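A hedged example of starting such an inbound agent by hand (URL, secret, and agent name are placeholders):

docker run --init jenkins/inbound-agent \
  -url http://jenkins-host:8080 <secret> <agent name>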
There is also a third image, docker-ssh-agent, which is used to connect to the master over SSH.

Get mapped ports of Jenkins docker slaves as pipeline parameters

We want to switch to a new Jenkins (ver. 2.176.1), where the slaves are started in a docker cloud on demand (using the docker plugin).
How to start an agent in the cloud, with a specific port mapping, which does not collide with other containers in the same cloud, but can be further used in the pipeline script?
Old setup
Our current Jenkins does not use docker in any way; the nodes are always running. The usual build process for a web project uses Maven. At some point, the application is started using the Maven Cargo plugin, and Selenium tests are executed using a Selenium grid. The external ports for the running web project are configured on each Jenkins slave.
Goal
Run this setup with on-demand docker containers as slaves, still using the external tools.
Problem
The basic build of a test project works; the problem is with the Selenium part.
Using one port mapping works for one container; of course there will be collisions if we run more at the same time.
First we tried to use a port range in the global docker agent template from the docker plugin. This allows starting multiple containers, but we found no parameter exposing the actually used port to the pipeline scripts, so there was no way to set it for the tests.
Further tries included agent{ docker{ image 'my_image' args '-p...'} } and the "sidecar" approach from https://jenkins.io/doc/book/pipeline/docker/, setting the ports when the container is started and using the EXECUTOR_NUMBER parameter to make the ports unique. In both cases, Jenkins tries to start another container inside the agent container. This is too late, as the mapped ports of the agent container can't be changed after the container was created.
Using something like docker inspect from within a running slave failed, as we don't know the current container ID either. (Update: see below.)
So how can we start a slave, map a known set of docker-internal ports to a set of ports on the host without colliding with other docker agents, and still know which ports are used in the build scripts, i.e. the Jenkinsfiles?
Update/Workaround
First, it is possible to get the ID of the container using the environment variable DOCKER_CONTAINER_ID. Another approach is the hostname of the current node, as this is also the container ID and can be resolved in the scripts.
The resulting line looks like this:
HTTP_PORT = (sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname` ', returnStdout: true)).trim()
The variable ${BRIDGE_IP} is defined in the Jenkins global variables and is the docker network IP of the host the docker engine runs on.
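Equivalently, the underlying shell command can use DOCKER_CONTAINER_ID instead of the hostname (a sketch, assuming the daemon listens on TCP port 4243 as above):

# ask the host's Docker daemon which host port was published for the container's 8080/tcp
docker -H tcp://${BRIDGE_IP}:4243 inspect \
  --format='{{(index (index .NetworkSettings.Ports "8080/tcp") 0).HostPort}}' \
  ${DOCKER_CONTAINER_ID}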

Kubernetes plugin containers can't connect back to Jenkins

I have a Jenkins and a Kubernetes cluster running within the same network in AWS. Jenkins has its own instance.
I have configured the Kubernetes plugin as follows:
The recommended JNLP docker image is used, and the Jenkins JNLP port is configured to be a static 5000.
Now when I kick off the job, it shows me that the node is offline. When I click on the offline node I get this (screenshot omitted).
This makes me go to the k8s cluster. Running docker ps shows no containers running. From there I went to find which docker container gets run and what logs it leaves after that (output omitted).
I use:
https://github.com/jenkinsci/docker-jnlp-slave as image
https://github.com/jenkinsci/kubernetes-plugin
Jenkins version: 2.27
k8: hyperkube:v1.4.3_coreos.0
Jenkins does spin up the container; I guess it runs and errors out because no valid arguments are provided during the container run? I need this to be a hands-off process where I don't have to log in to my containers (Java clients). How do I achieve this?
UPDATE
Based on this answer: kubernetes slaves cannot register to jenkins master
If I log into the container and run the command that Jenkins displays under the host that cannot connect:
java -jar /usr/share/jenkins/slave.jar -jnlpUrl https://test.myhost.com/computer/jenkinsminions-10f0b7d49054ac/slave-agent.jnlp -secret 62637e83008f50eb94483ad609e9a2719d313fa56e640e4beca9eebeaf0b1af2
The container connects via JNLP2 and the job runs.
I tried to add the arguments as suggested, but no luck; the containers still won't connect automatically (screenshot omitted).
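For reference, the docker-jnlp-slave entrypoint expects the controller URL, the agent secret, and the agent name; in the Kubernetes plugin container template these are normally supplied as arguments. A sketch (values are placeholders; ${computer.jnlpmac} and ${computer.name} are the plugin's substitution variables):

# manual equivalent of what the plugin should run
docker run jenkinsci/jnlp-slave -url https://test.myhost.com <secret> <agent name>

# arguments field in the plugin's container template
${computer.jnlpmac} ${computer.name}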
Do you have a "Root directory not writable" message in the container log?
[...]
Exception in thread "main" java.lang.RuntimeException: Root directory not writable
at hudson.remoting.FileSystemJarCache.<init>(FileSystemJarCache.java:44)
at hudson.remoting.Engine.<init>(Engine.java:139)
at hudson.remoting.jnlp.Main.createEngine(Main.java:164)
at hudson.remoting.jnlp.Main.main(Main.java:148)
at hudson.remoting.jnlp.Main._main(Main.java:144)
at hudson.remoting.jnlp.Main.main(Main.java:110)
In this case, you might have a problem similar to this.
PS: If you cannot see the logs, try removing the "Allocate pseudotty" option

What are the benefits of having the Jenkins master in a docker container?

I saw a couple of tutorials on continuous deployment (on docker.com, codecentric.de, and devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave). The master is in a docker container and the slave is on the host machine.
Jenkins server in a docker container, with a link set up to the host; using that link, Jenkins can create or recreate docker images.
In the first approach, I do not understand why they set up an additional Jenkins server inside a docker container. Isn't it enough to just have a Jenkins server on the host machine alongside the docker containers?
The second approach seems a bit insecure to me, because a process from the container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.

Jenkins shell script execution on a different server

I have two VMs: one VM has docker on it, and the other has Jenkins. I have a shell script for running docker, which is placed on the server that has docker on it, but I need Jenkins to execute this shell script from a pre-build step.
I am facing problems with this process.
It would be very helpful if anyone could provide the detailed steps.
Thanks in advance.
There are different approaches to achieve that.
One is to install a Jenkins slave on the VM which has docker on it and have your Jenkins master run the whole job on that slave.
Or you could install the Publish Over SSH Plugin or the SSH plugin to execute commands remotely (if your docker VM has SSH access).
If your network is sufficiently secured from the outside, you could expose the docker API socket via a TCP port on your docker machine and run the docker commands from your Jenkins machine against that remote port, as sketched below.
The basic idea is outlined in the Docker docs in the section "Bind Docker to another host/port or a Unix socket".
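A minimal sketch of that last approach, assuming a trusted network (docker-vm is a placeholder hostname; 2375 is the conventional unencrypted Docker port, so add TLS outside a lab setup):

# on the docker VM: make the daemon listen on TCP in addition to the local socket
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# from the Jenkins VM (e.g. in a pre-build shell step): target the remote daemon
docker -H tcp://docker-vm:2375 run --rm hello-world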
Cheers D
