I have Jenkins and a Kubernetes cluster running within the same network in AWS. Jenkins runs on its own instance.
I have configured the Kubernetes plugin as follows:
The recommended JNLP Docker image is used. The Jenkins JNLP port is configured to be static: 5000.
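As a sanity check (assuming the master is reachable at https://test.myhost.com, the host used in the UPDATE below), Jenkins advertises the JNLP port as a response header, which can be verified from any node in the network:

curl -s -I https://test.myhost.com/tcp-slave-agent-listener/ | grep -i jnlp
# expected: X-Jenkins-JNLP-Port: 5000

If the header reports a different or random port, the agent pods will fail to connect in exactly the way described below.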
Now when I kick off the job, it shows me that the node is offline. When I click on the offline node I get this:
This makes me go to the k8 cluster. Running docker ps shows no containers running. However:
From there I go to find out which Docker container gets run and what logs it leaves behind:
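(A sketch of the usual way to do that, assuming docker CLI access on the node: list the exited containers and pull the logs of the most recent one.)

docker ps -a --filter status=exited
docker logs <container-id>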
I use:
https://github.com/jenkinsci/docker-jnlp-slave as image
https://github.com/jenkinsci/kubernetes-plugin
Jenkins version: 2.27
k8: hyperkube:v1.4.3_coreos.0
Jenkins does spin up the container; I guess it runs and errors out because no valid arguments are provided during the container run. I need this to be a hands-off process where I don't have to log in to my containers (Java clients). How do I achieve this?
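For context, the kubernetes-plugin is designed to make this hands-off: when it creates the pod, it substitutes the per-agent secret and agent name into the container template's Arguments field. A sketch of the relevant template fields (the image is the recommended JNLP one mentioned above):

Docker image: jenkinsci/jnlp-slave
Arguments: ${computer.jnlpmac} ${computer.name}

The image's entrypoint turns those two values into the same -jnlpUrl/-secret call shown under UPDATE below, so no manual login should be needed.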
UPDATE
Based on this answer: kubernetes slaves cannot register to jenkins master
If I log into the container and run the command that Jenkins displays under the host that cannot connect:
java -jar /usr/share/jenkins/slave.jar -jnlpUrl https://test.myhost.com/computer/jenkinsminions-10f0b7d49054ac/slave-agent.jnlp -secret 62637e83008f50eb94483ad609e9a2719d313fa56e640e4beca9eebeaf0b1af2
The container connects via JNLP2 and the job runs.
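For reference, the docker-jnlp-slave image's jenkins-slave entrypoint accepts the same parameters positionally, so the manual command above should be equivalent to (a sketch; <secret> stands for the secret shown above):

jenkins-slave -url https://test.myhost.com <secret> jenkinsminions-10f0b7d49054ac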
I tried to add the arguments as suggested, but no luck. Containers still won't connect automatically:
Do you have a Root directory not writable message in the container log?
[...]
Exception in thread "main" java.lang.RuntimeException: Root directory not writable
at hudson.remoting.FileSystemJarCache.<init>(FileSystemJarCache.java:44)
at hudson.remoting.Engine.<init>(Engine.java:139)
at hudson.remoting.jnlp.Main.createEngine(Main.java:164)
at hudson.remoting.jnlp.Main.main(Main.java:148)
at hudson.remoting.jnlp.Main._main(Main.java:144)
at hudson.remoting.jnlp.Main.main(Main.java:110)
In this case, you might have a problem similar to this.
PS: If you cannot see the logs, try removing the "Allocate pseudo-TTY" option.
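A quick way to confirm the permission problem from the Kubernetes node (a sketch; the container name is a placeholder, and uid 1000 is the jenkins user of the official images):

docker exec <agent-container> id
# e.g. uid=1000(jenkins) gid=1000(jenkins)
docker exec <agent-container> touch /home/jenkins/.probe && echo writable

If the touch fails, mount a writable volume (e.g. an emptyDir) at the agent's working directory, or align the volume ownership with the container user.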
Related
We want to switch to a new Jenkins (ver. 2.176.1), where the slaves are started on demand in a Docker cloud (using the Docker plugin).
How can we start an agent in the cloud with a specific port mapping which does not collide with other containers in the same cloud, but can still be used in the pipeline script?
Old setup
Our current Jenkins does not use Docker in any way; the nodes are always running. The usual build process for a web project uses Maven. At some point, the application is started using the Maven Cargo plugin, and Selenium tests are executed using a Selenium grid. The external ports for the running web project are configured on each Jenkins slave.
Goal
Run this setup with on-demand Docker containers as slaves, still using the external tools.
Problem
The basic build of a test project works; the problem is with the Selenium part.
Using one port mapping works for a single container, but of course there will be collisions if we run more at the same time.
First we tried to use a port range in the global Docker agent template from the Docker plugin. This allows multiple containers to be started, but we found no parameter exposing the actually used port to the pipeline scripts, so there was no way to pass it to the tests.
Further tries included agent{ docker{ image 'my_image' args '-p...'} } and the "sidecar" approach from https://jenkins.io/doc/book/pipeline/docker/, setting the ports when the container is started and using the EXECUTOR_NUMBER parameter to make the ports unique. In both cases, Jenkins tries to start another container inside the agent container. This is too late, as the mapped ports of the agent container can't be changed after the container was created.
Using something like docker inspect from within a running slave failed, as we don't know the current container ID either. (Update: see below.)
So how can we start a slave, map a known set of Docker-internal ports to a set of ports on the host without colliding with other Docker agents, and still know which ports are used in the build scripts, i.e. the Jenkinsfiles?
Update/Workaround
First, it is possible to get the ID of the container using the environment variable DOCKER_CONTAINER_ID. Another approach is the hostname of the current node, as this is also the container ID and can be resolved in the scripts.
The resulting line looks like this:
HTTP_PORT = (sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname`', returnStdout: true)).trim()
The variable ${BRIDGE_IP} is defined in the Jenkins global variables and is the Docker network IP of the host that the Docker engine is running on.
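Embedded in a declarative pipeline, the workaround looks roughly like this (a sketch; the agent label, stage name, and the cargo.servlet.port property are illustrative, not from our actual setup):

pipeline {
    agent { label 'docker-cloud' } // label of the Docker agent template (illustrative)
    stages {
        stage('Selenium tests') {
            steps {
                script {
                    // ask the host's Docker engine which external port was mapped to the container's internal 8080/tcp
                    env.HTTP_PORT = (sh(script: 'docker -H tcp://${BRIDGE_IP}:4243 inspect --format=\'{{(index (index .NetworkSettings.Ports \"8080/tcp\") 0).HostPort}}\' `hostname`', returnStdout: true)).trim()
                }
                // hand the unique external port to the build
                sh 'mvn verify -Dcargo.servlet.port=${HTTP_PORT}'
            }
        }
    }
}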
I have an EC2 instance running as my Jenkins master. I would like to run a container in that instance that will be used as another build executor so I can run a few builds simultaneously.
I'm having problems connecting these.
In the Docker Hub Jenkins docs, it says under the relevant section:
You can run builds on the master out of the box.
But if you want to attach build slave servers through JNLP (Java Web Start): make sure you map the port: -p 50000:50000 - which will be used when you connect a slave agent.
If you are only using SSH slaves, then you do NOT need to put that port mapping.
But when I try to add a node in the Jenkins config, it asks for a remote root directory (probably should be /var/jenkins?) and a launch method.
I don't quite understand what I should give it as its launch method to make this work, and I don't understand where the port number comes into play.
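For orientation, the mapping from the quoted docs looks like this when starting the master (a sketch using the official image; the container name is illustrative):

docker run -d --name jenkins-master -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts

Port 50000 is the master's JNLP listener; a slave using the JNLP ("Launch agent via Java Web Start") launch method dials back to it, which is where the port number comes into play.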
What you need is the Jenkins Docker plugin (link below); follow the instructions listed here:
https://wiki.jenkins.io/display/JENKINS/Docker+Plugin
I followed those instructions and was able to set up slaves in Jenkins which are dynamically provisioned.
I run a Docker container with the Jenkins master, and another Docker container with a Jenkins slave (slave image), with port mapping 8082:8080.
I created a Docker network to make the containers see each other, and it works (ping works).
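In shell form, that setup looks roughly like this (a sketch; the network and container names are illustrative, the slave image is the one linked above, and the 8082:8080 mapping mirrors the description):

docker network create jenkins-net
docker run -d --name jenkins-master --network jenkins-net -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
docker run -d --name jenkins-slave --network jenkins-net -p 8082:8080 <slave-image>

On a shared user-defined network the containers can also address each other by container name, which is more robust than a copied IP address.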
I installed the Docker Plugin on the Jenkins master. I checked the IP address of the slave container and tried to use it in the master's configuration, but the master cannot connect to the slave:
I think I'm doing something wrong. Any ideas what else should I do?
First, check that your Docker daemon is listening on your Docker URL before trying Test Connection. Run the daemon in the foreground:
sudo dockerd
and check in its startup output which address the API is listening on.
Or provide your certs path in the credentials section; the certs path will usually be %userprofile%/.docker.
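Alternatively, the API endpoint can be probed directly (a sketch; tcp://0.0.0.0:2375 is the conventional unencrypted port and an assumption here):

sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
curl http://localhost:2375/version

If the curl returns a JSON version document, the Docker URL in the plugin configuration should point at that same host and port.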
It is throwing an HttpHostConnectException because you are using tcp in the Docker URL field. Use http instead. Check the configuration document here.
I solved my problem.
Here is a nice tutorial about setting up the master in a Docker container and the slaves also in Docker containers. It doesn't use the Docker Plugin.
I saw a couple of tutorials on continuous deployment (on docker.com, codecentric.de, and devopscube.com).
Overall I saw two approaches:
Set up two types of Jenkins servers (master and slave). The master is in a Docker container and the slave is on the host machine.
A Jenkins server in a Docker container, set up with a link to the host; using that link, Jenkins can create or recreate Docker images (see the sketch below).
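For the second approach, the usual wiring is to hand the host's Docker socket to the Jenkins container (a sketch using the official image):

docker run -d -p 8080:8080 -p 50000:50000 -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts

Any docker command inside the container then talks directly to the host's daemon.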
Regarding the first approach: I do not understand why they set up an additional Jenkins server residing inside a Docker container. Isn't it enough to just have a Jenkins server on the host machine alongside the Docker containers?
The second approach seems a bit insecure to me because a process from the container is accessing the host OS. Does it have any benefits?
Thanks for any useful info.
I have two VMs; on one VM I have Docker, and on the other I have Jenkins. I have a shell script for running Docker, which is placed on the server that has Docker on it. But I need Jenkins to execute this shell script from a pre-build step.
I am facing problems with this process.
It would be very helpful if anyone could provide the detailed steps.
Thanks in Advance
There are different approaches to achieve that.
One is to install a Jenkins slave on the VM which has Docker on it, and have your Jenkins master run the whole job on that slave.
Or you could install either the Publish Over SSH plugin or the SSH plugin to execute commands remotely (if your Docker VM has SSH access).
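With the SSH route, the pre-build step reduces to a one-liner (a sketch; the user, host, and script path are placeholders):

ssh jenkins@docker-vm 'bash /path/to/run-docker.sh'

assuming key-based authentication from the Jenkins VM to the Docker VM, or credentials managed by the plugin.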
If your network is sufficiently secured from the outside, you could expose the Docker API socket via a TCP port on your Docker machine and run the docker commands from your Jenkins machine, using the remote TCP port.
The basic idea is outlined here, in the section "Bind Docker to another host/port or a Unix socket".
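In that setup, the Jenkins job can drive the remote daemon with an ordinary docker client by pointing DOCKER_HOST at it (a sketch; the host and port are placeholders):

export DOCKER_HOST=tcp://docker-vm:2375
docker ps
docker run --rm <your-build-image>

docker ps here lists the containers on the Docker VM, not on the Jenkins machine.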
Cheers D