Is it possible to set different environment variables per executor in Jenkins

I currently have a functioning CI pipeline with a single node (the master) and a single executor. The build and test suite run as a script on the master node, but delegate all the actual work to a Docker container that runs the application.
This application is built on a framework that is very slow and expensive to start up, so I have the application running permanently in a Docker container. The Jenkinsfile handles updating the application at runtime (which is relatively fast) and running the test suite.
In theory, everything I need to do to expand my setup to multiple Docker containers, so that test runs for multiple branches can run in parallel, is:
Duplicate the application Docker container
Increase the number of executors on the master node
Set the environment variable "DOCKER_CONTAINER" for each executor to point to the different Docker containers
However, after searching the Jenkins control panel to the best of my abilities, I can't find any controls for executors other than "Number of executors". I can't find any per-executor settings.
Are there any controls to set environment variables per-executor that I have missed? If not, are there any plugins to achieve this? Or will I have to embed my environment variables in my Jenkinsfile and pick between them using $EXECUTOR_NUMBER? (I would much rather keep that kind of environment-specific stuff in Jenkins.)
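For what it's worth, the Jenkinsfile workaround mentioned above can be sketched roughly as follows (scripted pipeline; the container names and scripts are placeholders, and EXECUTOR_NUMBER is 0-based):

    node('master') {
        // One long-running application container per executor; EXECUTOR_NUMBER is 0-based.
        def containers = ['app-test-1', 'app-test-2']            // placeholder container names
        def containerName = containers[env.EXECUTOR_NUMBER as int]

        withEnv(["DOCKER_CONTAINER=${containerName}"]) {
            stage('Update application') {
                sh 'docker exec "$DOCKER_CONTAINER" ./update-app.sh'   // placeholder update script
            }
            stage('Run tests') {
                sh 'docker exec "$DOCKER_CONTAINER" ./run-tests.sh'    // placeholder test script
            }
        }
    }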

Related

Is there a way for a docker pipeline file to determine the image of the child node it runs on?

I'd like to be able to dynamically provision docker child nodes for builds and have the configuration / setup of those nodes be part of the Jenkinsfile groovy script it uses.
Limitations of the current setup of jobs means Jenkins has one node/executor (master) and I'd like to support using Docker for nodes to alleviate this bottleneck.
I've noticed there are two ways of using a Docker container as a node:
You can use the agent section in your pipeline file, which allows you to specify an image to use. As part of this, you can target a specific node which supports running Docker images, but I haven't gotten far enough to see what happens.
You can use the Jenkins Docker Plugin which allows you to add a Docker Cloud in Jenkins' configuration. It allows you to specify a label which, when used as part of a build, will spawn a container in that "cloud" from the image chosen in the cloud configuration. In this case, the "cloud" is the docker instance running on the Jenkins server.
Unfortunately, it doesn't seem like you can use both together: using the label while specifying a docker image in the pipeline (1), where the label matches a docker cloud template configuration (2), does not work and instead produces a "label not found" error during the build.
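For reference, the attempted combination of (1) and (2) looks roughly like the sketch below; the image and label names are placeholders for whatever your setup uses:

    pipeline {
        agent {
            docker {
                image 'maven:3-jdk-8'   // the image from (1) - placeholder
                label 'docker-cloud'    // label matching the Docker cloud template from (2) - placeholder
            }
        }
        stages {
            stage('Build') {
                steps {
                    sh 'mvn -B clean verify'
                }
            }
        }
    }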
Ideally I'd prefer the control to be in the pipeline groovy file so the configuration is stored with the application (1), not with the Jenkins server (2). However, it appears that if I use the agent section and provide a docker image, it still must target an existing executor first (i.e. master), which will cause other builds to queue until the current build is complete.
I'm in the middle of migrating builds, so not all builds can use a docker container as the node yet, and builds will have issues when run in parallel on the master node.
Is there a way for a docker pipeline file to determine the image of the child node it runs on?
There are a few options I have considered but not attempted yet:
Migrate jobs to run on the "docker cloud" until all jobs support running on child container nodes, then move the configuration from Jenkins to the pipeline build file for each job and turn on parallel builds on the master node.
Attempt to add a new node configuration which is effectively a copy of master (uses the same server, just different location). Configure it to support parallel builds, and have all migrated jobs target the node explicitly during builds.

Dockerizing Jenkins builds - slaves as containers or builds as containers?

I'm trying to figure out the best strategy for containerizing builds in a Jenkins CI/CD infrastructure using Docker. From what I can see, I have 2 options:
(1) Use ephemeral slaves that get provisioned on-demand on Docker hosts using the Docker Plugin: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
Once the build completes, the slave is disposed of. As a consequence, only one build ever gets run on a single slave.
(2) Use static slaves (e.g. VMs) that run builds inside Docker containers using the CloudBees Docker Custom Build Environment Plugin: https://wiki.jenkins-ci.org/display/JENKINS/CloudBees+Docker+Custom+Build+Environment+Plugin As a consequence, multiple (isolated) builds can run on a single slave.
What are the main advantages/disadvantages of one approach over the other? When and why should I choose one over the other? This does not appear at all obvious to me.
I suspect builds are lighter weight than slaves, so for a CI/CD infrastructure orchestrating a large end-to-end pipeline with many jobs running, (2) would be more scalable - each Jenkins slave incurs at least 2 threads on the master node.
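For context, the pipeline flavour of approach (2) - running the build steps inside a throwaway container on a static slave - can be sketched using the Docker Pipeline plugin's docker.image(...).inside; the label and image names below are placeholders, and the Custom Build Environment plugin mentioned above achieves a similar effect at the job-configuration level instead:

    node('static-slave') {                        // placeholder label for a static VM slave
        checkout scm
        docker.image('maven:3-jdk-8').inside {    // placeholder build image
            // The workspace is mounted into the throwaway container automatically.
            sh 'mvn -B clean verify'
        }
    }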
Edit
My preference is option 1 (ephemeral slaves) with the Docker plugin.
With this plugin, you declare your build images in the global Jenkins settings and assign labels to your Docker images.
In your job, you just have to use the relevant label, and the Docker plugin will create the corresponding slave in a new container.
With the Docker plugin, Jenkins will spin up a new slave in a few seconds. So even if you're using a pipeline with a lot of stages, it will work fine.
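In a pipeline job, targeting such a labelled template is just a matter of requesting the label; a minimal sketch, assuming a template labelled docker-maven (a placeholder name):

    node('docker-maven') {   // placeholder label defined on a Docker cloud template
        checkout scm
        sh 'mvn -B clean verify'
        // The container-backed slave is disposed of when the build finishes.
    }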
This is what I'm going to implement at Forgerock (my company):
2 powerful bare metal machines (with SSD, 32 cores and 1 TB of RAM)
The Jenkins Docker plugin
Maven artifacts caching using Artifactory (to not download the internet)
The docker container will use a local Maven cache (so I'm sure to not use an old/odd Maven artefact)
I did a POC on a small bare metal machine and it works well :)
If you are using ephemeral slaves without Maven caching, it can become a performance problem.
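One way to wire in that local Maven cache, sketched here under the assumption that the host keeps its repository at /var/cache/jenkins/.m2 (a placeholder path), is to mount it into the build container from the pipeline; the same volume can also be configured directly on the Docker cloud template:

    node('docker-maven') {   // placeholder label
        checkout scm
        // Mount the host-side Maven repository so dependencies survive across ephemeral slaves.
        docker.image('maven:3-jdk-8').inside('-v /var/cache/jenkins/.m2:/root/.m2') {
            sh 'mvn -B clean verify'
        }
    }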
Regarding the Jenkins plugins, there is a new one developed by Nicolas De Loof: Docker Slaves plugin.
I have to try this new plugin.

How to start Jenkins slave on docker-cloud?

I have a jenkins master defined as a stack on cloud.docker.com. I also have set up a couple of other stacks that contain the services I need to test against during my build process (some components use mongo, some use rabbitmq, etc).
docker cloud (man I wish they picked a more unique name!) has a REST api to start stacks, and I've even written a script that will redeploy the stack based on UUID, but I can't figure out how to get the jenkins master to start the stack or how to execute my script.
The Jenkins Slave Setup plugin didn't document how to attach the "setup" to a node, and none of the other plugins I looked at appeared to support either Docker Cloud or a way of calling arbitrary REST APIs on slave startup.
I've also tried just using the docker daemon to launch containers directly, but docker-cloud appears to remove images not associated with stacks or services on its managed node, and then the jenkins docker plugin complains it can't find the slave image.
Everything is latest-and-greatest version-wise. The node itself is running on AWS and otherwise appears to function well.
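One untested sketch, assuming the redeploy script mentioned above is available on the master as redeploy-stack.sh (a placeholder name, as is the UUID), is to run it as a preparatory pipeline stage before the build asks for its services:

    node('master') {
        stage('Start test stack') {
            // Calls the user's existing script, which talks to the Docker Cloud REST API.
            // Script name and stack UUID are placeholders.
            sh './redeploy-stack.sh 00000000-0000-0000-0000-000000000000'
        }
    }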

How to configure Jenkins for distributed load using multiple Jmeter-servers

I use JMeter to generate a huge load on my web server. Some slave machines act as JMeter servers; another acts as the JMeter master that coordinates the load and collects statistics from the slaves.
Now I'm trying to integrate this system to CI (Jenkins).
Here's how I do it now. I have two separate Jenkins jobs: one of them prepares all slaves by running jmeter-server, the other runs the JMeter master itself. All is fine with the 2nd part: I successfully generate traffic and collect statistics. The issue is with the 1st job. I have a huge set of slaves that can be rebooted at any time, so I can't run the job that starts jmeter-server once and forget about it; I need to run it every time before the JMeter master.
But in that case, on the machines that were not rebooted I end up with multiple copies of the jmeter-server Java process.
So, I'm looking for a mechanism to start jmeter-server on slave nodes in a proper way.
Any ideas appreciated.
Thank you in advance!
Read this:
https://dzone.com/articles/distributed-performance
It combines:
JMeter
Maven Lazery JMeter plugin
Jenkins
All you have to do for the JMeter slaves is start them from Jenkins using jmeter-server.sh; you might want to tweak the port if you have 2 slaves on the same host.
Then from the controller you reference those host machines (in this case the default port is used):
remote_hosts=test-server-1.nerdability.com,test-server-2.nerdability.com,test-server-3.nerdability.com
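A hedged sketch of the slave-preparation job, making the jmeter-server start idempotent so that rebooted and non-rebooted machines both end up with exactly one process (the node label and JMeter install path are placeholders):

    node('jmeter-slave') {   // placeholder label for the JMeter slave machines
        sh '''
            # Kill any jmeter-server left over from a previous run, then start a fresh one.
            pkill -f jmeter-server || true
            # JENKINS_NODE_COOKIE=dontKillMe stops Jenkins from reaping the background
            # process when this build finishes.
            JENKINS_NODE_COOKIE=dontKillMe nohup /opt/jmeter/bin/jmeter-server.sh > jmeter-server.log 2>&1 &
        '''
    }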

How would I go about creating a Docker environment in CI with lots of services

Suppose I want to move my current acceptance test CI environment to Docker, so I can benefit from the performance improvements and also quickly set up multiple clones for slow acceptance tests.
I would have a lot of services.
The easy ones would be Postgres, MongoDB, Redis and such, which are rarely updated.
However, how would I go about it if my own product has lots of services as well - over 10-20 services that all need to work together for tests? Is it even feasible to handle this with Docker, i.e., how can CI efficiently control so many containers automatically AND make clones of them to run acceptance tests in parallel?
Also, how would I automatically update the containers easily for the CI? Would the CI simply need to rebuild every container at the start of every run with the HEAD of every service branch? Or would the CI run git pull and some update/migrate command on every service?
In VMs it's easy to control these services, but I would like to be convinced that Docker is as good or better for this as well.
I'm in the same position as you and have recently gotten this all working to my liking.
First of all, while docker is generally intended to run a single process, for testing I've found it works better for the docker container to run all services needed. There is some duplication in going this route, but you don't have to worry about shared services, like Mongo or PostgreSQL. This can be accomplished by using something like Supervisor: http://docs.docker.com/articles/using_supervisord/
The idea is to configure supervisor to start all necessary services inside the container, so they are completely isolated from other containers. In my environment, I have mongo, xvfb, chrome and firefox all running in a single container. So really, you still are running a single process (supervisor) but it starts many others.
As for adding repositories to your container, I just have the host machine check out the code, and then when I run docker, I use the -v flag to add the repo to the container. This way you don't need to rebuild the container each time. I build containers nightly with the latest code to be able to add all necessary gems for a faster 'gem install' at testing time.
Lastly I have a script as the entrypoint of the container that allows me to pass in what test I want to run.
Jenkins then just runs the docker commands and passes in the tests to run. These can be done in parallel, sequentially or any other way you like. I'm currently looking into having these tests run on slave Jenkins instances in an auto-scaling group in AWS.
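A rough sketch of that Jenkins side, assuming a prebuilt image called nightly-tests (a placeholder name) whose entrypoint takes the test suite to run:

    node {
        checkout scm
        // Run two suites in parallel, each in its own container, sharing the checked-out repo.
        parallel(
            'unit tests': {
                sh 'docker run --rm -v "$WORKSPACE":/repo nightly-tests unit'
            },
            'acceptance tests': {
                sh 'docker run --rm -v "$WORKSPACE":/repo nightly-tests acceptance'
            }
        )
    }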
Hope that helps.
Drone is a Docker-based open source CI tool plus an online service: https://drone.io
Generally it runs builds and tests in Docker containers and removes all containers afterwards. You just need to provide a file named .drone.yml, with configuration similar to .travis.yml, to configure your build.
It will manage your services, such as databases and caches, as linked containers.
For your build environment, you can use existing Docker images as templates for dependencies.
So far, it supports github.com and GitLab. For your own CI system, you can use the Drone CLI alone or its web interface.
I recommend using the Jenkins Docker plugin. Though it is new, it is starting to expose the power of Docker inside Jenkins, and the configuration is well documented there. (Let me know if you have problems.)
The strategy I plan to use:
Create different app images to serve different services like Postgres, MongoDB, Redis and such. Since these are rarely updated, they will be configured globally as "cloud" templates in advance, and each VM will have a label to indicate the service
In each Jenkins job, the relevant image will be selected as the slave node (using that label as the name)
When the job is triggered, it will automatically start the Docker container as a slave in seconds
It should work for you.
BTW: At the time I answered (May 2014), the plugin was not mature enough, but it is the right direction.
