How to start Jenkins slave on docker-cloud? - jenkins

I have a jenkins master defined as a stack on cloud.docker.com. I also have set up a couple of other stacks that contain the services I need to test against during my build process (some components use mongo, some use rabbitmq, etc).
docker cloud (man I wish they picked a more unique name!) has a REST API to start stacks, and I've even written a script that will redeploy a stack based on its UUID, but I can't figure out how to get the jenkins master to start the stack or how to execute my script.
The jenkins slave setup plugin doesn't document how to attach the "setup" to a node, and none of the other plugins I looked at appeared to support either docker cloud or calling arbitrary REST APIs on slave startup.
I've also tried just using the docker daemon to launch containers directly, but docker-cloud appears to remove images not associated with stacks or services on its managed node, and then the jenkins docker plugin complains it can't find the slave image.
Everything is latest-and-greatest version-wise. The node itself is running on AWS and otherwise appears to function well.
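For reference, the redeploy script boils down to something like the following sketch. The endpoint path is my reading of the Docker Cloud REST API docs and the UUID/credentials are placeholders; the actual API call is left commented out so this prints what it would do:

```shell
# Sketch: redeploy a Docker Cloud stack by UUID via its REST API.
# Endpoint path, UUID, and auth variable are assumptions/placeholders.
STACK_UUID="${STACK_UUID:-00000000-0000-0000-0000-000000000000}"
API_URL="https://cloud.docker.com/api/app/v1/stack/${STACK_UUID}/redeploy/"
echo "POST ${API_URL}"
# Uncomment to actually call the API; DOCKER_CLOUD_AUTH is base64 "user:apikey":
# curl -s -X POST -H "Authorization: Basic ${DOCKER_CLOUD_AUTH}" "${API_URL}"
```

The open question is where to hook this into Jenkins so it runs before the build needs the stack.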

Related

Is it possible to set different environment variables per executor in Jenkins

I currently have a functioning CI pipeline with a single node (the master) and a single executor. The builds and test suite run as a script on the master node, but delegate all the actual work to a Docker container that runs the actual application.
This application is built on a framework that is very slow and expensive to start up, so I have the application running permanently in a Docker container. The Jenkinsfile handles updating the application at runtime (which is relatively fast) and running the test suite.
In theory, all I need to do to expand my setup to multiple Docker containers, so that test runs for multiple branches can run in parallel, is:
Duplicate the application Docker container
Increase the number of executors on the master node
Set the environment variable "DOCKER_CONTAINER" for each executor to point to the different Docker containers
However, after searching the Jenkins control panel to the best of my abilities, I can't find any controls for executors other than "Number of executors". I can't find any per-executor settings.
Are there any controls to set environment variables per-executor that I have missed? If not, are there any plugins to achieve this? Or will I have to embed my environment variables in my Jenkinsfile and pick between them using $EXECUTOR_NUMBER? (I would much rather keep that kind of environment-specific stuff in Jenkins.)
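If it comes to the $EXECUTOR_NUMBER fallback, the shell step in the Jenkinsfile could derive the container name from it; a minimal sketch, where the naming scheme is purely hypothetical:

```shell
# Pick a per-executor application container based on EXECUTOR_NUMBER,
# which Jenkins exports to every build. The naming scheme is an assumption.
EXECUTOR_NUMBER="${EXECUTOR_NUMBER:-0}"
DOCKER_CONTAINER="app-under-test-${EXECUTOR_NUMBER}"
echo "Using container: ${DOCKER_CONTAINER}"
```

This keeps the mapping deterministic (executor N always gets container N), but it does embed the environment-specific naming in the Jenkinsfile rather than in Jenkins itself.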

Is there a way for a docker pipeline file to determine the image of the child node it runs on?

I'd like to be able to dynamically provision docker child nodes for builds and have the configuration / setup of those nodes be part of the Jenkinsfile groovy script it uses.
Limitations of the current setup of jobs means Jenkins has one node/executor (master) and I'd like to support using Docker for nodes to alleviate this bottleneck.
I've noticed there are two ways of using a docker container as a node:
You can use the agent section in your pipeline file, which allows you to specify an image to use. As part of this, you can target a specific node which supports running docker images, but I haven't gotten far enough to see what happens.
You can use the Jenkins Docker Plugin which allows you to add a Docker Cloud in Jenkins' configuration. It allows you to specify a label which, when used as part of a build, will spawn a container in that "cloud" from the image chosen in the cloud configuration. In this case, the "cloud" is the docker instance running on the Jenkins server.
Unfortunately, it doesn't seem like you can use both together - using the label but specifying a docker image in the configuration (1) where the label matches a docker cloud template configuration (2) does not seem to work and instead produces a label not found error during the build.
Ideally I'd prefer the control to be in the pipeline groovy file so the configuration is stored with the application (1), not with the Jenkins server (2). However, it suggests that if I use the agent section and provide a docker image, it still must target an existing executor first (i.e. master) which will cause other builds to queue until the current build is complete.
I'm at the point of migrating builds, so not all builds can support using a docker container as the node yet, and builds will have issues when run in parallel on the master node.
Is there a way for a docker pipeline file to determine the image of the child node it runs on?
There are a few options I have considered but not attempted yet:
Migrate jobs to run on the "docker cloud" until all jobs support running on child container nodes, then move the configuration from Jenkins to the pipeline build file for each job and turn on parallel builds on the master node.
Attempt to add a new node configuration which is effectively a copy of master (uses the same server, just different location). Configure it to support parallel builds, and have all migrated jobs target the node explicitly during builds.
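For what it's worth, option (2) effectively amounts to the plugin running something like the following on the Docker host when a build requests the label. This is a dry-run sketch (set DOCKER=docker to execute); the agent image, master URL, secret, and agent name are all placeholders:

```shell
# Dry run by default: roughly what a Docker cloud template does --
# start an agent container that dials back to the Jenkins master.
# Image, URL, secret, and name below are placeholders.
DOCKER="${DOCKER:-echo docker}"
$DOCKER run -d --name build-agent-1 jenkins/inbound-agent \
  -url http://jenkins.example.com:8080 "${AGENT_SECRET:-secret}" build-agent-1
```

Seeing it spelled out this way makes clear why (1) and (2) don't compose: the cloud template decides the image, so a label plus a pipeline-specified image are two competing sources of truth.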

What's the benefits of docker with Jenkins Pipelines?

I'm new to Jenkins/Docker. So far I've found that many official Jenkins documents recommend using it with Docker, but the necessity and advantages of running Jenkins as a Docker container remain vague to me. In my case, it's a Node/React app and the required environment is not complicated.
Disadvantages I've found running Jenkins as a Docker container:
High usage of hard drive
Directory path in docker container is more complicated to deal with, esp when working with ssh in pipeline scripts
Without Docker, I can easily achieve the same result, and the Blue Ocean plugin is also available.
So, what are the main benefits of Docker with Jenkins/Jenkins Pipelines? Are there pitfalls for my Node application in using Jenkins without Docker? Articles to help me dive in are also appreciated.
Jenkins as Code
The main advantage of running Jenkins in Docker is that it helps you achieve Jenkins as Code.
Advantages of Jenkins as code are:
SCM: the code can be put under version control
History is transparent, and backup and roll-back become easy.
The code is the documentation of your Jenkins setup.
Jenkins becomes portable, so you can run Jenkins locally to try new plugins etc.
Jenkins pipelines work really well with Docker. As @Ivthillo mentioned: there is no need to install additional tools, you just use images of those tools. Jenkins will download them from the internet for you (Docker Hub).
For each stage in the pipeline you can use a different image (i.e. tool). Essentially you get "micro Jenkins agents" which only exist temporarily. This makes your Jenkins setup much cleaner.
The disadvantage is:
Jenkins' initial (Groovy) configuration is poorly documented on the web.
Simple Node setup
Most of these arguments also hold for a simple Node setup.
Changing the Node version, or running multiple jobs each with a different Node version, becomes easy.
Add your Jenkinsfile inside the Node repo, so everyone with a Jenkins+Docker setup can run your CI/CD.
And finally: gaining experience with running your app inside a container will enable you to run your production app in Docker in the future.
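That per-version idea can be sketched in shell as running the same job inside throwaway containers, one per Node version. A dry-run sketch (set DOCKER=docker to execute; the image tags and test command are examples):

```shell
# Dry run by default: run the test suite under two Node versions using
# disposable containers. Tags and the npm command are examples.
DOCKER="${DOCKER:-echo docker}"
for TAG in node:16 node:18; do
  $DOCKER run --rm -v "$PWD:/app" -w /app "$TAG" npm test
done
```

Each container is the "micro agent": it exists only for the duration of that one command, so nothing Node-specific ever needs to be installed on the Jenkins host itself.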
Getting started
A while ago I wrote a small blog post on how to get started with Jenkins and Docker, i.e. how to create a Jenkins image for development which you can launch and destroy in seconds.

Starting up dependent services in Jenkins

Our test suite relies on a number of subsidiary services being present - database, message queue, redis, and so on. I would like to set up a Jenkins build that spins up all the correct services (docker containers, most likely) and then runs the correct tests, followed by some other steps.
Can someone point me to a good example for doing such a thing? I've seen a plug-in for mongo, and some general guides on spinning up agents, but their relationship to what I'm trying to do is unclear.
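Concretely, what I'm after is roughly the following dry-run sketch (set DOCKER=docker to execute; the images, names, and versions are examples), just expressed as a proper Jenkins build rather than an ad-hoc script:

```shell
# Dry run by default: start the dependent services, run the tests,
# then tear everything down. Images and container names are examples.
DOCKER="${DOCKER:-echo docker}"
$DOCKER run -d --name ci-mongo mongo:4
$DOCKER run -d --name ci-rabbit rabbitmq:3
$DOCKER run -d --name ci-redis redis:5
# ... run the test suite against the services here ...
for SVC in ci-mongo ci-rabbit ci-redis; do
  $DOCKER rm -f "$SVC"
done
```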
One possibility is to use the JenkinsCI Kubernetes plugin and jenkinsCI Kubernetes pipeline plugin: they will allow you to
launch docker slaves automatically,
with container group support through podTemplate and containerTemplate.

CI/CD with Docker - what is the final deployment step?

I am developing a small website (Ruby/Sinatra) to be used internally where I work. (Simply, it crunches some source data and generates reports.)
I want to deploy it using Docker and have a setup that works in my dev environment, but I'm trying to understand the workflow for "production" deployment (we're using Jenkins).
I've read lots of articles about deployment workflows using Docker, but they all seem to stop at "and then push your image to the Docker registry". What seems to be missing is how to then take that image and actually update the application.
I appreciate that every application is likely to be different, but what is the next step? I'm aware of lots of different frameworks like Chef, Puppet, Ansible that could be used, but my question really is - how do I integrate that into my CI/CD pipeline? E.g. does a job "push" the changes to the production server, or should a Jenkins slave be running on the production server to execute a job directly on the server?
There are several orchestration tools such as Docker Swarm, Kubernetes, and Rancher. In Docker Swarm, for example, you create services and can update their versions in a blue-green deployment manner, even for a single instance (although then it isn't really blue-green :) ). If you just use docker run, you should check your running container, stop and remove it if it is running, and start a new container with the newer image version.
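With Docker Swarm, the update step your pipeline would run is roughly the following dry-run sketch (set DOCKER=docker to execute; the service and image names are placeholders):

```shell
# Dry run by default: rolling update of a swarm service to a new image
# version. Service name and registry path are placeholders.
DOCKER="${DOCKER:-echo docker}"
$DOCKER service update --image registry.example.com/myapp:1.2.0 myapp
```

Swarm then replaces the service's tasks with the new image according to its update policy, so the pipeline itself doesn't have to stop or start containers by hand.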
It depends on how your application is configured to run. In my case, I have a call to "docker run" in a systemd script. It's configured to just restart if it ever stops.
So, in my Jenkinsfile, after I push the image to the registry, I do a "docker pull" (my Jenkins agent is running on the same box that the application is running on), and then a "docker stop". That causes the application to exit, then restarts, which causes it to get the new version that was just pulled, and now it's running the new version.
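In shell terms, that deploy stage of the Jenkinsfile amounts to the following dry-run sketch (set DOCKER=docker to execute; the image and container names are placeholders). Note there is no explicit start: systemd's restart policy brings the container back up on the freshly pulled image:

```shell
# Dry run by default: pull the new image, then stop the running container;
# systemd restarts it on the new version. Names are placeholders.
DOCKER="${DOCKER:-echo docker}"
$DOCKER pull registry.example.com/myapp:latest
$DOCKER stop myapp
```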