Jenkins: differences between tools and docker agent

Sorry, it might be a simple question, but what are the differences between using tools and the docker agent? I think using the docker agent is much more flexible than using tools. When should I use the docker agent and when should I use tools?
Tools
pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
        jdk 'jdk8'
    }
    stages {
        stage('Initialize') {
            steps {
                sh '''
                    echo "PATH = ${PATH}"
                    echo "M2_HOME = ${M2_HOME}"
                '''
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -Dmaven.test.failure.ignore=true install'
            }
        }
    }
}
Docker Agent
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}

These two options serve slightly different purposes. The tools block allows you to put specific versions of Maven, JDK, or Gradle on your PATH. You can't use just any version - you can only use versions that are configured on the Global Tool Configuration page in Jenkins.
If your Jenkins configuration contains only a single Maven version, e.g. Maven 3.6.3, you can use only this version. Specifying a version that is not configured in the Global Tool Configuration will cause your pipeline to fail.
pipeline {
    agent any
    tools {
        maven 'Maven 3.6.3'
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
Using the tools block to specify different versions of supported tools is a good option if your Jenkins server cannot run docker containers.
The docker agent, on the other hand, gives you total freedom when it comes to specifying tools and their versions. It does not limit you to Maven, JDK, and Gradle, and it does not require any pre-configuration on your Jenkins server. The only tool you need is docker, and you are free to use any tool your Jenkins pipeline needs.
pipeline {
    agent {
        docker {
            image "maven:3.6.3-jdk-11-slim"
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
When to use one over the other?
There is no single right answer to this question; it depends on the context. The tools block is quite limiting, but it gives you control over which tools can be used in your Jenkins. In some cases, people decide not to use docker in their Jenkins environment because they prefer to control which tools are available to their users. We can agree with this or not. With the docker agent, on the other hand, you get full access to any tool that can be shipped as a docker container.
In some cases, this is the best choice for using a tool with a specific version - your operating system may not even allow you to install the desired version. Keep in mind, though, that this power and flexibility come at a cost. You lose control over which tools are used in your Jenkins pipelines, and if you pull tons of different docker images, disk space consumption goes up. Not to mention that the docker agent lets a pipeline run tools that may consume lots of CPU and memory. (I have seen Jenkins pipelines start Elasticsearch, Logstash, Zookeeper, and other services on nodes that were not prepared for that load.)
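If you want the flexibility of the docker agent but still need to rein in resource usage, you can pass extra options to docker run through the agent's args parameter. A minimal sketch (the image and the memory/CPU limits are just example values):
pipeline {
    agent {
        docker {
            image 'maven:3.6.3-jdk-11-slim'
            // extra options handed to `docker run`; the limits below are arbitrary examples
            args '--memory=2g --cpus=2'
        }
    }
    stages {
        stage('Example') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}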

Related

Jenkins pipeline - How to make a stage work for both Windows and Linux

I am developing a declarative pipeline in Jenkins, and one of the requirements is that it must work on both Windows and Linux.
Right now, to achieve this, I use two stages, one for the Linux environment and one for the Windows environment, as shown in the code below:
stage('Integration Tests on Windows') {
    when { expression { env.OS == 'BAT' } }
    steps {
        dir('') {
            bat 'gradlew.bat integrationTest'
            junit '**/build/test-results/integrationTest/*.xml'
        }
    }
}
stage('Integration Tests on LINUX') {
    when { expression { env.OS == 'UNIX' } }
    steps {
        dir('') {
            sh 'gradlew integrationTest'
            junit '**/build/test-results/integrationTest/*.xml'
        }
    }
}
I was wondering if there is a better way to do this while keeping the pipeline declarative?
Not sure whether you are using git. However, our preferred way to do this is to use the sh step on both Linux and Windows.
We do this by using the bash/sh that ships with Git for Windows. You just need to ensure that sh is on the PATH (if you do a manual install, Git will even ask you whether to add its command line tools to the PATH). For the Jenkins nodes you may want to add this to your Jenkins node configuration.
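With sh available on the Windows nodes, the two OS-specific stages from the question can collapse into a single stage. A minimal sketch, assuming Git's sh.exe is on the PATH of the Windows agents:
stage('Integration Tests') {
    steps {
        // runs on Linux agents and on Windows agents where Git's sh.exe is on the PATH
        sh './gradlew integrationTest'
        junit '**/build/test-results/integrationTest/*.xml'
    }
}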
One alternative we use is a wrapper function which you may specify somewhere in your Jenkinsfile or in a Pipeline library. It may look something like this:
def executeCmd(def args) {
    if (isUnix()) {
        sh args
    } else {
        bat args
    }
}
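A stage could then call the wrapper with a single command string. A sketch, assuming a gradle binary is on the PATH of both the Windows and Linux agents so the arguments really are identical:
stage('Integration Tests') {
    steps {
        // the wrapper picks sh or bat at runtime via isUnix()
        executeCmd 'gradle integrationTest'
        junit '**/build/test-results/integrationTest/*.xml'
    }
}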
Please note that this obviously only handles cases where the arguments are 100% identical on Windows and Linux.
Therefore I would recommend using sh on both Windows and Linux.
Or, if you prefer, you can install PowerShell on Linux and use the pwsh step instead.
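A rough sketch of that variant, assuming PowerShell Core is installed on both the Windows and Linux agents:
stage('Cross-platform step') {
    steps {
        // pwsh requires PowerShell Core on the agent, regardless of the OS
        pwsh 'Write-Host "Hello from PowerShell"'
    }
}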

Jenkins: avoid tool installation if it is already installed

Not a Jenkins expert here. I have a scripted pipeline with a tool installation (Node). Unfortunately it was configured to pull in other dependencies, which now takes about 250 seconds overall. I'd like to add a condition to skip this installation if it (Node with its packages) was already installed in a previous run, but I don't know where to start. Perhaps Jenkins stores meta info from previous runs that can be checked?
node {
    env.NODEJS_HOME = "${tool 'Node v8.11.3'}"
    env.PATH = "${env.NODEJS_HOME}/bin:${env.PATH}"
    env.PATH = "/opt/xs/bin:${env.PATH}"
    // ...
}
Are you using dynamic Jenkins agents (docker containers)? In that case the tools will be installed every time you run a build.
Mount volumes into the containers, use persistent agents, or build your own docker image with Node.js preinstalled.
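As a rough sketch of the volume approach (the image tag and both paths are only examples), you could mount a host directory so downloaded packages survive between otherwise throwaway containers:
agent {
    docker {
        image 'node:8'
        // example host and container paths; reuses the npm cache across container runs
        args '-v /var/cache/jenkins/npm:/home/jenkins/.npm'
    }
}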
As far as I can see, you are using a workaround to install the Node.js tool.
Jenkins supports this natively (declarative style):
pipeline {
    agent any
    tools {
        nodejs 'NodeJS_14.5'
    }
    stages {
        stage('nodejs test') {
            steps {
                sh 'npm -v'
            }
        }
    }
}
On the first run the tool will be installed. On subsequent runs it won't be, since it is already installed.

How to run unit tests in Jenkins in separate Docker containers?

In our codebase, we have multiple neural networks (classification, object detection, etc.) for which we have written some unit tests that we want to run in Jenkins at some specified point (the specific point is not relevant, e.g. whenever we merge a feature branch into the master branch).
The issue is that, due to external constraints, each neural net needs a different version of keras/tensorflow and a few other packages, so we cannot run them all in the same Jenkins environment. The obvious solution to this is Docker containers (we have specialized Docker images for each one), and ideally we would want to tell Jenkins to execute each unit test in a Docker container that we specify beforehand.
Does anyone know how to do that with Jenkins? I searched online, but the solutions I found seem a bit hacky to me.
This looks like a candidate for Jenkins pipelines, especially docker agents:
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Front-end') {
            agent {
                docker { image 'node:7-alpine' }
            }
            steps {
                sh 'node --version'
            }
        }
    }
}
This allows the actual Jenkins agent to spin up a docker container to do the work. You say you already have images, so you're most of the way there.
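Applied to the question, each test suite could run inside its own image. A sketch where the image names and test commands are placeholders for whatever you already have:
pipeline {
    agent none
    stages {
        stage('Classification tests') {
            agent {
                docker { image 'my-registry/classification-env:latest' } // placeholder image
            }
            steps {
                sh 'python -m pytest tests/classification'
            }
        }
        stage('Object detection tests') {
            agent {
                docker { image 'my-registry/detection-env:latest' } // placeholder image
            }
            steps {
                sh 'python -m pytest tests/detection'
            }
        }
    }
}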

How to run the same job with two different agents with Declarative Syntax?

I have two jobs running on two different operating systems; all the build steps are the same, and it is the tools (JDK and Maven) and the metadata that differ.
I want to make a single job that covers both jobs on two agents, depending on the OS.
I'm using Jenkins Pipeline declarative syntax, and the problem is that I couldn't find a way to declare tools for a specific agent.
In Jenkins Pipeline, we can declare tools for the entire pipeline or for a specific stage, and that's it.
PS: I need to use the declarative syntax: no use of node {}.
If I do so:
stage('Environment Set Up Linux') {
    agent {
        label "linux"
    }
    tools {
        jdk 'oracle-jdk-1.8'
    }
    steps {
        echo "Environment tools have been configured"
    }
}
stage('Environment Set Up Solaris') {
    agent {
        label "solaris-64"
    }
    tools {
        jdk 'oracle-jdk-1.7'
    }
    steps {
        echo "Environment tools have been configured"
    }
}
The tools will be used only for those stages, not all stages, and declaring tools in every stage would be stupid.
Define the common tools that are available on every slave at the pipeline level, and the specific ones in the stage section:
pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
    }
    stages {
        stage('test') {
            tools {
                maven 'Maven 2.2.1'
            }
            steps {
                sh 'mvn --version'
            }
        }
        stage('random') {
            steps {
                sh 'mvn --version'
            }
        }
    }
}
In this case, the output in the stage 'test' is 2.2.1 because I define tools in the stage section, which overrides the pipeline-level definition. In the stage 'random' I define no tools inside the stage, so the tools defined at the pipeline level are used and 3.3.9 is printed. I hope this is what you meant.
In your case it could be that all agents contain JDK 1.8 and you want to use it in nearly every stage (define it at the pipeline level); if there is one stage in which you want to use JDK 1.7, just define the tools in that stage section, which will override the global config.
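Applied to the question, that could look roughly like this (a sketch reusing the tool names and agent labels from the question):
pipeline {
    agent any
    tools {
        jdk 'oracle-jdk-1.8'           // default JDK for every stage
    }
    stages {
        stage('Build on Linux') {
            agent { label 'linux' }
            steps {
                sh 'java -version'     // uses the pipeline-level jdk 1.8
            }
        }
        stage('Build on Solaris') {
            agent { label 'solaris-64' }
            tools {
                jdk 'oracle-jdk-1.7'   // overrides the pipeline-level JDK for this stage only
            }
            steps {
                sh 'java -version'
            }
        }
    }
}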

Jenkins Using result artifacts in different steps - stash and unstash

I have a Jenkinsfile declarative pipeline which has two steps:
build an RPM file inside a docker container
build a docker image with the RPM and run it
The first step is built inside a docker container because it requires a specific app to build the RPM.
The second step runs directly on a Jenkins slave, which can be a different slave than the one that ran the first step.
In order to use the RPM produced by the first step, I'm currently using the stash and unstash steps. If I do not use them, the second step doesn't have access to the RPM file.
The RPM file is about 215 MB, which is more than the recommended 100 MB stash limit, so I'd like to know if there is a better solution.
pipeline {
    agent any
    options {
        timestamps()
    }
    stages {
        stage('Gradle: build') {
            agent {
                docker {
                    image 'some-internal-image'
                }
            }
            steps {
                sh """
                    chmod +x gradlew
                    ./gradlew buildRpm
                """
            }
            post {
                success {
                    stash name: 'rpm', includes: 'Server/target/myapp.rpm'
                }
            }
        }
        stage('Gradle: build docker image') {
            steps {
                unstash 'rpm'
                sh """
                    chmod +x gradlew
                    ./gradlew buildDockerImage
                """
            }
        }
    }
}
You could use docker's multi-stage build, but I'm not aware of a nice implementation using Jenkins Pipelines.
We also stash several hundred megabytes to distribute them to build agents. I've experimented with uploading the artifacts to S3 and downloading them again from there, with no visible performance improvement (it only takes load off the Jenkins master).
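For reference, a plugin-free variant of that S3 round trip could look roughly like this. It is only a sketch: it assumes the AWS CLI and credentials are available on the agents (and inside the build container), and the bucket name is a placeholder:
stage('Gradle: build') {
    steps {
        sh './gradlew buildRpm'
        // placeholder bucket; upload the RPM instead of stashing it
        sh 'aws s3 cp Server/target/myapp.rpm s3://my-artifact-bucket/${JOB_NAME}/${BUILD_NUMBER}/myapp.rpm'
    }
}
stage('Gradle: build docker image') {
    steps {
        // fetch the RPM again on whichever agent runs this stage
        sh 'aws s3 cp s3://my-artifact-bucket/${JOB_NAME}/${BUILD_NUMBER}/myapp.rpm Server/target/myapp.rpm'
        sh './gradlew buildDockerImage'
    }
}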
So my very opinionated recommendation: keep it as it is and optimize once you really run into performance or load issues.
You can use Artifactory or any other binary repository manager.
From Artifactory's webpage:
"As the first, and only, universal Artifact Repository Manager on the market, JFrog Artifactory fully supports software packages created by any language or technology. ... Artifactory provides an end-to-end, automated and bullet-proof solution for tracking artifacts from development to production."
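As a rough sketch of how this could plug into the pipeline above (the credentials ID and repository URL are placeholders), the RPM could be deployed with a plain HTTP PUT instead of being stashed, and fetched again with curl in the second stage:
post {
    success {
        // placeholder credentials ID and repository URL
        withCredentials([usernamePassword(credentialsId: 'artifactory-creds',
                                          usernameVariable: 'ART_USER',
                                          passwordVariable: 'ART_PASS')]) {
            sh 'curl -fu "$ART_USER:$ART_PASS" -T Server/target/myapp.rpm "https://artifactory.example.com/artifactory/rpm-local/myapp-$BUILD_NUMBER.rpm"'
        }
    }
}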
