I have a Jenkinsfile to create a pipeline.
If this pipeline should be executed on different operating system like Linux and Windows, which is the best approach to manage this situation?
I mean, do I need to create two different Jenkinsfile? One for Windows and another one for Linux in order to manage the different commands/shell operating systems?
Thanks in advance
There is an isUnix() function that returns true if the node you are running on is a Unix-like OS (Unix/macOS/Linux), and false if it is running on Windows. You should be able to implement an if check:
script {
    if (isUnix()) {
        // Linux/Unix environment
        sh './script.sh'
    } else {
        // Windows environment
        bat 'batchfile.bat'
    }
}
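For context, here is a minimal sketch of how that check might sit inside a single declarative Jenkinsfile, so one file can serve both operating systems (the stage name and script paths are illustrative assumptions):

pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                script {
                    if (isUnix()) {
                        sh './script.sh'       // Linux/macOS agents
                    } else {
                        bat 'batchfile.bat'    // Windows agents
                    }
                }
            }
        }
    }
}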
I am developing a declarative pipeline on Jenkins, and one of the requirements is that it must work on both Windows and Linux.
Right now, to achieve this, I am using two stages, one for the Linux environment and one for the Windows environment, as you can see in the code below:
stage('Integration Tests on Windows') {
when { expression { env.OS == 'BAT' }}
steps
{
dir('')
{
bat 'gradlew.bat integrationTest'
junit '**/build/test-results/integrationTest/*.xml'
}
}
}
stage('Integration Tests on LINUX') {
when { expression { env.OS == 'UNIX' }}
steps
{
dir('')
{
sh 'gradlew integrationTest'
junit '**/build/test-results/integrationTest/*.xml'
}
}
}
I was wondering if there is a better way to do this while keeping the pipeline declarative?
Not sure whether you are using Git. However, our preferred way to do this is to use the sh step on both Linux and Windows.
We do this by using the bash/sh that comes with Git for Windows. You just need to ensure that sh is on the PATH (if you do a manual install, Git will even ask whether to add its command-line tools to the PATH). For the Jenkins nodes you may want to add this to your Jenkins node configuration.
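For illustration, a sketch that assumes Git for Windows' sh.exe is on the PATH of the Windows agents (the node label and command are hypothetical):

node('windows') {
    // The same sh step and the same shell script also run unchanged on the Linux agents
    sh './gradlew integrationTest'
}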
One alternative we use is a wrapper function which you may specify somewhere in your Jenkinsfile or in a Pipeline library. It may look something like this:
def executeCmd(def args) {
if (isUnix()) {
sh args
} else {
bat args
}
}
Please note that this obviously can only handle cases where the arguments would be 100% identical on Windows and Linux.
Therefore I would recommend using sh on both Windows and Linux.
Or, if you prefer, you may want to use PowerShell on Linux and use the pwsh step instead.
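As a usage sketch (the command here is only illustrative and assumes Git is installed on the agents), the wrapper can then be called the same way regardless of the node's operating system:

node {
    // Dispatches to sh on Unix-like agents and to bat on Windows agents
    executeCmd('git --version')
}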
I've successfully created a Jenkins setup with multiple build agents (nodes) for building on different operating systems.
Each of my build agents has different hardware capabilities (especially in terms of CPU cores available). I'm trying to figure out the preferred/recommended way of passing a per-node variable to the make -j <variable> stage of my build pipeline:
stage('Build [FreeBSD]') {
steps {
dir('build') {
sh 'make -j8'
}
}
}
As I don't want to do this explicitly in each of my projects' pipelines, I figured that I could add an environment variable to the node configuration of each node and then use that environment variable inside the build step.
Is this the correct/recommended way of doing this or am I missing some obvious infrastructure put in place for exactly this?
I'm currently running Jenkins 2.267 and my pipelines are declarative.
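For reference, the approach described above would look roughly like this, assuming each agent's node configuration defines a (hypothetical) NUM_CORES environment variable matching its CPU count:

stage('Build [FreeBSD]') {
    steps {
        dir('build') {
            // NUM_CORES is a per-node environment variable set in the node configuration;
            // single quotes mean the shell (not Groovy) expands ${NUM_CORES}
            sh 'make -j${NUM_CORES}'
        }
    }
}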
If you use Linux agents, you can get the core count from /proc/cpuinfo:
// ...
dir('build') {
sh "make -j\$(grep -c -E '^core id' /proc/cpuinfo)"
}
// ...
The more universal solution is a Groovy method that calls Java's Runtime.getRuntime().availableProcessors(). You may also need to approve your pipeline script on the http://[jenkins-host]/scriptApproval page.
Please note this may not be safe from a security standpoint.
pipeline {
    // ...
            dir('build') {
                sh "make -j${cores()}"
            }
    // ...
}

def cores() { return Runtime.getRuntime().availableProcessors() }
I have a run-of-the-mill Jenkins install using multibranch/pipeline/Jenkinsfile stuff.
The project I'm building is a C/C++ project, which must be compiled on many operating systems and architectures. For this, I have a bunch of nodes (a.k.a. agents or slaves) registered to jenkins, each doing a build of that said project, for that particular combination of operating system and architecture.
Many of those builds are executed in parallel, which is common sense. My problem is now this: I would like to stop all in-progress sub-builds on all nodes when an error occurs anywhere in the overall multi-platform build. For example, let's assume I'm building the code for 3 targets:
macOS
Linux aarch64
Windows 10
If macOS fails, I would like to automatically cancel Linux and Windows, even if they are in progress, because I know for a fact that the overall build will fail: I will not be able to make a release out of builds with missing parts.
Currently, if macOS fails, Linux and Windows happily continue their builds, wasting slots for other queued jobs, electricity, and time.
Any hints are appreciated!
Update 1:
The poor man's solution to this is to serialize the sub-builds in a long stage-after-stage sequence. But that would take a very long time to complete the build, so that is not an option for me.
Update 2:
A more tangible explanation:
stages {
stage('Checkout') {
...
}
stage('Build') {
parallel {
stage ('linux') {
...
}
stage ('windows') {
...
}
stage ('mac') {
...
}
}
}
stage('Archive and release') {
...
}
}
If any of the mac, windows, or linux builds fails, I would like the rest to be aborted immediately as well, not to wait for completion.
Found the answer. The trick is to add this to the pipeline options:
pipeline {
options {
parallelsAlwaysFailFast()
}
...
}
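Alternatively, if you want the fail-fast behavior only for a specific parallel block rather than for the whole pipeline, declarative pipelines also accept failFast true on the stage that contains the parallel stages. A minimal sketch, with stage names, labels, and build commands as illustrative placeholders:

stage('Build') {
    failFast true    // abort the remaining parallel stages as soon as one fails
    parallel {
        stage('linux') {
            agent { label 'linux' }
            steps { sh 'make' }
        }
        stage('windows') {
            agent { label 'windows' }
            steps { bat 'build.bat' }
        }
        stage('mac') {
            agent { label 'mac' }
            steps { sh 'make' }
        }
    }
}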
Sorry, it might be a simple question, but what is the difference between using tools and the docker agent? I think using the docker agent is much more flexible than using tools. When should I use the docker agent and when should I use tools?
Tools
pipeline {
    agent any
    tools {
        maven 'Maven 3.3.9'
        jdk 'jdk8'
    }
    stages {
        stage('Initialize') {
            steps {
                sh '''
                    echo "PATH = ${PATH}"
                    echo "M2_HOME = ${M2_HOME}"
                '''
            }
        }
        stage('Build') {
            steps {
                sh 'mvn -Dmaven.test.failure.ignore=true install'
            }
        }
    }
}
Docker Agent
pipeline {
    agent none
    stages {
        stage('Back-end') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
            }
        }
    }
}
These two options serve slightly different purposes. The tools block allows you to add specific versions of maven, jdk, or gradle to your PATH. You can't use just any version - you can only use versions that are configured on the Global Tool Configuration page in Jenkins.
If your Jenkins configuration contains only a single Maven version, e.g., Maven 3.6.3, you can use only this version. Specifying a version that is not configured in the Global Tool Configuration will cause your pipeline to fail.
pipeline {
agent any
tools {
maven 'Maven 3.6.3'
}
stages {
stage('Example') {
steps {
sh 'mvn --version'
}
}
}
}
Using the tools block to specify different versions of supported tools will be a good option if your Jenkins server does not support running docker containers.
The docker agent, on the other hand, gives you total freedom when it comes to specifying tools and their versions. It does not limit you to maven, jdk, and gradle, and it does not require any pre-configuration in your Jenkins server. The only tool you need is docker, and you are free to use any tool you need in your Jenkins pipeline.
pipeline {
agent {
docker {
image "maven:3.6.3-jdk-11-slim"
}
}
stages {
stage('Example') {
steps {
sh 'mvn --version'
}
}
}
}
When to use one over another?
There is no single right answer to this question. It depends on the context. The tools block is very limiting, but it gives you control over what tools are used in your Jenkins. In some cases, people decide not to use docker in their Jenkins environment, and they prefer to control what tools are available to their users. We can agree with this or not. When it comes to using the docker agent, you get full access to any tools that can be shipped as a docker container.
In some cases, this is the best choice when it comes to using a tool with a specific version - your operating system may not allow you to install the desired version. Of course, you need to keep in mind that this power and flexibility come at a cost. You lose control over what tools are used in your Jenkins pipelines. Also, if you pull tons of different docker images, you will increase disk space consumption. Not to mention that the docker agent allows you to run the pipeline with tools that may consume lots of CPU and memory. (I have seen Jenkins pipelines starting Elasticsearch, Logstash, Zookeeper, and other services on nodes that were not prepared for that load.)
Intro:
We are currently running a Jenkins master with multiple slave nodes, each of which is currently tagged with a single label (e.g., linux, windows, ...)
In our scripted-pipeline scripts (which are defined in a shared library), we currently use snippets like the following:
node ("linux") {
// do something on a linux node
}
or
node ("windows") {
// do something on a windows node
}
Yet, as our testing environment grows, we now have multiple different Linux environments, some of which have certain capabilities and some of which do not (e.g., some may be able to run service X and some may not).
I would like to label my slaves with multiple labels now, indicating their capabilities, for example:
Slave 1: linux, serviceX, serviceY
Slave 2: linux, serviceX, serviceZ
If I now need a Linux slave that is able to run service X, I wanted to do the following (according to this):
node ("linux" && "serviceX") {
// do something on a linux node that is able to run service X
}
Yet, this fails.
Sometimes a Windows slave also gets selected, which is not what I want to achieve.
Question: How can I define multiple labels (AND-combined) based on which a node gets selected in a Jenkins scripted pipeline script?
The && needs to be part of the label string itself, not a Groovy logical operator: in node("linux" && "serviceX"), Groovy evaluates the && expression before Jenkins ever sees it, so the combined label expression never reaches the node step.
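For example, a quick sketch using the labels from the question above:

node('linux && serviceX') {
    // Runs only on agents that carry both the 'linux' and the 'serviceX' label
    // do something on a Linux node that is able to run service X
}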