I have two Jenkins workflow jobs that start the same job with different parameters, namely the branch they build. The latter job builds the project on several platforms. The "head" job, that is, the workflow job, may start on different machines. There are also two Linux machines in the setup.
Sometimes one of the workflow jobs (say, the one for master) starts on one of the Linux machines, and the other starts on the other. Both of them then have to build a target on a Linux machine, and since both machines are busy, both jobs stall.
With regular jobs one can limit where they run, but I couldn't find how to limit where a workflow job can run. Presumably it should be done in the Groovy script, but it escapes me how exactly.
Is there a solution to that?
Here's a Jenkinsfile that does it globally (this tells Jenkins the entire pipeline must run on a slave carrying all three labels):
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                echo 'building stuff'
            }
        }
    }
}
You can also select a particular slave or particular capabilities via the node step, for any stage or part of a stage:
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                // this overrides the top-level agent requirements
                node('linux_with_zsh') {
                    echo 'building stuff'
                }
            }
        }
    }
}
I want to be able to use an AWS node to run our Jenkins build stages, or one of our PCs connected to hardware that can also test the systems. We have some PCs installed that have a recognised agent, and for the AWS node I want to run the steps in a Docker container.
Based on this, I would like to decide whether to use the Docker agent or the custom agent based on parameters passed in by the user at build time. My Jenkinsfile looks like this:
#!/usr/bin/env groovy
@Library('customLib')
import groovy.transform.Field

pipeline {
    agent none
    parameters {
        choice(name: 'NODE', choices: ['PC1', 'PC2', 'dockerNode', 'auto'], description: 'Select which node to run on? Select auto for auto assignment.')
    }
    stages {
        stage('Initialise build environment') {
            agent {
                docker {
                    image 'docker-image'
                    label 'docker-agent'
                    registryUrl 'url'
                    registryCredentialsId 'dockerRegistry'
                    args '--entrypoint=\'\''
                    alwaysPull true
                }
            }
            steps {
                script {
                    if (params.NODE != 'auto' && params.NODE != 'dockerNode') {
                        agentLabel = params.NODE + ' && ' + 'custom_string' // this will lead to running the stages on our PCs
                    } else {
                        agentLabel = 'docker-agent'
                    }
                    env.NODE = params.NODE
                }
            }
        } // initialise build environment
        stage('Waiting for available node') {
            agent { label agentLabel } // if auto or dockerNode is chosen, I want to run in the docker container defined above; otherwise, use the PC.
            post { cleanup { cleanWs() } }
            stages {
                stage('Import and Build project') {
                    steps {
                        script {
                            ...
                        }
                    }
                } // Build project
                ...
            }
        } // Waiting for available node
    }
}
Assuming the PC option works fine if I don't add this Docker option, and the Docker option also works fine on its own, how do I add a label to my custom Docker agent and use it conditionally like above? Running this Jenkinsfile, I get the message "There are no nodes with the label 'docker-agent'", which seems to say that it can't find the label I defined.
Edit: I worked around this problem by adding two outer stages, each guarded by a when block that decides whether the stage runs, and each with a different agent (sketched after the list below).
Pros: I'm able to choose where the steps are run.
Cons:
The inner build and archive stages are identical, so they are simply repeated in each outer stage. (This may become a pro if I want to run a separate HW test stage on only one of the agents.)
Both outer stages always run, which means the Docker container is always pulled from the registry even when it is not used (i.e. when we choose to run the build process on a PC).
Stages do not seem to lend themselves easily to being encapsulated in functions.
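A rough sketch of that workaround, reusing the parameter and labels from the Jenkinsfile above (the stage names and inner steps are illustrative). Declarative's documented beforeAgent true option evaluates the when condition before the agent is allocated, which should avoid the unconditional image pull noted in the second con:
stage('Build on PC') {
    when {
        // Evaluate the condition before allocating the agent.
        beforeAgent true
        expression { params.NODE != 'auto' && params.NODE != 'dockerNode' }
    }
    agent { label "${params.NODE} && custom_string" }
    steps {
        echo 'import, build and archive' // duplicated in both outer stages
    }
}
stage('Build in Docker') {
    when {
        beforeAgent true
        expression { params.NODE == 'auto' || params.NODE == 'dockerNode' }
    }
    agent {
        docker {
            image 'docker-image'
            label 'docker-agent'
        }
    }
    steps {
        echo 'import, build and archive' // duplicated in both outer stages
    }
}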
I'm currently attempting to run Groovy code using Jenkins Pipeline, but I'm finding that my scripts run the Groovy part of the code on the Master rather than the Slaves, despite me specifying the agent.
I have a repo in Git named JenkinsFiles containing the below JenkinsFile, as well as a repo named CommonLibrary, which contains the code run in the stages.
Here's my JenkinsFile:
@Library(['CommonLibrary']) _
pipeline {
    agent { label 'Slave' }
    stages {
        stage("Preparation") {
            agent { label 'Slave && Windows' }
            steps {
                Preparation()
            }
        }
    }
}
Here's the Preparation.groovy file:
def call() {
    println("Running on : " + InetAddress.localHost.canonicalHostName)
}
Unfortunately, I always seem to get the Master returned when I run the Pipeline in Jenkins. I've tried manually installing Groovy on the Slave, and have also removed the Executors on the Master. Any Powershell that gets run triggers correctly on the Slave, and the $NODE_NAME value returns as the Slave, but it's just the Groovy commands that seem to run on the Master.
Any help would be greatly appreciated. Thanks! - Tor
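For what it's worth, this matches how Pipeline executes Groovy: the script itself (including println and calls like InetAddress.localHost) always runs on the master's JVM, and only steps such as sh, bat or powershell execute on the allocated node. A minimal sketch of a Preparation.groovy that reads the hostname through a step instead (assuming a Windows agent, per the label above):
def call() {
    // Plain Groovy runs on the master JVM, so this reports the master:
    println("Groovy sees: " + InetAddress.localHost.canonicalHostName)
    // Steps execute on the allocated node; read the hostname through one:
    def agentHost = powershell(script: '[System.Net.Dns]::GetHostName()', returnStdout: true).trim()
    println("Agent host: " + agentHost)
}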
In my project I have a Jenkins pipeline which should execute two stages on a provided Docker image, and a third stage on the same machine but outside the container. Running this third stage on the same machine is crucial, because the previous stages produce some output that is needed later. These files are stored on the machine through mounted volumes.
In order to be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here goes my problem: while the first two stages work perfectly fine, the third stage cannot start, and "Waiting for next available executor" appears in the logs. The node is obviously waiting for itself, and I cannot use another machine. It looks like Docker is holding the executor and Jenkins thinks the node is busy, so it waits forever.
I'm looking for a solution that will let me run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it's only available since version 1.3 of the Declarative Pipeline plugin, and my installation has 1.2.9.
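For reference, the Declarative docker agent has a documented reuseNode option that targets exactly this pattern: allocate the physical node once at the top level, run the container stages on that same node and workspace, and give the upload stage no per-stage agent at all. A minimal sketch, reusing the image and label from above (assuming your plugin version supports reuseNode):
pipeline {
    // Allocate the physical node once for the whole pipeline.
    agent { label 'jenkins-worker-1' }
    stages {
        stage('Test') {
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    // Run the container on the node allocated at the top level,
                    // in the same workspace, instead of requesting a second executor.
                    reuseNode true
                }
            }
            steps {
                sh 'mvn test'
            }
        }
        stage('Upload') {
            // No per-stage agent: runs directly on jenkins-worker-1, outside the container.
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}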
I'm trying to get the following features to work in Jenkins' Declarative Pipeline syntax:
Conditional execution of certain stages only on the master branch
input to ask for user confirmation to deploy to a staging environment
While waiting for confirmation, it doesn't block an executor
Here's what I've ended up with:
pipeline {
    agent none
    stages {
        stage('1. Compile') {
            agent any
            steps {
                echo 'compile'
            }
        }
        stage('2. Build & push Docker image') {
            agent any
            when {
                branch 'master'
            }
            steps {
                echo "build & push docker image"
            }
        }
        stage('3. Deploy to stage') {
            when {
                branch 'master'
            }
            input {
                message "Deploy to stage?"
                ok "Deploy"
            }
            agent any
            steps {
                echo 'Deploy to stage'
            }
        }
    }
}
The problem is that stage 2 needs the output from stage 1, but this is not available when it runs. If I replace the various agent directives with a global agent any, then the output is available, but the executor is blocked waiting for user input at stage 3. And if I try to combine stages 1 and 2 into a single stage, I lose the ability to conditionally run some steps only on master.
Is there any way to achieve all the behaviour I'm looking for?
You need to use the stash command at the end of your first stage and then unstash when you need the files.
I think these are available in the snippet generator.
As per the documentation:
Saves a set of files for use later in the same build, generally on another node/workspace. Stashed files are not otherwise available and are generally discarded at the end of the build. Note that the stash and unstash steps are designed for use with small files. For large data transfers, use the External Workspace Manager plugin, or use an external repository manager such as Nexus or Artifactory.
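Applied to the pipeline above, a minimal sketch might look like this (the stash name and includes pattern are illustrative):
stage('1. Compile') {
    agent any
    steps {
        echo 'compile'
        // Save the compile output for later stages; the name is arbitrary.
        stash name: 'compiled-artifacts', includes: 'build/**'
    }
}
stage('2. Build & push Docker image') {
    agent any
    when {
        branch 'master'
    }
    steps {
        // Restore the stashed files into this stage's (possibly different) workspace.
        unstash 'compiled-artifacts'
        echo "build & push docker image"
    }
}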
I was reading about the best practices for a Jenkins pipeline.
I have created a declarative pipeline that does not execute parallel jobs, and I want to run everything on the same slave.
I use:
agent {
    label 'xxx'
}
The rest of my pipeline looks like:
pipeline {
    agent {
        label 'xxx'
    }
    triggers {
        pollSCM pipelineParams.polling
    }
    options {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    stages {
        stage('stage1') {
            steps {
                xxx
            }
        }
        stage('stage2') {
            steps {
                xxx
            }
        }
    }
    post {
        always {
            cleanWs()
        }
        failure {
            xxx
        }
        success {
            xxx
        }
    }
}
Now I read the best practices here.
Point 4 says:
Do: All Material Work Within a Node
Any material work within a pipeline should occur within a node block. Why? By default, the Jenkinsfile script itself runs on the Jenkins master, using a lightweight executor expected to use very few resources. Any material work, like cloning code from a Git server or compiling a Java application, should leverage Jenkins distributed builds capability and run on an agent node.
I suspect this is for scripted pipelines.
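For context, a minimal scripted sketch of what the quoted advice describes (the label and commands are illustrative): in scripted syntax, material work is wrapped in an explicit node block, and everything inside it runs on the allocated agent rather than on the master.
// Scripted pipeline: node allocates an executor on a matching agent.
node('xxx') {
    stage('Build') {
        checkout scm       // clone on the agent
        sh 'mvn package'   // compile on the agent
    }
}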
Now my questions are:
Do I ever have to create a node inside a stage in a declarative pipeline (it is possible), or should I use agent inside the stage when I want to run that stage on another, specific agent?
My current pipeline uses a label that matches 4 agents, yet the whole pipeline is always executed on one agent (which is what I want). I would have suspected it to execute stage1 on slaveX and maybe stage2 on slaveY. Why is this not happening?
The documentation is quite misleading.
What the documentation is suggesting is to take advantage of distributed builds, which are activated by using either the agent or the node block.
agent should be used when you want to run the pipeline almost exclusively on one node. The node block allows for more flexibility, as it lets you specify where a granular task should be executed.
If you are running the pipeline on some agent and you wrap a step in a node block pointing at the same agent, there won't be any effect except that a new executor will be allocated to the wrapped step. There is no obvious benefit in doing so; you will simply be consuming executors that you don't need.
In conclusion, you are already using distributed builds when using agent, and this is what the documentation is vaguely recommending.
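To illustrate the difference with a minimal sketch (the labels and commands are hypothetical): the top-level agent pins the whole pipeline to one matching node, while a node step inside a stage allocates a separate executor just for one granular task.
pipeline {
    agent { label 'xxx' } // the whole pipeline runs on one agent matching 'xxx'
    stages {
        stage('stage1') {
            steps {
                sh 'make build' // runs on the top-level agent
                script {
                    // Allocates a second executor (possibly on another machine)
                    // for just this granular task:
                    node('big-memory') {
                        sh 'make integration-tests'
                    }
                }
            }
        }
    }
}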