I have a Jenkinsfile which, as one of its stages, builds several Docker images and pushes them to a registry. The list of these images is quite long and growing, so I don't want to declare each build repetitively. Instead, I have a variable:
def dockerImages = ["myimage1","myimage2","myimage3"]
And then have the following stage:
stage("Initiate docker image builds") {
steps {
script {
dockerImages.each { image ->
stage ("${image}") {
utils.doStuff(${image}
}
}
}
}
}
I only want the build to happen when there is a change, so I could do something like:
stage("Initiate docker image builds") {
when{
changeset "dockerfiles/**"
}
steps {
script {
dockerImages.each { image ->
stage ("${image}") {
utils.doStuff(${image}
}
}
}
}
}
But this would trigger building all the images if there was a change to just one of them. Is there a way I could modify my script to have the when apply to the inner stage("${image}") section? The syntax doesn't appear to allow when at that level.
You can inspect the changeset to see which files have changed, e.g.
https://issues.jenkins.io/browse/JENKINS-58441
The resulting code looks quite messy, though.
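A rough sketch of the idea (untested; it assumes each image's build context lives under dockerfiles/<image>/ and reuses utils.doStuff from your question) for the script block:

script {
    // Collect every path touched by the commits that triggered this build.
    def changedFiles = [] as Set
    currentBuild.changeSets.each { changeSet ->
        changeSet.items.each { entry ->
            changedFiles.addAll(entry.affectedPaths)
        }
    }
    // Only create a stage for images whose directory actually changed.
    dockerImages.each { image ->
        if (changedFiles.any { it.startsWith("dockerfiles/${image}/") }) {
            stage("${image}") {
                utils.doStuff(image)
            }
        }
    }
}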
You could also break the Dockerfiles out into their own repos. It might seem "wasteful" to have a repo for a single file, but it completely avoids situations like this.
Or have a separate Jenkinsfile and Jenkins job for each container build, each of which just checks whether its own Dockerfile has changed (sketched below). You would need a lot of executors, though, if you had lots of containers.
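A per-image Jenkinsfile along those lines could be as small as this (a sketch; it assumes the same dockerfiles/<image>/ layout and the utils.doStuff helper from the question):

pipeline {
    agent any
    stages {
        stage('Build myimage1') {
            // the same built-in changeset condition as above,
            // scoped to this image's directory only
            when {
                changeset "dockerfiles/myimage1/**"
            }
            steps {
                script {
                    utils.doStuff('myimage1')
                }
            }
        }
    }
}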
I have the following declarative pipeline in which I write to a global build variable during a parallel matrix. The write in the Build Detection stage is probably a race condition (it wasn't clear to me), but I am not sure. I have four questions about the simple pipeline below:
1. Is it correct that, since Build Detection uses the same agent (note that only Build uses a different agent), it is definitely a race condition?
2. If I had one agent for each parallel branch, would it not be a race condition, since the global build variable would be different on each agent?
3. Is there a way to make a local copy of build inside the stage so that it is no longer global?
4. How should we deal with global variables communicating state (for when steps etc.) together with the parallel matrix feature?
Map<String, Boolean> build = [:]
pipeline {
    agent any
    stages {
        stage('Test') {
            failFast false
            matrix {
                axes {
                    axis {
                        name 'CONTAINER'
                        values 'A', 'B'
                    }
                }
                stages {
                    stage('Build Detection') {
                        steps {
                            script {
                                build[CONTAINER] = CONTAINER == 'A'
                                echo "Should Build: ${build[CONTAINER]}"
                            }
                        }
                    }
                    stage('Build') {
                        agent {
                            kubernetes {
                                yamlFile '.jenkins/pods/build-kaniko.yaml'
                            }
                        }
                        when {
                            beforeAgent true
                            expression { return build[CONTAINER] }
                        }
                        steps {
                            echo "BUILDING....."
                        }
                    }
                }
            }
        }
    }
}
1. No, it has nothing to do with build agents. The JVM that executes the compiled Groovy code runs on the Jenkins master, not on a build agent, so a global variable is shared by every thread running in the master JVM. Whether there is a possible race condition is unrelated to whether stages use the same or different build agents.
2. Same answer as 1.
3. Yes: simply define a variable using def (or a specific type) in the stage's script block. Just be sure not to reference a new variable without a type, because in Groovy that causes it to be declared globally.
4. Using a map with a key that is specific to each thread, as you're doing, seems like a good approach to me. If you really want to rule out two unsafe thread operations modifying the map at the same time, make sure a thread-safe map is used. You could print out the class of the map to find out which implementation is being instantiated; I would hope it's something thread-safe like ConcurrentHashMap.
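To illustrate points 3 and 4, a minimal sketch (untested; constructing the map may require script approval in the Groovy sandbox) that combines an explicitly thread-safe shared map with a def-scoped local copy:

import java.util.concurrent.ConcurrentHashMap

// Shared across all matrix branches; ConcurrentHashMap makes concurrent
// writes from parallel branches safe.
def build = new ConcurrentHashMap<String, Boolean>()

pipeline {
    agent any
    stages {
        stage('Test') {
            matrix {
                axes {
                    axis {
                        name 'CONTAINER'
                        values 'A', 'B'
                    }
                }
                stages {
                    stage('Build Detection') {
                        steps {
                            script {
                                // 'def' keeps this copy local to the script
                                // block instead of the global binding
                                def shouldBuild = (CONTAINER == 'A')
                                build[CONTAINER] = shouldBuild
                                echo "Should Build: ${shouldBuild}"
                            }
                        }
                    }
                }
            }
        }
    }
}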
I have a repository with multiple Jenkinsfiles (or at least there will be multiple eventually) and I want to set up the jobs in Jenkins using a SEED job.
So far I can set up one job based on my remote repository.
#!/usr/bin/env groovy
/*
* Setup jobs from gitlab project docker-jenkins-pipelines
*/
def createPipelineJob(final String repo) {
String repoName = repo.substring(repo.lastIndexOf("/") + 1, repo.length())
pipelineJob(repoName) {
definition {
cpsScm {
scm {
git {
remote {
url('git@gitlab.com:' + repo + '.git')
}
branches('*/main')
//branches('*/feat*')
}
}
scriptPath("src/main/jobs/ADMIN-initialize-repository/Jenkinsfile")
}
}
}
}
createPipelineJob('sommerfeld.sebastian/docker-jenkins-pipelines')
Now I would like to iterate over all folders in my repo (https://gitlab.com/sommerfeld.sebastian/docker-jenkins-pipelines/-/tree/main/src/main/jobs) and create separate jobs for all Jenkinsfiles.
I would like to have some sort of wildcard for src/main/jobs/*/Jenkinsfile. But looping over the folders would be okay too, and maybe even better, because I could better define the job names.
But I don't know how to iterate over the folders. Can anyone give me a hint on how to do that? Is there an API call for gitlab.com or something?
I would suggest not using the API. You have Groovy at hand, and you can iterate through the files; once you check out the repository you have all the information you need.
https://stackoverflow.com/a/38899519/3708208 is a good starting point for iterating over files with Groovy. There might be some sandbox security limitations, but it shows how to iterate over a set of files. Calling the method to create the pipeline jobs should look something like this:
// parentPath is assumed to point at the checked-out repository's src/main/jobs directory
new File(parentPath).traverse(type: groovy.io.FileType.FILES, nameFilter: ~/Jenkinsfile/) { it ->
    createPipelineJob("sommerfeld.sebastian/docker-jenkins-pipelines/${it.parent.name}")
} // code untested :)
I've been trying to construct multiple jobs from a list, and everything seems to be working as expected. But as soon as I execute the first build (which works correctly), the parameters in the job disappear. This is how I've constructed the pipelineJob for the project.
import javaposse.jobdsl.dsl.DslFactory
def repositories = [
[
id : 'jenkins-test',
name : 'jenkins-test',
displayName: 'Jenkins Test',
repo : 'ssh://<JENKINS_BASE_URL>/<PROJECT_SLUG>/jenkins-test.git'
]
]
DslFactory dslFactory = this as DslFactory
repositories.each { repository ->
pipelineJob(repository.name) {
parameters {
stringParam("BRANCH", "master", "")
}
logRotator{
numToKeep(30)
}
authenticationToken('<TOKEN_MATCHES_WITH_THE_BITBUCKET_POST_RECEIVE_HOOK>')
displayName(repository.displayName)
description("Builds deploy pipelines for ${repository.displayName}")
definition {
cpsScm {
scm {
git {
branch('${BRANCH}')
remote {
url(repository.repo)
credentials('<CREDENTIAL_NAME>')
}
extensions {
localBranch('${BRANCH}')
wipeOutWorkspace()
cloneOptions {
noTags(false)
}
}
}
scriptPath('Jenkinsfile')
}
}
}
}
}
After running the above script, all the required jobs are created successfully. But then once I build any job, the parameters disappear.
After that, when I run the seed job again, the job shows the parameters again. I'm having a hard time figuring out where the problem is.
I've tried many things but nothing works. Would appreciate any help. Thanks.
This comment helped me figure out a similar issue with my .groovy file:
I had called the parameters property twice (once at the start of the node, and then again to set other parameters in an if block), so the latter overwrote the initial parameters.
BTW, as per the comments in the linked ticket, this is an issue with both scripted and declarative pipelines.
I fixed it by providing all job parameters in each parameters call (for the case with the ifs).
Though I don't see repeated calls in the code you've provided, please check the full Groovy files for your jobs and add all parameters to every parameters {} block.
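For illustration, the overwrite pattern and its fix look roughly like this (a hypothetical seed-script sketch; the job and parameter names are made up):

// Problematic: the second parameters block replaces the first,
// so BRANCH silently disappears from the generated job.
pipelineJob('example-job') {
    parameters {
        stringParam('BRANCH', 'master', '')
    }
    // ... later in the seed script, e.g. inside an if block ...
    parameters {
        booleanParam('DEPLOY', false, '')
    }
}

// Fix: declare every parameter in a single parameters block
// (or repeat the full set in each call).
pipelineJob('example-job') {
    parameters {
        stringParam('BRANCH', 'master', '')
        booleanParam('DEPLOY', false, '')
    }
}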
What is modern best practice for multi-configuration builds (with Jenkins)?
I want to support multiple branches and multiple configurations.
For example, for each version V1, V2 of the software I want builds targeting platforms P1 and P2.
We have managed to set up multi-branch declarative pipelines. Each build runs in its own Docker container, so it's easy to support multiple platforms.
pipeline {
    agent none
    stages {
        stage('Build, test and deploy for P1') {
            agent {
                dockerfile {
                    filename 'src/main/docker/Jenkins-P1.Dockerfile'
                }
            }
            steps {
                sh 'buildit...'
            }
        }
        stage('Build, test and deploy for P2') {
            agent {
                dockerfile {
                    filename 'src/main/docker/Jenkins-P2.Dockerfile'
                }
            }
            steps {
                sh 'buildit...'
            }
        }
    }
}
This gives one job covering multiple platforms but there is no separate red/blue status for each platform.
There is a good argument that this does not matter, as you should not release unless the build works on all platforms.
However, I would like a separate status indicator for each configuration. This suggests I should use a multi-configuration build which triggers a parameterised build for each configuration as below (and the linked question):
pipeline {
    parameters {
        choice(name: 'Platform', choices: ['P1', 'P2'], description: 'Target OS platform')
    }
    agent {
        dockerfile {
            filename someMagicToGetDockerfilePathFromPlatform()
        }
    }
    stages {
        stage('Build, test and deploy for P1') {
            steps {
                sh 'buildit...'
            }
        }
    }
}
There are several problems with this:
A declarative pipeline has more constraints over how it is scripted
Multi-configuration builds cannot trigger declarative pipelines (even with the parameterized triggers plugin I get "project is not buildable").
This also raises the question of what use parameters are in declarative pipelines at all.
Is there a strategy that gives the best of both worlds, i.e.:
pipeline as code
separate status indicators
limited repetition?
This is a partial answer. I think others with better experience will be able to improve on it.
This is currently untested. I may be barking up the wrong tree.
Please comment or add a better answer.
Do not use pipeline parameters except where you need user input
Use a hybrid of a scripted and declarative pipeline
(see also https://stackoverflow.com/a/46675227/1569204)
Have a function which declares a pipeline based on parameters:
(see also https://jenkins.io/doc/book/pipeline/shared-libraries/)
Use nodes to create visible indicators in the pipeline (at least in blue ocean)
So something like the following:
def build(String platform) {
    def dockerFile
    def indicator
    switch (platform) {
        case 'P1':
            dockerFile = 'foo'
            indicator = 'build for foo'
            break
        case 'P2':
            dockerFile = 'bar'
            indicator = 'build for bar'
            break
    }
    pipeline {
        agent {
            dockerfile {
                filename "$dockerFile"
            }
        }
        stages {
            // the stage name serves as the visible indicator (e.g. in Blue Ocean)
            stage("$indicator") {
                steps {
                    echo "build it"
                }
            }
        }
    }
}
The relevant code could be moved to a shared library (even if you don't actually need to share it).
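For instance (a sketch; the library and step names are hypothetical), the build function above could move into the shared library as vars/platformPipeline.groovy:

// vars/platformPipeline.groovy -- same body as build(String platform) above
def call(String platform) {
    // ... switch on platform, then declare the pipeline, as shown above ...
}

Each platform job's Jenkinsfile then reduces to:

@Library('my-shared-lib') _
platformPipeline('P1')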
I think the cleanest approach is to have this all in a pipeline similar to the first one you presented; the only modification I would make is to run the platform stages in parallel, so you actually build and test for both platforms.
To reuse the previous stage's workspace you can add reuseNode true.
Something similar to this flow, that would have parallel build for platforms
pipeline {
    agent { label 'docker' }
    stages {
        stage('Common pre') { ... }
        stage('Build all platforms') {
            parallel {
                stage('Build, test and deploy for P1') {
                    agent {
                        dockerfile {
                            filename 'src/main/docker/Jenkins-P1.Dockerfile'
                            reuseNode true
                        }
                    }
                    steps {
                        sh 'buildit...'
                    }
                }
                stage('Build, test and deploy for P2') {
                    agent {
                        dockerfile {
                            filename 'src/main/docker/Jenkins-P2.Dockerfile'
                            reuseNode true
                        }
                    }
                    steps {
                        sh 'buildit...'
                    }
                }
            }
        }
        stage('Common post parallel') { ... }
    }
}
I am creating a Jenkins pipeline, and I want a certain stage to be triggered only when a particular log file's last-modified date is updated after the initiation of the pipeline job (the log file is located on the server node where all the stages run). I understand we need to use a when condition, but I'm not really sure how to implement it.
I tried referring to some pipeline-related portals but could not find an answer.
Can someone please help me with this?
Thanks in advance!
Getting data about a file is quite tricky in a Jenkins pipeline when using the Groovy sandbox, since you're not allowed to call new File(...).lastModified. However, there is the findFiles step (from the Pipeline Utility Steps plugin), which returns a list of wrapped file objects with a getter for the last-modified time in millis, so we can use findFiles(glob: "...")[0].lastModified.
The returned array may be empty, so we should check for that (see the full example below).
The current build's start time in millis is accessible via currentBuild.startTimeInMillis.
Now that we have both, we can use them in an expression:
pipeline {
    agent any
    stages {
        stage("create file") {
            steps {
                // touch also comes from the Pipeline Utility Steps plugin
                touch "testfile.log"
            }
        }
        stage("when file") {
            when {
                expression {
                    def files = findFiles(glob: "testfile.log")
                    // run only if the file exists and was modified after the build started
                    files && files[0].lastModified > currentBuild.startTimeInMillis
                }
            }
            steps {
                echo "i ran"
            }
        }
    }
}