[Jenkins] How to iterate over a parameter in a Groovy pipeline

So I want to explain what I'm doing and where I'm stuck.
GOAL: create a parallel pipeline that runs 3 pods, one for each of our code tenants, each executing a Maven process.
params.testTenant is a plugin parameter whose value is "test1, test2, test3".
I get 3 pods, each named after a tenant, but when I do "println tenant" I only get test3 inside each of the pods.
println tenant
should print test1, then test2, then test3.
Why is this happening? :(
parallelMap = [:]
for (def tenant in params.testTenant.tokenize(',')) {
parallelMap["$tenant"] = {
nodeContainer(
[image(alias: 'tests',
imageName: 'test',
alwaysPull: true)]) {
println tenant
stage('Checkout SCM') {
checkout scm
}
}
}
}
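The usual cause of this symptom (a well-known Groovy/Jenkins pitfall, not confirmed by the question itself) is closure capture: every closure in the map captures the same `tenant` variable, which has already advanced to its last value by the time the parallel branches execute. A minimal sketch of the common fix is to take a per-iteration local copy inside the loop (`localTenant` is an illustrative name):

```groovy
parallelMap = [:]
for (def tenant in params.testTenant.tokenize(',')) {
    // copy the loop variable; each closure now captures its own variable
    def localTenant = tenant.trim()
    parallelMap[localTenant] = {
        // prints test1, test2, test3 in their respective branches
        println localTenant
    }
}
parallel parallelMap
```

Using `params.testTenant.tokenize(',').each { tenant -> ... }` has the same effect, because `each` binds a fresh closure parameter per element.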

Related

How to fetch Build ID of Job triggered from another job

I have the following situation in a Jenkinsfile of Job A:
...
... // Some execution
...
call Job B
// When Job B runs successfully
params.some_var_used_in_Job_C = BUILD ID of Job B
call Job C
I have to know the BUILD ID of Job B after it succeeds and I need to pass it as a params to Job C. Can anyone suggest how I can do this?
Also is it possible that I can pass some variable from Job B to Job A (so that I can send that value to Job C later) ?
Should be as simple as this:
node {
stage('Test') { // for display purposes
def jb = build wait: true, job: 'JobB'
println jb.fullDisplayName
println jb.id
//this will show everything available but needs admin privs to execute
println jb.properties
}
}
If you want to pass a simple string from job B to Job A then in Job B you can set an env variable
env.someVar = "some value"
then back in job A
println jb.buildVariables.someVar
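Put together, the upstream job could look like the following sketch (job names `JobB`/`JobC` and the parameter name are taken from the question; the rest is illustrative):

```groovy
node {
    stage('Trigger') {
        // run Job B and wait for it to finish
        def jb = build wait: true, job: 'JobB'
        // the downstream build id, usable as a parameter for Job C
        def downstreamId = jb.id
        // value exported by Job B via: env.someVar = "some value"
        println jb.buildVariables.someVar
        // pass the id of Job B's build on to Job C
        build job: 'JobC', parameters: [
            string(name: 'some_var_used_in_Job_C', value: downstreamId)
        ]
    }
}
```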
@Kaus Untwale's answer is correct. I've copied it into a declarative pipeline and added error handling.
From an upstream job:
pipeline {
agent any
stages {
stage('Run job') {
steps {
// make build as unstable on error
// remove this if not needed
catchError(buildResult: 'UNSTABLE') {
script {
def jb = build wait: true, job: 'test2', propagate: false
println jb
println jb.fullDisplayName
println jb.id
// throw an error if build failed
// this still allows you to get the job infos you need
if (jb.result == 'FAILURE') {
error('Downstream job failed')
}
}
}
}
}
}
}
Get the build within a downstream job:
// job: test2
pipeline {
agent any
stages {
stage('Upstream') {
steps {
script {
// upstream build if available
def upstream = currentBuild.rawBuild.getCause(hudson.model.Cause$UpstreamCause)
echo upstream?.shortDescription
// the run of that cause holds more infos
def upstreamRun = upstream?.getUpstreamRun()
echo upstreamRun?.number.toString()
}
}
}
}
}
See the API docs for the Run class. You'll also need to approve some calls from the downstream example via script approval, or disable the Groovy Sandbox on that job.
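If sandbox approvals are undesirable, an alternative sketch is to pass the upstream information down explicitly as parameters instead of reading `rawBuild` (the parameter names here are illustrative, and the downstream job must declare matching string parameters):

```groovy
// upstream: hand the build number and job name to the downstream job
build job: 'test2', parameters: [
    string(name: 'UPSTREAM_BUILD_NUMBER', value: env.BUILD_NUMBER),
    string(name: 'UPSTREAM_JOB_NAME', value: env.JOB_NAME)
]

// downstream: read the declared parameters, no rawBuild access needed
echo "Triggered by ${params.UPSTREAM_JOB_NAME} #${params.UPSTREAM_BUILD_NUMBER}"
```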

Multiple jobs running the same pipeline

I have a project using a declarative pipeline for a c++ project.
I wish to have multiple jobs running the same pipeline with different configurations.
E.g. Release or Debug build. Using different compilers. Some jobs using the address sanitizer or thread sanitizer.
Some of these jobs should run on every commit. Other jobs should run once a day.
My idea was to control this by having the pipeline depend on environment variables or parameters.
I now have a JenkinsFile where all the stages depends on environment variables.
Unfortunately I can't find a place to control the variables from the outside.
Is there any way I can control these environment variables on a job level without violating the DRY principle?
A simplified example depending on two variables.
pipeline {
agent {
label 'linux_x64'
}
environment {
CC = '/usr/bin/gcc-4.9'
BUILD_CONF = 'Debug'
}
stages {
stage('Build') {
steps {
cmakeBuild buildDir: 'build', buildType: "${env.BUILD_CONF}", installation: 'InSearchPath', sourceDir: 'src', steps: [[args: '-j4']]
}
}
stage('Test') {
steps {
sh 'test.sh'
}
}
}
}
I now want to create a second job running the same pipeline but with CC='/usr/bin/clang' and BUILD_CONF='Release'.
In my real project I have more variables and want to test approximately ten combinations.
Yes, it's possible in one pipeline, but it's easier with a Jenkins scripted pipeline, like the example below:
node {
def x=[
[CC: 10, DD: 15],
[CC: 11, DD: 16],
[CC: 12, DD: 17]
]
x.each{
stage("stage for CC - ${it.CC}") {
echo "CC = ${it.CC} & DD = ${it.DD}"
}
}
}
So in your case you can structure your pipeline like below:
node {
// define all your expected key and value here like an array
def e = [
[CC: '/usr/bin/gcc-4.9', BUILD_CONF: 'Debug'],
[CC: '/usr/bin/clang', BUILD_CONF: 'Release']
]
// Iterate and prepare the pipeline
e.each {
stage("build - ${it.BUILD_CONF}") {
cmakeBuild buildDir: 'build', buildType: "${it.BUILD_CONF}", installation: 'InSearchPath', sourceDir: 'src', steps: [[args: '-j4']]
}
stage("test - ${it.BUILD_CONF}") {
sh 'test.sh'
}
}
}
This way, you can reuse your stages with a defined array like e.
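If you'd rather stay declarative, a common alternative (a sketch, not from the answer above) is to expose the combination as job parameters with defaults; each Jenkins job then points at the same Jenkinsfile and only overrides the parameter values, keeping things DRY:

```groovy
pipeline {
    agent { label 'linux_x64' }
    parameters {
        // defaults match the original environment block; each job overrides them
        string(name: 'CC', defaultValue: '/usr/bin/gcc-4.9', description: 'Compiler to use')
        choice(name: 'BUILD_CONF', choices: ['Debug', 'Release'], description: 'Build type')
    }
    environment {
        CC = "${params.CC}"
    }
    stages {
        stage('Build') {
            steps {
                cmakeBuild buildDir: 'build', buildType: "${params.BUILD_CONF}", installation: 'InSearchPath', sourceDir: 'src', steps: [[args: '-j4']]
            }
        }
        stage('Test') {
            steps {
                sh 'test.sh'
            }
        }
    }
}
```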

Jenkins pipeline - Trigger a new build for a given job

I have two Jenkins pipelines, named A and B. Both use the same docker container, named C1. The job A looks like this:
pipeline {
agent {
node {
label 'c1'
}
}
stages {
...
}
post {
always {
script {
echo "always"
}
}
success {
script {
echo "success"
}
}
failure {
script {
echo "failure"
}
}
unstable {
script {
echo "unstable"
}
}
}
}
The only difference is in the job B. This, in the post action always calls the job A, like this:
build job: 'A/master', parameters: [ string(name: 'p1', value: params["p1"]) ]
The error occurs when starting job A, saying:
Error when executing success post condition:
hudson.AbortException: No item named A/master found
Also, listing the parent folders shows that there is no folder for job A.
How could I solve this problem?
The solution for this problem is the folder name in which these jobs are located. So, the name of the job is test/A/master.
From your comments I see you're using the Gitea plugin to fetch the repositories from your Gitea instance. Gitea repositories are organized per user/organization, like GitHub.
Once you've added a Gitea project (user/organization) via the Jenkins plugin the full name of one of the jobs will be:
<user_or_orga>/<repository_name>/<branch>
For example, for the user "crash" who owns the repository "stackoverflow", it would be (for the master branch):
crash/stackoverflow/master
To get a list of all items including all full job names you can run this in the script console (under Manage Jenkins):
Jenkins.instance.getAllItems(AbstractItem.class).each {
println(it.fullName)
};
P.S. The same rule applies to the Bitbucket plugin and others.

Using Jenkins to run SonarQube against any project and choice of branch(es)

I am working on creating a single Jenkins job that allows you to pick the GitHub project and then select the branch you would like to run your SonarQube tests on.
So far I have been able to create a job that ONLY runs against the Master build of each project.
Does anyone have any experience creating something like this?
Thanks!
You need to parametrize your build.
You will have to make gitproject and gitBranch parameters; this will let you select the project you want to run and the branch too. Here is an example:
pipeline {
agent any
parameters {
choice(
name: 'PLATFORM',
choices:"Test\nArt19-Data-Pipeline\nBrightcove-Report\nBrightcove-Video\nData-Delivery\nGlobal_Facebook_Engagement_Score\nGoogle-Analytics-Data-Pipeline\nInstagram-Data-Pipeline\nTwitter-Analytics\nTwitter-Data-Pipeline\nYoutube-Data",
description: "Choose the lambda function to deploy or rollback")
choice(
name: 'STAGE',
choices:"dev\nstag",
description: "Choose the lambda function to deploy or rollback")
}
stages {
stage("Git CheckOut") {
steps {
//CheckOut from the repository
//git credentialsId: 'svc.gitlab', branch: 'master', url: 'git@git.yourProjectURL/yourProjectName.git'
echo " Parameters are ${PLATFORM}"
echo " STAGE IS ${STAGE}"
}
}
}
}
All you need is to replace 'master' with a parameter and 'yourProjectName' with another parameter instead of the ones I used as examples.
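With those parameters in place, the checkout step could be parametrized like this sketch (BRANCH is an assumed additional parameter, and the URL is the placeholder from the answer above):

```groovy
parameters {
    // hypothetical extra parameter for selecting the branch to analyze
    string(name: 'BRANCH', defaultValue: 'master', description: 'Branch to run SonarQube against')
}

// in the checkout stage, both repository and branch come from parameters
git credentialsId: 'svc.gitlab',
    branch: params.BRANCH,
    url: "git@git.yourProjectURL/${params.PLATFORM}.git"
```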

Jenkins Pipeline Examples for Selecting Different Jenkins Node

Our Jenkins setup consists of master nodes and different / dedicated worker nodes for running jobs in dev, test and prod environment. How do I go about creating a scripted pipeline code that allows users to select environment (possibly from master node) and depending upon the environment selected would execute the rest of the job in the node selected? Here is my initial thought:
stage('Select environment ') {
script {
def userInput = input(id: 'userInput', message: 'Merge to?',
parameters: [[$class: 'ChoiceParameterDefinition', defaultValue: 'strDef',
description:'describing choices', name:'Env', choices: "dev\ntest\nprod"]
])
println(userInput);
}
echo "Environment here ${params.Env}" // prints null here
stage("Build") {
node("${params.Env}") { // schedule job based upon the environment selected earlier
echo "My test here"
}
}
}
Am I on the right path, or should I be looking at something else?
Another follow up question is that the job that is running on the worker node also requires additional user input. Is there a way to combine the user input in one go such that the users would not be prompted with multiple user screens?
If you pass the environment as a build parameter when kicking off the job, and you have appropriate labels on your nodes, you could do something like:
agent = params.WHAT_NODE
agentLabels = "deploy && ${agent}"
pipeline {
agent { label agentLabels }
....
}
Ended up doing the following for scripted pipeline:
The code for selecting the environment can run on any node (master, or a slave with an agent running). The parameter can be injected into an environment variable (env.Env below).
node {
stage('Select Environment'){
env.Env = input(id: 'userInput', message: 'Select Environment',
parameters: [[$class: 'ChoiceParameterDefinition',
defaultValue: 'strDef',
description:'describing choices',
name:'Env',
choices: "jen-dev-worker\njen-test-worker\njen-prod-worker"]
])
println(env.Env);
}
stage('Display Environment') {
println(env.Env);
}
}
The following code snippet ensures that the script executes on the environment selected in the previous step. It requires Jenkins workers with the labels jen-dev-worker, jen-test-worker, and jen-prod-worker to be available.
node (env.Env) {
echo "Hello world, I am running on ${env.Env}"
}
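For the follow-up about combining prompts: the input step accepts several parameters in one call, and when more than one parameter is given it returns a map keyed by parameter name, so a single dialog can collect everything at once. A sketch (the second parameter, Version, is purely illustrative):

```groovy
node {
    stage('Collect input') {
        def answers = input(
            id: 'userInput', message: 'Configure run',
            parameters: [
                choice(name: 'Env',
                       choices: "jen-dev-worker\njen-test-worker\njen-prod-worker",
                       description: 'Target environment'),
                string(name: 'Version', defaultValue: '1.0',
                       description: 'Version to deploy')
            ]
        )
        // with multiple parameters, input returns a map
        echo "env=${answers.Env}, version=${answers.Version}"
    }
    // run the rest of the job on the chosen worker
    node(answers.Env) {
        echo "Running on ${answers.Env}"
    }
}
```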
