How to create Dynamic array of sshUserPrivateKey in jenkinsfile? - jenkins

Error
Could not instantiate
{bindings=[#sshUserPrivateKey(credentialsId=dev-02,keyFileVariable=dev-02-key),
#sshUserPrivateKey(credentialsId=dev-01,keyFileVariable=dev-01-key)]}
for org.jenkinsci.plugins.credentialsbinding.impl.BindingStep:
java.lang.ClassCastException:
org.jenkinsci.plugins.credentialsbinding.impl.BindingStep.bindings
expects
java.util.List<org.jenkinsci.plugins.credentialsbinding.MultiBinding>
but received class java.lang.String
I am trying to create an array of sshUserPrivateKey values in my Jenkinsfile.
I build this array, service_array, and want to use it in withCredentials, but I am running into the error above.
stages {
    stage('configure value') {
        steps {
            script {
                def service_array = []
                def conjourString = ""
                if (env.BRANCH_NAME == 'stage') {
                    env.varible_host = 'stage'
                } else if (env.BRANCH_NAME == 'prod') {
                    env.varible_host = 'production'
                } else if (env.BRANCH_NAME == 'dev') {
                    env.varible_host = 'dev'
                } else {
                    env.varible_host = 'dev'
                }
                def listofserver = readJSON file: "${env.WORKSPACE}/inventory/listofserver.json"
                for (def i = 0; i < listofserver[env.varible_host].size(); i++) {
                    println(listofserver[env.varible_host][i])
                    def hostname = listofserver[env.varible_host][i]['host_name']
                    def keyString = hostname + "-key"
                    service_array.add(sshUserPrivateKey(credentialsId: hostname,
                                                        keyFileVariable: keyString))
                }
                env.service_array = service_array
            }
        }
    }
    stage('Run Ansible Playbook') {
        steps {
            echo "!* Running Ansible Playbook ${env.ANSIBLE_PB}. *!"
            withCredentials(env.service_array) {
            }
        }
    }
}
The error points at env.service_array — what can I use here instead?
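Everything stored in `env` is coerced to a String, which is exactly what the ClassCastException says: `withCredentials` expects a `List<MultiBinding>` but receives a String. A minimal sketch of a workaround (untested, assuming the same `listofserver.json` layout and `varible_host` logic) is to keep the list in a plain script-level Groovy variable instead of `env`:

```groovy
// service_bindings is an ordinary Groovy variable, so the list of
// sshUserPrivateKey bindings survives between stages without being
// coerced to a String the way env.* values are.
def service_bindings = []

pipeline {
    agent any
    stages {
        stage('configure value') {
            steps {
                script {
                    def listofserver = readJSON file: "${env.WORKSPACE}/inventory/listofserver.json"
                    listofserver[env.varible_host].each { server ->
                        service_bindings.add(sshUserPrivateKey(
                            credentialsId: server['host_name'],
                            keyFileVariable: server['host_name'] + "-key"))
                    }
                }
            }
        }
        stage('Run Ansible Playbook') {
            steps {
                script {
                    // withCredentials now receives a real List<MultiBinding>
                    withCredentials(service_bindings) {
                        // use the bound key files here
                    }
                }
            }
        }
    }
}
```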

Related

Jenkins Parallel Build reads in an empty map, but it held data in a previous stage

Total noob here, trying to make a parallel build more dynamic.
Using this declarative script https://stackoverflow.com/a/48421660/14335065
with a prepopulated map def jobs = ["JobA", "JobB", "JobC"] works perfectly.
Instead, I am trying to read from a global variable JOBS = [] which I populate in a stage using JOBS.add("JobAAA") syntax.
Printing out JOBS in a pipeline stage shows there are contents within,
JOBS map is [JobAAA, JobBBB, JobCCC]
but when I use it to generate a parallel build it seems to become empty and I am getting error message
No branches to run
I know I must be mixing up my understanding somewhere; can anyone please point me in the right direction?
Here is the code I am fighting with
def jobs = ["JobA", "JobB", "JobC"]
JOBS_MAP = []
def parallelStagesMap = jobs.collectEntries {
    ["${it}" : generateStage(it)]
}
def parallelStagesMapJOBS = JOBS_MAP.collectEntries {
    ["${it}" : generateStage(it)]
}
def generateStage(job) {
    return {
        stage("Build: ${job}") {
            echo "This is ${job}."
        }
    }
}
pipeline {
    agent any
    stages {
        stage('populate JOBS map') {
            steps {
                script {
                    JOBS_MAP.add("JobAAA")
                    JOBS_MAP.add("JobBBB")
                    JOBS_MAP.add("JobCCC")
                }
            }
        }
        stage('print out JOBS map') {
            steps {
                echo "JOBS_MAP map is ${JOBS_MAP}"
            }
        }
        stage('parallel job stage') {
            steps {
                script {
                    parallel parallelStagesMap
                }
            }
        }
        stage('parallel JOBS stage') {
            steps {
                script {
                    parallel parallelStagesMapJOBS
                }
            }
        }
    }
}
Try this — build the parallel maps inside the stages that run them. At the top level, collectEntries executes while the Jenkinsfile is still being evaluated, before any stage has added entries to JOBS_MAP, which is why the map comes out empty:
def jobs = ["JobA", "JobB", "JobC"]
JOBS_MAP = []
def generateStage(job) {
    return {
        stage("Build: ${job}") {
            echo "This is ${job}."
        }
    }
}
pipeline {
    agent any
    stages {
        stage('populate JOBS map') {
            steps {
                script {
                    JOBS_MAP.add("JobAAA")
                    JOBS_MAP.add("JobBBB")
                    JOBS_MAP.add("JobCCC")
                }
            }
        }
        stage('print out JOBS map') {
            steps {
                echo "JOBS_MAP map is ${JOBS_MAP}"
            }
        }
        stage('parallel job stage') {
            steps {
                script {
                    def parallelStagesMap = jobs.collectEntries {
                        ["${it}" : generateStage(it)]
                    }
                    parallel parallelStagesMap
                }
            }
        }
        stage('parallel JOBS stage') {
            steps {
                script {
                    def parallelStagesMapJOBS = JOBS_MAP.collectEntries {
                        ["${it}" : generateStage(it)]
                    }
                    parallel parallelStagesMapJOBS
                }
            }
        }
    }
}

Jenkins file groovy issues

Hi, my Jenkinsfile code is as follows. I am basically trying to call a Python script and execute it, and I have defined some variables in my code. When I try to run it, it gives a "no such property" error at the beginning and I can't find the reason behind it.
I would really appreciate any suggestions on this .
import groovy.json.*
pipeline {
    agent {
        label 'test'
    }
    parameters {
        choice(choices: '''\
env1
env2'''
        , description: 'Environment to deploy', name: 'vpc-stack')
        choice(choices: '''\
node1
node2'''
        , description: '(choose )', name: 'stack')
    }
    stages {
        stage('Tooling') {
            steps {
                script {
                    // set up terraform
                    def tfHome = tool name: 'Terraform 0.12.24'
                    env.PATH = "${tfHome}:${env.PATH}"
                    env.TFHOME = "${tfHome}"
                }
            }
        }
        stage('Build all modules') {
            steps {
                wrap([$class: 'BuildUser']) {
                    // build all modules
                    script {
                        if (params.refresh) {
                            echo "Jenkins refresh!"
                            currentBuild.result = 'ABORTED'
                            error('Jenkinsfile refresh! Aborting any real runs!')
                        }
                        sh(script: """pwd""")
                        def status_code = sh(script: """PYTHONUNBUFFERED=1 python3 scripts/test/test_script.py /$vpc-stack""", returnStatus: true)
                        if (status_code == 0) {
                            currentBuild.result = 'SUCCESS'
                        }
                        if (status_code == 1) {
                            currentBuild.result = 'FAILURE'
                        }
                    }
                }
            }
        }
    }
    post {
        always {
            echo 'cleaning workspace'
            step([$class: 'WsCleanup'])
        }
    }
}
And this code is giving me the following error :
hudson.remoting.ProxyException: groovy.lang.MissingPropertyException: No such property: vpc for class
Any suggestions on what can be done to resolve this?
Use another name for the choice parameter, without the dash sign -, e.g. vpc_stack or vpcstack, and replace the variable name in the Python call. Inside the double-quoted shell string, Groovy parses $vpc-stack as the property vpc followed by a literal -stack, which is why the error complains that vpc does not exist.
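A minimal illustration of the parsing rule (the values here are hypothetical):

```groovy
// In a GString, an unbraced $name stops at the first character that is
// not valid in an identifier, so "$vpc-stack" means "${vpc}" + "-stack"
// and Groovy looks up a property called vpc.
def vpc_stack = 'env1'
echo "chosen: $vpc_stack"                 // interpolates the full identifier
echo "chosen: ${params['vpc-stack']}"     // or keep the dash and index params explicitly
```

Indexing `params` with the quoted key is an alternative if renaming the parameter is not an option.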

Jenkins Pipeline Conditional Environmental Variables

I have a set of static environmental variables in the environmental directive section of a declarative pipeline. These values are available to every stage in the pipeline.
I want the values to change based on an arbitrary condition.
Is there a way to do this?
pipeline {
    agent any
    environment {
        if ${params.condition} {
            var1 = '123'
            var2 = abc
        } else {
            var1 = '456'
            var2 = def
        }
    }
    stages {
        stage('One') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
        stage('Two') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
    }
}
Looking for the same thing, I found a nice answer in another question:
basically, use the ternary conditional operator.
pipeline {
    agent any
    environment {
        var1 = "${params.condition == true ? "123" : "456"}"
        var2 = "${params.condition == true ? abc : def}"
    }
}
Note: keep in mind that in the way you wrote your question (and I did my answer) the numbers are Strings and the letters are variables.
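If abc and def are actually meant as literal strings rather than variables (an assumption on my part), a fully self-contained variant quotes them and drops the redundant == true:

```groovy
// Both branches are string literals; params.condition is already a boolean.
environment {
    var1 = "${params.condition ? '123' : '456'}"
    var2 = "${params.condition ? 'abc' : 'def'}"
}
```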
I would suggest creating an "Environment" stage and declaring your variables according to the condition you want, something like below:
pipeline {
    agent any
    environment {
        // Declare variables which will remain the same throughout the build
    }
    stages {
        stage('Environment') {
            agent { node { label 'master' } }
            steps {
                script {
                    // Write the condition for the variables which need to change
                    if (params.condition) {
                        env.var1 = '123'
                        env.var2 = abc
                    } else {
                        env.var1 = '456'
                        env.var2 = def
                    }
                    sh "printenv"
                }
            }
        }
        stage('One') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
        stage('Two') {
            steps {
                script {
                    ...
                    echo env.var1
                    echo env.var2
                    ...
                }
            }
        }
    }
}
Suppose we want a downstream job to use optional params when it is called from an upstream job, and default params when it is run by itself.
But we don't want to have "holder" params with default values in the downstream job for some reason.
This can be done via a Groovy function.
upstream Jenkinsfile - the param CREDENTIALS_ID is passed downstream:
pipeline {
    stages {
        stage('trigger downstream') {
            steps {
                build job: "my_downsteam_job_name",
                    parameters: [string(name: 'CREDENTIALS_ID', value: 'other_credentials_id')]
            }
        }
    }
}
downstream Jenkinsfile - if the param CREDENTIALS_ID is not passed from upstream, the function returns a default value:
def getCredentialsId() {
    if (params.CREDENTIALS_ID) {
        return params.CREDENTIALS_ID
    } else {
        return "default_credentials_id"
    }
}
pipeline {
    environment {
        TEST_PASSWORD = credentials("${getCredentialsId()}")
    }
}
You can get another level of flexibility using maps:
stage("set_env_vars") {
    steps {
        script {
            def MY_MAP1 = [A: "123", B: "456", C: "789"]
            def MY_MAP2 = [A: "abc", B: "def", C: "ghi"]
            env.var1 = MY_MAP1."${env.switching_var}"
            env.var2 = MY_MAP2."${env.switching_var}"
        }
    }
}
This way, more choices are possible.
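If env.switching_var might not match any key, a guarded lookup avoids silently setting the variable to null; a sketch using Groovy's `withDefault` (the "fallback" value is a hypothetical placeholder):

```groovy
// withDefault supplies a value for any key missing from the map.
def MY_MAP1 = [A: "123", B: "456", C: "789"].withDefault { "fallback" }
env.var1 = MY_MAP1[env.switching_var]   // unknown keys yield "fallback" instead of null
```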

Jenkins pipeline: load properties from file

The pipeline code below works well:
pipeline {
    agent {
        label "test_agent"
    }
    stages {
        stage("test") {
            steps {
                script {
                    sh "echo 'number=${BUILD_NUMBER}' >log"
                    if (fileExists('log')) {
                        load 'log'
                        retVal = "${number}"
                    }
                    echo "${retVal}"
                }
            }
        }
    }
}
However, I tried to move the file-reading logic into a shared-library step (named getNumber.groovy) and call it from the pipeline, like this:
getNumber.groovy
def call() {
    def retVal
    if (fileExists('log')) {
        load 'log'
        retVal = "${number}"
    }
    return retVal
}
This is how the pipeline (test.groovy) calls this lib:
@Library('lib') _
pipeline {
    agent {
        label "test_agent"
    }
    stages {
        stage("test") {
            steps {
                script {
                    sh "echo 'number=${BUILD_NUMBER}' >log"
                    def retVal = getNumber()
                    echo "${retVal}"
                }
            }
        }
    }
}
It always fails with the error below:
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: number for class: getNumber
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:53)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:458)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.getProperty(DefaultInvoker.java:34)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
Any suggestions? How can I fix this if I want to encapsulate the logic in a lib?
[Edit]
If I change this segment
load 'log'
retVal = "${number}"
to this:
def matcher = readFile('log') =~ '^number=(.+)'
retVal=matcher ? matcher[0][1] : null
it works. But I am just curious why the previous version doesn't.
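A plausible explanation (my reading, not from the original thread): `load` evaluates the file and puts `number` into the binding of the script that ran it. Inside a shared library the step is compiled into its own class, `getNumber`, and `number` is not a property of that class — which is exactly the name the MissingPropertyException reports. If the Pipeline Utility Steps plugin is available, `readProperties` sidesteps the binding issue entirely; a sketch of the library step rewritten that way:

```groovy
// getNumber.groovy — assumes the pipeline-utility-steps plugin is installed
def call() {
    if (fileExists('log')) {
        // readProperties parses key=value lines into a Map,
        // so no variable has to land in any script binding.
        def props = readProperties file: 'log'
        return props['number']
    }
    return null
}
```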

Run the tasks in a node twice

I am using the Jenkins pipeline plugin to test my project. I have a Groovy script of the following form:
node {
    stage("checkout") {
        //some other code
    }
    stage("build") {
        //some other code
    }
    stage("SonarQube Analysis") {
        //some other code
    }
}
When I have a feature branch that I want to merge into master, I would like to first do this process on master, then on the feature and see if the SonarQube analysis is worse on feature.
I would like something of this sort:
def codeCoverageMaster = node("master")
def codeCoverageFeature = node("feature/someFeature")
if (codeCoverageFeature < codeCoverageMaster) {
    currentBuild.result = "ERROR"
}
Is something like this possible?
You do it by defining a function which contains your script and returns the SonarQube result; then you call the function twice and compare the results:
def runBranch(String path) {
    def sonarQubeRes
    node {
        stage("checkout") {
            //some other code
            // Use the path supplied to this function
        }
        stage("build") {
            //some other code
        }
        stage("SonarQube Analysis") {
            //some other code
        }
    }
    return sonarQubeRes
}
def codeCoverageMaster = runBranch("master")
def codeCoverageFeature = runBranch("feature/someFeature")
if (codeCoverageFeature < codeCoverageMaster) {
    currentBuild.result = "ERROR"
}
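The "//some other code" parts are left for you to fill in. As one hypothetical way to produce sonarQubeRes, the analysis stage could write a metric to a file that the function then reads (the repo URL and report file name below are assumptions, not part of the original answer):

```groovy
def runBranch(String branch) {
    def sonarQubeRes
    node {
        stage("checkout") {
            // hypothetical repository URL; branch comes from the caller
            git url: 'https://example.com/repo.git', branch: branch
        }
        stage("SonarQube Analysis") {
            // assume the analysis step leaves a coverage figure in coverage.txt
            sonarQubeRes = readFile('coverage.txt').trim().toBigDecimal()
        }
    }
    return sonarQubeRes
}
```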
