Jenkins: passing AmazonWebServicesCredentialsBinding to a slave node

I have a Jenkins pipeline which needs to run on a slave node. I currently have issues passing variables set by the withCredentials plugin. When I try to use them on the slave node they are empty, but they work on the master.
Here is the pipeline snippet.
#!groovy
@Library('sharedPipelineLib@master') _
pipeline {
    agent {
        node { label 'jenkins-slave-docker' }
    }
    options {
        skipDefaultCheckout(true)
    }
    environment {
        sonar = credentials('SONAR')
    }
    stages {
        stage('Checkout') {
            steps {
                cleanWs()
                script {
                    checkout scm
                }
            }
        }
        stage('Deploy backend') {
            steps {
                script {
                    withCredentials([
                        [
                            $class           : 'AmazonWebServicesCredentialsBinding',
                            credentialsId    : 'AWS_ACCOUNT_ID_DEV',
                            accessKeyVariable: 'AWS_ACCESS_KEY_ID_DEV',
                            secretKeyVariable: 'AWS_SECRET_ACCESS_KEY_DEV'
                        ],
                        [
                            $class           : 'AmazonWebServicesCredentialsBinding',
                            credentialsId    : 'AWS_ACCOUNT_ID_DNS',
                            accessKeyVariable: 'AWS_ACCESS_KEY_ID_DNS',
                            secretKeyVariable: 'AWS_SECRET_ACCESS_KEY_DNS'
                        ]
                    ]) {
                        sh '''
                            echo "$AWS_ACCESS_KEY_ID_DEV\\n$AWS_SECRET_ACCESS_KEY_DEV\\n\\n" | aws configure --profile profile_705229686812
                            echo "$AWS_ACCESS_KEY_ID_DNS\\n$AWS_SECRET_ACCESS_KEY_DNS\\n\\n" | aws configure --profile profile_417752960097
                        '''
                    }
                }
            }
        }
    }
}
And the log
[Pipeline] withCredentials
Masking supported pattern matches of $AWS_ACCESS_KEY_ID_DEV or $AWS_SECRET_ACCESS_KEY_DEV or $AWS_SECRET_ACCESS_KEY_DNS or $AWS_ACCESS_KEY_ID_DNS
[Pipeline] {
[Pipeline] sh
echo '\n\n\n'
aws configure --profile profile_705229686812
AWS Access Key ID [None]: AWS Secret Access Key [None]:
EOF when reading a line

The issue was the echo command again. I had to use printf instead, because echo appends a newline (and its escape handling varies by shell), which makes the configure prompts fail.
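A minimal sketch of the difference (placeholder key values, not real credentials):

```shell
#!/bin/sh
# Placeholder values for illustration only.
KEY="AKIAEXAMPLE"
SECRET="secretEXAMPLE"

# printf expands the \n escapes itself and emits exactly the bytes requested:
# key, newline, secret, newline, then two empty lines for the region and
# output-format prompts of `aws configure`.
printf '%s\n%s\n\n\n' "$KEY" "$SECRET"

# With echo, "\n" handling is shell-dependent (bash prints it literally,
# dash expands it), so the configure prompts may receive one mangled line
# followed by EOF - exactly the "EOF when reading a line" error above.
echo "$KEY\n$SECRET\n\n"
```

So the working pipeline line becomes `printf '%s\n%s\n\n\n' "$AWS_ACCESS_KEY_ID_DEV" "$AWS_SECRET_ACCESS_KEY_DEV" | aws configure --profile profile_705229686812`.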


How to set a variable in sh in a Jenkins pipeline

pipeline {
    agent any
    stages {
        stage('a') {
            steps {
                sh """
                    #!/bin/bash
                    a="test"
                """
            }
        }
        stage('b') {
            steps {
                sh """
                    #!/bin/bash
                    echo ${a}   # => I want this to print the value set in stage 'a' ("test")
                """
            }
        }
    }
}
Hi, I want to use the sh variable from stage 'a' in the stage 'b' steps, but I don't want to use the readFile function. Please tell me how to configure the Jenkins pipeline.
Thank you.
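One possible approach (a sketch, not verified against your setup): a shell variable dies with its own sh step, so capture the value with returnStdout into a Groovy variable declared outside the stages instead.

```groovy
// Sketch: a Groovy variable declared outside the stages survives between
// them; shell variables do not outlive the sh step that set them.
def a

pipeline {
    agent any
    stages {
        stage('a') {
            steps {
                script {
                    // capture the shell output instead of assigning in-shell
                    a = sh(script: 'echo "test"', returnStdout: true).trim()
                }
            }
        }
        stage('b') {
            steps {
                echo "${a}"   // prints: test
            }
        }
    }
}
```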

Jenkins Pipeline Script: How to get the return value from a shell script

I have a Jenkins pipeline where I am executing different scripts at different stages. However, in one stage I want to get the output of the stage into a variable, and then pass that variable as input to the next stage. Here is my code in the Jenkinsfile:
timestamps {
    node('cf_slave') {
        checkout scm
        stage('Download HA image from GSA') {
            withCredentials(usernamePassword(credentialsId: 'ssc4icp_GSA', usernameVariable: 'GSA_USERNAME', passwordVariable: 'GSA_PASSWORD')
            {
                environment {
                    script {
                        OUTPUT = """${sh(
                            returnStdout: true,
                            script: 'bash jenkins/try_install.sh $VAR_ABC'
                        )}"""
                        echo $OUTPUT
                    }
                }
            }
        }
    }
}
Here I am getting a syntax error. I want to capture the output in the OUTPUT variable and pass it to the next stage. Please help me do this the correct way.
When referencing a variable outside of a string you should not use a dollar sign ($). Also, `environment` and `script` are Declarative Pipeline directives and do not belong in a Scripted Pipeline. The code should be (including the changes suggested by Matt):
timestamps {
    node('cf_slave') {
        checkout scm
        stage('Download HA image from GSA') {
            withCredentials([usernamePassword(credentialsId: 'ssc4icp_GSA', usernameVariable: 'GSA_USERNAME', passwordVariable: 'GSA_PASSWORD')]) {
                OUTPUT = sh returnStdout: true,
                    script: "bash jenkins/try_install.sh $VAR_ABC"
                echo OUTPUT
            }
        }
    }
}

How to get Jenkins credentials variable in all stages of my Jenkins Declarative Pipeline

How do I make a Jenkins credentials variable, i.e. "mysqlpassword", accessible to all stages of my Jenkins Declarative Pipeline?
The below code snippet works fine and prints my credentials.
node {
    stage('Getting Database Credentials') {
        withCredentials([usernamePassword(credentialsId: 'mysql_creds', passwordVariable: 'mysqlpassword', usernameVariable: 'mysqlusername')]) {
            creds = "\nUsername: ${mysqlusername}\nPassword: ${mysqlpassword}\n"
        }
        println creds
    }
}
How can I incorporate the above code into my current pipeline so that the mysqlusername and mysqlpassword variables are accessible to all stages across the pipeline script, i.e. globally?
My pipeline script layout looks like below:
pipeline { // indicate the job is written in Declarative Pipeline
    agent { label 'Prod_Slave' }
    environment {
        STAGE_2_EXECUTED = "0"
    }
    stages {
        stage ("First Stage") {
            steps {
                echo "First called in pipeline"
                script {
                    echo "Inside script of First stage"
                }
            }
        } // end of first stage
        stage ("Second Stage") {
            steps {
                echo "Second stage called in pipeline"
                script {
                    echo "Inside script of Second stage"
                }
            }
        } // end of second stage
    } // end of stages
} // end of pipeline
I'm on the latest version of Jenkins.
Requesting solutions. Thank you.
You can do something like this. Here, you define your variables under environment { } and use them throughout your stages.
pipeline {
    agent any
    environment {
        // More detail:
        // https://jenkins.io/doc/book/pipeline/jenkinsfile/#usernames-and-passwords
        MYSQL_CRED = credentials('mysql_creds')
    }
    stages {
        stage('Run Some Command') {
            steps {
                echo "Running some command"
                sh '<some-command> -u $MYSQL_CRED_USR -p $MYSQL_CRED_PSW'
            }
        }
    }
}
Variables defined under environment are global to all stages, so they can be used throughout the Jenkinsfile.
More information about credentials() is in the official documentation.

How to use Terraform Plan and Apply in different Jenkins pipeline stages

I am working on a declarative Jenkins pipeline for Terraform deployments. I want to have terraform init / select workspace / plan in one stage, ask for approval in another stage, and then do the apply in yet another stage. I have the agent at the top set to none, and I use a Kubernetes agent with a Docker image we created that has the packages we need for the stages; I declare that image in each stage. When I execute the pipeline, I get an error that I need to reinitialize Terraform in the apply stage, even though I initialized in the init/plan stage. I figure this is the nature of the stages running on different nodes.
I have it working by doing init / plan and stashing the plan. In the apply stage, it unstashes the plan, calls init / select workspace again, and then finally applies the unstashed plan.
I realize I could set the agent at the top, but according to the Jenkins documentation that is bad practice, as waiting for user input would block an executor.
I feel like there has to be a way to do this more elegantly. Any suggestions?
Here's my code:
def repositoryURL = env.gitlabSourceRepoHttpUrl != null && env.gitlabSourceRepoHttpUrl != "" ? env.gitlabSourceRepoHttpUrl : env.RepoURL
def repositoryBranch = env.gitlabTargetBranch != null && env.gitlabTargetBranch != "" ? env.gitlabTargetBranch : env.RepoBranch
def notificationEmail = env.gitlabUserEmail != null && env.gitlabUserEmail != "" ? env.gitlabUserEmail : env.Email
def projectName = env.ProjectName
def deployAccountId = env.AccountId
pipeline {
    agent none
    stages {
        stage("Checkout") {
            agent any
            steps {
                git branch: "${repositoryBranch}", credentialsId: '...', url: "${repositoryURL}"
                stash name: 'tf', useDefaultExcludes: false
            }
        }
        stage("Terraform Plan") {
            agent {
                kubernetes {
                    label 'myagent'
                    containerTemplate {
                        name 'cis'
                        image 'docker-local.myrepo.com/my-image:v2'
                        ttyEnabled true
                        command 'cat'
                    }
                }
            }
            steps {
                container('cis') {
                    unstash 'tf'
                    script {
                        sh "terraform init"
                        try {
                            sh "terraform workspace select ${deployAccountId}_${projectName}_${repositoryBranch}"
                        } catch (Exception e) {
                            sh "terraform workspace new ${deployAccountId}_${projectName}_${repositoryBranch}"
                        }
                        sh "terraform plan -out=${deployAccountId}_${projectName}_${repositoryBranch}_plan.tfplan -input=false"
                        stash includes: "*.tfplan", name: "tf-plan", useDefaultExcludes: false
                    }
                }
            }
            post {
                success {
                    echo "Terraform plan complete"
                }
                failure {
                    echo "Terraform plan failed"
                }
            }
        }
        stage ("Terraform Plan Approval") {
            agent none
            steps {
                script {
                    def userInput = input(id: 'confirm', message: 'Apply Terraform?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: 'Apply terraform', name: 'confirm']])
                }
            }
        }
        stage ("Terraform Apply") {
            agent {
                kubernetes {
                    label 'myagent'
                    containerTemplate {
                        name 'cis'
                        image 'docker-local.myrepo.com/my-image:v2'
                        ttyEnabled true
                        command 'cat'
                    }
                }
            }
            steps {
                container("cis") {
                    withCredentials([[
                        $class: 'AmazonWebServicesCredentialsBinding',
                        credentialsId: 'my-creds',
                        accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                        secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
                    ]]) {
                        script {
                            unstash "tf"
                            unstash "tf-plan"
                            sh "terraform init"
                            try {
                                sh "terraform workspace select ${deployAccountId}_${projectName}_${repositoryBranch}"
                            } catch (Exception e) {
                                sh "terraform workspace new ${deployAccountId}_${projectName}_${repositoryBranch}"
                            }
                            sh """
                                set +x
                                temp_role="\$(aws sts assume-role --role-arn arn:aws:iam::000000000000:role/myrole --role-session-name jenkinzassume)" > /dev/null 2>&1
                                export AWS_ACCESS_KEY_ID=\$(echo \$temp_role | jq .Credentials.AccessKeyId | xargs) > /dev/null 2>&1
                                export AWS_SECRET_ACCESS_KEY=\$(echo \$temp_role | jq .Credentials.SecretAccessKey | xargs) > /dev/null 2>&1
                                export AWS_SESSION_TOKEN=\$(echo \$temp_role | jq .Credentials.SessionToken | xargs) > /dev/null 2>&1
                                set -x
                                terraform apply ${deployAccountId}_${projectName}_${repositoryBranch}_plan.tfplan
                            """
                        }
                    }
                }
            }
        }
    }
}

Store current workspace path in variable for a later stage

I am using the declarative syntax for my pipeline, and would like to store the path to the workspace used in one of my stages, so that the same path can be used in a later stage.
I have seen that I can call pwd() to get the current directory, but how do I assign it to a variable to be used between stages?
EDIT
I have tried to do this by defining by own custom variable and using like so with the ws directive:
pipeline {
    agent { label 'master' }
    stages {
        stage('Build') {
            steps {
                script {
                    def workspace = pwd()
                }
                sh '''
                    npm install
                    bower install
                    gulp set-staging-node-env
                    gulp prepare-staging-files
                    gulp webpack
                '''
                stash includes: 'dist/**/*', name: 'builtSources'
                stash includes: 'config/**/*', name: 'appConfig'
                node('Protractor') {
                    dir('/opt/foo/deploy/') {
                        unstash 'builtSources'
                        unstash 'appConfig'
                    }
                }
            }
        }
        stage('Unit Tests') {
            steps {
                parallel (
                    "Jasmine": {
                        node('master') {
                            ws("${workspace}") {
                                sh 'gulp karma-tests-ci'
                            }
                        }
                    },
                    "Mocha": {
                        node('master') {
                            ws("${workspace}") {
                                sh 'gulp mocha-tests'
                            }
                        }
                    }
                )
            }
            post {
                success {
                    sh 'gulp combine-coverage-reports'
                    sh 'gulp clean-lcov'
                    publishHTML(target: [
                        allowMissing: false,
                        alwaysLinkToLastBuild: false,
                        keepAll: false,
                        reportDir: 'test/coverage',
                        reportFiles: 'index.html',
                        reportName: 'Test Coverage Report'
                    ])
                }
            }
        }
    }
}
In the Jenkins build console, I see this happens:
[Jasmine] Running on master in /var/lib/jenkins/workspace/_Pipelines_IACT-Jenkinsfile-UL3RGRZZQD3LOPY2FUEKN5XCY4ZZ6AGJVM24PLTO3OPL54KTJCEQ#2
[Pipeline] [Jasmine] {
[Pipeline] [Jasmine] ws
[Jasmine] Running in /var/lib/jenkins/workspace/_Pipelines_IACT-Jenkinsfile-UL3RGRZZQD3LOPY2FUEKN5XCY4ZZ6AGJVM24PLTO3OPL54KTJCEQ#2#2
The original workspace allocated in the first stage is actually _Pipelines_IACT-Jenkinsfile-UL3RGRZZQD3LOPY2FUEKN5XCY4ZZ6AGJVM24PLTO3OPL54KTJCEQ, so it doesn't look like it's working. What am I doing wrong here?
Thanks
pipeline {
    agent none
    stages {
        stage('Stage-One') {
            steps {
                echo 'StageOne.....'
                script { name = 'StackOverFlow' }
            }
        }
        stage('Stage-Two') {
            steps {
                echo 'StageTwo.....'
                echo "${name}"
            }
        }
    }
}
The above prints StackOverFlow in Stage-Two for echo "${name}".
You can also use sh "echo ${env.WORKSPACE}" to get the absolute path of the directory assigned to the build as a workspace.
You could put the value into an environment variable as described in this answer:
CURRENT_PATH = sh(
    script: 'pwd',
    returnStdout: true
).trim()
Which version are you running? Maybe you can just assign the WORKSPACE variable to an environment var?
Or did I totally misunderstand, and this is what you are looking for?
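A sketch of that idea (untested; reuses the node/ws pattern from the question): save env.WORKSPACE into a custom environment variable in the first stage, since env vars, unlike a def local inside a script { } block, remain visible in later stages.

```groovy
pipeline {
    agent { label 'master' }
    stages {
        stage('Build') {
            steps {
                script {
                    // env.* survives across stages; `def workspace` does not
                    env.BUILD_WORKSPACE = env.WORKSPACE
                }
            }
        }
        stage('Unit Tests') {
            steps {
                node('master') {
                    // reuse the exact workspace allocated in the Build stage
                    ws(env.BUILD_WORKSPACE) {
                        sh 'gulp karma-tests-ci'
                    }
                }
            }
        }
    }
}
```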
