How to get Jenkins credentials variable in all stages of my Jenkins Declarative Pipeline

How do I make a Jenkins credentials variable, e.g. "mysqlpassword", accessible to all stages of my Jenkins Declarative Pipeline?
The below code snippet works fine and prints my credentials.
node {
    stage('Getting Database Credentials') {
        withCredentials([usernamePassword(credentialsId: 'mysql_creds', passwordVariable: 'mysqlpassword', usernameVariable: 'mysqlusername')]) {
            creds = "\nUsername: ${mysqlusername}\nPassword: ${mysqlpassword}\n"
        }
        println creds
    }
}
How can I incorporate the above code into my current pipeline so that the mysqlusername and mysqlpassword variables are accessible to all stages of the pipeline script, i.e. globally?
My pipeline script layout looks like below:
pipeline { // indicates the job is written in Declarative Pipeline
    agent { label 'Prod_Slave' }
    environment {
        STAGE_2_EXECUTED = "0"
    }
    stages {
        stage("First Stage") {
            steps {
                echo "First called in pipeline"
                script {
                    echo "Inside script of First stage"
                }
            }
        } // end of first stage
        stage("Second Stage") {
            steps {
                echo "Second stage called in pipeline"
                script {
                    echo "Inside script of Second stage"
                }
            }
        } // end of second stage
    } // end of stages
} // end of pipeline
I'm on the latest version of Jenkins.
Requesting solutions. Thank you.

You can do something like this: define your variables under environment { } and use them throughout your stages.
pipeline {
    agent any
    environment {
        // More detail:
        // https://jenkins.io/doc/book/pipeline/jenkinsfile/#usernames-and-passwords
        MYSQL_CRED = credentials('mysql_creds')
    }
    stages {
        stage('Run Some Command') {
            steps {
                echo "Running some command"
                sh '<some-command> -u $MYSQL_CRED_USR -p $MYSQL_CRED_PSW'
            }
        }
    }
}
Variables defined under environment are global to all stages, so they can be used throughout the Jenkinsfile. For a usernamePassword credential, Jenkins also generates the MYSQL_CRED_USR and MYSQL_CRED_PSW variables automatically. More information about credentials() can be found in the official documentation.
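For example, a minimal sketch (reusing the 'mysql_creds' credential ID and agent label from the question; the mysql command itself is only illustrative) showing that the credential is visible in every stage:
pipeline {
    agent { label 'Prod_Slave' }
    environment {
        MYSQL_CRED = credentials('mysql_creds')
    }
    stages {
        stage("First Stage") {
            steps {
                // MYSQL_CRED_USR and MYSQL_CRED_PSW are generated automatically;
                // single quotes let the shell expand them so Jenkins can mask the secret
                sh 'mysql -u $MYSQL_CRED_USR -p$MYSQL_CRED_PSW -e "SELECT 1"'
            }
        }
        stage("Second Stage") {
            steps {
                // the same variables are still available here
                sh 'echo "connecting as $MYSQL_CRED_USR"'
            }
        }
    }
}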

Related

Running script (stash) prior to parallel stages being invoked

I have a parallel stage setup, and would like to know if it's possible to run a script prior to the nested stages, so something like this:
stage('E2E-PR-CYPRESS') {
    when {
        allOf {
            expression {
                return fileExists("cypress.json")
            }
            branch "PR-*"
        }
    }
    steps {
        script {
            stash name: 'cypress-dir', includes: 'cypress/**/*'
        }
    }
    parallel {
        stage('Cypress Tests 1') {
            agent { label 'aws_micro_slave_e2e' }
            options { skipDefaultCheckout() }
            steps {
                runE2eTests()
            }
        }
        stage('Cypress Tests 2') {
            agent { label 'aws_micro_slave_e2e' }
            options { skipDefaultCheckout() }
            steps {
                runE2eTests()
            }
        }
    }
    post {
        always {
            e2eAfterCypressRun(this, true)
        }
    }
}
I know the above is wrong; I get the error: Only one of "matrix", "parallel", "stages", or "steps" allowed for stage "E2E-PR-CYPRESS".
I already have the stash script in a setup stage at the beginning of my pipeline, but I'd like to be able to restart from the stage above on Jenkins, so I need the stash part in this stage, because the parallel stages need to unstash the contents.
Updated Answer:
After playing a bit with the Restart from a Stage option, there seems to be a feature designed exactly for your needs, called Preserving stashes for Use with Restarted Stages:
Normally, when you run the stash step in your Pipeline, the resulting
stash of artifacts is cleared when the Pipeline completes, regardless
of the result of the Pipeline. Since stash artifacts aren’t accessible
outside of the Pipeline run that created them, this has not created
any limitations on usage. But with Declarative stage restarting, you
may want to be able to unstash artifacts from a stage which ran before
the stage you’re restarting from.
To enable this, there is a job property that allows you to configure a
maximum number of completed runs whose stash artifacts should be
preserved for reuse in a restarted run. You can specify anywhere from
1 to 50 as the number of runs to preserve.
This job property can be configured in your Declarative Pipeline’s options section, as below:
options {
    preserveStashes()
    // or
    preserveStashes(buildCount: 5)
}
This built-in feature is exactly what you need to solve your issue without any special modifications to your code, as it will allow you to rerun the pipeline from any stage and still use the files that were previously stashed.
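As a rough sketch of how this fits together (the stage names, label, and the runE2eTests() shared-library step are taken from your example; the rest is an assumption about your setup):
pipeline {
    agent { label 'aws_micro_slave_e2e' }
    options {
        // keep stashes from up to 5 completed runs available to restarted stages
        preserveStashes(buildCount: 5)
    }
    stages {
        stage('Setup') {
            steps {
                stash name: 'cypress-dir', includes: 'cypress/**/*'
            }
        }
        stage('Cypress Tests 1') {
            options { skipDefaultCheckout() }
            steps {
                // still available when the run is restarted from this stage
                unstash 'cypress-dir'
                runE2eTests()
            }
        }
    }
}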
Original Answer:
You can actually achieve this quite simply using the scripted syntax for the parallel command, which will also let you avoid the duplicated code in the parallel stages.
parallel: Execute in parallel
Takes a map from branch names to closures and an optional argument failFast which will terminate all branches upon a failure in any other branch:
parallel firstBranch: {
    // do something
}, secondBranch: {
    // do something else
},
failFast: true|false
In your case it can look like:
stage('E2E-PR-CYPRESS') {
    when {
        allOf {
            expression {
                return fileExists("cypress.json")
            }
            branch "PR-*"
        }
    }
    steps {
        script {
            stash name: 'cypress-dir', includes: 'cypress/**/*'
            // Define the parallel execution stages
            def stages = ['Cypress Tests 1', 'Cypress Tests 2']
            // Create the parallel executions and run them
            parallel stages.collectEntries {
                ["Running ${it}": {
                    node('aws_micro_slave_e2e') {
                        // a scripted node block performs no default checkout,
                        // so skipDefaultCheckout() is not needed here
                        runE2eTests()
                    }
                }]
            }
        }
    }
    post {
        always {
            e2eAfterCypressRun(this, true)
        }
    }
}
This way you can easily add more parallel branches by updating the stages list, or even receive it as an input parameter, as in the sketch below. In addition, you can create the parallel executions by different labels or test suites instead of by stage name.
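For instance, a hypothetical variant that receives the suite list from a string parameter (the TEST_SUITES parameter name is illustrative, not from the original question):
// assumes a string parameter such as TEST_SUITES = 'suite-a,suite-b,suite-c'
def suites = params.TEST_SUITES.tokenize(',')
parallel suites.collectEntries { suite ->
    ["Running ${suite}": {
        node('aws_micro_slave_e2e') {
            unstash 'cypress-dir'
            runE2eTests()
        }
    }]
}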
You can add a Prepare stage at the top like this:
stages {
    stage('Preparation') {
        when {
            allOf {
                expression {
                    return fileExists("cypress.json")
                }
                branch "PR-*"
            }
        }
        steps {
            script {
                stash name: 'cypress-dir', includes: 'cypress/**/*'
            }
        }
    }
    stage('E2E-PR-CYPRESS') {
        parallel {
            stage('Cypress Tests 1') {
                agent { label 'aws_micro_slave_e2e' }
                options { skipDefaultCheckout() }
                steps {
                    runE2eTests()
                }
            }
            stage('Cypress Tests 2') {
                agent { label 'aws_micro_slave_e2e' }
                options { skipDefaultCheckout() }
                steps {
                    runE2eTests()
                }
            }
        }
    }
}
post {
    always {
        e2eAfterCypressRun(this, true)
    }
}
An out-of-the-box concept
I propose splitting the job into two parts, taking the following into consideration:
You currently use the EC2 plugin, as the agents are EC2 instances
You want to run the parallel stages with the same stashed content ready to unstash
Create Jenkins pipeline job 1:
This job will check out the workspace with any type of agent
Create a Packer JSON to build a customised AMI for the EC2
The customised AMI will stash the contents and move them to a directory that will appear on the EC2 instance when the agent is built
Output the AMI ID, then run a Groovy job to update the EC2 plugin's AMI ID with the customised AMI ID, temporarily setting the AMI in memory on Jenkins
pipeline {
    agent {
        docker {
            image 'test-container' // assumed image name, from the original placeholder
        }
    }
    options {
        buildDiscarder(
            logRotator(
                numToKeepStr: '10',
                artifactNumToKeepStr: '10'
            )
        )
        ansiColor('xterm')
        gitConnection("git")
    }
    stages {
        stage('Run Stash Cypress Functional Test') {
            steps {
                dir('functional-test') {
                    // develop branch is the canary build, all other branches are stable builds
                    script {
                        sh """
                        # script to stash cypress tests
                        """
                    }
                }
            }
        }
        stage('Functional Test AMI Build') {
            steps {
                dir('functional-test/packer') {
                    withAWS(role: 'PackerBuild', roleAccount: '123456789012', roleSessionName: 'Jenkins-Workflow-FunctionalTest-Packer') {
                        script {
                            sh """
                            # the packer json script will need to copy contents from the workspace and run the script to stash content
                            # the packer json script will need to capture the new AMI ID
                            # https://discuss.devopscube.com/t/how-to-get-the-ami-id-after-a-packer-build/36
                            # https://www.packer.io/docs/post-processors/manifest
                            packer validate FunctionalTestPacker.json
                            packer build -debug FunctionalTestPacker.json
                            # grab the AMI ID and export it as a Jenkins env variable
                            """
                        }
                    }
                }
            }
        }
        stage('Run groovy script to update AMI ID on EC2 plugin') {
            steps {
                dir('<groovy-job-dir>') { // placeholder path
                    script {
                        sh """
                        # run the groovy job to update the AMI on the Jenkins EC2 plugin
                        # https://gist.github.com/vrivellino/97954495938e38421ba4504049fd44ea
                        """
                    }
                }
            }
        }
        stage('Kickoff Functional Test Deploy') {
            // pipeline checkbox parameter; when ticked it will automatically kick off the functional test pipeline
            when {
                expression { params.RUN_TESTS.toBoolean() }
            }
            steps {
                script {
                    env.branch = params.BRANCH
                    sh """
                    echo "Branch is ${branch}"
                    """
                }
                build job: 'workflow/CypressFunctionaTestDeployAndRun',
                    parameters: [
                        string(name: 'BRANCH', value: env.branch)
                    ],
                    wait: false
            }
        }
    }
    post {
        always {
            cleanWs()
        }
    }
}
Create Jenkins pipeline job 2:
This job will create the EC2 agents via the plugin from the customised AMI produced by pipeline job 1
This means your agents will have the same workspace ready to unstash, so you can execute a parallel run
You could also move much of the user-data script that lives in the EC2 plugin into the customised AMI build, cutting down the time each EC2 agent needs before it is ready to execute
pipeline {
    agent none // stage-level agents are assigned below
    stages {
        stage('E2E-PR-CYPRESS') {
            when {
                allOf {
                    expression {
                        return fileExists("cypress.json")
                    }
                    branch "PR-*"
                }
            }
            parallel {
                stage('Cypress Tests 1') {
                    agent { label 'aws_micro_slave_e2e' }
                    options { skipDefaultCheckout() }
                    steps {
                        runE2eTests()
                    }
                }
                stage('Cypress Tests 2') {
                    agent { label 'aws_micro_slave_e2e' }
                    options { skipDefaultCheckout() }
                    steps {
                        runE2eTests()
                    }
                }
            }
        }
    }
    post {
        always {
            e2eAfterCypressRun(this, true)
        }
    }
}

Jenkins - translate withCredentials into declarative style syntax

I have a Jenkins pipeline which is written using the scripted syntax, and I need to convert it into a new pipeline using the declarative style.
This is my first project with Jenkins and I am stuck on how to translate the withCredentials syntax into declarative Jenkins.
The original (scripted) pipeline looks something like this:
stage('stage1') {
    steps {
        script {
            withCredentials([usernamePassword(credentialsId: CredentialsAWS, passwordVariable: 'AWS_SECRET_ACCESS_KEY', usernameVariable: 'AWS_ACCESS_KEY_ID')]) {
                parallel (
                    something: {
                        sh 'some commands where there is no reference to the above credentials'
                    }
                )
            }
        }
    }
}
So far I have set the credentials in question as an environment variable, but since in the original pipeline they are never referenced in the command and withCredentials just wraps around it, I am not sure how to achieve the same result. Any idea how to do this?
First of all, take a look at the official documentation.
For your case, the pipeline will look like:
pipeline {
    agent any
    environment {
        YOUR_CRED = credentials('CredentialsAWS')
    }
    stages {
        stage('Call username and password from YOUR_CRED') {
            steps {
                echo "To call username use ${YOUR_CRED_USR}"
                echo "To call password use ${YOUR_CRED_PSW}"
            }
        }
    }
}
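Note that withCredentials is itself an ordinary step, so if you prefer a closer translation of the original stage, it can also be used directly inside a declarative steps block; a minimal sketch (quoting the credentials ID as a string, since the original used a Groovy variable):
pipeline {
    agent any
    stages {
        stage('stage1') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'CredentialsAWS', passwordVariable: 'AWS_SECRET_ACCESS_KEY', usernameVariable: 'AWS_ACCESS_KEY_ID')]) {
                    // the two variables are injected into the environment of every
                    // step in this block, even if the commands never reference them
                    sh 'some commands that read the credentials from the environment'
                }
            }
        }
    }
}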
Jenkinsfile (Declarative Pipeline), showing how to pass a parameter to sh using a single-quoted Groovy string, so that the shell, not Groovy, expands the variable:
pipeline {
    agent any
    parameters {
        string(name: 'STATEMENT', defaultValue: 'hello; ls /', description: 'What should I say?')
    }
    stages {
        stage('Example') {
            steps {
                /* CORRECT */
                sh('echo ${STATEMENT}')
            }
        }
    }
}
For more detail, see the official Jenkins Pipeline documentation on handling parameters.

How to enforce different stages in pipeline to run on the same Jenkins agent?

My Jenkinsfile looks like this:
pipeline {
    agent { label "master" }
    parameters {
        string(name: "host", defaultValue: "ci_agent_001 || ci_agent_002")
    }
    stages {
        stage("build") {
            agent { label "${params.host}" }
            steps {
                script {
                    sh "./script.py --build"
                }
            }
        }
        stage("deploy") {
            agent { label "${params.host}" }
            steps {
                script {
                    sh "./script.py --deploy"
                }
            }
        }
        stage("test") {
            agent { label "${params.host}" }
            steps {
                script {
                    sh "./script.py --test"
                }
            }
        }
    }
}
Each Python script handles all the logic I need. However, the stages must all run on the same agent, and if I allow more than one option in the host parameter, I can't guarantee that the agent stage I gets will be the same agent stage II gets (and without downtime, i.e. I can't allow another job to use that agent between my stages).
Can I specify an agent to the whole pipeline?
Yes you can. In fact, you can free your master from running pipelines at all:
Replace agent { label "master" } with agent { label params.host }
Remove all the agent { label "${params.host}" } lines inside the individual stages (you also don't need the script blocks, as you can run sh steps directly within the steps block), as in the sketch below.
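A sketch of the resulting Jenkinsfile (parameters copied from the question):
pipeline {
    // one node is allocated for the whole run, so every stage
    // shares the same agent and workspace
    agent { label params.host }
    parameters {
        string(name: "host", defaultValue: "ci_agent_001 || ci_agent_002")
    }
    stages {
        stage("build") {
            steps {
                sh "./script.py --build"
            }
        }
        stage("deploy") {
            steps {
                sh "./script.py --deploy"
            }
        }
        stage("test") {
            steps {
                sh "./script.py --test"
            }
        }
    }
}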
If later on you decide you don't want to assign a single node to all stages, you'll have to use scripted pipeline inside the declarative pipeline to group the stages that should run on the same node:
stage("stages that share the same node") {
agent { label params.host }
steps {
script {
stage("$NODE_NAME - build") {
sh "./script.py --build"
}
stage("$NODE_NAME - deploy") {
sh "./script.py --deploy"
}
stage("$NODE_NAME - test") {
sh "./script.py --test"
}
}
}
}
stage("look at me I might be on another node") {
agent { label params.host }
steps {
echo NODE_NAME
}
}

Equivalent block to environment in scripted pipeline with Jenkins

I have the following pipeline:
pipeline {
    agent any
    environment {
        branch = 'master'
        scmUrl = 'ssh://git@myrepo.git'
        serverPort = '22'
    }
    stages {
        stage('Stage 1') {
            steps {
                sh '/var/jenkins_home/contarpalabras.sh'
            }
        }
    }
}
I want to change the pipeline to a scripted pipeline in order to use try/catch blocks and have better error management. However, I could not find the equivalent of the environment block in the official documentation.
You can use the withEnv block, like this:
node {
    withEnv(['DISABLE_AUTH=true',
             'DB_ENGINE=sqlite']) {
        stage('Build') {
            sh 'printenv'
        }
    }
}
This is also covered in the official documentation: https://jenkins.io/doc/pipeline/tour/environment/#
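Since your motivation was try/catch error handling, here is a rough scripted sketch combining withEnv with a try/catch, reusing the values from your declarative environment block:
node {
    withEnv(['branch=master',
             'scmUrl=ssh://git@myrepo.git',
             'serverPort=22']) {
        stage('Stage 1') {
            try {
                sh '/var/jenkins_home/contarpalabras.sh'
            } catch (err) {
                // react to the failure, then rethrow to mark the build failed
                echo "Stage 1 failed: ${err}"
                throw err
            }
        }
    }
}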

Cannot define variable in pipeline stage

I'm trying to create a declarative Jenkins pipeline script but having issues with simple variable declaration.
Here is my script:
pipeline {
    agent none
    stages {
        stage("first") {
            def foo = "foo" // fails with "WorkflowScript: 5: Expected a step # line 5, column 13."
            sh "echo ${foo}"
        }
    }
}
However, I get this error:
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 5: Expected a step # line 5, column 13.
def foo = "foo"
^
I'm on Jenkins 2.7.4 and Pipeline 2.4.
The Declarative model for Jenkins Pipelines has a restricted subset of syntax that it allows in the stage blocks - see the syntax guide for more info. You can bypass that restriction by wrapping your steps in a script { ... } block, but as a result, you'll lose validation of syntax, parameters, etc within the script block.
I think the error is not coming from the specified line but from the first 3 lines. Try this instead:
node {
    stage("first") {
        def foo = "foo"
        sh "echo ${foo}"
    }
}
I think you had some extra lines that are not valid...
From the Declarative Pipeline model documentation, it seems that you have to use an environment declaration block to declare your variables, e.g.:
pipeline {
    environment {
        FOO = "foo"
    }
    agent any // an agent is required here, since the sh step needs a workspace
    stages {
        stage("first") {
            steps {
                sh "echo ${FOO}"
            }
        }
    }
}
Agree with @Pom12 and @abayer. To complete the answer, you need to add a script block.
Try something like this:
pipeline {
    agent any
    environment {
        ENV_NAME = "${env.BRANCH_NAME}"
    }
    // ----------------
    stages {
        stage('Build Container') {
            steps {
                echo 'Building Container..'
                script {
                    // the original answer compared an undefined ENVIRONMENT_NAME;
                    // comparing the branch name matches the ENV_NAME initialisation above
                    if (env.BRANCH_NAME == 'development') {
                        ENV_NAME = 'Development'
                    } else if (env.BRANCH_NAME == 'release') {
                        ENV_NAME = 'Production'
                    }
                }
                echo 'Building Branch: ' + env.BRANCH_NAME
                echo 'Build Number: ' + env.BUILD_NUMBER
                echo 'Building Environment: ' + ENV_NAME
                echo "Running your service with environment ${ENV_NAME} now"
            }
        }
    }
}
In Jenkins 2.138.3 there are two different types of pipelines: Declarative and Scripted.
"Declarative pipelines are a new extension of the pipeline DSL (basically a pipeline script with only one step, a pipeline step with arguments, called directives; these directives should follow a specific syntax). The point of this new format is that it is more strict and therefore should be easier for those new to pipelines, allow for graphical editing and much more.
Scripted pipelines are the fallback for advanced requirements."
jenkins pipeline: agent vs node?
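As a minimal illustration of the difference, here is the same trivial job in each style (these would live in separate Jenkinsfiles):
// Declarative: strict, directive-based structure
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'Hello from a declarative pipeline'
            }
        }
    }
}
// Scripted: plain Groovy, maximum flexibility
node {
    stage('Hello') {
        echo 'Hello from a scripted pipeline'
    }
}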
Here is an example of using environment and global variables in a Declarative Pipeline. From what I can tell, environment variables are static after they are set.
def browser = 'Unknown'
pipeline {
    agent any
    environment {
        // Use the Pipeline Utility Steps plugin to read information from pom.xml into env variables
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
    }
    stages {
        stage('Example') {
            steps {
                script {
                    browser = sh(returnStdout: true, script: 'echo Chrome')
                }
            }
        }
        stage('SNAPSHOT') {
            when {
                expression {
                    return !env.JOB_NAME.equals("PROD") && !env.VERSION.contains("RELEASE")
                }
            }
            steps {
                echo "SNAPSHOT"
                echo "${browser}"
            }
        }
        stage('RELEASE') {
            when {
                expression {
                    return !env.JOB_NAME.equals("TEST") && !env.VERSION.contains("RELEASE")
                }
            }
            steps {
                echo "RELEASE"
                echo "${browser}"
            }
        }
    } // end of stages
} // end of pipeline
You are using a Declarative Pipeline which requires a script-step to execute Groovy code. This is a huge difference compared to the Scripted Pipeline where this is not necessary.
The official documentation says the following:
The script step takes a block of Scripted Pipeline and executes that
in the Declarative Pipeline.
pipeline {
    agent any // the sh step below needs an agent with a workspace
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}
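If the value computed inside the script block is needed in later stages, assigning it to env turns it into a global (string) variable for the rest of the run; a small sketch:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    // env.* assignments survive beyond the script block
                    env.FOO = "foo"
                }
            }
        }
        stage("second") {
            steps {
                // still visible here, including in shell steps
                sh 'echo $FOO'
            }
        }
    }
}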
You can define the variable globally, but when using it you must write it inside a script block:
def foo="foo"
pipeline {
agent none
stages {
stage("first") {
script{
sh "echo ${foo}"
}
}
}
}
Try this declarative pipeline; it works:
pipeline {
    agent any
    stages {
        stage("first") {
            steps {
                script {
                    def foo = "foo"
                    sh "echo ${foo}"
                }
            }
        }
    }
}
