I have a strange problem during a build. This is my initial Jenkinsfile:
pipeline {
    agent none
    environment {
        MAVEN_ARGS = "${HOST}"
    }
    stages {
        stage('Test step') {
            agent {
                docker {
                    image 'maven:3-alpine'
                }
            }
            steps {
                echo "${HOST}"
                echo "${env.HOST}"
                echo "${MAVEN_ARGS}"
            }
        }
    }
}
Why do the first two echoes print the correct value of the HOST variable while the last echo prints null?
Interestingly, when I delete the stage's agent section:
pipeline {
    agent any
    environment {
        MAVEN_ARGS = "${HOST}"
    }
    stages {
        stage('Test step') {
            steps {
                echo "${HOST}"
                echo "${env.HOST}"
                echo "${MAVEN_ARGS}"
            }
        }
    }
}
every single echo prints the HOST variable correctly, which is expected.
Thanks for the help :)
I'd like to set an env variable in one Stage and have it available in all subsequent Stages and Steps. Something like this:
pipeline {
    agent any  // agent added here; the original sketch omitted it, but Declarative requires one
    stages {
        stage('One') {
            steps {
                sh 'export MY_NAME=$(whoami)'
            }
        }
        stage('Two') {
            steps {
                sh 'echo "I am ${MY_NAME}"'
            }
        }
        stage('Three') {
            steps {
                sh 'echo "I am ${MY_NAME}"'
            }
        }
    }
}
Those sh steps seem to be independent of each other, and the exported var is not preserved even for the next Step, let alone Stage.
One way I can think of is to write the var to a shell file, like echo "FOLDER_CONTENT=$(ls -lh)", and then source it in the next Step, but again, I'll have to do the sourcing in every subsequent Step, which is suboptimal.
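For illustration, a minimal sketch of that file-based workaround, reusing the whoami example from above (the file name vars.sh is my own placeholder, and this only works while the steps share a workspace):
stage('One') {
    steps {
        // write the value into a file in the workspace
        sh 'echo "MY_NAME=$(whoami)" > vars.sh'
    }
}
stage('Two') {
    steps {
        // every later step has to source the file again before using the variable
        sh '. ./vars.sh && echo "I am ${MY_NAME}"'
    }
}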
Is there a better way to do that?
I was finally able to achieve it like so:
pipeline {
    agent any  // agent added here; the original snippet omitted it, but Declarative requires one
    stages {
        stage('One') {
            steps {
                script {
                    env.MY_NAME = sh(
                        script: 'whoami',
                        returnStdout: true
                    ).trim()
                }
            }
        }
        stage('Two') {
            steps {
                echo "I am ${MY_NAME}"
            }
        }
        stage('Three') {
            steps {
                sh 'echo "I am ${MY_NAME}"'
            }
        }
    }
}
I am working on a Groovy script for a Jenkins pipeline and am struggling to figure out how to pass a variable across stages when the variable is obtained over a remote SSH connection.
I found Example 1 and Example 2 on this site and I want to merge them together, as seen in "My attempt" below. Note that the content of the file on the remote server is 4. I'm trying to pass 4 to a_var.
Example 1: works fine. SSH connection. This reads the file and outputs its value to the Jenkins console.
def sshCredId = 'myid_cred'
def sshUser = 'myid'
def sshServer = 'myserver'

pipeline {
    agent { label 'docker-maven-slave' }
    stages {
        stage('one') {
            steps {
                script {
                    sshagent([sshCredId]) {
                        sh "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv"
                    }
                }
            }
        }
        stage('two') {
            steps {
                echo "something"
            }
        }
        stage('three') {
            steps {
                echo "do stuff"
            }
        }
    }
}
Example 2: works fine. This passes a parameter across stages
pipeline {
    agent {
        label 'docker-maven-slave'
    }
    parameters {
        string(name: 'a_var', defaultValue: '')
    }
    stages {
        stage("one") {
            steps {
                script {
                    tmp_param = sh(script: 'echo something', returnStdout: true).trim()
                    env.a_var = tmp_param
                }
            }
        }
        stage("two") {
            steps {
                echo "${env.a_var}"
            }
        }
    }
}
My attempt: stage two's output is null, but I'm expecting '4'.
def sshCredId = 'myid_cred'
def sshUser = 'myid'
def sshServer = 'myserver'

pipeline {
    agent { label 'docker-maven-slave' }
    parameters {
        string(name: 'a_var', defaultValue: 'nothing')
    }
    stages {
        stage('one') {
            steps {
                script {
                    tmp_param = sshagent([sshCredId]) {
                        sh "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv"
                    }
                    env.a_var = tmp_param
                }
            }
        }
        stage('two') {
            steps {
                echo "${env.a_var}"
            }
        }
        stage('three') {
            steps {
                echo "do stuff"
            }
        }
    }
}
Updated the answer based on comments and feedback from MayJoAnneBeth.
Try the snippet below:
sshagent([sshCredId]) {
    env.a_var = sh(script: "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv", returnStdout: true).trim()
}
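For context, this is roughly how that snippet slots into stage 'one' of the attempt above (same credential and host variables as before):
stage('one') {
    steps {
        script {
            sshagent([sshCredId]) {
                // capture the remote file's content instead of only printing it
                env.a_var = sh(
                    script: "ssh -o StrictHostKeyChecking=no ${sshUser}@${sshServer} cat /mydir/myfile.csv",
                    returnStdout: true
                ).trim()
            }
        }
    }
}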
I would like to configure an environment variable for my Jenkins pipeline, but dynamically based on an input parameter to the build. I'm trying to configure my pipeline to set the KUBECONFIG environment variable for kubectl commands.
My pipeline is as follows (slightly changed):
pipeline {
    parameters {
        choice(name: 'CLUSTER_NAME', choices: 'cluster1/cluster2')
    }
    stages {
        // Parallel stages since only one environment variable should be set, based on input
        stage('Set environment variable') {
            parallel {
                stage('Set cluster1') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster1"
                        }
                    }
                    environment {
                        KUBECONFIG = "~/kubeconf/cluster1.conf"
                    }
                    steps {
                        echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                    }
                }
                stage('Set cluster2') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster2"
                        }
                    }
                    environment {
                        KUBECONFIG = "~/kubeconf/cluster2.conf"
                    }
                    steps {
                        echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                    }
                }
            }
        }
        stage('Test env') {
            steps {
                sh "cat ${env.KUBECONFIG}"
            }
        }
    }
}
However, while the stage where I set the environment variable can print it, once I move to another stage I only get null.
Is there some way of sharing env variables between stages? Since I'd like kubectl to pick up the default KUBECONFIG environment variable (and not specify a file/context in my kubectl commands), it would be much easier to find a way to dynamically set the env variable.
I've seen the EnvInject plugin mentioned, but was unable to get it working for a pipeline, and was struggling with the documentation.
I guess that with environment {} you are setting the environment variable only for the stage where it is declared - it does not affect the environment of the pipeline itself. Set environment variables as shown below to affect the main context. This works for me.
pipeline {
    agent any
    parameters {
        choice(name: 'CLUSTER_NAME', choices: 'cluster1\ncluster2')
    }
    stages {
        // Parallel stages since only one environment variable should be set, based on input
        stage('Set environment variable') {
            parallel {
                stage('Set cluster1') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster1"
                        }
                    }
                    steps {
                        script {
                            env.KUBECONFIG = "~/kubeconf/cluster1.conf"
                            echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                        }
                    }
                }
                stage('Set cluster2') {
                    when {
                        expression {
                            params.CLUSTER_NAME == "cluster2"
                        }
                    }
                    steps {
                        script {
                            env.KUBECONFIG = "~/kubeconf/cluster2.conf"
                            echo "Using KUBECONFIG: ${env.KUBECONFIG}"
                        }
                    }
                }
            }
        }
        stage('Test env') {
            steps {
                sh "cat ${env.KUBECONFIG}"
            }
        }
    }
}
I have noticed that the Jenkins pipeline file (Jenkinsfile) has two syntaxes:
Declarative
Scripted
I have made the Declarative script work to specify the node that runs my task. However, I don't know how to convert my script to the Scripted syntax.
My Declarative Script
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'my-label' }
            steps {
                echo 'Building..'
                sh '''
                '''
            }
        }
        stage('Test') {
            agent { label 'my-label' }
            steps {
                echo 'Testing..'
                sh '''
                '''
            }
        }
        stage('Deploy') {
            agent { label 'my-label' }
            steps {
                echo 'Deploying....'
                sh '''
                '''
            }
        }
    }
}
I have tried it this way:
node('my-label') {
    stage 'SCM'
    git xxxx
    stage 'Build'
    sh ''' '''
}
But it seems Jenkins cannot find a node to run on.
How about this simple example?
stage("one") {
node("linux") {
echo "One"
}
}
stage("two") {
node("linux") {
echo "two"
}
}
stage("three") {
node("linux") {
echo "three"
}
}
Or use the variant below; this way you are guaranteed to have the stages run on the same node, even if there are multiple nodes with the same label, and the run is not interrupted by another job.
The example above releases the node after every stage; the example below holds the node for all three stages.
node("linux") {
stage("one") {
echo "One"
}
stage("two") {
echo "two"
}
stage("three") {
echo "three"
}
}
I'm using Jenkins Pipeline with the declarative syntax, currently with the following stages:
Prepare
Build (two parallel sets of steps)
Test (also two parallel sets of steps)
Ask if/where to deploy
Deploy
For steps 1, 2, 3, and 5 I need an agent (an executor) because they do actual work on the workspace. For step 4, I don't need one, and I would like not to block my available executors while waiting for user input. This seems to be referred to as either a "flyweight" or "lightweight" executor for the classic, scripted syntax, but I cannot find any information on how to achieve this with the declarative syntax.
So far I've tried:
Setting an agent directly in the pipeline options, and then setting agent none on the stage. This has no effect, and the pipeline runs as normal, blocking the executor while waiting for input. It is also mentioned in the documentation that it will have no effect, but I thought I'd give it a shot anyway.
Setting agent none in the pipeline options, and then setting an agent for each stage except #4. Unfortunately, but expectedly, this allocates a new workspace for every stage, which in turn requires me to stash and unstash. This is both messy and gives me further problems in the parallel stages (2 and 3) because I cannot have code outside the parallel construct. I assume the parallel steps run in the same workspace, so stashing/unstashing in both would have unfortunate results.
Here is an outline of my Jenkinsfile:
pipeline {
    agent {
        label 'build-slave'
    }
    stages {
        stage("Prepare build") {
            steps {
                // ...
            }
        }
        stage("Build") {
            steps {
                parallel(
                    frontend: {
                        // ...
                    },
                    backend: {
                        // ...
                    }
                )
            }
        }
        stage("Test") {
            steps {
                parallel(
                    jslint: {
                        // ...
                    },
                    phpcs: {
                        // ...
                    }
                )
            }
            post {
                // ...
            }
        }
        stage("Select deploy target") {
            steps {
                script {
                    // ... code that determines choiceParameterDefinition based on branch name ...
                    try {
                        timeout(time: 5, unit: 'MINUTES') {
                            deployEnvironment = input message: 'Deploy target', parameters: [choiceParameterDefinition]
                        }
                    } catch (ex) {
                        deployEnvironment = null
                    }
                }
            }
        }
        stage("Deploy") {
            when {
                expression {
                    return binding.variables.get("deployEnvironment")
                }
            }
            steps {
                // ...
            }
        }
    }
    post {
        // ...
    }
}
Am I missing something here, or is it just not possible in the current version?
Setting agent none at the top level, then agent { label 'foo' } on every stage, with agent none again on the input stage seems to work as expected for me.
i.e. Every stage that does some work runs on the same agent, while the input stage does not consume an executor on any agent.
pipeline {
    agent none
    stages {
        stage("Prepare build") {
            agent { label 'some-agent' }
            steps {
                echo "prepare: ${pwd()}"
            }
        }
        stage("Build") {
            agent { label 'some-agent' }
            steps {
                parallel(
                    frontend: {
                        echo "frontend: ${pwd()}"
                    },
                    backend: {
                        echo "backend: ${pwd()}"
                    }
                )
            }
        }
        stage("Test") {
            agent { label 'some-agent' }
            steps {
                parallel(
                    jslint: {
                        echo "jslint: ${pwd()}"
                    },
                    phpcs: {
                        echo "phpcs: ${pwd()}"
                    }
                )
            }
        }
        stage("Select deploy target") {
            agent none
            steps {
                input message: 'Deploy?'
            }
        }
        stage("Deploy") {
            agent { label 'some-agent' }
            steps {
                echo "deploy: ${pwd()}"
            }
        }
    }
}
However, there is no guarantee that using the same agent label within a Pipeline will always end up using the same workspace, e.g. if another build of the same job runs while the first build is waiting on the input.
You would have to use stash after the build steps. As you note, this cannot be done normally with parallel at the moment, so you'd have to additionally use a script block, in order to write a snippet of Scripted Pipeline for the stashing/unstashing after/before the parallel steps.
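A rough sketch of what that could look like (the stash name and the includes pattern are my own placeholders, not from the original pipeline):
stage("Build") {
    agent { label 'some-agent' }
    steps {
        parallel(
            frontend: {
                // ... build frontend ...
            },
            backend: {
                // ... build backend ...
            }
        )
        // Scripted snippet after the parallel step to preserve the build output
        script {
            stash name: 'build-output', includes: 'dist/**'
        }
    }
}
stage("Deploy") {
    agent { label 'some-agent' }
    steps {
        // restore the stashed files into this stage's (possibly different) workspace
        script {
            unstash 'build-output'
        }
    }
}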
There is a workaround to use the same build slave in the other stages.
You can set a variable with the node name and use it in the others.
i.e.:
pipeline {
    agent none
    stages {
        stage('First Stage Gets Agent Dynamically') {
            agent {
                node {
                    label "some-agent"
                }
            }
            steps {
                echo "first stage running on ${NODE_NAME}"
                script {
                    BUILD_AGENT = NODE_NAME
                }
            }
        }
        stage('Second Stage Setting Node by Name') {
            agent {
                node {
                    label "${BUILD_AGENT}"
                }
            }
            steps {
                echo "Second stage using ${NODE_NAME}"
            }
        }
    }
}
As of today (2021), you can use nested stages (https://www.jenkins.io/doc/book/pipeline/syntax/#sequential-stages) to group all the stages that must run in the same workspace before the input step, and all the stages that must run in the same workspace after the input step. Of course, you need to stash files or store artifacts in some external repository before the input step, because the second workspace may not be the same as the first one:
pipeline {
    agent none
    stages {
        stage('Deployment to Preproduction') {
            agent any
            stages {
                stage('Stage PRE.1') {
                    steps {
                        echo "Stage PRE.1"
                        sleep(10)
                    }
                }
                stage('Stage PRE.2') {
                    steps {
                        echo "Stage PRE.2"
                        sleep(10)
                    }
                }
            }
        }
        stage('Stage Ask Deploy') {
            steps {
                input message: 'Deploy to production?'
            }
        }
        stage('Deployment to Production') {
            agent any
            stages {
                stage('Stage PRO.1') {
                    steps {
                        echo "Stage PRO.1"
                        sleep(10)
                    }
                }
                stage('Stage PRO.2') {
                    steps {
                        echo "Stage PRO.2"
                        sleep(10)
                    }
                }
            }
        }
    }
}
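To illustrate the stash/unstash point, a minimal sketch (the stash name and includes pattern are placeholders of my own) of how the preproduction block could hand files over to the production block:
stage('Stage PRE.2') {
    steps {
        // preserve the results before the preproduction agent is released
        stash name: 'pre-artifacts', includes: 'build/**'
    }
}
// ... 'Stage Ask Deploy' with the input step stays unchanged ...
stage('Stage PRO.1') {
    steps {
        // restore the files into the (possibly different) production workspace
        unstash 'pre-artifacts'
    }
}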