checkout scm in Jenkinsfile

I have the following advanced scripted pipeline in a Jenkinsfile:
stage('Generate') {
    node {
        checkout scm
    }
    parallel windows: {
        node('windows') {
            sh 'cmake . -Bbuild.windows -A x64'
        }
    },
    macos: {
        node('apple') {
            sh '/usr/local/bin/cmake . -DPLATFORM="macos" -Bbuild.macos -GXcode'
        }
    },
    ios: {
        node('apple') {
            sh '/usr/local/bin/cmake . -DPLATFORM="ios" -Bbuild.ios -GXcode'
        }
    }
}
Note the top node that precedes the parallel windows/macos/ios nodes. Does this mean that checkout scm will be invoked on every subsequent building node (windows/apple), before proceeding to the parallel steps? In other words, does the script above guarantee that the repository will be checked out on every node that will be involved at any stage of this build?
Many thanks.

The first node step will allocate an arbitrary build agent and check out the source code there.
Later, additional nodes will be allocated with their own (initially empty) workspaces, where I can promise you that cmake will fail, as it runs in an empty directory.
You can use stash and unstash to copy over the files that are needed for the build (and subsequent stages):
stage('Generate') {
    node {
        checkout scm
        stash 'source'
    }
    parallel windows: {
        node('windows') {
            unstash 'source'
            sh 'cmake . -Bbuild.windows -A x64'
        }
    },
    macos: {
        node('apple') {
            unstash 'source'
            sh '/usr/local/bin/cmake . -DPLATFORM="macos" -Bbuild.macos -GXcode'
        }
    },
    ios: {
        node('apple') {
            unstash 'source'
            sh '/usr/local/bin/cmake . -DPLATFORM="ios" -Bbuild.ios -GXcode'
        }
    }
}
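Alternatively (a sketch of my own, not from the answer above; it assumes a full clone per agent is acceptable and that every agent can reach the repository), you can skip the stash and have each parallel node check out the repository itself:
parallel windows: {
    node('windows') {
        checkout scm // each node performs its own checkout
        sh 'cmake . -Bbuild.windows -A x64'
    }
},
macos: {
    node('apple') {
        checkout scm
        sh '/usr/local/bin/cmake . -DPLATFORM="macos" -Bbuild.macos -GXcode'
    }
}
// ... and the same pattern for the ios branch
The trade-off is one clone per node against transferring the stash through the Jenkins master.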

Related

How to checkout only a specific folder from a git repo and build

Hey folks, I need some help with my Jenkinsfile. Below is my use case.
This is the structure of my Git repo:
root
|-> app1
|   |-> jenkinsfile
|   |-> dockerfile
|-> app2
|   |-> jenkinsfile
|   |-> dockerfile
I have a monorepo with app1 and app2 in the root folder, and I want only app1 to build when there is a change in the app1 folder, and the same for app2. I have defined the Jenkinsfile in Jenkins, but when it builds, it looks for the dockerfile in the root folder, not inside app1.
jenkinsfile:
pipeline {
    agent any
    environment {
        PIPENV_VENV_IN_PROJECT = true
        DEVPI_USER = '\'jenkins_user\''
        DEVPI_PASSWORD = '\'V$5_Z%Bf-:mJ\''
        WORKSPACE = "${WORKSPACE}/app1"
    }
    stages {
        stage('Notify Bitbucket') {
            steps {
                bitbucketStatusNotify(buildState: 'INPROGRESS')
            }
        }
        stage('Build Environment') {
            steps {
                sh 'docker build -t app-builder .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm app-builder pytest'
            }
        }
    }
}
Use the dir step to change the directory, e.g.:
stage('Build Environment') {
    steps {
        dir('app1') {
            sh 'docker build -t app-builder .'
        }
    }
}
On a multi-branch pipeline, you could leverage the customWorkspace option of the Jenkins agent section.
The change you are making to the WORKSPACE env variable affects the variable only; it does not change the workspace location.
pipeline {
    agent {
        node {
            label 'my-node'
            customWorkspace "${WORKSPACE}/app1" // double quotes so Groovy interpolates ${WORKSPACE}
        }
    }
    environment {
        PIPENV_VENV_IN_PROJECT = true
        DEVPI_USER = '\'jenkins_user\''
        DEVPI_PASSWORD = '\'V$5_Z%Bf-:mJ\''
    }
    // ...
}
The git plugin allows you to define sparse checkout paths. You can use this to restrict the directories in your clone.
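For example, a minimal sketch (my own, reusing the branch and remote configuration from the implicit scm object; the SparseCheckoutPaths extension ships with the git plugin) that restricts the clone to app1:
checkout([
    $class: 'GitSCM',
    branches: scm.branches,
    userRemoteConfigs: scm.userRemoteConfigs,
    extensions: [
        // only materialize files under app1/ in the workspace
        [$class: 'SparseCheckoutPaths', sparseCheckoutPaths: [[path: 'app1/']]]
    ]
])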

Jenkins using docker agent with environment declarative pipeline

I would like to install maven and npm via a docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error shown beneath it. It might be caused by using agent none, but how can I use a node with a docker agent in a declarative pipeline?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried to set agent any, but this time I received the error "Still waiting to schedule task
Waiting for next available executor".
pipeline {
    agent none
    // environment {
    //     proxy = https://
    //     stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps {
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it can't be executed, and that is your issue here.
The following two stages have no agent, which means there is no docker container / server or whatever where they can be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agent from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    }
    ...
For further information, see the guide to set up master and agent machines, distributed Jenkins builds, or the official documentation.
I think you meant to add agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same stage:
agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
although your stage executes only npm commands. Only one of the two images will actually end up being used.
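A minimal corrected sketch (my example, assuming the Test stage really only needs node/npm, so the maven image can simply be dropped):
stage('Test') {
    agent { docker { image 'node:8.12.0' } } // one image per docker agent
    environment {
        HOME = '.'
    }
    steps {
        sh 'npm install'
        sh 'node --version'
    }
}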
To clarify a bit more on mkemmerz's answer: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add a global agent for the pipeline, because input steps block the executor context while waiting. See this link: https://jenkins.io/blog/2018/04/09/whats-in-declarative/
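As a sketch of that advice (my example, not from the linked post): keep agent none at the pipeline level, give the build stages their own docker agents, and leave the input stage agent-less so no executor is held while waiting for approval:
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Promotion') {
            // no agent here: input and timeout do not need a node context
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
    }
}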

Clone Jenkinsfile by changing default workspace from master to slave

I am working on a Jenkins pipeline script, and I have checked in my Jenkinsfile to a Git repository, which needs to be cloned to the local workspace. By default it is cloned to the master's (Unix) workspace, but I need it in the slave's (Windows) workspace.
Is there any plugin to change the default "Pipeline script from SCM" workspace location to the slave?
You can do something like this
pipeline {
    agent any
    options {
        skipDefaultCheckout()
    }
    stages {
        stage('checkout') {
            steps {
                node('windows') {
                    checkout scm
                }
            }
        }
    }
}
OR
pipeline {
    agent { label 'windows' } // run the whole pipeline on a windows agent
    stages {
        stage('build') {
            steps {
                // build
            }
        }
    }
}
In my case, the following pipeline configuration skips the default checkout on master and checks out my code only on the Jenkins slave node.
node {
    docker.image('php7.1.30:1.0.0').inside {
        skipDefaultCheckout() // this avoids the checkout on master
        stage("checkout") {
            checkout scm // here the checkout happens on the slave node
        }
        stage('NPM Install') {
            sh label: 'NPM INSTALL', script: "npm install"
            sh label: 'GRUNT INSTALL', script: "npm install -g grunt-cli"
        }
        stage('Executing grunt') {
            sh label: 'GRUNT DEFAULT', script: "grunt default"
        }
    }
}

Pass variables between Jenkins stages

I want to pass a variable that I read in stage A to stage B somehow. I see in some examples that people write it to a file, but I guess that is not really a nice solution. I tried writing it to an environment variable, but I'm not really successful with that. How can I set it up properly?
To get it working I tried a lot of things, and read that I should use """ instead of ''' to start a shell and escape variables as \${foo}, for example.
Below is what I have as a pipeline:
#!/usr/bin/env groovy
pipeline {
    agent { node { label 'php71' } }
    environment {
        packageName = 'my-package'
        packageVersion = ''
        groupId = 'vznl'
        nexus_endpoint = 'http://nexus.devtools.io'
        nexus_username = 'jenkins'
        nexus_password = 'J3nkins'
    }
    stages {
        // Package dependencies
        stage('Install dependencies') {
            steps {
                sh '''
                    echo Skip composer installation
                    #composer install --prefer-dist --optimize-autoloader --no-interaction
                '''
            }
        }
        // Unit tests
        stage('Unit Tests') {
            steps {
                sh '''
                    echo Running PHP code coverage tests...
                    #composer test
                '''
            }
        }
        // Create artifact
        stage('Package') {
            steps {
                echo 'Create package refs'
                sh """
                    mkdir -p ./build/zpk
                    VERSIONTAG=\$(grep 'version' composer.json)
                    REGEX='"version": "([0-9]+.[0-9]+.[0-9]+)"'
                    if [[ \${VERSIONTAG} =~ \${REGEX} ]]
                    then
                        env.packageVersion=\${BASH_REMATCH[1]}
                        /usr/bin/zs-client packZpk --folder=. --destination=./build/zpk --name=${env.packageName}-${env.packageVersion}.zpk --version=${env.packageVersion}
                    else
                        echo "No version found!"
                        exit 1
                    fi
                """
            }
        }
        // Publish ZPK package to Nexus
        stage('Publish packages') {
            steps {
                echo "Publish ZPK Package"
                sh "curl -u ${env.nexus_username}:${env.nexus_password} --upload-file ./build/zpk/${env.packageName}-${env.packageVersion}.zpk ${env.nexus_endpoint}/repository/zpk-packages/${groupId}/${env.packageName}-${env.packageVersion}.zpk"
                archive includes: './build/**/*.{zpk,rpm,deb}'
            }
        }
    }
}
As you can see, the packageVersion read in the Package stage needs to be used in the Publish packages stage as well.
Overall tips about the pipeline are of course always welcome as well.
A problem in your code is that you are assigning the version to an environment variable within the sh step. This step executes in its own isolated process, which inherits the parent process's environment variables, but the only way of passing data back to the parent is through STDOUT/STDERR or the exit code. As you want a string value, it is best to echo the version from the sh step and assign it to a variable within the script context.
If you reuse the node, the script context will persist, and variables will be available in the subsequent stage. A working example is below. Note that any attempt to put this within a parallel block is likely to fail, as the version variable could be written to by multiple processes.
#!/usr/bin/env groovy
pipeline {
    environment {
        AGENT_INFO = ''
    }
    agent {
        docker {
            image 'alpine'
            reuseNode true
        }
    }
    stages {
        stage('Collect agent info') {
            steps {
                echo "Current agent info: ${env.AGENT_INFO}"
                script {
                    def agentInfo = sh script: 'uname -a', returnStdout: true
                    println "Agent info within script: ${agentInfo}"
                    AGENT_INFO = agentInfo.replace("\n", "") // strip the trailing newline from the sh output
                    env.AGENT_INFO = AGENT_INFO
                }
            }
        }
        stage("Print agent info") {
            steps {
                script {
                    echo "Collected agent info: ${AGENT_INFO}"
                    echo "Environment agent info: ${env.AGENT_INFO}"
                }
            }
        }
    }
}
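A common shorthand for stripping that trailing newline (equivalent to the replace above) is to trim the result directly:
def agentInfo = sh(script: 'uname -a', returnStdout: true).trim()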
Another option, which doesn't involve using script but is purely declarative, is to stash things in a little temporary environment file.
You can then use this stash (like a temporary cache that only lives for the run) if the workload is sprayed out across parallel or distributed nodes as needed.
Something like:
pipeline {
    agent any
    stages {
        stage('first stage') {
            steps {
                // Write out any environment variables you like to a temporary file
                sh 'echo export FOO=baz > myenv'
                // Stash away for later use
                stash 'myenv'
            }
        }
        stage("later stage") {
            steps {
                // Unstash the temporary file and apply it
                unstash 'myenv'
                // use the unstashed vars
                sh 'source myenv && echo $FOO'
            }
        }
    }
}
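One caveat I would add to this answer: sh runs /bin/sh, which on Debian/Ubuntu agents is dash, and dash does not support the bash-only source builtin. The POSIX dot command works everywhere:
sh '. ./myenv && echo $FOO'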

Using Jenkins to deploy to staging and production based on condition

My project has a Jenkinsfile that runs smoothly. The problem is that I need to run some commands only on certain occasions. I'm using the GitHub plugin. I need to run the deploy only on master or on a new tag; one will be for staging and the other for production.
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'node -v'
                sh 'yarn install'
                sh 'yarn test -- --coverage'
            }
        }
        stage('Build') {
            steps {
                sh 'yarn build'
            }
        }
        stage('Deploy') {
            steps {
                sh 'aws s3 sync ./build s3://my.bucket --only-show-errors'
            }
        }
    }
}
I need master to deploy to one bucket, and a new tag to deploy to another. How can I create this conditional?
How about the following, working as two conditionals for two separate deployment scenarios? I think it's better to handle this with variables that indicate the deployment scenario instead of splitting it into two distinctly different stages, though. You could, for example, write a shell script that handles everything internally depending on tags/branches/whatever you need, instead of forcing yourself to control this at the pipeline level.
Each stage will have its steps executed only when its when part is satisfied. The Deploy stage will only run for the master branch, while the Deploy_NonMaster stage will only run for any non-master branch. Using the method shown in the when conditionals you can check for anything, including tags or whatnot.
stage('Deploy') {
    when {
        expression {
            GIT_BRANCH = sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
            return (GIT_BRANCH == 'master')
        }
    }
    steps {
        echo 'Do stuff/deploy.'
    }
}
stage('Deploy_NonMaster') {
    when {
        expression {
            GIT_BRANCH = sh(returnStdout: true, script: 'git rev-parse --abbrev-ref HEAD').trim()
            return !(GIT_BRANCH == 'master')
        }
    }
    steps {
        echo 'Do stuff/deploy.'
    }
}
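On a multibranch pipeline, the built-in branch and tag when conditions can express the same split declaratively. A sketch of my own, assuming release tags look like v1.2.3 and using placeholder bucket names:
stage('Deploy staging') {
    when { branch 'master' } // commits to master go to staging
    steps {
        sh 'aws s3 sync ./build s3://my.staging.bucket --only-show-errors'
    }
}
stage('Deploy production') {
    when { tag 'v*' } // new tags matching v* go to production
    steps {
        sh 'aws s3 sync ./build s3://my.production.bucket --only-show-errors'
    }
}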
