Jenkins declarative pipeline with Docker and Git

I am trying to build a pipeline for a Node.js application using Git and Docker. I have written a declarative Jenkinsfile with which everything works smoothly. I have set SCM polling for every two minutes and it gets invoked correctly, but the problem is that the old pipeline is still running, so the new poll gets queued with the message "Waiting for next available executor". I wanted to know whether I have done everything correctly and what I am missing.
My complete code can be found here.
I have tried running npm start in deliver.sh with & so it runs in the background, and used the input message option in the Jenkinsfile to stop the pipeline from finishing, because otherwise, with "npm start &" alone and without "input message", the pipeline reaches its end and the app container that was created gets killed. I am sure this approach is not correct. I then tried npm start without & and without input message; when the SCM poll was invoked, the pipeline started executing its stages, but since the previous container is still published on port 3000, it obviously cannot publish a new one to 3000, so the pipeline returns an error.
Dockerfile
FROM node:alpine
COPY . .
EXPOSE 3000
Jenkinsfile
pipeline {
    triggers {
        pollSCM 'H/2 * * * *'
    }
    agent {
        dockerfile {
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                // input message: 'Finished using the web site? (Click "Proceed" to continue)'
                // sh './jenkins/scripts/kill.sh'
            }
        }
    }
}
deliver.sh script
# set -x
# npm start &
npm start
# sleep 1
# copy the process ID of npm start to a file named .pidfile; this ID
# will be used to stop the app when the user presses any key
# echo $! > .pidfile
# set +x
Any help in this regard would be highly appreciated.

Add disableConcurrentBuilds() inside an options section to prevent two builds running at the same time:
pipeline {
    triggers {
        pollSCM 'H/2 * * * *'
    }
    agent {
        dockerfile {
            args '-p 3000:3000'
        }
    }
    options {
        disableConcurrentBuilds()
    }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
        stage('Deliver') {
            steps {
                sh './jenkins/scripts/deliver.sh'
                // input message: 'Finished using the web site? (Click "Proceed" to continue)'
                // sh './jenkins/scripts/kill.sh'
            }
        }
    }
}

Related

Jenkins Pipeline: Run the step when you see a new file or when it changes

I have a Laravel application that requires the "yarn" command at initialization, and later only if certain files are changed.
Using the code below, I manage to detect when that file has changed, but I need a suggestion on how to also run it at the first initialization (practically, on the first run that file, together with all the others, looks like a new file from the perspective of the Jenkinsfile).
Thanks!
Current try:
stage("Install NodeJS dependencies") {
when {
changeset "package.json"
}
agent {
docker {
image 'node:14-alpine'
reuseNode true
}
}
steps {
sh 'yarn --silent'
sh 'yarn dev --silent'
}
}
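One way to cover the first run as well is to combine the changeset condition with an expression condition inside anyOf. A minimal sketch, assuming the first run can be detected by the absence of a previous build (currentBuild.previousBuild):
stage("Install NodeJS dependencies") {
    when {
        anyOf {
            changeset "package.json"
            // assumption: no previous build means this is the job's first run
            expression { currentBuild.previousBuild == null }
        }
    }
    agent {
        docker {
            image 'node:14-alpine'
            reuseNode true
        }
    }
    steps {
        sh 'yarn --silent'
        sh 'yarn dev --silent'
    }
}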

Jenkins Pipeline with Dockerfile configuration

I am struggling to get the right configuration for my Jenkins pipeline.
It works, but I could not figure out how to separate the test & build stages.
Requirements:
Jenkins pipeline with separate test & build stages
Test stage requires Chromium (I currently use a Node Alpine image and add Chromium to it)
Build stage builds a Docker image, which is published later (publish stage)
Current Setup:
Jenkinsfile:
pipeline {
    environment {
        ...
    }
    options {
        ...
    }
    stages {
        stage('Restore') {
            ...
        }
        stage('Lint') {
            ...
        }
        stage('Build & Test DEV') {
            steps {
                script {
                    dockerImage = docker.build(...)
                }
            }
        }
        stage('Publish DEV') {
            steps {
                script {
                    docker.withRegistry(...) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Dockerfile:
FROM node:12.16.1-alpine AS build
#add chromium for unit tests
RUN apk add chromium
...
ENV CHROME_BIN=/usr/bin/chromium-browser
...
# works but runs both tests & build in the same jenkins stage
RUN npm run test-ci
RUN npm run build
...
This works, but as you can see, "Build & Test DEV" is a single stage; I would like to have two separate Jenkins stages (Test, Build).
I already tried using a Jenkins Docker agent and defining the image for the test stage inside the Jenkinsfile, but I don't know how to add the missing Chromium package there.
Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            //add chromium package here?
            //set Chrome_bin env?
        }
    }
I also thought about using a Docker image that already includes Chromium, but I couldn't find any official images.
Would really appreciate your help / insights on how to make this work.
You can either build your customized image (which includes the installation of Chromium), push it to a registry, and then pull it from that registry:
node {
    docker.withRegistry('https://my-registry') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
Or build the image directly with Jenkins from your Dockerfile:
node {
    def testImage = docker.build("test-image", "./dockerfiles/test")
    testImage.inside {
        sh 'make test'
    }
}
Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
Reference: Using Docker with Pipeline
In general, I would execute the npm run commands inside the Groovy syntax and not inside the Dockerfile. So your code would look something like this:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root' // better would be to use sudo, but this should work
        }
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I would also suggest that you collect the results within Jenkins with the Warnings NG Jenkins plugin.
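For example, with a step like the following (a minimal sketch; the ESLint tool and the report path are assumptions, substitute whatever your lint/test stage actually produces):
stage('report') {
    steps {
        // hypothetical report path; adjust the tool and pattern to your linter
        recordIssues tools: [esLint(pattern: 'reports/eslint.xml')]
    }
}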

Jenkins using docker agent with environment declarative pipeline

I would like to install Maven and npm via a Docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error below. It might be due to using agent none, but how can I use node with a Docker agent via a declarative pipeline in Jenkins?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried to set agent any, but this time I received the error "Still waiting to schedule task: Waiting for next available executor".
pipeline {
    agent none
    // environment {
    //     proxy = https://
    //     stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //     steps {
        //         sh "npm install -g apigeelint"
        //         sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //     }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify an agent for all stages. An agent executes a specific stage; if a stage has no agent, it can't be executed, and that is your issue here.
The following two stages have no agent, which means there is no Docker container / server or whatever where they could be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agent from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    }
    ...
For further information see the guide to set up master and agent machines, distributed Jenkins builds, or the official documentation.
I think you meant to use agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same stage:
agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
although your stage is executing only npm commands. I believe only one of the images will actually be used.
To clarify a bit more on mkemmerz's answer: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add an agent at the pipeline level, because input steps block the executor context while they wait. See https://jenkins.io/blog/2018/04/09/whats-in-declarative/
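A minimal sketch of that shape, based on the stages above (the Build stage body is illustrative):
pipeline {
    agent none // no global agent, so the input stage does not hold an executor
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Promotion') {
            // no agent here: the input step waits without blocking an executor
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
    }
}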

Jenkins declarative pipeline with multiple slaves

I have a pipeline with multiple stages, some of which run in parallel. Up until now I had a single code block indicating where the job should run.
pipeline {
    triggers { pollSCM '0 0 * * 0' }
    agent {
        dockerfile {
            label 'jenkins-slave'
            filename 'Dockerfile'
        }
    }
    stages {
        stage('1') {
            steps { sh "blah" }
        } // stage
    } // stages
} // pipeline
What I need to do now is run a new stage on a different slave, NOT in Docker.
I tried adding an agent statement for that stage, but it seems to try to run the stage within a Docker container on the second slave.
stage('test new slave') {
    agent { node { label 'e2e-aws' } }
    steps {
        sh "ifconfig"
    } // steps
} // stage
I get the following error message:
13:14:23 unknown flag: --workdir
13:14:23 See 'docker exec --help'.
I tried setting the agent to none for the pipeline and using an agent for every stage, and ran into two issues:
1. My post actions show an error.
2. The stages that contain parallel stages also had an error.
I can't find any examples that are similar to what I am doing.
You can use the node block to select a node to run a particular stage.
pipeline {
    agent any
    stages {
        stage('Init') {
            steps {
                node('master') {
                    echo "Run inside a MASTER"
                }
            }
        }
    }
}
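Applied to the stage from the question, that pattern would look something like this (a sketch reusing the question's e2e-aws label):
stage('test new slave') {
    steps {
        // run this block directly on the e2e-aws node, outside the Docker agent
        node('e2e-aws') {
            sh 'ifconfig'
        }
    }
}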

Pass variables between Jenkins stages

I want to pass a variable which I read in stage A to stage B somehow. I see in some examples that people write it to a file, but I guess that is not really a nice solution. I tried writing it to an environment variable, but I was not really successful with that. How can I set it up properly?
To get it working I tried a lot of things, and I read that I should use """ instead of ''' to start a shell and escape variables as \${foo}, for example.
Below is what I have as a pipeline:
#!/usr/bin/env groovy
pipeline {
    agent { node { label 'php71' } }
    environment {
        packageName='my-package'
        packageVersion=''
        groupId='vznl'
        nexus_endpoint='http://nexus.devtools.io'
        nexus_username='jenkins'
        nexus_password='J3nkins'
    }
    stages {
        // Package dependencies
        stage('Install dependencies') {
            steps {
                sh '''
                    echo Skip composer installation
                    #composer install --prefer-dist --optimize-autoloader --no-interaction
                '''
            }
        }
        // Unit tests
        stage('Unit Tests') {
            steps {
                sh '''
                    echo Running PHP code coverage tests...
                    #composer test
                '''
            }
        }
        // Create artifact
        stage('Package') {
            steps {
                echo 'Create package refs'
                sh """
                    mkdir -p ./build/zpk
                    VERSIONTAG=\$(grep 'version' composer.json)
                    REGEX='"version": "([0-9]+.[0-9]+.[0-9]+)"'
                    if [[ \${VERSIONTAG} =~ \${REGEX} ]]
                    then
                        env.packageVersion=\${BASH_REMATCH[1]}
                        /usr/bin/zs-client packZpk --folder=. --destination=./build/zpk --name=${env.packageName}-${env.packageVersion}.zpk --version=${env.packageVersion}
                    else
                        echo "No version found!"
                        exit 1
                    fi
                """
            }
        }
        // Publish ZPK package to Nexus
        stage('Publish packages') {
            steps {
                echo "Publish ZPK Package"
                sh "curl -u ${env.nexus_username}:${env.nexus_password} --upload-file ./build/zpk/${env.packageName}-${env.packageVersion}.zpk ${env.nexus_endpoint}/repository/zpk-packages/${groupId}/${env.packageName}-${env.packageVersion}.zpk"
                archive includes: './build/**/*.{zpk,rpm,deb}'
            }
        }
    }
}
As you can see, the packageVersion read in the Package stage needs to be used in the Publish packages stage as well.
Overall tips on the pipeline are of course always welcome as well.
A problem in your code is that you are assigning the version to an environment variable within the sh step. This step executes in its own isolated process, inheriting the parent process's environment variables.
However, the only way of passing data back to the parent is through STDOUT/STDERR or the exit code. As you want a string value, it is best to echo the version from the sh step and assign it to a variable within the script context.
If you reuse the node, the script context persists and the variables will be available in the subsequent stage. A working example is below. Note that any attempt to put this within a parallel block is likely to fail, as the version variable can be written to by multiple processes.
#!/usr/bin/env groovy
pipeline {
    environment {
        AGENT_INFO = ''
    }
    agent {
        docker {
            image 'alpine'
            reuseNode true
        }
    }
    stages {
        stage('Collect agent info') {
            steps {
                echo "Current agent info: ${env.AGENT_INFO}"
                script {
                    def agentInfo = sh script: 'uname -a', returnStdout: true
                    println "Agent info within script: ${agentInfo}"
                    AGENT_INFO = agentInfo.replace("\n", "")
                    env.AGENT_INFO = AGENT_INFO
                }
            }
        }
        stage("Print agent info") {
            steps {
                script {
                    echo "Collected agent info: ${AGENT_INFO}"
                    echo "Environment agent info: ${env.AGENT_INFO}"
                }
            }
        }
    }
}
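Applied to the original composer.json lookup, the same returnStdout idea would look roughly like this (a sketch; the grep invocation is an assumption, adjust it to your file's format):
stage('Package') {
    steps {
        sh 'mkdir -p ./build/zpk'
        script {
            // read the version with grep and hand it back to Groovy via returnStdout,
            // instead of trying to assign it inside the shell
            env.packageVersion = sh(
                script: '''grep '"version"' composer.json | grep -oE '[0-9]+\\.[0-9]+\\.[0-9]+' | head -n 1''',
                returnStdout: true
            ).trim()
        }
        sh "/usr/bin/zs-client packZpk --folder=. --destination=./build/zpk --name=${env.packageName}-${env.packageVersion}.zpk --version=${env.packageVersion}"
    }
}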
Another option, which doesn't involve using script but is purely declarative, is to stash things in a small temporary environment file.
You can then use this stash (like a temporary cache that only lives for the run) if the workload is sprayed out across parallel or distributed nodes as needed.
Something like:
pipeline {
    agent any
    stages {
        stage('first stage') {
            steps {
                // Write out any environment variables you like to a temporary file
                sh 'echo export FOO=baz > myenv'
                // Stash away for later use
                stash 'myenv'
            }
        }
        stage("later stage") {
            steps {
                // Unstash the temporary file and apply it
                unstash 'myenv'
                // use the unstashed vars ('.' is the portable form of 'source' in /bin/sh)
                sh '. ./myenv && echo $FOO'
            }
        }
    }
}
