Jenkins Pipeline with Dockerfile configuration - docker

I am struggling to get the right configuration for my Jenkins pipeline.
It works, but I could not figure out how to separate the test & build stages.
Requirements:
Jenkins pipeline with separated test & build stages
The test stage requires Chromium (I currently use the node alpine image and add Chromium)
The build stage builds a docker image, which is published later (publish stage)
Current Setup:
Jenkinsfile:
pipeline {
    environment {
        ...
    }
    options {
        ...
    }
    stages {
        stage('Restore') {
            ...
        }
        stage('Lint') {
            ...
        }
        stage('Build & Test DEV') {
            steps {
                script {
                    dockerImage = docker.build(...)
                }
            }
        }
        stage('Publish DEV') {
            steps {
                script {
                    docker.withRegistry(...) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Dockerfile:
FROM node:12.16.1-alpine AS build
#add chromium for unit tests
RUN apk add chromium
...
ENV CHROME_BIN=/usr/bin/chromium-browser
...
# works but runs both tests & build in the same jenkins stage
RUN npm run test-ci
RUN npm run build
...
This works, but as you can see, "Build & Test DEV" is a single stage;
I would like to have 2 separate Jenkins stages (Test, Build).
I already tried using the Jenkins docker agent and defining the image for the test stage inside the Jenkinsfile, but I don't know how to add the missing Chromium package there.
Jenkinsfile:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            //add chromium package here?
            //set Chrome_bin env?
        }
    }
I also thought about using a docker image that already includes Chromium, but I couldn't find any official images.
Would really appreciate your help / insights on how to make this work.

You can either build your customized image (which includes the installation of Chromium), push it to a registry, and then pull it from that registry:
node {
    docker.withRegistry('https://my-registry') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
Or build the image directly with Jenkins with your Dockerfile:
node {
    def testImage = docker.build("test-image", "./dockerfiles/test")
    testImage.inside {
        sh 'make test'
    }
}
Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
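For the question's use case, the Dockerfile at ./dockerfiles/test could be a minimal sketch along these lines (the base image and the CHROME_BIN value are taken from the question; the --no-cache flag is an assumption to keep the image small):
# hypothetical test image: node plus chromium for the unit tests
FROM node:12.16.1-alpine
RUN apk add --no-cache chromium
ENV CHROME_BIN=/usr/bin/chromium-browser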
Reference: Using Docker with Pipeline

In general I would execute the npm run commands inside the Groovy syntax and not inside the Dockerfile. Your code would then look something like this:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root' // better would be to use sudo, but this should work
        }
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I would also suggest that you collect the results within Jenkins with the Warnings Next Generation Jenkins plugin.
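A minimal sketch of collecting results in a post section, assuming the test run writes JUnit XML to test-results/ and an ESLint report to eslint.xml (both paths are hypothetical):
pipeline {
    agent any
    stages {
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
    post {
        always {
            // junit comes from the JUnit plugin;
            // recordIssues/esLint come from the Warnings Next Generation plugin
            junit 'test-results/**/*.xml'
            recordIssues tools: [esLint(pattern: 'eslint.xml')]
        }
    }
}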

Related

How to checkout only a specific folder from a git repo and build

Hey folks, I need some help with my Jenkinsfile. Below is my use case.
This is the structure of my git repo:
root
|->app1
|  |->jenkinsfile
|  |->dockerfile
|->app2
|  |->jenkinsfile
|  |->dockerfile
I have a monorepo, with app1 and app2 in the root folder, and I want app1 to build only when there is a change in the app1 folder, and the same for app2. I have defined the Jenkinsfile in Jenkins, but when it builds, it looks for the dockerfile in the root folder, not inside app1.
Jenkinsfile:
pipeline {
    agent any
    environment {
        PIPENV_VENV_IN_PROJECT = true
        DEVPI_USER = '\'jenkins_user\''
        DEVPI_PASSWORD = '\'V$5_Z%Bf-:mJ\''
        WORKSPACE = "${WORKSPACE}/app1"
    }
    stages {
        stage('Notify Bitbucket') {
            steps {
                bitbucketStatusNotify(buildState: 'INPROGRESS')
            }
        }
        stage('Build Environment') {
            steps {
                sh 'docker build -t app-builder .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm app-builder pytest'
            }
        }
    }
}
Use the dir step to change the directory, e.g.:
stage('Build Environment') {
    steps {
        dir("app1") {
            sh 'docker build -t app-builder .'
        }
    }
}
On a multibranch pipeline, you could leverage the customWorkspace option of the Jenkins agents.
The change you are making to the WORKSPACE env variable affects the variable only; it does not change the workspace location.
pipeline {
    agent {
        node {
            label 'my-node'
            customWorkspace '${WORKSPACE}/app1'
        }
    }
    environment {
        PIPENV_VENV_IN_PROJECT = true
        DEVPI_USER = '\'jenkins_user\''
        DEVPI_PASSWORD = '\'V$5_Z%Bf-:mJ\''
    }
The git plugin allows you to define sparse checkout paths. You can use this to restrict the directories in your clone, as in the sketch below.
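A minimal sketch of such a checkout step, which you can drop into a stage; the branch name and repository URL are placeholders:
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'SparseCheckoutPaths',
                  sparseCheckoutPaths: [[path: 'app1/']]]],
    userRemoteConfigs: [[url: 'https://example.org/your/repo.git']]
])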

Jenkins using docker agent with environment declarative pipeline

I would like to install Maven and npm via a docker agent using a Jenkins declarative pipeline. But when I use the script below, Jenkins throws the error shown below. It might be caused by using agent none, but how can I use a node with a docker agent in a declarative pipeline in Jenkins?
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
I tried to set agent any, but this time I received the error "Still waiting to schedule task
Waiting for next available executor".
pipeline {
    agent none
    // environment {
    //   proxy = https://
    //   stable_revision = sh(script: 'curl -H "Authorization: Basic $base64encoded"
    // }
    stages {
        stage('Build') {
            agent {
                docker { image 'maven:3-alpine' }
            }
            steps {
                sh 'mvn --version'
                echo "$apigeeUsername"
                echo "Stable Revision: ${env.stable_revision}"
            }
        }
        stage('Test') {
            agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
            environment {
                HOME = '.'
            }
            steps {
                script {
                    try {
                        sh 'npm install'
                        sh 'node --version'
                        //sh 'npm test/unit/*.js'
                    } catch (e) {
                        throw e
                    }
                }
            }
        }
        // stage('Policy-Code Analysis') {
        //   steps {
        //     sh "npm install -g apigeelint"
        //     sh "apigelint -s wiservice_api_v1/apiproxy/ -f codeframe.js"
        //   }
        // }
        stage('Promotion') {
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            steps {
                sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
                //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
            }
        }
    }
}
Basically your error message tells you everything you need to know:
ERROR: Attempted to execute a step that requires a node context while
‘agent none’ was specified. Be sure to specify your own ‘node { ... }’
blocks when using ‘agent none’.
So what is the issue here? You use agent none for your pipeline, which means you do not specify a single agent for all stages. An agent executes a specific stage; if a stage has no agent, it can't be executed, and this is your issue here.
The following two stages have no agent, which means there is no docker container / server or whatever where they can be executed:
stage('Promotion') {
    steps {
        timeout(time: 2, unit: 'DAYS') {
            input 'Do you want to Approve?'
        }
    }
}
stage('Deployment') {
    steps {
        sh "mvn -f wiservice_api_v1/pom.xml install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} -Dapigee.config.options=update"
        //sh "mvn apigee-enterprise:install -Ptest -Dusername=${apigeeUsername} -Dpassword=${apigeePassword} "
    }
}
So you have to add agent { ... } to both stages separately, or use a global agent like the following and remove the agents from your stages:
pipeline {
    agent {
        docker { image 'maven:3-alpine' }
    } ...
For further information, see the guide to setting up master and agent machines, distributed Jenkins builds, or the official documentation.
I think you meant to use agent any instead of agent none, because each stage requires at least one agent (either declared at the top for the pipeline or per stage).
Also, I see some more issues.
Your Test stage specifies two images for the same stage:
agent { docker { image 'maven:3-alpine' image 'node:8.12.0' } }
although your stage executes only npm commands. I believe only one of the images will actually be used; a corrected stage is sketched below.
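A minimal corrected Test stage, assuming the node image alone is enough for the npm steps (image name and HOME workaround taken from the question):
stage('Test') {
    agent { docker { image 'node:8.12.0' } }
    environment {
        HOME = '.'
    }
    steps {
        sh 'npm install'
        sh 'node --version'
    }
}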
To clarify a bit more on mkemmerz's answer: your Promotion stage is designed correctly. If you plan to have an input step in the pipeline, do not add an agent for the entire pipeline, because input steps block the executor context. See this link: https://jenkins.io/blog/2018/04/09/whats-in-declarative/
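A minimal sketch of that pattern, keeping agent none at the top, docker agents on the working stages, and no agent on the approval stage (image names are taken from the question; the Maven arguments are elided):
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh 'mvn --version'
            }
        }
        stage('Promotion') {
            // no agent here: input does not need a node context,
            // so no executor is blocked while waiting for approval
            steps {
                timeout(time: 2, unit: 'DAYS') {
                    input 'Do you want to Approve?'
                }
            }
        }
        stage('Deployment') {
            agent { docker { image 'maven:3-alpine' } }
            steps {
                sh 'mvn -f wiservice_api_v1/pom.xml install -Ptest'
            }
        }
    }
}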

Jenkins declarative pipeline: npm command not found

So I have set up this Jenkins EC2 instance, SSHed into it, globally installed Node, and set PATH. But when executing my pipeline, it gives me an "npm: command not found" error.
I put echo $PATH in my pipeline and the result is:
/home/ec2-user/.nvm/versions/node/v10.15.1/bin:/sbin:/usr/sbin:/bin:/usr/bin
Which looks correct.
For reference, here's my very simple pipeline:
pipeline {
    agent { label 'master' }
    environment {
        PATH = "/home/ec2-user/.nvm/versions/node/v10.15.1/bin:${env.PATH}"
    }
    stages {
        stage('Test npm') {
            steps {
                sh """
                    echo $PATH
                    npm --version
                """
            }
        }
    }
}
Would appreciate any help.
As @Dibakar Adtya pointed out, the problem is that when Jenkins executes a pipeline, it runs as the jenkins user, whereas I configured Node under another user, ec2-user, and jenkins doesn't have access to ec2-user's bin. Thank you @Dibakar!
A more elegant solution is to use the Jenkins NodeJS Plugin. It saves you from the environment hassles. Note that the "nodejs" name passed to the tools directive must match a NodeJS installation configured under Manage Jenkins > Global Tool Configuration. Now the pipeline is:
pipeline {
    agent { label 'master' }
    tools { nodejs "nodejs" }
    stages {
        stage('Test npm') {
            steps {
                sh """
                    npm --version
                """
            }
        }
    }
}

Jenkins Pipeline: Executing a shell script

I have created a pipeline like the one below. Please note that I have the script files, namely "backup_grafana.sh" and "gitPush.sh", in the source code repository where the Jenkinsfile is present. But I am unable to execute the scripts because of the following error:
/home/jenkins/workspace/grafana-backup#tmp/durable-52495dad/script.sh:
line 1: backup_grafana.sh: not found
Please note that I am running the Jenkins master on Kubernetes in a pod, so copying the script files onto the master as suggested by the error is not possible, because the pod may be destroyed and recreated dynamically (in which case, with a new pod, my scripts would no longer be available on the Jenkins master).
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh 'backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh 'gitPush.sh'
            }
        }
    }
}
Can you please suggest how I can run these scripts in the Jenkinsfile? I would also like to mention that I want to run these scripts on a Python slave that I have already created.
If the command sh 'backup_grafana.sh' fails to execute when it actually should have executed successfully, here are two possible solutions.
1) Maybe you need a dot slash in front of those executable commands to tell your shell where they are. If they are not in your $PATH, you need to tell your shell that they can be found in the current directory. Here's the fixed Jenkinsfile with four non-whitespace characters added:
pipeline {
    agent {
        node {
            label 'jenkins-slave-python2.7'
        }
    }
    stages {
        stage('Take the grafana backup') {
            steps {
                sh './backup_grafana.sh'
            }
        }
        stage('Push to the grafana-backup submodule repository') {
            steps {
                sh './gitPush.sh'
            }
        }
    }
}
2) Check whether you have declared your file as a bash or sh script by declaring one of the following as the first line in your script:
#!/bin/bash
or
#!/bin/sh
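One more thing worth checking, offered as an assumption about this setup: if the scripts were committed without the executable bit set, the shell reports "Permission denied" rather than "not found"; setting the bit in the step itself works around that:
// hypothetical step: make the script executable before running it
sh 'chmod +x backup_grafana.sh && ./backup_grafana.sh'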

How to use an environment variable in the agent section of a Jenkins Declarative Pipeline?

I'm building a Docker image for an application based on Node.js where some of the dependencies require an NPM token for a private NPM registry, but when building the image the variable containing the token is null, e.g.
docker build -t 3273e0bfe8dd329a96070382c1c554454ca91f96 --build-args NPM_TOKEN=null -f Dockerfile
A simplified pipeline is:
pipeline {
    environment {
        NPM_TOKEN = credentials('npm-token')
    }
    agent {
        dockerfile {
            additionalBuildArgs "--build-args NPM_TOKEN=${env.NPM_TOKEN}"
        }
    }
    stages {
        stage('Lint') {
            steps {
                sh 'npm run lint'
            }
        }
    }
}
Is there a way to use the env variable in that section, or is it not currently supported?
BTW, I've followed the suggestions in "Docker and private modules" on how to use an NPM token to build a docker image.
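For context, the pattern from that guide is roughly the following Dockerfile sketch (assuming the default npm registry; the base image and file layout are hypothetical):
FROM node:12-alpine
ARG NPM_TOKEN
COPY package.json package-lock.json ./
# write the token to .npmrc only for the install, then remove it
RUN echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc \
    && npm install \
    && rm -f .npmrc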
This is definitely a bug with the declarative pipeline. You can track the issue related to this here: https://issues.jenkins-ci.org/browse/JENKINS-42369
If you move away from using the declarative pipeline and use scripted pipelines instead, this won't occur, although your Jenkinsfile will be "wordier". A scripted sketch is shown below.
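A minimal scripted-pipeline sketch, assuming the same 'npm-token' string credential from the question and a hypothetical image name:
node {
    checkout scm
    withCredentials([string(credentialsId: 'npm-token', variable: 'NPM_TOKEN')]) {
        // the interpolated token ends up on the docker command line,
        // so it may appear in the build log
        def image = docker.build('my-app', "--build-arg NPM_TOKEN=${NPM_TOKEN} .")
        image.inside {
            sh 'npm run lint'
        }
    }
}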
I found a solution for this. Use the credentials manager to add NPM_TOKEN, then you can do:
pipeline {
    agent {
        docker {
            image 'node:latest'
            args '-e NPM_TOKEN=$NPM_TOKEN'
        }
    }
    stages {
        stage('npm install') {
            steps {
                sh 'npm install'
            }
        }
        stage('static code analysis') {
            steps {
                sh 'npx eslint .'
            }
        }
    }
}
I came up with a workaround for this, and it still uses the declarative pipeline.
I'm using this technique to download private GitHub repos with pip.
// Workarounds for https://issues.jenkins-ci.org/browse/JENKINS-42369
// Warning: The secret will show up in your build log, and possibly be in your docker image history as well.
// Don't use this if you have a super-confidential codebase
def get_credential(name) {
    def v;
    withCredentials([[$class: 'StringBinding', credentialsId: name, variable: 'foo']]) {
        v = env.foo;
    }
    return v
}

def get_additional_build_args() {
    return "--build-arg GITHUB_ACCESS_TOKEN=" + get_credential("mysecretid")
}

pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.test'
            additionalBuildArgs get_additional_build_args()
        }
    }
