How to check out only a specific folder from a git repo and build - Jenkins

Hey folks, I need some help with my Jenkinsfile. Below is my use case.
This is the structure of my Git repo:
root
|-> app1
|   |-> Jenkinsfile
|   |-> Dockerfile
|-> app2
    |-> Jenkinsfile
    |-> Dockerfile
I have a monorepo, with app1 and app2 in the root folder, and I want that when there is a change in the app1 folder, only app1 builds, and the same for app2. I have defined the Jenkinsfile in Jenkins, but when it builds, it looks for the Dockerfile in the root folder, not inside app1.
Jenkinsfile:
pipeline {
    agent any
    environment {
        PIPENV_VENV_IN_PROJECT = true
        DEVPI_USER = '\'jenkins_user\''
        DEVPI_PASSWORD = '\'V$5_Z%Bf-:mJ\''
        WORKSPACE = "${WORKSPACE}/app1"
    }
    stages {
        stage('Notify Bitbucket') {
            steps {
                bitbucketStatusNotify(buildState: 'INPROGRESS')
            }
        }
        stage('Build Environment') {
            steps {
                sh 'docker build -t app-builder .'
            }
        }
        stage('Test') {
            steps {
                sh 'docker run --rm app-builder pytest'
            }
        }
    }
}

Use the dir step to change the directory, e.g.:
stage('Build Environment') {
    steps {
        dir("app1") {
            sh 'docker build -t app-builder .'
        }
    }
}
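To get the second part of the requirement (build app1 only when something under app1 changed), the stage can additionally be gated with a changeset condition. A sketch, assuming a multibranch pipeline where Jenkins computes a changelog for each build:

stage('Build Environment') {
    when { changeset 'app1/**' }  // skip this stage unless files under app1/ changed
    steps {
        dir("app1") {
            sh 'docker build -t app-builder .'
        }
    }
}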

On a multibranch pipeline, you could leverage the customWorkspace option of the Jenkins agent.
The change you are making to the WORKSPACE environment variable affects the variable only; it does not change the workspace location.
pipeline {
    agent {
        node {
            label 'my-node'
            customWorkspace '${WORKSPACE}/app1'
        }
    }
    environment {
        PIPENV_VENV_IN_PROJECT = true
        DEVPI_USER = '\'jenkins_user\''
        DEVPI_PASSWORD = '\'V$5_Z%Bf-:mJ\''
    }
    // ... stages as before ...
}

The git plugin allows you to define sparse checkout paths. You can use this to restrict the directories in your clone.
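For example, a sparse checkout of just app1 could look like this (a sketch; the branch name and repo URL are placeholders):

checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    extensions: [[$class: 'SparseCheckoutPaths',
                  sparseCheckoutPaths: [[path: 'app1/']]]],
    userRemoteConfigs: [[url: 'https://example.com/your/repo.git']]
])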

Related

Jenkins Pipeline with Dockerfile configuration

I am struggling to get the right configuration for my Jenkins Pipeline.
It works, but I could not figure out how to separate the test & build stages.
Requirements:
Jenkins Pipeline with separate test & build stages
Test stage requires chromium (I currently use node alpine image + adding chromium)
Build stage is building a docker image, which is published later (publish stage)
Current Setup:
Jenkinsfile:
pipeline {
    environment {
        ...
    }
    options {
        ...
    }
    stages {
        stage('Restore') {
            ...
        }
        stage('Lint') {
            ...
        }
        stage('Build & Test DEV') {
            steps {
                script {
                    dockerImage = docker.build(...)
                }
            }
        }
        stage('Publish DEV') {
            steps {
                script {
                    docker.withRegistry(...) {
                        dockerImage.push()
                    }
                }
            }
        }
    }
}
Dockerfile:
FROM node:12.16.1-alpine AS build
#add chromium for unit tests
RUN apk add chromium
...
ENV CHROME_BIN=/usr/bin/chromium-browser
...
# works but runs both tests & build in the same jenkins stage
RUN npm run test-ci
RUN npm run build
...
This works, but as you can see, "Build & Test DEV" is a single stage, and I would like to have 2 separate Jenkins stages (Test, Build).
I already tried using the Jenkins Docker agent and defining the image for the test stage inside the Jenkinsfile, but I don't know how to add the missing Chromium package there.
Jenkinsfile:
pipeline {
agent {
docker {
image 'node:12.16.1-alpine'
//add chromium package here?
//set Chrome_bin env?
}
}
I also thought about using a Docker image that already includes Chromium, but I couldn't find any official images.
Would really appreciate your help / insights on how to make this work.
You can either build your customized image (which includes the installation of Chromium) and push it to a registry and then pull it from that registry:
node {
    docker.withRegistry('https://my-registry') {
        docker.image('my-custom-image').inside {
            sh 'make test'
        }
    }
}
Or build the image directly with Jenkins with your Dockerfile:
node {
    def testImage = docker.build("test-image", "./dockerfiles/test")
    testImage.inside {
        sh 'make test'
    }
}
Builds test-image from the Dockerfile found at ./dockerfiles/test/Dockerfile.
Reference: Using Docker with Pipeline
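For the Chromium requirement from the question, that Dockerfile could be as small as the following sketch (derived from the question's own Dockerfile; the ./dockerfiles/test path is just the location used in the example above):

# dockerfiles/test/Dockerfile
FROM node:12.16.1-alpine
RUN apk add --no-cache chromium
ENV CHROME_BIN=/usr/bin/chromium-browser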
So in general I would execute the npm run commands inside the Groovy syntax and not inside the Dockerfile. Your code would then look something like this:
pipeline {
    agent {
        docker {
            image 'node:12.16.1-alpine'
            args '-u root:root' // better would be to use sudo, but this should work
        }
    }
    stages {
        stage('Preparation') {
            steps {
                sh 'apk add chromium'
            }
        }
        stage('build') {
            steps {
                sh 'npm run build'
            }
        }
        stage('test') {
            steps {
                sh 'npm run test'
            }
        }
    }
}
I would also suggest that you collect the results within Jenkins with the warnings-ng Jenkins plugin.
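For example, ESLint findings could be collected with the plugin's recordIssues step in a post section (a sketch; the report path is an assumption, and ESLint must be configured to emit a checkstyle-format report there):

post {
    always {
        // warnings-ng step; 'esLint' is the plugin's built-in ESLint tool
        recordIssues tools: [esLint(pattern: '**/eslint-report.xml')]
    }
}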

Jenkins declarative pipeline: npm command not found

So I have set up this Jenkins EC2 instance, SSHed into it, globally installed Node, and set PATH. But when executing my pipeline, it gives me an "npm: command not found" error.
I put echo $PATH in my pipeline and the result is:
/home/ec2-user/.nvm/versions/node/v10.15.1/bin:/sbin:/usr/sbin:/bin:/usr/bin
which looks correct.
For reference, here's my very simple pipeline:
pipeline {
    agent { label 'master' }
    environment {
        PATH = "/home/ec2-user/.nvm/versions/node/v10.15.1/bin:${env.PATH}"
    }
    stages {
        stage('Test npm') {
            steps {
                sh """
                    echo $PATH
                    npm --version
                """
            }
        }
    }
}
Would appreciate any help.
As @Dibakar Adtya pointed out, the problem is that when Jenkins executes a pipeline, it runs under the user jenkins, whereas I configured Node under another user, ec2-user, and jenkins doesn't have access to ec2-user's bin. Thank you @Dibakar!
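A quick way to confirm this kind of mismatch is to check the PATH as the jenkins user directly on the node (a sketch):

# nvm installs Node per user, so this will typically fail for the jenkins user
sudo -u jenkins -i which npm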
A more elegant solution is to use Jenkins NodeJS Plugin. It saves you from the environment hassles. Now the pipeline is:
pipeline {
    agent { label 'master' }
    tools { nodejs "nodejs" }
    stages {
        stage('Test npm') {
            steps {
                sh """
                    npm --version
                """
            }
        }
    }
}
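Note that the "nodejs" string must match the name of a NodeJS installation configured under Manage Jenkins → Global Tool Configuration; the tools block then puts that installation's bin directory on the PATH for the duration of the build.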

What are the #tmp folders in a Jenkins workspace and how to clean them up

I have a Jenkins pipeline for a PHP project in a Docker container. This is my Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build') {
            agent any
            steps {
                sh 'docker-compose up -d'
                sh 'docker exec symfony composer install'
            }
        }
        stage('Test') {
            steps {
                sh 'docker exec symfony php ./bin/phpunit --coverage-clover=\'reports/coverage/coverage.xml\' --coverage-html=\'reports/coverage\' --coverage-crap4j=\'reports/crap4j.xml\''
            }
        }
        stage('Coverage') {
            steps {
                step([$class: 'CloverPublisher', cloverReportDir: '/reports/coverage', cloverReportFileName: 'coverage.xml'])
            }
        }
    }
    post {
        cleanup {
            sh 'docker-compose down -v'
            cleanWs()
        }
    }
}
After running the pipeline, the /var/lib/jenkins/workspace folder contains 4 folders (assuming my project name is Foo):
Foo
Foo#2
Foo#2#tmp
Foo#tmp
What are these, and how do I clean them up? cleanWs does not remove any except the first of them after the build.
EDIT: This is not a duplicate of this question because:
That question does not answer my question: what are these files.
The answers to that question suggest using deleteDir, which is not recommended when using Docker containers.
There is an open Jenkins issue about deleteDir() not deleting the #tmp/#script/#... directories.
A workaround to delete those:
post {
    always {
        cleanWs()
        dir("${env.WORKSPACE}#tmp") {
            deleteDir()
        }
        dir("${env.WORKSPACE}#script") {
            deleteDir()
        }
        dir("${env.WORKSPACE}#script#tmp") {
            deleteDir()
        }
    }
}
There is also a comment on the issue describing what #tmp is:
"It [the #tmp folder] contains the content of any library that was loaded at run time. Without a copy, Replay can't work reliably."
The Foo#2 and Foo#2#tmp folders were created because the agent was defined twice: once at the top level inside the pipeline block, and once inside the stage called 'Build'.
The working folder of the 'Build' stage is the Foo#2 folder.
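So if a single workspace is enough, dropping the redundant per-stage agent avoids the extra Foo#2 and Foo#2#tmp folders in the first place. A sketch of the relevant part:

pipeline {
    agent any    // allocates the Foo workspace once
    stages {
        stage('Build') {
            // no per-stage 'agent any' here, so the stage reuses the top-level workspace
            steps {
                sh 'docker-compose up -d'
                sh 'docker exec symfony composer install'
            }
        }
        // ...
    }
}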

How to configure a Jenkinsfile to build docker image and push it to a private registry

I have two questions, but maybe I don't know how to add tags for this question, so I added the tags for both. The first question is about using the Jenkins Docker plugin to bake and push a docker image using this. Below is my Jenkinsfile script. I have finished building the jar file in the target directory, and now I want to run this docker plugin to bake an image with that artifact. As you know, a Dockerfile is needed, so I put one in the root directory where git cloned the source. How do I configure this? If I run the below, Jenkins tells me that there are no steps.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                git branch: 'master', credentialsId: 'e.joe-gitlab', url: 'http://70.121.224.108/gitlab/cicd/spring-petclinic.git'
                sh 'mvn clean package'
            }
        }
        stage('verify') {
            steps {
                sh 'ls -alF target'
            }
        }
        stage('build-docker-image') {
            steps {
                docker.withRegistry('https://sds.redii.net/', 'redii-e.joe') {
                    def app = docker.build("sds.redii.net/e-joe/spring-pet-clinic-demo:v1", '.')
                    app.push()
                }
            }
        }
    }
}
UPDATE
This is another snippet from the Jenkins Pipeline Syntax snippet generator, but it doesn't work either.
pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                git branch: 'master', credentialsId: 'e.joe-gitlab', url: 'http://70.121.224.108/gitlab/cicd/spring-petclinic.git'
                sh 'mvn clean package'
            }
        }
        stage('verify') {
            steps {
                sh 'ls -alF target'
            }
        }
        stage('docker') {
            withDockerRegistry([credentialsId: 'redii-e.joe', url: 'https://sds.redii.net']) {
                def app = docker.build("sds.redii.net/e-joe/spring-pet-clinic-demo:v1", '.')
                app.push()
            }
        }
    }
}
The Dockerfile is like the below. If I try baking the image on my local machine, I get the following error:
container_linux.go:247: starting container process caused "chdir to cwd (\"/usr/myapp\") set in config.json failed: not a directory"
oci runtime error: container_linux.go:247: starting container process caused "chdir to cwd (\"/usr/myapp\") set in config.json failed: not a directory"
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Dockerfile:
FROM openjdk:7
COPY ./target/spring-petclinic-1.5.1.jar /usr/myapp
WORKDIR /usr/myapp
RUN java spring-petclinic-1.5.1.jar
You are writing your .jar to /usr/myapp, which means that /usr/myapp will be the jar file and not a directory, resulting in that error. Change your Docker COPY line to COPY ./target/spring-petclinic-1.5.1.jar /usr/myapp/ (with the trailing slash) and your Dockerfile should work.
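The resulting Dockerfile would look like the sketch below; note that you most likely also want CMD java -jar ... (run the app when the container starts) rather than RUN java ... (run it at build time), but that is an assumption about your intent:

FROM openjdk:7
COPY ./target/spring-petclinic-1.5.1.jar /usr/myapp/
WORKDIR /usr/myapp
CMD ["java", "-jar", "spring-petclinic-1.5.1.jar"]

As for the "there is no steps" error: scripted snippets such as docker.withRegistry can only appear inside a script block within steps in a declarative pipeline. A sketch of the docker stage with that fix:

stage('build-docker-image') {
    steps {
        script {
            docker.withRegistry('https://sds.redii.net/', 'redii-e.joe') {
                def app = docker.build('sds.redii.net/e-joe/spring-pet-clinic-demo:v1', '.')
                app.push()
            }
        }
    }
}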

How to use an environment variable in the agent section of a Jenkins Declarative Pipeline?

I'm building a Docker image for an application based on Node.js, where some of the dependencies require an NPM token for a private NPM registry, but when building the image the variable containing the token is null, e.g.
docker build -t 3273e0bfe8dd329a96070382c1c554454ca91f96 --build-args NPM_TOKEN=null -f Dockerfile
A simplified pipeline is:
pipeline {
    environment {
        NPM_TOKEN = credentials('npm-token')
    }
    agent {
        dockerfile {
            additionalBuildArgs "--build-args NPM_TOKEN=${env.NPM_TOKEN}"
        }
    }
    stages {
        stage('Lint') {
            steps {
                sh 'npm run lint'
            }
        }
    }
}
Is there a way to use the env variable in that section, or is it not currently supported?
BTW, I've followed the suggestions in Docker and private modules on how to use an NPM token to build a docker image.
This is definitely a bug with the declarative pipeline. You can track the issue related to this here: https://issues.jenkins-ci.org/browse/JENKINS-42369
If you move away from the declarative pipeline and use scripted pipelines instead, this won't occur, although your Jenkinsfile will be "wordier".
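A scripted equivalent could look like this (a sketch; 'npm-token' is the credentials ID from the question, the image name is a placeholder, and be aware the token may end up in the image history):

node {
    checkout scm
    withCredentials([string(credentialsId: 'npm-token', variable: 'NPM_TOKEN')]) {
        // pass the token as a build arg to the Dockerfile in the workspace root
        def image = docker.build('my-app', "--build-arg NPM_TOKEN=${env.NPM_TOKEN} .")
        image.inside {
            sh 'npm run lint'
        }
    }
}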
I found a solution for this: use the credentials manager to add NPM_TOKEN. Then you can do
pipeline {
    agent {
        docker {
            image 'node:latest'
            args '-e NPM_TOKEN=$NPM_TOKEN'
        }
    }
    stages {
        stage('npm install') {
            steps {
                sh 'npm install'
            }
        }
        stage('static code analysis') {
            steps {
                sh 'npx eslint .'
            }
        }
    }
}
I came up with a workaround for this, and it still uses the declarative pipeline.
I'm using this technique to download private GitHub repos with pip.
// Workarounds for https://issues.jenkins-ci.org/browse/JENKINS-42369
// Warning: The secret will show up in your build log, and possibly be in your docker image history as well.
// Don't use this if you have a super-confidential codebase
def get_credential(name) {
    def v;
    withCredentials([[$class: 'StringBinding', credentialsId: name, variable: 'foo']]) {
        v = env.foo;
    }
    return v
}

def get_additional_build_args() {
    return "--build-arg GITHUB_ACCESS_TOKEN=" + get_credential("mysecretid")
}

pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.test'
            additionalBuildArgs get_additional_build_args()
        }
    }
    // ... stages as usual ...
}
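For this to work, Dockerfile.test must declare the matching build arg; for the pip use case that could look like this sketch (the repo URL is a placeholder):

ARG GITHUB_ACCESS_TOKEN
# the token becomes part of the image history; see the warning in the comments above
RUN pip install git+https://${GITHUB_ACCESS_TOKEN}@github.com/example-org/private-repo.git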
