Jenkins concurrent builds interfering with each other

I have a declarative pipeline, shown as follows:
pipeline {
    agent any
    stages {
        stage("Pull Source Code") {
            steps {
                checkout(...)
            }
        }
        stage("Build and Push image") {
            steps {
                script {
                    docker.withRegistry(...) {
                        def image = docker.build(...)
                        image.push()
                    }
                }
            }
        }
    }
}
The pipeline runs on the Jenkins master (I haven't set up any Jenkins slaves).
When I run this job concurrently, the Dockerfile being built sometimes does not match the GitLab project that was checked out.
I noticed the pipeline currently has two Jenkins workspaces:
/var/lib/jenkins/workspace/job_name and /var/lib/jenkins/workspace/job_name@2
and the Git checkout in each of them does not match its Dockerfile.
For now I have had to disable concurrent builds. What should I do?
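A minimal sketch of that workaround, assuming the declarative disableConcurrentBuilds() option is used (the same setting can also be ticked in the job configuration), with the stages elided as above:

pipeline {
    agent any
    options {
        // prevents two runs of this job from executing at the same time,
        // so the job_name and job_name@2 workspaces are never in use concurrently
        disableConcurrentBuilds()
    }
    stages {
        // ... stages as in the pipeline above ...
    }
}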

Related

Jenkins / How to deploy with one click

I am working on a project with git and jenkins (pipeline).
I want to build the project at every commit but only deploy it when the chief wants.
So I would like to have two pipelines: one that runs at every commit and only builds/tests, and one that I can run by clicking a button labelled "click me to deploy" that does the deployment.
Do I have to create two Jenkins jobs, or is there a plugin or a way to do this with one job?
I have searched but found nothing about this.
You can achieve this with one job using the Input Step (Pipeline). As part of your pipeline, after the build and test stages, add an input step (wait for interactive input) and then add the deployment-related stages.
So a Jenkins build will trigger for each check-in, but it will only complete the build and test stages; after that it will wait for the chief's approval before proceeding with the deployment.
reference: https://jenkins.io/doc/pipeline/steps/pipeline-input-step
This is an example of how to build a pipeline that builds, waits for input, and deploys when the input is confirmed with Yes. If the input timeout is exceeded, the job will exit. If you do not need the timeout, it can be omitted and the pipeline will wait indefinitely without consuming an executor (note the agent none at the pipeline level and the agent declared in each stage).
pipeline {
    agent none
    stages {
        stage('Build') {
            agent { label 'master' }
            steps {
                sh 'build something'
            }
        }
        stage('Production deploy confirmation') {
            options {
                timeout(time: 60, unit: 'SECONDS')
            }
            input {
                message "Deploy to production?"
                ok "Yes"
            }
            steps {
                echo 'Confirmed production deploy'
            }
        }
        stage('Deploy Production') {
            agent { label 'master' }
            steps {
                sh 'deploy something'
            }
        }
    }
}
Try a parameterized job with a Boolean parameter and two separate stages for Build and Deploy:
pipeline {
    agent any
    parameters {
        booleanParam(name: 'deploy_param', defaultValue: false, description: 'Check if want to deploy')
    }
    stages {
        stage("Build") {
            steps {
                // build steps
            }
        }
        stage("Deploy") {
            when {
                environment name: 'deploy_param', value: 'true'
            }
            steps {
                // deploy steps
            }
        }
    }
}
This way you can have a CI build with the "Deploy" stage turned off, since deploy_param is false by default, and also a manual build ("when the chief wants") with the "Deploy" stage turned on by manually setting deploy_param to true.
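As a small variation on the same idea, the when condition can also check the parameter directly via params instead of going through an environment variable:

stage("Deploy") {
    when {
        // params.deploy_param is a real Boolean, so no string comparison is needed
        expression { params.deploy_param }
    }
    steps {
        // deploy steps
    }
}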

Jenkins pipeline script to copy artifacts of current build to server location

I want to create a Jenkins job which does the following:
Git > Mvn build > copy the jar to some location on the server.
Can this be done using a single job or two jobs?
Which is the preferred way of doing this: is a pipeline preferred over creating a Maven job?
I have created this pipeline script, but it does not copy the current build's jar to the server location; it copies the previous build's artifact jar.
node {
    def mvnHome
    stage('Preparation') { // for display purposes
        // Get some code from a GitHub repository
        git 'git@github.pie.ABC.com:abcdef/BoltRepo.git'
        mvnHome = tool 'M2'
    }
    stage('Build') {
        // Run the maven build
        if (isUnix()) {
            sh "'${mvnHome}/bin/mvn' -Dmaven.test.failure.ignore clean package"
        } else {
            bat(/"${mvnHome}\bin\mvn" -Dmaven.test.failure.ignore clean package/)
        }
    }
    stage('Results') {
        archiveArtifacts 'target/*/BoltRepo*.jar'
    }
    stage('Deploy Artifact') {
        copyArtifacts(
            projectName: currentBuild.projectName,
            filter: 'target/*/BoltRepo*.jar',
            fingerprintArtifacts: true,
            target: '/ngs/app/boltd/bolt/bolt_components/bolt_provision/test',
            flatten: true)
    }
}
What is the best way of achieving this?
I haven't used pipelines before, but I have done what you want using "ArtifactDeployer" from the "Post-build Actions" in the job's configuration.
Note: you will need to install the "Artifact Deployer Plug-in".
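As a side note on the pipeline script in the question: copyArtifacts defaults to the last successful build of the named project, which would explain why the previous build's jar is copied. A sketch of pinning the copy to the build that is currently running, assuming the Copy Artifact plugin and its specific() build selector:

stage('Deploy Artifact') {
    copyArtifacts(
        projectName: currentBuild.projectName,
        filter: 'target/*/BoltRepo*.jar',
        fingerprintArtifacts: true,
        // select the artifacts archived by this very build (in the Results stage)
        // instead of the last completed build
        selector: specific("${currentBuild.number}"),
        target: '/ngs/app/boltd/bolt/bolt_components/bolt_provision/test',
        flatten: true)
}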

Jenkins pipeline, unstash from a sub-job

I have a separate build pipeline that uses a Jenkinsfile to build the code.
I trigger it from a deploy pipeline and want to get the build results.
The reason for this is that devs can define the build steps, but deployment is out of their control.
Here's sample code in Jenkins Job Builder:
- job:
    name: Build and Deploy
    project-type: pipeline
    dsl: |
      node {
          stage('Build') {
              // that job does stash inside a Jenkinsfile
              build job: "Build"
              sh 'cp -rv "../Build/dist/" ./'  // this is a workaround
              stash includes: 'dist/*.zip', name: 'archive'
          }
          stage('Deploy') {
              unstash 'archive'
              sh "..."
          }
      }
So how can I unstash code that was stashed in a sub-job?
P.S.: there's also a workaround with artifacts:
In a sub-job:
archiveArtifacts artifacts: '*.zip', fingerprint: true
main DSL:
dsl: |
  node {
      def build_job_number = 0
      def JENKINS = "http://127.0.0.1:8080"
      stage('Build') {
          def build_job = build job: "Build"
          build_job_number = build_job.getNumber()
      }
      stage('Deploy') {
          sh "wget -c --http-user=${USER} --http-password=${TOKEN} --auth-no-challenge ${JENKINS}/job/Build/${build_job_number}/artifact/name.zip"
          sh "..."
      }
  }
The issue here is that an API token is required.
If you go with archiveArtifacts, you can use copyArtifacts to complement it.
As far as I know, stash/unstash only work within the same job, so your other option would be to tick the Preserve stashes from completed builds option in the pipeline's configuration so you can reuse the stashes.
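A minimal sketch of that archiveArtifacts + copyArtifacts combination, assuming the Copy Artifact plugin is installed and the sub-job archives *.zip as shown in the workaround above:

node {
    def build_job_number = 0
    stage('Build') {
        def build_job = build job: "Build"
        build_job_number = build_job.getNumber()
    }
    stage('Deploy') {
        // copy the zip archived by that specific "Build" run into this workspace,
        // without needing wget or an API token
        copyArtifacts projectName: 'Build',
                      selector: specific("${build_job_number}"),
                      filter: '*.zip',
                      fingerprintArtifacts: true
        sh "..."
    }
}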

Failing Jenkins build when Checkmarx job fails

I have configured a Checkmarx job in Jenkins and I want to integrate that job with the actual build job of a repository.
In my Jenkinsfile I've configured this as a stage and the job gets executed.
The question is: how do I listen for failures of the Checkmarx job and change the status of my build job accordingly? Here's a snippet from my Jenkinsfile:
agent any
stages {
    stage('Build') {
        steps {
            echo 'Building'
            ...
            ........
            ...........
        }
    }
    stage('Checkmarx') {
        when {
            branch 'master'
        }
        steps {
            echo 'Kicking off checkmarx job..'
            build job: 'checkmarx', wait: false
        }
    }
}
If you just make it wait for the downstream job, it will fail along with it:
steps {
    echo 'Kicking off checkmarx job..'
    build job: 'checkmarx', wait: true
}
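If you prefer to inspect the downstream result yourself (for example to log it first, or to mark the build unstable instead of failed), a sketch using the build step's propagate flag; the cxBuild variable name here is just illustrative:

steps {
    echo 'Kicking off checkmarx job..'
    script {
        // propagate: false lets us read the result instead of failing immediately
        def cxBuild = build job: 'checkmarx', wait: true, propagate: false
        echo "Checkmarx finished with result: ${cxBuild.result}"
        if (cxBuild.result != 'SUCCESS') {
            error "Checkmarx job failed with result: ${cxBuild.result}"
        }
    }
}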

How to build docker images using a Declarative Jenkinsfile

I'm new to using Jenkins....
I'm trying to automate the production of an image (to be stashed in a repo) using a declarative Jenkinsfile. I find the documentation to be confusing (at best). Simply put, how can I convert the following scripted example (from the docs)
node {
    checkout scm
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.push()
}
to a declarative Jenkinsfile....
You can use scripted pipeline blocks in a declarative pipeline as a workaround:
pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                echo 'Starting to build docker image'
                script {
                    def customImage = docker.build("my-image:${env.BUILD_ID}")
                    customImage.push()
                }
            }
        }
    }
}
I'm using the following approach:
steps {
    withDockerRegistry([ credentialsId: "<CREDENTIALS_ID>", url: "<PRIVATE_REGISTRY_URL>" ]) {
        // the following commands are executed while logged in to the Docker registry
        sh 'docker push <image>'
    }
}
Where:
CREDENTIALS_ID stands for the ID in Jenkins under which you store the credentials for your Docker registry.
PRIVATE_REGISTRY_URL stands for the URL of your private Docker registry. If you are using Docker Hub, it should be empty.
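For completeness, a sketch of that approach with the image build included, assuming the same placeholder credentials ID and registry URL and the Docker Pipeline plugin's docker.build step:

steps {
    script {
        // build the image first, then push it while logged in to the registry
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        withDockerRegistry([ credentialsId: "<CREDENTIALS_ID>", url: "<PRIVATE_REGISTRY_URL>" ]) {
            customImage.push()
        }
    }
}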
I cannot recommend the declarative syntax for building a Docker image because it seems that every important step requires falling back to the old scripted syntax. But if you must, a hybrid approach seems to work.
First, a detail about the scm step: when I defined the Jenkins "Pipeline script from SCM" project that fetches my Jenkinsfile with a declarative pipeline from git, Jenkins cloned the repo as the first step in the pipeline even though I did not define an scm step.
For the build and push steps, I can only find solutions that are a hybrid of old-style scripted pipeline steps inside the new-style declarative syntax. For example, see gustavoapolinario's work on Medium:
https://medium.com/@gustavo.guss/jenkins-building-docker-image-and-sending-to-registry-64b84ea45ee9
which has this hybrid pipeline definition:
pipeline {
    environment {
        registry = "gustavoapolinario/docker-test"
        registryCredential = 'dockerhub'
        dockerImage = ''
    }
    agent any
    stages {
        stage('Cloning Git') {
            steps {
                git 'https://github.com/gustavoapolinario/microservices-node-example-todo-frontend.git'
            }
        }
        stage('Building image') {
            steps {
                script {
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Deploy Image') {
            steps {
                script {
                    docker.withRegistry('', registryCredential) {
                        dockerImage.push()
                    }
                }
            }
        }
        stage('Remove Unused docker image') {
            steps {
                sh "docker rmi $registry:$BUILD_NUMBER"
            }
        }
    }
}
Because the first step here is a clone, I think he built this example as a standalone pipeline project in Jenkins (not a Pipeline script from SCM project).
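If the same Jenkinsfile is used in a "Pipeline script from SCM" project instead, the explicit git step can presumably be replaced with checkout scm, which checks out the same revision the Jenkinsfile was loaded from:

stage('Cloning Git') {
    steps {
        // check out the commit that this Jenkinsfile itself came from
        checkout scm
    }
}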
