Jenkins 2 Build Processors for one Job

I have a small problem with a multibranch Pipeline job: it always occupies two build executors (processors) at once. I don't want to configure more executors in Jenkins; instead, I'd like to understand why Jenkins uses two executors for this single job at the same time. Can anyone help me figure out why?
pipeline {
    options { disableConcurrentBuilds() }
    agent { label 'myServer' }
    stages {
        stage('helloworld') {
            agent {
                docker {
                    image 'ubuntu:16.04'
                    label 'myServer'
                }
            }
            steps {
                dir('build') {
                    sh 'npm i'
                    sh 'npm run gulp clean:all'
                    sh 'npm run gulp ci:all'
                }
            }
        }
    }
}

There is a reuseNode option for docker agents, which is false by default. I assume this may be why Jenkins needs two executors in your case (one for the top-level agent and one for the docker agent), although I'm not sure.
The option is documented in the Jenkins Declarative Pipeline Syntax documentation ( https://jenkins.io/doc/book/pipeline/syntax/#common-options ) under Sections > Agent > Common Options > reuseNode.
Could you try with reuseNode enabled and report whether it solves the problem?
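For reference, a minimal sketch of the stage-level agent from the question with reuseNode enabled (untested; drop-in replacement for the agent block inside the helloworld stage):
agent {
    docker {
        image 'ubuntu:16.04'
        label 'myServer'
        // reuse the executor and workspace already allocated by the
        // top-level agent instead of requesting a second executor
        reuseNode true
    }
}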

Related

How to use a Jenkinsfile for these build steps?

I'm learning how to use Jenkins and want to configure a Jenkinsfile instead of building through the Jenkins UI. The job currently has a source code management step that pulls from Bitbucket and a build step that builds a Docker container (both shown in screenshots in the original post), and the build is a multi-configuration project.
I've been reading the Jenkinsfile documentation at https://www.jenkins.io/doc/book/pipeline/jenkinsfile/index.html and creating a new build of type Pipeline.
I'm unsure how to translate the steps I've configured via the UI: Source Code Management and Build. How do I convert the Docker and Bitbucket configuration so it can be used with a Jenkinsfile?
The SCM configuration does not change, regardless of whether you use the UI configuration or a pipeline, although in theory you can do the git clone from steps in the pipeline if you really insist on converting the SCM setup into pure pipeline steps.
The pipeline can have multiple stages, and each stage can have a different execution environment. You can use the Docker Pipeline plug-in, or you can use plain sh steps to issue the docker commands on the build agent (a sketch of the plug-in approach follows the sample below).
Here is a small sample from one of my manual build pipelines:
pipeline {
    agent none
    stages {
        stage('Init') {
            agent { label 'docker-x86' }
            steps {
                checkout scm
                sh 'docker stop demo-001c || true'
                sh 'docker rm demo-001c || true'
            }
        }
        stage('Build Back-end') {
            agent { label 'docker-x86' }
            steps {
                sh 'docker build -t demo-001:latest ./docker'
            }
        }
        stage('Test') {
            agent {
                docker {
                    // a docker agent also requires an image; the original
                    // sample omitted it, so one is assumed here
                    image 'demo-001:latest'
                    label 'docker-x86'
                }
            }
            steps {
                sh 'docker run --name demo-001c demo-001:latest'
                sh 'cd test && make test-back-end'
            }
        }
    }
}
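As mentioned above, the Docker Pipeline plug-in is the alternative to plain sh calls. Here is a hedged sketch of what the build and test steps might look like with it (untested; it reuses the demo-001 image name and docker-x86 label from the sample above):
// scripted pipeline using the Docker Pipeline plug-in instead of plain sh
node('docker-x86') {
    checkout scm
    // docker.build runs "docker build" with the given tag and context
    // directory, and returns a handle to the built image
    def image = docker.build('demo-001:latest', './docker')
    // run the test suite inside a container started from that image,
    // with the workspace mounted automatically
    image.inside {
        sh 'cd test && make test-back-end'
    }
}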
You need to create a Pipeline type of project and specify the SCM configuration in the General tab. In the Pipeline tab, you will have the option to select Pipeline script or Pipeline script from SCM. It's always better to start with Pipeline script while you are building and modifying your workflow. Once it has stabilized, you can add it to the repository.

Jenkins job getting stuck on execution of docker image as the agent

I have installed Jenkins and Docker inside a VM. I am using a Jenkins pipeline project, and my declarative pipeline looks like this.
pipeline {
    agent {
        docker { image 'node:7-alpine' }
    }
    stages {
        stage('Test') {
            steps {
                echo 'Hello Nodejs'
                sh 'node --version'
            }
        }
    }
}
It is a very basic pipeline following this guide: https://jenkins.io/doc/book/pipeline/docker/
When I build the job, it prints Hello Nodejs but gets stuck at the next instruction, the execution of the shell command. After 5 minutes, the job fails with this error:
process apparently never started in /var/lib/jenkins/workspace/MyProject#tmp/durable-c118923c
(running Jenkins temporarily with -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.LAUNCH_DIAGNOSTICS=true might make the problem clearer)
ERROR: script returned exit code -2
I don't understand why it is not executing the sh command.
If I change the agent to agent any, the sh command executes.
I am not sure this will help, but I remember that the node image runs as root by default, while Jenkins launches containers with its own user ID. So it is probably a permissions issue. Try adding the -u 0 argument:
agent {
    docker {
        image 'node:7-alpine'
        args '-u 0'
    }
}
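If that doesn't resolve it, a quick way to see which user the container actually runs as is to print it from a step. A minimal diagnostic sketch (not from the original answer):
pipeline {
    agent {
        docker {
            image 'node:7-alpine'
            args '-u 0'
        }
    }
    stages {
        stage('Debug') {
            steps {
                // print the effective user and group inside the container;
                // with -u 0 this should report uid=0 (root)
                sh 'id'
                sh 'node --version'
            }
        }
    }
}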

Jenkins Docker pipeline stuck on "Waiting for next available executor"

In my project I have a Jenkins pipeline which should execute two stages inside a provided Docker image, and a third stage on the same machine but outside the container. Running the third stage on the same machine is crucial, because the previous stages produce output that is needed later; these files are stored on the machine through mounted volumes.
To be sure these files are accessible in the third stage, I manually select a specific node. Here is my pipeline (modified a little, because it's from work):
pipeline {
    agent {
        docker {
            label 'jenkins-worker-1'
            image 'custom-image:1.0'
            registryUrl 'https://example.com/registry'
            args '-v $HOME/.m2:/root/.m2'
        }
    }
    stages {
        stage('Test') {
            steps {
                sh 'mvn test'
            }
        }
        stage('Package') {
            steps {
                sh 'mvn package'
                sh 'mv target workdir/'
            }
        }
        stage('Upload') {
            agent {
                node {
                    label 'jenkins-worker-1'
                }
            }
            steps {
                sh 'uploader.sh workdir'
            }
        }
    }
}
The node is preconfigured for uploading, so I can't simply upload the built target from the Docker container; it has to be done from the physical machine.
And here is my problem: while the first two stages work perfectly fine, the third stage cannot start, and "Waiting for next available executor" appears in the logs. The node is effectively waiting for itself, and I cannot use another machine. It looks like the Docker agent is holding the node's only executor, so Jenkins considers the node busy and waits forever.
I'm looking for a solution that allows me to run stages both inside and outside the container, on the same machine.
Apparently the nested stages feature would solve this problem, but unfortunately it has only been available since version 1.3 of the Declarative Pipeline plugin, and my instance has 1.2.9.
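For readers on Declarative Pipeline 1.3 or newer, a rough sketch of how nested (sequential) stages combined with reuseNode could look for this case (untested; it reuses the labels and image from the question):
pipeline {
    // allocate the physical node once, for the whole run
    agent { label 'jenkins-worker-1' }
    stages {
        stage('In container') {
            agent {
                docker {
                    image 'custom-image:1.0'
                    registryUrl 'https://example.com/registry'
                    args '-v $HOME/.m2:/root/.m2'
                    // reuse the node allocated above instead of
                    // waiting for a second executor on it
                    reuseNode true
                }
            }
            stages {
                stage('Test') {
                    steps { sh 'mvn test' }
                }
                stage('Package') {
                    steps {
                        sh 'mvn package'
                        sh 'mv target workdir/'
                    }
                }
            }
        }
        stage('Upload') {
            // runs directly on jenkins-worker-1, outside the container
            steps { sh 'uploader.sh workdir' }
        }
    }
}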

Jenkins Pipeline Across Multiple Docker Images

Using a declarative pipeline in Jenkins, how do I run stages across multiple versions of a Docker image? I want to execute the following Jenkinsfile on Python 2.7, 3.5, and 3.6. Below is a pipeline file for building and testing a Python project in a Docker container.
pipeline {
    agent {
        docker {
            image 'python:2.7.14'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'pip install pipenv'
                sh 'pipenv install --dev'
            }
        }
        stage('Test') {
            steps {
                sh 'pipenv run pytest --junitxml=TestResults.xml'
            }
        }
    }
    post {
        always {
            junit 'TestResults.xml'
        }
    }
}
What is the minimal amount of code needed to make sure the same steps succeed across Python 3.5 and 3.6? The hope is that if a test fails, it is evident which version(s) it fails on.
Or is what I'm asking for not possible with declarative pipelines (i.e., are scripted pipelines what would most elegantly solve this problem)?
As a comparison, Travis CI lets you specify runs across different Python versions (shown in a screenshot in the original post).
I had to resort to a scripted pipeline and combine all the stages:
def pythons = ["2.7.14", "3.5.4", "3.6.2"]

def steps = pythons.collectEntries {
    ["python $it": job(it)]
}

parallel steps

def job(version) {
    return {
        docker.image("python:${version}").inside {
            checkout scm
            sh 'pip install pipenv'
            sh 'pipenv install --dev'
            sh 'pipenv run pytest --junitxml=TestResults.xml'
            junit 'TestResults.xml'
        }
    }
}
The resulting pipeline runs the three Python versions as parallel branches (shown in a screenshot in the original post).
Ideally we'd be able to break each job up into stages (Setup, Build, Test), but the UI currently doesn't support this (still not supported at the time of writing).
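For readers on a newer Jenkins, Declarative Pipeline has since gained a matrix directive that expresses this fan-out declaratively, with per-cell stages visible in the UI. A hedged sketch, assuming axis variables can be interpolated into the per-cell agent as in the official matrix examples:
pipeline {
    agent none
    stages {
        stage('All versions') {
            matrix {
                axes {
                    axis {
                        // one matrix cell is generated per value
                        name 'PYTHON_VERSION'
                        values '2.7.14', '3.5.4', '3.6.2'
                    }
                }
                agent {
                    // the axis variable selects the image for each cell
                    docker { image "python:${PYTHON_VERSION}" }
                }
                stages {
                    stage('Build') {
                        steps {
                            sh 'pip install pipenv'
                            sh 'pipenv install --dev'
                        }
                    }
                    stage('Test') {
                        steps {
                            sh 'pipenv run pytest --junitxml=TestResults.xml'
                        }
                    }
                }
                post {
                    always {
                        junit 'TestResults.xml'
                    }
                }
            }
        }
    }
}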

Jenkins workflow job: limit where it can run

I have two Jenkins workflow jobs that start the same downstream job with different parameters, namely the branch they build. The downstream job builds the project on several platforms. The "head" job, that is, the workflow job, may start on different machines. There are also two Linux machines in the setup.
Sometimes it happens that one of the workflow jobs (say, for master) starts on one of the Linux machines, and the other starts on the other. Both of them have to build a target on a Linux machine, and since both machines are busy, both jobs stall.
With regular jobs, one can limit where they are allowed to run, but I couldn't find out how to limit where a workflow job can run. Presumably it should be done in the Groovy script, but how exactly escapes me.
Is there a solution to this?
Here's a Jenkinsfile that does it globally (this tells Jenkins the entire pipeline must run on an agent with these three labels):
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                echo 'building stuff'
            }
        }
    }
}
You can also select a specific agent or a set of capabilities via the node step for any stage or part of a stage:
pipeline {
    agent { label 'docker && git && rbenv' }
    stages {
        stage('commit_stage') {
            steps {
                // this overrides the top-level agent requirements
                node('linux_with_zsh') {
                    echo 'building stuff'
                }
            }
        }
    }
}
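If you prefer to stay fully declarative, a stage-level agent directive is an alternative to the node step. A minimal sketch (linux_with_zsh is a hypothetical label, as above):
pipeline {
    // no global agent; each stage allocates its own node
    agent none
    stages {
        stage('commit_stage') {
            // per-stage agent: only this stage needs the extra label
            agent { label 'linux_with_zsh' }
            steps {
                echo 'building stuff'
            }
        }
    }
}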
