We have several Jenkins agents running as physical machines.
So far I have run my Jenkins pipeline on the agents themselves; now I am trying to move the build and test execution into Docker containers using the Jenkins Docker plugin.
Below you can find a simplified version of our Jenkinsfile, which uses Gradle to build, test, and package a Java Spring Boot application.
node {
    stage('Preparation') {
        cleanWs()
        checkout scm
        notifyBitbucket()
    }
}
pipeline {
    agent {
        docker {
            image "our-custom-registry.com/jenkins-build:latest"
            registryUrl 'https://our-custom-registry.com'
            registryCredentialsId '...'
            alwaysPull true
            args "-u jenkins -v /var/run/docker.sock:/var/run/docker.sock" // the pipeline itself requires Docker
        }
    }
    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble classes testClasses'
            }
        }
        stage('Test') {
            parallel {
                stage('Unit Tests') {
                    when { expression { return build_params.ENABLE_UNITTEST } }
                    steps {
                        sh './gradlew test'
                        junit UNIT_TEST_RESULT_DIR
                    }
                }
                stage('Integration Tests') {
                    when { expression { return build_params.ENABLE_INTEGRATION } }
                    steps {
                        sh './gradlew integration'
                        junit INTEGRATION_TEST_RESULT_DIR
                    }
                }
            }
        }
        stage('Finalize') {
            stages {
                stage('Docker Push') {
                    when { expression { return build_params.ENABLE_DOCKER_PUSH } }
                    steps {
                        sh './gradlew pushDockerImage'
                    }
                }
            }
        }
    }
    post {
        cleanup {
            cleanWs()
        }
        always {
            script {
                node {
                    currentBuild.result = currentBuild.result ?: 'SUCCESS'
                    notifyBitbucket()
                }
            }
        }
    }
}
Below is the Dockerfile I use for the build image. As you can see, I manually create a jenkins user and add it to the docker groups (unfortunately, the docker GID is 998 on some Jenkins agents and 999 on others).
FROM openjdk:8-jdk-stretch
USER root
# prerequisites:
# - a user and a group Jenkins with UID/GID=1001 exist
# - the user home is /var/jenkins
# - the user is in the docker group
# - on some agents docker has the gid 998, on some it is 999
RUN apt-get update \
&& apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common rsync tree \
&& curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
&& add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable" \
&& apt-get update \
&& apt-get -y install docker-ce docker-ce-cli containerd.io \
&& groupadd -g 1001 jenkins \
&& groupadd -f -g 998 docker1 \
&& groupadd -f -g 999 docker2 \
&& useradd -d "/var/jenkins" -u 1001 -g 1001 -m -s /bin/bash jenkins \
&& usermod -a -G 998 jenkins \
&& usermod -a -G 999 jenkins
USER jenkins
Jenkins then executes the following command:
docker run -t -d -u 1001:1001 -u jenkins -v /var/run/docker.sock:/var/run/docker.sock -w /var/jenkins/workspace/JOB_NAME -v /var/jenkins/workspace/JOB_NAME:/var/jenkins/workspace/JOB_NAME:rw,z -v /var/jenkins/workspace/JOB_NAME@tmp:/var/jenkins/workspace/JOB_NAME@tmp:rw,z -e ******** ... our-custom-registry.com/base/jenkins-build:latest cat
This pipeline works just fine... sometimes!
Most of the time, however, some files get mysteriously lost.
For example, my build.gradle is assembled from multiple other files that it includes.
At some point during the build, one of these files seems to be missing.
+ ./gradlew pushDockerImage
FAILURE: Build failed with an exception.
* Where:
Build file '/var/****/workspace/JOB_NAME/build.gradle' line: 35
* What went wrong:
A problem occurred evaluating root project 'foo'.
> Could not read script '/var/****/workspace/JOB_NAME/build/gradle/scripts/springboot-plugin.gradle' as it does not exist.
It is always a different file that goes missing.
I started running tree right before the ./gradlew call, just to make sure the file is not actually being removed.
Does anybody have an idea what might be going on here?
Update
Forget everything I said about docker, this is a pure Gradle and Jenkins problem.
When I replace the docker agent with a plain Jenkins agent the same problems occur.
The problem seems to be that you cannot run multiple Gradle tasks in parallel in the same directory.
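A minimal sketch of one way around this, assuming both suites can be driven from a single Gradle invocation (task and variable names taken from the question): let one gradlew process run both task sets, so Gradle schedules the work itself instead of two independent gradlew processes fighting over the same build directory:

stage('Test') {
    steps {
        // a single Gradle process owns the workspace; no concurrent gradlew runs
        sh './gradlew test integration'
        junit UNIT_TEST_RESULT_DIR
        junit INTEGRATION_TEST_RESULT_DIR
    }
}

If the stages must stay separate, wrapping each sh './gradlew ...' call in a lock() step from the Lockable Resources plugin would serialize them as well.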
Related
Below is the error from Jenkins console output:
+ sonar-scanner -Dsonar.login=**** -Dsonar.projectBaseDir=.
/var/lib/jenkins/workspace/Mtr-Pipeline_develop@2@tmp/durable-0080bcff/script.sh: 1: /var/lib/jenkins/workspace/Mtr-Pipeline_develop@2@tmp/durable-0080bcff/script.sh: sonar-scanner: Permission denied
I have set up the token and pasted the key into the t-m-sonar-login variable in the Jenkins global credentials, but I don't think the keys should be causing the permission denied error. Can someone provide some pointers to look into the issue?
stage('SonarQube scan') {
    agent {
        dockerfile { filename 'sonar/Dockerfile' }
    }
    steps {
        withCredentials([string(credentialsId: 't-m-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                unstash 'coverage'
                unstash 'testResults'
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
sonar/Dockerfile:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get -y install openjdk-11-jre-headless && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
Please check whether Java is available on the system where the SonarQube scanner is running.
Another thing you can try: go to the SonarQube scanner directory, then into bin, and run chmod +x sonar-scanner.
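For what it's worth, one plausible cause: the Jenkins Docker agent starts the container with the Jenkins user's UID rather than as root, so a scanner installed under /root may be neither readable nor executable for that user. A minimal sketch of a workaround, reusing the stage from the question and assuming that running the container as root is acceptable in your setup:

stage('SonarQube scan') {
    agent {
        dockerfile {
            filename 'sonar/Dockerfile'
            // run as root so /root/sonar-scanner stays accessible;
            // alternatively, install the scanner in a world-readable path such as /opt
            args '-u root'
        }
    }
    steps {
        withCredentials([string(credentialsId: 't-m-sonar-login', variable: 'SONAR_LOGIN')]) {
            sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=.'
        }
    }
}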
I'm struggling to modify the PATH in Jenkins; I have tried various methods to no avail.
I am using Jenkins 2.319.3.
Here is an example where I've tried to use a Dockerfile and PowerShell. I had comparable results with the Bourne shell: even when the path was exported, it would not persist to subsequent shell calls, since each step starts a fresh shell process.
When running the Dockerfile locally I can confirm that the path is modified correctly, so I feel it must be some Jenkins-specific issue?
Dockerfile.unix
ARG VARIANT="bullseye"
ARG PYTHON="3.8"
FROM python:${PYTHON}-${VARIANT}
# Install PowerShell
RUN \
export DEBIAN_FRONTEND=noninteractive \
&& apt-get update -y \
&& apt-get install -y software-properties-common lsb-release --no-install-recommends \
&& wget "https://packages.microsoft.com/config/debian/$(lsb_release -rs)/packages-microsoft-prod.deb" \
&& dpkg -i packages-microsoft-prod.deb \
&& apt-get update -y \
&& apt-get install -y powershell --no-install-recommends \
&& rm *.deb
RUN \
python3 -m pip install pipx \
&& pipx ensurepath \
&& pipx install tox \
&& pipx install hatch
CMD [ "pwsh" ]
Jenkinsfile
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.unix'
            reuseNode true
            args '-u root'
            // Agent label
            label 'ubuntu-docker'
        }
    }
    options {
        timestamps()
        timeout(time: 5, unit: 'MINUTES') // timeout on whole pipeline job
    }
    stages {
        stage('Setup') {
            environment {
                PATH = "/root/.local/bin:$PATH"
            }
            steps {
                pwsh '$env:PATH' // ------------------------------ not in path
                pwsh 'python -m pipx ensurepath'
                pwsh '$env:PATH' // ------------------------------ not in path
                pwsh 'python -m pipx list'
                pwsh '''
                    $env:PATH="/root/.local/bin:"+$env:PATH
                    python --version
                    python -m pip freeze
                    $env:PATH # ------------------------------------- works
                    tox --version
                    hatch --version
                '''
            }
        }
    }
}
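For what it's worth, Jenkins has a documented PATH+SUFFIX convention for withEnv that prepends a directory to PATH. It sidesteps the fact that $PATH in an environment block is resolved on the agent rather than inside the container. A minimal sketch (the PIPX suffix is an arbitrary label):

steps {
    // PATH+PIPX prepends /root/.local/bin to PATH for everything in this block
    withEnv(['PATH+PIPX=/root/.local/bin']) {
        pwsh 'tox --version'
        pwsh 'hatch --version'
    }
}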
I'm trying to use the sshagent plugin to deploy to a remote server.
When using the below syntax:
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // some block
                    sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip>"
                    sh "whoami"
                }
            }
        }
    }
}
I'm getting this output:
[Pipeline] sh
+ whoami
jenkins
while I'm expecting the script to run on the remote server using the provided credentials!
So, I had to run it this way
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // some block
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <host_ip> 'whoami && \
                        sudo apt update && sudo apt install -y docker.io && \
                        sudo usermod -aG docker ubuntu && \
                        source .bashrc && \
                        docker run -d nginx'"
                }
            }
        }
    }
}
Is there any "clean" way to run the script on the remote server as the ubuntu user instead of the jenkins user?
Edit:
I understand that I need to run it under the ssh command, not as a separate sh script; otherwise it will run as jenkins. I'm able to do it the scripted way, as below.
That's why I'm asking whether there's a better way to write it in the declarative style.
node {
    stage('Deploy') {
        def dockerRun = "whoami && \
            sudo apt update && sudo apt install -y docker.io && \
            sudo usermod -aG docker ubuntu && \
            source .bashrc && \
            docker run -d nginx"
        sshagent(['nginx-ec2']) {
            sh "ssh -o StrictHostKeyChecking=no ubuntu@<host_ip> '${dockerRun}'"
        }
    }
}
Thanks,
As noted, you should select a credential that references the right remote username, as seen in the SSH Agent Jenkins plugin documentation:
node {
    sshagent (credentials: ['deploy-dev']) {
        sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
    }
}
Plus, I would execute only one script containing the whole sequence of commands you want to run remotely.
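A minimal sketch of that idea, assuming the command sequence lives in a deploy.sh checked out with the job (the file name is hypothetical): piping the script to a remote shell runs everything in a single SSH session as the ubuntu user:

node {
    sshagent(credentials: ['nginx-ec2']) {
        // the local deploy.sh is fed to a shell running on the remote host
        sh "ssh -o StrictHostKeyChecking=no -l ubuntu <host_ip> 'bash -s' < deploy.sh"
    }
}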
Well, so far this is the best way to do it, in spite of the repetition!
pipeline {
    agent any
    stages {
        stage('Deploy') {
            steps {
                sshagent(['nginx-ec2']) {
                    // some block
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'whoami'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'sudo apt update && sudo apt install -y docker.io'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'sudo usermod -aG docker ubuntu'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'source .bashrc'"
                    sh "ssh -o StrictHostKeyChecking=no -l ubuntu <remote_ip> 'docker run -d nginx'"
                }
            }
        }
    }
}
I'm getting a "Bad substitution" error when trying to pass a pipeline parameter to the Dockerfile.
Jenkins parameter: version
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent { dockerfile true }
            steps {
                sh 'node -v'
            }
        }
    }
}
Dockerfile:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
RUN echo ${params.version}
#ARG VERSION
#RUN echo $VERSION
Jenkins error message: (screenshot of the console output showing the "Bad substitution" error)
I'm sure the problem is just that I'm new to pipelines/Docker. :)
I would be grateful for any help.
The issue was resolved by adding an ARG variable to the Dockerfile.
This is what the Dockerfile looks like:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
ARG version=fisticuff
RUN echo $version
and this is what the Jenkinsfile looks like:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent {
                dockerfile {
                    additionalBuildArgs '--build-arg version="$version"'
                }
            }
            steps {
                sh 'node -v'
            }
        }
    }
}
Console output in Jenkins: (screenshot)
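One note on the moving parts here, for what it's worth: additionalBuildArgs passes the value into the Docker build, and ARG version=fisticuff merely supplies a default when nothing is passed. If version is declared under parameters { } in the pipeline, it can also be referenced explicitly through params rather than through an environment variable, for example:

agent {
    dockerfile {
        // assumes 'version' is declared in the pipeline's parameters { } block
        additionalBuildArgs "--build-arg version=${params.version}"
    }
}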
Much obliged to all of you for giving me the hints. It helped me a lot!
Try running the Dockerfile independently first.
Since you are new to Docker, take it one step at a time.
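For example, a quick local check along those lines (the image tag and build-arg value are arbitrary):

# the RUN echo $version line prints during the build itself
# (with the classic builder; BuildKit may need --progress=plain)
docker build --build-arg version=123 -t version-test .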
I've created a Docker image to be able to run Node >= 7.9.0 and MongoDB for testing in Jenkins. Some might argue that testing against MongoDB is not the correct approach, but the app uses it extensively, and I have some complex updates and deletes, so I need it there.
The Dockerfile is under dockerfiles/test/Dockerfile in my GitHub repo. When using the pipeline syntax, the Docker image is built successfully, but I can't do sh 'npm install' or sh 'npm -v' in the steps of the pipeline. The Docker image is tested, and if I build it locally and run it, I can do the npm install there. sh 'node -v' runs successfully in the pipeline, and so does sh 'ls'.
Here is the pipeline syntax.
pipeline {
    agent { dockerfile { dir 'dockerfiles/test' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}
I get this error: ERROR: script returned exit code -1. I can't see anything wrong here. I've also tested with other node images with the same result. If I run it with a node slave I can do the installation but I do not want to have many different slaves with a lot of setups for integration tests.
And here is the dockerfile
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y \
curl && \
curl -sL https://deb.nodesource.com/setup_7.x | bash - && \
apt-get install -y nodejs && \
apt-get install -y mongodb-org
RUN mkdir -p /data/db
# ENV persists across layers and into the running container, unlike RUN export
ENV LC_ALL C
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins
EXPOSE 27017
CMD ["/usr/bin/mongod"]
Found a workaround to a similar problem.
Problem
Jenkins running a pipeline job.
The job runs commands inside a Debian slim container.
All commands fail instantly with no error output, only an ERROR: script returned exit code -1.
Running the container outside Jenkins and executing the same commands with the same user works as it should.
Extract from the Jenkinsfile:
androidImage = docker.build("android")
androidImage.inside('-u root') {
    stage('Install') {
        sh 'npm install' // is failing with generic error and no output
    }
}
Solution
Found the answer on the Jenkins bug tracker (https://issues.jenkins-ci.org/browse/JENKINS-35370) and on Jenkins Docker Pipeline Exit Code -1.
My problem was solved by installing the procps package in my Debian Dockerfile (Jenkins' durable task machinery apparently relies on ps, which the slim image does not ship):
apt-get install -y procps
I replicated your setup as faithfully as I could. I used your Dockerfile and Jenkinsfile, and here's my package.json:
{
    "name": "minimal",
    "description": "Minimal package.json",
    "version": "0.0.1",
    "devDependencies": {
        "mocha": "*"
    }
}
It failed like this for me during npm install:
npm ERR! Error: EACCES: permission denied, mkdir '/home/jenkins'
I updated one line in your Dockerfile to add --create-home:
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins --create-home
And the build passed. Kudos to @mkobit for keying in on the issue and linking to the Jenkins issue that will make this cleaner in the future.