I'm new to Jenkins and Groovy. Using a Jenkins pipeline, I'm trying to run a command inside an existing Docker container, after first setting some environment variables.
The bash script I currently use (executed directly from the command line) looks like this and works:
export LIB_ROOT=/usr/local/LIBS
export TMP_MAC_ADDRESS=b5:17:a3:28:55:ea
sudo docker run --rm -i -v "$LIB_ROOT":/usr/local/LIBS/from-host -v /home/sbuild/Dockerfiles/Sfiles/mnt:/home/sbuild/mount --mac-address="$TMP_MAC_ADDRESS" -t sbuild:current
Afterwards I want to build some of my (mounted) sources inside the Docker container using something like:
python3 batchCompile.sh ../mount/src.zip
Right now I've been trying to write it like this in my Jenkinsfile:
node('linux-slave') {
    withEnv(['PATH=/usr/local/LIBS:/usr/local/MATLAB/from-host -v /home/sbuild/Dockerfiles/Sfiles/mnt:/home/sbuild/mount --mac-address=b5:17:a3:28:55:ea']) {
        docker.image('sbuild').inside {
            sh 'echo $PATH'
            sh 'mvn --version'
        }
    }
    sh 'echo $PATH'
}
Yet this just fails with an opaque message:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 71: Expected a symbol # line 71, column 25.
docker.image('sbuild:current').inside {
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
I'm not able to figure out what is going wrong.
So I just tried to get inside the container and see what I can do from there. I experimented a little with this small script:
script {
    docker.image('sbuild:current').inside {
        sh 'touch asdf'
        sh 'cd /home/sbuild/'
        sh 'pwd'
    }
}
Yet by default I'm just working in the Jenkins workspace folder, and none of these commands are actually executed inside the container. The container also doesn't seem to be running at any point.
How do I have to write my code to start the Docker container I configured and run commands inside it?
There is some documentation out there on creating new Docker containers, but I have difficulties making sense of that error message and figuring out how to debug this properly.
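Based on my reading of the pipeline docs, my current guess is that the docker run flags (volumes, MAC address) belong in inside(), while withEnv only sets environment variables — an untested sketch of what I'm aiming for:

```groovy
// Untested sketch: pass docker run flags to inside(), not withEnv
node('linux-slave') {
    docker.image('sbuild:current').inside(
        '-v /usr/local/LIBS:/usr/local/LIBS/from-host ' +
        '-v /home/sbuild/Dockerfiles/Sfiles/mnt:/home/sbuild/mount ' +
        '--mac-address=b5:17:a3:28:55:ea') {
        sh 'python3 batchCompile.sh ../mount/src.zip'
    }
}
```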
Edit 1: The Dockerfile:
FROM labs:R2018
# Avoid interaction
ENV DEBIAN_FRONTEND noninteractive
# Set user to root
USER root
# =========== Basic Configuration ======================================================
# Update the system
#RUN apt-get -y update \
# && apt-get install -y sudo build-essential git python python-dev \
# python-setuptools make g++ cmake gfortran ipython swig ant python-numpy \
# python-scipy python-matplotlib cython python-lxml python-nose python-jpype \
# libboost-dev jcc git subversion wget zlib1g-dev pkg-config clang
# Install system libs
# RUN apt-get install sudo
# ========== Install pip for managing python packages ==================================
RUN apt-get install -y python-pip python-lxml && pip install cython
# Install simulix dependencies
RUN apt-get install -y git
RUN apt-get install --assume-yes python
RUN apt-get install --assume-yes cmake
RUN apt-get install --assume-yes mingw-w64
# Add User
#RUN adduser --disabled-password --gecos '' docker
#RUN adduser docker sudo
#RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER build
# Install simulix
WORKDIR /home/sbuild
RUN git clone https://github.com/***.git
RUN mkdir mount
WORKDIR /home/sbuild/Sfiles
RUN pip install -r requirements.txt
When I use Docker with a Jenkins Pipeline, I do it with the sh step only:
try {
    stage('Start Docker') {
        sh 'docker-compose up -d'
    }
    stage('Build project') {
        sh 'docker-compose exec -T my_service make:build'
    }
} catch (err) {
    // Maybe do something
} finally {
    sh 'docker-compose stop'
}
You want to surround your stages with a try/catch/finally block so the Docker containers are always stopped in case of failure.
Below is the error from Jenkins console output:
+ sonar-scanner -Dsonar.login=**** -Dsonar.projectBaseDir=.
/var/lib/jenkins/workspace/Mtr-Pipeline_develop#2#tmp/durable-0080bcff/script.sh: 1: /var/lib/jenkins/workspace/Mtr-Pipeline_develop#2#tmp/durable-0080bcff/script.sh: sonar-scanner: Permission denied
I have set up the token and pasted the key into the t-m-sonar-login variable in Jenkins' global credentials, but I don't think the keys are causing the permission denied error. Can someone provide some pointers for looking into the issue?
stage('SonarQube scan') {
    agent {
        dockerfile { filename 'sonar/Dockerfile' }
    }
    steps {
        withCredentials([string(credentialsId: 't-m-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                unstash 'coverage'
                unstash 'testResults'
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
sonar/Dockerfile:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get -y install openjdk-11-jre-headless && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
Please check whether Java is available on the system where SonarQube Scanner is running.
Another thing you can try is:
Go to SonarQube Scanner Directory -> Go to bin -> chmod +x sonar-scanner
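If the scanner is installed from a Dockerfile, as in the question, the same fix can be baked into the image. A sketch, assuming the /root/sonar-scanner path from the Dockerfile above (note that the Jenkins build user usually isn't root, so /root itself may also need to be traversable):

```dockerfile
# Sketch: make the launcher executable and /root traversable for non-root users
RUN chmod +x /root/sonar-scanner/bin/sonar-scanner && \
    chmod o+rx /root
```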
I'm getting a "Bad substitution" error when trying to pass a pipeline parameter to the Dockerfile.
Jenkins parameter: version
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent { dockerfile true }
            steps {
                sh 'node -v'
            }
        }
    }
}
Dockerfile:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
RUN echo ${params.version}
#ARG VERSION
#RUN echo $VERSION
Jenkins error message:
(screenshot omitted)
I'm sure the problem is that I'm new to pipelines/Docker. :)
I would be grateful for any help.
The issue was resolved by adding the ARG variable to the Dockerfile.
This is what the Dockerfile looks like now:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
ARG version=fisticuff
RUN echo $version
And this is what the Jenkinsfile looks like:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent {
                dockerfile {
                    additionalBuildArgs '--build-arg version="$version"'
                }
            }
            steps {
                sh 'node -v'
            }
        }
    }
}
Console output in Jenkins:
(screenshot omitted)
Much obliged to all of you for giving me the hints. It helped me a lot!
Try running the Dockerfile independently first.
Since you are new to Docker, try one step at a time.
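For example, you could first check the build-arg handling outside Jenkins (the image tag and version value here are placeholders):

```
docker build --build-arg version=1.2.3 -t myapp-test .
docker run --rm myapp-test
```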
I have a Docker container based on a Debian image, and inside it I need to run some go get commands as the user jenkins:jenkins, because that is the user Jenkins uses when running a build; but this user by itself doesn't have permission to do that (mkdir and creating files).
I tried installing sudo in the image and running "sudo go get" from Jenkins, but it doesn't work because of the environment variables.
The Dockerfile for the image I'm using is this one:
FROM debian:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y gnupg2
RUN apt-get install sudo
RUN DEBIAN_FRONTEND="noninteractive" apt install -y apt-transport-https ca-certificates software-properties-common curl git jq wget unzip
RUN curl -s https://storage.googleapis.com/golang/go1.15.6.linux-amd64.tar.gz| tar -v -C /usr/local -xz
ENV PATH=$PATH:/usr/local/go/bin
RUN export PATH=$PATH:/usr/local/go/bin
Before trying to execute the sudo operations, I run echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers just to make sure that no password will be needed.
Since I'm going to use this image in jenkins later on, I need a solution that I could implement in a "non interactive" way, preferably configuring it directly in the dockerfile.
Thanks!!
Found the solution:
Just like the comments said, sudo go was not the solution; the solution was to give the user jenkins:jenkins the permission to do it.
In the Dockerfile, I created the user jenkins:jenkins, installed the sudo package, and added the jenkins user to the sudo group. The changes are marked with "<-----":
FROM debian:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y gnupg2
RUN apt-get install -y sudo <-----
RUN addgroup --gid 6002 jenkins <-----
RUN useradd -u 6002 -g jenkins -s /bin/sh jenkins <-----
RUN usermod -aG sudo jenkins <-----
RUN DEBIAN_FRONTEND="noninteractive" apt install -y apt-transport-https ca-certificates software-properties-common curl git jq wget unzip
RUN curl -s https://storage.googleapis.com/golang/go1.15.6.linux-amd64.tar.gz| tar -v -C /usr/local -xz
RUN export PATH=$PATH:/usr/local/go/bin
...
Doing that, in Jenkins I was able to run, in the GOPATH and as the user Jenkins uses (jenkins:jenkins), the command: sh "sudo chown -R jenkins:jenkins ./*"
So sudo go get wasn't needed.
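An alternative sketch that avoids sudo altogether is to give the jenkins user its own writable GOPATH inside the image (the /go path and the jenkins user/group here are assumptions based on the setup above):

```dockerfile
# Hypothetical addition: pre-create a GOPATH owned by the jenkins user
RUN mkdir -p /go && chown -R jenkins:jenkins /go
ENV GOPATH=/go
ENV PATH=$PATH:/usr/local/go/bin:/go/bin
```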
I've created a Docker image to be able to run node >= 7.9.0 and MongoDB for testing in Jenkins. Some might argue that testing with MongoDB is not the correct approach, but the app uses it extensively and I have some complex updates and deletes, so I need it there.
The Dockerfile is under dockerfiles/test/Dockerfile in my GitHub repo. When using the pipeline syntax, the Docker image is built successfully, but I can't do sh 'npm install' or sh 'npm -v' in the steps of the pipeline. The image is tested, and if I build it locally and run it, I can do the npm install there. sh 'node -v' runs successfully in the pipeline, and so does sh 'ls'.
Here is the pipeline syntax.
pipeline {
    agent { dockerfile { dir 'dockerfiles/test' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}
I get this error: ERROR: script returned exit code -1. I can't see anything wrong here. I've also tested with other Node images, with the same result. If I run it on a regular slave I can do the installation, but I do not want to have many different slaves with a lot of setup for integration tests.
And here is the Dockerfile:
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y \
curl && \
curl -sL https://deb.nodesource.com/setup_7.x | bash - && \
apt-get install -y nodejs && \
apt-get install -y mongodb-org
RUN mkdir -p /data/db
RUN export LC_ALL=C
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins
EXPOSE 27017
CMD ["/usr/bin/mongod"]
Found a workaround to a similar problem.
Problem
Jenkins running a pipeline job
This job is running commands inside a debian slim container
All commands are failing instantly with no error output, only an ERROR: script returned exit code -1
Running the same commands with the same user in the container outside Jenkins works as it should
Extract from the Jenkinsfile:
androidImage = docker.build("android")
androidImage.inside('-u root') {
    stage('Install') {
        sh 'npm install' // is failing with generic error and no output
    }
}
Solution
Found the answer on Jenkins bugtracker : https://issues.jenkins-ci.org/browse/JENKINS-35370 and on Jenkins Docker Pipeline Exit Code -1
My problem was solved by installing the procps package in my debian Dockerfile :
apt-get install -y procps
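As a Dockerfile line, that would be something like the sketch below; procps provides the ps binary, which the Jenkins durable-task machinery uses to track processes inside the container:

```dockerfile
RUN apt-get update && \
    apt-get install -y procps && \
    rm -rf /var/lib/apt/lists/*
```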
I replicated your setup as faithfully as I could. I used your Dockerfile and Jenkinsfile, and here's my package.json:
{
  "name": "minimal",
  "description": "Minimal package.json",
  "version": "0.0.1",
  "devDependencies": {
    "mocha": "*"
  }
}
It failed like this for me during npm install:
npm ERR! Error: EACCES: permission denied, mkdir '/home/jenkins'
I updated one line in your Dockerfile to add --create-home:
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins --create-home
And the build passed. Kudos to @mkobit for keying in on the issue and linking to the Jenkins issue that will make this cleaner in the future.
I have the below Dockerfile.
FROM ubuntu:14.04
MAINTAINER Samuel Alexander <samuel@alexander.com>
RUN apt-get -y install software-properties-common
RUN apt-get -y update
# Install Java.
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections
RUN add-apt-repository -y ppa:webupd8team/java
RUN apt-get -y update
RUN apt-get install -y oracle-java8-installer
RUN rm -rf /var/lib/apt/lists/*
RUN rm -rf /var/cache/oracle-jdk8-installer
# Define working directory.
WORKDIR /work
# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
# JAVA PATH
ENV PATH /usr/lib/jvm/java-8-oracle/bin:$PATH
# Install maven
RUN apt-get -y update
RUN apt-get -y install maven
# Install Open SSH and git
RUN apt-get -y install openssh-server
RUN apt-get -y install git
# clone Spark
RUN git clone https://github.com/apache/spark.git
WORKDIR /work/spark
RUN mvn -DskipTests clean package
# clone and build zeppelin fork
RUN git clone https://github.com/apache/incubator-zeppelin.git
WORKDIR /work/incubator-zeppelin
RUN mvn clean package -Pspark-1.6 -Phadoop-2.6 -DskipTests
# Install Supervisord
RUN apt-get -y install supervisor
RUN mkdir -p var/log/supervisor
# Configure Supervisord
COPY conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# bash
RUN sed -i s#/home/git:/bin/false#/home/git:/bin/bash# /etc/passwd
EXPOSE 8080 8082
CMD ["/usr/bin/supervisord"]
While building the image, it failed at step 23, i.e.
RUN mvn clean package -Pspark-1.6 -Phadoop-2.6 -DskipTests
Now when I rebuild, it starts building from step 23, as Docker is using the cache.
But what if I want to rebuild the image from step 21, i.e.
RUN git clone https://github.com/apache/incubator-zeppelin.git
How can I do that?
Is deleting the cached image the only option?
Is there any additional parameter to do that?
You can rebuild the entire thing without using the cache by doing:
docker build --no-cache -t user/image-name .
To force a rerun starting at a specific line, you can pass an arg that is otherwise unused. Docker passes ARG values as environment variables to your RUN command, so changing an ARG is a change to the command which breaks the cache. It's not even necessary to define it yourself on the RUN line.
FROM ubuntu:14.04
MAINTAINER Samuel Alexander <samuel@alexander.com>
RUN apt-get -y install software-properties-common
RUN apt-get -y update
# Install Java.
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections
RUN add-apt-repository -y ppa:webupd8team/java
RUN apt-get -y update
RUN apt-get install -y oracle-java8-installer
RUN rm -rf /var/lib/apt/lists/*
RUN rm -rf /var/cache/oracle-jdk8-installer
# Define working directory.
WORKDIR /work
# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
# JAVA PATH
ENV PATH /usr/lib/jvm/java-8-oracle/bin:$PATH
# Install maven
RUN apt-get -y update
RUN apt-get -y install maven
# Install Open SSH and git
RUN apt-get -y install openssh-server
RUN apt-get -y install git
# clone Spark
RUN git clone https://github.com/apache/spark.git
WORKDIR /work/spark
RUN mvn -DskipTests clean package
# clone and build zeppelin fork, changing INCUBATOR_VER will break the cache here
ARG INCUBATOR_VER=unknown
RUN git clone https://github.com/apache/incubator-zeppelin.git
WORKDIR /work/incubator-zeppelin
RUN mvn clean package -Pspark-1.6 -Phadoop-2.6 -DskipTests
# Install Supervisord
RUN apt-get -y install supervisor
RUN mkdir -p var/log/supervisor
# Configure Supervisord
COPY conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# bash
RUN sed -i s#/home/git:/bin/false#/home/git:/bin/bash# /etc/passwd
EXPOSE 8080 8082
CMD ["/usr/bin/supervisord"]
And then just run it with a unique arg:
docker build --build-arg INCUBATOR_VER=20160613.2 -t user/image-name .
To change the argument with every build, you can pass a timestamp as the arg:
docker build --build-arg INCUBATOR_VER=$(date +%Y%m%d-%H%M%S) -t user/image-name .
or:
docker build --build-arg INCUBATOR_VER=$(date +%s) -t user/image-name .
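The effect relies only on the value differing between invocations; a quick plain-shell illustration (no Docker needed):

```shell
# Two builds started at least a second apart get different epoch timestamps,
# so the ARG value differs and the cache breaks at that line.
ts1=$(date +%s)
sleep 1
ts2=$(date +%s)
if [ "$ts1" != "$ts2" ]; then
    echo "ARG value changed -> cache invalidated"
fi
```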
As an aside, I'd recommend the following changes to keep your layers smaller. The more you can merge the download, install, and cleanup steps into a single RUN command, the smaller your final image will be; otherwise your layers will include all the intermediate files between the download and the cleanup:
FROM ubuntu:14.04
MAINTAINER Samuel Alexander <samuel@alexander.com>
RUN apt-get -y update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install software-properties-common
# Install Java.
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
add-apt-repository -y ppa:webupd8team/java && \
apt-get -y update && \
DEBIAN_FRONTEND=noninteractive \
apt-get install -y oracle-java8-installer && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/cache/oracle-jdk8-installer
# Define working directory.
WORKDIR /work
# Define commonly used JAVA_HOME variable
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
# JAVA PATH
ENV PATH /usr/lib/jvm/java-8-oracle/bin:$PATH
# Install maven
RUN apt-get -y update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install \
    maven \
    openssh-server \
    git \
    supervisor && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# clone Spark
RUN git clone https://github.com/apache/spark.git
WORKDIR /work/spark
RUN mvn -DskipTests clean package
# clone and build zeppelin fork
ARG INCUBATOR_VER=unknown
RUN git clone https://github.com/apache/incubator-zeppelin.git
WORKDIR /work/incubator-zeppelin
RUN mvn clean package -Pspark-1.6 -Phadoop-2.6 -DskipTests
# Configure Supervisord
RUN mkdir -p var/log/supervisor
COPY conf/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# bash
RUN sed -i s#/home/git:/bin/false#/home/git:/bin/bash# /etc/passwd
EXPOSE 8080 8082
CMD ["/usr/bin/supervisord"]
One workaround:
Locate the step you want to execute from.
Before that step, put a simple dummy operation like "RUN pwd".
Then just build your Dockerfile. It will take everything up to that step from the cache and then execute the lines after the dummy command.
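Applied to the Dockerfile from the question, the workaround would look something like this sketch; editing or removing the dummy line later breaks the cache from that point again:

```dockerfile
RUN mvn -DskipTests clean package
# Dummy no-op: inserting or changing this line invalidates the cache below it
RUN pwd
RUN git clone https://github.com/apache/incubator-zeppelin.git
```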
To complement Dmitry's answer, you can use a unique arg like date +%s so that the command line itself always stays the same:
docker build --build-arg DUMMY=`date +%s` -t me/myapp:1.0.0 .
Dockerfile:
...
ARG DUMMY=unknown
RUN DUMMY=${DUMMY} git clone xxx
...
A simpler technique.
Dockerfile: add this line where you want the caching to start being skipped.
COPY marker /dev/null
Then build using
date > marker && docker build .
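This works because the cache key for a COPY instruction is based on the file's content checksum; a quick plain-shell illustration of the mechanism (no Docker needed):

```shell
# The marker file's content (a timestamp) changes between builds, so the
# COPY layer's checksum changes and the cache breaks from that line on.
date > marker
sum1=$(cksum < marker)
sleep 1
date > marker
sum2=$(cksum < marker)
if [ "$sum1" != "$sum2" ]; then
    echo "marker changed -> COPY layer rebuilt"
fi
rm -f marker
```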
Another option is to delete the cached intermediate image you want to rebuild.
Find the hash of the intermediate image you wish to rebuild in your build output:
Step 27/42 : RUN lolarun.sh
---> Using cache
---> 328dfe03e436
Then delete that image:
$ docker image rm 328dfe03e436
Or, if it gives you an error and you're okay with forcing it:
$ docker image rm -f 328dfe03e436
Finally, rerun your build command, and it will need to restart from that point because it's not in the cache.
If you place ARG INCUBATOR_VER=unknown at the top, the cache will not be used when INCUBATOR_VER is changed from the command line (just tested the build).
This worked for me:
# The rebuild starts from here
ARG INCUBATOR_VER=unknown
RUN INCUBATOR_VER=${INCUBATOR_VER} git clone https://github.com/apache/incubator-zeppelin.git
As there is no official way to do this, the simplest way is to temporarily change the specific RUN command to include something harmless like echo:
RUN echo && apt-get -qq update && \
    apt-get install -y nano
After building, remove it again.