I've created a Docker image to be able to run Node >= 7.9.0 and MongoDB for testing in Jenkins. Some might argue that testing against MongoDB is not the correct approach, but the app uses it extensively, and I have some complex updates and deletes, so I need it there.
The Dockerfile is under dockerfiles/test/Dockerfile in my GitHub repo. When using the pipeline syntax, the Docker image is built successfully, but I can't do sh 'npm install' or sh 'npm -v' in the steps of the pipeline. The Docker image itself is tested: if I build it locally and run it, I can do the npm install there. sh 'node -v' runs successfully in the pipeline, and so does sh 'ls'.
Here is the pipeline syntax.
pipeline {
    agent { dockerfile { dir 'dockerfiles/test' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}
I get this error: ERROR: script returned exit code -1. I can't see anything wrong here. I've also tested with other Node images, with the same result. If I run it on a Node slave I can do the installation, but I don't want to have many different slaves with a lot of setup just for integration tests.
And here is the Dockerfile:
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y curl && \
    curl -sL https://deb.nodesource.com/setup_7.x | bash - && \
    apt-get install -y nodejs && \
    apt-get install -y mongodb-org
RUN mkdir -p /data/db
# ENV instead of RUN export, so the setting persists beyond the build step's shell
ENV LC_ALL C
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins
EXPOSE 27017
CMD ["/usr/bin/mongod"]
Found a workaround for a similar problem.
Problem
Jenkins running a pipeline job
The job runs commands inside a Debian slim container
All commands fail instantly with no error output, only ERROR: script returned exit code -1
Running the container outside Jenkins and executing the same commands with the same user works as it should
Extract from the Jenkinsfile:
androidImage = docker.build("android")
androidImage.inside('-u root') {
    stage('Install') {
        sh 'npm install' // fails with the generic error and no output
    }
}
Solution
Found the answer in the Jenkins bug tracker (https://issues.jenkins-ci.org/browse/JENKINS-35370) and in Jenkins Docker Pipeline Exit Code -1.
My problem was solved by installing the procps package in my Debian Dockerfile:
apt-get install -y procps
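In context, that is one extra line in the Dockerfile (a sketch for a Debian/Ubuntu base image):

# procps provides `ps`, which the Jenkins durable-task plugin uses to track
# the running script; without it every sh step dies with exit code -1
RUN apt-get update && apt-get install -y procps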
I replicated your setup as faithfully as I could. I used your Dockerfile and Jenkinsfile, and here's my package.json:
{
  "name": "minimal",
  "description": "Minimal package.json",
  "version": "0.0.1",
  "devDependencies": {
    "mocha": "*"
  }
}
It failed like this for me during npm install:
npm ERR! Error: EACCES: permission denied, mkdir '/home/jenkins'
I updated one line in your Dockerfile to add --create-home:
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins --create-home
And the build passed. Kudos to @mkobit for keying in on the issue and linking to the Jenkins issue that will make this cleaner in the future.
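A quick local check of the fix (the image tag here is arbitrary; UID 1000 mirrors the user Jenkins runs as inside the container):

docker build -t jenkins-test dockerfiles/test
# HOME resolves to /home/jenkins via the passwd entry for UID 1000
docker run --rm -u 1000:1000 jenkins-test sh -c 'ls -ld "$HOME" && npm -v'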
Related
Below is the error from the Jenkins console output:
+ sonar-scanner -Dsonar.login=**** -Dsonar.projectBaseDir=.
/var/lib/jenkins/workspace/Mtr-Pipeline_develop#2#tmp/durable-0080bcff/script.sh: 1: /var/lib/jenkins/workspace/Mtr-Pipeline_develop#2#tmp/durable-0080bcff/script.sh: sonar-scanner: Permission denied
I have set up the token and pasted the key into the t-m-sonar-login variable in Jenkins global credentials, but I don't think the keys would be causing the permission denied error. Can someone provide some pointers for looking into the issue?
stage('SonarQube scan') {
    agent {
        dockerfile { filename 'sonar/Dockerfile' }
    }
    steps {
        withCredentials([string(credentialsId: 't-m-sonar-login', variable: 'SONAR_LOGIN')]) {
            script {
                unstash 'coverage'
                unstash 'testResults'
                sh 'sonar-scanner -Dsonar.login="$SONAR_LOGIN" -Dsonar.projectBaseDir=. || true'
            }
        }
    }
}
sonar/Dockerfile:
FROM node:15-buster
################
# Install java #
################
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive \
    apt-get -y install openjdk-11-jre-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
############################
# Install SonarQube client #
############################
WORKDIR /root
RUN apt-get install -y curl grep sed unzip
RUN curl --insecure -o ./sonarscanner.zip -L https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-4.4.0.2170-linux.zip
RUN unzip -q sonarscanner.zip
RUN rm sonarscanner.zip
RUN mv sonar-scanner-4.4.0.2170-linux sonar-scanner
ENV SONAR_RUNNER_HOME=/root/sonar-scanner
ENV PATH $PATH:/root/sonar-scanner/bin
# Include Sonar configuration and project paths
COPY ./sonar/sonar-runner.properties ./sonar-scanner/conf/sonar-scanner.properties
# Ensure Sonar uses the provided Java for musl instead of a borked glibc one
RUN sed -i 's/use_embedded_jre=true/use_embedded_jre=false/g' /root/sonar-scanner/bin/sonar-scanner
Please check whether Java is available on the system where the SonarQube scanner is running.
Another thing you can try:
Go to the SonarQube scanner directory -> go to bin -> chmod +x sonar-scanner
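For example (the paths assume the Dockerfile above; adjust them to your actual install location):

# confirm Java is reachable inside the container
java -version
# make the scanner launcher executable
chmod +x /root/sonar-scanner/bin/sonar-scanner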
The idea here is simple - dbt provides a way to generate static files and serve them using the commands dbt docs generate and dbt docs serve, and I want to share them in a way that everyone in my organization can see them (bypassing security concerns for now). For this task Cloud Run seemed like the ideal solution, as I already have a Dockerfile and bash scripts which do some background work (a cron job to clone a git repo every x hours, etc.). Running this container locally works fine. But deploying the image to Cloud Run wasn't successful - it fails on the last step (which is dbt docs serve --port 8080) with the default error message Cloud Run error: The user-provided container failed to start and listen on the port defined provided by the PORT=8080 environment variable. Logs for this revision might contain more information. No additional information was printed in the logs before that.
Dockerfile:
# declare the build arg referenced in FROM (defaults here to linux/amd64)
ARG build_for=linux/amd64
FROM --platform=$build_for python:3.9.9-slim-bullseye
WORKDIR /usr/src/dbtdocs
RUN apt-get update && apt-get install -y --no-install-recommends git apt-transport-https ca-certificates gnupg curl cron \
&& apt-get clean
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install tzdata
RUN echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] http://packages.cloud.google.com/apt cloud-sdk main" | tee -a /etc/apt/sources.list.d/google-cloud-sdk.list && curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key --keyring /usr/share/keyrings/cloud.google.gpg add - && apt-get update -y && apt-get install google-cloud-sdk -y
RUN python -m pip install --upgrade pip setuptools wheel --no-cache-dir
RUN pip install dbt-bigquery
RUN ln -s /usr/local/bin/dbt /usr/bin/
RUN rm -rf /var/lib/apt/lists/*
COPY ./api-entrypoint.sh /usr/src/dbtdocs/
COPY ./cron_dbt_docs.sh /usr/src/dbtdocs/
COPY ./cron_script.sh /usr/src/dbtdocs/
ENV PORT=8080
RUN chmod 755 api-entrypoint.sh
RUN chmod 755 cron_dbt_docs.sh
RUN chmod 755 cron_script.sh
ENTRYPOINT ["/bin/bash", "-c", "/usr/src/dbtdocs/api-entrypoint.sh"]
api-entrypoint.sh
#!/bin/bash
#set -e
#catch() {
# echo 'catching!'
# if [ "$1" != "0" ]; then
# echo "Error $1 occurred on $2"
# fi
#}
#trap 'catch $? $LINENO' EXIT
exec 2>&1
echo 'Starting DBT Workload'
echo 'Checking dependencies'
dbt --version
git --version
mkdir -p /data/dbt/ && cd /data/dbt/
echo 'Cloning dbt Repo'
git clone ${GITLINK} /data/dbt/
echo 'Working on dbt directory'
export DBT_PROFILES_DIR=/data/dbt/profile/
echo "Authentificate at GCP"
echo "Decrypting and saving sa.json file"
mkdir -p /usr/src/secret/
echo "${SA_SECRET}" | base64 --decode > /usr/src/secret/sa.json
gcloud auth activate-service-account ${SA_EMAIL} --key-file /usr/src/secret/sa.json
echo 'The Project set'
if test "${PROJECT_ID}"; then
gcloud config set project ${PROJECT_ID}
gcloud config set disable_prompts true
else
echo "Project Name not in environment variables ${PROJECT_ID}"
fi
echo 'Use Google Cloud Secret Manager Secret'
if test "${PROFILE_SECRET_NAME}"; then
#mkdir -p /root/.dbt/
mkdir -p /root/secret/
gcloud secrets versions access latest --secret="${PROFILE_SECRET_NAME}" > /root/secret/creds.json
export GOOGLE_APPLICATION_CREDENTIALS=/root/secret/creds.json
else
echo 'No Secret Name described - GCP Secret Manager'
fi
echo 'Apply cron Scheduler'
sh -c "/usr/src/dbtdocs/cron_script.sh install"
/etc/init.d/cron restart
touch /data/dbt_docs_job.log
sh -c "/usr/src/dbtdocs/cron_dbt_docs.sh"
touch /data/cron_up.log
tail -f /data/dbt_docs_job.log &
tail -f /data/cron_up.log &
dbt docs serve --port 8080
The container port is set to 8080 when creating the Cloud Run service, so I don't think that's the problem here.
Has anyone actually encountered similar problems using Cloud Run?
Logs in Cloud Logging
Your container is not listening/responding on port 8080; it was terminated before the server process started listening.
Review the last line in the logs. The previous line is Building catalog.
Your container is taking too long to start up. Containers should start within 10 seconds, because Cloud Run will only keep pending requests for 10 seconds.
All of the work I see in the logs should be performed before the container is deployed, not during container start.
The solution is to redesign how you are building and deploying this container so that the application begins responding to requests as soon as the container starts.
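A minimal sketch of that redesign, assuming the docs can be generated at image build time (i.e. warehouse credentials are available to the build; the project layout and profile path mirror the question): bake the dbt project and run dbt docs generate in the Dockerfile, so the container only has to serve static files when it starts:

FROM python:3.9.9-slim-bullseye
RUN apt-get update && apt-get install -y --no-install-recommends git && apt-get clean
RUN pip install --no-cache-dir dbt-bigquery
WORKDIR /data/dbt
# hypothetical: copy the dbt project into the image instead of cloning at startup
COPY . /data/dbt/
ENV DBT_PROFILES_DIR=/data/dbt/profile/
# the slow step now runs at build time, not at container start
RUN dbt docs generate
# startup does nothing but listen, so Cloud Run sees the port almost immediately
CMD ["dbt", "docs", "serve", "--port", "8080"]

The periodic re-clone/cron work would then move out of the container entirely, e.g. into a scheduled rebuild and redeploy of the image.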
I'm getting a "Bad substitution" error when trying to pass a pipeline parameter to the Dockerfile.
Jenkins parameter: version
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent { dockerfile true }
            steps {
                sh 'node -v'
            }
        }
    }
}
Dockerfile:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
RUN echo ${params.version}
#ARG VERSION
#RUN echo $VERSION
Jenkins error message:
[screenshot: Bad substitution error]
I'm sure the problem is just that I'm new to pipelines/Docker. :)
I would be grateful for any help.
Issue resolved by adding an ARG variable to the Dockerfile.
This is what the Dockerfile looks like now:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
ARG version=fisticuff
RUN echo $version
and this is what the Jenkinsfile looks like:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent {
                dockerfile {
                    additionalBuildArgs '--build-arg version="$version"'
                }
            }
            steps {
                sh 'node -v'
            }
        }
    }
}
Console output in Jenkins:
[screenshot: Jenkins console output]
Much obliged to all of you for giving me the hints. It helped me a lot!
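For reference: with the single-quoted string above, $version is left for the shell to expand from the environment when docker build runs. An equivalent, more explicit form interpolates the pipeline parameter directly in a double-quoted Groovy string:

additionalBuildArgs "--build-arg version=${params.version}"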
Try running the Dockerfile independently first.
Since you are new to Docker, take it one step at a time.
I'm new to Jenkins and Groovy, and I'm trying to run a command inside an existing Docker container, after first setting some environment variables, using a Jenkins pipeline.
The bash script I use right now (just executing it from the command line) looks like this and works:
export LIB_ROOT=/usr/local/LIBS
export TMP_MAC_ADDRESS=b5:17:a3:28:55:ea
sudo docker run --rm -i -v "$LIB_ROOT":/usr/local/LIBS/from-host -v /home/sbuild/Dockerfiles/Sfiles/mnt:/home/sbuild/mount --mac-address="$TMP_MAC_ADDRESS" -t sbuild:current
Afterwards I want to build some of my sources (mounted) inside the Docker container using something like:
python3 batchCompile.sh ../mount/src.zip
Right now I've been trying to write it like this in my Jenkinsfile:
node('linux-slave') {
    withEnv(['PATH=/usr/local/LIBS:/usr/local/MATLAB/from-host -v /home/sbuild/Dockerfiles/Sfiles/mnt:/home/sbuild/mount --mac-address=b5:17:a3:28:55:ea']) {
        docker.image('sbuild').inside {
            sh 'echo $PATH'
            sh 'mvn --version'
        }
    }
    sh 'echo $PATH'
}
Yet this just fails with an opaque message:
Running in Durability level: MAX_SURVIVABILITY
org.codehaus.groovy.control.MultipleCompilationErrorsException: startup failed:
WorkflowScript: 71: Expected a symbol # line 71, column 25.
docker.image('sbuild:current').inside {
^
1 error
at org.codehaus.groovy.control.ErrorCollector.failIfErrors(ErrorCollector.java:310)
I'm not able to figure out what is going wrong.
So I was just trying to get inside the Docker container and see what I can do from there. I was experimenting a little with this small script:
script {
    docker.image('sbuild:current').inside {
        sh 'touch asdf'
        sh 'cd /home/sbuild/'
        sh 'pwd'
    }
}
Yet by default I'm just working from the Jenkins workspace folder, and none of these commands seem to actually run inside the Docker container. The container also doesn't seem to be running at any point.
How do I have to write my code so that it starts the Docker container I configured and runs commands inside it?
There's some documentation out there for creating new Docker containers, but I have difficulty making sense of that error message and figuring out how to debug this properly.
Edit 1: The Dockerfile:
FROM labs:R2018
# Avoid interaction
ENV DEBIAN_FRONTEND noninteractive
# Set user to root
USER root
# =========== Basic Configuration ======================================================
# Update the system
#RUN apt-get -y update \
# && apt-get install -y sudo build-essential git python python-dev \
# python-setuptools make g++ cmake gfortran ipython swig ant python-numpy \
# python-scipy python-matplotlib cython python-lxml python-nose python-jpype \
# libboost-dev jcc git subversion wget zlib1g-dev pkg-config clang
# Install system libs
# RUN apt-get install sudo
# ========== Install pip for managing python packages ==================================
RUN apt-get install -y python-pip python-lxml && pip install cython
# Install simulix dependencies
RUN apt-get install -y git
RUN apt-get install --assume-yes python
RUN apt-get install --assume-yes cmake
RUN apt-get install --assume-yes mingw-w64
# Add User
#RUN adduser --disabled-password --gecos '' docker
#RUN adduser docker sudo
#RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER build
# Install simulix
WORKDIR /home/sbuild
RUN git clone https://github.com/***.git
RUN mkdir mount
WORKDIR /home/sbuild/Sfiles
RUN pip install -r requirements.txt
When I use Docker with Jenkins Pipeline, I do it with the sh step only:
try {
    stage('Start Docker') {
        sh 'docker-compose up -d' // -d so the step does not block the pipeline
    }
    stage('Build project') {
        sh 'docker-compose exec -T my_service make:build' // -T because Jenkins sh steps have no TTY
    }
} catch (Exception e) {
    // Maybe do something
} finally {
    sh 'docker-compose stop'
}
You want to surround your stages with a try/catch/finally block so that the Docker containers are always stopped in case of failure.
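For completeness, a minimal hypothetical docker-compose.yml the snippet above could run against (the service name, image, and mounts are placeholders):

version: '3'
services:
  my_service:
    image: node:8
    working_dir: /app
    volumes:
      - .:/app
    # keep the service alive so `docker-compose exec` has a running target
    command: tail -f /dev/null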
I am using the Jenkins image to run Jenkins in a Docker container. I have a modified version of the image, as below:
USER root
RUN apt-get update
RUN apt-get install -y sudo
RUN curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
RUN apt-get install -y nodejs
RUN npm -v
USER jenkins
When I run a container based on this image, it all goes fine. I can go into the container, run npm -v, and it all works just fine. However, the build script on my Jenkins, which is simply:
echo 'starting build'
npm -v
fails with the error npm not found.
npm is not in the PATH of your jenkins user.
You could get a shell in your container to figure out the npm path:
docker exec -it <CONTAINER_NAME> bash
which npm
Then you could run it with the full path in the Jenkins script, symlink it, add it to $PATH, etc.
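For example, if which npm printed /usr/local/bin/npm (an illustrative path), either of these would work:

# symlink npm into a directory that is already on the PATH
ln -s /usr/local/bin/npm /usr/bin/npm
# or call it with the full path in the Jenkins build script
/usr/local/bin/npm -v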