Jenkins not respecting PATH modification? - docker

I'm struggling to modify the PATH in Jenkins, using various methods to no avail.
I am using Jenkins: 2.319.3
Here is an example where I've tried to use a Dockerfile and PowerShell. I had comparable results with the Bourne shell: even when the path was exported, it would not persist to subsequent shell calls.
When running the Dockerfile locally I can confirm that the path is modified correctly, so I suspect it must be some Jenkins-specific issue?
Dockerfile.unix
ARG VARIANT="bullseye"
ARG PYTHON="3.8"
FROM python:${PYTHON}-${VARIANT}
# Install PowerShell
RUN \
DEBIAN_FRONTEND=noninteractive \
&& apt-get update -y \
&& apt-get install -y software-properties-common lsb-release --no-install-recommends \
&& wget "https://packages.microsoft.com/config/debian/$(lsb_release -rs)/packages-microsoft-prod.deb" \
&& dpkg -i packages-microsoft-prod.deb \
&& apt-get update -y \
&& apt-get install -y powershell --no-install-recommends \
&& rm *.deb
RUN \
python3 -m pip install pipx \
&& pipx ensurepath \
&& pipx install tox \
&& pipx install hatch
CMD [ "pwsh" ]
Jenkinsfile
pipeline {
    agent {
        dockerfile {
            filename 'Dockerfile.unix'
            reuseNode true
            args '-u root'
            // Agent label
            label 'ubuntu-docker'
        }
    }
    options {
        timestamps()
        timeout(time: 5, unit: 'MINUTES') // timeout on whole pipeline job
    }
    stages {
        stage('Setup') {
            environment {
                PATH = "/root/.local/bin:$PATH"
            }
            steps {
                pwsh '$env:PATH' // ------------------------------ not in path
                pwsh 'python -m pipx ensurepath'
                pwsh '$env:PATH' // ------------------------------ not in path
                pwsh 'python -m pipx list'
                pwsh '''
                    $env:PATH="/root/.local/bin:"+$env:PATH
                    python --version
                    python -m pip freeze
                    $env:PATH # ------------------------------------- works
                    tox --version
                    hatch --version
                '''
            }
        }
    }
}
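For reference, one workaround I have seen suggested for this kind of situation (a minimal sketch, not verified against this exact agent setup) is to prepend the directory with the withEnv step inside steps rather than overriding PATH in an environment block; withEnv documents a PATH+IDENTIFIER form for prepending to PATH. Only the stage is shown; the agent and options blocks stay as above, and PIPX is just an arbitrary identifier:
stage('Setup') {
    steps {
        // withEnv's PATH+SOMETHING form prepends the given directory to PATH
        // for every step inside this block.
        withEnv(['PATH+PIPX=/root/.local/bin']) {
            pwsh '$env:PATH'
            pwsh 'tox --version'
            pwsh 'hatch --version'
        }
    }
}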

Related

GitHub Self Hosted Runner Can't Find Command

I have the following Dockerfile:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install curl \
iputils-ping \
apt-transport-https \
tar \
jq \
python && \
curl -sL https://deb.nodesource.com/setup_14.x | bash && \
apt-get install nodejs -yq && \
apt-get clean && apt-get autoremove
RUN npm install -g npm@latest
ARG GH_RUNNER_VERSION="2.283.3"
WORKDIR /actions-runner
RUN curl -o actions.tar.gz --location "https://github.com/actions/runner/releases/download/v${GH_RUNNER_VERSION}/actions-runner-linux-x64-${GH_RUNNER_VERSION}.tar.gz" && \
tar -zxf actions.tar.gz && \
rm -f actions.tar.gz && \
./bin/installdependencies.sh
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/actions-runner/entrypoint.sh"]
and the following step in the CI:
- name: Create DB
run: npm run dc-up
The output of that step is: npm: command not found.
I added the path using the method the docs suggested, by adding a new step:
- name: add npm to path
run: echo "/usr/bin/npm" >> $GITHUB_PATH
I've checked that node is in the path by printing the path in a separate step inside the CI and the output is:
Run echo "$PATH"
/usr/bin/npm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
I know 100% that npm is installed in the Docker image, because when I run it locally and interact with it without the ENTRYPOINT I'm able to print the npm version, and I checked that it is indeed at /usr/bin/npm. But still, inside the steps of the CI it can't find npm for some reason.
And it's not only npm: it's every single installation I tried; I just picked npm as a showcase.
Does anyone have any idea what can be done?

Errors Installing singularity inside dockerfile

I am trying to run a Nextflow pipeline which uses an older version of Nextflow (21.04.3) and Java version 8. Since I have to use this pipeline on a remote server, I can only use Singularity.
As this Nextflow pipeline also makes singularity pull calls, I need Singularity installed inside the Docker image as well. Then I can convert the Docker image to a Singularity image and move it to the remote server.
I am trying to install Singularity inside the Dockerfile, but I am getting errors.
This is the Dockerfile I am using:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
RUN apt-get install -y singularity-container
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
These are the errors I am getting
Step 9/17 : RUN wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update
---> Running in afc3dcbbd1ee
--2022-03-17 17:40:19-- http://neuro.debian.net/lists/xenial.us-ca.full
Resolving neuro.debian.net (neuro.debian.net)... 129.170.233.11
Connecting to neuro.debian.net (neuro.debian.net)|129.170.233.11|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 262
Saving to: ‘STDOUT’
0K 100% 18.4M=0s
deb http://neurodeb.pirsquared.org data main contrib non-free
#deb-src http://neurodeb.pirsquared.org data main contrib non-free
deb http://neurodeb.pirsquared.org xenial main contrib non-free
#deb-src http://neurodeb.pirsquared.org xenial main contrib non-free
2022-03-17 17:40:19 (18.4 MB/s) - written to stdout [262/262]
/bin/sh: 1: apt-key: not found
The command '/bin/sh -c wget -O- http://neuro.debian.net/lists/xenial.us-ca.full | tee /etc/apt/sources.list.d/neurodebian.sources.list && \ apt-key adv --recv-keys --keyserver hkp://pool.sks-keyservers.net:80 0xA5D32F012649A5A9 && \ apt-get update'
returned a non-zero code: 127
Is there a way to install Singularity using a Dockerfile?
Thanks
I made some changes to the Dockerfile based on the method for installing Singularity on Linux given here.
The complete Dockerfile, with which I was able to successfully run Nextflow, Java, and Singularity (within Singularity), is given below:
FROM python:3.8.9-slim
LABEL authors="phil.ewels@scilifelab.se,erik.danielsson@scilifelab.se" \
description="Docker image containing requirements for the nfcore tools"
# Do not pick up python packages from $HOME
ENV PYTHONNOUSERSITE=1
# Update pip to latest version
RUN python -m pip install --upgrade pip
# Install dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
# Install Nextflow dependencies
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y git \
&& apt-get install -y wget
# Create man dir required for Java installation
# and install Java
RUN mkdir -p /usr/share/man/man1 \
&& apt-get install -y openjdk-11-jre \
&& apt-get clean -y && rm -rf /var/lib/apt/lists/*
# Install Singularity
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
uuid-dev \
libgpgme11-dev \
squashfs-tools \
libseccomp-dev \
wget \
pkg-config \
procps
# Download Go version 1.16.3, install it and modify the PATH
ENV VERSION=1.16.3
ENV OS=linux
ENV ARCH=amd64
RUN wget https://dl.google.com/go/go$VERSION.$OS-$ARCH.tar.gz && \
tar -C /usr/local -xzvf go$VERSION.$OS-$ARCH.tar.gz && \
rm go$VERSION.$OS-$ARCH.tar.gz && \
echo 'export PATH=$PATH:/usr/local/go/bin' | tee -a /etc/profile
# Download Singularity from version 3.7.3 (security version)
ENV VERSION=3.7.3
RUN wget https://github.com/sylabs/singularity/releases/download/v${VERSION}/singularity-${VERSION}.tar.gz && \
tar -xzf singularity-${VERSION}.tar.gz
# Compile Singularity sources and install it
RUN export PATH=$PATH:/usr/local/go/bin && \
cd singularity && \
./mconfig --without-suid && \
make -C ./builddir && \
make -C ./builddir install
# Setup ARG for NXF_VER ENV
ARG NXF_VER=""
ENV NXF_VER ${NXF_VER}
# Install Nextflow
RUN wget https://github.com/nextflow-io/nextflow/releases/download/v21.04.3/nextflow | bash \
&& mv nextflow /usr/local/bin \
&& chmod a+rx /usr/local/bin/nextflow
# Add the nf-core source files to the image
COPY . /usr/src/nf_core
WORKDIR /usr/src/nf_core
# Install nf-core
RUN python -m pip install .
# Set up entrypoint and cmd for easy docker usage
CMD [ "." ]
The requirements.txt file used in the above Dockerfile is given below:
click
GitPython
jinja2
jsonschema
packaging
prompt_toolkit>=3.0.3
pyyaml
pytest-workflow
questionary>=1.8.0
requests_cache
requests
rich>=10.0.0
tabulate

Passing Jenkins Pipeline parameter to a Dockerfile

I'm getting a "Bad substitution" error when trying to pass a pipeline parameter to the Dockerfile.
Jenkins parameter: version
Jenkinsfile:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent { dockerfile true }
            steps {
                sh 'node -v'
            }
        }
    }
}
Dockerfile:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
RUN echo ${params.version}
#ARG VERSION
#RUN echo $VERSION
Jenkins error message: a screenshot of the "Bad substitution" failure.
I'm sure the problem is just that I'm new to pipelines/Docker. :)
I would be grateful for any help.
The issue was resolved by adding an ARG variable to the Dockerfile.
This is how the Dockerfile looks:
FROM ubuntu:16.04
WORKDIR /root
# install dependencies
RUN apt-get update
RUN apt-get install curl wget vim nano zip git htop ncdu build-essential chrpath libssl-dev libxft-dev apt-transport-https -y
# install node 10
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash
RUN apt-get install --yes nodejs
#RUN node -v
#RUN npm -v
ARG version=fisticuff
RUN echo $version
and this is how the Jenkinsfile looks:
pipeline {
    agent any
    stages {
        stage('Build in docker container') {
            agent {
                dockerfile {
                    additionalBuildArgs '--build-arg version="$version"'
                }
            }
            steps {
                sh 'node -v'
            }
        }
    }
}
Console output in Jenkins: a screenshot of the successful build output.
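An equivalent way to wire this up (a sketch of my own, not from the answer above) is to interpolate the pipeline parameter in Groovy instead of relying on the shell expanding $version; this makes the dependency on params explicit:
agent {
    dockerfile {
        // Groovy double-quoted string: params.version is substituted before
        // `docker build` is invoked, so the Dockerfile's `ARG version` receives it.
        additionalBuildArgs "--build-arg version=${params.version}"
    }
}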
Much obliged to all of you for giving me the hints. It helped me a lot!
Try running the Dockerfile independently first.
Since you are new to Docker, take it one step at a time.

Permission denied trying to mkdir in build with jenkins and kubernetes

I need to create a folder in one of the stages of a build in Jenkins, but I'm getting a permission error when it tries to run mkdir.
Dockerfile:
FROM debian:latest
USER root
WORKDIR /root
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y gnupg2
RUN DEBIAN_FRONTEND="noninteractive" apt install -y apt-transport-https ca-certificates software-properties-common curl git jq wget unzip
RUN curl -s https://storage.googleapis.com/golang/go1.15.6.linux-amd64.tar.gz| tar -v -C /usr/local -xz
RUN export PATH=$PATH:/usr/local/go/bin
#
# Docker
#
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
ENV PATH=$PATH:/usr/local/go/bin
RUN export PATH=$PATH:/usr/local/go/bin
WORKDIR /app/src/
Stage of Jenkinsfile:
stage('Restore') {
    try {
        timeout(time: 10, unit: 'MINUTES') {
            dir('.') {
                sh "mv * /root/go/src/Proj/"
            }
            dir('/root/go/src/Proj/') {
                sh "mkdir ./${repoName}"
            }
        }
    } catch (Exception err) {
        cleanWS()
        error("[FAILED]: " + err.getMessage())
    }
}
The mv command works fine, but when it comes to mkdir, it gives me this error:
java.nio.file.AccessDeniedException: /root/go/src
Does anyone know how I could set this permission?
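One thing worth trying, as a sketch (assuming the java.nio.file.AccessDeniedException is thrown by the dir() step itself, which runs in the Jenkins agent's JVM, rather than by the shell inside the container): create the directory from the shell instead of asking dir() to create it. Error handling from the original stage is omitted for brevity:
stage('Restore') {
    timeout(time: 10, unit: 'MINUTES') {
        // mkdir -p should be executed by the shell inside the build container
        // (running as root per the Dockerfile), so the agent JVM never has to
        // create /root/go/src/Proj/<repoName> itself.
        sh "mkdir -p /root/go/src/Proj/${repoName}"
        sh "mv * /root/go/src/Proj/"
    }
}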

how to start docker container from gradle?

I am trying to switch from CMake to Gradle. I want to configure Gradle to work as follows:
$ cd myapp && ls myapp
Dockerfile build.gradle src
$ gradle build
so that gradle build will:
1. build the Docker image from the Dockerfile
2. start the container
3. build the application
The Docker image contains the complete environment for my app.
FROM debian:stretch
RUN apt-get update -y && apt install -y git \
python3-dev libncurses5-dev libxml2-dev \
libedit-dev swig doxygen graphviz xz-utils ninja-build
RUN echo "deb http://ftp.de.debian.org/debian stretch main" >> /etc/apt/source.list
RUN apt-get update && apt-get install -y openjdk-8-jre openjdk-8-jdk
# Clang 8 as a compiler
RUN apt-get update && apt-get install -y \
xz-utils \
build-essential \
curl \
&& rm -rf /var/lib/apt/lists/* \
&& curl -SL http://releases.llvm.org/8.0.0/clang+llvm-8.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz \
| tar -xJC . && \
mv clang+llvm-8.0.0-x86_64-linux-gnu-ubuntu-18.04 clang_8.0.0 && \
echo 'export PATH=/clang_8.0.0/bin:$PATH' >> ~/.bashrc && \
echo 'export LD_LIBRARY_PATH=/clang_8.0.0/lib:LD_LIBRARY_PATH' >> ~/.bashrc
#
RUN apt-get update
#install sdkman
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -qq -y install curl wget unzip zip
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
#install gradle
RUN yes | /bin/bash -l -c 'sdk install gradle 6.1'
PS: This is a C++ project.
You can build a Docker image from Gradle tasks by using the com.bmuschko:gradle-docker-plugin:3.1.0 plugin:
buildscript {
    repositories {
        jcenter()
        mavenCentral()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-docker-plugin:3.1.0'
    }
}
apply plugin: 'com.bmuschko.docker-remote-api'
import com.bmuschko.gradle.docker.tasks.image.Dockerfile
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
import com.bmuschko.gradle.docker.tasks.image.*
task buildImage(type: DockerBuildImage) {
    group = ''
    inputDir = file('.')
    tag = 'image name:'+tag
}
Read the documentation for more details: https://bmuschko.github.io/gradle-docker-plugin/
Build the image from the Gradle task: ./gradlew taskname
To start the container and run a command inside it, you can use CMD or ENTRYPOINT and specify the command in the Dockerfile:
CMD ["start.sh"]
In start.sh you can specify the command to be executed after the container starts.
Let me slightly clean up that Dockerfile first:
FROM debian:stretch
RUN echo "deb http://ftp.de.debian.org/debian stretch main" >> /etc/apt/source.list
RUN apt-get update -y && apt install -qq -y \
python3-dev libncurses5-dev libxml2-dev \
libedit-dev swig doxygen graphviz xz-utils ninja-build \
openjdk-8-jre openjdk-8-jdk \
xz-utils curl git build-essential wget unzip zip
# Clang 8 as a compiler
RUN curl -SL http://releases.llvm.org/8.0.0/clang+llvm-8.0.0-x86_64-linux-gnu-ubuntu-18.04.tar.xz \
| tar -xJC . && \
mv clang+llvm-8.0.0-x86_64-linux-gnu-ubuntu-18.04 clang_8.0.0 && \
echo 'export PATH=/clang_8.0.0/bin:$PATH' >> ~/.bashrc && \
echo 'export LD_LIBRARY_PATH=/clang_8.0.0/lib:LD_LIBRARY_PATH' >> ~/.bashrc
#install sdkman
RUN ln -fs /bin/bash /bin/sh
RUN curl -s "https://get.sdkman.io" | bash
RUN source "$HOME/.sdkman/bin/sdkman-init.sh"
RUN yes | /bin/bash -l -c 'sdk install gradle 6.1'
RUN mkdir /src /work
WORKDIR /src
ENTRYPOINT gradle build -p /src
The important bits are at the bottom: it creates a /src directory and executes gradle build there. All that remains for you is to make that directory available when you build.
Assuming you built the container once with docker build -t my-build-container ., you can run it as follows:
docker run -v $(pwd):/src my-build-container
Depending on your build system, this might pollute your source tree with various build artifacts owned by root. If so, consider switching to out-of-tree builds by changing the default working directory to /work instead. All build results will go to /work, and you can extract them from the container afterwards.
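If you also want Gradle on the host to drive those two docker commands (so that a single gradle invocation does everything, as the question asks), a minimal sketch using plain Exec tasks is below; the image name my-build-container and the task names are hypothetical, and the volume mount matches the docker run command shown above:
// build.gradle on the host: wraps `docker build` and `docker run` in Exec tasks.
task buildImage(type: Exec) {
    // Builds the image from the Dockerfile in the project root.
    commandLine 'docker', 'build', '-t', 'my-build-container', '.'
}
task buildInDocker(type: Exec) {
    dependsOn buildImage
    // Mounts the project at /src; the image's ENTRYPOINT then runs `gradle build -p /src`.
    commandLine 'docker', 'run', '--rm', '-v', "${projectDir}:/src", 'my-build-container'
}
Running ./gradlew buildInDocker would then build the image and perform the containerised build in one step.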
Add the Docker plugin first:
buildscript {
    dependencies {
        classpath("se.transmode.gradle:gradle-docker:1.2")
    }
}
Create a simple task like this in the build.gradle file:
task buildDocker(type: Docker, dependsOn: build) {
    push = false
    project.group = 'testProject'
    project.archivesBaseName = jar.baseName
    applicationName = jar.baseName
    dockerfile = file('src/main/docker/Dockerfile')
    doFirst {
        copy {
            from jar
            into stageDir
        }
    }
}
