Jenkins Pipeline - executing job in docker permissions issues - docker

I have a Jenkinsfile that looks like
pipeline {
    agent {
        docker {
            image 'myartifactory/cloud-eng/sls-build:0.13'
            label 'docker'
            registryUrl 'https://myartifactory'
            registryCredentialsId 'artfifactory-cred-id'
        }
    }
    environment {
    }
    stages {
        stage('Test') {
            steps {
                sh "env | sort"
                sh "make setup-ci"
                sh "make test"
            }
        }
    }
}
When I run this, I see that Jenkins executed a command that looks like:
docker run -t -d -u 1318244366:1318464184 -w /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https -v /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https:/jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https:rw,z -v /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https@tmp:/jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https@tmp:rw,z -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** -e ******** myartifactory/cloud-eng/sls-build:0.13 cat
This project uses Python, NPM, and the Serverless Framework (JavaScript).
If I run this as above, it fails with:
npm ERR! correctMkdir failed to make directory /.npm/_locks
2021-03-11 16:17:02 npm ERR! code EACCES
2021-03-11 16:17:02 npm ERR! syscall mkdir
2021-03-11 16:17:02 npm ERR! path /.npm
2021-03-11 16:17:02 npm ERR! errno -13
2021-03-11 16:17:02 npm ERR!
2021-03-11 16:17:02 npm ERR! Your cache folder contains root-owned files, due to a bug in
2021-03-11 16:17:02 npm ERR! previous versions of npm which has since been addressed.
2021-03-11 16:17:02 npm ERR!
2021-03-11 16:17:02 npm ERR! To permanently fix this problem, please run:
2021-03-11 16:17:02 npm ERR! sudo chown -R 1318244366:1318464184 "/.npm"
2021-03-11 16:17:02 make: *** [setup-ci] Error 243
I have tried many solutions with varying success. If I add args '-u root' to the docker section it works, since root of course has permission to everything; however, security isn't going to like running the container as root.
No matter what I do with overriding $HOME in environment or args, or with changing users, I always end up with permission issues in either NPM or Python.
Here are other errors I've encountered with various hacks, such as args '-e HOME=/tmp -e NPM_CONFIG_PREFIX=/tmp/.npm':
../../../../../tmp/.local/share/virtualenvs/te_csoe-1624-switch-shared-https-y_ilovXz/lib/python3.8/site-packages/_pytest/cacheprovider.py:428
2021-03-11 14:45:14 /tmp/.local/share/virtualenvs/te_csoe-1624-switch-shared-https-y_ilovXz/lib/python3.8/site-packages/_pytest/cacheprovider.py:428: PytestCacheWarning: cache could not write path /jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https/.pytest_cache/v/cache/nodeids
2021-03-11 14:45:14 config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
Error: EACCES: permission denied, unlink '/jenkins_home/jenkins-rh7-a01/8b13f8c3/workspace/te_csoe-1624-switch-shared-https/.serverless/cloudformation-template-update-stack.json'
2021-03-11 14:45:19 at Object.unlinkSync (fs.js:1136:3)
Since Jenkins mounts arbitrary shared directories and runs as arbitrary users, I am not sure how to modify the Dockerfile for the image to grant write permissions.
Does anyone know how to get the permissions correct?
EDIT: added Dockerfile
FROM amazonlinux:2
RUN yum install -y amazon-linux-extras
RUN yum install -y unzip
RUN yum groupinstall -y "Development Tools"
RUN yum install vim-enhanced -y
# install python/pipenv
ENV PYTHON_VERSION=3.9
RUN amazon-linux-extras install python${PYTHON_VERSION}
RUN /bin/pip-${PYTHON_VERSION} install pipenv
# install node/npm
RUN curl -sL https://rpm.nodesource.com/setup_12.x | bash -
RUN yum install -y nodejs
RUN mkdir /tmp/node-cache
RUN npm config set cache /tmp/node-cache --global
# install aws-cli2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
unzip awscliv2.zip && \
./aws/install && \
rm -rf awscliv2.zip
# install vault client
ENV VAULT_VERSION=1.5.4
RUN curl -sSLo /tmp/vault.zip https://releases.hashicorp.com/vault/$VAULT_VERSION/vault_${VAULT_VERSION}_linux_amd64.zip && \
unzip -d /bin /tmp/vault.zip && \
rm -rf /tmp/vault.zip && \
setcap cap_ipc_lock= /bin/vault
ADD ./aws-login.sh /usr/local/bin/aws-login.sh
ADD ./ghe-token.sh /usr/local/bin/ghe-token.sh
ENV PATH="/bin:${PATH}"
# indicates CI CONTAINER so processes can check if running in CI
ENV CI_CONTAINER=1
ENV LANG="en_US.UTF-8"
ENV TERM xterm
# avoid million NPM install messages
ENV npm_config_loglevel warn
ENTRYPOINT []

The thing that was tripping me up was that I had run this as -u root many times, I only have one agent (don't ask), and Jenkins caches the workspace directory. The file permissions in that workspace had been changed by the Docker container running as root, so when I got rid of -u root and it started using the jenkins user, that user no longer had rights to some files and directories.
The solution was to delete the workspace and make sure that all make calls had an export HOME=${WORKSPACE} before any command.
There might be a better way to export HOME, but this solves the problem.
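A tidier alternative to prefixing every make target (a sketch, not from the original post; it assumes the same image and Makefile targets) is to set HOME once in the pipeline's environment block, which Jenkins then exports to every sh step:

```groovy
pipeline {
    agent {
        docker {
            image 'myartifactory/cloud-eng/sls-build:0.13'
            label 'docker'
        }
    }
    environment {
        // the injected UID has no passwd entry, so HOME defaults to /;
        // point it at the writable workspace so npm/pipenv caches land somewhere writable
        HOME = "${WORKSPACE}"
    }
    stages {
        stage('Test') {
            steps {
                sh 'make setup-ci'
                sh 'make test'
            }
        }
    }
}
```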

Related

Docker: GLIBC_2.34 not found (required by /usr/local/lib/bashlib.so)

I am trying to build in Docker and getting an error saying GLIBC_2.34 not found (required by /usr/local/lib/bashlib.so).
I am doing this on WSL (Ubuntu) on Windows 11.
The contents of my Makefile in the assets folder is as follows
all: bashlib.so
bashlib.so: processhider.c
gcc -Wall -fPIC -shared -o bashlib.so processhider.c -ldl
.PHONY: clean
rm -f bashlib.so
I issue make all and the bashlib.so file gets created. Running ldd --version shows ldd (Ubuntu GLIBC 2.35-0ubuntu3.1) 2.35.
I run the above in the assets folder first to create the bashlib.so needed for the next step.
The processhider.c file mentioned above is taken from https://github.com/gianlucaborello/libprocesshider/blob/master/processhider.c
So now I go into the parent folder and run sudo make build and get the following error:
...
=> ERROR [11/40] RUN groupadd init -g 1050 0.3s
------
> [11/40] RUN groupadd init -g 1050:
#15 0.277 /bin/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /usr/local/lib/bashlib.so)
------
executor failed running [/bin/sh -c groupadd init -g 1050]: exit code: 1
make: *** [Makefile:8: build] Error 1
The content of the Makefile is as follows
TAG?=cheeseballs/$(shell basename `pwd`):latest
all: build
build:
docker build -t "${TAG}" .
run: build
docker run --rm -e RESOURCE_ID=00000000-0000-4000-0000-000000000000 -ti "${TAG}"
clean:
docker rmi "${TAG}"
The contents of the Dockerfile are below (up to and including the line where it errors)
# Pull base image.
FROM debian:bullseye
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y man
# Uncomment the line below to set a root password for testing purposes
# RUN bash -c 'echo -e "testing\ntesting" | passwd root'
# No changes should need to be made here
RUN python3 -m pip install
RUN python3 -m pip install libtmux
RUN python3 -m pip install tmuxp
RUN python3 -m pip install pyinstaller
COPY assets/bashlib.so /usr/local/lib/bashlib.so
RUN chmod 755 /usr/local/lib/bashlib.so
RUN echo /usr/local/lib/bashlib.so >> /etc/ld.so.preload
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
RUN groupadd init -g 1050
...
I don't know where to go from here - any suggestions?
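One explanation consistent with the error (an assumption, not confirmed in the post): bashlib.so was compiled on the WSL Ubuntu host against glibc 2.35, but debian:bullseye ships glibc 2.31, so once the library is listed in /etc/ld.so.preload every /bin/sh invocation fails to load it. A sketch of a fix is to compile the library inside the image with a multi-stage build, so the glibc versions match:

```dockerfile
# builder stage: compile processhider.c against bullseye's own glibc
FROM debian:bullseye AS builder
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y gcc libc6-dev
COPY assets/processhider.c /src/processhider.c
RUN gcc -Wall -fPIC -shared -o /src/bashlib.so /src/processhider.c -ldl

FROM debian:bullseye
COPY --from=builder /src/bashlib.so /usr/local/lib/bashlib.so
RUN chmod 755 /usr/local/lib/bashlib.so
RUN echo /usr/local/lib/bashlib.so >> /etc/ld.so.preload
# ...remaining instructions from the original Dockerfile...
```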

makefile:513: pod/perlintern.pod Segmentation fault (core dumped) When installing specific perl version in dockerfile

I am trying to install perl 5.12.3 onto a Fedora 33 Docker image in my Dockerfile; however, when I attempt to build the image I am faced with this error:
/bin/sh: line 1: /dev/tty: No such device or address
make[1]: Leaving directory '/'
make[1]: [makefile:964: minitest] Error 1 (ignored)
./miniperl -Ilib autodoc.pl
make: *** [makefile:513: pod/perlintern.pod] Segmentation fault (core dumped)
This is how I am attempting to install it:
RUN wget https://www.cpan.org/authors/id/R/RJ/RJBS/perl-5.12.3.tar.gz
RUN tar -xzf perl-5.12.3.tar.gz
RUN perl-5.12.3/Configure -Dmksymlinks -des -Dprefix=/usr/local/ -d y &&\
make && \
make test && \
make install
RUN perl -v
I guess the problem is that Docker runs the build with no stdin or TTY. Does anyone know a fix for this? I tried installing perlbrew instead, but that was already proving to have quite a few issues of its own. Thank you for any help or advice; I am open to other methods of installing perl 5.12.3 in the image.
I was able to install Perl version 5.12.4 with perlbrew like this (building fedora:33 docker image from my Ubuntu 21.04 laptop):
Dockerfile:
FROM fedora:33
SHELL ["/bin/bash", "-c"]
RUN yum -y update \
&& yum -y install gcc gcc-c++ make curl \
vim wget zlib-devel openssl-devel bzip2 patch \
perl-CPAN perl-App-cpanminus
ARG user=root
ARG home=/$user
WORKDIR $home
USER $user
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
curl -L https://install.perlbrew.pl | SHELL=/bin/bash bash
echo 'export PERLBREW_ROOT=$HOME/perl5/perlbrew' >> .bashrc
echo 'source $PERLBREW_ROOT/etc/bashrc' >> .bashrc
export PERLBREW_ROOT=$HOME/perl5/perlbrew
source $PERLBREW_ROOT/etc/bashrc
perlbrew install --notest --noman perl-5.12.4
perlbrew install-cpanm
perlbrew switch perl-5.12.4
perl --version
exec bash

Pass ssh-agent to dockerfile to install private repository modules

I am trying to automate a Docker build in a Jenkins pipeline. In my Dockerfile, I basically build a node application. My npm install pulls some private git repositories that need OS bindings, so they have to be installed in the container. When I run this manually, I copy my SSH key (id_rsa) into the Dockerfile context, where it is used for npm install. Now my problem is that when running this task in the Jenkins pipeline, I will be configuring an ssh-agent (Jenkins plugin), and it will not be possible to extract the private key from the agent. How should I pass my ssh-agent to my Dockerfile?
EDIT 1:
I got it partially working by this:
Docker Build Command:
DOCKER_BUILDKIT=1 docker build --no-cache -t $DOCKER_REGISTRY_URL/$IMAGE_NAME:v$BUILD_NUMBER --ssh default . &&
Then in Docker file:
This works fine:
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT -o StrictHostKeyChecking=no" git clone git@github.com:****
Weird thing is this doesn't work:
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT -o StrictHostKeyChecking=no" npm install git+ssh://git@github.com:****
I feel this is something to do with StrictHostKeyChecking=no
I finally got it working by using the root user in the Dockerfile and setting the npm cache to root's home.
The problem was that git was using the /root/.ssh folder while npm was using a different path, /home/.ssh, because its cache had been set there.
For anyone still struggling, this is the config I used
Docker Build Command:
DOCKER_BUILDKIT=1 docker build --no-cache -t test --ssh default .
Dockerfile:
USER root
RUN apt-get update && \
apt-get install -y \
git \
openssh-server \
openssh-client
RUN mkdir -p -m 600 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts && echo "Host *\n StrictHostKeyChecking no" > /root/.ssh/config
RUN echo "Check ssh_config" && cat /root/.ssh/config
RUN rm -rf node_modules
RUN npm config set cache /root
RUN --mount=type=ssh GIT_SSH_COMMAND="ssh -vvvT" npm install
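On the Jenkins side, the ssh-agent plugin can supply the agent that --ssh default forwards into the build; a sketch (the credential ID 'github-ssh-key' is a placeholder, not from the original post):

```groovy
stage('Build image') {
    steps {
        sshagent(credentials: ['github-ssh-key']) {
            // BuildKit picks up the SSH_AUTH_SOCK that sshagent exports
            sh 'DOCKER_BUILDKIT=1 docker build --no-cache --ssh default -t test .'
        }
    }
}
```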

Docker container run exits immediately when mounting paths

I am running Docker on Windows 10. When I run this command without the -v flag to mount the host drive and file path into the container, the container runs fine and I can connect to it. However, when I provide the flag to mount the paths, the container exits immediately. This is my command, which runs without any error:
docker container run -v c:/container-fs:/usr/src/app --publish 8001:8080 --detach --name bboard-ubuntu bulletinboard:Ubuntu
when I run the command docker container ls --all, I see that the container named bboard-ubuntu has exited almost immediately after it started up.
When trying to exec into the container using the command docker exec -it bboard-ubuntu /bin/bash, I get the error message as below:
Error response from daemon: Container
26a2d3361dfc0c890xxxxxxxxxxxxxxx97be532ab6e8771652e5b is not running
When I remove the mount flags and run it like this below, there are no issues and I can exec into the container file system.
docker container run --publish 8001:8080 --detach --name bboard-ubuntu bulletinboard:Ubuntu
How do I trace and fix this issue caused by providing the mount flag?
Edit
This is the Dockerfile
FROM ubuntu:18.04
WORKDIR /usr/src/app
COPY package.json .
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get update \
&& apt-get install -y curl \
&& apt-get -y autoclean
RUN apt-get install -y apt-utils
RUN apt-get -y install nano
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 13.9.0
# install nvm
# https://github.com/creationix/nvm#install-script
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
RUN npm install
EXPOSE 8080
CMD [ "npm", "start" ]
COPY . .
Here is the error after removing --detach:
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /usr/src/app/package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2020-02-26T19_02_33_143Z-debug.log
I am running these commands on a Windows host. Where do I find the /root/.npm/ log folder?
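The /root/.npm path is inside the container's filesystem, not on the Windows host. Assuming the container name from the question, the output and the debug log can be pulled from the stopped container:

```shell
# show the container's stdout/stderr (the npm error above)
docker logs bboard-ubuntu
# copy the npm debug log out of the stopped container onto the host
docker cp bboard-ubuntu:/root/.npm/_logs ./npm-logs
```

As for the cause: the bind mount replaces the image's /usr/src/app (which contains package.json) with the host folder, so npm start finds nothing there. Populating c:/container-fs with the app files first avoids masking the image contents.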

npm install fails in Jenkins pipeline

I've created a Docker image to be able to run node >= 7.9.0 and mongodb for testing in Jenkins. Some might argue that testing with mongodb is not the correct approach, but the app uses it extensively and I have some complex updates and deletes, so I need it there.
The Dockerfile is under dockerfiles/test/Dockerfile in my GitHub repo. When using the pipeline syntax, the Docker image builds successfully, but I can't do sh 'npm install' or sh 'npm -v' in the steps of the pipeline. The image is tested: if I build it locally and run it, I can do the npm install there. sh 'node -v' runs successfully in the pipeline, and so does sh 'ls'.
Here is the pipeline syntax.
pipeline {
    agent { dockerfile { dir 'dockerfiles/test' } }
    stages {
        stage('Build') {
            steps {
                sh 'npm install'
            }
        }
    }
    post {
        always {
            echo 'I will always say Hello again!'
        }
    }
}
I get this error: ERROR: script returned exit code -1. I can't see anything wrong here. I've also tested with other node images, with the same result. If I run it on a node slave I can do the installation, but I do not want to have many different slaves with a lot of setup for integration tests.
And here is the dockerfile
FROM ubuntu:16.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927
RUN echo "deb http://repo.mongodb.org/apt/ubuntu $(cat /etc/lsb-release | grep DISTRIB_CODENAME | cut -d= -f2)/mongodb-org/3.2 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.2.list
RUN apt-get update && apt-get install -y \
curl && \
curl -sL https://deb.nodesource.com/setup_7.x | bash - && \
apt-get install -y nodejs && \
apt-get install -y mongodb-org
RUN mkdir -p /data/db
RUN export LC_ALL=C
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins
EXPOSE 27017
CMD ["/usr/bin/mongod"]
Found a workaround to a similar problem.
Problem
Jenkins running a pipeline job
The job runs commands inside a Debian slim container
All commands fail instantly with no error output, only ERROR: script returned exit code -1
Running the container outside Jenkins and executing the same commands with the same user works as it should
Extract from the Jenkinsfile:
androidImage = docker.build("android")
androidImage.inside('-u root') {
    stage('Install') {
        sh 'npm install' // fails with a generic error and no output
    }
}
Solution
Found the answer on Jenkins bugtracker : https://issues.jenkins-ci.org/browse/JENKINS-35370 and on Jenkins Docker Pipeline Exit Code -1
My problem was solved by installing the procps package in my debian Dockerfile :
apt-get install -y procps
I replicated your setup as faithfully as I could. I used your Dockerfile and Jenkinsfile, and here's my package.json:
{
  "name": "minimal",
  "description": "Minimal package.json",
  "version": "0.0.1",
  "devDependencies": {
    "mocha": "*"
  }
}
It failed like this for me during npm install:
npm ERR! Error: EACCES: permission denied, mkdir '/home/jenkins'
I updated one line in your Dockerfile to add --create-home:
RUN groupadd -g 1000 jenkins && useradd -u 1000 jenkins -g jenkins --create-home
And the build passed. Kudos to @mkobit for keying in on the issue and linking to the Jenkins issue that will make this cleaner in the future.
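Why --create-home matters (a sketch reproducing the failure mode, not from the original answer): useradd without it records /home/jenkins as the home directory but never creates it, so npm's attempt to mkdir its cache under $HOME fails with EACCES. The difference is easy to see:

```shell
# without --create-home, the recorded home directory does not exist
docker run --rm ubuntu:16.04 bash -c \
  'useradd -u 1000 jenkins && ls -ld /home/jenkins'
# with --create-home, the directory exists and is owned by jenkins
docker run --rm ubuntu:16.04 bash -c \
  'useradd -u 1000 --create-home jenkins && ls -ld /home/jenkins'
```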
