I have created a very basic Dockerfile which wraps the official Jenkins Docker image. I am running an instance of my image and have come across the following problem:
I start an instance of my image and create a file called tmp.txt in the /tmp directory (any directory will work).
I go to the "Script Console" of my running Jenkins instance and in the console, I type in the following code:
f = new File('/tmp/')
println f.list()
I expect to see a list of files including my newly created /tmp/tmp.txt, but that file is not in the list. This is symptomatic of the problem that is blocking me: I want to call evaluate() on a Groovy script from a Jenkinsfile, but that script is not visible to Jenkins.
My gut feeling is that this has something to do with Docker filesystem layers: since Jenkins is installed in a lower filesystem layer of the base image, perhaps it cannot access files created in the running container's layer above it. The behavior seems very weird, but somewhat understandable.
Has anyone encountered this issue before? If so, how were you able to resolve it?
FROM jenkins
ENV JAVA_OPTS="-Xmx8192m"
USER root
RUN mkdir /var/log/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
RUN useradd -ms /bin/bash jenkadm
USER jenkadm
WORKDIR /home/jenkadm
COPY id_rsa /home/jenkadm/.ssh/
COPY id_rsa.pub /home/jenkadm/.ssh/
USER root
RUN mkdir -p /opt/staples/ci-tools
RUN chown jenkadm:jenkadm /opt/staples/*
USER jenkins
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
Related
I created a container using jenkins/jenkins:lts-jdk11. As far as I know, a jenkins user should also be created with a home directory, but that isn't happening.
Below is the Dockerfile; am I doing anything wrong?
Dockerfile:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform .
COPY sencha .
COPY go .
COPY helm .
RUN chown -R jenkins:jenkins /var/jenkins_home
Built with:
docker build .
The image gets created and the container also gets created. I do see a jenkins user with id 1000, but this user has no home dir, and moreover helm, go, sencha and terraform are not installed.
I did exec into the container to double-check whether terraform is installed:
# terraform --version reports "command not found"
# which terraform also shows no result
The output is the same for go, sencha and helm.
Any suggestions?
You need to install the binaries somewhere on the PATH, such as /usr/local/bin/, as in this example:
FROM jenkins/jenkins:lts-jdk11
WORKDIR /var/jenkins_home
RUN apt-get update
COPY terraform /usr/local/bin/terraform
Btw, the Docker image jenkins/jenkins:lts-jdk11 is based on a Debian distribution, so you can use the apt package manager to install your apps.
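The missing-command symptom in the question is ordinary PATH lookup rather than anything Docker-specific. A minimal sketch outside Docker (using a fake terraform stand-in, a name invented here purely for illustration) shows the same behavior:

```shell
# Simulate "COPY terraform ." into a working directory vs. copying it
# onto the PATH, and watch how command lookup behaves.
workdir=$(mktemp -d)
bindir=$(mktemp -d)
emptydir=$(mktemp -d)

# A stand-in "terraform" executable (fake, for illustration only).
printf '#!/bin/sh\necho fake-terraform\n' > "$workdir/terraform"
chmod +x "$workdir/terraform"

# Like COPY into WORKDIR: the shell does not search the current
# directory, so with no matching PATH entry the command is not found.
cd "$workdir"
PATH="$emptydir" command -v terraform >/dev/null 2>&1 || echo "not found in PATH"

# Like COPY to /usr/local/bin: once the file is on the PATH, lookup works.
cp "$workdir/terraform" "$bindir/terraform"
PATH="$bindir" command -v terraform >/dev/null 2>&1 && echo "found in PATH"
```

The same logic is why the COPY into /usr/local/bin above works: that directory is on the default PATH inside the image.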
I have this Dockerfile setup:
FROM node:14.5-buster-slim AS base
WORKDIR /app
FROM base AS production
ENV NODE_ENV=production
RUN chown -R node:node /app
RUN chmod 755 /app
USER node
... other copies
COPY ./scripts/startup-production.sh ./
COPY ./scripts/healthz.sh ./
CMD ["./startup-production.sh"]
The problem I'm facing is that I can't execute ./healthz.sh because it's only executable by the node user. When I commented out the two RUN commands and the USER command, I could execute the file just fine. But I want to restrict the executable permissions to the node user for security reasons.
I need ./healthz.sh to be externally executable by Kubernetes' liveness & readiness probes.
How can I make it so? Folder restructuring or stuff like that are fine with me.
In most cases, you probably want your code to be owned by root, but to be world-readable, and for scripts to be world-executable. The Dockerfile COPY directive will copy in a file with its existing permissions from the host system (hidden in the list of bullet points at the end is a note that a file "is copied individually along with its metadata"). So the easiest way to approach this is to make sure the script has the right permissions on the host system:
# mode 0755 is readable and executable by everyone but only writable by owner
chmod 0755 healthz.sh
git commit -am 'make healthz script executable'
Then you can just COPY it in, without any special setup.
# Do not RUN chown or chmod; just
WORKDIR /app
COPY ./scripts/healthz.sh .
# Then when launching the container, specify
USER node
CMD ["./startup-production.sh"]
You should be able to verify this locally by running your container and manually invoking the health-check script:
docker run -d --name app the-image
# possibly with a `docker exec -u` option to specify a different user
docker exec app /app/healthz.sh && echo OK
The important thing to check is that the file is world-executable. You can also double-check this by looking at the built container:
docker run --rm the-image ls -l /app/healthz.sh
That should print out one line, starting with a permission string -rwxr-xr-x; the last three r-x are the important part. If you can't get the permissions right another way, you can also fix them up in your image build:
COPY ./scripts/healthz.sh .
# If you can't make the permissions on the original file right:
RUN chmod 0755 *.sh
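If you'd rather check a numeric mode than read the -rwxr-xr-x string, a small sketch using GNU coreutils stat can confirm the bits (the -c format flag is GNU-specific; BSD/macOS stat uses -f instead):

```shell
# Create a sample script and set mode 0755: owner rwx, group/world r-x.
tmp=$(mktemp -d)
touch "$tmp/healthz.sh"
chmod 0755 "$tmp/healthz.sh"

# Print the octal permission bits; 755 corresponds to -rwxr-xr-x.
stat -c '%a' "$tmp/healthz.sh"   # prints 755
```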
You could modify your Dockerfile's CMD like this: ["sh", "./startup-production.sh"]
This will run the script with sh, but it can be dangerous if your script uses bash-specific features like [[ ]] and relies on #!/bin/bash as its first line.
Moreover, I would use ENTRYPOINT here instead of CMD if you want this to run whenever the container is up.
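To illustrate the caveat, here is a minimal sketch (with a throwaway script) of the kind of construct that works under bash but can fail when forced through sh, since [[ ]] is a bash extension and on many distributions /bin/sh is dash:

```shell
# Write a throwaway script that relies on the bash-only [[ ]] test.
script=$(mktemp)
printf '[[ "abc" == a* ]] && echo ok\n' > "$script"

# Invoking it explicitly with bash works; "sh $script" may instead
# report "[[: not found" on systems where sh is dash.
bash "$script"   # prints ok
```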
Seems like a basic issue, but I couldn't find any answers so far.
When using ADD/COPY in a Dockerfile and running the image on Linux, the default file permission of the copied file in the image is 644, and the owner of the file seems to be root.
However, a non-root user starts the container, and that user cannot execute a file copied with 644 permissions; if the file is executed as the ENTRYPOINT, the container fails to start with a "permission denied" error.
I read in one of the posts that COPY/ADD supports --chown in Docker 17.09 and later, but in my case I don't know which non-root user will start the container, so I cannot set the ownership to that user.
I also saw another workaround: ADD/COPY the files to a temporary location and use RUN to copy them from there to the actual folder, as I am doing below. But this approach doesn't work, as the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated in figuring out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the file permission is in your build context from where you copy the file. If you control the source, then it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
A better option with newer versions of docker (and which didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
A --chmod flag was added to the ADD and COPY instructions in Docker CE 20.10, so you can now do:
COPY --chmod=0755 entrypoint.sh .
To be able to use it you need to enable BuildKit.
# enable buildkit for docker
DOCKER_BUILDKIT=1
# enable buildkit for docker-compose
COMPOSE_DOCKER_CLI_BUILD=1
Note: It seems to not be documented at this time, see this issue.
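Putting it together, assuming an image tag of my-jenkins (a placeholder name), a one-off build with BuildKit enabled would look like:

```shell
# Enable BuildKit for this single invocation so COPY --chmod is understood.
DOCKER_BUILDKIT=1 docker build -t my-jenkins .
```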
I'm running Jenkins in a Docker container by extending the official image in my own Dockerfile.
The top section of the image's documentation recommends putting the whole $JENKINS_HOME folder into a named volume in order to have changes made via the UI persist across container restarts and re-creations.
However, I do not want the whole $JENKINS_HOME folder to be part of this volume but only the $JENKINS_HOME/jobs folder. The reasons for this are:
Plugins are set up by the install_plugins.sh script from the base image during the image build process as documented here.
All other configuration will be created from scratch on each image build by the configuration-as-code plugin.
Only the jobs are not re-created from scratch on each image build and consequently should persist in a named volume.
In consequence I start the Jenkins container like this:
docker run \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-jobs:/var/jenkins_home/jobs \
my-custom-jenkins-image
The container now fails to properly start with permission denied errors in the logs. Checking the permissions inside $JENKINS_HOME via docker exec container_name_or_id ls -ahl /var/jenkins_home shows that $JENKINS_HOME/jobs is now owned by root instead of the jenkins user who owns all other files and subdirectories there and $JENKINS_HOME itself.
Interestingly enough, when putting the whole $JENKINS_HOME folder into a named volume, all files and subfolders in it are correctly owned by the jenkins user.
How could I only put the jobs folder into a named volume and make sure that it belongs to the jenkins user inside of the container?
edit:
My Dockerfile stripped down to the minimum looks like this. However, I don't suspect any of this to be the root cause since the same thing happens when running the jenkins/jenkins:lts stock image, as in:
docker run \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-jobs:/var/jenkins_home/jobs \
jenkins/jenkins:lts
The Dockerfile of the base image can be found on GitHub.
FROM jenkins/jenkins:lts
USER root
# install plugins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
# Configuration as code plugin
# The configuration file must be stored outside of the Jenkins home directory
# because this is mounted as a volume - consequently, changes to the file in
# the image would not make it into the container which would override it with
# the previous version from the volume.
ENV CASC_JENKINS_CONFIG=/run/jenkins.yaml
COPY --chown=jenkins:jenkins jenkins.yaml /run/jenkins.yaml
# don't run plugin and admin user setup wizard at first run
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
USER jenkins
Terrible work-around that fixes the problem until there is a better solution:
Add the following lines to the Dockerfile:
FROM jenkins/jenkins:lts
USER root
# Install additional tools and plugins, set up configuration etc.
# We need the gosu tool to step down from the root user to an unprivileged
# user as part of the entrypoint script.
# See further: https://github.com/tianon/gosu
RUN apt-get -y update && apt-get -y install gosu
# Note that we stay the root user and do not step down to the jenkins user yet.
COPY fix_volume_ownership.sh /usr/local/bin/fix_volume_ownership.sh
ENTRYPOINT ["/sbin/tini", "--", "/usr/local/bin/fix_volume_ownership.sh"]
Create fix_volume_ownership.sh:
#!/bin/bash
# This script is run by the root user in order to have the privileges to change
# ownership of the jobs directory. The jobs directory is mounted as a named
# volume and otherwise is owned by the root user so that the jenkins user
# cannot write into it.
#
# "gosu" finally steps down from the root user to the jenkins user since we
# do not want to run the Jenkins process with root privileges.
#
# /usr/local/bin/jenkins.sh is the original entrypoint script from the base image.
chown -R jenkins:jenkins /var/jenkins_home/jobs
gosu jenkins /usr/local/bin/jenkins.sh
Now, docker exec container_name_or_id ls -ahl /var/jenkins_home will show that the jobs subfolder is correctly owned by the jenkins user. In addition, docker exec container_name_or_id ps aux will show that the Jenkins process is being run by the jenkins user.
I'm trying to build a Docker image for Jenkins that automates the configuration of the server. I'd like to use YAML for my config files, and for that I need to make snakeyaml available to Groovy's Grapes. Here is my Dockerfile:
FROM jenkins/jenkins:2.107.3
ENV JAVA_OPTS="-Djenkins.install.runSetupWizard=false"
USER root
RUN mkdir -p /var/jenkins_home/files
RUN mkdir -p /var/jenkins_home/.groovy/grapes/org.yaml/snakeyaml/jars
RUN chown -R jenkins:jenkins /var/jenkins_home/files
RUN chown -R jenkins:jenkins /var/jenkins_home/.groovy/grapes/org.yaml/snakeyaml/jars
USER jenkins
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
COPY 03security.groovy /usr/share/jenkins/ref/init.groovy.d/03security.groovy
COPY ivy-1.21.xml /var/jenkins_home/.groovy/grapes/org.yaml/snakeyaml/ivy-1.21.xml
COPY snakeyaml-1.21.jar /var/jenkins_home/.groovy/grapes/org.yaml/snakeyaml/jars/snakeyaml-1.21.jar
COPY mainConfig.yml /var/jenkins_home/files/mainConfig.yml
COPY 03mainConfig.groovy /usr/share/jenkins/ref/init.groovy.d/03mainConfig.groovy
I don't know why I'm having this problem, but when I run the build I get this error:
chown: cannot access '/var/jenkins_home/files': No such file or directory
I've run similar commands in other images and not had this issue, but it won't let me create or access that directory, and I get the same error when I exclude it and try with only the .groovy/grapes path.
Any help with this is appreciated. Also, if you know a working solution for getting snakeyaml (or another library) loaded into a Jenkins Docker image, I'd like to see that too.
I guess it's because /var/jenkins_home/ is a volume. If you run docker history jenkins/jenkins you will see it:
<missing> 2 months ago /bin/sh -c #(nop) VOLUME [/var/jenkins_home] 0B
You can put the files you want copied into /var/jenkins_home into the directory /usr/share/jenkins/ref instead:
COPY *.xml /usr/share/jenkins/ref/
This means that all your XML files will be copied into /var/jenkins_home when the container starts.