How to access the Docker daemon from a container as a user other than root

I'm trying to run a Jenkins container that builds Docker images. I started with Docker last week and I'm a bit confused about the use of volumes from the host and how users are handled.
I've been searching the internet and found a GitHub issue where someone posted a solution for getting access to the Docker daemon from inside the container. Basically, the idea is to mount into the Jenkins container the host's Docker binary and docker.sock, like this:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
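For context, the whole service in my docker-compose.yml looks roughly like this (the ports and service name are my own):
version: '2'
services:
  jenkins:
    build: .
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/local/bin/docker:/usr/local/bin/docker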
I've done that and it works, but only if I'm root. When I started to learn Docker, I followed the example in a blog where, instead of directly using a Jenkins image, the author copied the Dockerfiles from the Jenkins image itself and its dependencies to explain the process. As part of that process, a jenkins user is created, and it is the one used when starting the container.
My problem now is that I cannot give the jenkins user access to the mounted docker.sock, since on the host it belongs to root and to the group docker. I tried adding a docker group in the Dockerfile, but I still get a permission denied error from a Jenkins job when it accesses docker.sock. If I inspect the mounted /var/run/docker.sock inside the container, I can see that docker.sock belongs to the group users instead of docker, so I don't know exactly what's going on when the socket is mounted. I haven't worked much with Linux, so my guess is that the docker group doesn't exist when the socket is mounted and a default group is used instead, but I may be completely wrong.
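From what I can tell, running stat on /var/run/docker.sock at build time (see the commented-out RUN in my Dockerfile below) can never work, because the socket is only bind-mounted when the container runs, not during docker build. Something like this in an entrypoint might work instead; this is an untested sketch that assumes the container starts as root and that a privilege-dropping helper like gosu is installed:
#!/bin/sh
# Untested sketch: align a 'docker' group inside the container with the
# gid that owns the mounted socket, then drop to the jenkins user.
DOCKER_SOCK=/var/run/docker.sock
if [ -S "$DOCKER_SOCK" ]; then
    DOCKER_GID=$(stat -c '%g' "$DOCKER_SOCK")
    groupadd -f -o -g "$DOCKER_GID" docker
    usermod -aG docker jenkins
fi
exec gosu jenkins /bin/tini -- /usr/local/bin/jenkins.sh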
Another thing I still don't get: if I create a container specifically to be used as a Jenkins container and nothing else is supposed to run there, what's the purpose of creating a dedicated jenkins user? Is there any reason why I can't just use root directly?
This is the Dockerfile I use. Thanks.
FROM centos:7
# Yum workaround to stalled mirror
RUN sed -i -e 's/enabled=1/enabled=0/g' /etc/yum/pluginconf.d/fastestmirror.conf
RUN rm -f /var/lib/rpm/__*
RUN rpm --rebuilddb -v -v
RUN yum clean all
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN yum -v install -y \
wget \
zip \
which \
openssh-client \
unzip \
java-1.8.0-openjdk-devel \
git \
&& yum clean all
#RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# SET Jenkins Environment Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ENV JENKINS_VERSION 2.22
ENV JENKINS_SHA 5b89b6967e7af8119c52c7e86223b47665417a22
ENV JENKINS_UC https://updates.jenkins-ci.org
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
# SET Java variables
ENV JAVA_HOME /usr/lib/jvm/java/jre
ENV PATH /usr/lib/jvm/java/bin:$PATH
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN useradd -d "$JENKINS_HOME" -u 1000 -m -s /bin/bash jenkins
#Not working. Folder not yet mounted?
#RUN DOCKER_GID=$(stat -c '%g' /var/run/docker.sock) && \
#Using gid from host
RUN groupadd -for -g 50 docker && \
usermod -aG docker jenkins
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins
RUN curl -fL http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war -o /usr/share/jenkins/jenkins.war \
&& echo "$JENKINS_SHA /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Prep Jenkins Directories
RUN chown -R jenkins "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R jenkins:jenkins /var/log/jenkins
RUN chown -R jenkins:jenkins /var/cache/jenkins
# Expose Ports for web and slave agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins.sh /usr/local/bin/jenkins.sh
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Install default plugins
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/plugins.sh /tmp/plugins.txt
# Add ssh key
RUN eval "$(ssh-agent -s)"
RUN mkdir /usr/share/jenkins/ref/.ssh && \
chmod 700 /usr/share/jenkins/ref/.ssh && \
ssh-keyscan github.com > /usr/share/jenkins/ref/.ssh/known_hosts
COPY id_rsa /usr/share/jenkins/ref/.ssh/id_rsa
COPY id_rsa /usr/share/jenkins/ref/.ssh/id_rsa.pub
COPY hudson.tasks.Maven.xml /usr/share/jenkins/ref/hudson.tasks.Maven.xml
RUN chown -R jenkins:jenkins /usr/share/jenkins/ref && \
chmod 600 /usr/share/jenkins/ref/.ssh/id_rsa && \
chmod 600 /usr/share/jenkins/ref/.ssh/id_rsa.pub && \
chmod 600 /usr/share/jenkins/ref/hudson.tasks.Maven.xml
COPY id_rsa /root/.ssh/id_rsa
COPY id_rsa /root/.ssh/id_rsa.pub
# ssh keys for root. To use root as the user
RUN chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub && \
ssh-keyscan github.com > /root/.ssh/known_hosts
# Switch to the jenkins user
USER jenkins
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]

Apparently the issue was the gid. For some reason I thought the docker group's gid on the host was 50, but it was actually 100. When I changed it to 100, the Jenkins job started to work.
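For anyone else hitting this, it is easy to check on the host instead of guessing:
stat -c '%g' /var/run/docker.sock   # prints the gid that owns the socket (100 in my case)
getent group 100                    # shows which group name(s) that gid resolves to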
I still don't know why docker.sock shows as belonging to the group users instead of docker inside the container, though. If I do cat /etc/group in the container I see
root:x:0:
...
users:x:100:
...
jenkins:x:1000:
docker:x:100:jenkins
and in the host
root:x:0:
lp:x:7:lp
nogroup:x:65534:
staff:x:50:docker
docker:x:100:docker
dockremap:x:101:dockremap
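If I'm reading this right, ls simply resolves the numeric gid to the first matching name in the container's /etc/group, and inside the container both users and docker have gid 100, with users listed first, which would explain the display; the permission check itself is purely numeric. This can be confirmed from inside the container:
getent group 100                       # lists every group entry with gid 100
stat -c '%g %G' /var/run/docker.sock   # the numeric gid and the name it resolves to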

Related

Unknown instruction: SUDO when I try to build the Docker image

When I try to build the Dockerfile below, I get the error "Error response from daemon: Dockerfile parse error line 12: unknown instruction: SUDO"
FROM jenkins
USER root
RUN apt-get -qqy update; apt-get install -qqy sudo
RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN wget http://get.docker.com/builds/Linux/x86_64/docker-latest.tgz
RUN tar -xvzf docker-latest.tgz
RUN mv docker/* /usr/bin/
USER jenkins
RUN /usr/local/bin/install-plugins.sh junit git git-client ssh-slaves greenballs chucknorris ws-cleanup
sudo mkdir -p /var/jenkins_home
cd /var/jenkins_home
sudo chown -R 1000 /var/jenkins_home
The commands below don't belong to Dockerfile syntax:
sudo mkdir -p /var/jenkins_home
cd /var/jenkins_home
sudo chown -R 1000 /var/jenkins_home
Add RUN in front of them if you want to run them. But the better practice is to mount a folder from the local machine into the container. If you are trying to map the Jenkins home folder, create /var/jenkins_home on the local system and then mount it into the Docker container with the -v option.
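For example, those three lines could become something like this (a sketch; note that a cd inside RUN doesn't persist between layers, so WORKDIR is the Dockerfile equivalent):
# sudo works here because the Dockerfile grants jenkins NOPASSWD above
RUN sudo mkdir -p /var/jenkins_home && sudo chown -R 1000 /var/jenkins_home
WORKDIR /var/jenkins_home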
You can follow the given link for using Docker inside dockerized Jenkins: https://medium.com/@manav503/how-to-build-docker-images-inside-a-jenkins-container-d59944102f30

Why "docker build" fails on local image

Probably I'm missing something obvious, but could someone please explain the following:
When I pull and run an image, e.g. docker pull dgraziotin/lamp && docker run -t -i -p 80:80 -p 3306:3306 --name osxlamp dgraziotin/lamp, it works just fine.
Now I want to play with Dockerfile and build it manually on my computer (I can do this, right?)
So I download the source files from GitHub (https://github.com/dgraziotin/osx-docker-lamp), cd to the unpacked folder and run docker build -t test .
The building process starts, but I see a lot of weird errors like "Package php5-mysql is not available". I tried different images with the same result. How do I properly build local images?
UPD:
Dockerfile
FROM phusion/baseimage:latest
MAINTAINER Daniel Graziotin <daniel@ineed.coffee>
ENV REFRESHED_AT 2016-03-29
# based on tutumcloud/tutum-docker-lamp
# MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
ENV DOCKER_USER_ID 501
ENV DOCKER_USER_GID 20
ENV BOOT2DOCKER_ID 1000
ENV BOOT2DOCKER_GID 50
# Tweaks to give Apache/PHP write permissions to the app
RUN usermod -u ${BOOT2DOCKER_ID} www-data && \
usermod -G staff www-data && \
useradd -r mysql && \
usermod -G staff mysql
RUN groupmod -g $(($BOOT2DOCKER_GID + 10000)) $(getent group $BOOT2DOCKER_GID | cut -d: -f1)
RUN groupmod -g ${BOOT2DOCKER_GID} staff
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
apt-get -y install supervisor wget git apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt zip unzip && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# needed for phpMyAdmin
RUN php5enmod mcrypt
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
# Remove pre-installed database
RUN rm -rf /var/lib/mysql
# Add MySQL utils
ADD create_mysql_users.sh /create_mysql_users.sh
RUN chmod 755 /*.sh
# Add phpmyadmin
RUN wget -O /tmp/phpmyadmin.tar.gz https://files.phpmyadmin.net/phpMyAdmin/4.6.0/phpMyAdmin-4.6.0-all-languages.tar.gz
RUN tar xfvz /tmp/phpmyadmin.tar.gz -C /var/www
RUN ln -s /var/www/phpMyAdmin-4.6.0-all-languages /var/www/phpmyadmin
RUN mv /var/www/phpmyadmin/config.sample.inc.php /var/www/phpmyadmin/config.inc.php
ENV MYSQL_PASS:-$(pwgen -s 12 1)
# config to enable .htaccess
ADD apache_default /etc/apache2/sites-available/000-default.conf
RUN a2enmod rewrite
# Configure /app folder with sample app
RUN mkdir -p /app && rm -fr /var/www/html && ln -s /app /var/www/html
ADD app/ /app
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for the app and MySql
VOLUME ["/etc/mysql", "/var/lib/mysql", "/app" ]
EXPOSE 80 3306
CMD ["/run.sh"]
SOLVED: As I understand it now, many custom images contain outdated/invalid code and should be avoided as much as possible. Rely on official, well-known and supported images.
Unrelated to the exact problem, but your Dockerfile could use some rework based on Best Practices for writing Dockerfiles.
I'd like to point out the ADD vs COPY best practice and the deprecated MAINTAINER instruction (you should use LABEL maintainer="Daniel Graziotin ").
Also, on the part where you add phpMyAdmin, there is no point using RUN wget instead of ADD unless you extract and delete the archive in the same layer (using multiline arguments). This is also covered under the ADD vs COPY best practices.
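As a sketch, those two suggestions applied to this Dockerfile would look like:
# LABEL replaces the deprecated MAINTAINER instruction
LABEL maintainer="Daniel Graziotin"
# fetch, extract and delete the archive in one layer, so the tarball
# never gets baked into an intermediate image
RUN wget -O /tmp/phpmyadmin.tar.gz https://files.phpmyadmin.net/phpMyAdmin/4.6.0/phpMyAdmin-4.6.0-all-languages.tar.gz \
 && tar xfz /tmp/phpmyadmin.tar.gz -C /var/www \
 && rm /tmp/phpmyadmin.tar.gz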
Other than that, I'd say this is a pretty solid Dockerfile! Sad that it won't work because of the application...

Docker - Celery as a daemon - no pidfiles found

I seem to have tried every solution on here, but none seem to be working and I'm not sure what I'm missing. I'm trying to run Celery as a daemon in my Docker container.
root@bae5de770400:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
root@bae5de770400:/itapp/itapp# /etc/init.d/celerybeat status
celery init v10.1.
Using configuration: /etc/default/celeryd
celerybeat is down: no pid file found
root@bae5de770400:/itapp/itapp#
I've seen lots of posts to do with permissions and I've tried them all to no avail.
This is my Dockerfile, which creates all the permissions and folders:
FROM python:latest
ENV PYTHONUNBUFFERED 1
# add source for snmp
RUN sed -i "s#jessie main#jessie main contrib non-free#g" /etc/apt/sources.list
# install dependencies
RUN apt-get update -y \
&& apt-get install -y apt-utils python-software-properties libsasl2-dev python3-dev libldap2-dev libssl-dev libsnmp-dev snmp-mibs-downloader git vim
# copy and install requirements
RUN mkdir /config
ADD /config/requirements.txt /config/
RUN pip install -r /config/requirements.txt
# create folders
RUN mkdir /itapp;
RUN mkdir /static;
# create celery user
RUN useradd -N -M --system -s /bin/false celery
RUN echo celery:"*****" | /usr/sbin/chpasswd
# celery perms
RUN groupadd grp_celery
RUN usermod -a -G grp_celery celery
RUN mkdir /var/run/celery/
RUN mkdir /var/log/celery/
RUN chown root:root /var/run/celery/
RUN chown root:root /var/log/celery/
# copy celery daemon files
ADD /config/celery/init_celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd
ADD /config/celery/celerybeat /etc/init.d/celerybeat
RUN chmod +x /etc/init.d/celerybeat
RUN chmod 755 /etc/init.d/celeryd
RUN chown root:root /etc/init.d/celeryd
RUN chmod 755 /etc/init.d/celerybeat
RUN chown root:root /etc/init.d/celerybeat
# copy celery config
ADD /config/celery/default_celeryd /etc/default/celeryd
# RUN /etc/init.d/celeryd start
# set working DIR for copying code
WORKDIR /itapp
If I start it manually, it works:
celery -A itapp worker -l info
/usr/local/lib/python3.6/site-packages/celery/platforms.py:795: RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the -u option.
...
[2017-09-25 17:29:51,707: INFO/MainProcess] Connected to amqp://it-app:**@rabbitmq:5672/it-app-vhost
[2017-09-25 17:29:51,730: INFO/MainProcess] mingle: searching for neighbors
[2017-09-25 17:29:52,764: INFO/MainProcess] mingle: all alone
The init.d files are copied from the Celery repo, and these are the contents of my default file, in case it helps:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="itapp"
# or fully qualified:
# Where to chdir at start.
CELERYD_CHDIR="/itapp/itapp/"
# Extra command-line arguments to the worker
CELERYD_OPTS="flower --time-limit=300 --concurrency=8"
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
The only thing in this file that may be wrong, I think, is the CELERY_BIN value; I'm not sure what to set it to in a Docker container.
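For what it's worth, pip in the official python image installs console scripts under /usr/local/bin, so that value is most likely correct; it can be confirmed from inside the container:
command -v celery    # should print /usr/local/bin/celery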
Thanks
So you had a few issues in your Dockerfile:
The celery user's shell was set to /bin/false, which didn't allow any process to be started.
You needed to give the celery user permission on /var/run/celery and /var/log/celery.
/etc/default/celeryd should have 640 permissions.
There were also too many layers in your Dockerfile.
So I updated the Dockerfile to below
FROM python:latest
ENV PYTHONUNBUFFERED 1
# add source for snmp
RUN sed -i "s#jessie main#jessie main contrib non-free#g" /etc/apt/sources.list
# install dependencies
RUN apt-get update -y \
&& apt-get install -y apt-utils python-software-properties libsasl2-dev python3-dev libldap2-dev libssl-dev libsnmp-dev git vim
# copy and install requirements
RUN mkdir /config
ADD /config/requirements.txt /config/
RUN pip install -r /config/requirements.txt
# create folders
RUN mkdir /itapp && mkdir /static;
# create celery user
RUN useradd -N -M --system -s /bin/bash celery && echo celery:"B1llyB0n3s" | /usr/sbin/chpasswd
# celery perms
RUN groupadd grp_celery && usermod -a -G grp_celery celery && mkdir -p /var/run/celery/ /var/log/celery/
RUN chown -R celery:grp_celery /var/run/celery/ /var/log/celery/
# copy celery daemon files
ADD /config/celery/init_celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd
ADD /config/celery/celerybeat /etc/init.d/celerybeat
RUN chmod 750 /etc/init.d/celeryd /etc/init.d/celerybeat
RUN chown root:root /etc/init.d/celeryd /etc/init.d/celerybeat
# copy celery config
ADD /config/celery/default_celeryd /etc/default/celeryd
RUN chmod 640 /etc/default/celeryd
# set working DIR for copying code
ADD /itapp/ /itapp/itapp
WORKDIR /itapp
Then I got into the web service container and it all worked fine:
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd start
celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v4.1.0 (latentcall)
> Starting nodes...
> worker1@ab658c5d0c67: OK
> flower@ab658c5d0c67: OK
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd (node worker1) (pid 66) is up...
root@ab658c5d0c67:/itapp/itapp#

Building a Jenkins Docker image from the official Dockerfile

I am trying to build a Jenkins Docker image from the official Jenkins Git repo:
https://github.com/jenkinsci/docker.
But when I try to run a container from the image using docker run -it -dP jenkins, it exits immediately, and when I check the Docker logs I get the following error:
: invalid option
I read that the error could be because the PID of tini is not 1. I looked at the documentation and saw that doing either of the following should solve the issue:
Passing the -s argument to Tini (tini -s -- ...)
Setting the environment variable TINI_SUBREAPER (e.g. export TINI_SUBREAPER=).
But it did not solve anything.
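For reference, as I read the tini README, the two suggestions translate to the Dockerfile as follows:
# option 1: pass -s to tini itself
ENTRYPOINT ["/bin/tini", "-s", "--", "/usr/local/bin/jenkins.sh"]
# option 2: set the environment variable instead (an empty value is enough)
ENV TINI_SUBREAPER=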
The following is the exact copy of the Dockerfile being built with docker build -t jenkins .:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
ARG http_port=8080
ARG agent_port=50000
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT ${agent_port}
ENV TINI_SUBREAPER=
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
ENV TINI_VERSION 0.14.0
ENV TINI_SHA 6c41ec7d33e857d4779f14d9c74924cab0c7973485d2972419a3b7c7620ff5fd
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static-amd64 -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha256sum -c -
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.60.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=34fde424dde0e050738f5ad1e316d54f741c237bd380bd663a07f96147bb1390
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -k -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha256sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE ${http_port}
# will be used by attached slave agents:
EXPOSE ${agent_port}
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
The problem was with the Docker version: mine was old. I'm not sure which instruction was not supported, but the newer Docker built the Dockerfile.
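If someone hits something similar, comparing the client and daemon versions is a quick first check:
docker version --format 'client: {{.Client.Version}} server: {{.Server.Version}}'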

How to create a Jenkins job and/or user from a dockerfile?

I am trying to set up a customised Jenkins 2 server from a dockerfile.
I use the official image and I want to be able to add things that I need like custom jobs and an admin user.
This is my dockerfile so far:
FROM openjdk:8-jdk
RUN apt-get update && apt-get install -y git curl && rm -rf /var/lib/apt/lists/*
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000
# If you bind mount a volume from the host or a data container,
# ensure you use the same uid
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.2}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=32b8bd1a86d6d4a91889bd38fb665db4090db081
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=https://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
# could use ADD but this one does not check Last-Modified header neither does it allow to control checksum
# see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
ENV JENKINS_UC https://updates.jenkins.io
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
# for main web interface:
EXPOSE 8080
# will be used by attached slave agents:
EXPOSE 50000
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
USER ${user}
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.txt /usr/share/jenkins/plugins.txt
COPY plugins.sh /usr/local/bin/plugins.sh
COPY install-plugins.sh /usr/local/bin/install-plugins.sh
# Add the command line tools
COPY jenkins-cli.jar "$JENKINS_HOME"
# Create jobs
ARG job_name_1="my_super_job"
#ARG job_name_2="my_ultra_job"
# create the jobs folder recursively
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/workspace/
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastFailedBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastStableBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastSuccessfulBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastUnstableBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/lastUnsuccessfulBuild
RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_1}/builds/legacyIds
#RUN mkdir -p "$JENKINS_HOME"/jobs/${job_name_2}
## add the custom configs to the container
COPY ${job_name_1}_config.xml "$JENKINS_HOME"/jobs/${job_name_1}/config.xml
USER root
#RUN chmod 600 "$JENKINS_HOME"/jobs/${job_name_1}/config.xml
RUN java -jar /var/jenkins_home/jenkins-cli.jar -s http://localhost:8080 create-job my_super_job < /var/jenkins_home/jobs/my_super_job/config.xml
#COPY ${job_name_2}_config.xml "$JENKINS_HOME"/jobs/${job_name_2}/config.xml
# --Install plugins--
# Notice: Deprecated method which however works with a 'plugins.txt' file
#USER root
#RUN chmod 600 /usr/share/jenkins/plugins.txt
#RUN chmod 600 /usr/local/bin/install-plugins.sh
#RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
# Notice: Recommended method with open case on Github [https://github.com/jenkinsci/docker/issues/348]
# Notice: Select whichever plugins you want
#RUN /usr/local/bin/install-plugins.sh \
#dashboard-view:2.9.10 \
#pipeline-stage-view:2.2 \
#parameterized-trigger:2.32 \
#bitbucket:1.1.5 \
#git:3.0.0 \
#github:1.22.4
# --Install plugins--
I have tried to create a job at build time by first launching a container, creating the job manually, saving the config.xml file, and then copying it into the image from the Dockerfile. I am also trying to replicate the file/folder structure that Jenkins creates for a job.
But it is not working. The job does not appear in Jenkins.
I also tried to use jenkins-cli.jar, but as I understand it, there must be a live Jenkins server to connect to before anything can be executed, which is not the case at build time.
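The only hook I've found that doesn't need a live server at build time is the ref directory itself: everything under /usr/share/jenkins/ref/init.groovy.d/ is copied into JENKINS_HOME and executed by Jenkins at startup, once the server is actually up, so a setup script could be shipped from the Dockerfile (a sketch; the script name is my own):
# runs at startup, when a live Jenkins instance actually exists
COPY create-admin.groovy /usr/share/jenkins/ref/init.groovy.d/create-admin.groovy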
Finally, I suppose creating an admin user at build time must be way more complicated than creating a job...
So, does anyone have any experience on this?
