Change gradle Dockerfile to be executed as root user

I am working in GitLab and want to use Gradle to build my Java project, but I ran into this bug with the GitLab runner: https://gitlab.com/gitlab-org/gitlab-runner/issues/2570
One comment is: "I can confirm that it works in v9.1.3 but v9.2.0 is broken. Only when I use the root user inside the container can I proceed. That really should be fixed, because this regression seriously impacts security."
So my question is: in which places do I have to change the Dockerfile so that it executes as the root user? https://github.com/keeganwitt/docker-gradle/blob/b0419babd3271f6c8e554fbc8bbd8dc909936763/jdk8-alpine/Dockerfile
My idea is to change the Dockerfile so that it executes as root, push it to my registry, and use it inside GitLab. But I am not familiar enough with Linux/Docker to know where the user is defined in the file. Maybe I am totally wrong?
build_java:
  image: gradle:4.4.1-jdk8-alpine-root
  stage: build_java
  script:
    - gradle build
  artifacts:
    expire_in: 1 hour # Workaround to delete artifacts after the build; we only use artifacts to keep build output between stages (not after the build)
    paths:
      - build/
      - .gradle/
Dockerfile
FROM openjdk:8-jdk-alpine
CMD ["gradle"]
ENV GRADLE_HOME /opt/gradle
ENV GRADLE_VERSION 4.4.1
ARG GRADLE_DOWNLOAD_SHA256=e7cf7d1853dfc30c1c44f571d3919eeeedef002823b66b6a988d27e919686389
RUN set -o errexit -o nounset \
    && echo "Installing build dependencies" \
    && apk add --no-cache --virtual .build-deps \
        ca-certificates \
        openssl \
        unzip \
    \
    && echo "Downloading Gradle" \
    && wget -O gradle.zip "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip" \
    \
    && echo "Checking download hash" \
    && echo "${GRADLE_DOWNLOAD_SHA256} *gradle.zip" | sha256sum -c - \
    \
    && echo "Installing Gradle" \
    && unzip gradle.zip \
    && rm gradle.zip \
    && mkdir /opt \
    && mv "gradle-${GRADLE_VERSION}" "${GRADLE_HOME}/" \
    && ln -s "${GRADLE_HOME}/bin/gradle" /usr/bin/gradle \
    \
    && apk del .build-deps \
    \
    && echo "Adding gradle user and group" \
    && addgroup -S -g 1000 gradle \
    && adduser -D -S -G gradle -u 1000 -s /bin/ash gradle \
    && mkdir /home/gradle/.gradle \
    && chown -R gradle:gradle /home/gradle \
    \
    && echo "Symlinking root Gradle cache to gradle Gradle cache" \
    && ln -s /home/gradle/.gradle /root/.gradle
# Create Gradle volume
USER gradle
VOLUME "/home/gradle/.gradle"
WORKDIR /home/gradle
RUN set -o errexit -o nounset \
&& echo "Testing Gradle installation" \
&& gradle --version
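If I read it correctly, the container user is defined by the USER gradle directive near the end of that Dockerfile, so my untested guess is that a root variant would simply replace it, with the image ending like this:
# My assumption: switch to root instead of the unprivileged gradle user
USER root
VOLUME "/home/gradle/.gradle"
WORKDIR /home/gradle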
EDIT:
Okay, how do I use Gradle in Docker once it has been downloaded as an image and is available in GitLab?
build_java:
  image: docker:dind
  stage: build_java
  script:
    - docker images
    - docker login -u _json_key -p "$(echo $GCR_SERVICE_ACCOUNT | base64 -d)" https://eu.gcr.io
    - docker pull eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root
    - docker images
    - ??WHAT COMMAND TO CALL GRADLE BUILD??
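My untested guess for the missing command would be to mount the checked-out project into the pulled image and run Gradle there, something like:
docker run --rm -v "$PWD":/home/gradle/project -w /home/gradle/project eu.gcr.io/test/gradle:4.4.1-jdk8-alpine-root gradle build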

Related

Run 'opentsdb' image as non-root

I'm trying to build a custom image of OpenTSDB to run as a non-root user. Our k8s clusters have security policies that don't allow containers to run as root. I'm using an existing Dockerfile from here: https://hub.docker.com/r/petergrace/opentsdb-docker/dockerfile
Below is my Dockerfile, where I have added an extra step to create a new user 'opentsdb' and, at the end, run as USER 'opentsdb'.
FROM alpine:latest
ENV TINI_VERSION v0.18.0
ENV TSDB_VERSION 2.4.0
ENV HBASE_VERSION 1.4.4
ENV GNUPLOT_VERSION 5.2.4
ENV JAVA_HOME /usr/lib/jvm/java-1.8-openjdk
ENV PATH $PATH:/usr/lib/jvm/java-1.8-openjdk/bin/
ENV ALPINE_PACKAGES "rsyslog bash openjdk8 make wget libgd libpng libjpeg libwebp libjpeg-turbo cairo pango lua"
ENV BUILD_PACKAGES "build-base autoconf automake git python3-dev cairo-dev pango-dev gd-dev lua-dev readline-dev libpng-dev libjpeg-turbo-dev libwebp-dev sed"
ENV HBASE_OPTS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
ENV JVMARGS "-XX:+UseConcMarkSweepGC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -enableassertions -enablesystemassertions"
RUN addgroup opentsdb && adduser -D -u 100 -G opentsdb opentsdb
# Tini is a tiny init that helps when a container is being culled to stop things nicely
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-static-amd64 /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
# Add the base packages we'll need
RUN apk --update add apk-tools \
    && apk add ${ALPINE_PACKAGES} \
        # repo required for gnuplot \
        --repository http://dl-cdn.alpinelinux.org/alpine/v3.0/testing/ \
    && mkdir -p /opt/opentsdb
WORKDIR /opt/opentsdb/
# Add build deps, build opentsdb, and clean up afterwards.
RUN set -ex && apk add --virtual builddeps ${BUILD_PACKAGES}
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN wget --no-check-certificate \
        -O v${TSDB_VERSION}.zip \
        https://github.com/OpenTSDB/opentsdb/archive/v${TSDB_VERSION}.zip \
    && unzip v${TSDB_VERSION}.zip \
    && rm v${TSDB_VERSION}.zip \
    && cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
    && echo "tsd.http.request.enable_chunked = true" >> src/opentsdb.conf \
    && echo "tsd.http.request.max_chunk = 1000000" >> src/opentsdb.conf
RUN cd /opt/opentsdb/opentsdb-${TSDB_VERSION} \
&& find . | xargs grep -s central.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/central/https:\/\/repo1/g" \
&& find . | xargs grep -s repo1.maven.org | cut -f1 -d : | xargs sed -i "s/http:\/\/repo1/https:\/\/repo1/g" \
&& ./build.sh \
&& cp build-aux/install-sh build/build-aux \
&& cd build \
&& make install \
&& cd / \
&& rm -rf /opt/opentsdb/opentsdb-${TSDB_VERSION}
RUN cd /tmp && \
wget --no-check-certificate https://sourceforge.net/projects/gnuplot/files/gnuplot/${GNUPLOT_VERSION}/gnuplot-${GNUPLOT_VERSION}.tar.gz && \
tar xzf gnuplot-${GNUPLOT_VERSION}.tar.gz && \
cd gnuplot-${GNUPLOT_VERSION} && \
./configure && \
make install && \
cd /tmp && rm -rf /tmp/gnuplot-${GNUPLOT_VERSION} && rm /tmp/gnuplot-${GNUPLOT_VERSION}.tar.gz
RUN apk del builddeps && rm -rf /var/cache/apk/*
#Install HBase and scripts
RUN mkdir -p /data/hbase /root/.profile.d /opt/downloads
WORKDIR /opt/downloads
RUN wget -O hbase-${HBASE_VERSION}.bin.tar.gz http://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz \
&& tar xzvf hbase-${HBASE_VERSION}.bin.tar.gz \
&& mv hbase-${HBASE_VERSION} /opt/hbase \
&& rm -r /opt/hbase/docs \
&& rm hbase-${HBASE_VERSION}.bin.tar.gz
# Add misc startup files
RUN ln -s /usr/local/share/opentsdb/etc/opentsdb /etc/opentsdb \
&& rm /etc/opentsdb/opentsdb.conf \
&& mkdir /opentsdb-plugins
ADD files/opentsdb.conf /etc/opentsdb/opentsdb.conf.sample
ADD files/hbase-site.xml /opt/hbase/conf/hbase-site.xml.sample
ADD files/start_opentsdb.sh /opt/bin/
ADD files/create_tsdb_tables.sh /opt/bin/
ADD files/start_hbase.sh /opt/bin/
ADD files/entrypoint.sh /entrypoint.sh
# Fix ENV variables in installed scripts
RUN for i in /opt/bin/start_hbase.sh /opt/bin/start_opentsdb.sh /opt/bin/create_tsdb_tables.sh; \
    do \
        sed -i "s#::JAVA_HOME::#$JAVA_HOME#g; s#::PATH::#$PATH#g; s#::TSDB_VERSION::#$TSDB_VERSION#g;" $i; \
    done
RUN echo "export HBASE_OPTS=\"${HBASE_OPTS}\"" >> /opt/hbase/conf/hbase-env.sh
#4242 is tsdb, rest are hbase ports
EXPOSE 60000 60010 60030 4242 16010 16070
USER opentsdb
#HBase is configured to store data in /data/hbase, vol-mount it to persist your data.
VOLUME ["/data/hbase", "/tmp", "/opentsdb-plugins"]
CMD ["/entrypoint.sh"]
However, the newly built image throws a "permission denied" error for the files in /opt/bin/, and OpenTSDB does not get deployed correctly.
Locally, using Docker Desktop, everything works fine as root when I run the command below:
docker run -dp 4242:4242 petergrace/opentsdb-docker
Do I need to use any chown commands too?
Could you help me get OpenTSDB deployed correctly with UID 100? Thanks in advance!
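For context, my current guess (untested) is that the files copied in with ADD stay owned by root, so something like the following before the USER opentsdb line might be needed:
# Guess: hand ownership of the runtime files to the new user and make the scripts executable
RUN chown -R opentsdb:opentsdb /opt/bin /data/hbase /opentsdb-plugins /entrypoint.sh \
    && chmod +x /opt/bin/*.sh /entrypoint.sh
USER opentsdb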

Can you install command-line packages in a jib docker image?

I need to install command-line tools like jq, curl, etc. in the Docker image created by the Maven Jib plugin. How can I achieve this? Any help would be greatly appreciated. Thanks.
As explained in the other answer, using a base image customized with pre-installed tools that rarely change is a good solution.
Alternatively, you may add curl using Jib's <extraDirectories> feature, which enables adding arbitrary files to the target image. Check the Maven and Gradle docs for more details. As explained in the docs, you will also need to configure <permissions> to set the executable bit on curl.
If you prefer, you could even set up your Maven or Gradle builds to download curl and unpack it. Here's an example Jib setup (showing both Maven and Gradle) from the Jib repository.
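As a rough sketch of the Maven side (the local directory name is a placeholder; adjust it to wherever you place the curl binary):
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <configuration>
    <extraDirectories>
      <paths>
        <path>
          <!-- placeholder: a local directory containing the curl binary -->
          <from>src/main/jib-extras</from>
          <into>/usr/local/bin</into>
        </path>
      </paths>
      <permissions>
        <permission>
          <file>/usr/local/bin/curl</file>
          <mode>755</mode>
        </permission>
      </permissions>
    </extraDirectories>
  </configuration>
</plugin>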
Here is a reference Dockerfile; you can build your own base image by writing a Dockerfile like it and then building it.
FROM openjdk:8-jdk-alpine
RUN apk add --no-cache curl tar bash procps
# Downloading and installing Maven
ARG MAVEN_VERSION=3.6.1
ARG USER_HOME_DIR="/root"
ARG SHA=b4880fb7a3d81edd190a029440cdf17f308621af68475a4fe976296e71ff4a4b546dd6d8a58aaafba334d309cc11e638c52808a4b0e818fc0fd544226d952544
ARG BASE_URL=https://apache.osuosl.org/maven/maven-3/${MAVEN_VERSION}/binaries
RUN mkdir -p /usr/share/maven /usr/share/maven/ref \
    && echo "Downloading Maven" \
    && curl -fsSL -o /tmp/apache-maven.tar.gz ${BASE_URL}/apache-maven-${MAVEN_VERSION}-bin.tar.gz \
    \
    && echo "Checking download hash" \
    && echo "${SHA} /tmp/apache-maven.tar.gz" | sha512sum -c - \
    \
    && echo "Unpacking Maven" \
    && tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1 \
    \
    && echo "Cleaning and setting links" \
    && rm -f /tmp/apache-maven.tar.gz \
    && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn
ENV MAVEN_HOME /usr/share/maven
ENV MAVEN_CONFIG "$USER_HOME_DIR/.m2"
# Downloading and installing Gradle
# 1- Define a constant with the version of gradle you want to install
ARG GRADLE_VERSION=4.0.1
# 2- Define the URL where gradle can be downloaded from
ARG GRADLE_BASE_URL=https://services.gradle.org/distributions
# 3- Define the SHA key to validate the gradle download
# obtained from here https://gradle.org/release-checksums/
ARG GRADLE_SHA=d717e46200d1359893f891dab047fdab98784143ac76861b53c50dbd03b44fd4
# 4- Create the directories, download gradle, validate the download, install it, remove downloaded file and set links
RUN mkdir -p /usr/share/gradle /usr/share/gradle/ref \
    && echo "Downloading Gradle" \
    && curl -fsSL -o /tmp/gradle.zip ${GRADLE_BASE_URL}/gradle-${GRADLE_VERSION}-bin.zip \
    \
    && echo "Checking download hash" \
    && echo "${GRADLE_SHA} /tmp/gradle.zip" | sha256sum -c - \
    \
    && echo "Unzipping Gradle" \
    && unzip -d /usr/share/gradle /tmp/gradle.zip \
    \
    && echo "Cleaning and setting links" \
    && rm -f /tmp/gradle.zip \
    && ln -s /usr/share/gradle/gradle-${GRADLE_VERSION} /usr/bin/gradle
# 5- Define environmental variables required by gradle
ENV GRADLE_VERSION 4.0.1
ENV GRADLE_HOME /usr/bin/gradle
ENV GRADLE_USER_HOME /cache
ENV PATH $PATH:$GRADLE_HOME/bin
VOLUME $GRADLE_USER_HOME
CMD [""]
Ref: https://docs.docker.com/engine/reference/builder/
Once your custom image is ready, push it to your registry and then reference it in Jib as follows:
mvn compile jib:build \
    -Djib.from.image=customImage

Run ansible-playbook in Jenkins container

Question: Is it possible to have a docker-compose file that runs ansible-playbook in a Jenkins container?
Summary:
I have a Jenkins container (containerA) from which I would like to run ansible-playbook. However, since the container stops after execution, I don't know how to define a non-running container in docker-compose.
I have posted the output of docker ps -a, the docker-compose file, and the Dockerfile for ansible-playbook.
Please let me know if my question is unclear.
PG
root@jenkins1:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c47a4ee06d71 jenkins/jenkins "/sbin/tini -- /usr/…" 2 months ago Up 2 months 0.0.0.0:50000->50000/tcp, 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp jenkins1
956309ae7370 foo/ansible "ansible-playbook" 2 months ago Exited (2) 2 months ago hopeful_hypatia
cat /opt/docker_jenkins/docker-compose.yml
version: '3.2'
services:
  jenkins:
    restart: always
    image: 'jenkins/jenkins'
    container_name: jenkins1
    ports:
      - '80:8080'
      - '443:8443'
      - '50000:50000'
    volumes:
      - type: volume
        source: jenkins_home
        target: /var/jenkins_home
        volume:
          nocopy: true
      - type: bind
        source: /var/lib/bin
        target: /root/.local/bin
volumes:
  jenkins_home:
The Dockerfile for ansible-playbook:
FROM alpine:3.7
ENV ANSIBLE_VERSION 2.8.6
ENV BUILD_PACKAGES \
    bash \
    curl \
    tar \
    openssh-client \
    sshpass \
    git \
    python \
    py-boto \
    py-dateutil \
    py-httplib2 \
    py-jinja2 \
    py-paramiko \
    py-pip \
    py-yaml \
    ca-certificates
# If installing ansible@testing
#RUN \
#    echo "@testing http://nl.alpinelinux.org/alpine/edge/testing" >> /etc/apk/repositories
RUN set -x && \
    \
    echo "==> Adding build-dependencies..." && \
    apk --update add --virtual build-dependencies \
        gcc \
        musl-dev \
        libffi-dev \
        openssl-dev \
        python-dev && \
    \
    echo "==> Upgrading apk and system..." && \
    apk update && apk upgrade && \
    \
    echo "==> Adding Python runtime..." && \
    apk add --no-cache ${BUILD_PACKAGES} && \
    pip install --upgrade pip && \
    pip install python-keyczar docker-py && \
    \
    echo "==> Installing Ansible..." && \
    pip install ansible==${ANSIBLE_VERSION} && \
    \
    echo "==> Cleaning up..." && \
    apk del build-dependencies && \
    rm -rf /var/cache/apk/* && \
    \
    echo "==> Adding hosts for convenience..." && \
    mkdir -p /etc/ansible /ansible && \
    echo "[local]" >> /etc/ansible/hosts && \
    echo "localhost" >> /etc/ansible/hosts
ENV ANSIBLE_GATHERING smart
ENV ANSIBLE_HOST_KEY_CHECKING false
ENV ANSIBLE_RETRY_FILES_ENABLED false
ENV ANSIBLE_ROLES_PATH /ansible/playbooks/roles
ENV ANSIBLE_SSH_PIPELINING True
ENV PYTHONPATH /ansible/lib
ENV PATH /ansible/bin:$PATH
ENV ANSIBLE_LIBRARY /ansible/library
WORKDIR /ansible/playbooks
ENTRYPOINT ["ansible-playbook"]
A Docker container stays up only while there is a process or service keeping it running.
In your Dockerfile the entrypoint is the ansible-playbook command; run without arguments, it will error out stating "ERROR! You must specify a playbook file to run", along with the help output.
If you want to execute an Ansible playbook, you have to pass more arguments.
A successful playbook execution looks like:
ansible-playbook -i <inventory_file> <playbook_name>
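To address the docker-compose part of the question: since the container is not meant to stay running, one option (a sketch; the service name, mount, and playbook names are assumptions) is to define it as a service without a restart policy:
services:
  ansible:
    image: foo/ansible
    volumes:
      - ./playbooks:/ansible/playbooks
Then invoke it as a one-shot job with docker-compose run --rm ansible -i hosts site.yml; everything after the service name is passed as arguments to the ansible-playbook entrypoint.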

Gradle docker container ignores cache when started from Jenkinsfile

I am running a build in a Gradle container with a volume for the cache, but Gradle does not make use of the downloaded dependencies in the cache for subsequent builds.
Here's the Dockerfile for the Gradle image:
FROM **custom image base**
# Install the Java Development Kit
RUN apk --no-cache add openjdk8=8.131.11-r2
CMD ["gradle"]
ENV GRADLE_HOME /opt/gradle
ENV GRADLE_VERSION 4.6
ARG GRADLE_DOWNLOAD_SHA256=98bd5fd2b30e070517e03c51cbb32beee3e2ee1a84003a5a5d748996d4b1b915
RUN set -o errexit -o nounset \
    && echo "Installing build dependencies" \
    && apk add --no-cache --virtual .build-deps \
        ca-certificates \
        openssl \
        unzip \
    \
    && echo "Downloading Gradle" \
    && wget -O gradle.zip "https://services.gradle.org/distributions/gradle-${GRADLE_VERSION}-bin.zip" \
    \
    && echo "Checking download hash" \
    && echo "${GRADLE_DOWNLOAD_SHA256} *gradle.zip" | sha256sum -c - \
    \
    && echo "Installing Gradle" \
    && unzip gradle.zip \
    && rm gradle.zip \
    && mkdir /opt \
    && mv "gradle-${GRADLE_VERSION}" "${GRADLE_HOME}/" \
    && ln -s "${GRADLE_HOME}/bin/gradle" /usr/bin/gradle \
    \
    && apk del .build-deps \
    \
    && echo "Adding gradle user and group" \
    && addgroup -S -g 1000 gradle \
    && adduser -D -S -G gradle -u 1000 -s /bin/ash gradle \
    && mkdir /home/gradle/.gradle \
    && chown -R gradle:gradle /home/gradle \
    \
    && echo "Symlinking root Gradle cache to gradle Gradle cache" \
    && ln -s /home/gradle/.gradle /root/.gradle
# Create Gradle volume
USER gradle
VOLUME "/home/gradle/.gradle"
WORKDIR /home/gradle
RUN set -o errexit -o nounset \
&& echo "Testing Gradle installation" \
&& gradle --version
In the Jenkinsfile I have the build stage declared like this:
stage('Build') {
    docker.image('custom-gradle').withRun('-v gradle-cache:/home/gradle/.gradle') { c ->
        docker.image('custom-gradle').inside {
            sh './scripts/build.py -br ' + branchName
            sh 'cp build/libs/JARFILE*.jar build/libs/JARFILE.jar'
        }
    }
}
The 'gradle-cache' volume was created with the docker volume create command.
The python script just runs a gradle command:
gradle clean assemble javaDoc
When I inspect the gradle-cache volume data on the host machine it contains the following files/folders:
4.6 buildOutputCleanup caches daemon native
So the build successfully writes to the cache volume, but appears not to read from it, re-downloading every dependency for every build.
How can I get gradle to use these downloaded dependencies?
UPDATE
stage('Build Bag End') {
    docker.image('custom-gradle').inside('-v gradle-cache:/home/gradle/.gradle') {
        sh './scripts/build.py -br ' + branchName
        sh 'cp build/libs/JARFILE*.jar build/libs/JARFILE.jar'
    }
}
So I found that .inside() also accepts parameters, but the build still won't read from the cache; it only writes to it.
I believe that the Jenkins Docker Plugin always runs containers as the jenkins user. It looks like your gradle image assumes that it is running as the gradle user. You might try adding the following option:
-e GRADLE_USER_HOME=/home/gradle/.gradle
Though you might run into permission issues with that.
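For example, combined with the stage from the question (an untested sketch):
stage('Build') {
    docker.image('custom-gradle').inside('-v gradle-cache:/home/gradle/.gradle -e GRADLE_USER_HOME=/home/gradle/.gradle') {
        sh './scripts/build.py -br ' + branchName
    }
}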

Non-Alpine dind docker image

Is there an existing non-Alpine dind Docker image?
Bind-mounting the host's Docker socket does not work for me. I need proper dind. Docker's dind images all seem to be Alpine-based, which also doesn't work for me.
Not exactly an answer to your question, but it might solve your needs:
I assume that you don't really need a non-Alpine image, but rather a GLIBC-enabled one.
I wanted a Docker-in-Docker-capable image for GitLab CI that would have OpenJDK 12.
Such an image is not yet available - the AdoptOpenJDK images do not have dind, and the official docker:* images can't install a normal OpenJDK.
So I combined adoptopenjdk:12 with docker:stable, and it seems to work.
docker build -t docker-with-openjdk12 .
# ------------------------------------------------------------------------------
# NOTE: THIS DOCKERFILE IS GENERATED VIA "update.sh"
#
# PLEASE DO NOT EDIT IT DIRECTLY.
# ------------------------------------------------------------------------------
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
FROM docker:stable
ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_ALL='en_US.UTF-8'
RUN apk add --no-cache --virtual .build-deps curl binutils \
&& GLIBC_VER="2.29-r0" \
&& ALPINE_GLIBC_REPO="https://github.com/sgerrand/alpine-pkg-glibc/releases/download" \
&& GCC_LIBS_URL="https://archive.archlinux.org/packages/g/gcc-libs/gcc-libs-9.1.0-2-x86_64.pkg.tar.xz" \
&& GCC_LIBS_SHA256="91dba90f3c20d32fcf7f1dbe91523653018aa0b8d2230b00f822f6722804cf08" \
&& ZLIB_URL="https://archive.archlinux.org/packages/z/zlib/zlib-1%3A1.2.11-3-x86_64.pkg.tar.xz" \
&& ZLIB_SHA256=17aede0b9f8baa789c5aa3f358fbf8c68a5f1228c5e6cba1a5dd34102ef4d4e5 \
&& curl -LfsS https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub -o /etc/apk/keys/sgerrand.rsa.pub \
&& SGERRAND_RSA_SHA256="823b54589c93b02497f1ba4dc622eaef9c813e6b0f0ebbb2f771e32adf9f4ef2" \
&& echo "${SGERRAND_RSA_SHA256} */etc/apk/keys/sgerrand.rsa.pub" | sha256sum -c - \
&& curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-${GLIBC_VER}.apk > /tmp/glibc-${GLIBC_VER}.apk \
&& apk add /tmp/glibc-${GLIBC_VER}.apk \
&& curl -LfsS ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-bin-${GLIBC_VER}.apk > /tmp/glibc-bin-${GLIBC_VER}.apk \
&& apk add /tmp/glibc-bin-${GLIBC_VER}.apk \
&& curl -Ls ${ALPINE_GLIBC_REPO}/${GLIBC_VER}/glibc-i18n-${GLIBC_VER}.apk > /tmp/glibc-i18n-${GLIBC_VER}.apk \
&& apk add /tmp/glibc-i18n-${GLIBC_VER}.apk \
&& /usr/glibc-compat/bin/localedef --force --inputfile POSIX --charmap UTF-8 "$LANG" || true \
&& echo "export LANG=$LANG" > /etc/profile.d/locale.sh \
&& curl -LfsS ${GCC_LIBS_URL} -o /tmp/gcc-libs.tar.xz \
&& echo "${GCC_LIBS_SHA256} */tmp/gcc-libs.tar.xz" | sha256sum -c - \
&& mkdir /tmp/gcc \
&& tar -xf /tmp/gcc-libs.tar.xz -C /tmp/gcc \
&& mv /tmp/gcc/usr/lib/libgcc* /tmp/gcc/usr/lib/libstdc++* /usr/glibc-compat/lib \
&& strip /usr/glibc-compat/lib/libgcc_s.so.* /usr/glibc-compat/lib/libstdc++.so* \
&& curl -LfsS ${ZLIB_URL} -o /tmp/libz.tar.xz \
&& echo "${ZLIB_SHA256} */tmp/libz.tar.xz" | sha256sum -c - \
&& mkdir /tmp/libz \
&& tar -xf /tmp/libz.tar.xz -C /tmp/libz \
&& mv /tmp/libz/usr/lib/libz.so* /usr/glibc-compat/lib \
&& apk del --purge .build-deps glibc-i18n \
&& rm -rf /tmp/*.apk /tmp/gcc /tmp/gcc-libs.tar.xz /tmp/libz /tmp/libz.tar.xz /var/cache/apk/*
ENV JAVA_VERSION jdk-12.0.2+10
RUN set -eux; \
    apk add --virtual .fetch-deps curl; \
    ARCH="$(apk --print-arch)"; \
    case "${ARCH}" in \
        aarch64|arm64) \
            ESUM='855f046afc5a5230ad6da45a5c811194267acd1748f16b648bfe5710703fe8c6'; \
            BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_aarch64_linux_hotspot_12.0.2_10.tar.gz'; \
            ;; \
        armhf) \
            ESUM='9fec85826ffb7b2b2cf2853a6ed3e001b528ed5cf13e435cd13026398b5178d8'; \
            BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_arm_linux_hotspot_12.0.2_10.tar.gz'; \
            ;; \
        ppc64el|ppc64le) \
            ESUM='4b0c9f5cdea1b26d7f079fa6478aceebf1923c947c4209d5709c0869dd71b98f'; \
            BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_ppc64le_linux_hotspot_12.0.2_10.tar.gz'; \
            ;; \
        s390x) \
            ESUM='9897deeaf7a2c90374fbaca8b3eb8e63267d8fc1863b43b21c0bfc86e4783470'; \
            BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_s390x_linux_hotspot_12.0.2_10.tar.gz'; \
            ;; \
        amd64|x86_64) \
            ESUM='1202f536984c28d68681d51207a84b6c76e5998579132d3fe1b8085aa6a5f21e'; \
            BINARY_URL='https://github.com/AdoptOpenJDK/openjdk12-binaries/releases/download/jdk-12.0.2%2B10/OpenJDK12U-jdk_x64_linux_hotspot_12.0.2_10.tar.gz'; \
            ;; \
        *) \
            echo "Unsupported arch: ${ARCH}"; \
            exit 1; \
            ;; \
    esac; \
    curl -LfsSo /tmp/openjdk.tar.gz ${BINARY_URL}; \
    echo "${ESUM} */tmp/openjdk.tar.gz" | sha256sum -c -; \
    mkdir -p /opt/java/openjdk; \
    cd /opt/java/openjdk; \
    tar -xf /tmp/openjdk.tar.gz --strip-components=1; \
    apk del --purge .fetch-deps; \
    rm -rf /var/cache/apk/*; \
    rm -rf /tmp/openjdk.tar.gz;
ENV JAVA_HOME=/opt/java/openjdk \
PATH="/opt/java/openjdk/bin:$PATH"
CMD ["jshell"]
It sounds like you just want to run Docker for CI tasks (build, run, and push container images to a hub) in Jenkins: the Jenkins master launches a Jenkins slave as a container, and the CI operations are performed inside it.
Yes, you can run Docker commands inside a Jenkins slave container.
The Docker binaries are required in the Jenkins slave container; in the image below they are added through its Dockerfile.
Once the image is built, we just have to volume-mount the host machine's Docker socket, /var/run/docker.sock, onto the container's /var/run/docker.sock.
With this setup the Docker daemon runs on the host machine while the Docker client runs in the Jenkins slave container.
Please refer to the Git and Docker Hub repos below:
https://github.com/Nilesh7756/dind-jnlp-slave.git
https://hub.docker.com/r/nilesh7756/jnlp-slave/
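For illustration, starting such a slave image with the host's socket mounted (an illustrative sketch; the image name is from the repos above, and the usual JNLP connection arguments still need to be appended):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nilesh7756/jnlp-slave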
