What is the lightest Docker image to be used for automation tests?

I need to create a Docker image to run UI automation tests in headless mode.
It should contain:
Node.js, a JDK, and the Chrome browser.
I have created the one below, which is 1.6 GB. Is there a better way to make it lighter and more optimized?
FROM node:slim
ENV DEBIAN_FRONTEND noninteractive
WORKDIR /project
#=============================
# Install Dependencies
#=============================
SHELL ["/bin/bash", "-c"]
RUN apt update && apt install -y wget bzip2 openjdk-11-jre xvfb libnotify-dev
#==============================
# install chrome
#==============================
# CHROME_PACKAGE is never defined in the original; the standard stable
# package name from Google is assumed here.
ENV CHROME_PACKAGE google-chrome-stable_current_amd64.deb
RUN wget https://dl.google.com/linux/direct/${CHROME_PACKAGE} && \
dpkg-deb -x ${CHROME_PACKAGE} / && \
apt-get install -f -y
#=========================
# Copying Scripts to root
#=========================
COPY . /project
RUN chmod a+x ./execute_test.sh
#=======================
# framework entry point
#=======================
CMD [ "/bin/bash" ]

Will your build fail if you don't use SHELL ["/bin/bash", "-c"]? If not, eliminating that line saves you a layer. You can combine the two RUN instructions into one, which saves a few more. Then try the --squash flag when building your image; note that your Docker daemon needs experimental features enabled to use this flag. You should get a smaller image using these steps.
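For concreteness, the combined version might look like this (a sketch, not a drop-in replacement: the CHROME_PACKAGE value and the cleanup steps at the end of the RUN are assumptions beyond the advice above):
FROM node:slim
ENV DEBIAN_FRONTEND noninteractive
# Assumed value; the question never defines CHROME_PACKAGE
ENV CHROME_PACKAGE google-chrome-stable_current_amd64.deb
WORKDIR /project
# One layer: install dependencies, install Chrome, and clean up, so the
# downloaded .deb and the apt lists never persist in the image
RUN apt-get update && \
    apt-get install -y --no-install-recommends wget bzip2 openjdk-11-jre xvfb libnotify-dev && \
    wget https://dl.google.com/linux/direct/${CHROME_PACKAGE} && \
    dpkg-deb -x ${CHROME_PACKAGE} / && \
    apt-get install -f -y && \
    rm -f ${CHROME_PACKAGE} && \
    rm -rf /var/lib/apt/lists/*
COPY . /project
RUN chmod a+x ./execute_test.sh
CMD [ "/bin/bash" ]
Then build with docker build --squash -t ui-tests . once experimental features are enabled (the tag is illustrative).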

Related

Dockerfile build cache permission issue

I'm building a container using a binary like this; basically the container will run an executable Go program:
FROM myrepo/ubi8/go-toolset:latest AS build
COPY --chown=1001:0 . /build
RUN cd /build && \
go env -w GO111MODULE=auto && \
go build
#---------------------------------------------------------------
FROM myrepo/ubi8/ubi-minimal:latest AS runtime
RUN microdnf update -y --nodocs && microdnf clean all && \
microdnf install go -y && \
microdnf install cronie -y && \
groupadd -g 1000 usercontainer && adduser -u 1000 -g usercontainer usercontainer && chmod 755 /home/usercontainer && \
microdnf clean all
ENV XDG_CACHE_HOME=/home/usercontainer/.cache
COPY executable.go /tmp/executable.go
RUN chmod 0555 /tmp/executable.go
USER usercontainer
WORKDIR /home/usercontainer
However, when running the container in Jenkins I'm getting this error:
failed to initialize build cache at /.cache/go-build: mkdir /.cache: permission denied
When running the container manually in a kubernetes deployment I'm not getting any issue but Jenkins is throwing this error and I can see the pod in CrashLoopBackOff and the container is showing the previous permissions issue.
Also, I'm not sure if I'm building the container correctly. Maybe I need to compile the Go program into a binary in the build stage and then include only that binary in the runtime stage?
Any clear example would be appreciated.
Go is a compiled language, which means that you don't actually need the go tool to run a Go program. In a Docker context, a typical setup is to use a multi-stage build to compile an application, and then copy the built application into a final image that runs it. The final image doesn't need the Go toolchain or the source code, just the compiled binary.
I might rewrite the final stage as:
FROM myrepo/ubi8/go-toolset:latest AS build
# ... as you have it now ...
FROM myrepo/ubi8/ubi-minimal:latest AS runtime
# Do not install `go` in this sequence
RUN microdnf update -y --nodocs && \
microdnf install cronie -y && \
microdnf clean all
# Create a non-root user, but not a home directory;
# specific uid/gid doesn't matter
RUN adduser --system usercontainer
# Get the built binary out of the first container
# and put it somewhere in $PATH
COPY --from=build /build/build /usr/local/bin/myapp
# Switch to a non-root user and explain how to run the container
USER usercontainer
CMD ["myapp"]
This sequence doesn't use go run or use any go command in the final image, which hopefully gets around the issue of needing a $HOME/.cache directory. (It will also give you a smaller container and faster startup time.)
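Assuming go build in /build really does produce /build/build (Go names the binary after the source directory), the whole workflow is then just (tag name is illustrative):
docker build -t myapp .
docker run --rm myapp
Since the runtime stage never invokes the go tool, no Go build cache directory is ever needed, regardless of what $HOME resolves to.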

Why can't I run command in Dockerfile but I can from within the my Docker container?

I have the following Dockerfile. (The "n" package is a Node.js version manager.)
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the shell via RUN . /root/.bashrc, I can't seem to call "n" directly, and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the above sequence of commands, I am able to call "n" directly (n 6.9.0) with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV command to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
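If you genuinely want to keep "n" available, the same principle applies: put its bin directory on $PATH with ENV instead of sourcing .bashrc. A sketch, assuming the n-install default prefix of /root/n:
# Make "n" (and the node versions it installs) resolvable in later RUNs
ENV N_PREFIX=/root/n
ENV PATH=/root/n/bin:$PATH
RUN n 6.9.0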
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl
RUN cd /usr/local \
&& curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
&& tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
&& rm node-v${NODE_VERSION}-linux-x64.tar.xz \
&& for f in node npm npx; do \
ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
done

Docker containers retaining data

I'm learning Docker and I've been crafting a Dockerfile for an Ubuntu container.
My problem is that I keep getting persistent information between different containers. I have exited and removed the container, and then removed its image. After I made changes to my Dockerfile, I executed docker build -t playbuntu . against the following Dockerfile:
FROM ubuntu:latest
## for apt to be noninteractive
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
## preesed tzdata, update package index, upgrade packages and install needed software
RUN echo "tzdata tzdata/Areas select Europe" > /tmp/preseed.txt; \
echo "tzdata tzdata/Zones/Europe select London" >> /tmp/preseed.txt; \
debconf-set-selections /tmp/preseed.txt && \
apt-get update && \
apt-get install -y tzdata
RUN apt-get update -y && apt-get upgrade -y && apt-get install tree nano vim -y && apt-get install less -y && apt-get install lamp-server^ -y
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
EXPOSE 80
WORKDIR /var/www
COPY ./index.php /var/www
COPY ./000-default.conf /etc/apache2/sites-available
CMD [ "apache2ctl", "start", "&&", "apache2ctl", "restart" ]
Once I execute winpty docker run -it -p 80:80 playbuntu bash, my problem is, instead of my index.php file outputting the following:
<?php
print "<center><h3>Sisko's LAMP activated!</h3><center>";
phpinfo();
I get the following debug code I experimented with hours ago:
<?php
print "...responding";
phpinfo();
Is there a caching system Docker might be using? I pruned all Docker volumes in case that is how Docker was caching; all except two volumes that are being used by other containers unrelated to my project.
I suspect that, after you changed index.php on your host machine, you did not rerun the docker build ... command. You will need to recreate the container image any time any of the image's content changes.
Please confirm whether running the docker build ... command again solves the issue.
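That is, after editing index.php, rerun both commands from the question:
docker build -t playbuntu .
winpty docker run -it -p 80:80 playbuntu bash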
Background
Docker images comprise one or more layers.
Image (layers) and Volumes are distinct.
Since your docker run ... command contains no e.g. --volume= bind mounts (or equivalent), I suspect Docker Volumes are not relevant to this question.
There is a possibility that rebuilding images does not replace image layers and thus caching does occur. However, in your case, I think this is not the issue. See the Docker documentation (link) for an overview of the Dockerfile commands that add layers.
Because of the way that layers work, if the preceding layers in your Dockerfile were unchanged and index.php were unchanged, then Docker would not rebuild those layers. However, because your Dockerfile includes a layer that runs apt-get update && apt-get install ..., that layer will be invalidated and recreated, and so will all subsequent layers.
If you change index.php on the host and rebuild the image, this layer will always be rebuilt.
I built your Dockerfile twice. Here's the beginning of the second (!) build. Note the Using cache messages for the unchanged layers preceding the RUN apt-get update... layer, which is rebuilt.
docker build --rm -f "Dockerfile" -t 59582820:0939 "."
Sending build context to Docker daemon 3.584kB
Step 1/10 : FROM ubuntu:latest
---> 549b9b86cb8d
Step 2/10 : ENV DEBIAN_FRONTEND noninteractive
---> Using cache
---> 1529d0e293f3
Step 3/10 : ENV DEBCONF_NONINTERACTIVE_SEEN true
---> Using cache
---> 1ba10410d06a
Step 4/10 : RUN echo "tzdata tzdata/Areas select Europe" > /tmp/preseed.txt; echo "tzdata tzdata/Zones/Europe select London" >> /tmp/preseed.txt; debconf-set-selections /tmp/preseed.txt && apt-get update && apt-get install -y tzdata
---> Using cache
---> afb861da52e4
Step 5/10 : RUN apt-get update -y && apt-get upgrade -y && apt-get install tree nano vim -y && apt-get install less -y && apt-get install lamp-server^ -y
---> Running in 6f05bbb8e80a
The evidence suggests to me that you didn't rebuild after changing the index.php.
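If you want to rule out the build cache entirely, docker build also accepts a --no-cache flag that forces every layer to be rebuilt:
docker build --no-cache -t playbuntu .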
You can use volumes to persist folders such as configs or properties. See Docker Storage.

Dockerfile - Hide --build-args from showing up in the build time

I have the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
git \
make \
python-pip \
python2.7 \
python2.7-dev \
ssh \
&& apt-get autoremove \
&& apt-get clean
ARG password
ARG username
ENV password $password
ENV username $username
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
I use the following commands to build the image from this Dockerfile:
docker build -t myimage:v1 --build-arg password="somepassword" --build-arg username="someuser" .
However, in the build log the username and password that I pass as --build-arg are visible.
Step 8/8 : RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
---> Running in 650d9423b549
Collecting git+http://someuser:somepassword@org.bitbucket.com/scm/do/repo.git
How to hide them? Or is there a different way of passing the credentials in the Dockerfile?
Update
You know, I was focusing on the wrong part of your question. You shouldn't be using a username and password at all. You should be using access keys, which permit read-only access to private repositories.
Once you've created an ssh key and added the public component to your repository, you can then drop the private key into your image:
RUN mkdir -m 700 -p /root/.ssh
COPY my_access_key /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
And now you can use that key when installing your Python project:
RUN pip install git+ssh://git@bitbucket.org/you/yourproject.repo
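Depending on your base image, the ssh-based install may also need the host's key in known_hosts first; ssh-keyscan is the usual way to add it (this step is my addition, not something the original setup showed):
RUN ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts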
(Original answer follows)
You would generally not bake credentials into an image like this. In addition to the problem you've already discovered, it makes your image less useful because you would need to rebuild it every time your credentials changed, or if more than one person wanted to be able to use it.
Credentials are more generally provided at runtime via one of various mechanisms:
Environment variables: you can place your credentials in a file, e.g.:
USERNAME=myname
PASSWORD=secret
And then include that on the docker run command line:
docker run --env-file myenvfile.env ...
The USERNAME and PASSWORD environment variables will be available to processes in your container.
Bind mounts: you can place your credentials in a file, and then expose that file inside your container as a bind mount using the -v option to docker run:
docker run -v /path/to/myfile:/path/inside/container ...
This would expose the file as /path/inside/container inside your container.
Docker secrets: If you're running Docker in swarm mode, you can expose your credentials as docker secrets.
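As a sketch of that flow (all names here are illustrative): you create the secret once on a swarm manager, attach it to a service, and the value becomes readable inside the container under /run/secrets/:
echo "somepassword" | docker secret create repo_password -
docker service create --name myservice --secret repo_password myimage:v1
# inside the container: cat /run/secrets/repo_password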
It's worse than that: they're in docker history in perpetuity.
I've done two things here in the past that work:
You can configure pip to use local packages, or to download dependencies ahead of time into "wheel" files. Outside of Docker you can download the package from the private repository, giving the credentials there, and then you can COPY in the resulting .whl file.
# On the host, outside of Docker:
pip install wheel
pip wheel --wheel-dir ./wheels git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
docker build .
# Then, in the Dockerfile:
COPY ./wheels/ ./wheels/
RUN pip install wheels/*.whl
The second is to use a multi-stage Dockerfile where the first stage does all of the installation, and the second doesn't need the credentials. This might look something like
FROM ubuntu:16.04 AS build
RUN apt-get update && ...
...
RUN pip install git+http://$username:$password@org.bitbucket.com/scm/do/repo.git
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y \
python2.7
COPY --from=build /usr/lib/python2.7/site-packages/ /usr/lib/python2.7/site-packages/
COPY ...
CMD ["./app.py"]
It's worth double-checking in the second case that nothing has gotten leaked into your final image, because the ARG values are still available to the second stage.
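A quick way to audit that is docker history, which prints the commands that created each layer of the final image (image tag as in the question):
docker history --no-trunc myimage:v1 | grep -i password
If the credentials appear in that output, they are recoverable by anyone who can pull the image.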
For me, I created a bash file called set-up-cred.sh.
Inside set-up-cred.sh:
echo $CRED > cred.txt;
Then, in the Dockerfile:
RUN bash set-up-cred.sh;
...
RUN rm cred.txt;
This hides the credential variables from being echoed during the build.

Can't build openjdk:8-jdk image directly

I'm slowly making my way through the Riot Taking Control of your Docker Image tutorial (http://engineering.riotgames.com/news/taking-control-your-docker-image). This tutorial is a little old, so there are some definite changes to how the end file looks. After hitting several walls I decided to work in the opposite order of the tutorial. I successfully folded the official jenkinsci image into my personal Dockerfile, starting with FROM openjdk:8-jdk. But when I try to fold the openjdk:8-jdk file into my personal image I receive the following error:
E: Version '8u102-b14.1-1~bpo8+1' for 'openjdk-8-jdk' was not found
ERROR: Service 'jenkinsmaster' failed to build: The command '/bin/sh -c set -x && apt-get update && apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION" ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" && rm -rf /var/lib/apt/lists/* && [ "$JAVA_HOME" = "$(docker-java-home)" ]' returned a non-zero code: 100
I'm receiving this error even when I gave up and directly copied and pasted the openjdk:8-jdk Dockerfile into my own. My end goal is to bring my personal Dockerfile down to the point that it starts FROM debian:jessie. Any help would be appreciated.
My Dockerfile:
FROM buildpack-deps:jessie-scm
# A few problems with compiling Java from source:
# 1. Oracle. Licensing prevents us from redistributing the official JDK.
# 2. Compiling OpenJDK also requires the JDK to be installed, and it gets
# really hairy.
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list
# Default to UTF-8 file.encoding
ENV LANG C.UTF-8
# add a simple script that can auto-detect the appropriate JAVA_HOME value
# based on whether the JDK or only the JRE is installed
RUN { \
echo '#!/bin/sh'; \
echo 'set -e'; \
echo; \
echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
} > /usr/local/bin/docker-java-home \
&& chmod +x /usr/local/bin/docker-java-home
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_VERSION 8u102
ENV JAVA_DEBIAN_VERSION 8u102-b14.1-1~bpo8+1
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN set -x \
&& apt-get update \
&& apt-get install -y \
openjdk-8-jdk="$JAVA_DEBIAN_VERSION" \
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" \
&& rm -rf /var/lib/apt/lists/* \
&& [ "$JAVA_HOME" = "$(docker-java-home)" ]
# see CA_CERTIFICATES_JAVA_VERSION notes above
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Jenkins Specifics
# install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# Set Jenkins Environmental Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=dc28b91e553c1cd42cc30bd75d0f651671e6de0b
ENV JENKINS_UC https://updates.jenkins.io
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000. If you bind mount a volume from the host or a data
# container, ensure you use the same uid.
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins. Could use ADD but this one does not check Last-Modified header neither does it
# allow to control checksum. see https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
# Prep Jenkins Directories
USER root
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R ${group}:${user} /var/log/jenkins
RUN chown -R ${group}:${user} /var/cache/jenkins
# Expose ports for web (8080) & node (50000) agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
# NOTE : Just set pluginID to download latest version of plugin.
# NOTE : All plugins need to be listed as there is no transitive dependency resolution.
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup
# /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Switch to the jenkins user
USER ${user}
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Try a JAVA_DEBIAN_VERSION of 8u111-b14-2~bpo8+1
Here's what happens: when you build the Dockerfile, Docker tries to execute all the lines in it. One of those is this apt command: apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION". This command says "install OpenJDK version $JAVA_DEBIAN_VERSION, exactly; nothing else". This version is no longer available in the Debian repositories, so it can't be apt-get installed. I believe this happens with all packages in official mirrors: if a new version of a package is released, the older version is no longer around to be installed.
If you want to access older Debian packages, you can use something like http://snapshot.debian.org/. The older OpenJDK package has known security vulnerabilities. I recommend using the latest version.
You can use the latest version by leaving out the explicit version in the apt-get command. On the other hand, this will make your image less reproducible: building the image today may get you u111, building it tomorrow may get you u112.
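In Dockerfile terms, leaving out the pin means rewriting the original RUN instruction roughly as follows (same packages, no "=version" suffixes):
RUN set -x \
    && apt-get update \
    && apt-get install -y \
        openjdk-8-jdk \
        ca-certificates-java \
    && rm -rf /var/lib/apt/lists/* \
    && [ "$JAVA_HOME" = "$(docker-java-home)" ]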
As for why the instructions worked in the other Dockerfile, I think the reason is that at the time the other Dockerfile was built, the package was available. So docker could apt-get install it. Docker then built the image containing the (older) OpenJDK. That image is a binary, so you can install it, or use it in FROM without any issues. But you can't reproduce the image: if you were to try and build the same image yourself, you would run into the same errors.
This also brings up an issue about security updates: since docker images are effectively static binaries (built once, bundle in all dependencies), they don't get security updates once built. You need to keep track of any security updates affecting your docker images and rebuild any affected docker images.
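When rebuilding for security updates, it helps to bypass both caches: --pull re-fetches the base image instead of reusing a stale local copy, and --no-cache forces every layer to be rebuilt so updated packages are actually installed (tag is illustrative):
docker build --pull --no-cache -t myimage:latest .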
