Is it possible to create Dockerfile from the container/image? [duplicate] - docker

Is it possible to generate a Dockerfile from an image? I want to know for two reasons:
I can download images from the repository but would like to see the recipe that generated them.
I like the idea of saving snapshots, but once I am done it would be nice to have a structured format to review what was done.

How to generate or reverse a Dockerfile from an image?
You can. Mostly.
Note: it does not generate a Dockerfile that you can use directly with docker build; the output is just for your reference.
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 nginx:latest
It pulls the target Docker image automatically and exports the Dockerfile. The -sV=1.36 parameter is not always required.
Reference: https://hub.docker.com/r/alpine/dfimage
hub.docker.com now shows the image layers with their detailed commands directly, if you choose a particular tag.
Bonus
If you want to know which files changed in each layer:
alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive"
dive nginx:latest
On the left you see each layer's command; on the right (switch panes with Tab), yellow lines mark the directories whose files changed in that layer.
(Use Space to collapse a directory.)
Old answer
The answer below no longer works.
$ docker pull centurylink/dockerfile-from-image
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm centurylink/dockerfile-from-image"
$ dfimage --help
Usage: dockerfile-from-image.rb [options] <image_id>
-f, --full-tree Generate Dockerfile for all parent layers
-h, --help Show this message

To understand how a Docker image was built, use the
docker history --no-trunc command.
You can build a Dockerfile from an image, but it will not contain everything you would want in order to fully understand how the image was generated. What you can reasonably extract are the MAINTAINER, ENV, EXPOSE, VOLUME, WORKDIR, ENTRYPOINT, CMD, and ONBUILD parts of the Dockerfile.
The following script should work for you:
#!/bin/bash
docker history --no-trunc "$1" | \
sed -n -e 's,.*/bin/sh -c #(nop) \(MAINTAINER .*[^ ]\) *0 B,\1,p' | \
head -1
docker inspect --format='{{range $e := .Config.Env}}
ENV {{$e}}
{{end}}{{range $e,$v := .Config.ExposedPorts}}
EXPOSE {{$e}}
{{end}}{{range $e,$v := .Config.Volumes}}
VOLUME {{$e}}
{{end}}{{with .Config.User}}USER {{.}}{{end}}
{{with .Config.WorkingDir}}WORKDIR {{.}}{{end}}
{{with .Config.Entrypoint}}ENTRYPOINT {{json .}}{{end}}
{{with .Config.Cmd}}CMD {{json .}}{{end}}
{{with .Config.OnBuild}}ONBUILD {{json .}}{{end}}' "$1"
I use this as part of a script to rebuild running containers as images:
https://github.com/docbill/docker-scripts/blob/master/docker-rebase
The Dockerfile is mainly useful if you want to be able to repackage an image.
The thing to keep in mind is that a Docker image can actually just be the tar backup of a real or virtual machine. I have made several Docker images this way. Even the build history shows importing a huge tar file as the first step in creating the image...
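For illustration, here is a minimal sketch of how an image can be created from a plain tarball this way; the paths and image names are hypothetical:
# Import the root filesystem of a real or virtual machine as a new image.
# Its history will show only this single import step.
tar -C /path/to/rootfs -c . | docker import - mybackup:latest
# Or start from an existing container instead:
docker export my-container | docker import - mybackup:latest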

I somehow missed the actual command in the accepted answer, so here it is again, a bit more visible in its own paragraph:
$ docker history --no-trunc <IMAGE_ID>

A bash solution (the image name is passed as $1):
docker history --no-trunc "$1" | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' | head -n -1
Step by step explanations:
tac : reverse the output
tr -s ' ' : squeeze repeated whitespace into single spaces
cut -d " " -f 5- : remove the leading fields (up to "X months/years ago")
sed 's,^/bin/sh -c #(nop) ,,g' : strip the /bin/sh wrapper from ENV, LABEL, ... instructions
sed 's,^/bin/sh -c,RUN,g' : replace the /bin/sh wrapper with RUN
sed 's, && ,\n & ,g' : pretty-print multi-command lines following Docker best practices
sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' : remove layer size information
head -n -1 : remove the last line ("SIZE COMMENT" in this case)
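The example below invokes this pipeline through a helper called dih. A sketch of such a helper, assuming bash (the name dih is only there to match the example):
# dih <image> : print a rough Dockerfile reconstructed from "docker history"
dih() {
  docker history --no-trunc "$1" | tac | tr -s ' ' | cut -d " " -f 5- \
    | sed 's,^/bin/sh -c #(nop) ,,g' \
    | sed 's,^/bin/sh -c,RUN,g' \
    | sed 's, && ,\n & ,g' \
    | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' \
    | head -n -1
}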
Example:
~  dih ubuntu:18.04
ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in /
RUN set -xe
&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d
&& echo 'exit 101' >> /usr/sbin/policy-rc.d
&& chmod +x /usr/sbin/policy-rc.d
&& dpkg-divert --local --rename --add /sbin/initctl
&& cp -a /usr/sbin/policy-rc.d /sbin/initctl
&& sed -i 's/^exit.*/exit 0/' /sbin/initctl
&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup
&& echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean
&& echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages
&& echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes
&& echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
RUN rm -rf /var/lib/apt/lists/*
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
RUN mkdir -p /run/systemd
&& echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]

Update Dec 2018 to BMW's answer
chenzj/dfimage - as described on hub.docker.com - regenerates a Dockerfile from other images. You can use it as follows:
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage IMAGE_ID > Dockerfile

This is derived from #fallino's answer, with some adjustments and simplifications by using the output format option of docker history. Since macOS and GNU/Linux have different command-line utilities, a different version is necessary for the Mac. If you only need one or the other, you can just use those lines.
#!/bin/bash
case "$OSTYPE" in
linux*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tac | # reverse the file
sed 's,^\(|3.*\)\?/bin/\(ba\)\?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed 's, *&& *, \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
darwin*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tail -r | # reverse the file
sed -E 's,^(\|3.*)?/bin/(ba)?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed $'s, *&& *, \\\ \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
*)
echo "unknown OSTYPE: $OSTYPE"
;;
esac

It is not possible at this point (unless the author of the image explicitly included the Dockerfile).
However, it is definitely something useful! There are two things that will help to obtain this feature.
Trusted builds (detailed in this docker-dev discussion)
More detailed metadata in the successive images produced by the build process. In the long run, the metadata should indicate which build command produced the image, which means that it will be possible to reconstruct the Dockerfile from a sequence of images.

If you are interested in an image that is in the Docker Hub registry and want to take a look at its Dockerfile:
Example:
If you want to see the Dockerfile of the image "jupyter/datascience-notebook", append "Dockerfile" to the image's URL in the address bar of your browser, as shown below.
https://hub.docker.com/r/jupyter/datascience-notebook/
https://hub.docker.com/r/jupyter/datascience-notebook/Dockerfile
Note:
Not all images have a Dockerfile there; for example, https://hub.docker.com/r/redislabs/redisinsight/Dockerfile
Sometimes this way is much faster than searching for the Dockerfile on GitHub.

docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage image_id
Below is the output of the dfimage command:
$ dfimage 0f1947a021ce
FROM node:8
WORKDIR /usr/src/app
COPY file:e76d2e84545dedbe901b7b7b0c8d2c9733baa07cc821054efec48f623e29218c in ./
RUN /bin/sh -c npm install
COPY dir:a89a4894689a38cbf3895fdc0870878272bb9e09268149a87a6974a274b2184a in .
EXPOSE 8080
CMD ["npm" "start"]

It is possible in just two steps: first pull the image, then run the docker history command.
docker pull kalilinux/kali-rolling
docker history --format "{{.CreatedBy}}" kalilinux/kali-rolling --no-trunc

What is image2df
image2df is a tool for generating a Dockerfile from an image.
It is very useful when you only have a Docker image and need to generate a Dockerfile from it.
How does it work
It reverse-parses the history information of the image.
How to use this image
# Command alias
echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
. ~/.bashrc
# Execute command
image2df <IMAGE>
See help
docker run --rm cucker/image2df --help
For example
$ echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
$ . ~/.bashrc
$ docker pull mysql
$ image2df mysql
========== Dockerfile ==========
FROM mysql:latest
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*
ENV GOSU_VERSION=1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-get update && apt-get install -y --no-install-recommends \
pwgen \
openssl \
perl \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN set -ex; \
key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
gpg --batch --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
apt-key list > /dev/null
ENV MYSQL_MAJOR=8.0
ENV MYSQL_VERSION=8.0.24-1debian10
RUN echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-8.0' > /etc/apt/sources.list.d/mysql.list
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update \
&& apt-get install -y \
mysql-community-client="${MYSQL_VERSION}" \
mysql-community-server-core="${MYSQL_VERSION}" \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld \
&& chown -R mysql:mysql /var/lib/mysql /var/run/mysqld \
&& chmod 1777 /var/run/mysqld /var/lib/mysql
VOLUME [/var/lib/mysql]
COPY dir:2e040acc386ebd23b8571951a51e6cb93647df091bc26159b8c757ef82b3fcda in /etc/mysql/
COPY file:345a22fe55d3e6783a17075612415413487e7dba27fbf1000a67c7870364b739 in /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
CMD ["mysqld"]

Related

Cannot configure code with echo in docker with alpine os but can in ubuntu

I have a Dockerfile which was originally pulling from ubuntu and I recently came across alpine which is more lightweight so would like to pull from that instead. Part of the code I'm trying to build is called Healpix which depends on cfitsio. When I originally built the ubuntu version I found this Dockerfile https://github.com/MilesCranmer/dockers/blob/master/dockerfiles/healpix.
Essentially the problem is the following works in ubuntu but not with alpine:
RUN echo "3\ngfortran\n\nY\n\n\ngcc\n\n\n\n\nN\n1\nY\nN\nN\n0\n" |
./configure && make
The error I get is
Something went wrong ...
Quitting configuration script !
./configure: exit: line 162: Illegal number: -1
The command '/bin/sh -c echo "3\ngfortran\n\nY\n\n\ngcc\n\n\n\n\nN\n1\nY\nN\nN\n0\n" | ./configure && make' returned a non-zero code: 2
Somewhat confusingly, the configure script in question isn't 162 lines long (https://sourceforge.net/p/healpix/code/HEAD/tree/branches/branch_v350r1006/configure). I have tried installing bash and changing the script to use that, but it didn't work.
ubuntu Dockerfile
FROM ubuntu
RUN apt-get update && apt-get install -y gcc g++ gfortran make wget
WORKDIR /home
RUN wget \
http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz \
&& tar xzf cfitsio_latest.tar.gz
WORKDIR cfitsio
RUN ./configure --prefix=/usr && make && make install
WORKDIR /home
RUN wget \
https://sourceforge.net/projects/healpix/files/Healpix_3.50/Healpix_3.50_2018Dec10.tar.gz \
&& tar xzf Healpix*.tar.gz
WORKDIR Healpix_3.50
RUN echo \
"3\ngfortran\n\nY\n\n\ngcc\n\n\n\n\nN\n1\nY\nN\nN\n0\n" | ./configure \
&& make
alpine Dockerfile
FROM alpine
RUN apk --no-cache add gcc g++ gfortran make wget
WORKDIR /home
RUN wget \
http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz \
&& tar xzf cfitsio_latest.tar.gz
WORKDIR cfitsio
RUN ./configure --prefix=/usr && make && make install
WORKDIR /home
RUN wget \
https://sourceforge.net/projects/healpix/files/Healpix_3.50/Healpix_3.50_2018Dec10.tar.gz \
&& tar xzf Healpix*.tar.gz
WORKDIR Healpix_3.50
RUN echo \
"3\ngfortran\n\nY\n\n\ngcc\n\n\n\n\nN\n1\nY\nN\nN\n0\n" | ./configure \
&& make
TL;DR
In your Dockerfile, use:
RUN /bin/echo -e "3\ngfortran\n[...]" | ./configure && make
to have the same behavior on Ubuntu and Alpine.
Explanations
The ./configure script is executed with /bin/sh (see the shebang). On Ubuntu, /bin/sh is a link to /bin/dash, while on Alpine, /bin/sh is a link to /bin/busybox.
The following small example reproduces your problem.
Consider the following ./configure script :
#!/bin/sh
read -p "1st prompt : " first
read -p "2nd prompt : " second
echo "$first-$second"
On Ubuntu :
docker run --rm -v $PWD/configure:/configure ubuntu:18.04 \
/bin/sh -c 'echo "a\nb" | ./configure'
prints :
a-b
While, on Alpine :
docker run --rm -v $PWD/configure:/configure alpine:3.8 \
/bin/sh -c 'echo "a\nb" | ./configure'
prints :
anb-
On Alpine (busybox), the echoed string (a\nb) is interpreted as a single argument, while on Ubuntu (dash), the \n is used to separate both arguments.
To have the same behavior as Ubuntu on Alpine, you can run :
docker run --rm -v $PWD/configure:/configure alpine:3.8 /bin/sh -c 'echo "a
b
" | ./configure'
or :
docker run --rm -v $PWD/configure:/configure alpine:3.8 /bin/sh -c \
'echo -e "a\nb" | ./configure'
(see the -e parameter of echo)
These 2 commands print :
a-b
As for your Dockerfile, you should write something like:
RUN /bin/echo -e "3\ngfortran\n[...]" | ./configure && make
/bin/echo is used instead of echo because on Ubuntu, echo -e "3\ngfortran\n[...]" will print -e 3\ngfortran\n[...] (the -e is printed literally).
This is because echo is parsed as a shell built-in, while /bin/echo explicitly is not (source: https://github.com/moby/moby/issues/8949#issuecomment-61682684).
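A quick way to see the difference (a sketch; the output shown in the comments is the expected behavior, since dash's built-in echo treats -e as a literal argument while always interpreting \n):
# dash's built-in echo does not understand -e and prints it literally:
docker run --rm ubuntu:18.04 /bin/sh -c 'echo -e "a\nb"'
# -e a
# b
# the /bin/echo binary (GNU coreutils) interprets -e as expected:
docker run --rm ubuntu:18.04 /bin/sh -c '/bin/echo -e "a\nb"'
# a
# b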

Create alpine linux iso from docker - libburn permission denied

I have been trying to build an ISO image for Alpine Linux inside a Docker container, following the standard instructions here; however, I seem to be unable to actually write the .iso back to the mounted volume because of libburn:
>>> mkimage-x86_64: Creating alpine-standard-edge-x86_64.iso
xorriso 1.4.8 : RockRidge filesystem manipulator, libburnia project.
libburn : SORRY : Failed to open device (a pseudo-drive) : Permission denied
libburn : FATAL : Burn run failed
xorriso : FATAL : -abort_on 'FAILURE' encountered 'FATAL' during image writing
libisofs: MISHAP : Image write cancelled
xorriso : FAILURE : libburn indicates failure with writing.
This is the standard result of trying to run the downloaded script from the tutorial:
sh aports/scripts/mkimage.sh --tag edge --outdir /build2/ --arch x86_64 --repository http://dl-cdn.alpinelinux.org/alpine/edge/main --profile standard
The Docker image I'm using:
FROM alpine:latest
RUN addgroup root abuild
RUN apk add --update \
alpine-sdk \
# build-base \
apk-tools \
alpine-conf \
busybox \
git \
fakeroot \
syslinux \
xorriso \
squashfs-tools \
mtools \
dosfstools \
grub-efi \
&& rm -rf /var/cache/apk/*
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mkdir /usr/src/app/build
RUN touch /usr/src/app/build/worked.txt
RUN adduser -G abuild -g "Alpine Package Builder" -s /bin/sh -u 12345 -D builder
RUN echo "builder:newpass"|chpasswd
RUN chgrp -R abuild /usr/local; \
find /usr/local -type d | xargs chmod g+w; \
echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/builder; \
chmod 0440 /etc/sudoers.d/builder
WORKDIR /build2/
RUN git clone git://git.alpinelinux.org/aports
RUN chmod +x aports/scripts/mkimage.sh
RUN abuild-keygen -i -a
USER builder
I have looked over the official forum; only one post mentioned something similar, but it did not point to any actual resolution.
Failing a solution for this, can anyone recommend a good alternative minimal distro whose ISO can be built via script for x86, x86_64 and rpi?
You can easily create your own Alpine Linux image using the alpine-make-vm-image script.
Example:
sudo ./alpine-make-vm-image \
--image-format qcow2 \
--image-size 5G \
--packages "ca-certificates git ssl_client" \
--script-chroot \
alpine-$(date +%Y-%m-%d).qcow2 -- ./configure.sh
You're getting a permission denied error because the user you created can't access the pseudo device needed by xorriso. I removed all the user creation parts and just ran the whole thing as root and it works.
Here's the Dockerfile I used:
FROM alpine:latest
RUN apk add --no-cache \
alpine-conf \
alpine-sdk \
apk-tools \
dosfstools \
grub-efi \
mtools \
squashfs-tools \
syslinux \
xorriso
WORKDIR /src
RUN git clone git://git.alpinelinux.org/aports
RUN chmod +x aports/scripts/mkimage.sh
RUN addgroup root abuild
RUN abuild-keygen -i -a -n
WORKDIR /build
ENTRYPOINT ["/src/aports/scripts/mkimage.sh"]
CMD ["--tag", "edge", "--arch", "x86_64", "--repository", "http://dl-cdn.alpinelinux.org/alpine/edge/main", "--profile", "standard"]
Then build and run.
docker build -t alpine-iso .
docker run -v "$(pwd):/build" -it alpine-iso

My docker starts zookeeper, but it then automatically exits

I wrote a Dockerfile to start ZooKeeper:
FROM buildpack-deps:sid-scm
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
gettext-base \
&& rm -rf /var/lib/apt/lists/*
COPY zookeeper-3.4.12.tar.gz /opt
COPY config.template.properties /opt
RUN tar xfz /opt/zookeeper-3.4.12.tar.gz -C /opt
ENV ZK_HOME /opt/zookeeper-3.4.12
COPY startzookeeper.sh /opt
RUN chmod a+x /opt/startzookeeper.sh $ZK_HOME
CMD ["/opt/startzookeeper.sh"]
The startzookeeper.sh file is:
#!/usr/bin/env bash
eval "cat <<EOF
$(</opt/config.template.properties)
EOF
" | tee /opt/zoo.cfg 2> /dev/null
#echo "$ZK_HOME" > 2.txt
cp /opt/zoo.cfg "$ZK_HOME"/conf
#
exec "$ZK_HOME/bin/zkServer.sh" start
But when I run docker ps, it is empty.
I tried adding tail -f /dev/null, but it does not work.
I don't know why; ZooKeeper should keep running, so why does it exit?
Thanks for any suggestions.
You could adapt your script to imitate the one from the official zookeeper-docker
(from hub.docker.com)
Its docker-entrypoint.sh ends with exec "$@", which executes "zkServer.sh", "start-foreground".
The important part is the start-foreground option, which ensures the process does not exit immediately, as that would exit your container as well.
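Applied to the script in the question, that could look like this (a sketch; it keeps the config templating and only changes the final line, relying on zkServer.sh supporting start-foreground):
#!/usr/bin/env bash
# Render the config template into zoo.cfg as before
eval "cat <<EOF
$(</opt/config.template.properties)
EOF
" | tee /opt/zoo.cfg 2> /dev/null
cp /opt/zoo.cfg "$ZK_HOME"/conf
# start-foreground keeps zkServer.sh (and therefore the container) running,
# instead of daemonizing and returning immediately
exec "$ZK_HOME/bin/zkServer.sh" start-foreground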

docker-credential-gcr not found inside Docker image although executable is there

I'm trying to build a Docker image for my Gitlab CI pipeline containing docker client + gcloud along with the following gcloud components:
kubectl
docker-credential-gcr
This is my dockerfile:
FROM docker:git
RUN mkdir /opt \
&& cd /opt \
&& wget -q https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& tar -xzf google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& rm google-cloud-sdk-152.0.0-linux-x86_64.tar.gz \
&& ln -s /opt/google-cloud-sdk/bin/gcloud /usr/bin/gcloud \
&& apk -q update \
&& apk -q add python \
&& apk add --update libintl \
&& apk add --virtual build_deps gettext \
&& cp /usr/bin/envsubst /usr/local/bin/envsubst \
&& apk del build_deps \
&& rm -rf /var/cache/apk/* \
&& echo "y" | gcloud components install kubectl docker-credential-gcr \
&& ln -s /opt/google-cloud-sdk/bin/kubectl /usr/bin/kubectl \
&& ln -s /opt/google-cloud-sdk/bin/docker-credential-gcr /usr/bin/docker-credential-gcr
Inside my CI flow, I need to run docker-credential-gcr (because of this issue).
The docker-credential-gcr executable is correctly installed inside /opt/google-cloud-sdk/bin, as shown by running docker run -i -t gitlabci-test ls /opt/google-cloud-sdk/bin
It is also correctly symlinked inside /usr/bin, as shown by docker run -i -t gitlabci-test ls -la /usr/bin
And yet, trying to call it with any of the methods below fails miserably
docker run -i -t gitlabci-test docker-credential-gcr
docker run -i -t gitlabci-test /usr/bin/docker-credential-gcr
docker run -i -t gitlabci-test /opt/google-cloud-sdk/bin/docker-credential-gcr
Error message:
/usr/local/bin/docker-entrypoint.sh: exec: line 20: docker-credential-gcr: not found
On the other hand, running the kubectl component works fine
docker run -i -t gitlabci-test kubectl version
Any idea how I can fix this issue so that I can run docker-credential-gcr inside the container?

Pipe RUN's output to ENV in Dockerfile

I have the following command in my Dockerfile:
RUN echo "\
export NODE_VERSION=$(\
curl -sL https://nodejs.org/dist/latest/ |\
tac |\
tac |\
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' |\
head -1\
)" >> /etc/bash.bashrc
RUN source /etc/bash.bashrc
The command above should store export NODE_VERSION=6.2.2 in /etc/bash.bashrc, but it's not storing anything.
However, it works when I'm inside the image with bash and enter the same commands manually.
Update:
I changed the shell back from bash to the Debian/Ubuntu default dash, which is POSIX-compliant. I removed this line:
RUN ln -sf /bin/bash /bin/sh && ln -sf /bin/bash /bin/sh.distrib
Then I tried to add the environment variable with export:
RUN export NODE_VERSION=$(\
curl -sL https://nodejs.org/dist/latest/ |\
tac |\
tac |\
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' |\
head -1\
)
But again, the value is missing at image creation, yet it works when I run the image with $ docker run --rm -it debian /bin/sh. Why?
Update 2:
Looks like the final solution should be something like this:
RUN NODE_VERSION=$( \
curl -sL https://nodejs.org/dist/latest/ | \
tac | \
tac | \
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' | \
head -1 \
) && echo $NODE_VERSION
ENV NODE_VERSION $NODE_VERSION
echo $NODE_VERSION returns 6.2.2 as it should during the Dockerfile build as well, but ENV NODE_VERSION $NODE_VERSION cannot read it. Is there a way to define variables globally, or how can I pass the RUN's output to ENV?
Solution:
I ended up putting the node.js installation part under the same RUN command:
RUN NODE_VERSION=$( \
curl -sL https://nodejs.org/dist/latest/ | \
tac | \
tac | \
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' | \
head -1 \
) \
&& echo $NODE_VERSION \
&& curl -SLO "https://nodejs.org/dist/latest/node-v$NODE_VERSION-linux-x64.tar.xz" -o "node-v$NODE_VERSION-linux-x64.tar.xz" \
&& curl -SLO "https://nodejs.org/dist/latest/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
&& rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt
Update:
But again, the value is missing at image creation, yet it works when I
run the image with $ docker run --rm -it debian /bin/sh. Why?
This is because each statement (conventionally started with an uppercase verb like RUN, ADD, COPY, ENV, etc.) runs in a brand-new intermediate container.
These intermediate containers share a union file system but not the shell environment (e.g. environment variables). That is, only data saved to the file system and variables defined in the Dockerfile (e.g. through ENV) carry over between intermediate containers. Check out this post and the UnionFS Wiki if you want to know how UFS works.
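A minimal Dockerfile that illustrates this (hypothetical, just to show which values survive between statements):
FROM debian
# Each RUN gets a fresh shell in a new intermediate container,
# so an exported variable is gone in the next statement:
RUN export NODE_VERSION=6.2.2 && echo "first RUN: $NODE_VERSION"
# The next line prints an empty value:
RUN echo "second RUN: '$NODE_VERSION'"
# Only ENV (or data written to the file system) carries over:
ENV NODE_VERSION 6.2.2
RUN echo "after ENV: $NODE_VERSION"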
If your goal is to install the latest Node each time you build the image, how about trying nvm (Node Version Manager)?
ARG UBUNTU=16.04
# Pull base image.
FROM ubuntu:${UBUNTU}
# arguments
ARG NVM=0.33.9
ARG NODE=node
# update apt
RUN apt-get update
# Install curl
RUN apt-get install -y curl
# Set home for NVM
ENV NVM_DIR=/home/inazuma/.nvm
# Install Node.js with NVM
RUN mkdir -p ${NVM_DIR} && \
curl -o- https://raw.githubusercontent.com/creationix/nvm/v${NVM}/install.sh | bash && \
. ${NVM_DIR}/nvm.sh && \
nvm install ${NODE}
# The first line below should always be run in each intermediate container
# to make the nvm, node and npm commands available
RUN . ${NVM_DIR}/nvm.sh && nvm use ${NODE} && \
npm install -g cowsay && \
cowsay "Making Docker images is really a headache!"
# Set up your PATH for nvm, node and npm command
CMD ". ${NVM_DIR}/nvm.sh && nvm use ${NODE} && bash"
Note that nvm's setup does not persist across intermediate containers, so you should use . ${NVM_DIR}/nvm.sh to set up the nvm command in each new intermediate container.
nvm manages the node binary locally; use nvm use ${NODE} to put node and npm on the PATH. In nvm, node is an alias for the latest version of Node, which is why the NODE argument is set to node (it can also be set to a semantic version string like 5.0, 9.11.1, etc.).
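To build it, the ARG defaults can be overridden on the command line if you want to pin a version; for example (the image tag node-nvm is just an example):
docker build -t node-nvm .
# or pin a specific Node version instead of the "node" (latest) alias:
docker build --build-arg NODE=8.11.1 -t node-nvm .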
