I wrote a Dockerfile for Varnish Plus. docker build runs successfully, but on docker run it says /bin/sh: 1: ./init: not found. What am I missing in the Dockerfile?
I'm trying to build a custom Docker image for a Kubernetes Varnish deployment.
I tried other parameters such as CMD ["sh", "init"], but then ./start-agent failed. If I put sh in front of everything, I get a "not found" error on /etc/default/varnish. I also got an error from init saying it was expecting "then". I installed everything the same way on bare metal and it worked, but I can't get it to run in a Docker container.
FROM ubuntu:16.04
ARG varnishFile
ARG tokenName
ARG Project
ARG varnishPlusCredential="xxx"
RUN echo " $tokenName, $Project, $varnishFile "
RUN apt-get update
RUN apt-get -y install \
git \
python \
apt-transport-https \
wget \
curl \
gnupg2 \
libmicrohttpd10 \
libssl1.0.0 \
vim \
telnet
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-5.x.list
RUN curl https://${varnishPlusCredential}@repo.varnish-software.com/GPG-key.txt | apt-key add -
RUN echo "deb http://${varnishPlusCredential}@repo.varnish-software.com/ubuntu trusty varnish-4.1-plus" >> /etc/apt/sources.list
RUN echo "deb http://${varnishPlusCredential}@repo.varnish-software.com/ubuntu trusty non-free" >> /etc/apt/sources.list
RUN echo " #apt-get update "
RUN apt-get update -y
RUN apt-get -y install \
varnish-plus \
varnish-plus-ha \
varnish-agent \
filebeat \
varnishtuner
RUN vha-generate-vcl --token ${tokenName} > /etc/varnish/vha.vcl
COPY /${Project}/varnishConfiguration/nodes.conf /etc/varnish/nodes.conf
COPY /${Project}/varnishConfiguration/default.vcl /etc/varnish/vcl/default.vcl
COPY /${Project}/varnishConfiguration/varnish /etc/default/varnish
COPY /${Project}/varnishConfiguration/varnishncsa /etc/default/varnishncsa
COPY /"${Project}"/varnishConfiguration/varnishncsa-init.d/varnishncsa /etc/init.d
#Copy varnish configuration files for the varnish nodes
COPY /${Project}/${varnishFile}/varnish-agent /etc/default/varnish-agent
COPY /${Project}/${varnishFile}/vha-agent /etc/default/vha-agent
COPY /${Project}/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
COPY /scripts/start-varnish-agent.sh /start-varnish-agent
COPY /scripts/start-varnish.sh /start-varnish
COPY /scripts/start-vha-agent.sh /start-vha-agent
COPY /scripts/start-varnishncsa.sh /start-varnishncsa
COPY /scripts/start-filebeat.sh /start-filebeat
COPY /scripts/init.sh /init
#Executable permission for startup scripts
RUN chmod +x /init \
/start-varnish-agent \
/start-varnish \
/start-vha-agent \
/start-varnishncsa \
/etc/init.d/varnishncsa \
/start-filebeat
EXPOSE 80
EXPOSE 6082
EXPOSE 6085
CMD ./init
My init.sh file is located in the scripts folder, in the same location as the Dockerfile.
#!/bin/bash
# Start the varnish service
./start-varnish
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnish service: $status"
exit $status
fi
# Start the vha-agent
./start-vha-agent
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start vha-agent: $status"
exit $status
fi
# Start the varnish-agent
./start-varnish-agent
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnish-agent: $status"
exit $status
fi
# Start the varnishncsa
./start-varnishncsa
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnishncsa: $status"
exit $status
fi
# Start the filebeat
./start-filebeat
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start filebeat: $status"
exit $status
fi
while sleep 60; do
ps aux |grep varnishd |grep -v "grep"
PROCESS_1_STATUS=$?
ps aux |grep vha-agent |grep -v "grep"
PROCESS_2_STATUS=$?
ps aux |grep varnish-agent |grep -v "grep"
PROCESS_3_STATUS=$?
ps aux |grep varnishncsa |grep -v "grep"
PROCESS_4_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If any of them is not 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 -o $PROCESS_3_STATUS -ne 0 -o $PROCESS_4_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Varnish Software has an official Varnish Cache Plus Docker image. If you have a subscription, which I suspect you have, you can get help from support via support@varnish-software.com.
Support can have a look at your Dockerfile and advise you, but they can also explain how you can use the official image to get the job done, without having to maintain the Dockerfile yourself.
Is it possible to generate a Dockerfile from an image? I want to know for two reasons:
I can download images from the repository but would like to see the recipe that generated them.
I like the idea of saving snapshots, but once I am done it would be nice to have a structured format to review what was done.
How to generate or reverse a Dockerfile from an image?
You can. Mostly.
Notes: It does not generate a Dockerfile that you can use directly with docker build; the output is just for your reference.
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 nginx:latest
It will pull the target Docker image automatically and export its Dockerfile. The -sV=1.36 parameter is not always required.
Reference: https://hub.docker.com/r/alpine/dfimage
Now hub.docker.com shows the image layers, with their detailed commands, directly if you choose a particular tag.
Bonus
If you want to know which files are changed in each layer
alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive"
dive nginx:latest
On the left you see each layer's command; on the right (jump there with Tab), the yellow lines are the files and folders changed in that layer.
(Use Space to collapse a directory.)
Old answer
Below is the old answer; it no longer works.
$ docker pull centurylink/dockerfile-from-image
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm centurylink/dockerfile-from-image"
$ dfimage --help
Usage: dockerfile-from-image.rb [options] <image_id>
-f, --full-tree Generate Dockerfile for all parent layers
-h, --help Show this message
To understand how a docker image was built, use the
docker history --no-trunc command.
You can build a Dockerfile from an image, but it will not contain everything you would want to fully understand how the image was generated. What you can reasonably extract are the MAINTAINER, ENV, EXPOSE, VOLUME, WORKDIR, ENTRYPOINT, CMD, and ONBUILD parts of the Dockerfile.
The following script should work for you:
#!/bin/bash
docker history --no-trunc "$1" | \
sed -n -e 's,.*/bin/sh -c #(nop) \(MAINTAINER .*[^ ]\) *0 B,\1,p' | \
head -1
docker inspect --format='{{range $e := .Config.Env}}
ENV {{$e}}
{{end}}{{range $e,$v := .Config.ExposedPorts}}
EXPOSE {{$e}}
{{end}}{{range $e,$v := .Config.Volumes}}
VOLUME {{$e}}
{{end}}{{with .Config.User}}USER {{.}}{{end}}
{{with .Config.WorkingDir}}WORKDIR {{.}}{{end}}
{{with .Config.Entrypoint}}ENTRYPOINT {{json .}}{{end}}
{{with .Config.Cmd}}CMD {{json .}}{{end}}
{{with .Config.OnBuild}}ONBUILD {{json .}}{{end}}' "$1"
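A minimal usage sketch, assuming you save the script above as extract-dockerfile.sh (the file name is just an example):
chmod +x extract-dockerfile.sh
./extract-dockerfile.sh nginx:latest > Dockerfile.reconstructed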
I use this as part of a script to rebuild running containers as images:
https://github.com/docbill/docker-scripts/blob/master/docker-rebase
The Dockerfile is mainly useful if you want to be able to repackage an image.
The thing to keep in mind is that a Docker image can actually just be the tar backup of a real or virtual machine. I have made several Docker images this way. Even the build history shows me importing a huge tar file as the first step in creating the image...
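For reference, that kind of image is typically created with docker import; a minimal sketch (the path and tag are just examples):
# pack a root filesystem backup and import it as a single-layer image
tar -C /path/to/rootfs -c . | docker import - mymachine:backup
docker history mymachine:backup   # the history shows the import as the only step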
I somehow completely missed the actual command in the accepted answer, so here it is again, a bit more visible in its own paragraph, to see how many people are like me:
$ docker history --no-trunc <IMAGE_ID>
A bash solution:
docker history --no-trunc $argv | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' | head -n -1
Step by step explanations:
tac : reverse the file
tr -s ' ' trim multiple whitespaces into 1
cut -d " " -f 5- remove the first fields (until X months/years ago)
sed 's,^/bin/sh -c #(nop) ,,g' remove /bin/sh calls for ENV,LABEL...
sed 's,^/bin/sh -c,RUN,g' remove /bin/sh calls for RUN
sed 's, && ,\n & ,g' pretty print multi command lines following Docker best practices
sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' remove layer size information
head -n -1 remove last line ("SIZE COMMENT" in this case)
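If you want this as a reusable command, you can wrap the pipeline in a bash function; the name dih below simply matches the example that follows, and in bash the image argument is "$1" rather than $argv:
dih() {
  docker history --no-trunc "$1" | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' | head -n -1
}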
Example:
~ dih ubuntu:18.04
ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in /
RUN set -xe
&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d
&& echo 'exit 101' >> /usr/sbin/policy-rc.d
&& chmod +x /usr/sbin/policy-rc.d
&& dpkg-divert --local --rename --add /sbin/initctl
&& cp -a /usr/sbin/policy-rc.d /sbin/initctl
&& sed -i 's/^exit.*/exit 0/' /sbin/initctl
&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup
&& echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean
&& echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages
&& echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes
&& echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
RUN rm -rf /var/lib/apt/lists/*
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
RUN mkdir -p /run/systemd
&& echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]
Update Dec 2018 to BMW's answer
chenzj/dfimage - as described on hub.docker.com regenerates Dockerfile from other images. So you can use it as follows:
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage IMAGE_ID > Dockerfile
This is derived from @fallino's answer, with some adjustments and simplifications by using the output format option for docker history. Since macOS and GNU/Linux have different command-line utilities, a different version is necessary for the Mac. If you only need one or the other, you can just use those lines.
#!/bin/bash
case "$OSTYPE" in
linux*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tac | # reverse the file
sed 's,^\(|3.*\)\?/bin/\(ba\)\?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed 's, *&& *, \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
darwin*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tail -r | # reverse the file
sed -E 's,^(\|3.*)?/bin/(ba)?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed $'s, *&& *, \\\ \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
*)
echo "unknown OSTYPE: $OSTYPE"
;;
esac
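A usage sketch (the file name history2dockerfile.sh is just an example); the same script handles both platforms because it branches on $OSTYPE:
chmod +x history2dockerfile.sh
./history2dockerfile.sh nginx:latest                 # print to the terminal
./history2dockerfile.sh nginx:latest > Dockerfile.approx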
It is not possible at this point (unless the author of the image explicitly included the Dockerfile).
However, it is definitely something useful! There are two things that will help to obtain this feature.
Trusted builds (detailed in this docker-dev discussion)
More detailed metadata in the successive images produced by the build process. In the long run, the metadata should indicate which build command produced the image, which means that it will be possible to reconstruct the Dockerfile from a sequence of images.
If you are interested in an image that is in the Docker Hub registry and want to take a look at its Dockerfile:
Example:
If you want to see the Dockerfile of the image "jupyter/datascience-notebook", append the word "Dockerfile" to its URL in the address bar of your browser, as shown below.
https://hub.docker.com/r/jupyter/datascience-notebook/
https://hub.docker.com/r/jupyter/datascience-notebook/Dockerfile
Note:
Not all images have a Dockerfile; for example, https://hub.docker.com/r/redislabs/redisinsight/Dockerfile
Sometimes this way is much faster than searching for the Dockerfile on GitHub.
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage image_id
Below is the output of the dfimage command:
$ dfimage 0f1947a021ce
FROM node:8
WORKDIR /usr/src/app
COPY file:e76d2e84545dedbe901b7b7b0c8d2c9733baa07cc821054efec48f623e29218c in ./
RUN /bin/sh -c npm install
COPY dir:a89a4894689a38cbf3895fdc0870878272bb9e09268149a87a6974a274b2184a in .
EXPOSE 8080
CMD ["npm" "start"]
It is possible in just two steps: first pull the image, then run the docker history command, as shown below.
docker pull kalilinux/kali-rolling
docker history --format "{{.CreatedBy}}" kalilinux/kali-rolling --no-trunc
What is image2df
image2df is a tool for generating a Dockerfile from an image.
This tool is very useful when you only have a Docker image and need to generate a Dockerfile from it.
How does it work
It reverse-parses the history information of the image.
How to use this image
# Command alias
echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
. ~/.bashrc
# Execute command
image2df <IMAGE>
See help
docker run --rm cucker/image2df --help
For example
$ echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
$ . ~/.bashrc
$ docker pull mysql
$ image2df mysql
========== Dockerfile ==========
FROM mysql:latest
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*
ENV GOSU_VERSION=1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-get update && apt-get install -y --no-install-recommends \
pwgen \
openssl \
perl \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN set -ex; \
key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
gpg --batch --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
apt-key list > /dev/null
ENV MYSQL_MAJOR=8.0
ENV MYSQL_VERSION=8.0.24-1debian10
RUN echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-8.0' > /etc/apt/sources.list.d/mysql.list
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update \
&& apt-get install -y \
mysql-community-client="${MYSQL_VERSION}" \
mysql-community-server-core="${MYSQL_VERSION}" \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld \
&& chown -R mysql:mysql /var/lib/mysql /var/run/mysqld \
&& chmod 1777 /var/run/mysqld /var/lib/mysql
VOLUME [/var/lib/mysql]
COPY dir:2e040acc386ebd23b8571951a51e6cb93647df091bc26159b8c757ef82b3fcda in /etc/mysql/
COPY file:345a22fe55d3e6783a17075612415413487e7dba27fbf1000a67c7870364b739 in /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
CMD ["mysqld"]
reference
I'm attempting to install RabbitMQ inside a Docker container using an Ubuntu 18.04 image for running unittests against it.
To install, I'm running the normal sudo apt-get install rabbitmq-server, and it appears to install fine, but when I attempt to start or communicate with the service, I get the error:
Error: unable to connect to node rabbit@b562da1810ce: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@b562da1810ce]
rabbit@b562da1810ce:
* connected to epmd (port 4369) on b562da1810ce
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Authentication failed (rejected by the remote node), please check the Erlang cookie
current node details:
- node name: 'rabbitmq-cli-69@b562da1810ce'
- home dir: /var/lib/rabbitmq
- cookie hash: YUZIPS6zyhfUBX5afdKGcw==
Researching the "please check the Erlang cookie" text gets me a ton of similar questions, none of which seem to apply to Docker or my situation.
I've tried deleting the ~/.erlang.cookie then restarting the service, and completely purging the package and reinstalling. Nothing's worked.
How do I run RabbitMQ inside Docker?
Edit: This is my install procedure.
root@b562da1810ce:$ sudo apt-get purge -yq rabbitmq-server
Reading package lists...
Building dependency tree...
Reading state information...
The following packages were automatically installed and are no longer required:
erlang-asn1 erlang-base erlang-corba erlang-crypto erlang-diameter erlang-edoc erlang-eldap erlang-erl-docgen erlang-eunit erlang-ic erlang-inets erlang-mnesia erlang-nox erlang-odbc erlang-os-mon erlang-parsetools erlang-public-key erlang-runtime-tools erlang-snmp erlang-ssh
erlang-ssl erlang-syntax-tools erlang-tools erlang-xmerl libodbc1
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
rabbitmq-server*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 5,678 kB disk space will be freed.
(Reading database ... 69832 files and directories currently installed.)
Removing rabbitmq-server (3.6.10-1) ...
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of stop.
(Reading database ... 69618 files and directories currently installed.)
Purging configuration files for rabbitmq-server (3.6.10-1) ...
Processing triggers for systemd (237-3ubuntu10.33) ...
root@b562da1810ce:$ rm -Rf /var/log/rabbitmq/*
root@b562da1810ce:$ sudo apt-get install -yq rabbitmq-server
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
rabbitmq-server
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 4,625 kB of archives.
After this operation, 5,678 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/main amd64 rabbitmq-server all 3.6.10-1 [4,625 kB]
Fetched 4,625 kB in 4s (1,070 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package rabbitmq-server.
(Reading database ... 69613 files and directories currently installed.)
Preparing to unpack .../rabbitmq-server_3.6.10-1_all.deb ...
Unpacking rabbitmq-server (3.6.10-1) ...
Setting up rabbitmq-server (3.6.10-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service → /lib/systemd/system/rabbitmq-server.service.
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Processing triggers for systemd (237-3ubuntu10.33) ...
root@b562da1810ce:$ sudo service rabbitmq-server status
Status of node rabbit@b562da1810ce
Error: unable to connect to node rabbit@b562da1810ce: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@b562da1810ce]
rabbit@b562da1810ce:
* connected to epmd (port 4369) on b562da1810ce
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Authentication failed (rejected by the remote node), please check the Erlang cookie
current node details:
- node name: 'rabbitmq-cli-30@b562da1810ce'
- home dir: /var/lib/rabbitmq
- cookie hash: DHe9O00f7sIHn/dTThKVVQ==
root@b562da1810ce:$ sudo service rabbitmq-server start
* Starting RabbitMQ Messaging Server rabbitmq-server * FAILED - check /var/log/rabbitmq/startup_\{log, _err\}
[fail]
root@b562da1810ce:$ sudo service rabbitmq-server status
Status of node rabbit@b562da1810ce
Error: unable to connect to node rabbit@b562da1810ce: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@b562da1810ce]
rabbit@b562da1810ce:
* connected to epmd (port 4369) on b562da1810ce
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* Authentication failed (rejected by the remote node), please check the Erlang cookie
current node details:
- node name: 'rabbitmq-cli-13@b562da1810ce'
- home dir: /var/lib/rabbitmq
- cookie hash: DHe9O00f7sIHn/dTThKVVQ==
root@b562da1810ce:$ cat /var/log/rabbitmq/startup_err
root@b562da1810ce:$ cat /var/log/rabbitmq/startup_log
ERROR: node with name "rabbit" already running on "b562da1810ce"
Based on the last line from the log, I decided to check ps aux|grep -i rabbit, which shows Rabbit is running. Yet neither service nor rabbitmqctl is able to communicate with it. Why is this?
Either use the official Docker image from https://hub.docker.com/_/rabbitmq, or you can use the Dockerfile from https://hub.docker.com/_/rabbitmq:
# Alpine Linux is not officially supported by the RabbitMQ team -- use at your own risk!
FROM alpine:3.10
RUN apk add --no-cache \
# grab su-exec for easy step-down from root
'su-exec>=0.2' \
# bash for docker-entrypoint.sh
bash \
# "ps" for "rabbitmqctl wait" (https://github.com/docker-library/rabbitmq/issues/162)
procps
# Default to a PGP keyserver that pgp-happy-eyeballs recognizes, but allow for substitutions locally
ARG PGP_KEYSERVER=ha.pool.sks-keyservers.net
# If you are building this image locally and are getting `gpg: keyserver receive failed: No data` errors,
# run the build with a different PGP_KEYSERVER, e.g. docker build --tag rabbitmq:3.7 --build-arg PGP_KEYSERVER=pgpkeys.eu 3.7/ubuntu
# For context, see https://github.com/docker-library/official-images/issues/4252
# Using the latest OpenSSL LTS release, with support until September 2023 - https://www.openssl.org/source/
ENV OPENSSL_VERSION 1.1.1d
ENV OPENSSL_SOURCE_SHA256="1e3a91bc1f9dfce01af26026f856e064eab4c8ee0a8f457b5ae30b40b8b711f2"
# https://www.openssl.org/community/omc.html
ENV OPENSSL_PGP_KEY_IDS="0x8657ABB260F056B1E5190839D9C4D26D0E604491 0x5B2545DAB21995F4088CEFAA36CEE4DEB00CFE33 0xED230BEC4D4F2518B9D7DF41F0DB4D21C1D35231 0xC1F33DD8CE1D4CC613AF14DA9195C48241FBF7DD 0x7953AC1FBC3DC8B3B292393ED5E9E43F7DF9EE8C 0xE5E52560DD91C556DDBDA5D02064C53641C25E5D"
# Use the latest stable Erlang/OTP release (https://github.com/erlang/otp/tags)
ENV OTP_VERSION 22.1.8
# TODO add PGP checking when the feature will be added to Erlang/OTP's build system
# http://erlang.org/pipermail/erlang-questions/2019-January/097067.html
ENV OTP_SOURCE_SHA256="7302be70cee2c33689bf2c2a3e7cfee597415d0fb3e4e71bd3e86bd1eff9cfdc"
# Install dependencies required to build Erlang/OTP from source
# http://erlang.org/doc/installation_guide/INSTALL.html
# autoconf: Required to configure Erlang/OTP before compiling
# dpkg-dev: Required to set up host & build type when compiling Erlang/OTP
# gnupg: Required to verify OpenSSL artefacts
# libncurses5-dev: Required for Erlang/OTP new shell & observer_cli - https://github.com/zhongwencool/observer_cli
RUN set -eux; \
\
apk add --no-cache --virtual .build-deps \
autoconf \
ca-certificates \
dpkg-dev dpkg \
gcc \
gnupg \
libc-dev \
linux-headers \
make \
ncurses-dev \
; \
\
OPENSSL_SOURCE_URL="https://www.openssl.org/source/openssl-$OPENSSL_VERSION.tar.gz"; \
OPENSSL_PATH="/usr/local/src/openssl-$OPENSSL_VERSION"; \
OPENSSL_CONFIG_DIR=/usr/local/etc/ssl; \
\
# /usr/local/src doesn't exist in Alpine by default
mkdir /usr/local/src; \
\
# Required by the crypto & ssl Erlang/OTP applications
wget --output-document "$OPENSSL_PATH.tar.gz.asc" "$OPENSSL_SOURCE_URL.asc"; \
wget --output-document "$OPENSSL_PATH.tar.gz" "$OPENSSL_SOURCE_URL"; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $OPENSSL_PGP_KEY_IDS; do \
gpg --batch --keyserver "$PGP_KEYSERVER" --recv-keys "$key"; \
done; \
gpg --batch --verify "$OPENSSL_PATH.tar.gz.asc" "$OPENSSL_PATH.tar.gz"; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
echo "$OPENSSL_SOURCE_SHA256 *$OPENSSL_PATH.tar.gz" | sha256sum -c -; \
mkdir -p "$OPENSSL_PATH"; \
tar --extract --file "$OPENSSL_PATH.tar.gz" --directory "$OPENSSL_PATH" --strip-components 1; \
\
# Configure OpenSSL for compilation
cd "$OPENSSL_PATH"; \
# OpenSSL's "config" script uses a lot of "uname"-based target detection...
MACHINE="$(dpkg-architecture --query DEB_BUILD_GNU_CPU)" \
RELEASE="4.x.y-z" \
SYSTEM='Linux' \
BUILD='???' \
./config \
--openssldir="$OPENSSL_CONFIG_DIR" \
# add -rpath to avoid conflicts between our OpenSSL's "libssl.so" and the libssl package by making sure /usr/local/lib is searched first (but only for Erlang/OpenSSL to avoid issues with other tools using libssl; https://github.com/docker-library/rabbitmq/issues/364)
-Wl,-rpath=/usr/local/lib \
; \
# Compile, install OpenSSL, verify that the command-line works & development headers are present
make -j "$(getconf _NPROCESSORS_ONLN)"; \
make install_sw install_ssldirs; \
cd ..; \
rm -rf "$OPENSSL_PATH"*; \
# use Alpine's CA certificates
rmdir "$OPENSSL_CONFIG_DIR/certs" "$OPENSSL_CONFIG_DIR/private"; \
ln -sf /etc/ssl/certs /etc/ssl/private "$OPENSSL_CONFIG_DIR"; \
# smoke test
openssl version; \
\
OTP_SOURCE_URL="https://github.com/erlang/otp/archive/OTP-$OTP_VERSION.tar.gz"; \
OTP_PATH="/usr/local/src/otp-$OTP_VERSION"; \
\
# Download, verify & extract OTP_SOURCE
mkdir -p "$OTP_PATH"; \
wget --output-document "$OTP_PATH.tar.gz" "$OTP_SOURCE_URL"; \
echo "$OTP_SOURCE_SHA256 *$OTP_PATH.tar.gz" | sha256sum -c -; \
tar --extract --file "$OTP_PATH.tar.gz" --directory "$OTP_PATH" --strip-components 1; \
\
# Configure Erlang/OTP for compilation, disable unused features & applications
# http://erlang.org/doc/applications.html
# ERL_TOP is required for Erlang/OTP makefiles to find the absolute path for the installation
cd "$OTP_PATH"; \
export ERL_TOP="$OTP_PATH"; \
./otp_build autoconf; \
export CFLAGS='-g -O2'; \
# add -rpath to avoid conflicts between our OpenSSL's "libssl.so" and the libssl package by making sure /usr/local/lib is searched first (but only for Erlang/OpenSSL to avoid issues with other tools using libssl; https://github.com/docker-library/rabbitmq/issues/364)
export CFLAGS="$CFLAGS -Wl,-rpath=/usr/local/lib"; \
hostArch="$(dpkg-architecture --query DEB_HOST_GNU_TYPE)"; \
buildArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; \
dpkgArch="$(dpkg --print-architecture)"; dpkgArch="${dpkgArch##*-}"; \
./configure \
--host="$hostArch" \
--build="$buildArch" \
--disable-dynamic-ssl-lib \
--disable-hipe \
--disable-sctp \
--disable-silent-rules \
--enable-clock-gettime \
--enable-hybrid-heap \
--enable-kernel-poll \
--enable-shared-zlib \
--enable-smp-support \
--enable-threads \
--with-microstate-accounting=extra \
--without-common_test \
--without-debugger \
--without-dialyzer \
--without-diameter \
--without-edoc \
--without-erl_docgen \
--without-erl_interface \
--without-et \
--without-eunit \
--without-ftp \
--without-hipe \
--without-jinterface \
--without-megaco \
--without-observer \
--without-odbc \
--without-reltool \
--without-ssh \
--without-tftp \
--without-wx \
; \
# Compile & install Erlang/OTP
make -j "$(getconf _NPROCESSORS_ONLN)" GEN_OPT_FLGS="-O2 -fno-strict-aliasing"; \
make install; \
cd ..; \
rm -rf \
"$OTP_PATH"* \
/usr/local/lib/erlang/lib/*/examples \
/usr/local/lib/erlang/lib/*/src \
; \
\
runDeps="$( \
scanelf --needed --nobanner --format '%n#p' --recursive /usr/local \
| tr ',' '\n' \
| sort -u \
| awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' \
)"; \
apk add --no-cache --virtual .otp-run-deps $runDeps; \
apk del --no-network .build-deps; \
\
# Check that OpenSSL still works after purging build dependencies
openssl version; \
# Check that Erlang/OTP crypto & ssl were compiled against OpenSSL correctly
erl -noshell -eval 'io:format("~p~n~n~p~n~n", [crypto:supports(), ssl:versions()]), init:stop().'
ENV RABBITMQ_DATA_DIR=/var/lib/rabbitmq
# Create rabbitmq system user & group, fix permissions & allow root user to connect to the RabbitMQ Erlang VM
RUN set -eux; \
addgroup -g 101 -S rabbitmq; \
adduser -u 100 -S -h "$RABBITMQ_DATA_DIR" -G rabbitmq rabbitmq; \
mkdir -p "$RABBITMQ_DATA_DIR" /etc/rabbitmq /tmp/rabbitmq-ssl /var/log/rabbitmq; \
chown -fR rabbitmq:rabbitmq "$RABBITMQ_DATA_DIR" /etc/rabbitmq /tmp/rabbitmq-ssl /var/log/rabbitmq; \
chmod 777 "$RABBITMQ_DATA_DIR" /etc/rabbitmq /tmp/rabbitmq-ssl /var/log/rabbitmq; \
ln -sf "$RABBITMQ_DATA_DIR/.erlang.cookie" /root/.erlang.cookie
# Use the latest stable RabbitMQ release (https://www.rabbitmq.com/download.html)
ENV RABBITMQ_VERSION 3.7.23-rc.1
# https://www.rabbitmq.com/signatures.html#importing-gpg
ENV RABBITMQ_PGP_KEY_ID="0x0A9AF2115F4687BD29803A206B73A36E6026DFCA"
ENV RABBITMQ_HOME=/opt/rabbitmq
# Add RabbitMQ to PATH, send all logs to TTY
ENV PATH=$RABBITMQ_HOME/sbin:$PATH \
RABBITMQ_LOGS=- RABBITMQ_SASL_LOGS=-
# Install RabbitMQ
RUN set -eux; \
\
apk add --no-cache --virtual .build-deps \
ca-certificates \
gnupg \
xz \
; \
\
RABBITMQ_SOURCE_URL="https://github.com/rabbitmq/rabbitmq-server/releases/download/v$RABBITMQ_VERSION/rabbitmq-server-generic-unix-latest-toolchain-$RABBITMQ_VERSION.tar.xz"; \
RABBITMQ_PATH="/usr/local/src/rabbitmq-$RABBITMQ_VERSION"; \
\
wget --output-document "$RABBITMQ_PATH.tar.xz.asc" "$RABBITMQ_SOURCE_URL.asc"; \
wget --output-document "$RABBITMQ_PATH.tar.xz" "$RABBITMQ_SOURCE_URL"; \
\
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys "$RABBITMQ_PGP_KEY_ID"; \
gpg --batch --verify "$RABBITMQ_PATH.tar.xz.asc" "$RABBITMQ_PATH.tar.xz"; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
\
mkdir -p "$RABBITMQ_HOME"; \
tar --extract --file "$RABBITMQ_PATH.tar.xz" --directory "$RABBITMQ_HOME" --strip-components 1; \
rm -rf "$RABBITMQ_PATH"*; \
# Do not default SYS_PREFIX to RABBITMQ_HOME, leave it empty
grep -qE '^SYS_PREFIX=\$\{RABBITMQ_HOME\}$' "$RABBITMQ_HOME/sbin/rabbitmq-defaults"; \
sed -i 's/^SYS_PREFIX=.*$/SYS_PREFIX=/' "$RABBITMQ_HOME/sbin/rabbitmq-defaults"; \
grep -qE '^SYS_PREFIX=$' "$RABBITMQ_HOME/sbin/rabbitmq-defaults"; \
chown -R rabbitmq:rabbitmq "$RABBITMQ_HOME"; \
\
apk del .build-deps; \
\
# verify assumption of no stale cookies
[ ! -e "$RABBITMQ_DATA_DIR/.erlang.cookie" ]; \
# Ensure RabbitMQ was installed correctly by running a few commands that do not depend on a running server, as the rabbitmq user
# If they all succeed, it's safe to assume that things have been set up correctly
su-exec rabbitmq rabbitmqctl help; \
su-exec rabbitmq rabbitmqctl list_ciphers; \
su-exec rabbitmq rabbitmq-plugins list; \
# no stale cookies
rm "$RABBITMQ_DATA_DIR/.erlang.cookie"
# Added for backwards compatibility - users can simply COPY custom plugins to /plugins
RUN ln -sf /opt/rabbitmq/plugins /plugins
# set home so that any `--user` knows where to put the erlang cookie
ENV HOME $RABBITMQ_DATA_DIR
# Hint that the data (a.k.a. home dir) dir should be separate volume
VOLUME $RABBITMQ_DATA_DIR
# warning: the VM is running with native name encoding of latin1 which may cause Elixir to malfunction as it expects utf8. Please ensure your locale is set to UTF-8 (which can be verified by running "locale" in your shell)
# Setting all environment variables that control language preferences, behaviour differs - https://www.gnu.org/software/gettext/manual/html_node/The-LANGUAGE-variable.html#The-LANGUAGE-variable
# https://docs.docker.com/samples/library/ubuntu/#locales
ENV LANG=C.UTF-8 LANGUAGE=C.UTF-8 LC_ALL=C.UTF-8
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 4369 5671 5672 25672
CMD ["rabbitmq-server"]
Use this command to build the image (the dot tells Docker to look for the Dockerfile in the current directory):
docker build -t yourimagename .
Once the image is built, you can use the following command to create and start a container from it:
docker run yourimagename
I need to install the awscli and jq libraries in the Hasura Docker image. I tried yum, apt-get, and apk to install the dependencies, but none of them worked.
Docker Image: https://hub.docker.com/r/hasura/graphql-engine/
How do I install these dependencies in the Hasura Docker image? Any help is appreciated.
Dockerfile:
FROM hasura/graphql-engine:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
set -o errexit -o nounset -o pipefail
DB_HOST=${DB_HOST:-postgres}
DB_PORT=${DB_PORT:-5432}
if [ -z "${DB_NAME}" ]; then
echo "Must provide DB_NAME environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_USER}" ]; then
echo "Must provide DB_USER environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_PASSWORD}" ]; then
echo "Must provide DB_PASSWORD environment variable. Exiting...."
exit 1
fi
export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
/bin/graphql-engine serve
DB_PASSWORD is encrypted with KMS, so i want to use aws cli to decrypt the password in entrypoint.sh file before setting the Environment Variable: HASURA_GRAPHQL_DATABASE_URL
I was able to customize the Hasura Docker image with the help of the Hasura team's support.
Here is the link to github issue: https://github.com/hasura/graphql-engine/issues/2729
Dockerfile:
FROM hasura/graphql-engine:v1.0.0-beta.4 as base
FROM python:3.7-slim-stretch
RUN apt-get -y update \
&& apt-get install -y --no-install-recommends libpq-dev jq \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /usr/share/doc/ \
&& rm -rf /usr/share/man/ \
&& rm -rf /usr/share/locale/ \
&& pip install awscli
# copy hasura binary from base container
COPY --from=base /bin/graphql-engine /bin/graphql-engine
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
set -e
DB_HOST=${DB_HOST:-postgres}
DB_PORT=${DB_PORT:-5432}
AWS_REGION=${AWS_REGION:-us-east-1}
DB_PASSWORD_ENCYPTED=${DB_PASSWORD_ENCYPTED:-false}
if [ -z "${DB_NAME}" ]; then
echo "Must provide DB_NAME environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_USER}" ]; then
echo "Must provide DB_USER environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_PASSWORD}" ]; then
echo "Must provide DB_PASSWORD environment variable. Exiting...."
exit 1
fi
if [ ${DB_PASSWORD_ENCYPTED} == "true" ]
then
echo "loading KMS credentials"
decrypted_value_base64=$( \
aws --region ${AWS_REGION} kms decrypt \
--ciphertext-blob fileb://<(echo "${DB_PASSWORD}" | base64 -d) \
--query Plaintext \
--output text
)
decrypted_value=$(echo $decrypted_value_base64 | base64 -d)
export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${decrypted_value}@${DB_HOST}:${DB_PORT}/${DB_NAME}
else
export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
fi
/bin/graphql-engine serve
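A build-and-run sketch for this setup; the image tag, hostnames, and values below are placeholders, and Hasura's default port 8080 is assumed. AWS credentials for the KMS call also have to be available in the container (for example via an instance role or the usual AWS_* environment variables):
docker build -t hasura-custom .
docker run -p 8080:8080 \
  -e DB_HOST=mydb.example.com -e DB_PORT=5432 \
  -e DB_NAME=hasura -e DB_USER=hasura \
  -e DB_PASSWORD='<kms-encrypted-base64-blob>' \
  -e DB_PASSWORD_ENCYPTED=true \
  -e AWS_REGION=us-east-1 \
  hasura-custom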
I have the following command in my Dockerfile:
RUN echo "\
export NODE_VERSION=$(\
curl -sL https://nodejs.org/dist/latest/ |\
tac |\
tac |\
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' |\
head -1\
)" >> /etc/bash.bashrc
RUN source /etc/bash.bashrc
The command above should store export NODE_VERSION=6.2.2 in /etc/bash.bashrc, but it stores nothing.
However, it works when I'm inside a container with bash and enter the commands manually.
Update:
I changed back the shell from bash to the Debian/Ubuntu default dash, which is POSIX standard. I removed this line:
RUN ln -sf /bin/bash /bin/sh && ln -sf /bin/bash /bin/sh.distrib
Then I tried to add the environment variable with export:
RUN export NODE_VERSION=$(\
curl -sL https://nodejs.org/dist/latest/ |\
tac |\
tac |\
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' |\
head -1\
)
But again, the output is missing at image creation, although it works when I run the image with $ docker run --rm -it debian /bin/sh. Why?
Update 2:
Looks like the final solution should be something like this:
RUN NODE_VERSION=$( \
curl -sL https://nodejs.org/dist/latest/ | \
tac | \
tac | \
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' | \
head -1 \
) && echo $NODE_VERSION
ENV NODE_VERSION $NODE_VERSION
echo $NODE_VERSION also returns 6.2.2, as it should, during the execution of the Dockerfile, but ENV NODE_VERSION $NODE_VERSION cannot read it. Is there a way to define variables globally, or how can I pass a RUN command's output to ENV?
Solution:
I ended up putting the node.js installation part under the same RUN command:
RUN NODE_VERSION=$( \
curl -sL https://nodejs.org/dist/latest/ | \
tac | \
tac | \
grep -oPa -m 1 '(?<=node-v)(.*?)(?=-linux-x64\.tar\.xz)' | \
head -1 \
) \
&& echo $NODE_VERSION \
&& curl -SLO "https://nodejs.org/dist/latest/node-v$NODE_VERSION-linux-x64.tar.xz" -o "node-v$NODE_VERSION-linux-x64.tar.xz" \
&& curl -SLO "https://nodejs.org/dist/latest/SHASUMS256.txt.asc" \
&& gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
&& grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
&& tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
&& rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt
Update:
But again, the output is missing at image creation, although it works when I
run the image with $ docker run --rm -it debian /bin/sh. Why?
This is because each statement (conventionally started with an uppercase verb like RUN, ADD, COPY, ENV, etc.) is run in a brand-new intermediate container.
These intermediate containers do not share the environment (e.g. environment variables); they share only a union file system. That is, only data saved to the file system and variables defined in the Dockerfile (e.g. through ENV) carry over between intermediate containers. Check out this post and the UnionFS Wiki if you want to know how UnionFS works.
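A tiny Dockerfile that illustrates this behaviour (a sketch; any base image works):
FROM debian
RUN export FOO=bar              # only exists in this intermediate container
RUN echo "FOO is '$FOO'"        # prints an empty value
RUN echo bar > /tmp/foo         # the file system does carry over
RUN cat /tmp/foo                # prints "bar"
# ENV, by contrast, persists across instructions
ENV BAZ=qux
RUN echo "BAZ is '$BAZ'"        # prints "qux"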
If your goal is to install the latest Node each time you build the image, how about giving nvm (Node Version Manager) a try?
ARG UBUNTU=16.04
# Pull base image.
FROM ubuntu:${UBUNTU}
# arguments
ARG NVM=0.33.9
ARG NODE=node
# update apt
RUN apt-get update
# Install curl
RUN apt-get install -y curl
# Set home for NVM
ENV NVM_DIR=/home/inazuma/.nvm
# Install Node.js with NVM
RUN mkdir -p ${NVM_DIR} && \
curl -o- https://raw.githubusercontent.com/creationix/nvm/v${NVM}/install.sh | bash && \
. ${NVM_DIR}/nvm.sh && \
nvm install ${NODE}
# The first line below should always be called in each intermediate container
# to gain the nvm, node and npm commands
RUN . ${NVM_DIR}/nvm.sh && nvm use ${NODE} && \
npm install -g cowsay && \
cowsay "Making Docker images is really a headache!"
# Set up your PATH for nvm, node and npm command
CMD ". ${NVM_DIR}/nvm.sh && nvm use ${NODE} && bash"
Note that nvm does not persist across intermediate containers, so you should use . ${NVM_DIR}/nvm.sh to set up the nvm command in each new intermediate container.
NVM manages the node binary locally; use nvm use ${NODE} to include node and npm in the PATH. In NVM, node is an alias for the latest version of Node; therefore, we set the NODE argument to node (it can also be set to a semantic version string like 5.0 or 9.11.1).
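A build-and-run sketch for the image above (the tag and version are just examples):
docker build -t node-via-nvm --build-arg NODE=10.15.3 .
docker run --rm -it node-via-nvm bash
# inside the container, source nvm and check the installed version:
. $NVM_DIR/nvm.sh && nvm use node && node --version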