Docker container healthcheck: stop unhealthy container

I have a Docker container with a healthcheck that runs every minute. I read that appending "|| kill 1" to the healthcheck in the Dockerfile can stop the container after the healthcheck fails, but it does not seem to be working for me and I cannot find an example that works.
Does anybody know how I can stop the container after it is marked as unhealthy? I currently have this in my Dockerfile:
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 CMD bash /expressvpn/healthcheck.sh || kill 1
EDIT 1
Dockerfile
FROM debian:buster-slim
ENV CODE="code"
ENV SERVER="smart"
ARG VERSION="expressvpn_2.6.0.32-1_armhf.deb"
COPY files/ /expressvpn/
RUN apt-get update && apt-get install -y --no-install-recommends \
expect curl ca-certificates iproute2 wget jq \
&& wget -q https://download.expressvpn.xyz/clients/linux/${VERSION} -O /expressvpn/${VERSION} \
&& dpkg -i /expressvpn/${VERSION} \
&& rm -rf /expressvpn/*.deb \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get purge --autoremove -y wget \
&& rm -rf /var/log/*.log
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 CMD bash /expressvpn/healthcheck.sh || exit 1
ENTRYPOINT ["/bin/bash", "/expressvpn/start.sh"]
healthcheck.sh
if [[ ! -z $DDNS ]];
then
    checkIP=$(getent hosts $DDNS | awk '{ print $1 }')
else
    checkIP=$IP
fi
if [[ ! -z $checkIP ]];
then
    ipinfo=$(curl -s -H "Authorization: Bearer $BEARER" 'ipinfo.io' | jq -r '.')
    currentIP=$(jq -r '.ip' <<< "$ipinfo")
    hostname=$(jq -r '.hostname' <<< "$ipinfo")
    if [[ $checkIP = $currentIP ]];
    then
        if [[ ! -z $HEALTHCHECK ]];
        then
            curl https://hc-ping.com/$HEALTHCHECK/fail
            expressvpn disconnect
            expressvpn connect $SERVER
            exit 1
        else
            expressvpn disconnect
            expressvpn connect $SERVER
            exit 1
        fi
    else
        if [[ ! -z $HOSTNAME_PART && ! -z $hostname && $hostname != *"$HOSTNAME_PART"* ]];
        then
            #THIS IS WHERE THE CONTAINER SHOULD STOP <------------
            kill 1
        fi
        if [[ ! -z $HEALTHCHECK ]];
        then
            curl https://hc-ping.com/$HEALTHCHECK
            exit 0
        else
            exit 0
        fi
    fi
else
    exit 0
fi
start.sh
#!/usr/bin/bash
cp /etc/resolv.conf /etc/resolv.conf.bak
umount /etc/resolv.conf
cp /etc/resolv.conf.bak /etc/resolv.conf
rm /etc/resolv.conf.bak
service expressvpn restart
expect /expressvpn/activate.sh
expressvpn connect $SERVER
touch /var/log/temp.log
tail -f /var/log/temp.log
exec "$#"

Try changing from kill to exit 1:
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 \
CMD bash /expressvpn/healthcheck.sh || exit 1
Reference from docker docs
Edit 1:
After some testing: if you want to kill the container on unhealthy status, you need to do it in the health check script /expressvpn/healthcheck.sh or by a script on the host (see the sketch after the examples below).
In the following example the container status stays healthy:
HEALTHCHECK --start-period=30s --timeout=5s --interval=10s --retries=2 CMD bash -c 'echo "0" || kill 1' || exit 1
In the following example the container stops: the command ech does not exist, so kill 1 is executed and the container gets killed:
HEALTHCHECK --start-period=30s --timeout=5s --interval=10s --retries=2 CMD bash -c 'ech "0" || kill 1' || exit 1
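For the "script on the host" route, a minimal sketch (not from the original answer; the container name and the polling interval are assumptions):
#!/bin/bash
# Hypothetical host-side watcher: poll the health status Docker reports
# and stop the container once it turns unhealthy.
CONTAINER=expressvpn
while sleep 30; do
    status=$(docker inspect --format '{{.State.Health.Status}}' "$CONTAINER" 2>/dev/null)
    if [ "$status" = "unhealthy" ]; then
        docker stop "$CONTAINER"
        break
    fi
done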
Edit 2:
Well, after a bit of digging I understood something that I had seen in some Dockerfiles:
RUN apt update -y && apt install tini -y
ENTRYPOINT ["tini", "--"]
CMD ["./echo.sh"]
From what I got, Docker keeps the PID 1 (entrypoint) process from getting killed by SIGTERM, so for this you have tini, a small init utility that handles this (still not sure exactly what its purpose is, I will keep it for the next time I'm in the mood...).
Anyway, after adding tini the container does get killed by kill 1.
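Putting the pieces together, here is a minimal sketch of how the question's Dockerfile could look with tini in front (the expressvpn installation steps are omitted for brevity, and installing tini from the Debian repositories is an assumption):
FROM debian:buster-slim
# tini runs as PID 1 and handles signals, so a kill 1 issued by the
# healthcheck (or by healthcheck.sh itself) actually stops the container.
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
COPY files/ /expressvpn/
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 \
    CMD bash /expressvpn/healthcheck.sh || kill 1
ENTRYPOINT ["tini", "--", "/bin/bash", "/expressvpn/start.sh"]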
Thank you for the question.

Please check the output of your health check. You'll have to ensure that your healthcheck actually fails 2 consecutive times (retries=2):
docker inspect --format "{{json .State.Health }}" <container name> | jq
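If you only want the last few probes and their exit codes, the Log field can be filtered (a small variation on the command above, not part of the original comment):
docker inspect --format "{{json .State.Health.Log}}" <container name> | jq '.[-2:]'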

Related

Issues deploying Self-Hosted Agent on Linux (Ubuntu 18.04) Container

To begin, I followed this documentation in order to deploy a self-hosted agent in a Linux container. I didn't do anything other than create the Dockerfile and start.sh file as it stated (copy and paste). To confirm, I will add the files here:
Dockerfile
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh"]
Start.sh
#!/bin/bash
set -e
if [ -z "$AZP_URL" ]; then
echo 1>&2 "error: missing AZP_URL environment variable"
exit 1
fi
if [ -z "$AZP_TOKEN_FILE" ]; then
if [ -z "$AZP_TOKEN" ]; then
echo 1>&2 "error: missing AZP_TOKEN environment variable"
exit 1
fi
AZP_TOKEN_FILE=/azp/.token
echo -n $AZP_TOKEN > "$AZP_TOKEN_FILE"
fi
unset AZP_TOKEN
if [ -n "$AZP_WORK" ]; then
mkdir -p "$AZP_WORK"
fi
rm -rf /azp/agent
mkdir /azp/agent
cd /azp/agent
export AGENT_ALLOW_RUNASROOT="1"
cleanup() {
if [ -e config.sh ]; then
print_header "Cleanup. Removing Azure Pipelines agent..."
./config.sh remove --unattended \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE")
fi
}
print_header() {
lightcyan='\033[1;36m'
nocolor='\033[0m'
echo -e "${lightcyan}$1${nocolor}"
}
# Let the agent ignore the token env variables
export VSO_AGENT_IGNORE=AZP_TOKEN,AZP_TOKEN_FILE
print_header "1. Determining matching Azure Pipelines agent..."
AZP_AGENT_RESPONSE=$(curl -LsS \
-u user:$(cat "$AZP_TOKEN_FILE") \
-H 'Accept:application/json;api-version=3.0-preview' \
"$AZP_URL/_apis/distributedtask/packages/agent?platform=linux-x64")
if echo "$AZP_AGENT_RESPONSE" | jq . >/dev/null 2>&1; then
AZP_AGENTPACKAGE_URL=$(echo "$AZP_AGENT_RESPONSE" \
| jq -r '.value | map([.version.major,.version.minor,.version.patch,.downloadUrl]) | sort | .[length-1] | .[3]')
fi
if [ -z "$AZP_AGENTPACKAGE_URL" -o "$AZP_AGENTPACKAGE_URL" == "null" ]; then
echo 1>&2 "error: could not determine a matching Azure Pipelines agent - check that account '$AZP_URL' is correct and the token is valid for that account"
exit 1
fi
print_header "2. Downloading and installing Azure Pipelines agent..."
curl -LsS $AZP_AGENTPACKAGE_URL | tar -xz & wait $!
source ./env.sh
print_header "3. Configuring Azure Pipelines agent..."
./config.sh --unattended \
--agent "${AZP_AGENT_NAME:-$(hostname)}" \
--url "$AZP_URL" \
--auth PAT \
--token $(cat "$AZP_TOKEN_FILE") \
--pool "${AZP_POOL:-Default}" \
--work "${AZP_WORK:-_work}" \
--replace \
--acceptTeeEula & wait $!
print_header "4. Running Azure Pipelines agent..."
trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM
# To be aware of TERM and INT signals call run.sh
# Running it with the --once flag at the end will shut down the agent after the build is executed
./run.sh & wait $!
Despite copying and pasting these from the documentation, I receive an error when it reaches the 3rd step (Configuring Azure Pipelines agent) in the start.sh script.
Error message: qemu-x86_64: Could not open '/lib64/ld-linux-x86-64.so.2': No such file or directory
If it helps, I am running Docker on macOS, but as you can see the container is Ubuntu.
Thank you
According to the documentation, both Windows and Linux are supported as container hosts, but macOS is not supported as a container host. So you can try creating the container on a Windows Docker host and try again.

Dockerfile builds successfully but the container cannot run

I wrote a Dockerfile for Varnish Plus. docker build executes successfully, but docker run says /bin/sh: 1: ./init: not found. What am I missing in the Dockerfile?
I'm trying to build a custom Docker image for a Kubernetes Varnish deployment.
I tried other parameters like CMD ["sh", "init"], and then I got ./start-agent failed. If I put sh everywhere, I get a not found error on /etc/default/varnish. I also got an error from init saying expecting "then". I installed it the same way on bare metal, but couldn't run it in a Docker container.
FROM ubuntu:16.04
ARG varnishFile
ARG tokenName
ARG Project
ARG varnishPlusCredential="xxx"
RUN echo " $tokenName, $Project, $varnishFile "
RUN apt-get update
RUN apt-get -y install \
git \
python \
apt-transport-https \
wget \
curl \
gnupg2 \
libmicrohttpd10 \
libssl1.0.0 \
vim \
telnet
RUN wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | tee /etc/apt/sources.list.d/elastic-5.x.list
RUN curl https://${varnishPlusCredential}@repo.varnish-software.com/GPG-key.txt | apt-key add -
RUN echo "deb http://${varnishPlusCredential}@repo.varnish-software.com/ubuntu trusty varnish-4.1-plus" >> /etc/apt/sources.list
RUN echo "deb http://${varnishPlusCredential}@repo.varnish-software.com/ubuntu trusty non-free" >> /etc/apt/sources.list
RUN echo " #apt-get update "
RUN apt-get update -y
RUN apt-get -y install \
varnish-plus \
varnish-plus-ha \
varnish-agent \
filebeat \
varnishtuner
RUN vha-generate-vcl --token ${tokenName} > /etc/varnish/vha.vcl
COPY /${Project}/varnishConfiguration/nodes.conf /etc/varnish/nodes.conf
COPY /${Project}/varnishConfiguration/default.vcl /etc/varnish/vcl/default.vcl
COPY /${Project}/varnishConfiguration/varnish /etc/default/varnish
COPY /${Project}/varnishConfiguration/varnishncsa /etc/default/varnishncsa
COPY /"${Project}"/varnishConfiguration/varnishncsa-init.d/varnishncsa /etc/init.d
#Copy varnish configuration varnish files for varnish nodes
COPY /${Project}/${varnishFile}/varnish-agent /etc/default/varnish-agent
COPY /${Project}/${varnishFile}/vha-agent /etc/default/vha-agent
COPY /${Project}/filebeat/filebeat.yml /etc/filebeat/filebeat.yml
COPY /scripts/start-varnish-agent.sh /start-varnish-agent
COPY /scripts/start-varnish.sh /start-varnish
COPY /scripts/start-vha-agent.sh /start-vha-agent
COPY /scripts/start-varnishncsa.sh /start-varnishncsa
COPY /scripts/start-filebeat.sh /start-filebeat
COPY /scripts/init.sh /init
#Executive permisson to startup scripts
RUN chmod +x /init \
/start-varnish-agent \
/start-varnish \
/start-vha-agent \
/start-varnishncsa \
/etc/init.d/varnishncsa \
/start-filebeat
EXPOSE 80
EXPOSE 6082
EXPOSE 6085
CMD ./init
My init.sh file is located in the scripts folder, in the same location as the Dockerfile.
#!/bin/bash
# Start the varnish service
./start-varnish
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnish service: $status"
exit $status
fi
# Start the vha-agent
./start-vha-agent
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start vha-agent: $status"
exit $status
fi
# Start the varnish-agent
./start-varnish-agent
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnish-agent: $status"
exit $status
fi
# Start the varnishncsa
./start-varnishncsa
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start varnishncsa: $status"
exit $status
fi
# Start the filebeat
./start-filebeat
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start filebeat: $status"
exit $status
fi
while sleep 60; do
    ps aux | grep varnishd | grep -v "grep"
    PROCESS_1_STATUS=$?
    ps aux | grep vha-agent | grep -v "grep"
    PROCESS_2_STATUS=$?
    ps aux | grep varnish-agent | grep -v "grep"
    PROCESS_3_STATUS=$?
    ps aux | grep varnishncsa | grep -v "grep"
    PROCESS_4_STATUS=$?
    # If the greps above find anything, they exit with 0 status
    # If any of them is not 0, then something is wrong
    if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 -o $PROCESS_3_STATUS -ne 0 -o $PROCESS_4_STATUS -ne 0 ]; then
        echo "One of the processes has already exited."
        exit 1
    fi
done
Varnish Software has an official Varnish Cache Plus Docker image. If you have a subscription, which I suspect you have, you can get help from support via support@varnish-software.com.
Support can have a look at your Dockerfile and advise you, but they can also explain how you can use the official image to get the job done, without having to maintain the Dockerfile yourself.

Is it possible to create Dockerfile from the container/image? [duplicate]

Is it possible to generate a Dockerfile from an image? I want to know for two reasons:
I can download images from the repository but would like to see the recipe that generated them.
I like the idea of saving snapshots, but once I am done it would be nice to have a structured format to review what was done.
How to generate or reverse a Dockerfile from an image?
You can. Mostly.
Notes: It does not generate a Dockerfile that you can use directly with docker build; the output is just for your reference.
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm alpine/dfimage"
dfimage -sV=1.36 nginx:latest
It will pull the target Docker image automatically and export the Dockerfile. The parameter -sV=1.36 is not always required.
Reference: https://hub.docker.com/r/alpine/dfimage
Now hub.docker.com shows the image layers with detailed commands directly, if you choose a particular tag.
Bonus
If you want to know which files are changed in each layer
alias dive="docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock wagoodman/dive"
dive nginx:latest
On the left you see each layer's command; on the right (jump there with Tab), the yellow lines mark the folders where files were changed in that layer.
(Use SPACE to collapse a directory.)
Old answer
Below is the old answer; it doesn't work any more.
$ docker pull centurylink/dockerfile-from-image
$ alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm centurylink/dockerfile-from-image"
$ dfimage --help
Usage: dockerfile-from-image.rb [options] <image_id>
-f, --full-tree Generate Dockerfile for all parent layers
-h, --help Show this message
To understand how a docker image was built, use the
docker history --no-trunc command.
You can build a docker file from an image, but it will not contain everything you would want to fully understand how the image was generated. Reasonably what you can extract is the MAINTAINER, ENV, EXPOSE, VOLUME, WORKDIR, ENTRYPOINT, CMD, and ONBUILD parts of the dockerfile.
The following script should work for you:
#!/bin/bash
docker history --no-trunc "$1" | \
sed -n -e 's,.*/bin/sh -c #(nop) \(MAINTAINER .*[^ ]\) *0 B,\1,p' | \
head -1
docker inspect --format='{{range $e := .Config.Env}}
ENV {{$e}}
{{end}}{{range $e,$v := .Config.ExposedPorts}}
EXPOSE {{$e}}
{{end}}{{range $e,$v := .Config.Volumes}}
VOLUME {{$e}}
{{end}}{{with .Config.User}}USER {{.}}{{end}}
{{with .Config.WorkingDir}}WORKDIR {{.}}{{end}}
{{with .Config.Entrypoint}}ENTRYPOINT {{json .}}{{end}}
{{with .Config.Cmd}}CMD {{json .}}{{end}}
{{with .Config.OnBuild}}ONBUILD {{json .}}{{end}}' "$1"
I use this as part of a script to rebuild running containers as images:
https://github.com/docbill/docker-scripts/blob/master/docker-rebase
The Dockerfile is mainly useful if you want to be able to repackage an image.
The thing to keep in mind is that a docker image can actually just be the tar backup of a real or virtual machine. I have made several docker images this way. Even the build history shows me importing a huge tar file as the first step in creating the image...
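For reference, that tar-import route looks roughly like this (an illustrative sketch, not taken from the answer; the tarball and image names are hypothetical):
# Import a filesystem tarball as a single-layer image.
docker import rootfs.tar.gz myimage:imported
# The build history then shows only that single import step.
docker history myimage:imported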
I somehow completely missed the actual command in the accepted answer, so here it is again, a bit more visible in its own paragraph, to see how many people are like me:
$ docker history --no-trunc <IMAGE_ID>
A bash solution:
docker history --no-trunc $argv | tac | tr -s ' ' | cut -d " " -f 5- | sed 's,^/bin/sh -c #(nop) ,,g' | sed 's,^/bin/sh -c,RUN,g' | sed 's, && ,\n & ,g' | sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' | head -n -1
Step by step explanations:
tac : reverse the file
tr -s ' ' trim multiple whitespaces into 1
cut -d " " -f 5- remove the first fields (until X months/years ago)
sed 's,^/bin/sh -c #(nop) ,,g' remove /bin/sh calls for ENV,LABEL...
sed 's,^/bin/sh -c,RUN,g' remove /bin/sh calls for RUN
sed 's, && ,\n & ,g' pretty print multi command lines following Docker best practices
sed 's,\s*[0-9]*[\.]*[0-9]*\s*[kMG]*B\s*$,,g' remove layer size information
head -n -1 remove last line ("SIZE COMMENT" in this case)
Example:
~  dih ubuntu:18.04
ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in /
RUN set -xe
&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d
&& echo 'exit 101' >> /usr/sbin/policy-rc.d
&& chmod +x /usr/sbin/policy-rc.d
&& dpkg-divert --local --rename --add /sbin/initctl
&& cp -a /usr/sbin/policy-rc.d /sbin/initctl
&& sed -i 's/^exit.*/exit 0/' /sbin/initctl
&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup
&& echo 'DPkg::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' > /etc/apt/apt.conf.d/docker-clean
&& echo 'APT::Update::Post-Invoke { "rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true"; };' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Dir::Cache::pkgcache ""; Dir::Cache::srcpkgcache "";' >> /etc/apt/apt.conf.d/docker-clean
&& echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/docker-no-languages
&& echo 'Acquire::GzipIndexes "true"; Acquire::CompressionTypes::Order:: "gz";' > /etc/apt/apt.conf.d/docker-gzip-indexes
&& echo 'Apt::AutoRemove::SuggestsImportant "false";' > /etc/apt/apt.conf.d/docker-autoremove-suggests
RUN rm -rf /var/lib/apt/lists/*
RUN sed -i 's/^#\s*\(deb.*universe\)$/\1/g' /etc/apt/sources.list
RUN mkdir -p /run/systemd
&& echo 'docker' > /run/systemd/container
CMD ["/bin/bash"]
Update Dec 2018 to BMW's answer
chenzj/dfimage - as described on hub.docker.com - regenerates the Dockerfile from other images. You can use it as follows:
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage IMAGE_ID > Dockerfile
This is derived from @fallino's answer, with some adjustments and simplifications by using the output format option for docker history. Since macOS and GNU/Linux have different command-line utilities, a different version is necessary for Mac. If you only need one or the other, you can just use those lines.
#!/bin/bash
case "$OSTYPE" in
linux*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tac | # reverse the file
sed 's,^\(|3.*\)\?/bin/\(ba\)\?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed 's, *&& *, \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
darwin*)
docker history --no-trunc --format "{{.CreatedBy}}" $1 | # extract information from layers
tail -r | # reverse the file
sed -E 's,^(\|3.*)?/bin/(ba)?sh -c,RUN,' | # change /bin/(ba)?sh calls to RUN
sed 's,^RUN #(nop) *,,' | # remove RUN #(nop) calls for ENV,LABEL...
sed $'s, *&& *, \\\ \\\n \&\& ,g' # pretty print multi command lines following Docker best practices
;;
*)
echo "unknown OSTYPE: $OSTYPE"
;;
esac
It is not possible at this point (unless the author of the image explicitly included the Dockerfile).
However, it is definitely something useful! There are two things that will help to obtain this feature.
Trusted builds (detailed in this docker-dev discussion)
More detailed metadata in the successive images produced by the build process. In the long run, the metadata should indicate which build command produced the image, which means that it will be possible to reconstruct the Dockerfile from a sequence of images.
If you are interested in an image that is in the Docker Hub registry and want to take a look at its Dockerfile:
Example:
If you want to see the Dockerfile of the image "jupyter/datascience-notebook", append the word "Dockerfile" to the URL in your browser's address bar, as shown below.
https://hub.docker.com/r/jupyter/datascience-notebook/
https://hub.docker.com/r/jupyter/datascience-notebook/Dockerfile
Note:
Not all images have a Dockerfile, for example: https://hub.docker.com/r/redislabs/redisinsight/Dockerfile
Sometimes this way is much faster than searching for the Dockerfile on GitHub.
docker pull chenzj/dfimage
alias dfimage="docker run -v /var/run/docker.sock:/var/run/docker.sock --rm chenzj/dfimage"
dfimage image_id
Below is the output of the dfimage command:
$ dfimage 0f1947a021ce
FROM node:8
WORKDIR /usr/src/app
COPY file:e76d2e84545dedbe901b7b7b0c8d2c9733baa07cc821054efec48f623e29218c in ./
RUN /bin/sh -c npm install
COPY dir:a89a4894689a38cbf3895fdc0870878272bb9e09268149a87a6974a274b2184a in .
EXPOSE 8080
CMD ["npm" "start"]
It is possible in just two steps: first pull the image, then run the docker history command.
docker pull kalilinux/kali-rolling
docker history --format "{{.CreatedBy}}" kalilinux/kali-rolling --no-trunc
What is image2df
image2df is a tool for generating a Dockerfile from an image.
This tool is very useful when you only have the Docker image and need to generate a Dockerfile from it.
How does it work
It reverse-parses the history information of an image.
How to use this image
# Command alias
echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
. ~/.bashrc
# Execute command
image2df <IMAGE>
See help
docker run --rm cucker/image2df --help
For example
$ echo "alias image2df='docker run -v /var/run/docker.sock:/var/run/docker.sock --rm cucker/image2df'" >> ~/.bashrc
$ . ~/.bashrc
$ docker pull mysql
$ image2df mysql
========== Dockerfile ==========
FROM mysql:latest
RUN groupadd -r mysql && useradd -r -g mysql mysql
RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/*
ENV GOSU_VERSION=1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
RUN apt-get update && apt-get install -y --no-install-recommends \
pwgen \
openssl \
perl \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN set -ex; \
key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
gpg --batch --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME"; \
apt-key list > /dev/null
ENV MYSQL_MAJOR=8.0
ENV MYSQL_VERSION=8.0.24-1debian10
RUN echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-8.0' > /etc/apt/sources.list.d/mysql.list
RUN { \
echo mysql-community-server mysql-community-server/data-dir select ''; \
echo mysql-community-server mysql-community-server/root-pass password ''; \
echo mysql-community-server mysql-community-server/re-root-pass password ''; \
echo mysql-community-server mysql-community-server/remove-test-db select false; \
} | debconf-set-selections \
&& apt-get update \
&& apt-get install -y \
mysql-community-client="${MYSQL_VERSION}" \
mysql-community-server-core="${MYSQL_VERSION}" \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld \
&& chown -R mysql:mysql /var/lib/mysql /var/run/mysqld \
&& chmod 1777 /var/run/mysqld /var/lib/mysql
VOLUME [/var/lib/mysql]
COPY dir:2e040acc386ebd23b8571951a51e6cb93647df091bc26159b8c757ef82b3fcda in /etc/mysql/
COPY file:345a22fe55d3e6783a17075612415413487e7dba27fbf1000a67c7870364b739 in /usr/local/bin/
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306 33060
CMD ["mysqld"]

Several Tor services in one docker container

I have a Dockerfile that runs Tor in Docker:
FROM alpine:latest
RUN apk update && apk upgrade && \
apk add tor curl && \
rm /var/cache/apk/* && \
cp /etc/tor/torrc.sample /etc/tor/torrc && \
echo "SocksPort 0.0.0.0:9050" > /etc/tor/torrc
EXPOSE 9050
USER tor
CMD /usr/bin/tor -f /etc/tor/torrc
It works. I want to run several Tor instances in one Dockerfile and open different ports (9051, 9052, etc.). I could create a docker-compose.yml that creates one container per port, but that isn't a good solution in my opinion.
Does anybody know how to run several Tor instances and publish their ports from Docker?
This Dockerfile worked for me:
FROM alpine:latest
RUN apk update && apk upgrade && \
apk add tor curl bash && \
rm /var/cache/apk/* && \
cp /etc/tor/torrc.sample /etc/tor/torrc
EXPOSE 9050-9060
ADD start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
RUN echo | sed -i 's/\r$//' /usr/local/bin/start.sh
CMD /usr/local/bin/start.sh
And script start.sh:
#!/bin/bash
#making script to stop on 1st error
set -e
# Original script from
# http://blog.databigbang.com/distributed-scraping-with-multiple-tor-circuits/
# if defined, the TOR_INSTANCES env variable sets the number of tor instances (default 10)
TOR_INSTANCES=${TOR_INSTANCES:=10}
# if defined, the TOR_OPTIONS env variable can be used to add options to tor
TOR_OPTIONS=${TOR_OPTIONS:=''}
base_socks_port=9050
base_control_port=11000
dir_data="/tmp/multitor.$$"
# Create data directory if it doesn't exist
if [ ! -d $dir_data ]; then
mkdir $dir_data
fi
if [ ! $TOR_INSTANCES ] || [ $TOR_INSTANCES -lt 1 ]; then
echo "Please supply an instance count"
exit 1
fi
for i in $(seq $TOR_INSTANCES)
do
j=$((i+1))
socks_port=$((base_socks_port+i))
control_port=$((base_control_port+i))
if [ ! -d "$dir_data/tor$i" ]; then
echo "Creating directory $dir_data/tor$i"
mkdir "$dir_data/tor$i" && chmod -R 700 "$dir_data/tor$i"
fi
# Take into account that authentication for the control port is disabled. Must be used in secure and controlled environments
echo "Running: tor --RunAsDaemon 1 --CookieAuthentication 0 --HashedControlPassword \"\" --ControlPort 0.0.0.0:$control_port --PidFile tor$i.pid --SocksPort 0.0.0.0:$socks_port --DataDirectory $dir_data/tor$i -f /etc/tor/torrc"
tor --RunAsDaemon 1 --CookieAuthentication 0 --HashedControlPassword "" --PidFile $dir_data/tor$i/tor$i.pid --SocksPort 0.0.0.0:$socks_port --DataDirectory $dir_data/tor$i
done
# So that the container doesn't shut down, sleep this thread
sleep infinity
Build and start:
docker build -t torone ./
docker run -d -e "TOR_INSTANCES=10" -p 9050-9060:9050-9060 --rm --name torone torone
TOR_INSTANCES - sets how many Tor processes to start.
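To check that the published SOCKS ports actually answer, something like this should work (not part of the original answer; the Tor check endpoint is an assumption):
# Each port should respond and typically report a different exit IP.
curl --socks5-hostname localhost:9051 https://check.torproject.org/api/ip
curl --socks5-hostname localhost:9052 https://check.torproject.org/api/ip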

How to use cron inside a docker container

I tried to add a crontab inside the docker image "jenkinsci/blueocean", but after that Jenkins does not start. Where could the problem be?
Many thanks in advance for any help.
<Dockerfile>
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
CMD ["supercronic", "/etc/crontab"]
<crontab>
# Run every minute
*/1 * * * * echo "hello world"
commands:
$docker build -t jenkins_test .
$docker run -it -p 8080:8080 --name=container_jenkins jenkins_test
If you use docker inspect jenkinsci/blueocean:1.17.0 you will see that its entrypoint is:
"Entrypoint": [
"/sbin/tini",
"--",
"/usr/local/bin/jenkins.sh"
],
So, when the container starts it will first execute the following script.
/usr/local/bin/jenkins.sh:
#! /bin/bash -e
: "${JENKINS_WAR:="/usr/share/jenkins/jenkins.war"}"
: "${JENKINS_HOME:="/var/jenkins_home"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find /usr/share/jenkins/ref/ \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
# if `docker run` first argument start with `--` the user is passing jenkins launcher arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
# read JAVA_OPTS and JENKINS_OPTS into arrays to avoid need for eval (and associated vulnerabilities)
java_opts_array=()
while IFS= read -r -d '' item; do
java_opts_array+=( "$item" )
done < <([[ $JAVA_OPTS ]] && xargs printf '%s\0' <<<"$JAVA_OPTS")
readonly agent_port_property='jenkins.model.Jenkins.slaveAgentPort'
if [ -n "${JENKINS_SLAVE_AGENT_PORT:-}" ] && [[ "${JAVA_OPTS:-}" != *"${agent_port_property}"* ]]; then
java_opts_array+=( "-D${agent_port_property}=${JENKINS_SLAVE_AGENT_PORT}" )
fi
if [[ "$DEBUG" ]] ; then
java_opts_array+=( \
'-Xdebug' \
'-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=y' \
)
fi
jenkins_opts_array=( )
while IFS= read -r -d '' item; do
jenkins_opts_array+=( "$item" )
done < <([[ $JENKINS_OPTS ]] && xargs printf '%s\0' <<<"$JENKINS_OPTS")
exec java -Duser.home="$JENKINS_HOME" "${java_opts_array[@]}" -jar ${JENKINS_WAR} "${jenkins_opts_array[@]}" "$@"
fi
# As argument is not jenkins, assume user want to run his own process, for example a `bash` shell to explore this image
exec "$#"
From the above script you can see that if you add CMD ["supercronic", "/etc/crontab"] to your own Dockerfile, then when your container starts it is equivalent to executing:
/usr/local/bin/jenkins.sh "supercronic" "/etc/crontab"
As if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then does not match, it directly executes the exec "$@" at the last line, which means the Jenkins start code never executes.
To fix it, you have to use your own docker-entrypoint.sh to override the default entrypoint:
docker-entrypoint.sh:
#!/bin/bash
supercronic /etc/crontab &
/usr/local/bin/jenkins.sh
Dockerfile:
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/sbin/tini", "--", "/docker-entrypoint.sh"]

Resources