How to use cron inside a Docker container

I tried to add a crontab inside the Docker image "jenkinsci/blueocean", but after that, Jenkins does not start. Where could the problem be?
Many thanks in advance for any help.
Dockerfile:
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
CMD ["supercronic", "/etc/crontab"]
crontab:
# Run every minute
*/1 * * * * echo "hello world"
Commands:
$ docker build -t jenkins_test .
$ docker run -it -p 8080:8080 --name=container_jenkins jenkins_test

If you run docker inspect jenkinsci/blueocean:1.17.0, you will see its entrypoint is:
"Entrypoint": [
"/sbin/tini",
"--",
"/usr/local/bin/jenkins.sh"
],
So when the container starts, it will first execute the following script.
/usr/local/bin/jenkins.sh:
#! /bin/bash -e
: "${JENKINS_WAR:="/usr/share/jenkins/jenkins.war"}"
: "${JENKINS_HOME:="/var/jenkins_home"}"
touch "${COPY_REFERENCE_FILE_LOG}" || { echo "Can not write to ${COPY_REFERENCE_FILE_LOG}. Wrong volume permissions?"; exit 1; }
echo "--- Copying files at $(date)" >> "$COPY_REFERENCE_FILE_LOG"
find /usr/share/jenkins/ref/ \( -type f -o -type l \) -exec bash -c '. /usr/local/bin/jenkins-support; for arg; do copy_reference_file "$arg"; done' _ {} +
# if `docker run` first argument start with `--` the user is passing jenkins launcher arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
# read JAVA_OPTS and JENKINS_OPTS into arrays to avoid need for eval (and associated vulnerabilities)
java_opts_array=()
while IFS= read -r -d '' item; do
java_opts_array+=( "$item" )
done < <([[ $JAVA_OPTS ]] && xargs printf '%s\0' <<<"$JAVA_OPTS")
readonly agent_port_property='jenkins.model.Jenkins.slaveAgentPort'
if [ -n "${JENKINS_SLAVE_AGENT_PORT:-}" ] && [[ "${JAVA_OPTS:-}" != *"${agent_port_property}"* ]]; then
java_opts_array+=( "-D${agent_port_property}=${JENKINS_SLAVE_AGENT_PORT}" )
fi
if [[ "$DEBUG" ]] ; then
java_opts_array+=( \
'-Xdebug' \
'-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=y' \
)
fi
jenkins_opts_array=( )
while IFS= read -r -d '' item; do
jenkins_opts_array+=( "$item" )
done < <([[ $JENKINS_OPTS ]] && xargs printf '%s\0' <<<"$JENKINS_OPTS")
exec java -Duser.home="$JENKINS_HOME" "${java_opts_array[@]}" -jar ${JENKINS_WAR} "${jenkins_opts_array[@]}" "$@"
fi
# As argument is not jenkins, assume user want to run his own process, for example a `bash` shell to explore this image
exec "$@"
From the above script you can see that if you add CMD ["supercronic", "/etc/crontab"] to your own Dockerfile, then when your container starts it is equivalent to executing:
/usr/local/bin/jenkins.sh "supercronic" "/etc/crontab"
As if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then does not match, it will directly execute the exec "$@" at the last line, which means the Jenkins startup code never executes.
To fix it, you have to use your own docker-entrypoint.sh to override the default entrypoint:
docker-entrypoint.sh:
#!/bin/bash
supercronic /etc/crontab &
/usr/local/bin/jenkins.sh
Dockerfile:
FROM jenkinsci/blueocean:1.17.0
USER root
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
&& ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic
ADD crontab /etc/crontab
COPY docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/sbin/tini", "--", "/docker-entrypoint.sh"]
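One optional refinement, sketched here rather than taken from the original answer: exec the Jenkins launcher and forward "$@" from the entrypoint, so tini's signal handling reaches the Java process and Jenkins launcher flags passed to docker run still work:
#!/bin/bash
# Start supercronic in the background; its output goes to the container log.
supercronic /etc/crontab &
# Replace this shell with the Jenkins launcher so tini's signals reach it.
# "$@" forwards any arguments given to `docker run` (e.g. --httpPort=9090).
exec /usr/local/bin/jenkins.sh "$@"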

Related

Setting conditional variables in a Dockerfile

I'm trying to create a multi-architecture docker image via buildkit, using the predefined TARGETARCH arg variable.
What I want to do is - I think - something like bash variable indirection, but I understand that's not supported and I'm struggling to come up with an alternative.
Here's what I've got:
FROM alpine:latest
# Buildkit should populate this on build with e.g. "arm64" or "amd64"
ARG TARGETARCH
# Set some temp variables via ARG... :/
ARG DOWNLOAD_amd64="x86_64"
ARG DOWNLOAD_arm64="aarch64"
ARG DOWNLOAD_URL="https://download.url/path/to/toolkit-${DOWNLOAD_amd64}"
# DOWNLOAD_URL must also be set in container as ENV var.
ENV DOWNLOAD_URL $DOWNLOAD_URL
RUN echo "Installing Toolkit" && \
curl -sSL ${DOWNLOAD_URL} -o /tmp/toolkit-${DOWNLOAD_amd64}
... which is a bit pseudo-code but hopefully illustrates what I'm trying to do: I want the value of either $DOWNLOAD_amd64 or $DOWNLOAD_arm64 dropped into $DOWNLOAD_URL, depending on what $TARGETARCH is set to.
This is probably a long-solved issue but either I'm googling the wrong stuff or just not getting it.
OK, my first version was not complete. Here is a full working solution:
Dockerfile:
FROM ubuntu:18.04
ARG TARGETARCH
ARG DOWNLOAD_amd64="x86_64"
ARG DOWNLOAD_arm64="aarch64"
WORKDIR /tmp
ARG DOWNLOAD_URL_BASE="https://download.url/path/to/toolkit-"
RUN touch .env; \
if [ "$TARGETARCH" = "arm64" ]; then \
export DOWNLOAD_URL="${DOWNLOAD_URL_BASE}${DOWNLOAD_arm64}" ; \
elif [ "$TARGETARCH" = "amd64" ]; then \
export DOWNLOAD_URL="${DOWNLOAD_URL_BASE}${DOWNLOAD_amd64}" ; \
else \
export DOWNLOAD_URL="" ; \
fi; \
echo DOWNLOAD_URL=$DOWNLOAD_URL > .env; \
curl ... # ENVs are only valid within this RUN!
COPY ./entrypoint.sh ./entrypoint.sh
ENTRYPOINT ["/bin/bash", "entrypoint.sh"]
entrypoint.sh
#!/bin/sh
ENV_FILE=/tmp/.env
if [ -f "$ENV_FILE" ]; then
echo "export " $(grep -v '^#' $ENV_FILE | xargs -d '\n') >> /etc/bash.bashrc
rm $ENV_FILE
fi
trap : TERM INT; sleep infinity & wait
Test:
# bash
root@da1dd15acb64:/tmp# echo $DOWNLOAD_URL
https://download.url/path/to/toolkit-aarch64
Now for Alpine:
Dockerfile:
FROM alpine:3.13
RUN apk add --no-cache bash
entrypoint.sh:
ENV_FILE=/tmp/.env
if [ -f "$ENV_FILE" ]; then
echo "export " $(grep -v '^#' $ENV_FILE) >> /etc/profile.d/environ.sh
rm $ENV_FILE
fi
Alpine's xargs does not accept -d. But that is not a problem here, since the URL does not contain any blanks.
Testing:
Alpine only sources /etc/profile.d/ for login shells, so:
docker exec -it containername sh --login
echo $DOWNLOAD_URL
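As a side note: if the URL is only needed at build time, a shell case in a single RUN avoids the .env machinery entirely. A minimal sketch, using the placeholder URL from the question:
FROM alpine:latest
ARG TARGETARCH
# Map Docker's arch name (amd64/arm64) to the vendor's (x86_64/aarch64), then download.
RUN case "$TARGETARCH" in \
  amd64) ARCH=x86_64 ;; \
  arm64) ARCH=aarch64 ;; \
  *) echo "unsupported arch: $TARGETARCH" && exit 1 ;; \
  esac \
  && wget -q "https://download.url/path/to/toolkit-${ARCH}" -O "/tmp/toolkit-${ARCH}"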
I had a similar problem and landed here for ideas to solve it. In the end, I used uname -p:
curl -sSL https://download.url/path/to/toolkit-$(uname -p) -o /tmp/toolkit-${TARGETARCH}

Docker container healthcheck stop unhealthy container

I have a docker container that has a healthcheck running every 1 min. I read that appending "|| kill 1" to the healthcheck in the Dockerfile can stop the container after the healthcheck fails, but it does not seem to be working for me, and I cannot find an example that works.
Does anybody know how I can stop the container after marked as unhealthy? I currently have this in my dockerfile:
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 CMD bash /expressvpn/healthcheck.sh || kill 1
EDIT 1
Dockerfile:
FROM debian:buster-slim
ENV CODE="code"
ENV SERVER="smart"
ARG VERSION="expressvpn_2.6.0.32-1_armhf.deb"
COPY files/ /expressvpn/
RUN apt-get update && apt-get install -y --no-install-recommends \
expect curl ca-certificates iproute2 wget jq \
&& wget -q https://download.expressvpn.xyz/clients/linux/${VERSION} -O /expressvpn/${VERSION} \
&& dpkg -i /expressvpn/${VERSION} \
&& rm -rf /expressvpn/*.deb \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get purge --autoremove -y wget \
&& rm -rf /var/log/*.log
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 CMD bash /expressvpn/healthcheck.sh || exit 1
ENTRYPOINT ["/bin/bash", "/expressvpn/start.sh"]
healthcheck.sh:
if [[ ! -z $DDNS ]];
then
checkIP=$(getent hosts $DDNS | awk '{ print $1 }')
else
checkIP=$IP
fi
if [[ ! -z $checkIP ]];
then
ipinfo=$(curl -s -H "Authorization: Bearer $BEARER" 'ipinfo.io' | jq -r '.')
currentIP=$(jq -r '.ip' <<< "$ipinfo")
hostname=$(jq -r '.hostname' <<< "$ipinfo")
if [[ $checkIP = $currentIP ]];
then
if [[ ! -z $HEALTHCHECK ]];
then
curl https://hc-ping.com/$HEALTHCHECK/fail
expressvpn disconnect
expressvpn connect $SERVER
exit 1
else
expressvpn disconnect
expressvpn connect $SERVER
exit 1
fi
else
if [[ ! -z $HOSTNAME_PART && ! -z $hostname && $hostname != *"$HOSTNAME_PART"* ]];
then
#THIS IS WHERE THE CONTAINER SHOULD STOP <------------
kill 1
fi
if [[ ! -z $HEALTHCHECK ]];
then
curl https://hc-ping.com/$HEALTHCHECK
exit 0
else
exit 0
fi
fi
else
exit 0
fi
start.sh:
#!/usr/bin/bash
cp /etc/resolv.conf /etc/resolv.conf.bak
umount /etc/resolv.conf
cp /etc/resolv.conf.bak /etc/resolv.conf
rm /etc/resolv.conf.bak
service expressvpn restart
expect /expressvpn/activate.sh
expressvpn connect $SERVER
touch /var/log/temp.log
tail -f /var/log/temp.log
exec "$@"
Try changing from kill 1 to exit 1:
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 \
CMD bash /expressvpn/healthcheck.sh || exit 1
Reference: the Docker docs on HEALTHCHECK.
Edit 1:
After some testing: if you want to kill the container on unhealthy status, you need to do it in the healthcheck script /expressvpn/healthcheck.sh itself or via a script on the host (a sketch of the host-side variant follows the two examples below).
In the following example the container status stays healthy:
HEALTHCHECK --start-period=30s --timeout=5s --interval=10s --retries=2 CMD bash -c 'echo "0" || kill 1' || exit 1
In the following example the container stops: since the command ech does not exist, kill 1 is executed and the container gets killed:
HEALTHCHECK --start-period=30s --timeout=5s --interval=10s --retries=2 CMD bash -c 'ech "0" || kill 1' || exit 1
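For the host-side variant mentioned above, a minimal watcher could poll the health status and stop the container once it turns unhealthy (a sketch; my_container is an example name):
#!/bin/bash
# Poll the container's health status and stop it once Docker reports unhealthy.
while status=$(docker inspect --format '{{.State.Health.Status}}' my_container 2>/dev/null); do
  if [ "$status" = "unhealthy" ]; then
    docker stop my_container
    break
  fi
  sleep 30
done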
Edit 2:
Well, after a bit of digging I understood something that I had seen in some Dockerfiles:
RUN apt update -y && apt install tini -y
ENTRYPOINT ["tini", "--"]
CMD ["./echo.sh"]
From what I got, PID 1 (the entrypoint process) ignores signals like SIGTERM unless a handler is installed, which is why kill 1 had no effect; tini is a small init utility that runs as PID 1, forwards signals to its child process, and exits when the child does.
Anyway, after adding tini, the container got killed by kill 1.
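Putting Edit 1 and Edit 2 together, a sketch of how the Dockerfile from the question could be wired so that kill 1 from the healthcheck actually terminates the container (tini added on top; the other instructions stay as in the question):
FROM debian:buster-slim
RUN apt-get update && apt-get install -y --no-install-recommends tini
COPY files/ /expressvpn/
# On failure, kill 1 sends SIGTERM to tini (PID 1), which forwards it to start.sh and exits
HEALTHCHECK --start-period=30s --timeout=5s --interval=1m --retries=2 \
  CMD bash /expressvpn/healthcheck.sh || kill 1
ENTRYPOINT ["tini", "--", "/bin/bash", "/expressvpn/start.sh"]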
Thank you for the question. Please check the output of your healthcheck; you'll have to ensure that it actually fails 2 consecutive times (retries=2):
docker inspect --format "{{json .State.Health }}" <container name> | jq

Several tors services in one docker container

I have a Dockerfile that runs Tor in Docker:
FROM alpine:latest
RUN apk update && apk upgrade && \
apk add tor curl && \
rm /var/cache/apk/* && \
cp /etc/tor/torrc.sample /etc/tor/torrc && \
echo "SocksPort 0.0.0.0:9050" > /etc/tor/torrc
EXPOSE 9050
USER tor
CMD /usr/bin/tor -f /etc/tor/torrc
It works. I want to run several Tor instances from one Dockerfile and open different ports (9051, 9052, etc.). I could create a docker-compose.yml that creates one container per port, but that isn't a good solution in my opinion.
Does anybody know how to run several Tor instances and publish their ports from Docker?
This Dockerfile worked for me:
FROM alpine:latest
RUN apk update && apk upgrade && \
apk add tor curl bash && \
rm /var/cache/apk/* && \
cp /etc/tor/torrc.sample /etc/tor/torrc
EXPOSE 9050-9060
ADD start.sh /usr/local/bin/start.sh
RUN chmod +x /usr/local/bin/start.sh
RUN sed -i 's/\r$//' /usr/local/bin/start.sh
CMD /usr/local/bin/start.sh
And script start.sh:
#!/bin/bash
#making script to stop on 1st error
set -e
# Original script from
# http://blog.databigbang.com/distributed-scraping-with-multiple-tor-circuits/
# if defined, the TOR_INSTANCES env variable sets the number of Tor instances (default 10)
TOR_INSTANCES=${TOR_INSTANCES:=10}
# if defined, the TOR_OPTIONS env variable can be used to add options to Tor
TOR_OPTIONS=${TOR_OPTIONS:=''}
base_socks_port=9050
base_control_port=11000
dir_data="/tmp/multitor.$$"
# Create data directory if it doesn't exist
if [ ! -d $dir_data ]; then
mkdir $dir_data
fi
if [ ! $TOR_INSTANCES ] || [ $TOR_INSTANCES -lt 1 ]; then
echo "Please supply an instance count"
exit 1
fi
for i in $(seq $TOR_INSTANCES)
do
j=$((i+1))
socks_port=$((base_socks_port+i))
control_port=$((base_control_port+i))
if [ ! -d "$dir_data/tor$i" ]; then
echo "Creating directory $dir_data/tor$i"
mkdir "$dir_data/tor$i" && chmod -R 700 "$dir_data/tor$i"
fi
# Take into account that authentication for the control port is disabled. Must be used in secure and controlled environments
echo "Running: tor --RunAsDaemon 1 --CookieAuthentication 0 --HashedControlPassword \"\" --ControlPort 0.0.0.0:$control_port --PidFile $dir_data/tor$i/tor$i.pid --SocksPort 0.0.0.0:$socks_port --DataDirectory $dir_data/tor$i -f /etc/tor/torrc"
tor --RunAsDaemon 1 --CookieAuthentication 0 --HashedControlPassword "" --ControlPort 0.0.0.0:$control_port --PidFile $dir_data/tor$i/tor$i.pid --SocksPort 0.0.0.0:$socks_port --DataDirectory $dir_data/tor$i -f /etc/tor/torrc
done
# So that the container doesn't shut down, sleep this thread
sleep infinity
Build and start:
docker build -t torone ./
docker run -d -e "TOR_INSTANCES=10" -p 9050-9060:9050-9060 --rm --name torone torone
TOR_INSTANCES contains how many Tor processes you want to start.
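To verify that an individual instance works, you can route a request through one of the published SOCKS ports from the host (assuming curl is installed there):
# Each instance listens on its own port (9051, 9052, ... as computed in the loop)
curl --socks5-hostname localhost:9051 https://check.torproject.org/api/ip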

Customize Hasura Docker Image

I need to install the awscli and jq libraries in the Hasura Docker image. I tried the yum, apt-get, and apk commands to install the dependencies, but none of them worked.
Docker Image: https://hub.docker.com/r/hasura/graphql-engine/
How do I install these dependencies in the Hasura Docker image? Any help is appreciated.
Dockerfile:
FROM hasura/graphql-engine:latest
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
set -o errexit -o nounset -o pipefail
DB_HOST=${DB_HOST:-postgres}
DB_PORT=${DB_PORT:-5432}
if [ -z "${DB_NAME}" ]; then
echo "Must provide DB_NAME environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_USER}" ]; then
echo "Must provide DB_USER environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_PASSWORD}" ]; then
echo "Must provide DB_PASSWORD environment variable. Exiting...."
exit 1
fi
export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
/bin/graphql-engine serve
DB_PASSWORD is encrypted with KMS, so I want to use the AWS CLI to decrypt the password in entrypoint.sh before setting the environment variable HASURA_GRAPHQL_DATABASE_URL.
I was able to customize Hasura Docker Image with the help of Hasura Team support.
Here is the link to github issue: https://github.com/hasura/graphql-engine/issues/2729
Dockerfile:
FROM hasura/graphql-engine:v1.0.0-beta.4 as base
FROM python:3.7-slim-stretch
RUN apt-get -y update \
&& apt-get install -y --no-install-recommends libpq-dev jq \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /usr/share/doc/ \
&& rm -rf /usr/share/man/ \
&& rm -rf /usr/share/locale/ \
&& pip install awscli
# copy hasura binary from base container
COPY --from=base /bin/graphql-engine /bin/graphql-engine
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
set -e
DB_HOST=${DB_HOST:-postgres}
DB_PORT=${DB_PORT:-5432}
AWS_REGION=${AWS_REGION:-us-east-1}
DB_PASSWORD_ENCYPTED=${DB_PASSWORD_ENCYPTED:-false}
if [ -z "${DB_NAME}" ]; then
echo "Must provide DB_NAME environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_USER}" ]; then
echo "Must provide DB_USER environment variable. Exiting...."
exit 1
fi
if [ -z "${DB_PASSWORD}" ]; then
echo "Must provide DB_PASSWORD environment variable. Exiting...."
exit 1
fi
if [ "${DB_PASSWORD_ENCYPTED}" = "true" ]
then
echo "loading KMS credentials"
decrypted_value_base64=$( \
aws --region ${AWS_REGION} kms decrypt \
--ciphertext-blob fileb://<(echo "${DB_PASSWORD}" | base64 -d) \
--query Plaintext \
--output text
)
decrypted_value=$(echo $decrypted_value_base64 | base64 -d)
export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${decrypted_value}@${DB_HOST}:${DB_PORT}/${DB_NAME}
else
export HASURA_GRAPHQL_DATABASE_URL=postgres://${DB_USER}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/${DB_NAME}
fi
/bin/graphql-engine serve
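For completeness, a sketch of building and running the resulting image (the DB values and the file name are examples; the KMS-encrypted password is passed base64-encoded, as the entrypoint expects):
docker build -t hasura-custom .
docker run -p 8080:8080 \
  -e DB_NAME=mydb -e DB_USER=hasura \
  -e DB_PASSWORD="$(cat encrypted-password.b64)" \
  -e DB_PASSWORD_ENCYPTED=true \
  -e AWS_REGION=us-east-1 \
  hasura-custom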

How to create a Docker container that runs wget on cron and access the downloaded files via FTP on OpenShift?

I would like to create a Docker container that runs wget every minute in a cron job, indefinitely. To retrieve the files downloaded with wget, I would like to access them via FTP. The FTP server used is vsftpd.
Locally the container works well and I can access it via FTP, but when it is deployed on OpenShift, the container doesn't start and neither crond nor vsftpd runs.
What changes must be made for this container to run on OpenShift?
Dockerfile:
FROM alpine:3.4
RUN apk update && apk add vsftpd
RUN adduser -h /home/./files -s /bin/false -D files
RUN echo "local_enable=YES" >> /etc/vsftpd/vsftpd.conf \
&& echo "chroot_local_user=YES" >> /etc/vsftpd/vsftpd.conf \
&& echo "write_enable=YES" >> /etc/vsftpd/vsftpd.conf \
&& echo "local_umask=022" >> /etc/vsftpd/vsftpd.conf \
&& echo "passwd_chroot_enable=yes" >> /etc/vsftpd/vsftpd.conf \
&& echo 'seccomp_sandbox=NO' >> /etc/vsftpd/vsftpd.conf \
&& echo 'pasv_enable=Yes' >> /etc/vsftpd/vsftpd.conf \
&& echo 'pasv_max_port=10100' >> /etc/vsftpd/vsftpd.conf \
&& echo 'pasv_min_port=10090' >> /etc/vsftpd/vsftpd.conf \
&& sed -i "s/anonymous_enable=YES/anonymous_enable=NO/" /etc/vsftpd/vsftpd.conf
RUN apk update && \
apk add wget && \
rm -rf /var/cache/apk/*
COPY myScript /bin/myScript
COPY cron /var/spool/cron/crontabs/root
RUN chmod +x /bin/myScript
RUN echo "files:mypassword" |/usr/sbin/chpasswd
RUN chown files:files /home/files/ -R
EXPOSE 20 21 10090-10100
VOLUME /home/files
CMD /usr/sbin/crond -l 2 -L /var/log/cron.log | /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
cron:
* * * * * /bin/myScript
myScript:
#!/bin/sh
wget -P /home/files -nv http://www.google.com >> /var/log/cron.log 2>&1
It seems that cron always needs root, so it can't start on OpenShift. To solve this problem, I've replaced cron with Supercronic (a sketch follows below).
To download the data, I'm using the OpenShift client tools, specifically the oc rsync command, instead of FTP.
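For reference, a sketch of the cron side of the Dockerfile using Supercronic, reusing the release and checksum from the first question above; myScript's log path would also need to point somewhere writable by a non-root user (e.g. stdout):
FROM alpine:3.4
RUN apk update && apk add curl && rm -rf /var/cache/apk/*
ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
SUPERCRONIC=supercronic-linux-amd64 \
SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85
RUN curl -fsSLO "$SUPERCRONIC_URL" \
&& echo "${SUPERCRONIC_SHA1SUM} ${SUPERCRONIC}" | sha1sum -c - \
&& chmod +x "$SUPERCRONIC" \
&& mv "$SUPERCRONIC" /usr/local/bin/supercronic
COPY myScript /bin/myScript
COPY cron /etc/crontab
RUN chmod +x /bin/myScript && mkdir -p /home/files && chown -R 1001:0 /home/files
# Supercronic does not need root, so it runs under the arbitrary UID OpenShift assigns
USER 1001
CMD ["supercronic", "/etc/crontab"]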
