Docker container environment variable file during runtime - docker

I have a Docker image that schedules a cron job at a frequency defined when building the image, using the lines below.
COPY myjobtime /etc/cron.d/myjobtime
RUN chmod 0644 /etc/cron.d/myjobtime &&\
crontab /etc/cron.d/myjobtime
CMD cron
I have the cron entry in the file myjobtime.
*/10 * * * * /usr/local/bin/sh /app/myscript.py
I would like to be able to pass the cron schedule during the runtime. Meaning, if someone wants to modify the cron schedule to a different frequency, they should be able to do that while running the container and passing an environment variable file with the new cron schedule in it. Can this be done?

The important detail is that you need to create and install the crontab file when the container starts up. I find an entrypoint wrapper script to be a useful pattern for this: set the image's ENTRYPOINT to be a shell script that does whatever first-time setup is required, then have it exec "$@" to run the image's CMD.
If your image is ultimately based on a Linux distribution that ships the GNU toolset, then envsubst is a really helpful program here. It reads in a text file, expands environment variable references, and writes out the result. I'll assume you have this available; on Alpine-based images you can do similar tricks with sed(1) (though escaping around the cron schedule will become tricky).
This makes the entrypoint wrapper script something like:
#!/bin/sh
# entrypoint.sh
# Set a default schedule, if the user didn't provide one
if [ -z "$CRON_SCHEDULE" ]; then
export CRON_SCHEDULE='*/10 * * * *'
fi
# Run substitutions on the template file and inject the crontab
envsubst < /app/myjobtime.cron.tmpl | crontab
# Run the main container command
exec "$#"
Since the template isn't a "normal" crontab, it can't go in the "normal" crontab directory; putting it in the application directory is fine. That file has an environment variable reference where the schedule would go:
# myjobtime.cron.tmpl
${CRON_SCHEDULE} /app/myscript.py
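If envsubst isn't available (for example on a plain Alpine base, as mentioned above), a sed-based substitution is a rough sketch of an alternative for the envsubst line in the wrapper; the exact escaping here is an assumption and needs care because of the asterisks and slashes in the schedule:
# alternative to the envsubst line, assuming no envsubst is installed
# '|' as the sed delimiter avoids clashing with the '/' in "*/10",
# and '*' is literal on the replacement side of sed
sed "s|\${CRON_SCHEDULE}|$CRON_SCHEDULE|g" /app/myjobtime.cron.tmpl | crontab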
In your image, set the wrapper script to be the ENTRYPOINT, make sure the template file is in the right place, and leave the CMD unchanged.
# (assuming there's not a broad `COPY . .`)
COPY myjobtime.cron.tmpl .
COPY entrypoint.sh .
ENTRYPOINT ["/app/entrypoint.sh"] # must be JSON-array syntax
CMD cron # unchanged
This should allow you to override the cron schedule.
docker run -d --name hourly myappcron
docker run -d --name daily -e 'CRON_SCHEDULE=0 0 * * *' myappcron
Since the entrypoint wrapper script runs whatever command was provided, and you can override the command pretty easily, this also lets you double-check that the right schedule got set.
docker run --rm -e 'CRON_SCHEDULE=0 0 * * *' myappcron \
crontab -l # runs instead of the cron daemon

Related

Schedule Cron Job with docker [duplicate]

I am trying to run a cronjob inside a docker container that invokes a shell script.
Yesterday I have been searching all over the web and stack overflow, but I could not really find a solution that works.
How can I do this?
You can copy your crontab into an image, in order for the container launched from said image to run the job.
Important: as noted in docker-cron issue 3: use LF, not CRLF for your cron file.
See "Run a cron job with Docker" from Julien Boulay in his Ekito/docker-cron:
Let’s create a new file called "hello-cron" to describe our job.
# must be ended with a new line "LF" (Unix) and not "CRLF" (Windows)
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
If you are wondering what 2>&1 is, Ayman Hourieh explains.
The following Dockerfile describes all the steps to build your image
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
RUN apt-get update && apt-get -y install cron
# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
But: if cron dies, the container keeps running.
(see Gaafar's comment and How do I make apt-get install less noisy?:
apt-get -y install -qq --force-yes cron can work too)
As noted by Nathan Lloyd in the comments:
Quick note about a gotcha:
If you're adding a script file and telling cron to run it, remember to
RUN chmod 0744 /the_script
Cron fails silently if you forget.
OR, make sure your job itself redirects directly to stdout/stderr instead of a log file, as described in hugoShaka's answer:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
Replace the last Dockerfile line with
CMD ["cron", "-f"]
But: it doesn't work if you want to run tasks as a non-root.
See also (about cron -f, which is to say cron "foreground") "docker ubuntu cron -f is not working"
Build and run it:
sudo docker build --rm -t ekito/cron-example .
sudo docker run -t -i ekito/cron-example
Be patient, wait for 2 minutes and your command-line should display:
Hello world
Hello world
Eric adds in the comments:
Do note that tail may not display the correct file if it is created during image build.
If that is the case, you need to create or touch the file during container runtime in order for tail to pick up the correct file.
See "Output of tail -f at the end of a docker CMD is not showing".
See more in "Running Cron in Docker" (Apr. 2021) from Jason Kulatunga, as he commented below
See Jason's image AnalogJ/docker-cron based on:
Dockerfile installing cronie/crond, depending on distribution.
an entrypoint initializing /etc/environment and then calling
cron -f -l 2
The accepted answer may be dangerous in a production environment.
In Docker you should only execute one process per container, because if you don't, the process that forked and went to the background is not monitored and may stop without you knowing it.
When you use CMD cron && tail -f /var/log/cron.log, the cron process basically forks in order to run in the background, the main process exits, and you are left running tail -f in the foreground. The background cron process could stop or fail, you won't notice, your container will keep running silently, and your orchestration tool will not restart it.
You can avoid such a thing by redirecting the cron commands' output directly to your Docker stdout and stderr, which are located at /proc/1/fd/1 and /proc/1/fd/2 respectively.
Using basic shell redirects, you may want to do something like this:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
And your CMD will be : CMD ["cron", "-f"]
But: this doesn't work if you want to run tasks as a non-root.
For those who want to use a simple and lightweight image:
FROM alpine:3.6
# copy crontabs for root user
COPY config/cronjobs /etc/crontabs/root
# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]
Where cronjobs is the file that contains your cronjobs, in this form:
* * * * * echo "hello stackoverflow" >> /test_file 2>&1
# remember to end this file with an empty new line
But apparently you won't see hello stackoverflow in docker logs.
What @VonC has suggested is nice, but I prefer doing all cron job configuration in one line. This avoids cross-platform issues like the cronjob location, and you don't need a separate cron file.
FROM ubuntu:latest
# Install cron
RUN apt-get -y install cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Setup cron job
RUN (crontab -l ; echo "* * * * * echo \"Hello world\" >> /var/log/cron.log") | crontab
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
After running your docker container, you can make sure if cron service is working by:
# To check if the job is scheduled
docker exec -ti <your-container-id> bash -c "crontab -l"
# To check if the cron service is running
docker exec -ti <your-container-id> bash -c "pgrep cron"
If you prefer to have ENTRYPOINT instead of CMD, then you can substitute the CMD above with
ENTRYPOINT cron start && tail -f /var/log/cron.log
But: if cron dies, the container keeps running.
There is another way to do it, is to use Tasker, a task runner that has cron (a scheduler) support.
Why? Sometimes to run a cron job, you have to mix your base image (python, java, nodejs, ruby) with crond. That means another image to maintain. Tasker avoids that by decoupling crond from your container. You can just focus on the image that you want to execute your commands in, and configure Tasker to use it.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
tasker:
image: strm/tasker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
configuration: |
logging:
level:
ROOT: WARN
org.springframework.web: WARN
sh.strm: DEBUG
schedule:
- every: minute
task: hello
- every: minute
task: helloFromPython
- every: minute
task: helloFromNode
tasks:
docker:
- name: hello
image: debian:jessie
script:
- echo Hello world from Tasker
- name: helloFromPython
image: python:3-slim
script:
- python -c 'print("Hello world from python")'
- name: helloFromNode
image: node:8
script:
- node -e 'console.log("Hello from node")'
There are 3 tasks there; all of them will run every minute (every: minute), and each of them will execute the script code inside the image defined in its image section.
Just run docker-compose up, and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Though this aims to run jobs beside a running process in a container via Docker's exec interface, this may be of interest for you.
I've written a daemon that observes containers and schedules jobs, defined in their metadata, on them. Example:
version: '2'
services:
  wordpress:
    image: wordpress
  mysql:
    image: mariadb
    volumes:
      - ./database_dumps:/dumps
    labels:
      deck-chores.dump.command: sh -c "mysqldump --all-databases > /dumps/dump-$$(date -Idate)"
      deck-chores.dump.interval: daily
'Classic', cron-like configuration is also possible.
Here are the docs, here's the image repository.
VonC's answer is pretty thorough. In addition I'd like to add one thing that helped me. If you just want to run a cron job without tailing a file, you'd be tempted to just remove the && tail -f /var/log/cron.log from the cron command.
However this will cause the Docker container to exit shortly after running because when the cron command completes, Docker thinks the last command has exited and hence kills the container. This can be avoided by running cron in the foreground via cron -f.
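A minimal sketch of that foreground approach, reusing the hello-cron file from the earlier example:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
COPY hello-cron /etc/cron.d/hello-cron
RUN chmod 0644 /etc/cron.d/hello-cron && crontab /etc/cron.d/hello-cron
# cron -f stays in the foreground, so the container lives exactly as long as cron does
CMD ["cron", "-f"]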
If you're using Docker for Windows, remember that you have to change your line-ending format from CRLF to LF (i.e. from DOS to Unix) if you intend on importing your crontab file from Windows to your Ubuntu container. If not, your cron job won't work. Here's a working example:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
RUN apt-get update && apt-get install -y dos2unix
# Add crontab file (from your windows host) to the cron directory
ADD cron/hello-cron /etc/cron.d/hello-cron
# Change line ending format to LF
RUN dos2unix /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/hello-cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/hello-cron.log
This actually took me hours to figure out, as debugging cron jobs in docker containers is a tedious task. Hope it helps anyone else out there that can't get their code to work!
But: if cron dies, the container keeps running.
I created a Docker image based on the other answers, which can be used like
docker run -v "/path/to/cron:/etc/cron.d/crontab" gaafar/cron
where /path/to/cron: absolute path to crontab file, or you can use it as a base in a Dockerfile:
FROM gaafar/cron
# COPY crontab file in the cron directory
COPY crontab /etc/cron.d/crontab
# Add your commands here
For reference, the image is here.
Unfortunately, none of the above answers worked for me as-is, although they all led me toward my eventual solution. Here is the snippet, in case it helps someone.
This can be solved with a bash file; due to the layered architecture of Docker, the cron service doesn't get started by RUN/CMD/ENTRYPOINT commands on their own.
Simply add a bash file which will start cron and any other services (if required).
Dockerfile
FROM gradle:6.5.1-jdk11 AS build
# apt
RUN apt-get update
RUN apt-get -y install cron
# Setup cron to run every minute to print (you can add/update your cron here)
RUN touch /var/log/cron-1.log
RUN (crontab -l ; echo "* * * * * echo testing cron.... >> /var/log/cron-1.log 2>&1") | crontab
# entrypoint.sh
RUN chmod +x entrypoint.sh
CMD ["bash","entrypoint.sh"]
entrypoint.sh
#!/bin/sh
service cron start & tail -f /var/log/cron-2.log
If any other service is also required to run along with cron then add that service with & in the same command, for example: /opt/wildfly/bin/standalone.sh & service cron start & tail -f /var/log/cron-2.log
Once you get into the Docker container, you can see that testing cron.... is printed every minute to the file /var/log/cron-1.log.
But, if cron dies, the container keeps running.
Define the cronjob in a dedicated container which runs the command via docker exec to your service.
This is higher cohesion and the running script will have access to the environment variables you have defined for your service.
# docker-compose.yml
version: "3.3"
services:
  myservice:
    environment:
      MSG: i'm being cronjobbed, every minute!
    image: alpine
    container_name: myservice
    command: tail -f /dev/null
  cronjobber:
    image: docker:edge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: cronjobber
    command: >
      sh -c "
      echo '* * * * * docker exec myservice printenv | grep MSG' > /etc/crontabs/root
      && crond -f"
I decided to use busybox, as it is one of the smallest images.
crond is executed in the foreground (-f), logging is sent to stderr (-d); I didn't choose to change the log level.
The crontab file is copied to the default path: /var/spool/cron/crontabs
FROM busybox:1.33.1
# Usage: crond [-fbS] [-l N] [-d N] [-L LOGFILE] [-c DIR]
#
# -f Foreground
# -b Background (default)
# -S Log to syslog (default)
# -l N Set log level. Most verbose 0, default 8
# -d N Set log level, log to stderr
# -L FILE Log to FILE
# -c DIR Cron dir. Default:/var/spool/cron/crontabs
COPY crontab /var/spool/cron/crontabs/root
CMD [ "crond", "-f", "-d" ]
But output of the tasks apparently can't be seen in docker logs.
When you deploy your container on another host, just note that it won't start any processes automatically. You need to make sure that the cron service is running inside your container.
In our case, I am using Supervisord with other services to start the cron service.
[program:misc]
command=/etc/init.d/cron restart
user=root
autostart=true
autorestart=true
stderr_logfile=/var/log/misc-cron.err.log
stdout_logfile=/var/log/misc-cron.out.log
priority=998
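For completeness, a hedged sketch of how such a Supervisord-managed image is often wired up (package name and config path are the Debian/Ubuntu defaults; adjust for your base image):
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# -n keeps supervisord in the foreground, so it remains the container's main process
CMD ["supervisord", "-n"]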
From the above examples I created this combination:
Alpine Image & Edit Using Crontab in Nano (I hate vi)
FROM alpine
RUN apk update
RUN apk add curl nano
ENV EDITOR=/usr/bin/nano
# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]
# Shell Access
# docker exec -it <CONTAINERID> /bin/sh
# Example Cron Entry
# crontab -e
# * * * * * echo hello > /proc/1/fd/1 2>/proc/1/fd/2
# DATE/TIME WILL BE IN UTC
Set up a cron job in parallel to a one-time job
Create a script file, say run.sh, with the job that is supposed to run periodically.
#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"
Save and exit.
Use ENTRYPOINT instead of CMD
If you have multiple jobs to kick off during docker containerization, use the entrypoint file to run them all.
The entrypoint file is a script that comes into action when a docker run command is issued. So, all the steps that we want to run can be put in this script file.
For instance, we have 2 jobs to run:
Run once job: echo “Docker container has been started”
Run periodic job: run.sh
Create entrypoint.sh
#!/bin/bash
# Start the run once job.
echo "Docker container has been started"
# Setup a cron schedule
echo "* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt
crontab scheduler.txt
cron -f
Let’s understand the crontab that has been set up in the file
* * * * *: Cron schedule; the job must run every minute. You can update the schedule based on your requirement.
/run.sh: The path to the script file which is to be run periodically
/var/log/cron.log: The filename to save the output of the scheduled cron job.
2>&1: The error logs (if any) will also be redirected to the same output file used above.
Note: Do not forget to add an extra new line, as it makes it a valid cron.
Scheduler.txt: the complete cron setup will be redirected to a file.
Using System/User specific environment variables in cron
My actual cron job was expecting most of the arguments as environment variables passed to the docker run command. But, with bash, I was not able to use any of the environment variables that belong to the system or the Docker container.
Then, this came up as a workaround to the problem:
Add the following line in the entrypoint.sh
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
Update the cron setup and specify:
SHELL=/bin/bash
BASH_ENV=/container.env
At last, your entrypoint.sh should look like
#!/bin/bash
# Start the run once job.
echo "Docker container has been started"
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
# Setup a cron schedule
echo "SHELL=/bin/bash
BASH_ENV=/container.env
* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt
crontab scheduler.txt
cron -f
Last but not least, create a Dockerfile:
FROM ubuntu:16.04
MAINTAINER Himanshu Gupta
# Install cron
RUN apt-get update && apt-get install -y cron
# Add files
ADD run.sh /run.sh
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /run.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
That’s it. Build and run the Docker image!
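For example (the image tag and the extra environment variable are just placeholders):
docker build -t cron-entrypoint-demo .
# any -e variables are captured into /container.env by entrypoint.sh
# and made visible to the cron job via BASH_ENV
docker run --rm -e MY_SETTING=value cron-entrypoint-demo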
Here's my docker-compose based solution:
cron:
  image: alpine:3.10
  command: crond -f -d 8
  depends_on:
    - servicename
  volumes:
    - './conf/cron:/etc/crontabs/root:z'
  restart: unless-stopped
The lines with the cron entries are in the ./conf/cron file.
Note: this won't run commands that aren't in the alpine image.
Also, output of the tasks apparently won't appear in docker logs.
This question has a lot of answers, but some are complicated and others have drawbacks. I'll try to explain the problems and deliver a solution.
cron-entrypoint.sh:
#!/bin/bash
# copy machine environment variables to cron environment
printenv | cat - /etc/crontab > temp && mv temp /etc/crontab
## validate cron file
crontab /etc/crontab
# cron service with SIGTERM support
service cron start
trap "service cron stop; exit" SIGINT SIGTERM
# just dump your logs to std output
tail -f \
/app/storage/logs/laravel.log \
/var/log/cron.log \
& wait $!
Problems solved
environment variables are not available in the cron environment (like env vars or Kubernetes secrets)
stop when the crontab file is not valid
stop cron jobs gracefully when the machine receives a SIGTERM signal
For context, I use the previous script on Kubernetes with a Laravel app.
This line was the one that helped me run my pre-scheduled task:
ADD mycron/root /etc/cron.d/root
RUN chmod 0644 /etc/cron.d/root
RUN crontab /etc/cron.d/root
RUN touch /var/log/cron.log
CMD ( cron -f -l 8 & ) && apache2-foreground # <-- run cron
My project runs inside: FROM php:7.2-apache
But: if cron dies, the container keeps running.
When running on some trimmed-down images that restrict root access, I had to add my user to the sudoers and run cron with sudo:
FROM node:8.6.0
RUN apt-get update && apt-get install -y cron sudo
COPY crontab /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
RUN touch /var/log/cron.log
# Allow node user to start cron daemon with sudo
RUN echo 'node ALL=NOPASSWD: /usr/sbin/cron' >>/etc/sudoers
ENTRYPOINT sudo cron && tail -f /var/log/cron.log
Maybe that helps someone
But: if cron dies, the container keeps running.
So, my problem was the same. The fix was to change the command section in the docker-compose.yml.
From
command: crontab /etc/crontab && tail -f /etc/crontab
To
command: crontab /etc/crontab
command: tail -f /etc/crontab
The problem was the '&&' between the commands. After deleting this, it was all fine.
Focusing on gracefully stopping the cronjobs when receiving SIGTERM or SIGQUIT signals (e.g. when running docker stop).
That's not too easy. By default, the cron process just gets killed without paying attention to running cronjobs. I'm elaborating on pablorsk's answer:
Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get -y install cron procps \
&& rm -rf /var/lib/apt/lists/*
# Copy cronjobs file to the cron.d directory
COPY cronjobs /etc/cron.d/cronjobs
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronjobs
# similarly prepare the default cronjob scripts
COPY run_cronjob.sh /root/run_cronjob.sh
RUN chmod +x /root/run_cronjob.sh
COPY run_cronjob_without_log.sh /root/run_cronjob_without_log.sh
RUN chmod +x /root/run_cronjob_without_log.sh
# Apply cron job
RUN crontab /etc/cron.d/cronjobs
# to gain access to environment variables, we need this additional entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# optionally, change received signal from SIGTERM TO SIGQUIT
#STOPSIGNAL SIGQUIT
# Run the command on container startup
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
# make global environment variables available within crond, too
printenv | grep -v "no_proxy" >> /etc/environment
# SIGQUIT/SIGTERM-handler
term_handler() {
echo 'stopping cron'
service cron stop
echo 'stopped'
echo 'waiting'
x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
xold=0
while [ "$x" -gt 0 ]
do
if [ "$x" != "$xold" ]; then
echo "Waiting for $x running cronjob(s):"
ps u -C run_cronjob.sh
xold=$x
sleep 1
fi
x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
done
echo 'done waiting'
exit 143; # 128 + 15 -- SIGTERM
}
# cron service with SIGTERM and SIGQUIT support
service cron start
trap "term_handler" QUIT TERM
# endless loop
while true
do
tail -f /dev/null & wait ${!}
done
cronjobs
* * * * * ./run_cronjob.sh cron1
*/2 * * * * ./run_cronjob.sh cron2
*/3 * * * * ./run_cronjob.sh cron3
This assumes you wrap all your cronjobs in a run_cronjob.sh script. That way, you can execute arbitrary code that shutdown will wait for gracefully.
run_cronjob.sh (optional helper script to keep cronjob definitions clean)
#!/bin/bash
DIR_INCL="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR_INCL" ]]; then DIR_INCL="$PWD"; fi
cd "$DIR_INCL"
# redirect all cronjob output to docker
./run_cronjob_without_log.sh "$#" > /proc/1/fd/1 2>/proc/1/fd/2
run_cronjob_without_log.sh
your_actual_cronjob_src()
By the way, when receiving a SIGKILL the container still shuts down immediately. That way you can use a command like docker-compose stop -t 60 cron-container to wait 60s for cronjobs to finish gracefully, but still terminate them for sure after the timeout.
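In a compose file, the same grace period can be declared permanently with stop_grace_period, so a plain docker-compose stop behaves the same way (the service name is an assumption):
services:
  cron-container:
    build: .
    # allow running cronjobs up to 60s to finish before SIGKILL
    stop_grace_period: 60s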
All the answers require root access inside the container because cron itself requires UID 0.
Requesting root access (e.g. via sudo) is against Docker best practices.
I used https://github.com/gjcarneiro/yacron to manage scheduled tasks.
I occasionally tried to find a docker-friendly cron implementation. And this last time I tried, I've found a couple.
By docker-friendly I mean, "output of the tasks can be seen in docker logs w/o resorting to tricks."
The most promising I see at the moment is supercronic. It can be fed a crontab file, all while being docker-friendly. To make use of it:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
&& apk add --no-cache supercronic shadow \
&& useradd -m app
USER app
COPY crontab .
crontab:
* * * * * date
A gist with a bit more info.
Another good one is yacron, but it uses YAML.
ofelia can be used, but it seems to focus on running tasks in separate containers. That's probably not a downside, but I'm not sure why I'd want to do that.
And there's also a number of traditional cron implementations: dcron, fcron, cronie. But they come with "no easy way to see output of the tasks."
Just adding to the list of answers that you can also use this image:
https://hub.docker.com/repository/docker/cronit/simple-cron
And use it as a basis to start cron jobs, using it like this:
FROM cronit/simple-cron # Inherit from the base image
#Set up all your dependencies
COPY jobs.cron ./ # Copy your local config
Evidently, it is possible to run cron as a process inside the container (under the root user) alongside other processes, using an ENTRYPOINT statement in the Dockerfile with a start.sh script that includes the line service cron start. More info here
#!/bin/bash
# copy environment variables for local use
env >> /etc/environment
# start cron service
service cron start
# start other service
service other start
#...
If your image doesn't contain any daemon (so it's only the short-running script or process), you may also consider starting your cron from outside, by simply defining a LABEL with the cron information, plus the scheduler itself. This way, your default container state is "exited". If you have multiple scripts, this may result in a lower footprint on your system than having multiple parallel-running cron instances.
See: https://github.com/JaciBrunning/docker-cron-label
Example docker-compose.yaml:
version: '3.8'
# Example application of the cron image
services:
  cron:
    image: jaci/cron-label:latest
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/etc/localtime:/etc/localtime:ro"
  hello:
    image: hello-world
    restart: "no"
    labels:
      - "cron.schedule=* * * * * "
I wanted to share a modification of some of these other suggestions that I found more flexible. I wanted to enable changing the cron time with an environment variable, and ended up adding an additional script that runs within my entrypoint.sh, but before the call to cron -f.
updatecron.sh
#!/bin/sh
#remove old cron files
rm -rf /etc/cron.*/*
#create a new formatted cron definition
echo "$crondef [appname] >/proc/1/fd/1 2>/proc/1/fd/2" >> /etc/cron.d/restart-cron
echo \ >> /etc/cron.d/restart-cron
chmod 0644 /etc/cron.d/restart-cron
crontab /etc/cron.d/restart-cron
This removes any existing cron files, creates a new cronfile using an ENV variable of crondef, and then loads it.
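For context, the surrounding entrypoint might look roughly like this (a sketch; the default schedule and the script path are assumptions):
#!/bin/sh
# entrypoint.sh (sketch)
# fall back to a default schedule if none was passed with -e crondef=...
: "${crondef:=*/5 * * * *}"
export crondef
# regenerate and install the crontab, then run cron in the foreground
/updatecron.sh
exec cron -f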
Ours was a Node.js application to be run as a cron job, and it also depended on environment variables.
The below solution worked for us.
Docker file:
# syntax=docker/dockerfile:1
FROM node:12.18.1
ENV NODE_ENV=production
COPY ["startup.sh", "./"]
# Removed steps to build the node js application
#--------------- Setup cron ------------------
# Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run every day at 1AM
#/proc/1/fd/1 2>/proc/1/fd/2 is used to redirect cron logs to standard output and standard error
RUN (crontab -l ; echo "0 1 * * * /usr/local/bin/node /app/dist/index.js > /proc/1/fd/1 2>/proc/1/fd/2") | crontab
#--------------- Start Cron ------------------
# Grant execution rights
RUN chmod 755 startup.sh
CMD ["./startup.sh"]
startup.sh:
#!/bin/bash
echo "Copying env variables to /etc/environment so that it is available for cron jobs"
printenv >> /etc/environment
echo "Starting cron"
cron -f
With multiple jobs and various dependencies like zsh and curl, this is a good approach while also combining the best practices from other answers. Bonus: This does NOT require you to set +x execution permissions on myScript.sh, which can be easy to miss in a new environment.
cron.dockerfile
FROM ubuntu:latest
# Install dependencies
RUN apt-get update && apt-get -y install \
cron \
zsh \
curl;
# Setup multiple jobs with zsh and redirect outputs to docker logs
RUN (echo "\
* * * * * zsh -c 'echo "Hello World"' 1> /proc/1/fd/1 2>/proc/1/fd/2 \n\
* * * * * zsh /myScript.sh 1> /proc/1/fd/1 2>/proc/1/fd/2 \n\
") | crontab
# Run cron in the foreground, so docker knows the task is running
CMD ["cron", "-f"]
Integrate this with docker compose like so:
docker-compose.yml
services:
  cron:
    build:
      context: .
      dockerfile: ./cron.dockerfile
    volumes:
      - ./myScript.sh:/myScript.sh
Keep in mind that you need to docker compose build cron when you change contents of the cron.dockerfile, but changes to myScript.sh will be reflected right away as it's mounted in compose.

Docker entrypoint script not sourcing file

I have an entrypoint script with docker which is getting executed. However, it just doesn't run the source command to source a file full of env values.
Here's the relevant section from the Dockerfile:
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD ["-production"]
I have tried 2 version of entrypoint script. Neither of them are working.
VERSION 1
#!/bin/bash
cat >> /etc/bash.bashrc <<EOF
if [[ -f "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env" ]]
then
echo "${SERVICE_NAME}.env found ..."
set -a
source "/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
set +a
fi
EOF
echo "INFO: Starting ${SERVICE_NAME} application, environment:"
exec -a $SERVICE_NAME node .
VERSION 2
ENV_FILE=/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env
if [[] -f "$ENV_FILE" ]; then
echo "INFO: Loading environment variables from file: ${ENV_FILE}"
set -a
source $ENV_FILE
set +a
fi
echo "INFO: Starting ${SERVICE_NAME} application..."
exec -a $SERVICE_NAME node .
Version 2 of above prints to the log that it has found the file however, source command simply isn't loading the contents of file into memory. I check if contents have been loaded by running the env command.
I've been trying few things for 3 days now with no progress. Please can someone help me? Please note I am new to docker which is making things quite difficult.
I think your second version is almost there.
Normally Docker doesn't read or use shell dotfiles at all. This isn't anything particular to Docker, just that you're not running an "interactive" or "login" shell at any point in the sequence. In your first form you write out a .bashrc file but then exec node, and nothing there ever re-reads the dotfile.
You mention in the question that you use the env command to check the environment. If this is via docker exec, that launches a new process inside the container, but it's not a child of the entrypoint script, so any setup that happens there won't be visible to docker exec. This usually isn't a problem.
I can suggest a couple of cleanups that might make it a little easier to see the effects of this. The biggest is to split out the node invocation from the entrypoint script. If you have both an ENTRYPOINT and a CMD then Docker passes the CMD as arguments to the ENTRYPOINT; if you change the entrypoint script to end with exec "$@" then it will run whatever it got passed.
#!/bin/sh
# (trying to avoid bash-specific constructs)
# Read the environment file
ENV_FILE="/usr/local/etc/${SERVICE_NAME}/${SERVICE_NAME}.env"
if [[ -f "$ENV_FILE" ]; then
. $ENV_FILE
fi
# Run the main container command
exec "$#"
And then in the Dockerfile, put the node invocation as the main command
ENTRYPOINT ["./entrypoint.sh"] # must be JSON-array syntax
CMD ["node", "."] # could be shell-command syntax
The important thing with this is that it's easy to override the command but leave the entrypoint intact. So if you run
docker run --rm your-image env
that will launch a temporary container, but passing env as the command instead of node .. That will go through the steps in the entrypoint script, including setting up the environment, but then print out the environment and exit immediately. That will let you observe the changes.
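For reference, the file being sourced just needs shell variable assignments; a hypothetical example is below. Note that with a plain . source as above, bare KEY=value lines stay local to the shell, so either keep the export keyword in the file or wrap the source in set -a / set +a (as the question's first version did) so the values actually reach the node process:
# /usr/local/etc/myservice/myservice.env (hypothetical contents)
export DATABASE_URL="postgres://db:5432/app"
export LOG_LEVEL="debug"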

crond isn't running unless I ssh to the container [duplicate]


I would like to populate an entry in a config file at run time via docker-compose

I have Tomcat installed in a container, and inside it there is an application configuration file. I would like to populate a value inside it at run time. (Before that point I don't know what the value is, so I can't populate it at the time of building the image.)
I am invoking the service with docker-compose up, and I would like the value in the configuration file to be replaced with the value I provide to docker-compose as a parameter,
something like docker-compose up -e "value at run time via docker compose"
URL for the server:
SERVERADD=https://{{value at run time via docker compose}}/{{index}}
Can I accomplish this with an environment variable or any other way? Kindly suggest!
This is normally done in an ENTRYPOINT or CMD script that is built into the image.
The script checks for the environment variable, does the replacements or other work required, then continues on to run the command as before.
#!/bin/sh
if [ -n "$SOME_ENV" ]; then
  sed -i -e 's/^param=.*/param='"$SOME_ENV"'/' /etc/file.conf
fi
exec "$@"
The script needs to be added to an image, the Dockerfile could be:
FROM whatever
COPY docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "run_server", "-o", "option" ]

Docker echo environment variable

I'm trying to write a little Dockerfile that sets a USER and just echoes the current user, as a little example to prove to myself it is working. I've tried a number of variants and couldn't find much help in the documentation.
FROM ubuntu
USER daemon
# ENTRYPOINT ["echo", "$USER"]
# just gives "$USER"
# ENTRYPOINT ["echo", "-e", "${USER}"]
# just gives "$USER"
# ENTRYPOINT echo $USER
# gives empty string
# ENTRYPOINT ["/bin/echo", "$USER"]
# just gives "$USER"
I'm running docker build . on the dockerfile and then running docker run <image-id> and getting the results
Expected result is daemon, or without the USER daemon line, I expect root. Probably a really simple answer.
This is the expected behavior, as weird as it seems!
When ENTRYPOINT is a list (as in ENTRYPOINT ["echo", "$USER"]), it is used as-is, without further parsing or interpretation. So $USER remains $USER, because there is no shell involved in the process to replace it with the value of the USER environment variable.
Now, when ENTRYPOINT is a string (as in ENTRYPOINT echo $USER), what is actually executed is sh -c "echo $USER", and $USER is replaced with the value of the environment variable (as you would expect).
However, the environment variable USER is not set by default. It is set by the login process; and when you just run sh -c ... the login process is not involved.
Compare the environment when running docker run -t -i ubuntu bash and docker run -t -i ubuntu login -f root. In the former case, you will get a very basic environment; in the latter case, you will get the complete environment that you are used to (including the USER variable).
Couldn't you set, in the Dockerfile, the ENV instruction to a default value, and then, when running a container, use the -e, --env option to override what would be interpreted by the:
ENTRYPOINT echo $SOMEENVVAR
form of ENTRYPOINT?
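A sketch of that idea (the variable name and values are placeholders): set a default with ENV, use the shell form of ENTRYPOINT so the variable is expanded by /bin/sh -c at start time, and override it at run time with -e.
FROM ubuntu
ENV SOMEENVVAR=default-value
# shell form, so $SOMEENVVAR is expanded when the container starts
ENTRYPOINT echo $SOMEENVVAR
Built with docker build -t env-echo ., a plain docker run --rm env-echo prints the default, while docker run --rm -e SOMEENVVAR=hello env-echo prints hello.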
I think there's a series of issues here.
when I
docker run -i -t ubuntu /bin/bash
echo $USER
set
I don't see $USER set at all - whoami does report daemon though.
additionally, I have the suspicion (but have not looked at the code yet) that ENV vars in the Dockerfile are escaped, to avoid their use (many people assume that they can export host variables to the built container, but this is something that the docker guys would like to avoid)
