crond isn't running unless I ssh to the container [duplicate] - docker
I am trying to run a cronjob inside a docker container that invokes a shell script.
I have been searching all over the web and Stack Overflow since yesterday, but I could not find a solution that works.
How can I do this?
You can copy your crontab into an image, in order for the container launched from said image to run the job.
Important: as noted in docker-cron issue 3: use LF, not CRLF for your cron file.
See "Run a cron job with Docker" from Julien Boulay in his Ekito/docker-cron:
Let’s create a new file called "hello-cron" to describe our job.
# must be ended with a new line "LF" (Unix) and not "CRLF" (Windows)
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
# An empty line is required at the end of this file for a valid cron file.
If you are wondering what 2>&1 is, Ayman Hourieh explains.
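In short, 2>&1 redirects stderr (file descriptor 2) to wherever stdout (file descriptor 1) currently points, so both streams end up in the same place. A quick illustration in plain shell:
# both lines end up in out.log: "ok" via stdout, "fail" via stderr
sh -c 'echo ok; echo fail >&2' >> out.log 2>&1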
The following Dockerfile describes all the steps to build your image
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
RUN apt-get update && apt-get -y install cron
# Copy hello-cron file to the cron.d directory
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
But: if cron dies, the container keeps running.
(see Gaafar's comment and How do I make apt-get install less noisy?:
apt-get -y install -qq --force-yes cron can work too)
As noted by Nathan Lloyd in the comments:
Quick note about a gotcha:
If you're adding a script file and telling cron to run it, remember to
RUN chmod 0744 /the_script
Cron fails silently if you forget.
OR, make sure your job itself redirects directly to stdout/stderr instead of a log file, as described in hugoShaka's answer:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
Replace the last Dockerfile line with
CMD ["cron", "-f"]
But: it doesn't work if you want to run tasks as a non-root.
See also (about cron -f, which is to say cron "foreground") "docker ubuntu cron -f is not working"
Build and run it:
sudo docker build --rm -t ekito/cron-example .
sudo docker run -t -i ekito/cron-example
Be patient, wait for 2 minutes and your command-line should display:
Hello world
Hello world
Eric adds in the comments:
Do note that tail may not display the correct file if it is created during image build.
If that is the case, you need to create or touch the file during container runtime in order for tail to pick up the correct file.
See "Output of tail -f at the end of a docker CMD is not showing".
See more in "Running Cron in Docker" (Apr. 2021) from Jason Kulatunga, as he commented below
See Jason's image AnalogJ/docker-cron based on:
Dockerfile installing cronie/crond, depending on distribution.
an entrypoint initializing /etc/environment and then calling
cron -f -l 2
The accepted answer may be dangerous in a production environment.
In Docker you should only execute one process per container, because if you don't, the process that forked and went into the background is not monitored and may stop without you knowing it.
When you use CMD cron && tail -f /var/log/cron.log, the cron process forks in order to run in the background, the main process exits, and tail runs in the foreground. The background cron process could stop or fail without you noticing; your container will still run silently and your orchestration tool will not restart it.
You can avoid such a thing by redirecting the cron commands' output directly to your docker stdout and stderr, which are located at /proc/1/fd/1 and /proc/1/fd/2 respectively.
Using basic shell redirects you may want to do something like this:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
And your CMD will be: CMD ["cron", "-f"]
But: this doesn't work if you want to run tasks as a non-root.
For those who want to use a simple and lightweight image:
FROM alpine:3.6
# copy crontabs for root user
COPY config/cronjobs /etc/crontabs/root
# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]
Where cronjobs is the file that contains your cronjobs, in this form:
* * * * * echo "hello stackoverflow" >> /test_file 2>&1
# remember to end this file with an empty new line
But apparently you won't see hello stackoverflow in docker logs.
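If you do want to see the job's output in docker logs, a hedged variant is to point the job at PID 1's streams instead of a file, as other answers here do:
* * * * * echo "hello stackoverflow" > /proc/1/fd/1 2>/proc/1/fd/2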
What @VonC has suggested is nice, but I prefer doing all the cron job configuration in one line. This avoids cross-platform issues like the cronjob location, and you don't need a separate cron file.
FROM ubuntu:latest
# Install cron
RUN apt-get update && apt-get -y install cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Setup cron job
RUN (crontab -l ; echo "* * * * * echo 'Hello world' >> /var/log/cron.log") | crontab
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
After running your docker container, you can verify that the cron service is working:
# To check if the job is scheduled
docker exec -ti <your-container-id> bash -c "crontab -l"
# To check if the cron service is running
docker exec -ti <your-container-id> bash -c "pgrep cron"
If you prefer to have ENTRYPOINT instead of CMD, then you can substitute the CMD above with
ENTRYPOINT cron start && tail -f /var/log/cron.log
But: if cron dies, the container keeps running.
Another way to do it is to use Tasker, a task runner that has cron (scheduler) support.
Why? Sometimes, to run a cron job, you have to mix your base image (python, java, nodejs, ruby) with crond, which means another image to maintain. Tasker avoids that by decoupling crond from your container. You can just focus on the image that you want to execute your commands in, and configure Tasker to use it.
Here is a docker-compose.yml file that will run some tasks for you:
version: "2"
services:
tasker:
image: strm/tasker
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
environment:
configuration: |
logging:
level:
ROOT: WARN
org.springframework.web: WARN
sh.strm: DEBUG
schedule:
- every: minute
task: hello
- every: minute
task: helloFromPython
- every: minute
task: helloFromNode
tasks:
docker:
- name: hello
image: debian:jessie
script:
- echo Hello world from Tasker
- name: helloFromPython
image: python:3-slim
script:
- python -c 'print("Hello world from python")'
- name: helloFromNode
image: node:8
script:
- node -e 'console.log("Hello from node")'
There are 3 tasks there; all of them run every minute (every: minute), and each executes the script code inside the image defined in its image section.
Just run docker-compose up, and see it working. Here is the Tasker repo with the full documentation:
http://github.com/opsxcq/tasker
Though this aims to run jobs beside a running process in a container via Docker's exec interface, it may be of interest to you.
I've written a daemon that observes containers and schedules jobs, defined in their metadata, on them. Example:
version: '2'
services:
  wordpress:
    image: wordpress
  mysql:
    image: mariadb
    volumes:
      - ./database_dumps:/dumps
    labels:
      deck-chores.dump.command: sh -c "mysqldump --all-databases > /dumps/dump-$$(date -Idate)"
      deck-chores.dump.interval: daily
'Classic', cron-like configuration is also possible.
Here are the docs, here's the image repository.
VonC's answer is pretty thorough. In addition I'd like to add one thing that helped me. If you just want to run a cron job without tailing a file, you'd be tempted to just remove the && tail -f /var/log/cron.log from the cron command.
However this will cause the Docker container to exit shortly after running because when the cron command completes, Docker thinks the last command has exited and hence kills the container. This can be avoided by running cron in the foreground via cron -f.
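Putting that together, a minimal sketch of this variant (reusing the hello-cron file from the accepted answer) could look like:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
COPY hello-cron /etc/cron.d/hello-cron
RUN chmod 0644 /etc/cron.d/hello-cron && crontab /etc/cron.d/hello-cron
# cron stays in the foreground, so the container lives exactly as long as cron
CMD ["cron", "-f"]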
If you're using Docker for Windows, remember that you have to change your line-ending format from CRLF to LF (i.e. from DOS to Unix) if you intend to import your crontab file from Windows into your Ubuntu container. If not, your cron job won't work. Here's a working example:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
RUN apt-get update && apt-get install -y dos2unix
# Add crontab file (from your windows host) to the cron directory
ADD cron/hello-cron /etc/cron.d/hello-cron
# Change line ending format to LF
RUN dos2unix /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/hello-cron.log
# Run the command on container startup
CMD cron && tail -f /var/log/hello-cron.log
This actually took me hours to figure out, as debugging cron jobs in docker containers is a tedious task. Hope it helps anyone else out there that can't get their code to work!
But: if cron dies, the container keeps running.
I created a Docker image based on the other answers, which can be used like
docker run -v "/path/to/cron:/etc/cron.d/crontab" gaafar/cron
where /path/to/cron is the absolute path to the crontab file. Or you can use it as a base in a Dockerfile:
FROM gaafar/cron
# COPY crontab file in the cron directory
COPY crontab /etc/cron.d/crontab
# Add your commands here
For reference, the image is here.
Unfortunately, none of the above answers worked for me as-is, although they all led toward my eventual solution. Here is the snippet, in case it helps someone.
This can be solved with a bash file: because of Docker's layered architecture, the cron service doesn't get started by the RUN/CMD/ENTRYPOINT commands alone.
Simply add a bash file which starts cron and the other services (if required).
Dockerfile
FROM gradle:6.5.1-jdk11 AS build
# apt
RUN apt-get update
RUN apt-get -y install cron
# Setup cron to run every minute to print (you can add/update your cron here)
RUN touch /var/log/cron-1.log
RUN (crontab -l ; echo "* * * * * echo testing cron.... >> /var/log/cron-1.log 2>&1") | crontab
# entrypoint.sh (it must be copied into the image before it can be made executable)
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
CMD ["bash","entrypoint.sh"]
entrypoint.sh
#!/bin/sh
service cron start & tail -f /var/log/cron-1.log
If any other service is also required to run along with cron, add that service with & in the same command, for example: /opt/wildfly/bin/standalone.sh & service cron start & tail -f /var/log/cron-1.log
Once you get into the docker container, you can see that testing cron.... is printed every minute in the file /var/log/cron-1.log.
But, if cron dies, the container keeps running.
Define the cronjob in a dedicated container which runs the command via docker exec against your service.
This gives higher cohesion, and the running script will have access to the environment variables you have defined for your service.
# docker-compose.yml
version: "3.3"
services:
  myservice:
    environment:
      MSG: i'm being cronjobbed, every minute!
    image: alpine
    container_name: myservice
    command: tail -f /dev/null
  cronjobber:
    image: docker:edge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: cronjobber
    command: >
      sh -c "
      echo '* * * * * docker exec myservice printenv | grep MSG' > /etc/crontabs/root
      && crond -f"
I decided to use busybox, as it is one of the smallest images.
crond is executed in the foreground (-f), logging is sent to stderr (-d), and I chose not to change the log level.
The crontab file is copied to the default path: /var/spool/cron/crontabs.
FROM busybox:1.33.1
# Usage: crond [-fbS] [-l N] [-d N] [-L LOGFILE] [-c DIR]
#
# -f Foreground
# -b Background (default)
# -S Log to syslog (default)
# -l N Set log level. Most verbose 0, default 8
# -d N Set log level, log to stderr
# -L FILE Log to FILE
# -c DIR Cron dir. Default:/var/spool/cron/crontabs
COPY crontab /var/spool/cron/crontabs/root
CMD [ "crond", "-f", "-d" ]
But output of the tasks apparently can't be seen in docker logs.
When you deploy your container on another host, note that it won't start any processes automatically. You need to make sure that the cron service is running inside your container.
In our case, I use Supervisord with other services to start the cron service.
[program:misc]
command=/etc/init.d/cron restart
user=root
autostart=true
autorestart=true
stderr_logfile=/var/log/misc-cron.err.log
stdout_logfile=/var/log/misc-cron.out.log
priority=998
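For completeness, a minimal sketch of how such a config might be wired into the image (the paths are assumptions; adjust them to wherever your distribution installs supervisor):
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# -n keeps supervisord in the foreground so the container stays alive
CMD ["/usr/bin/supervisord", "-n"]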
From the above examples I created this combination:
Alpine Image & Edit Using Crontab in Nano (I hate vi)
FROM alpine
RUN apk update
RUN apk add curl nano
ENV EDITOR=/usr/bin/nano
# start crond with log level 8 in foreground, output to stderr
CMD ["crond", "-f", "-d", "8"]
# Shell Access
# docker exec -it <CONTAINERID> /bin/sh
# Example Cron Entry
# crontab -e
# * * * * * echo hello > /proc/1/fd/1 2>/proc/1/fd/2
# DATE/TIME WILL BE IN UTC
Set up a cron job in parallel with a one-time job
Create a script file, say run.sh, with the job that is supposed to run periodically.
#!/bin/bash
timestamp=`date +%Y/%m/%d-%H:%M:%S`
echo "System path is $PATH at $timestamp"
Save and exit.
Use Entrypoint instead of CMD
If you have multiple jobs to kick off during docker containerization, use the entrypoint file to run them all.
The entrypoint file is a script that comes into action when a docker run command is issued, so all the steps that we want to run can be put in this script file.
For instance, we have 2 jobs to run:
Run once job: echo “Docker container has been started”
Run periodic job: run.sh
Create entrypoint.sh
#!/bin/bash
# Start the run once job.
echo "Docker container has been started"
# Setup a cron schedule
echo "* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt
crontab scheduler.txt
cron -f
Let’s understand the crontab that has been set up in the file
* * * * *: Cron schedule; the job must run every minute. You can update the schedule based on your requirement.
/run.sh: The path to the script file which is to be run periodically
/var/log/cron.log: The filename to save the output of the scheduled cron job.
2>&1: The error logs (if any) will also be redirected to the same output file used above.
Note: Do not forget to add an extra new line, as it makes it a valid cron.
scheduler.txt: the complete cron setup is redirected to this file.
Using System/User specific environment variables in cron
My actual cron job was expecting most of the arguments as environment variables passed to the docker run command. But, with bash, I was not able to use any of the environment variables that belong to the system or the docker container.
Then this came up as a workaround to the problem:
Add the following line in the entrypoint.sh
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
Update the cron setup and specify:
SHELL=/bin/bash
BASH_ENV=/container.env
At last, your entrypoint.sh should look like
#!/bin/bash
# Start the run once job.
echo "Docker container has been started"
declare -p | grep -Ev 'BASHOPTS|BASH_VERSINFO|EUID|PPID|SHELLOPTS|UID' > /container.env
# Setup a cron schedule
echo "SHELL=/bin/bash
BASH_ENV=/container.env
* * * * * /run.sh >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > scheduler.txt
crontab scheduler.txt
cron -f
Last but not least, create a Dockerfile:
FROM ubuntu:16.04
MAINTAINER Himanshu Gupta
# Install cron
RUN apt-get update && apt-get install -y cron
# Add files
ADD run.sh /run.sh
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /run.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
That’s it. Build and run the Docker image!
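For example (the image tag and the variable name are just placeholders):
docker build -t cron-env-example .
docker run -e MY_ARG=value cron-env-example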
Here's my docker-compose based solution:
cron:
  image: alpine:3.10
  command: crond -f -d 8
  depends_on:
    - servicename
  volumes:
    - './conf/cron:/etc/crontabs/root:z'
  restart: unless-stopped
The lines with the cron entries are in the ./conf/cron file.
Note: this won't run commands that aren't in the alpine image.
Also, output of the tasks apparently won't appear in docker logs.
This question has a lot of answers, but some are complicated and others have drawbacks. I'll try to explain the problems and deliver a solution.
cron-entrypoint.sh:
#!/bin/bash
# copy machine environment variables to cron environment
printenv | cat - /etc/crontab > temp && mv temp /etc/crontab
## validate cron file
crontab /etc/crontab
# cron service with SIGTERM support
service cron start
trap "service cron stop; exit" SIGINT SIGTERM
# just dump your logs to std output
tail -f \
/app/storage/logs/laravel.log \
/var/log/cron.log \
& wait $!
Problems solved
environment variables are not available in the cron environment (like env vars or Kubernetes secrets)
stops when the crontab file is not valid
gracefully stops cron jobs when the machine receives a SIGTERM signal
For context, I use the previous script on Kubernetes with a Laravel app.
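For reference, a minimal sketch of the Dockerfile side that could wire this entrypoint up (the file names are assumptions taken from the script above):
COPY crontab /etc/crontab
COPY cron-entrypoint.sh /cron-entrypoint.sh
RUN chmod +x /cron-entrypoint.sh
ENTRYPOINT ["/cron-entrypoint.sh"]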
This line was the one that helped me run my pre-scheduled task:
ADD mycron/root /etc/cron.d/root
RUN chmod 0644 /etc/cron.d/root
RUN crontab /etc/cron.d/root
RUN touch /var/log/cron.log
CMD ( cron -f -l 8 & ) && apache2-foreground # <-- run cron
My project runs inside: FROM php:7.2-apache
But: if cron dies, the container keeps running.
When running on some trimmed-down images that restrict root access, I had to add my user to the sudoers file and start cron via sudo:
FROM node:8.6.0
RUN apt-get update && apt-get install -y cron sudo
COPY crontab /etc/cron.d/my-cron
RUN chmod 0644 /etc/cron.d/my-cron
RUN touch /var/log/cron.log
# Allow node user to start cron daemon with sudo
RUN echo 'node ALL=NOPASSWD: /usr/sbin/cron' >>/etc/sudoers
ENTRYPOINT sudo cron && tail -f /var/log/cron.log
Maybe that helps someone
But: if cron dies, the container keeps running.
So, my problem was the same. The fix was to change the command section in the docker-compose.yml.
From
command: crontab /etc/crontab && tail -f /etc/crontab
To
command: crontab /etc/crontab
command: tail -f /etc/crontab
The problem was the '&&' between the commands. After deleting this, it was all fine.
Focusing on gracefully stopping the cronjobs when receiving SIGTERM or SIGQUIT signals (e.g. when running docker stop).
That's not too easy. By default, the cron process just gets killed without paying attention to running cronjobs. I'm elaborating on pablorsk's answer:
Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get -y install cron procps \
&& rm -rf /var/lib/apt/lists/*
# Copy cronjobs file to the cron.d directory
COPY cronjobs /etc/cron.d/cronjobs
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/cronjobs
# similarly prepare the default cronjob scripts
COPY run_cronjob.sh /root/run_cronjob.sh
RUN chmod +x /root/run_cronjob.sh
COPY run_cronjob_without_log.sh /root/run_cronjob_without_log.sh
RUN chmod +x /root/run_cronjob_without_log.sh
# Apply cron job
RUN crontab /etc/cron.d/cronjobs
# to gain access to environment variables, we need this additional entrypoint script
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# optionally, change received signal from SIGTERM TO SIGQUIT
#STOPSIGNAL SIGQUIT
# Run the command on container startup
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
# make global environment variables available within crond, too
printenv | grep -v "no_proxy" >> /etc/environment
# SIGQUIT/SIGTERM-handler
term_handler() {
echo 'stopping cron'
service cron stop
echo 'stopped'
echo 'waiting'
x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
xold=0
while [ "$x" -gt 0 ]
do
if [ "$x" != "$xold" ]; then
echo "Waiting for $x running cronjob(s):"
ps u -C run_cronjob.sh
xold=$x
sleep 1
fi
x=$(($(ps u -C run_cronjob.sh | wc -l)-1))
done
echo 'done waiting'
exit 143; # 128 + 15 -- SIGTERM
}
# cron service with SIGTERM and SIGQUIT support
service cron start
trap "term_handler" QUIT TERM
# endless loop
while true
do
tail -f /dev/null & wait ${!}
done
cronjobs
* * * * * ./run_cronjob.sh cron1
*/2 * * * * ./run_cronjob.sh cron2
*/3 * * * * ./run_cronjob.sh cron3
This assumes you wrap all your cronjobs in a run_cronjob.sh script. That way, you can execute arbitrary code for which shutdown will wait gracefully.
run_cronjob.sh (optional helper script to keep cronjob definitions clean)
#!/bin/bash
DIR_INCL="${BASH_SOURCE%/*}"
if [[ ! -d "$DIR_INCL" ]]; then DIR_INCL="$PWD"; fi
cd "$DIR_INCL"
# redirect all cronjob output to docker
./run_cronjob_without_log.sh "$@" > /proc/1/fd/1 2>/proc/1/fd/2
run_cronjob_without_log.sh
your_actual_cronjob_src()
Btw, when receiving a SIGKILL the container still shuts down immediately. That way you can use a command like docker-compose stop -t 60 cron-container to wait 60s for cronjobs to finish gracefully, but still terminate them for sure after the timeout.
All the answers require root access inside the container, because cron itself requires UID 0.
Requesting root access (e.g. via sudo) is against Docker best practices.
I used https://github.com/gjcarneiro/yacron to manage scheduled tasks.
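yacron is configured with a YAML file instead of a crontab; a minimal sketch of what such a config looks like (check the project's README for the exact schema):
jobs:
  - name: hello
    command: echo "hello"
    schedule: "* * * * *"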
From time to time I have tried to find a docker-friendly cron implementation, and this last time I found a couple.
By docker-friendly I mean, "output of the tasks can be seen in docker logs w/o resorting to tricks."
The most promising I see at the moment is supercronic. It can be fed a crontab file, all while being docker-friendly. To make use of it:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
&& apk add --no-cache supercronic shadow \
&& useradd -m app
USER app
COPY crontab .
crontab:
* * * * * date
A gist with a bit more info.
Another good one is yacron, but it uses YAML.
ofelia can be used, but it seems to focus on running tasks in separate containers, which is probably not a downside, but I'm not sure why I'd want to do that.
And there's also a number of traditional cron implementations: dcron, fcron, cronie. But they come with "no easy way to see output of the tasks."
Just adding to the list of answers that you can also use this image:
https://hub.docker.com/repository/docker/cronit/simple-cron
And use it as a basis to start cron jobs, using it like this:
# Inherit from the base image
FROM cronit/simple-cron
# Set up all your dependencies here
# Copy your local config
COPY jobs.cron ./
Evidently, it is possible to run cron as a process inside the container (under the root user) alongside other processes, using an ENTRYPOINT statement in the Dockerfile with a start.sh script that includes the line service cron start. More info here
#!/bin/bash
# copy environment variables for local use
env >> /etc/environment
# start cron service
service cron start
# start other service
service other start
#...
If your image doesn't contain any daemon (so it's only a short-running script or process), you may also consider starting your cron from outside, by simply defining a LABEL with the cron information, plus the scheduler itself. This way, your default container state is "exited". If you have multiple scripts, this may result in a lower footprint on your system than multiple parallel-running cron instances.
See: https://github.com/JaciBrunning/docker-cron-label
Example docker-compose.yaml:
version: '3.8'
# Example application of the cron image
services:
  cron:
    image: jaci/cron-label:latest
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/etc/localtime:/etc/localtime:ro"
  hello:
    image: hello-world
    restart: "no"
    labels:
      - "cron.schedule=* * * * *"
I wanted to share a modification of some of these other suggestions that I found more flexible. I wanted to enable changing the cron schedule with an environment variable, and ended up adding an additional script that runs within my entrypoint.sh, just before the call to cron -f.
updatecron.sh
#!/bin/sh
#remove old cron files
rm -rf /etc/cron.*/*
#create a new formatted cron definition
echo "$crondef [appname] >/proc/1/fd/1 2>/proc/1/fd/2" >> /etc/cron.d/restart-cron
echo \ >> /etc/cron.d/restart-cron
chmod 0644 /etc/cron.d/restart-cron
crontab /etc/cron.d/restart-cron
This removes any existing cron files, creates a new cronfile using an ENV variable of crondef, and then loads it.
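Usage might then look something like this (the image name is a placeholder, and crondef holds whatever schedule fields your cron file expects):
docker run -e crondef="*/5 * * * *" my-image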
Ours was a Node.js application to be run as a cron job, and it was also dependent on environment variables.
The solution below worked for us.
Dockerfile:
# syntax=docker/dockerfile:1
FROM node:12.18.1
ENV NODE_ENV=production
COPY ["startup.sh", "./"]
# Removed steps to build the node js application
#--------------- Setup cron ------------------
# Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run every day at 1AM
#/proc/1/fd/1 2>/proc/1/fd/2 is used to redirect cron logs to standard output and standard error
RUN (crontab -l ; echo "0 1 * * * /usr/local/bin/node /app/dist/index.js > /proc/1/fd/1 2>/proc/1/fd/2") | crontab
#--------------- Start Cron ------------------
# Grant execution rights
RUN chmod 755 startup.sh
CMD ["./startup.sh"]
startup.sh:
#!/bin/bash
echo "Copying env variables to /etc/environment so that it is available for cron jobs"
printenv >> /etc/environment
echo "Starting cron"
cron -f
With multiple jobs and various dependencies like zsh and curl, this is a good approach that also combines the best practices from other answers. Bonus: this does NOT require you to set +x execution permissions on myScript.sh, which can be easy to miss in a new environment.
cron.dockerfile
FROM ubuntu:latest
# Install dependencies
RUN apt-get update && apt-get -y install \
cron \
zsh \
curl;
# Setup multiple jobs with zsh and redirect outputs to docker logs
RUN (echo "\
* * * * * zsh -c 'echo "Hello World"' 1> /proc/1/fd/1 2>/proc/1/fd/2 \n\
* * * * * zsh /myScript.sh 1> /proc/1/fd/1 2>/proc/1/fd/2 \n\
") | crontab
# Run cron in the foreground, so docker knows the task is running
CMD ["cron", "-f"]
Integrate this with docker compose like so:
docker-compose.yml
services:
  cron:
    build:
      context: .
      dockerfile: ./cron.dockerfile
    volumes:
      - ./myScript.sh:/myScript.sh
Keep in mind that you need to run docker compose build cron when you change the contents of cron.dockerfile, but changes to myScript.sh will be reflected right away, as it's mounted in compose.
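For completeness, a hypothetical myScript.sh matching the mount above could be as simple as (zsh and curl are installed by the Dockerfile; the URL is illustrative):
#!/bin/zsh
# runs every minute via the second crontab entry
curl -s https://example.com > /dev/null && echo "pinged example.com"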
Related
How to output logs of cron in php fpm docker?
I have set up a cron inside a docker php:8.1.0-fpm-buster image. It's running as expected, but there is no log showing up inside Docker Desktop; it's a black screen. Here is the Dockerfile:
FROM php:8.1.0-fpm-buster
ARG ENV
RUN apt-get update && apt-get -y install cron
RUN touch /var/log/cron.log
RUN chmod 777 /var/log/cron.log
COPY ./crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
CMD [ "cron", "-f", "-L", "2" ]
What I was expecting inside the logs was something similar to the Linux cron logs, like this example:
Jan 20 09:32:01 ns555555 CRON[21051]: (root) CMD (echo 'my command')
I tried different commands: I added bash -c before the cron command, and I removed the -L 2. I have also found other Stack Overflow posts, but each time it's not the same cron:
See cron output via docker logs, without using an extra file
Docker - Using PHP Cli base image for Cron container
What am I doing wrong? Did I install the wrong cron?
I found the solution in this post: How to run a cron job inside a docker container? I added > /proc/1/fd/1 2>/proc/1/fd/2 after the command, and now I have the command output inside the logs:
* * * * * root echo hello > /proc/1/fd/1 2>/proc/1/fd/2
Docker ubuntu cron tail logs not visible
Trying to run a docker container that has a cron schedule. However, I cannot make it output logs. I'm using docker-compose.
docker-compose.yml:
---
version: '3'
services:
  cron:
    build:
      context: cron/
    container_name: ubuntu-cron
cron/Dockerfile:
FROM ubuntu:18.10
RUN apt-get update
RUN apt-get update && apt-get install -y cron
ADD hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Run the command on container startup
CMD cron && tail -F /var/log/cron.log
cron/hello-cron:
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
The above runs fine and it's outputting logs inside the container, but they are not streamed to docker. For example, docker logs -f ubuntu-cron returns empty results, but if you log in to the container (docker exec -it -i ubuntu-cron /bin/bash) you have logs:
cat /var/log/cron.log
Hello world
Hello world
Hello world
Now I'm thinking that maybe I don't need to log to a file and could attach this to stdout, but I'm not sure how to do it. This looks similar: How to redirect cron job output to stdout
I tried your setup and the following Dockerfile works:
FROM ubuntu:18.10
RUN apt-get update
RUN apt-get update && apt-get install -y cron
ADD hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0755 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Symlink the cron log to stdout
RUN ln -sf /dev/stdout /var/log/cron.log
# Run the command on container startup
CMD cron && tail -F /var/log/cron.log 2>&1
Also note that I'm bringing the container up with docker-compose up rather than docker. It wouldn't matter in this particular example, but if your actual solution is bigger it might matter.
EDIT: Here's the output when I run docker-compose up:
neekoy@synchronoss:~$ sudo docker-compose up
Starting ubuntu-cron ... done
Attaching to ubuntu-cron
ubuntu-cron | Hello world
ubuntu-cron | Hello world
ubuntu-cron | Hello world
Same in the logs obviously:
neekoy@synchronoss:~$ sudo docker logs daf0ff73a640
Hello world
Hello world
Hello world
Hello world
Hello world
My understanding is that the above is the goal.
Due to some weirdness in the docker layers and inodes, you have to create the file during the CMD:
CMD cron && touch /var/log/cron.log && tail -F /var/log/cron.log
This works both for a file and for stdout:
FROM ubuntu:18.10
RUN apt-get update
RUN apt-get update && apt-get install -y cron
ADD hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file at startup and run the command on container startup
CMD cron && touch /var/log/cron.log && tail -F /var/log/cron.log
The explanation seems to be this one: in the original post, the tail command starts "listening" to a file which is in a layer of the image; then, when cron writes the first line to that file, docker copies the file to a new layer, the container layer (because of the copy-on-write nature of the filesystem, the way that docker works). So when the file gets created in the new layer it gets a different inode, and tail keeps listening to the previous state, so it loses every update to the "new file". Credits BMitch
Try to redirect to /dev/stdout (> /dev/stdout); after this you should see your logs with docker logs.
How can I run script automatically after Docker container startup
I'm using the Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes. Here is my Dockerfile:
#!/bin/sh
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
    && chmod +x \
    plugins/search-guard-5/tools/hash.sh \
    plugins/search-guard-5/tools/sgadmin.sh \
    bin/init_sg.sh \
    && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after the container startup. Here is the problem: unless elasticsearch is running, the script will not initialize any security index. The script's content is:
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
Now, I just run the script manually after the container startup, but since I'm running it on Kubernetes, pods may get killed or fail and get recreated automatically for some reason. In this case, the plugin has to be initialized automatically after the container startup! So how to accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in its Dockerfile. You can replace it with your own script. So, for example, create a new script, mount it, and have it first call /run/entrypoint.sh, then wait for elasticsearch to start before running your init_sg.sh.
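A minimal sketch of such a wrapper (the file name custom-entrypoint.sh, the use of curl, and the 9200 health-check port are assumptions; adjust them to your image):

#!/bin/bash
# custom-entrypoint.sh -- start ES via the original entrypoint, then init Search Guard
/run/entrypoint.sh &                    # original entrypoint, in the background
until curl -s http://localhost:9200 > /dev/null; do
    sleep 5                             # wait until elasticsearch answers
done
bin/init_sg.sh                          # initialize Search Guard once ES is up
wait                                    # keep the container tied to the ES process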
Not sure this will solve your problem, but it's worth checking my repo's Dockerfile. I have created a simple run.sh file, copied it to the docker image, and wrote CMD ["run.sh"] in the Dockerfile. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. You can find another example below:

Dockerfile

FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]

dockerrun.sh

#!/bin/sh
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
nohup stress-ng --vm 4 &
while true; do
    sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/

If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.

#!/bin/bash

# turn on bash's job control
set -m

# Start the primary process and put it in the background
./my_main_process &

# Start the helper process
./my_helper_process

# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns

# now we bring the primary process back into the foreground
# and leave it there
fg %1
I was trying to solve the exact same problem. Here's the approach that worked for me: create a separate shell script that checks for the ES status, and only start the initialization of SG when ES is ready.

Shell script:

#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use a while loop to check if elasticsearch is running
while true
do
    netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
    verifier=$?
    if [ 0 = $verifier ]
    then
        echo "Running search guard plugin initialization"
        /elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
        break
    else
        echo "ES is not running yet"
        sleep 5
    fi
done

Install the script in the Dockerfile. You will need to install the script in the container so it's accessible after it starts:

COPY sginit.sh /
RUN chmod +x /sginit.sh

Update the entrypoint script. You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process:

# Run sginit in background waiting for ES to start
/sginit.sh &

This way sginit.sh will start in the background, and will only initialize SG after ES is started. The reason to start sginit.sh before ES, in the background, is so that it doesn't block ES from starting. The same logic applies if you put it after the start of ES: it will never run unless you put the start of ES in the background.
I would suggest putting the CMD in your Dockerfile to execute the script when the container starts:

FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]

There is another way, but before using it, look at your requirements:

ENTRYPOINT "put your code here" && /bin/bash
# example
ENTRYPOINT service nginx start && service ssh start && /bin/bash

Use && to separate your commands.
You can also use the wait-for-it script. It will wait on the availability of a host and TCP port. It is useful for synchronizing the spin-up of interdependent services and works like a charm with containers. It does not have any external dependencies, so you can just run it in a RUN command without doing anything else.

A Dockerfile example based on this thread:

FROM elasticsearch

# Make elasticsearch write data to a folder that is not declared as a volume
# in elasticsearch's official dockerfile.
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml

# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh

# Copy the files you may need and your insert script

# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
See cron output via docker logs, without using an extra file
I am running "cron" in a docker container. Every day a script is executed. The output of this script I would like to see via "docker logs " The process with PID 0 is the cron daemon in my container. Entrypoint starts cron in foreground: /usr/sbin/crond -f I understand, that I could redirect the script output to a file "path/to/logs" 07 2 * * * /data/docker/backup_webserver/backupscript.sh >> path/to/logs and start the container as following to see the logs "tail -f path/to/logs" But then the file "path/to/logs" would grow during the runtime of the container. Is there a possibility to log from crontab, directly to "docker logs" ?
Change your cron file to the following:

07 2 * * * /data/docker/backup_webserver/backupscript.sh > /dev/stdout

This will make sure the logs go to the container output.
Alpine: no need for redirection when using the default cron utility (busybox).

Dockerfile

FROM alpine:3.7

# Setting up crontab
COPY crontab /tmp/crontab
RUN cat /tmp/crontab > /etc/crontabs/root

CMD ["crond", "-f", "-l", "2"]

crontab

* * * * * echo "Crontab is working - watchdog 1"

Centos: redirection to /proc/1/fd/1 inside the crontab declaration line.

Dockerfile

FROM centos:7

RUN yum -y install crontabs
ADD crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN crontab /etc/cron.d/crontab

CMD ["crond", "-n"]

crontab

* * * * * echo "Crontab is working - watchdog 1" > /proc/1/fd/1
@mcfedr is correct, but it took me a while to understand it, with it being a one-liner with variables and some extra code related to setting up cron. This may be a little bit easier to read; it helped me to write it out explicitly.

# Create custom stdout and stderr named pipes
mkfifo /tmp/stdout /tmp/stderr
chmod 0666 /tmp/stdout /tmp/stderr

# Have the main Docker process tail the pipes to produce the stdout and stderr
# that Docker will actually show in docker logs.
tail -f /tmp/stdout &
tail -f /tmp/stderr >&2 &

# Run cron
cron -f

Then, write to those pipes in your cron:

* * * * * /run.sh > /tmp/stdout 2> /tmp/stderr
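To wire this into an image, a minimal sketch (the file names start.sh and run.sh are hypothetical: start.sh is the script above, run.sh is your actual job, and crontab is the one-line schedule shown):

FROM debian:buster-slim
RUN apt-get update && apt-get install -y cron
# register the schedule that writes to the pipes
COPY crontab /tmp/crontab
RUN crontab /tmp/crontab
# the job itself and the pipe-creating wrapper
COPY run.sh start.sh /
RUN chmod +x /run.sh /start.sh
CMD ["/start.sh"]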
A FIFO is the way to go; it is also useful because it allows cron tasks that are not running as root to write to the output. I am using a CMD along these lines:

ENV LOG_STREAM="/tmp/stdout"
CMD ["bash", "-o", "pipefail", "-c", "mkfifo $$LOG_STREAM && chmod 777 $$LOG_STREAM && echo -e \"$$(env | sed 's/=\\(.*\\)/=\"\\1\"/')\n$$(cat /etc/cron.d/tasks)\" > /etc/cron.d/tasks && cron -f | tail -f $$LOG_STREAM"]

with the tasks in /etc/cron.d/tasks:

* * * * */10 www-data echo hello >$LOG_STREAM 2>$LOG_STREAM

I also prepend the environment to the tasks at launch so it's visible to them, as cron doesn't pass it through by itself. The sed is needed because the crontab format requires env vars to be quoted; at least, it requires empty vars to be quoted and fails to run tasks if you have an empty var without quotes.
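Written out as a standalone script rather than a CMD one-liner, the same idea looks roughly like this (a sketch under the same assumptions: LOG_STREAM set in the environment and the tasks in /etc/cron.d/tasks):

#!/bin/bash
set -o pipefail

# create the pipe and let non-root cron tasks write to it
mkfifo "$LOG_STREAM"
chmod 777 "$LOG_STREAM"

# prepend the current environment (values quoted, as crontab requires)
# to the task file so the cron jobs can see it
{ env | sed 's/=\(.*\)/="\1"/'; cat /etc/cron.d/tasks; } > /tmp/tasks
mv /tmp/tasks /etc/cron.d/tasks

# run cron in the foreground and surface the pipe as the container's stdout
cron -f | tail -f "$LOG_STREAM"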
You could just use a FIFO:

mkfifo path/to/logs

When processes exchange data via the FIFO, the kernel passes all data internally without writing it to the filesystem. Thus, the FIFO special file has no contents on the filesystem; the filesystem entry merely serves as a reference point so that processes can access the pipe using a name in the filesystem. (man fifo)
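A quick shell demonstration of that behavior (the path /tmp/logs is arbitrary):

mkfifo /tmp/logs

# reader: blocks until a writer appears, then streams whatever is written
cat /tmp/logs &

# writer: the data goes through the kernel straight to the reader
echo "hello from cron" > /tmp/logs

# the FIFO itself stays empty on disk
ls -l /tmp/logs    # size is 0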
For Debian-based images, the following Dockerfile works for me (note that /etc/crontab has a slightly different format, with a user field, compared to user crontab files):

FROM debian:buster-slim

RUN apt-get update \
 && apt-get install -y cron

RUN echo "* * * * * root echo 'Crontab is working - watchdog 1' > /proc/1/fd/1 2>/proc/1/fd/2" > /etc/crontab

CMD ["cron", "-f"]
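To try it out (the image tag cron-debian is an arbitrary choice):

docker build -t cron-debian .
docker run -d --name cron-debian cron-debian
# after a minute or so:
docker logs -f cron-debian
# Crontab is working - watchdog 1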