I want to run some cron jobs in a Docker container and send the output to stdout. I read this post: How to run a cron job inside a docker container?
To try this out with a simple example, I created a demo crontab:
my-crontab:
* * * * * date > /dev/stdout 2> /dev/stderr
# empty line
Then I run an interactive shell inside a Docker container based on the image my scripts will need:
docker run -it --entrypoint bash python:3.10.3-bullseye
/# apt update
/# apt install cron
/# crontab < my-crontab
/# cron -f
If I wait 60 seconds, I expect to see some output to the console attached to the container once every minute. But there is no output.
Finally, I found the output in /var/spool/mail/mail. Here is one message:
From root@5e3c82cb3651 Tue May 10 20:04:02 2022
Return-path: <root@5e3c82cb3651>
Envelope-to: root@5e3c82cb3651
Delivery-date: Tue, 10 May 2022 20:04:02 +0000
Received: from root by 5e3c82cb3651 with local (Exim 4.94.2)
        (envelope-from <root@5e3c82cb3651>)
        id 1noW5S-0000SA-0T
        for root@5e3c82cb3651; Tue, 10 May 2022 20:04:02 +0000
From: root@5e3c82cb3651 (Cron Daemon)
To: root@5e3c82cb3651
Subject: Cron <root@5e3c82cb3651> date > /dev/stdout 2> /dev/stderr
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Cron-Env: <SHELL=/bin/sh>
X-Cron-Env: <HOME=/root>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=root>
Message-Id: <E1noW5S-0000SA-0T@5e3c82cb3651>
Date: Tue, 10 May 2022 20:04:02 +0000

Tue May 10 20:04:01 UTC 2022
Then it looks like /bin/sh is completely ignoring the shell redirection in the crontab.
@DavidMaze answered this in his comment (above; I can't find a link to it). Redirecting to /proc/1/fd/1 (and /proc/1/fd/2 for stderr) totally works. Thank you, David.
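In other words, the same demo job does show up in the container's output once the redirection targets are changed to PID 1's file descriptors:
* * * * * date > /proc/1/fd/1 2> /proc/1/fd/2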
Nevertheless, that's counterintuitive at first. The filesystem nodes /dev/stdout and /dev/stderr do already exist as symlinks, independent of cron, but they point to /proc/self/fd/1 and /proc/self/fd/2, not to /proc/1/fd/1 and /proc/1/fd/2. Inside a cron job, /proc/self/fd/1 resolves to the pipe cron attaches to the job (the one whose contents get mailed), not to the container's stdout, which is why cmd > /dev/stdout and cmd > /proc/1/fd/1 are not interchangeable in a crontab.
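A quick way to see this from inside the container (output details vary, but the symlink targets are the point):
ls -l /dev/stdout /dev/stderr   # symlinks to /proc/self/fd/1 and /proc/self/fd/2, not /proc/1/fd/1 and /proc/1/fd/2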
cron was written quite a while ago, and unsurprisingly it is not what you would call Docker-friendly. It expects jobs to produce no output; if they do, that is treated as something worth reporting, and cron tries to email the responsible person about it. There are tricks to make cron jobs' output show up in docker logs, but why not choose a Docker-friendly cron implementation instead?
One of them is supercronic:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
    && apk add --no-cache supercronic shadow \
    && useradd -m app
USER app
COPY crontab .
crontab:
* * * * * date
A gist with a bit more info.
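Assuming docker-compose.yml, Dockerfile, and crontab sit next to each other as above, a quick smoke test might be:
docker compose up --build
# supercronic logs each run, and the date output should appear in the service's output once a minute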
Another good one is yacron, but it uses YAML.
ofelia can be used as well, but it seems to focus on running tasks in separate containers. That's probably not a downside, but I'm not sure why I'd want to do that.
But if you insist on traditional ones, you can find a couple in my other answer.
Problem Description
I am unable to see any output from the cron job when I run docker-compose logs -f cron after running docker-compose up.
When I attached to the container using VSCode, I navigated to /var/log/cron.log, ran cat on it, and saw no output. Curiously, when I run crontab -l I see * * * * * /bin/sh get_date.sh as the output.
Description of Attempted Solution
Here is how I organized the project (it is over-engineered at the moment, for the sake of later extensibility):
├── config
│ └── crontab
├── docker-compose.yml
├── Dockerfile
├── README.md
└── scripts
└── get_date.sh
Here are the details on the above; the contents are simple. I am also trying to use a lean python:3.8-slim-buster Docker image so I can run bash or Python scripts (Python not attempted yet):
crontab
* * * * * /bin/sh get_date.sh
get_date.sh
#!/bin/sh
echo "Current date and time is " "$(date +%D-%H:%M)"
docker-compose.yml
version: '3.8'
services:
  cron:
    build:
      context: .
      dockerfile: ./Dockerfile
Dockerfile
FROM python:3.8-slim-buster
# Install cron
RUN apt-get update \
    && apt-get install -y cron
# Copying script file into the container.
COPY scripts/get_date.sh .
# Giving executable permission to the script file.
RUN chmod +x get_date.sh
# Adding crontab to the appropriate location
ADD config/crontab /etc/cron.d/my-cron-file
# Giving executable permission to crontab file
RUN chmod 0644 /etc/cron.d/my-cron-file
# Running crontab
RUN crontab /etc/cron.d/my-cron-file
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Creating entry point for cron
CMD ["cron", "tail", "-f", "/var/log/cron.log"]
Things Attempted
I am new to getting cron working in a container environment. I am not getting any error messages, so I am not sure how to debug this issue beyond describing the behavior.
I changed the content of crontab from * * * * * root bash get_date.sh to the above. I also searched Stack Overflow and found a similar issue here, but as far as I could tell no clear solution was proposed.
Thanks kindly in advance.
References
Stackoverflow discussion on running cron inside of container
How to run cron inside of containers
You have several issues that are preventing this from working:
Your attempt to run tail is a no-op: with your CMD as written you're simply running the command cron tail -f /var/log/cron.log. In other words, you're running cron and providing tail -f /var/log/cron.log as arguments. If you want to run cron followed by the tail command, you would need to write it like this:
CMD ["sh", "-c", "cron && tail -f /var/log/cron.log"]
While the above will both start cron and run the tail command, you still won't see any log output...because Debian cron doesn't log to a file; it logs to syslog. You won't see any output in /var/log/cron.log unless you have a syslog daemon installed, configured, and running.
I would suggest this as an alternative:
Fix your syntax in config/crontab; for files installed in /etc/cron.d, you need to provide the username:
* * * * * root /bin/sh /usr/local/bin/get_date.sh
I'm also being explicit about the path here, rather than assuming our cron job and the COPY command have the same working directory.
There's another problem here: this script outputs to stdout, but that won't go anywhere useful (cron generally takes output from your cron jobs and then emails it to root). We can explicitly send the output to syslog instead:
* * * * * root /bin/sh /usr/local/bin/get_date.sh | logger
We don't need to make get_date.sh executable, since we're explicitly running it with the sh command.
We'll use busybox for a syslog daemon that logs to stdout.
That all gets us:
FROM python:3.8-slim-buster
# Install cron and busybox
RUN apt-get update \
    && apt-get install -y \
        cron \
        busybox
# Copying script file into the container.
COPY scripts/get_date.sh /usr/local/bin/get_date.sh
# Adding crontab to the appropriate location
COPY config/crontab /etc/cron.d/get_date
# Creating entry point for cron
CMD sh -c 'cron && busybox syslogd -n -O-'
If we build an image from this, start a container, and leave it running for a while, we see as output:
Sep 22 00:17:52 701eb0bd249f syslog.info syslogd started: BusyBox v1.30.1
Sep 22 00:18:01 701eb0bd249f authpriv.err CRON[7]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:18:01 701eb0bd249f cron.info CRON[8]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:18:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:18
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session closed for user root
Sep 22 00:19:01 701eb0bd249f authpriv.err CRON[12]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:19:01 701eb0bd249f cron.info CRON[13]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:19:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:19
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session closed for user root
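Since the original setup uses docker-compose, the check from the question should now show output as well, along these lines:
docker-compose up --build -d
docker-compose logs -f cron
# the syslog lines above (including the get_date.sh output via logger) appear here once a minute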
I need to automate checking for active containers with docker ps and updating containers with docker pull, so I created this script file:
if docker ps | grep "fairplay"; then
    echo "doker fairplay ok" >> /home/ubuntu/at2.log
else
    echo "doker fairplay caido" >> /home/ubuntu/at2.log
    errdock=1
fi
The script works without a problem when I run it manually in the terminal, but when I try it with cron it just doesn't work.
Crontab:
* * * * * root sh /home/ubuntu/at2.sh
The log when I run it manually:
Thu Mar 25 13:33:43 -03 2021
doker fairplay ok
doker widevine ok
Thu Mar 25 13:33:44 -03 2021
The log when I run it with cron:
Thu Mar 25 13:34:01 -03 2021
doker fairplay caido
doker widevine caido
Thu Mar 25 13:34:01 -03 2021
I don't want to run anything inside the container; I need to run the command from cron on the host, so the following questions don't help: question 1, question 2.
Your crontab syntax is not correct. I've tried your exact .sh file and I got no errors.
This is the correct one:
* * * * * sh /home/ubuntu/at2.sh
I'm not sure why you've added the root user to the crontab entry; if you run root sh /home/ubuntu/at2.sh, you'll get Command 'root' not found. (The user field only belongs in system crontabs such as /etc/crontab or /etc/cron.d, not in a user crontab edited with crontab -e.)
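While debugging, it can also help to capture whatever the job prints under cron, including errors (the log path below is just an example):
* * * * * sh /home/ubuntu/at2.sh >> /home/ubuntu/at2_cron.log 2>&1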
I can also recommend adding the date so that you know when it was up or down:
if docker ps | grep "fairplay"; then
    echo "`date +"%D %H:%M:%S"` docker fairplay ok" >> /home/ubuntu/at2.log
else
    echo "`date +"%D %H:%M:%S"` docker fairplay caido" >> /home/ubuntu/at2.log
    errdock=1
fi
So I am pretty new to creating containers, and I have this simple Dockerfile where I would like to run a simple Python script every minute:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y cron
COPY my_python /bin/my_python
COPY root /var/spool/cron/crontabs/root
RUN chmod +x /bin/my_python
CMD cron -l 2 -f
where my_python:
print("hi world!!")
and root:
* * * * * python3 /bin/my_python
then I just create the image and the container:
docker image build -t python-test .
docker container run -it --name python-test python-test
I expected to see a hi world print once a minute; however, when running the container (after the image build), no output appears.
What am I doing wrong?
First, I believe you want -L 2 rather than -l 2 in your cron command line; see the man page for details.
The cron daemon logs to syslog, so if something isn't working as intended, it's a good idea to arrange to receive those messages. The busybox tool provides a simple syslog daemon that can log to an in-memory buffer, plus a tool for reading those logs, so I modified your Dockerfile to look like this:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y cron busybox
COPY my_python /bin/my_python
COPY root /var/spool/cron/crontabs/root
RUN chmod +x /bin/my_python
CMD busybox syslogd -C; cron -L 2 -f
After starting this, I docker exec'd into the container and ran busybox logread and found:
Jan 24 16:50:45 7f516db86417 cron.info cron[4]: (CRON) INFO (pidfile fd = 3)
Jan 24 16:50:45 7f516db86417 cron.info cron[4]: (root) INSECURE MODE (mode 0600 expected) (crontabs/root)
Jan 24 16:50:45 7f516db86417 cron.info cron[4]: (CRON) INFO (Running @reboot jobs)
So there's your problem: the permissions on the root crontab are incorrect. There are two ways to fix this problem:
We could explicitly chmod the file when we copy it into place (a sketch of this follows the list), or
We can use the crontab command to install the file, which takes care of that for us
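For completeness, option 1 would look roughly like this in the Dockerfile (0600 is the mode the log message above says cron expects; ownership requirements can differ between cron implementations):
COPY root /var/spool/cron/crontabs/root
RUN chmod 0600 /var/spool/cron/crontabs/root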
I like option 2 because it means we don't need to know the specifics of what cron expects in terms of permissions. That gets us:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y cron busybox
COPY my_python /bin/my_python
COPY root /tmp/root.crontab
RUN crontab /tmp/root.crontab
RUN chmod +x /bin/my_python
CMD busybox syslogd -C; cron -L 2 -f
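To reproduce that check with the updated image, something like the following should work (the image and container names are just the ones from the question):
docker build -t python-test .
docker run -d --name python-test python-test
docker exec python-test busybox logread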
With that change, we can confirm that the cron job is now running as expected:
Jan 24 16:59:50 8aa688ad31cc syslog.info syslogd started: BusyBox v1.30.1
Jan 24 16:59:50 8aa688ad31cc cron.info cron[4]: (CRON) INFO (pidfile fd = 3)
Jan 24 16:59:50 8aa688ad31cc cron.info cron[4]: (CRON) INFO (Running @reboot jobs)
Jan 24 17:00:01 8aa688ad31cc authpriv.err CRON[7]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Jan 24 17:00:01 8aa688ad31cc authpriv.info CRON[7]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 24 17:00:02 8aa688ad31cc cron.info CRON[7]: (root) END (python3 /bin/my_python)
Jan 24 17:00:02 8aa688ad31cc authpriv.info CRON[7]: pam_unix(cron:session): session closed for user root
But...there's still no output from the container! If you read through that man page, you'll find this:
cron then wakes up every minute, examining all stored crontabs,
checking each command to see if it should be run in the current
minute. When executing commands, any output is mailed to the owner of
the crontab (or to the user named in the MAILTO environment
variable in the crontab, if such exists)...
In other words, cron collects the output from programs and attempts
to mail to the user who owns the cron job. If you want to see the
output from the cron job on the console, you will need to explicitly
redirect stdout, like this:
* * * * * python3 /bin/my_python > /dev/console
With this change in place, running the image results in the message...
hi world!
...printing to the console once a minute.
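As an aside, and tying back to the first question above, redirecting to PID 1's file descriptors instead of /dev/console sends the output to docker logs rather than the console:
* * * * * python3 /bin/my_python > /proc/1/fd/1 2> /proc/1/fd/2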
I have a logrotate config:
/opt/docker_folders/logs/nginx/*.log {
    dateext
    daily
    rotate 31
    nocreate
    missingok
    notifempty
    nocompress
    postrotate
        /usr/bin/docker exec -it nginx-container-name nginx -s reopen > /dev/null 2>/dev/null
    endscript
    su docker_nginx root
}
folder permissions:
drwxrwxr-x. 2 docker_nginx root 4096 Oct 13 10:22 nginx
nginx is a folder on the host that is mounted into the docker container.
docker_nginx is a user with the same uid as the nginx user inside the container (uid: 101).
If I run these commands manually (as root):
# /sbin/logrotate -v /etc/logrotate.d/nginx_logrotate_config
# /sbin/logrotate -d -v /etc/logrotate.d/nginx_logrotate_config
# /sbin/logrotate -d -f -v /etc/logrotate.d/nginx_logrotate_config
everything works like a charm.
Problem:
But when the logs are rotated automatically by cron, I get the error
logrotate: ALERT exited abnormally with [1]
in /var/log/messages.
As a result the logs rotate as usual, but nginx doesn't create the new files (access.log, etc.).
It looks like the postrotate nginx -s reopen script is failing.
Linux version is CentOS 7.
SELinux disabled.
Question:
At the very least, how can I find out what happens when logrotate runs from cron?
And what might the problem be?
PS: I know I could also use docker restart, but I don't want to do that because of the brief service interruption.
PS2: I also know there is a nocreate parameter in the config. That is intentional: I want nginx to create the new log files itself (to avoid wrong permissions on the new files). In any case, if nginx -s reopen is really failing, there is a chance nginx would not reopen the newly created files.
EDIT1:
I edited the /etc/cron.daily/logrotate script and captured its output.
There is only one line about the problem:
error: error running non-shared postrotate script for /opt/docker_folders/logs/nginx/access.log of '/opt/docker_folders/logs/nginx/*.log '
So I still don't understand what causes the problem... When I run the script manually, everything works fine.
Okay, answering my own question.
The -it parameters can't be used in cron tasks (and this logrotate job is effectively a cron task), because cron has no interactive session (TTY).
I figured it out by running /usr/bin/docker exec -it nginx-container-name nginx -s reopen > /dev/null 2>/dev/null as a cron task and getting the error message "The input device is not a TTY".
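You can reproduce the difference without waiting for cron by redirecting stdin away from the terminal, which roughly simulates the environment cron provides:
/usr/bin/docker exec -it nginx-container-name nginx -s reopen < /dev/null   # fails: "the input device is not a TTY"
/usr/bin/docker exec nginx-container-name nginx -s reopen < /dev/null       # works without a TTY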
So my new logrotate config looks like this:
/opt/docker_folders/logs/nginx/*.log {
    dateext
    daily
    rotate 31
    nocreate
    missingok
    notifempty
    nocompress
    postrotate
        /usr/bin/docker exec nginx-container-name /bin/sh -c '/usr/sbin/nginx -s reopen > /dev/null 2>/dev/null'
    endscript
    su docker_nginx root
}
And it finally works.
I have to understand the parameter before using it
I have to understand the parameter before using it
I have to understand the parameter before using it
Relevant parts of Dockerfile:
RUN apt-get install -y cron
RUN service cron start
ADD cronjob /etc/cron.d/gptswmm-cron
RUN chmod 0644 /etc/cron.d/gptswmm-cron
RUN touch /var/log/cron.log
RUN crontab /etc/cron.d/gptswmm-cron
RUN cron
I check the ps -ef output and cron isn't there. Whatever, I can spin it up manually after the fact with the cron command and it shows up (just to check all my boxes, I also run service cron start).
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:54 pts/0 00:00:00 /bin/bash
root 41 0 0 15:59 ? 00:00:00 cron
root 65 0 0 16:02 ? 00:00:00 ps -ef
I run crontab -l and get the same as in my cron file (which does have the empty line at the end too):
MAILTO=""
* * * * * root python /var/test/testcron.py >> /var/log/cron.log 2>&1
The Python file simply creates (or appends to, if it exists) a test file in the same directory as the script, repeating the same word. As simple a test as you can get. (I originally had the job echo to the log file, but switched to this because I'm more comfortable with what's going on in a Python script than in bash.) The Python file is owned by root with all permissions for the owner.
Yet when I check where the text file should be, nothing. When I check /var/log/cron.log, it's empty.
When I manually call python /var/test/testcron.py it works and creates the output file.
So I get some system logging going, redoing the Dockerfile with this at the end:
RUN apt-get install -y rsyslog
Rebuild and spin up the container. Start rsyslog first with rsyslogd, then cron with cron, double-checking with service cron start.
I check /var/log/syslog and cron seems to be getting called; these two basic lines repeat every minute:
... CRON[48]: (root) CMD (python /var/test/testcron.py >> /var/log/cron.log 2>&1^M)
... CRON[47]: (root) CMD (root python /var/test/testcron.py >> /var/log/cron.log 2>&1^M)
I'm at a loss here. Been googling and searching for various solutions, but nothing so far has worked.
Looks like I had to remove the 2>&1 from the cron job:
* * * * * root python /var/test/testcron.py >> /var/log/cron.log
I had copied most of the procedure from https://www.ekito.fr/people/run-a-cron-job-with-docker/ and assumed maybe there's a wire getting crossed since his tutorial is trying to output to console.
All credit to @brthornbury in a comment on the original question. Posting as an answer for visibility for anyone else who stumbles across this.