Verify crontab works in a container using the slim-buster docker image?

Problem Description
I am unable to see any output from the cron job when I run docker-compose logs -f cron after running docker-compose up.
When I attached to the container using VSCode, I navigated to /var/log/cron.log, ran cat on it, and saw no output. Curiously, when I run crontab -l I do see * * * * * /bin/sh get_date.sh as the output.
Description of Attempted Solution
Here is how I organized the project (it is over-engineered at the moment, for the sake of extensibility later):
├── config
│   └── crontab
├── docker-compose.yml
├── Dockerfile
├── README.md
└── scripts
    └── get_date.sh
Here are the details on the above; the contents are simple. I am deliberately using the lean python:3.8-slim-buster docker image so I can later run bash or python scripts (not attempted yet):
crontab
* * * * * /bin/sh get_date.sh
get_date.sh
#!/bin/sh
echo "Current date and time is " "$(date +%D-%H:%M)"
docker-compose.yml
version: '3.8'
services:
  cron:
    build:
      context: .
      dockerfile: ./Dockerfile
Dockerfile
FROM python:3.8-slim-buster
# Install cron
RUN apt-get update \
&& apt-get install -y cron
# Copying script file into the container.
COPY scripts/get_date.sh .
# Giving executable permission to the script file.
RUN chmod +x get_date.sh
# Adding crontab to the appropriate location
ADD config/crontab /etc/cron.d/my-cron-file
# Setting read permissions on the crontab file
RUN chmod 0644 /etc/cron.d/my-cron-file
# Running crontab
RUN crontab /etc/cron.d/my-cron-file
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# Creating entry point for cron
CMD ["cron", "tail", "-f", "/var/log/cron.log"]
Things Attempted
I am new to getting cron working in a container environment. I am not getting any error messages, so I am not sure how to debug this issue other than by describing the behavior.
I have changed the content of crontab from * * * * * root bash get_date.sh to the above. I also checked stackoverflow and found a similar issue here, but no clear solution was proposed as far as I could tell.
Thanks kindly in advance.
References
Stackoverflow discussion on running cron inside of container
How to run cron inside of containers

You have several issues that are preventing this from working:
Your attempt to run tail is a no-op: with your CMD as written you're simply running the command cron tail -f /var/log/cron.log. In other words, you're running cron and providing tail -f /var/log/cron.log as arguments. If you want to run cron followed by the tail command, you would need to write it like this:
CMD ["sh", "-c", "cron && tail -f /var/log/cron.log"]
While the above will both start cron and run the tail command, you still won't see any log output...because Debian cron doesn't log to a file; it logs to syslog. You won't see any output in /var/log/cron.log unless you have a syslog daemon installed, configured, and running.
I would suggest this as an alternative:
Fix your syntax in config/crontab; for files installed in /etc/cron.d, you need to provide the username:
* * * * * root /bin/sh /usr/local/bin/get_date.sh
I'm also being explicit about the path here, rather than assuming our cron job and the COPY command have the same working directory.
There's another problem here: this script outputs to stdout, but that won't go anywhere useful (cron generally takes output from your cron jobs and then emails it to root). We can explicitly send the output to syslog instead:
* * * * * root /bin/sh /usr/local/bin/get_date.sh | logger
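A small extension, not part of the original line: the pipe only captures the script's stdout. If you want stderr in syslog as well, rather than having cron try to mail it, redirect it too:
* * * * * root /bin/sh /usr/local/bin/get_date.sh 2>&1 | logger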
We don't need to make get_date.sh executable, since we're explicitly running it with the sh command.
We'll use busybox for a syslog daemon that logs to stdout.
That all gets us:
FROM python:3.8-slim-buster
# Install cron and busybox
RUN apt-get update \
&& apt-get install -y \
cron \
busybox
# Copying script file into the container.
COPY scripts/get_date.sh /usr/local/bin/get_date.sh
# Adding crontab to the appropriate location
COPY config/crontab /etc/cron.d/get_date
# Creating entry point for cron
CMD sh -c 'cron && busybox syslogd -n -O-'
If we build an image from this, start a container, and leave it running for a while, we see as output:
Sep 22 00:17:52 701eb0bd249f syslog.info syslogd started: BusyBox v1.30.1
Sep 22 00:18:01 701eb0bd249f authpriv.err CRON[7]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:18:01 701eb0bd249f cron.info CRON[8]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:18:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:18
Sep 22 00:18:01 701eb0bd249f authpriv.info CRON[7]: pam_unix(cron:session): session closed for user root
Sep 22 00:19:01 701eb0bd249f authpriv.err CRON[12]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session opened for user root by (uid=0)
Sep 22 00:19:01 701eb0bd249f cron.info CRON[13]: (root) CMD (/bin/sh /usr/local/bin/get_date.sh | logger)
Sep 22 00:19:01 701eb0bd249f user.notice root: Current date and time is 09/22/22-00:19
Sep 22 00:19:01 701eb0bd249f authpriv.info CRON[12]: pam_unix(cron:session): session closed for user root
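If you're using the docker-compose setup from the question, you can rebuild and watch for this output yourself (assuming the service is still named cron, as in the docker-compose.yml above):
docker-compose build
docker-compose up -d
docker-compose logs -f cron
Log lines like the ones above should start appearing within a minute or so.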

Related

Trying to run a simple docker container with a crontab and a simple python script

So I am pretty new to creating containers, and I have a simple Dockerfile where I would like to run a simple python script every minute:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y cron
COPY my_python /bin/my_python
COPY root /var/spool/cron/crontabs/root
RUN chmod +x /bin/my_python
CMD cron -l 2 -f
where my_python:
print("hi world!!")
and root:
* * * * * python3 /bin/my_python
then I just create the image and the container:
docker image build -t python-test .
docker container run -it --name python-test python-test
I expected to see a "hi world" print every minute; however, when running the container (after the image build) no logs seem to appear.
What am I doing wrong?
First, I believe you want -L 2 rather than -l 2 in your cron command line; see the man page for details.
The cron daemon logs to syslog, so if something isn't working as intended, it's a good idea to arrange to receive those messages. The busybox tool provides a simple syslog daemon that can log to an in-memory buffer and a tool for reading those logs, so I modified your Dockerfile to look like this:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y cron busybox
COPY my_python /bin/my_python
COPY root /var/spool/cron/crontabs/root
RUN chmod +x /bin/my_python
CMD busybox syslogd -C; cron -L 2 -f
After starting this, I docker exec'd into the container and ran busybox logread and found:
Jan 24 16:50:45 7f516db86417 cron.info cron[4]: (CRON) INFO (pidfile fd = 3)
Jan 24 16:50:45 7f516db86417 cron.info cron[4]: (root) INSECURE MODE (mode 0600 expected) (crontabs/root)
Jan 24 16:50:45 7f516db86417 cron.info cron[4]: (CRON) INFO (Running #reboot jobs)
So there's your problem: the permissions on the root crontab are incorrect. There are two ways to fix this problem:
We could explicitly chmod the file when we copy it into place, or
We can use the crontab command to install the file, which takes care of that for us
I like option 2 because it means we don't need to know the specifics of what cron expects in terms of permissions. That gets us:
FROM python:3.8-buster
RUN apt-get update && apt-get install -y cron busybox
COPY my_python /bin/my_python
COPY root /tmp/root.crontab
RUN crontab /tmp/root.crontab
RUN chmod +x /bin/my_python
CMD busybox syslogd -C; cron -L 2 -f
With that change, we can confirm that the cron job is now running as expected:
Jan 24 16:59:50 8aa688ad31cc syslog.info syslogd started: BusyBox v1.30.1
Jan 24 16:59:50 8aa688ad31cc cron.info cron[4]: (CRON) INFO (pidfile fd = 3)
Jan 24 16:59:50 8aa688ad31cc cron.info cron[4]: (CRON) INFO (Running #reboot jobs)
Jan 24 17:00:01 8aa688ad31cc authpriv.err CRON[7]: pam_env(cron:session): Unable to open env file: /etc/default/locale: No such file or directory
Jan 24 17:00:01 8aa688ad31cc authpriv.info CRON[7]: pam_unix(cron:session): session opened for user root by (uid=0)
Jan 24 17:00:02 8aa688ad31cc cron.info CRON[7]: (root) END (python3 /bin/my_python)
Jan 24 17:00:02 8aa688ad31cc authpriv.info CRON[7]: pam_unix(cron:session): session closed for user root
But...there's still no output from the container! If you read through that man page, you'll find this:
cron then wakes up every minute, examining all stored crontabs,
checking each command to see if it should be run in the current
minute. When executing commands, any output is mailed to the owner of
the crontab (or to the user named in the MAILTO environment
variable in the crontab, if such exists)...
In other words, cron collects the output from programs and attempts
to mail to the user who owns the cron job. If you want to see the
output from the cron job on the console, you will need to explicitly
redirect stdout, like this:
* * * * * python3 /bin/my_python > /dev/console
With this change in place, running the image results in the message...
hi world!!
...printing to the console once a minute.
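For completeness, a quick way to watch the syslog messages live (assuming the container name python-test used above):
docker exec -it python-test busybox logread -f
The -f flag makes logread follow the in-memory buffer, much like tail -f.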

Cron doesn't seem to run on Ubuntu Docker container

Relevant parts of Dockerfile:
RUN apt-get install -y cron
RUN service cron start
ADD cronjob /etc/cron.d/gptswmm-cron
RUN chmod 0644 /etc/cron.d/gptswmm-cron
RUN touch /var/log/cron.log
RUN crontab /etc/cron.d/gptswmm-cron
RUN cron
I check the ps -ef output and cron isn't there. No matter, I can spin it up manually after the fact with the cron command and it shows up (just to check all my boxes, I also do service cron start).
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 15:54 pts/0 00:00:00 /bin/bash
root 41 0 0 15:59 ? 00:00:00 cron
root 65 0 0 16:02 ? 00:00:00 ps -ef
I do crontab -l and get the same as in my cronfile (which does have the empty line at the end too):
MAILTO=""
* * * * * root python /var/test/testcron.py >> /var/log/cron.log 2>&1
The Python file simply creates (or appends to, if it exists) a test file in the same directory as the script, repeating the same word. As simple a test as you can get. (I originally had it echo-ing to the log file, but did this as I'm more comfortable with what's going on in a python script than in bash.) The Python file is owned by root with all permissions for the owner.
Yet when I check where the text file should be, nothing. When I check /var/log/cron.log, it's empty.
When I manually call python /var/test/testcron.py it works and creates the output file.
So I get some system logging going, redoing the Dockerfile with this at the end:
RUN apt-get install -y rsyslog
Rebuild and spin up the container. I start rsyslog first with rsyslogd, then cron with cron, double-checking with service cron start.
Checking /var/log/syslog, cron does seem to be getting called; these two lines repeat every minute:
... CRON[48]: (root) CMD (python /var/test/testcron.py >> /var/log/cron.log 2>&1^M)
... CRON[47]: (root) CMD (root python /var/test/testcron.py >> /var/log/cron.log 2>&1^M)
I'm at a loss here. Been googling and searching for various solutions, but nothing so far has worked.
Looks like I had to remove the 2>&1 from the cron job:
* * * * * root python /var/test/testcron.py >> /var/log/cron.log
I had copied most of the procedure from https://www.ekito.fr/people/run-a-cron-job-with-docker/ and assumed maybe a wire was getting crossed, since his tutorial is trying to output to the console.
All credit to @brthornbury in a comment on the original question. Posting as an answer for visibility for anyone else who stumbles across this.
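Worth noting: the ^M at the end of the logged commands above is a literal carriage return, which suggests the cron file was saved with Windows line endings. A sketch of a Dockerfile line that would strip them during the build (using the file name from the question; GNU sed is assumed to be available in the base image):
RUN sed -i 's/\r$//' /etc/cron.d/gptswmm-cron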

Docker does not run cron job files with external origin (host - windows)

I use supervisor to run cron and nginx; the problem is that when I try to COPY or VOLUME mount my cron files, it does not run the cron files in /etc/cron.d.
But when I docker exec -it <container_id> bash into the container and create the exact same cron file from inside, it is immediately recognized and runs as it should.
Dockerfile :
FROM phusion/baseimage:latest
ENV TERM xterm
ENV HOME /root
RUN apt-get update && apt-get install -y \
nginx \
supervisor \
curl \
nano \
net-tools
RUN rm -rf /etc/nginx/*
COPY nginx_conf /etc/nginx
COPY supervisor_conf /etc/supervisor/
RUN mkdir -p /var/log/supervisor
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
CMD /usr/bin/supervisord
The cron file itself:
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1
cron and nginx run through supervisor
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx -g "daemon off;"
autostart = true
[program:cron]
command = /usr/sbin/cron -f
autostart = true
The logs inside /var/log/supervisor/ relating to cron for stdout and stderr are empty.
I also tried stripping out supervisor and running cron on its own through phusion and CMD cron -f, but got the same issue: it does not work when the source is external (COPY or VOLUME) and magically works when created inside the container.
I initially believed it to be a permissions issue and tried chmod 644 (as this was the permission a file created in the container had) on all files that were the result of the COPY:
RUN chmod 644 /etc/cron.d/
After which tried every possible combination of permissions with rwx to no avail.
Also, I tried to append the cronjob line to /etc/crontab, but it is not recognized by crontab -l:
COPY crontab /tmp/crontab
RUN cat crontab >> /etc/crontab
It would be really handy if this just worked via COPY or VOLUME, as it is a hassle to create the file manually in the container every time.
Any help would be greatly appreciated!
Edit 1 :
Some additional information about the file permissions after COPY or VOLUME.
When I perform
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
Inside the container running ls -l inside /etc/cron.d/ shows
-rw-r--r-- 1 root root 118 Jul 20 11:03 wwwcron-cron-docker_test
When I instead mount the folder as a VOLUME through my docker-compose:
volumes:
  - ./server/crontabs:/etc/cron.d
ls -l shows
-rwxrwxrwx 1 1000 staff 118 Jul 20 11:03 wwwcron-cron-docker_test
In addition, if I manually create the cron file inside the container, it looks like this, and this works:
-rw-r--r-- 1 root root 118 Jul 22 15:50 wwwcron-cron-docker_test_inside_docker
Clearly the permissions and ownership are very different between COPY and VOLUME. But a COPY with the exact same permissions still does not work, while the same file created inside the container does.
Thanks to @BMitch I was able to find the issue, which was related to line endings: since my host machine was Windows and the cron files originated there, they had Windows line endings, so cron did not pick them up automatically.
I added this line to my Dockerfile and it works like a charm
RUN find /etc/cron.d/ -type f -print0 | xargs -0 dos2unix
Iterating on that, the file is indeed 1 byte smaller after dos2unix runs, so you can verify that the conversion actually occurred:
-rw-r--r-- 1 root root 117 Jul 25 08:33 wwwcron-cron-docker_test
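One caveat about the dos2unix line above (an assumption about the base image, not part of the original answer): dos2unix is not preinstalled in many images, so on an Ubuntu-derived base such as phusion/baseimage you would install it before running the conversion:
RUN apt-get update && apt-get install -y dos2unix
RUN find /etc/cron.d/ -type f -print0 | xargs -0 dos2unix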
Have you tried installing the crontab as a separate command in the Dockerfile?
i.e.
...
COPY crontabs /path/to/crontab.txt
RUN crontab -u myUser /path/to/crontab.txt
...
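Note the format difference if you go this route (a detail the answer does not spell out): a file installed with crontab becomes a per-user crontab and must omit the username column that files dropped into /etc/cron.d require. Using the cron job from the question:
# /etc/cron.d file: includes a user field
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1
# file installed via crontab -u myUser: no user field
* * * * * curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1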

Docker Jboss/wildfly: How to add datasources and MySQL connector

I am learning Docker, which is completely new to me. I was already able to create a jboss/wildfly image, and then start JBoss with my application using this Dockerfile:
FROM jboss/wildfly
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]
ADD mywebapp-web/target/mywebapp-1.0.war /opt/jboss/wildfly/standalone/deployments/mywebapp-1.0.war
Now I would like to add support for a MySQL database by adding a datasource to the standalone configuration, plus the MySQL connector. For that I am following this example:
https://github.com/arun-gupta/docker-images/tree/master/wildfly-mysql-javaee7
Following are my Dockerfile and my execute.sh script:
Dockerfile:
FROM jboss/wildfly:latest
ADD customization /opt/jboss/wildfly/customization/
CMD ["/opt/jboss/wildfly/customization/execute.sh"]
execute.sh:
#!/bin/bash
# Usage: execute.sh [WildFly mode] [configuration file]
#
# The default mode is 'standalone' and default configuration is based on the
# mode. It can be 'standalone.xml' or 'domain.xml'.
echo "=> Executing Customization script"
JBOSS_HOME=/opt/jboss/wildfly
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}
function wait_for_server() {
  until `$JBOSS_CLI -c ":read-attribute(name=server-state)" 2> /dev/null | grep -q running`; do
    sleep 1
  done
}
echo "=> Starting WildFly server"
echo "JBOSS_HOME : " $JBOSS_HOME
echo "JBOSS_CLI : " $JBOSS_CLI
echo "JBOSS_MODE : " $JBOSS_MODE
echo "JBOSS_CONFIG: " $JBOSS_CONFIG
echo $JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
echo "=> Waiting for the server to boot"
wait_for_server
echo "=> Executing the commands"
$JBOSS_CLI -c --file=`dirname "$0"`/commands.cli
# Add MySQL module
module add --name=com.mysql --resources=/opt/jboss/wildfly/customization/mysql-connector-java-5.1.39-bin.jar --dependencies=javax.api,javax.transaction.api
# Add MySQL driver
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource)
# Deploy the WAR
#cp /opt/jboss/wildfly/customization/leadservice-1.0.war $JBOSS_HOME/$JBOSS_MODE/deployments/leadservice-1.0.war
echo "=> Shutting down WildFly"
if [ "$JBOSS_MODE" = "standalone" ]; then
$JBOSS_CLI -c ":shutdown"
else
$JBOSS_CLI -c "/host=*:shutdown"
fi
echo "=> Restarting WildFly"
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG
But I get an error when I run the image, complaining that a file or directory is not found:
Building Image
$ docker build -t mpssantos/leadservice:latest .
Sending build context to Docker daemon 19.37 MB
Step 1 : FROM jboss/wildfly:latest
---> b8279b641e82
Step 2 : ADD customization /opt/jboss/wildfly/customization/
---> aea03d4f2819
Removing intermediate container 0920e2cd97fd
Step 3 : CMD /opt/jboss/wildfly/customization/execute.sh
---> Running in 8a0dbcb01855
---> 10335320b89d
Removing intermediate container 8a0dbcb01855
Successfully built 10335320b89d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Running image
$ docker run mpssantos/leadservice
no such file or directory
Error response from daemon: Cannot start container 5d3357ba17afa36e81d8794f2b0cd45cc00dde955b2b2054282c4ef17dd4f265: [8] System error: no such file or directory
Can someone let me know how I can access the filesystem so I can check which file or directory it is complaining about? Is there a better way to debug this?
I believe it is something related to the bash referenced on the first line of the script, because the following echo is never printed.
Thank you so much.
I managed to get a shell into the container to check what's inside.
1) ssh to the docker machine: docker-machine ssh default
2) checked the container id with the command: docker ps -a
3) opened a shell in the container with the command: sudo docker exec -i -t 665b4a1e17b6 /bin/bash
4) I can confirm that the "/opt/jboss/wildfly/customization/" directory exists with the expected files
The customization dir have the following permissions and is listed like this:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 customization
drwxr-xr-x 10 jboss jboss 4096 Jun 14 00:15 standalone
and the files inside the customization dir
drwxr-xr-x 2 root root 4096 Jun 12 23:44 .
drwxr-xr-x 12 jboss jboss 4096 Jun 14 00:15 ..
-rwxr-xr-x 1 root root 1755 Jun 12 20:06 execute.sh
-rwxr-xr-x 1 root root 989497 May 4 11:11 mysql-connector-java-5.1.39-bin.jar
If I try to execute the file I get this error:
[jboss#d68190e4f0d8 customization]$ ./execute.sh
bash: ./execute.sh: /bin/bash^M: bad interpreter: No such file or directory
Does this shed light on anything?
Thank you so much again.
I found the issue. The execute.sh file had Windows line endings. I converted it to UNIX format and it started to work.
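For anyone hitting the same thing, a sketch of how to guard against this during the image build itself (the path is the one from the Dockerfile above; GNU sed is assumed to be available in the base image):
RUN sed -i 's/\r$//' /opt/jboss/wildfly/customization/execute.sh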
I believe the execute.sh is not found. You can verify by running the following and finding the result is an empty directory:
docker run mpssantos/leadservice ls -al /opt/jboss/wildfly/customization/
The reason for this is you are doing your build on a different (virtual) machine than your local system, so it's pulling the "customization" folder from that VM. I'd run the build within the VM and place the files you want to import on that VM where the build can find it.

Multi command with docker in a script

With docker I would like to offer a VM to each client to compile and execute a C program contained in a single file.
For that, I share a folder between the host and the container thanks to a Dockerfile and the "ADD" command.
My folder is like that:
folder/id_user/script.sh
folder/id_user/code.c
In script.sh:
gcc ./compil/code.c -o ./compil/code && ./compil/code
My problem is that in the docs we can read this about ADD:
All new files and directories are created with mode 0755, uid and gid 0.
But when I run "ls" on the folder I get:
ls -l compil/8f41dacd-8775-483e-8093-09a8712e82b1/
total 8
-rw-r--r-- 1 1000 1000 51 Feb 11 10:52 code.c
-rw-r--r-- 1 1000 1000 54 Feb 11 10:52 script.sh
So I can't execute script.sh. Do you know why?
You may wonder why I proceed like this.
It's because if I do:
sudo docker run ubuntu/C pwd && pwd
result:
/
/srv/website
So we can see the first command runs in the VM but the second does not. I understand this might be normal for docker.
If you have any suggestions I'm pleased to hear them.
Thanks!
You can set the correct mode with a RUN command that uses chmod:
# Dockerfile
...
ADD script.sh /root/script.sh
RUN chmod +x /root/script.sh
...
For the second question, you should use the CMD command; the && approach does work in a Dockerfile. Try putting this line at the end of your Dockerfile:
CMD pwd && pwd
then docker build . and you will see:
root@test:/home/test# docker run <image>
/
/
Either that, or you can invoke the script through the interpreter, which avoids needing the execute bit:
CMD /bin/sh /root/script.sh
(Note that RUN would execute the script at build time; CMD runs it when the container starts, which is what you want here.)
