I have this Dockerfile (I am using miniconda only because I want to schedule some Python scripts; the base is a debian:jessie Docker image):
FROM continuumio/miniconda:4.2.12
RUN mkdir -p /workspace
WORKDIR /workspace
ADD volume .
RUN apt-get update
RUN apt-get install -y cron
ENTRYPOINT ["/bin/sh", "/workspace/conf/entrypoint.sh"]
The script entrypoint.sh that keeps the container alive is this one:
#!/usr/bin/env bash
echo ">>> Configuring cron"
service cron start
touch /var/log/cron.log
mv /workspace/conf/root /var/spool/cron/crontabs/root
chmod +x /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root
echo ">>> Done!"
tail -f /var/log/cron.log
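As an aside, a common alternative to the service cron start plus tail -f keep-alive is to run cron itself in the foreground as the last line of the entrypoint (a sketch only; cron -f is the Debian foreground flag):
#!/usr/bin/env bash
echo ">>> Configuring cron"
mv /workspace/conf/root /var/spool/cron/crontabs/root
crontab /var/spool/cron/crontabs/root
echo ">>> Done!"
# exec replaces the shell so cron becomes PID 1 and keeps the container alive
exec cron -f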
From the Docker documentation about supervisor (https://docs.docker.com/engine/admin/using_supervisord/) it looks like supervisor and a bash script (as in my example) are both viable options, which is why I decided to go with the bash script and skip supervisor.
And the content of the cron file /workspace/conf/root is this:
* * * * * root echo "Hello world: $(date +%H:%M:%S)" >> /var/log/cron.log 2>&1
(with an empty line \n at the bottom)
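For reference (an aside, not from the original post): the six-field line above with a user column is the format of /etc/crontab and /etc/cron.d entries, while a file installed with crontab(1), as the entrypoint does here, normally uses five time fields followed directly by the command:
# /etc/cron.d style: minute hour dom month dow user command
* * * * * root echo "hello" >> /var/log/cron.log 2>&1
# crontab(1) style: minute hour dom month dow command
* * * * * echo "hello" >> /var/log/cron.log 2>&1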
I never see Hello world: $(date +%H:%M:%S) appended to /var/log/cron.log each minute, yet to me all the cron/crontab settings look correct.
When I check the logs of the container I can see:
>>> Configuring cron
[ ok ] Starting periodic command scheduler: cron.
>>> Done!
Also, when logging into the running container I can see the cron daemon running:
root@2330ced4daa9:/workspace# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4336 1580 ? Ss+ 13:06 0:00 /bin/sh /workspace/conf/entrypoint.sh
root 14 0.0 0.0 27592 2096 ? Ss 13:06 0:00 /usr/sbin/cron
root 36 0.0 0.0 5956 740 ? S+ 13:06 0:00 tail -f /var/log/cron.log
root 108 0.5 0.1 21948 3692 ? Ss 13:14 0:00 bash
root 114 0.0 0.1 19188 2416 ? R+ 13:14 0:00 ps aux
What am I doing wrong?
Are you sure the cron file should have execute rights? Your entrypoint marks it executable with chmod +x; try plain read permissions instead:
chmod 0644 /var/spool/cron/crontabs/root
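To confirm cron actually accepted the crontab, you can list it from outside the container (a quick check; <container_id> is a placeholder):
docker exec -it <container_id> crontab -l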
Related
I have a PHP Dockerfile:
...
USER root
echo "${SYSTEM_USERNAME} ALL=NOPASSWD: /usr/sbin/php-fpm${PHP_VERSION}" >> /etc/sudoers.d/${SYSTEM_USERNAME}
...
USER ${SYSTEM_USERNAME}
CMD ["/usr/bin/env", "sh", "-c", "sudo php-fpm${PHP_VERSION} --nodaemonize"]
...
It works via docker:
$ docker run -dit php7.4-fpm
$ docker exec -it 2e9331162630 ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
php-7-4 1 0.0 0.0 2384 764 pts/0 Ss+ 15:48 0:00 sh -c sudo php-
root 6 0.0 0.0 6592 3224 pts/0 S+ 15:48 0:00 sudo php-fpm7.4
root 7 0.0 0.3 635904 33796 ? Ss 15:48 0:00 php-fpm: master
www-data 8 0.0 0.0 635904 7968 ? S 15:48 0:00 php-fpm: pool w
But it is not working via docker-compose:
$ docker-compose up
php_1 |
php_1 | We trust you have received the usual lecture from the local System
php_1 | Administrator. It usually boils down to these three things:
php_1 |
php_1 | #1) Respect the privacy of others.
php_1 | #2) Think before you type.
php_1 | #3) With great power comes great responsibility.
php_1 |
php_1 | sudo: no tty present and no askpass program specified
docker_php_1 exited with code 1
How do I avoid the sudo password prompt in docker-compose?
Generally you don't use sudo in Docker at all: it's all but impossible to safely set a user password, and whenever you run a container, you can directly specify the user ID it uses (with the docker run -u option). Containers only run one process and usually don't have multiple users.
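For example, instead of baking sudo into the image you could presumably pick the user at run time (1000:1000 is an illustrative UID:GID, not from the original setup):
docker run -u 1000:1000 php7.4-fpm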
In the particular example you have here, you're in theory running the container as a non-root user, but the main container process is a sudo invocation that immediately switches back to the root user. You can eliminate the intermediate step here and just specify
USER root
CMD php-fpm${PHP_VERSION} --nodaemonize
Note that docker run -t allocates a pseudo-TTY.
Do the same in docker-compose with tty: true.
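A minimal docker-compose sketch of that setting (the service name and image are illustrative):
services:
  php:
    image: php7.4-fpm
    tty: true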
I'm not sure the explanation is correct, but it works: the DEBIAN_FRONTEND=noninteractive environment variable is the culprit, and it needs to be passed through to sudo.
PHP Dockerfile:
ENV ...
# Avoid 'debconf: unable to initialize frontend: Dialog'
DEBIAN_FRONTEND=noninteractive
...
USER root
echo "${SYSTEM_USERNAME} ALL=NOPASSWD:SETENV: /usr/sbin/php-fpm${PHP_VERSION}" >> /etc/sudoers.d/${SYSTEM_USERNAME}
...
USER ${SYSTEM_USERNAME}
CMD ["/usr/bin/env", "sh", "-c", "sudo --preserve-env=DEBIAN_FRONTEND php-fpm${PHP_VERSION} --nodaemonize"]
Thanks to https://superuser.com/a/1001684/192832
I have a cronjob that runs every minute and redirects its output to a log file, /var/log/cronjob/cron.log. Since this runs in Kubernetes, I want to redirect the log to STDOUT.
The approach I took was to create a symlink with RUN ln -sf /proc/1/fd/1 /var/log/cronjob/cron.log:
# ls -la /var/log/cronjob/cron.log
lrwxrwxrwx 1 root root 12 Jan 21 19:23 /var/log/cronjob/cron.log -> /proc/1/fd/1
When I run kubectl logs it has no output.
If I delete the symlink (within the container) and recreate the log as a normal file, my output appears in /var/log/cronjob/cron.log as expected.
# tail -f /var/log/cronjob/cron.log
Running scheduled command: '/usr/bin/php7.3' 'artisan' sync:health_check > '/dev/null' 2>&1
Running scheduled command: ('/usr/bin/php7.3' 'artisan' compute:user_preferences > '/dev/null' 2>&1 ; '/usr/bin/php7.3' 'artisan' schedule:finish "framework/schedule-9019c9dc22ad7439efd038277fe8f370f56958e7") > '/dev/null' 2>&1 &
How can I get my log written to STDOUT via the symlink?
I have tried other things such as:
Use /dev/stdout for the symlink
Tail the /var/log/cronjob/cron.log file within the entrypoint
Edit: More information about files/scripts:
crontab:
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
* * * * * /usr/local/bin/schedule-run.sh
# An empty line is required at the end of this file for a valid cron file
/usr/local/bin/schedule-run.sh:
#!/bin/bash
# Source container environment variables
source /tmp/export
# Run Laravel scheduler
php /var/www/api/artisan schedule:run >> /var/log/cronjob/cron.log 2>&1
Edit #2:
Currently my CMD looks like this, which spawns multiple child processes:
CMD export >> /tmp/export && crontab /etc/cron.d/realty-cron && cron && tail -f /var/log/cronjob/cron.log
root@workspace-dev-condos-ca-765dc6686-h8vdl:/var/www/api# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 21:55 ? 00:00:00 /bin/sh -c export >> /tmp/export && crontab /etc/cron.d/realty-cron && cron && tail -f /var/log/cronjob/cron.log
root 8 1 0 21:55 ? 00:00:00 cron
root 9 1 0 21:55 ? 00:00:00 tail -f /var/log/cronjob/cron.log
root 170 1 0 21:59 ? 00:00:00 ssh-agent -s
root 233 0 0 22:00 pts/0 00:00:00 bash
root 249 1 0 22:00 ? 00:00:00 ssh-agent -s
root 1277 233 0 22:26 pts/0 00:00:00 ps -ef
I'm not sure if it is relevant, but through trial-and-error testing I noticed that sometimes echo "test1" >> /proc/1/fd/1 or echo "test2" >> /proc/1/fd/2 outputs to stdout (kubectl logs), but never both at the same time. I suspect the child processes are involved, but I don't know why.
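One way to see what those descriptors actually point at is to inspect PID 1's file descriptors from inside the container (a diagnostic sketch):
# show what PID 1's stdout and stderr are attached to
ls -l /proc/1/fd/1 /proc/1/fd/2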
I am trying to create a Docker image that executes cronjobs for root and for a custom user. So far only the root user's job is working:
FROM amazonlinux:2017.09
RUN yum -y install ca-certificates shadow-utils cronie && yum -y clean all
# root cronjob
RUN echo '* * * * * echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2' > ~/cronjob
RUN chmod 0644 ~/cronjob
RUN crontab ~/cronjob
RUN useradd -ms /bin/bash ansible
# Ansible cronjob
USER ansible
RUN echo '* * * * * echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2' > ~/cronjob
RUN chmod 0644 ~/cronjob
RUN crontab ~/cronjob
USER root
CMD ["/usr/sbin/crond", "-n"]
I build the image with docker build -t demotest -f Dockerfile . and run it with docker run -t -i demotest:latest.
Result of execution:
> docker run -t -i demotest:latest
root - Working in a coal mine...
root - Working in a coal mine...
root - Working in a coal mine...
root - Working in a coal mine...
A few details:
docker run -t -i demotest:latest bash -c 'ls -l /home/ansible/cronjob'
-rw-r--r-- 1 ansible ansible 81 Jul 11 16:06 /home/ansible/cronjob
docker run -t -i demotest:latest bash -c 'ls -l /root/cronjob'
-rw-r--r-- 1 root root 81 Jul 11 16:06 /root/cronjob
docker run -t -i demotest:latest bash -c 'crontab -u root -l'
* * * * * echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2
docker run -t -i demotest:latest bash -c 'crontab -u ansible -l'
* * * * * echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2
What am I doing wrong?
Updated information
I changed CMD to look like this: CMD ["/usr/sbin/crond", "-n", "-x", "sch"]
This generated the output:
docker run -t -i demotest:latest
debug flags enabled: sch
[1] cron started
log_it: (CRON 1) INFO (RANDOM_DELAY will be scaled with factor 85% if used.)
log_it: (CRON 1) INFO (running with inotify support)
[1] GMToff=0
[1] Target time=1562865060, sec-to-wait=12
user [root:0:0:...] cmd="run-parts /etc/cron.hourly"
user [root:0:0:...] cmd="[ ! -f /etc/cron.hourly/0anacron ] && run-parts /etc/cron.monthly"
user [root:0:0:...] cmd="[ ! -f /etc/cron.hourly/0anacron ] && run-parts /etc/cron.weekly"
user [root:0:0:...] cmd="[ ! -f /etc/cron.hourly/0anacron ] && run-parts /etc/cron.daily"
user [ansible:500:500:...] cmd="echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2"
user [root:0:0:...] cmd="echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2"
Minute-ly job. Recording time 1562865061
[1] Target time=1562865120, sec-to-wait=60
Minute-ly job. Recording time 1562865061
log_it: (root 8) CMD (echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2)
log_it: (ansible 9) CMD (echo "$USER - Working in a coal mine..." > /proc/1/fd/1 2>/proc/1/fd/2)
root - Working in a coal mine...
This shows that the custom user's cronjob is executed, but its output is not redirected.
So the question changes to: how do I properly redirect stdout to the host console?
Change the last line of your Dockerfile
from
CMD ["/usr/sbin/crond", "-n", "-x", "sch"]
to
CMD ["cron", "-f"]
Note: cron -f runs cron in the foreground.
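If you stay on amazonlinux, where the binary is crond rather than cron, the equivalent foreground invocation would presumably be the -n flag already used in the question:
CMD ["/usr/sbin/crond", "-n"]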
I use supervisor to run cron and nginx. The problem is that when I COPY or VOLUME-mount my cron files, cron does not run the files in /etc/cron.d.
But when I exec -it <container_id> bash into the container and create the exact same cron file from inside, it is immediately recognized and runs as it should.
Dockerfile:
FROM phusion/baseimage:latest
ENV TERM xterm
ENV HOME /root
RUN apt-get update && apt-get install -y \
nginx \
supervisor \
curl \
nano \
net-tools
RUN rm -rf /etc/nginx/*
COPY nginx_conf /etc/nginx
COPY supervisor_conf /etc/supervisor/
RUN mkdir -p /var/log/supervisor
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
CMD /usr/bin/supervisord
The cron file itself:
* * * * * root curl --silent http://127.0.0.1/cronjob/cron_test_docker.php >> /var/www/html/log/docker_test.log 2>&1
cron and nginx run through supervisor
[supervisord]
nodaemon = true
[program:nginx]
command = /usr/sbin/nginx -g "daemon off;"
autostart = true
[program:cron]
command = /usr/sbin/cron -f
autostart = true
The logs inside /var/log/supervisor/ relating to cron for stdout and stderr are empty.
I also tried stripping out supervisor and running cron on its own through phusion with CMD cron -f, but hit the same issue: it does not work when the file comes from outside (COPY or VOLUME) and magically works when created inside the container.
Initially I believed it to be a permissions issue and tried chmod 644 (the permissions a file created inside the container had) on all the files that were COPY'd in:
RUN chmod 644 /etc/cron.d/
After that I tried every possible permission combination with rwx, to no avail.
Also, I tried appending the cronjob line to /etc/crontab, but it is not recognized by crontab -l.
COPY crontab /tmp/crontab
RUN cat /tmp/crontab >> /etc/crontab
It would be really handy if it just worked when created through COPY or VOLUME, as it is a hassle to create it manually in the container every time.
Any help would be greatly appreciated!
Edit 1 :
Some additional information about the file permissions after COPY or VOLUME.
When I perform
COPY crontabs /etc/cron.d/
RUN chmod -R 644 /etc/cron.d/
Inside the container, running ls -l in /etc/cron.d/ shows:
-rw-r--r-- 1 root root 118 Jul 20 11:03 wwwcron-cron-docker_test
When I mount the folder as a VOLUME through my docker-compose
volumes:
- ./server/crontabs:/etc/cron.d
ls -l shows
-rwxrwxrwx 1 1000 staff 118 Jul 20 11:03 wwwcron-cron-docker_test
In addition, if I manually create the cron file inside the container, it looks like this, and this works:
-rw-r--r-- 1 root root 118 Jul 22 15:50 wwwcron-cron-docker_test_inside_docker
Clearly the permissions and ownership are very different between COPY and VOLUME. But even a COPY with the exact same permissions does not work, while the same file created inside the container does.
Thanks to @BMitch I was able to find the issue: line endings. Since my host machine is Windows and the cron file originated there as well, the files had Windows (CRLF) line endings, so cron did not pick them up automatically.
I added this line to my Dockerfile and it works like a charm:
RUN find /etc/cron.d/ -type f -print0 | xargs -0 dos2unix
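Note that this assumes dos2unix is available in the image; on a Debian/Ubuntu base such as phusion/baseimage you would install it first, e.g.:
RUN apt-get update && apt-get install -y dos2unix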
Following up on that, the file is indeed 1 byte smaller once dos2unix has run, so you can verify that the conversion actually happened:
-rw-r--r-- 1 root root 117 Jul 25 08:33 wwwcron-cron-docker_test
Have you tried installing the crontab as a separate command in the Dockerfile?
i.e.
...
COPY crontabs /path/to/crontab.txt
RUN crontab -u myUser /path/to/crontab.txt
...
I have a Docker image based on Ubuntu that runs a supervisor script as the CMD at the end of the Dockerfile. This successfully runs uwsgi and nginx in the container on start up. However, the following appended at the end of the supervisor-app.conf does not work:
[program:Xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log &
When I open a shell into a running docker instance there is no X instance running:
root@9221694363ea:/# ps aux | grep X
root 39 0.0 0.0 8868 784 ? S+ 15:32 0:00 grep --color=auto X
However, running exactly the same command as in supervisor-app.conf works:
root@9221694363ea:/# /usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log &
[1] 40
root@9221694363ea:/# ps aux | grep X
root 40 1.2 0.1 170128 21604 ? Sl 15:33 0:00 /usr/bin/Xvfb :1 -screen 0 1024x768x16
root 48 0.0 0.0 8868 792 ? S+ 15:33 0:00 grep --color=auto X
So what's wrong with that line in supervisor-app.conf?
Supervisor does not handle bash-specific operators such as the run-in-the-background '&' or redirections like '&>', which is why my original config line failed.
I solved it by using bash -c, thus:
[program:Xvfb]
command=bash -c "/usr/bin/Xvfb :1 -screen 0 1024x768x16 &> xvfb.log"
Now when I get into the container's bash shell, the Xvfb instance is running, waiting for me to use it elsewhere in the code.
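As a design note, supervisor can also take over the redirection itself through its standard log settings, avoiding the shell wrapper entirely (a sketch; the log path is illustrative):
[program:Xvfb]
command=/usr/bin/Xvfb :1 -screen 0 1024x768x16
stdout_logfile=/var/log/xvfb.log
redirect_stderr=true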