How to run a cron job in a Docker container?

I have a Python script that populates a Postgres database in AWS.
I am able to run it manually and it loads data into the database without any issues. I want to run it once every 5 minutes inside a Docker container.
So I included it in the Docker image to be run by cron, but I'm not sure why it's not running. Nothing is appended to the /var/log/cron.log file. Can someone help me figure out why it's not running?
I am able to copy the script into the image during the Docker build and run it manually; the DB is populated and I get the expected output.
The script is in the current directory, which is copied to the /code/ folder.
Here is my code:
Dockerfile:
FROM python:3
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y cron
RUN apt-get install -y postgresql-client
RUN touch /var/log/cron.log
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
COPY crontab /etc/cron.d/cjob
RUN chmod 0644 /etc/cron.d/cjob
CMD cron && tail -f /var/log/cron.log
crontab:
*/5 * * * * python3 /code/populatePDBbackground.py >> /var/log/cron.log
# Empty line

A crontab file installed under /etc/cron.d/ requires an additional field: the user who runs the command:
* * * * * root python3 /code/populatePDBbackground.py >> /var/log/cron.log
# Empty line
The Dockerfile is:
FROM python:3
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y cron postgresql-client
RUN touch /var/log/cron.log
RUN mkdir /code
WORKDIR /code
ADD . /code/
COPY crontab /etc/cron.d/cjob
RUN chmod 0644 /etc/cron.d/cjob
ENV PYTHONUNBUFFERED 1
CMD cron -f
Test python script populatePDBbackground.py is:
from datetime import datetime
print('Script has been started at {}'.format(datetime.now()))
And finally we get:
$ docker run -d b3fa191e8822
b8e768b4159637673f3dc4d1d91557b374670f4a46c921e0b02ea7028f40e105
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b8e768b41596 b3fa191e8822 "/bin/sh -c 'cron -f'" 4 seconds ago Up 3 seconds cocky_beaver
$ docker exec -ti b8e768b41596 bash
root@b8e768b41596:/code# tail -f /var/log/cron.log
Script has been started at 2019-03-13 00:06:01.095013
Script has been started at 2019-03-13 00:07:01.253030
Script has been started at 2019-03-13 00:08:01.273926
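As a side note (not part of the original answer): the user field is only needed for crontab files placed in /etc/cron.d/ or /etc/crontab. An alternative sketch is to register the original, user-less crontab for root with the crontab command at build time and keep cron in the foreground:
COPY crontab /tmp/crontab
RUN crontab /tmp/crontab
CMD cron -f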

This might not be directly related, but I was getting stuck with my cron job running in a Docker container. I tried the accepted answer with no luck. I finally discovered that the editor I was using to edit the crontab file (Notepad++) was set to Windows line endings (CR LF). Once I changed this to Unix line endings (LF), my cron job ran successfully in my Docker container.
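A quick way to check for and strip Windows line endings before the build (a sketch, not part of the original answer; assumes the crontab file sits next to the Dockerfile):
# Prints "with CRLF line terminators" if the file has Windows line endings
file crontab
# Strip the carriage returns in place (dos2unix crontab works too, if installed)
sed -i 's/\r$//' crontab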

Using traditional cron implementations under docker is tricky. Consider using supercronic:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
    && apk add --no-cache supercronic shadow \
    && useradd -m app
USER app
COPY crontab .
crontab:
*/5 * * * * date
$ docker-compose up
More alternatives can be found in my other answer.

Related

Crontab not executed in Docker

I need to execute a crontab inside a Docker container, so I created the following Dockerfile:
FROM openjdk:11-oraclelinux8
RUN mkdir -p /opt/my-user/
RUN mkdir -p /opt/my-user/joblogs
RUN groupadd my-user && adduser my-user -g my-user
RUN chown -R my-user:my-user /opt/my-user/
RUN microdnf install yum
RUN yum -y update
RUN yum -y install cronie
RUN yum -y install vi
RUN yum -y install telnet
COPY talend /opt/my-user/
COPY entrypoint.sh /opt/my-user/
RUN chmod +x /opt/my-user/entrypoint.sh
RUN chmod +x /opt/my-user/ETLJob/ETLJob_run.sh
RUN chown -R my-user:my-user /opt/my-user/
RUN echo "*/2 * * * * /bin/sh /opt/my-user/ETLJob/ETLJob_run.sh >> /opt/my-user/joblogs/job.log 2>&1" >> /etc/cron.d/my-user-job
RUN chmod 0644 /etc/cron.d/my-user-job
RUN crontab -u my-user /etc/cron.d/my-user-job
RUN chmod u+s /usr/sbin/crond
USER my-user:my-user
ENTRYPOINT [ "/opt/my-user/entrypoint.sh" ]
My entrypoint.sh file is the following one:
#!/bin/bash
echo "Start cron"
crontab /etc/cron.d/my-user-job
echo "cron started"
# Run forever
tail -f /dev/null
So far so good: the container is created successfully, and when I go inside the container and type crontab -l I see the crontab... but it is never executed.
I can't figure out what I'm missing; none of the research I did gave me any clue.
Can you give me any tips?
Docker containers usually host only a single process; in your case that is the tail process, so the cron daemon isn't running.
Your comment 'cron started' seems to indicate that running crontab starts the daemon, but it doesn't.
Replace your tail -f /dev/null command with crond -f to run the cron daemon in the foreground and it should work.
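A minimal sketch of the adjusted entrypoint.sh, following the answer above (the crontab path matches the file installed by the Dockerfile):
#!/bin/bash
echo "Start cron"
crontab /etc/cron.d/my-user-job
echo "crontab registered, starting cron daemon"
# Run the cron daemon in the foreground so it is the container's long-running process
exec crond -f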

Azure App Service failed to start with custom container (trying to configure SSH connection)

I'm following this guide from Microsoft to connect to my App Service (running on a custom container) using SSH.
The base image I'm using is tiangolo/uwsgi-nginx
And here's my docker file
FROM node
WORKDIR /nodebuild
ADD frontend /nodebuild
ADD .env /nodebuild
RUN export $(grep -v '^#' .env | xargs) && npm install && npm audit fix && npm run build
FROM tiangolo/uwsgi-nginx:latest
ENV UWSGI_INI uwsgi.ini
WORKDIR /app
COPY requirements.txt /app
RUN python3 -m pip install -r requirements.txt
ADD . /app
COPY --from=0 /nodebuild/build /app/frontend/build
RUN export $(grep -v '^#' .env | xargs) && python3 manage.py makemigrations --noinput && python3 manage.py migrate --noinput && python3 manage.py collectstatic --noinput
RUN rm .env
# THE BELOW IS FOR SETTING UP SSH
# ----------------------------------
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
&& apt-get install -y --no-install-recommends dialog \
&& apt-get update \
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh
EXPOSE 8000 2222
ENTRYPOINT ["init.sh"]
Notice the last line of the Dockerfile. It uses ENTRYPOINT to set the startup command.
Content of the init.sh file is as below (just to start the SSH service).
#!/bin/bash
set -e
echo "Starting SSH ..."
service ssh start
Now the strange thing is that if I remove the last line (ENTRYPOINT ["init.sh"]) then everything works fine. But if it's there, the app fails to start and the app logs say something like:
Container abc_xy_0_57397aae didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging.
Your entrypoint is equivalent to the init process (PID 1) of a traditional Unix system: if that process terminates, the system shuts down, and in Docker the container exits. Your bash script starts sshd and then terminates. You need to find out what the base image's entrypoint was and call that to preserve the previous behaviour.
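A sketch of an init.sh that keeps the container serving HTTP (the /start.sh path is an assumption; check the tiangolo/uwsgi-nginx base image for its actual entrypoint/CMD):
#!/bin/bash
set -e
echo "Starting SSH ..."
service ssh start
# Hand off to the base image's startup script so the web server stays in the foreground
exec /start.sh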

Why isn't a cron job running in a personally modified CentOS docker image?

I am trying to make a Docker image using a CentOS 7 base image from a private registry.
A cron operation is needed, so I first tested with printing 'hello cron' with a public CentOS 7 image.
My Dockerfile using a public Centos 7 image:
FROM centos:7.4.1708
RUN localedef -c -i en_US -f UTF-8 en_US.UTF-8 && \
yum update -y
RUN yum install -y python3 cronie vim && \
pip3 install requests
WORKDIR /app
COPY . /app
COPY crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
CMD ["crond", "-n"]
In this case, it seems to work properly.
[root@091c362de99c app]# cat /var/log/cron.log
hello cron
hello cron
hello cron
So, I tried the same way using a personal CentOS 7 base image.
My Dockerfile using a private image:
FROM base.REGISTRY/centos7/jdk:8.x64
RUN localedef -c -i en_US -f UTF-8 en_US.UTF-8 && \
yum update -y
RUN yum install -y python3 cronie vim && \
pip3 install requests
WORKDIR /app
COPY . /app
COPY crontab /etc/cron.d/crontab
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
CMD ["crond", "-n"]
However, it doesn't seem to work.
[root@31580092ecec app]# cat /var/log/cron.log
cat: /var/log/cron.log: No such file or directory
Commands done with a public image:
[root@091c362de99c app]# crontab -l
* * * * * python3 /app/hello.py >> /var/log/cron.log
[root@091c362de99c app]# cat /app/hello.py
print('hello cron')
[root@091c362de99c app]# cat /var/log/cron.log
hello cron
hello cron
hello cron
hello cron
hello cron
Commands done with a private image:
[root@31580092ecec app]# crontab -l
* * * * * python3 /app/hello.py >> /var/log/cron.log
[root@31580092ecec app]# cat /app/hello.py
print('hello cron')
[root@31580092ecec app]# cat /var/log/cron.log
cat: /var/log/cron.log: No such file or directory
I want to figure out the differences between the two, but all I could do was check crontab -l.
Apart from the cron job, just running the Python file once works fine with both Dockerfiles:
CMD ["python", "./hello.py"]
What could be a way to find out the differences and solve this problem?
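A few generic checks that might narrow this down (a sketch, not from the original post):
# Is the cron daemon actually running inside the container?
docker exec <container> ps -ef | grep crond
# Is the expected crontab installed for the user cron runs as?
docker exec <container> crontab -l
# Does the job command work at all inside the private-image container?
docker exec <container> python3 /app/hello.py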

Why won't my docker container run unless I use -i -t?

If I run my Dockerfile with the following command, the docker container starts running and all is well.
docker run --name test1 -i -t 660c93c32a
However, if I run the container detached without -it, it does not appear to be running: docker ps returns nothing:
docker run -d --name test1 660c93c32a
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
All I'm trying to do is run the container and then be able to attach and/or open a shell in the container later.
Not sure if the issue is in my dockerfile or not, so have pasted the dockerfile below.
############################################################
# Dockerfile to build Ubuntu/Ansible/Django
############################################################
# Set the base image to Ansible
FROM ubuntu:16.10
# File Author / Maintainer
MAINTAINER David
# Install Ansible and Related Deps #
RUN apt-get -y update && \
apt-get install -y python-yaml python-jinja2 python-httplib2 python-keyczar python-paramiko python-setuptools python-pkg-resources git python-pip
RUN mkdir /etc/ansible/
RUN echo '[local]\nlocalhost\n' > /etc/ansible/hosts
RUN mkdir /opt/ansible/
RUN git clone http://github.com/ansible/ansible.git /opt/ansible/ansible
WORKDIR /opt/ansible/ansible
RUN git submodule update --init
ENV PATH /opt/ansible/ansible/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin
ENV PYTHONPATH /opt/ansible/ansible/lib
ENV ANSIBLE_LIBRARY /opt/ansible/ansible/library
# Update the repository sources list
RUN apt-get update -y
RUN apt-get install python -y
RUN apt-get install python-dev -y
RUN apt-get install python-setuptools -y
RUN apt-get install -y python-pip
RUN mkdir /ansible/
WORKDIR /ansible
COPY ./ansible ./
WORKDIR /
RUN ansible-playbook -c local ansible/playbooks/installdjango.yml
ENV PROJECTNAME davidswebsite
CMD django-admin startproject $PROJECTNAME
When you run your container, the command after CMD or ENTRYPOINT becomes the PID 1 process of your container. If that process doesn't keep running, your container dies.
So check the container logs using: docker logs <container id>
and recheck your command in CMD django-admin startproject $PROJECTNAME.
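For example (a sketch, not from the original answer): django-admin startproject creates the project and exits immediately, so a detached container stops right away. Keeping a long-running process in the foreground, such as the Django development server, keeps the container up (assuming the Ansible playbook installs Django):
CMD django-admin startproject $PROJECTNAME && \
    python $PROJECTNAME/manage.py runserver 0.0.0.0:8000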

Dockerfile supervisord cannot find path

For some reason supervisord cannot start up when executing docker run... If I check the path where the supervisord configuration is stored, I can clearly see that the file is present.
Below is the part of my Dockerfile that's not currently commented out.
FROM ubuntu:16.04
MAINTAINER Kevin Gilbert
# Update Packages
RUN apt-get -y update
# Install basics
RUN apt-get -y install curl wget make gcc build-essential
# Setup Supervisor
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c /etc/supervisor/conf.d/supervisord.conf"]
Here is the error I get in terminal after running.
remote-testing:analytics-portal kgilbert$ docker run kmgilbert/portal
Error: could not find config file /etc/supervisor/conf.d/supervisord.conf
For help, use /usr/bin/supervisord -h
Try the exec form of CMD, with the flag and its value as separate array elements:
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
or with the shell form
CMD /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
Depending on the OS used by the base image, you might not even have to specify the supervisord.conf in the command line (see this example, or the official documentation)
It happened to me on Alpine Linux 3.9, but it eventually ran successfully with
CMD ["supervisord", "-c", "<path_to_conf_file>"]
