I'm trying to run cron directly in the container, but it's not working. This is what I did:
apt-get update && apt-get install cron -y
# Giving executable permission to the script file
RUN chmod +x date-script.sh
mkdir my-cron-file
# Adding crontab to the appropriate location
crontab /etc/cron.d/my-cron-file
# Giving permission to crontab file
RUN chmod 0644 /etc/cron.d/my-cron-file
# Running crontab
crontab /etc/cron.d/my-cron-file
# Creating entry point for cron
ENTRYPOINT ["cron", "-f"]
I took this from an Airplane blog post, but it's not working. Do you have any idea?
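For comparison, here is that idea as a complete Dockerfile. This is only a sketch (the base image, script name, and schedule are assumptions); the key points are that plain shell commands need RUN, that files under /etc/cron.d use the system crontab format with a user field, and that cron must run in the foreground as the container's main process:
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y cron
# copy the script and make it executable
COPY date-script.sh /date-script.sh
RUN chmod +x /date-script.sh
# /etc/cron.d entries include a user field; run every minute as root
RUN echo "* * * * * root /date-script.sh >> /var/log/date.log 2>&1" > /etc/cron.d/my-cron-file \
    && chmod 0644 /etc/cron.d/my-cron-file
# run cron in the foreground so the container stays alive
ENTRYPOINT ["cron", "-f"]
Files under /etc/cron.d are read by cron automatically, so no crontab invocation is needed in this variant.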
I need to execute a crontab inside a Docker container, so I created the following Dockerfile:
FROM openjdk:11-oraclelinux8
RUN mkdir -p /opt/my-user/
RUN mkdir -p /opt/my-user/joblogs
RUN groupadd my-user && adduser my-user -g my-user
RUN chown -R my-user:my-user /opt/my-user/
RUN microdnf install yum
RUN yum -y update
RUN yum -y install cronie
RUN yum -y install vi
RUN yum -y install telnet
COPY talend /opt/my-user/
COPY entrypoint.sh /opt/my-user/
RUN chmod +x /opt/my-user/entrypoint.sh
RUN chmod +x /opt/my-user/ETLJob/ETLJob_run.sh
RUN chown -R my-user:my-user /opt/my-user/
RUN echo "*/2 * * * * /bin/sh /opt/my-user/ETLJob/ETLJob_run.sh >> /opt/my-user/joblogs/job.log 2>&1" >> /etc/cron.d/my-user-job
RUN chmod 0644 /etc/cron.d/my-user-job
RUN crontab -u my-user /etc/cron.d/my-user-job
RUN chmod u+s /usr/sbin/crond
USER my-user:my-user
ENTRYPOINT [ "/opt/my-user/entrypoint.sh" ]
My entrypoint.sh file is the following:
#!/bin/bash
echo "Start cron"
crontab /etc/cron.d/my-user-job
echo "cron started"
# Run forever
tail -f /dev/null
So far so good: the container is created successfully, and when I go inside the container and type crontab -l I see the crontab... but it is never executed.
I can't figure out what I'm missing; nothing I've researched has given me a clue.
Can you give me any tips?
Docker containers usually host only a single process; in your case, that is the tail process. The cron daemon isn't running.
Your comment 'cron started' seems to indicate that running crontab starts the daemon, but it doesn't.
Replace your tail -f /dev/null command with crond -f to run the cron daemon in the foreground and it should work.
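Concretely, a fixed entrypoint.sh would look something like this (a sketch based on the Dockerfile above; on Oracle Linux the cronie daemon binary is crond):
#!/bin/bash
echo "Start cron"
# install the crontab for the current user (also done at build time; kept for parity)
crontab /etc/cron.d/my-user-job
echo "cron started"
# run the cron daemon in the foreground; it becomes the container's long-running process
exec crond -f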
Cron isn't a running process when I go into my container, but when I go into the container and run service cron restart, it starts running. I can't figure out why cron works after service cron restart but not without it.
Dockerfile
FROM ubuntu:bionic
RUN apt-get update && apt-get -y install \
cron \
nano \
psmisc \
wget
COPY hello-cron /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod +x /etc/cron.d/hello-cron
# Apply cron job
RUN crontab /etc/cron.d/hello-cron
# Run cron in the foreground
CMD ["cron", "-f"]
My hello-cron file:
* * * * * echo "Hello world" > /usr/src/helloworldcron.log 2>&1
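An aside that often explains this exact symptom (an assumption about how the container is started, not something stated in the question): a CMD only runs when the container starts with its default command, so entering the image with an override such as bash skips cron -f entirely. Here hello-cron is a hypothetical image tag:
# started normally, cron -f from the CMD is the container's main process
docker run -d --name hello hello-cron
# started with a command override, the CMD is skipped and cron never runs
docker run -it --rm hello-cron bash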
We are running Sphinx through supervisord inside Docker. We are trying to enable autorestart, but it does not work. We are trying to verify it by manually killing the searchd process.
Below is the configuration:
[program:sphinx]
command=searchd --pidfile --config "config/sphinx.conf"
autostart=true
autorestart=unexpected
startsecs=5
startretries=3
exitcodes=0
We also attempted another configuration:
[program:sphinx]
command= rake ts:stop
command= rake ts:configure
command= rake ts:start
autostart=true
exitcodes=0,2
autorestart=unexpected
Are we missing something?
Dockerfile
FROM ruby:2.3
RUN apt-get update
#Install Sphinx
RUN wget -P /tmp http://sphinxsearch.com/files/sphinx-2.3.2-beta.tar.gz
RUN mkdir /opt/sphinx_src
RUN tar -xzvf /tmp/sphinx-2.3.2-beta.tar.gz -C /opt/sphinx_src
WORKDIR /opt/sphinx_src/sphinx-2.3.2-beta
RUN ./configure --with-pgsql --with-mysql
RUN make
RUN make install
RUN apt-get install -q -y supervisor cron
ADD supervisor-cron.conf /etc/supervisor/conf.d/cron.conf
RUN service supervisor stop
RUN apt-get install -y net-tools telnet
WORKDIR /opt/app
ADD start.sh /usr/local/sbin/start
RUN chmod 755 /usr/local/sbin/start
EXPOSE 9312
CMD ["/usr/local/sbin/start"]
start.sh
#!/bin/sh
export BUNDLE_GEMFILE=Gemfile
cd /opt/app
bundle install
echo "About to perform Sphinx Start"
cp /opt/app/supervisor-sphinx.conf /etc/supervisor/conf.d/sphinx.conf
supervisord -n
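One thing worth checking here (an observation, not from the original post): supervisord can only manage processes that stay in the foreground, but searchd detaches into the background by default, so supervisord immediately loses the process it is supposed to restart. A sketch of a foreground-mode entry, using searchd's --nodetach flag:
[program:sphinx]
; --nodetach keeps searchd attached so supervisord can monitor and restart it
command=searchd --nodetach --pidfile --config "config/sphinx.conf"
autostart=true
autorestart=unexpected
startsecs=5
The rake ts:* variant has a second problem as well: a [program] section accepts only one command= line (the last one wins), and the rake tasks exit after spawning the daemon.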
I have a Python script that populates a Postgres database in AWS.
I am able to run it manually and it loads data into the database without any issues. I want to run it once every 5 minutes inside a Docker container.
So I included it in the Docker image to be run by cron, but it isn't running: nothing is appended to the /var/log/cron.log file. Can someone help me figure out why?
I am able to copy the script to the image during the Docker build and run it manually; the DB is populated and I get the expected output.
The script is in the current directory, which is copied to the /code/ folder.
Here is my code:
Dockerfile:
FROM python:3
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y cron
RUN apt-get install -y postgresql-client
RUN touch /var/log/cron.log
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD . /code/
COPY crontab /etc/cron.d/cjob
RUN chmod 0644 /etc/cron.d/cjob
CMD cron && tail -f /var/log/cron.log
crontab:
*/5 * * * * python3 /code/populatePDBbackground.py >> /var/log/cron.log
# Empty line
Crontab files in /etc/cron.d require an additional field: the user who runs the command:
* * * * * root python3 /code/populatePDBbackground.py >> /var/log/cron.log
# Empty line
The Dockerfile is:
FROM python:3
RUN apt-get -y update && apt-get -y upgrade
RUN apt-get install -y cron postgresql-client
RUN touch /var/log/cron.log
RUN mkdir /code
WORKDIR /code
ADD . /code/
COPY crontab /etc/cron.d/cjob
RUN chmod 0644 /etc/cron.d/cjob
ENV PYTHONUNBUFFERED 1
CMD cron -f
The test Python script populatePDBbackground.py is:
from datetime import datetime
print('Script has been started at {}'.format(datetime.now()))
And finally we get:
$ docker run -d b3fa191e8822
b8e768b4159637673f3dc4d1d91557b374670f4a46c921e0b02ea7028f40e105
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b8e768b41596 b3fa191e8822 "/bin/sh -c 'cron -f'" 4 seconds ago Up 3 seconds cocky_beaver
$ docker exec -ti b8e768b41596 bash
root@b8e768b41596:/code# tail -f /var/log/cron.log
Script has been started at 2019-03-13 00:06:01.095013
Script has been started at 2019-03-13 00:07:01.253030
Script has been started at 2019-03-13 00:08:01.273926
This might not be directly related, but I was getting stuck with my cronjob running in a docker container. I tried the accepted answer with no luck. I finally discovered that the editor I was using to edit the Crontab file (Notepad++) was set to Windows line-endings (CR LF). Once I changed this to Unix line endings (LF) my cronjob ran successfully in my docker container.
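If you suspect the same problem, a quick way to detect and strip Windows line endings (assuming GNU coreutils/sed; dos2unix also works where installed):
# CRLF line endings show up as a trailing ^M$ in cat -A output
cat -A crontab | head
# strip the carriage returns in place
sed -i 's/\r$//' crontab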
Using traditional cron implementations under Docker is tricky. Consider using supercronic:
docker-compose.yml:
services:
  supercronic:
    build: .
    command: supercronic crontab
Dockerfile:
FROM alpine:3.17
RUN set -x \
&& apk add --no-cache supercronic shadow \
&& useradd -m app
USER app
COPY crontab .
crontab:
*/5 * * * * date
$ docker-compose up
More alternatives can be found in my other answer.
I seem to have tried every solution on here, but none seem to be working; I'm not sure what I'm missing. I'm trying to run Celery as a daemon in my Docker container.
root@bae5de770400:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
root@bae5de770400:/itapp/itapp# /etc/init.d/celerybeat status
celery init v10.1.
Using configuration: /etc/default/celeryd
celerybeat is down: no pid file found
root@bae5de770400:/itapp/itapp#
I've seen lots of posts to do with permissions and I've tried them all, to no avail.
This is my Dockerfile, which creates all the permissions and folders:
FROM python:latest
ENV PYTHONUNBUFFERED 1
# add source for snmp
RUN sed -i "s#jessie main#jessie main contrib non-free#g" /etc/apt/sources.list
# install dependencies
RUN apt-get update -y \
&& apt-get install -y apt-utils python-software-properties libsasl2-dev python3-dev libldap2-dev libssl-dev libsnmp-dev snmp-mibs-downloader git vim
# copy and install requirements
RUN mkdir /config
ADD /config/requirements.txt /config/
RUN pip install -r /config/requirements.txt
# create folders
RUN mkdir /itapp;
RUN mkdir /static;
# create celery user
RUN useradd -N -M --system -s /bin/false celery
RUN echo celery:"*****" | /usr/sbin/chpasswd
# celery perms
RUN groupadd grp_celery
RUN usermod -a -G grp_celery celery
RUN mkdir /var/run/celery/
RUN mkdir /var/log/celery/
RUN chown root:root /var/run/celery/
RUN chown root:root /var/log/celery/
# copy celery daemon files
ADD /config/celery/init_celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd
ADD /config/celery/celerybeat /etc/init.d/celerybeat
RUN chmod +x /etc/init.d/celerybeat
RUN chmod 755 /etc/init.d/celeryd
RUN chown root:root /etc/init.d/celeryd
RUN chmod 755 /etc/init.d/celerybeat
RUN chown root:root /etc/init.d/celerybeat
# copy celery config
ADD /config/celery/default_celeryd /etc/default/celeryd
# RUN /etc/init.d/celeryd start
# set working DIR for copying code
WORKDIR /itapp
If I start it manually, it works:
celery -A itapp worker -l info
/usr/local/lib/python3.6/site-packages/celery/platforms.py:795: RuntimeWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the -u option.
...
[2017-09-25 17:29:51,707: INFO/MainProcess] Connected to amqp://it-app:**@rabbitmq:5672/it-app-vhost
[2017-09-25 17:29:51,730: INFO/MainProcess] mingle: searching for neighbors
[2017-09-25 17:29:52,764: INFO/MainProcess] mingle: all alone
The init.d files are copied from the Celery repo, and these are the contents of my default file, if it helps:
# Names of nodes to start
# most people will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS
#CELERYD_NODES="worker1 worker2 worker3"
# alternatively, you can specify the number of nodes to start:
#CELERYD_NODES=10
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="itapp"
# or fully qualified:
# Where to chdir at start.
CELERYD_CHDIR="/itapp/itapp/"
# Extra command-line arguments to the worker
CELERYD_OPTS="flower --time-limit=300 --concurrency=8"
# Configure node-specific settings by appending node name to arguments:
#CELERYD_OPTS="--time-limit=300 -c 8 -c:worker2 4 -c:worker3 2 -Ofair:worker1"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists (e.g., nobody).
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
The only thing in this file that may be wrong, I think, is the CELERY_BIN value; I'm not sure what to set that to in a Docker container.
Thanks
So you had a few issues in your Dockerfile:
The celery user's shell was set to /bin/false, which didn't allow any process to be started.
You needed to give the celery user permission on /var/run/celery and /var/log/celery.
/etc/default/celeryd should have 640 permissions.
There were also too many layers in your Dockerfile.
So I updated the Dockerfile as below:
FROM python:latest
ENV PYTHONUNBUFFERED 1
# add source for snmp
RUN sed -i "s#jessie main#jessie main contrib non-free#g" /etc/apt/sources.list
# install dependencies
RUN apt-get update -y \
&& apt-get install -y apt-utils python-software-properties libsasl2-dev python3-dev libldap2-dev libssl-dev libsnmp-dev git vim
# copy and install requirements
RUN mkdir /config
ADD /config/requirements.txt /config/
RUN pip install -r /config/requirements.txt
# create folders
RUN mkdir /itapp && mkdir /static;
# create celery user
RUN useradd -N -M --system -s /bin/bash celery && echo celery:"B1llyB0n3s" | /usr/sbin/chpasswd
# celery perms
RUN groupadd grp_celery && usermod -a -G grp_celery celery && mkdir -p /var/run/celery/ /var/log/celery/
RUN chown -R celery:grp_celery /var/run/celery/ /var/log/celery/
# copy celery daemon files
ADD /config/celery/init_celeryd /etc/init.d/celeryd
RUN chmod +x /etc/init.d/celeryd
ADD /config/celery/celerybeat /etc/init.d/celerybeat
RUN chmod 750 /etc/init.d/celeryd /etc/init.d/celerybeat
RUN chown root:root /etc/init.d/celeryd /etc/init.d/celerybeat
# copy celery config
ADD /config/celery/default_celeryd /etc/default/celeryd
RUN chmod 640 /etc/default/celeryd
# set working DIR for copying code
ADD /itapp/ /itapp/itapp
WORKDIR /itapp
Then I got into the web service container and everything worked fine:
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd start
celery init v10.1.
Using config script: /etc/default/celeryd
celery multi v4.1.0 (latentcall)
> Starting nodes...
> worker1@ab658c5d0c67: OK
> flower@ab658c5d0c67: OK
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd down: no pidfiles found
root@ab658c5d0c67:/itapp/itapp# /etc/init.d/celeryd status
celery init v10.1.
Using config script: /etc/default/celeryd
celeryd (node worker1) (pid 66) is up...
root@ab658c5d0c67:/itapp/itapp#
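As for the CELERY_BIN question: in the official python images, pip installs console scripts to /usr/local/bin, so /usr/local/bin/celery is typically correct; you can confirm it from inside the container:
# locate the celery executable inside the container
command -v celery
# typically prints /usr/local/bin/celery in python:latest-based images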