To preface, I have been referencing these two articles for help:
Run a cron job with Docker - Julien Boulay
Running cron jobs inside a Docker container - Chris S.
My goal is to have a cron job start automatically when I start my Docker container. Currently it doesn't start automatically, but I can manually go into the container and run service cron start, which starts the job, and it works correctly.
So the problem is: How do I get my cron job to start automatically when my container starts up?
Dockerfile
FROM microsoft/dotnet:latest
RUN apt-get update && apt-get install -y cron
COPY . /app
WORKDIR /app
ADD crontab /etc/cron.d/crontab
RUN chmod 0600 /etc/cron.d/crontab
RUN crontab -u root /etc/cron.d/crontab
RUN touch /var/log/cron.log
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
CMD cron && tail -f /var/log/cron.log
CMD service cron start
crontab
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
# Note: cron requires the file to end with an empty line
Though I wasn't able to get cron working in that particular container, I was able to create a standalone Docker container specifically for cron and get it to run automatically.
As far as setup for the cron container goes, I followed the linked article, Run a cron job with Docker - Julien Boulay, and was able to get it working.
What I'm doing is having the CMD call cron directly, like this:
CMD /usr/sbin/cron -f
Before that I'm adding the crontab to the container and assigning it as the root crontab with the command:
RUN crontab /root/mycrontab
You don't need to call the crontab command on files that are located in /etc/cron.d, but you do need those files to have the correct syntax. Using your example, instead of this:
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
You should have this:
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
in your crontab file. This only applies to crontab files located in /etc/cron.d; otherwise your crontab syntax is correct and you load the file with the crontab command.
Starting from your example, I think you should modify your files like this:
Dockerfile
FROM microsoft/dotnet:latest
RUN apt-get update && apt-get install -y cron
COPY . /app
WORKDIR /app
ADD crontab /etc/cron.d/crontab
RUN chmod 0600 /etc/cron.d/crontab
RUN touch /var/log/cron.log
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
CMD /usr/sbin/cron -f
crontab
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
Another alternative would be:
Dockerfile
FROM microsoft/dotnet:latest
RUN apt-get update && apt-get install -y cron
COPY . /app
WORKDIR /app
ADD crontab /root/
RUN crontab /root/crontab
RUN touch /var/log/cron.log
RUN ["dotnet", "restore"]
RUN ["dotnet", "build"]
EXPOSE 5000/tcp
CMD /usr/sbin/cron -f
crontab
* * * * * echo "Hello world" >> /var/log/cron.log 2>&1
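To check either variant, you can build the image and watch the cron log from the host (the image and container names here are just examples):
docker build -t cron-example .
docker run -d --name cron-example cron-example
docker exec cron-example tail -f /var/log/cron.log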
We had a problem with php-fpm and Docker where our cron tasks were not being executed. There were two problems we solved:
We tried to copy a crontab file into the Docker container using COPY config/custom-cron /etc/cron.d/custom-cron. The problem was that our line endings were in Windows format, which broke the crontab file, because line endings are not converted when the file is copied into the container.
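One guard against that, as a sketch (sed ships with the Debian-based php image, so nothing extra needs to be installed), is to normalize the line endings right after copying the file:
COPY config/custom-cron /etc/cron.d/custom-cron
# strip Windows carriage returns so cron can parse the file
RUN sed -i 's/\r$//' /etc/cron.d/custom-cron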
The second problem was that we tried to start cron via CMD ["cron", "-f"], which blocked the main php-fpm process and resulted in a 502 Bad Gateway error when calling our web application.
Finally, we made it work by editing the crontab file while building the Docker image, instead of copying it in, and by using supervisord to run multiple processes inside the container. This should work on all supported operating systems.
Dockerfile
FROM php:7.1.16-fpm
RUN apt-get update && apt-get install -y cron supervisor
# Configure cron
RUN crontab -l | { cat; echo "* * * * * echo 'Hello world' >> /var/log/cron-test.log 2>&1"; } | crontab -
# Configure supervisor
COPY config/supervisord.conf /etc/supervisor/supervisord.conf
# Start supervisord in the foreground (nodaemon=true in the config);
# it then manages both php-fpm and cron
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
supervisord.conf
[supervisord]
logfile = /dev/null
loglevel = info
pidfile = /var/run/supervisord.pid
nodaemon = true
[program:php-fpm]
command = php-fpm
autostart = true
autorestart = true
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
[program:cron]
command = cron -f
autostart = true
autorestart = true
stdout_logfile = /dev/stdout
stdout_logfile_maxbytes = 0
stderr_logfile = /dev/stderr
stderr_logfile_maxbytes = 0
There is a bug in Debian-based distributions that causes cron jobs to fail, because Docker uses a layered filesystem and cron refuses to start, reporting NUMBER OF HARD LINKS > 1 (/etc/crontab).
The fix is simple: add touch /etc/crontab /etc/cron.*/* to the entrypoint of your container.
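A minimal entrypoint sketch applying that fix (the script layout and the final exec line are illustrative, assuming Debian's cron running in the foreground):
#!/bin/sh
# recreate the files' metadata so cron's hard-link check passes
touch /etc/crontab /etc/cron.*/*
# then start cron in the foreground as usual
exec cron -f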
I have made a blog post explaining how to setup cron in a Docker container here : https://esc.sh/blog/cron-jobs-in-docker/
I know this is an old question, but I found a fix for this on Debian and it solved my problem. Cron's PAM auth with loginuid was preventing my cron jobs from running.
RUN sed -i '/session required pam_loginuid.so/c\#session required pam_loginuid.so' /etc/pam.d/cron
I've been trying to run a Go script with cron under an Ubuntu 16.04 Docker image. Here are the files I have:
Dockerfile
FROM couchbase
RUN apt-get update
RUN apt-get install gcc make -y
RUN apt-get install golang-1.10 git -y
ADD src/crontab.txt /crontab.txt
ADD src/backup.sh /backup.sh
ADD src/backup.go /backup.go
ADD src/file.txt /file.txt
COPY entry.sh /entry.sh
RUN chmod 755 /backup.sh /entry.sh
RUN /usr/bin/crontab /crontab.txt
RUN apt-get install vim -y
CMD ["/entry.sh"]
entry.sh
#!/bin/sh
/usr/sbin/cron -f -l 8
src/crontab.txt
* * * * * /backup.sh >> /var/log/backup.log
src/backup.sh
#!/bin/sh
chmod 666 /var/log/backup.log
/usr/lib/go-1.10/bin/go run backup.go
backup.go
package main

import (
	"log"
	"os"
)

func init() {
	// direct the standard logger's output to the backup log file
	file, err := os.OpenFile("/var/log/backup.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0666)
	if err != nil {
		log.Fatal(err)
	}
	log.SetOutput(file)
}

func main() {
	log.Println("Writing log")
}
I checked, and the cron task runs each minute. The Go installation is there in the folder, and when I exec into the container it works, but the backup.go script isn't logging anything. When I trigger the script manually, though, it works. The container I'm using is based on Ubuntu 16.04, and I want it because it saves me from doing a Couchbase installation.
You can do this more simply with a multi-stage build. First use a Go image to compile a standalone executable from your src/backup.go, then switch to a couchbase image and copy the executable over from the previous stage.
Dockerfile:
# use a first-stage image to build the go code
# we'll change it later
FROM golang:1.10 AS build
# for now we only need the go code
COPY src/backup.go backup.go
# build a standalone executable
RUN go build -o /backup backup.go
# switch to a second-stage production image
FROM couchbase
# setup cronjob
COPY src/crontab.txt /crontab.txt
RUN /usr/bin/crontab /crontab.txt
# copy the executable from the first stage
# into the production image
COPY --from=build /backup /backup
CMD ["/usr/sbin/cron", "-f", "-l", "8"]
src/crontab.txt:
* * * * * /backup >> /var/log/backup.log
Build and run like this:
docker build . -t backup
# start in the background
docker run --name backup -d backup
# check if it works
docker exec backup tail -f /var/log/backup.log
A minute later:
2021/04/09 19:05:01 Writing log
I want to build my own custom docker image from nginx image.
I override the ENTRYPOINT of nginx with my own ENTRYPOINT file.
This brings me to two questions:
I think I lose some commands from the nginx image by doing so (like exposing the port). Am I right?
If I want to restart nginx, I run nginx -t && systemctl reload nginx, but the output is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
/entrypoint.sh: line 5: systemctl: command not found
How can I fix that?
FROM nginx:latest
WORKDIR /
RUN echo "deb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list
RUN apt-get -y update && \
apt-get -y install apt-utils && \
apt-get -y upgrade && \
apt-get -y clean
# I ALSO WANT TO INSTALL CERTBOT FOR LATER USE (in my entrypoint file)
RUN apt-get -y install python-certbot-nginx -t stretch-backports
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["bash", "/entrypoint.sh"]
entrypoint.sh
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && systemctl reload nginx
echo "after reload"
This will work using the service command:
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && service nginx reload
echo "after reload"
output:
in entrypoint
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restarting nginx: nginx.
after reload
Commands like service and systemctl mostly just don't work in Docker, and you should totally ignore them.
At the point where your entrypoint script is running, it is literally the only thing that is running. That means you don't need to restart nginx, because it hasn't started the first time yet. The standard pattern here is to use the entrypoint script to do some first-time setup; it will be passed the actual command to run as arguments, so you need to tell it to run them.
#!/bin/sh
echo "in entrypoint"
# ... do first-time setup ...
# ...then run the command, nginx or otherwise
exec "$#"
(Try running docker run --rm -it myimage /bin/sh. You will get an interactive shell in a new container, but after this first-time setup has happened.)
The one thing you do lose in your Dockerfile is the default CMD from the base image (setting an ENTRYPOINT resets that). You need to add back that CMD:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
You should keep the other settings from the base image, like ENV definitions and EXPOSEd ports.
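If you are unsure what defaults the base image set, you can read them back with docker inspect, for example:
# show the ENTRYPOINT and CMD baked into the base image
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' nginx:latest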
The "systemctl" command is specific to some SystemD based operating system. But you do not have such a SystemD daemon running on PID 1 - so even if you install those packages it wont work.
You can only check in the nginx.service file which command the "reload" would execute for real. Or have something like the docker-systemctl-replacement script do it for you.
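For reference, on most distributions the reload in nginx.service boils down to a plain nginx signal, which you can run directly inside the container:
# roughly what "systemctl reload nginx" executes under the hood
nginx -s reload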
Context
I'm trying to schedule some ingestion jobs in an Alpine container. It took me a while to understand why my cron jobs did not start: crond doesn't seem to be running:
rc-service -l | grep crond
According to Alpine's documentation, crond must first be started with OpenRC (i.e., Alpine's service manager, somewhat like systemctl). Here is the Dockerfile:
FROM python:3.7-alpine
# set work directory
WORKDIR /usr/src/collector
RUN apk update \
&& apk add curl openrc
# ======>>>> HERE !!!!!
RUN rc-service crond start && rc-update add crond
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/collector/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/collector/entrypoint.sh
# copy project
COPY . /usr/src/collector/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/collector/entrypoint.sh"]
entrypoint.sh merely appends the jobs to the end of /etc/crontabs/root.
Problem
I'm getting the following error:
* rc-service: service `crond' does not exist
ERROR: Service 'collector' failed to build: The command '/bin/sh -c rc-service crond start && rc-update add crond' returned a non-zero code: 1
Things are starting to feel a bit circular. How can rc-service not recognize the service when, at the same time:
sh seems to know the name crond,
and there is an /etc/crontabs/root file?
What am I missing?
Some Alpine Docker images are missing the busybox-initscripts package. Just append it to the end of your apk add command, and crond should run as a service.
You might also need to remove the following line from your Dockerfile, since busybox-initscripts appears to run crond as a service immediately after installation:
RUN rc-service crond start && rc-update add crond
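Applied to the question's Dockerfile, the apk lines would then look something like this sketch:
RUN apk update \
    && apk add curl openrc busybox-initscripts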
I was able to fix this by adding the crond command to docker-entrypoint.sh, before the actual script commands.
e.g.:
#!/bin/sh
set -e
crond &
...(rest of the original script)
This way crond is started as a detached background process.
So all the steps needed were:
Find and copy the entrypoint.sh to the build folder. I did this from a running container:
docker cp <running container name>:<path to script>/entrypoint.sh <path to Dockerfile folder>/entrypoint.sh
Modify the entrypoint.sh as stated above
Include the entrypoint.sh again in the Dockerfile used for building the custom image. Dockerfile example:
...
COPY docker-entrypoint.sh <path to script>/entrypoint.sh
RUN chmod +x <path to script>/entrypoint.sh
...
And then just build and use the new custom image:
docker build -t <registry if used>/<image name>:<tag> .
I have faced this issue as well. My solution for containers is to execute crond inside screen:
# apk add screen --no-cache
# screen -dmS crond crond -f -l 0
FROM alpine:latest
RUN touch crontab.tmp \
&& echo '* * * * * echo "123"' > crontab.tmp \
&& crontab crontab.tmp \
&& rm -rf crontab.tmp
CMD ["/usr/sbin/crond", "-f", "-d", "0"]
ref: https://gist.github.com/mhubig/a01276e17496e9fd6648cf426d9ceeec
Run this one: apk add openrc --no-cache
I'm trying to create a multi-stage build in Docker which simply runs a non-root crontab that writes to a volume accessible from outside the container. I have two permission problems: one with external access to the volume, and one with cron:
The first build in the Dockerfile creates a non-root user image, with an entrypoint and su-exec used to fix permissions on the volume.
The second build in the same Dockerfile uses the first image to run a crond process which should write to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
scrap_service:
build: .
container_name: "flight_scrap"
volumes:
- /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by Denis Bertovic to an Alpine image:
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER, i hard fix value to 1000, my current id
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permissions is:
#!/usr/bin/env bash
chown -R scrapy .
exec su-exec scrapy "$@"
The second stage (2) run the cron service to write into /backup folder mounted as volume
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file which normally create a test file into /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase:
Logging into the image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information.
But the first error: cron doesn't run correctly. When I cat /var/log/cron.log I get a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also get a second error when I try to write directly into the /backup folder using touch /backup/testFile: the /backup volume folder remains accessible only with root permissions, and I don't know why.
crond or cron should be run as root, as described in this answer.
Check out aptible/supercronic instead: a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created.
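A minimal supercronic sketch (the base image, release URL and version are illustrative assumptions; check the project's releases page for current ones):
FROM alpine:3.18
# fetch a supercronic release binary; ADD from a URL leaves it non-executable
ADD https://github.com/aptible/supercronic/releases/download/v0.2.29/supercronic-linux-amd64 /usr/local/bin/supercronic
RUN chmod +x /usr/local/bin/supercronic
# a standard five-field crontab, no user column needed
COPY crontab /home/scrapy/crontab
# supercronic runs happily as a non-root user
USER 1000
CMD ["supercronic", "/home/scrapy/crontab"]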
I need to add an environment variable to a crontab task in a Dockerfile, so that I can pass the value at runtime and have Docker pick it up.
But when I start a container from the image and pass environment variables, they are not picked up in the container.
How do I fix it?
Example below:
RUN yum install -y cronie
RUN pip install --upgrade pip
ADD dist/*.whl /opt/
RUN pip install /opt/*.whl
WORKDIR /opt/setup
VOLUME /var/docker-share
# Add task to crontab
RUN echo '0 * * * * run.sh --var1 $VAR1 --var2 $VAR2' | crontab -
# Run the command on container startup
CMD crond && while true; do sleep 3600; done
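For reference, cron does not inherit the environment Docker passes to the container's main process. A common workaround (a sketch only; values containing spaces or quotes would need extra escaping) is to dump the environment to a file at startup and source it from the cron line:
# have the cron line source the dumped environment before invoking run.sh
RUN echo '0 * * * * . /etc/container.env; run.sh --var1 "$VAR1" --var2 "$VAR2"' | crontab -
# at startup, write the runtime environment to that file, then start crond
CMD printenv | sed 's/^/export /' > /etc/container.env && crond && while true; do sleep 3600; done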