The Docker container does not run the crontab

I build a Docker image to run a crontab file:
RUN apt-get install -y cron
RUN touch /usr/local/learnintouch/cron.log
COPY learnintouch.cron /usr/local/learnintouch/
RUN chmod 0644 /usr/local/learnintouch/learnintouch.cron \
&& sudo crontab /usr/local/learnintouch/learnintouch.cron
It has an ENTRYPOINT to run a start.sh file which contains:
# Run the crontab
sudo service cron start
The learnintouch.cron file contains:
* * * * * echo "Hello cron" >> /usr/local/learnintouch/logs/cron.log 2>&1
But the log shows nothing.
Only if I connect to the container and run the start.sh file manually, that is, as the root user, does the log show the Hello cron message.
When logged in to the container, the files belong to the apache user:
root@72f59adb5324:/usr/local/learnintouch# ll
total 2852
drwxr-xr-x 1 apache apache 4096 May 11 19:42 ./
drwxr-xr-x 1 root root 4096 May 3 20:10 ../
-rwxr-xr-x 1 apache apache 0 May 11 18:56 cron.log*
-rwxr-xr-x 1 apache apache 1057 May 11 19:34 start.sh*
root@72f59adb5324:/usr/local/learnintouch# whoami
root
I reckon it's a user permissions issue.
UPDATE: I tried replicating the issue with a Dockerfile as in:
FROM ubuntu:20.10
RUN apt-get update \
&& apt-get install -y sudo \
&& apt-get autoremove -y && apt-get clean
RUN mkdir -p /usr/local/learnintouch/
RUN apt-get install -y cron
COPY learnintouch.cron /usr/local/learnintouch/
RUN chmod 0644 /usr/local/learnintouch/learnintouch.cron \
&& crontab /usr/local/learnintouch/learnintouch.cron
ENTRYPOINT ["/usr/sbin/cron", "tail", "-f", "/dev/null"]
After building the image:
docker build -t stephaneeybert/cronissue .
and running the container:
docker run --name cronissue -v ~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs stephaneeybert/cronissue
the cron started working fine and the issue would NOT show up.
So I reckoned the issue could lie within the docker-compose.yml file I use.
I thus tried running with the docker-compose.yml file:
version: "3.7"
services:
  cronissue:
    image: stephaneeybert/cronissue
    volumes:
      - "~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs"
with the Docker Swarm command:
docker stack deploy --compose-file docker-compose.yml cronissue
And again the cron started working fine and the issue would NOT show up.
So I finally added the user: "${CURRENT_UID}:${CURRENT_GID}" property that I also have in my project as in:
version: "3.7"
services:
  cronissue:
    image: stephaneeybert/cronissue
    volumes:
      - "~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs"
    user: "${CURRENT_UID}:${CURRENT_GID}"
And this time, the cron did NOT work and the issue showed up.
The issue shows up ONLY when I run the container with the host user.
As a side note, I also tried opening the file permissions but it did not change anything:
&& chmod a+x /usr/bin/crontab \
&& chmod a+x /usr/sbin/cron \
UPDATE: I ended up using supercronic instead of cron as it works fine in containers.
# Using supercronic as a cron scheduler
# See https://github.com/aptible/supercronic/
COPY supercronic-linux-amd64 /usr/local/learnintouch
COPY learnintouch.cron /usr/local/learnintouch/
RUN chmod 0644 /usr/local/learnintouch/learnintouch.cron
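The snippet above copies the binary and the crontab but does not show supercronic being started; a minimal sketch of the missing lines, assuming the release binary keeps its downloaded name, would be:
RUN chmod +x /usr/local/learnintouch/supercronic-linux-amd64
ENTRYPOINT ["/usr/local/learnintouch/supercronic-linux-amd64", "/usr/local/learnintouch/learnintouch.cron"]
supercronic takes the crontab file as its only argument, stays in the foreground, and runs happily as a non-root user, which is what makes it container-friendly.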

First, the command in ENTRYPOINT doesn't make sense. You've got:
ENTRYPOINT ["/usr/sbin/cron", "tail", "-f", "/dev/null"]
It looks like you're trying to combine two commands here (cron and tail -f /dev/null), but you can't just mash commands together like that. It looks like you're using tail -f /dev/null to keep the container running after cron backgrounds itself, but that's unnecessary -- looking at the man page, we can use the -f flag to keep cron in the foreground, so that becomes:
ENTRYPOINT ["/usr/sbin/cron", "-f"]
Fortunately, the way you've got things configured, cron is just ignoring the unknown arguments (you can run cron this is a test -f and it works) so you're accidentally doing the right thing.
The issue shows up ONLY when I run the container with the host user.
The cron daemon is designed to run as root. If I try starting your image as a non-root user, for example using your docker-compose.yml that sets the user, I get:
cronissue_1 | cron: can't open or create /var/run/crond.pid: Permission denied
And even if you fix that problem by changing the directory ownership, cron will still fail:
$ cron -f
seteuid: Operation not permitted
You'll need to run cron as root, or you'll need to find some other tooling if you really want to run a scheduler as a non-root user.
As a side note, I also tried opening the file permissions but it did not change anything:
&& chmod a+x /usr/bin/crontab \
&& chmod a+x /usr/sbin/cron \
Note that the above commands are no-ops; both files are already executable by everyone, so you haven't changed anything.

Related

Enable crond in an Alpine container

Context
I'm trying to schedule some ingestion jobs in an Alpine container. It took me a while to understand why my cron jobs did not start: crond doesn't seem to be running:
rc-service -l | grep crond
According to Alpine's documentation, crond must first be started with openrc (i.e. some kind of systemctl). Here is the Dockerfile:
FROM python:3.7-alpine
# set work directory
WORKDIR /usr/src/collector
RUN apk update \
&& apk add curl openrc
# ======>>>> HERE !!!!!
RUN rc-service crond start && rc-update add crond
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/collector/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/collector/entrypoint.sh
# copy project
COPY . /usr/src/collector/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/collector/entrypoint.sh"]
entrypoint.sh merely appends the jobs to the end of /etc/crontabs/root.
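For illustration, such an entrypoint could look roughly like this (the job line is a hypothetical placeholder, not taken from the question):
#!/bin/sh
# append the ingestion jobs to root's crontab (example job, assumed)
echo '0 * * * * python /usr/src/collector/ingest.py' >> /etc/crontabs/root
# hand control to the container's main command
exec "$@"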
Problem
I'm getting the following error:
* rc-service: service `crond' does not exist
ERROR: Service 'collector' failed to build: The command '/bin/sh -c rc-service crond start && rc-update add crond' returned a non-zero code: 1
Things are starting to feel a bit circular. How can rc-service not recognize a service when, at the same time:
sh seems to know the name crond,
there is an /etc/crontabs/root file?
What am I missing?
Some Alpine Docker containers are missing the busybox-initscripts package. Just append that to the end of your apk add command, and crond should run as a service.
You might also need to remove the following line from your Dockerfile since it seems as if busybox-initscripts runs crond as a service immediately after installation:
RUN rc-service crond start && rc-update add crond
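With busybox-initscripts added, the install line in the Dockerfile above would become something like:
RUN apk update \
&& apk add curl openrc busybox-initscripts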
I was able to fix this by adding the crond command to the docker-entrypoint.sh, prior to the actual script commands.
e.g.:
#!/bin/sh
set -e
crond &
...(rest of the original script)
This way crond is launched as a detached background process.
So all the steps needed were to:
Find and copy the entrypoint.sh to the build folder. I did this from a running container:
docker cp <running container name>:<path to script>/entrypoint.sh <path to Dockerfile folder>/entrypoint.sh
Modify the entrypoint.sh as stated above
Include the entrypoint.sh again in the Dockerfile used for building the custom image. Dockerfile example:
...
COPY docker-entrypoint.sh <path to script>/entrypoint.sh
RUN chmod +x <path to script>/entrypoint.sh
...
And then just build and use the new custom image:
docker build -t <registry if used>/<image name>:<tag> .
I have faced the issue as well. My solution for containers is to execute crond inside of screen:
# apk add screen --no-cache
# screen -dmS crond crond -f -l 0
FROM alpine:latest
RUN touch crontab.tmp \
&& echo '* * * * * echo "123"' > crontab.tmp \
&& crontab crontab.tmp \
&& rm -rf crontab.tmp
CMD ["/usr/sbin/crond", "-f", "-d", "0"]
ref: https://gist.github.com/mhubig/a01276e17496e9fd6648cf426d9ceeec
Run this one: apk add openrc --no-cache

CentOS7: How to start the slapd service in a docker container?

I want to run an OpenLDAP server in a docker container using CentOS7.
I managed to have a container running with openldap installed in it. My problem is that I am using a script entrypoint.sh to start the slapd service and add a user to my directory. I would like these two steps to be in the Dockerfile so that the password used by ldapadd is not stored in the script.
So far I have only found examples for Debian:
https://github.com/kanboard/docker-openldap/blob/master/memberUid/Dockerfile is what I would like to do, but using CentOS 7.
I tried starting the slapd service in my Dockerfile, without success.
My Dockerfile looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh script looks like this:
#!/bin/bash
exec /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" -d stats &
sleep 10
ldapadd -x -w mypassword -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
This does work, however I am looking to start the ldap service and run the ldapadd command in the Dockerfile, so as not to have mypassword stored in entrypoint.sh.
Hence I tried these commands:
RUN systemctl slapd start
RUN ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
Of course this does not work, as systemctl does not work in a Dockerfile. What is the best alternative? I was considering having one container start the ldap service, but then I do not know how to call it to build the image of the other container...
EDIT:
Thanks to Guido U. Draheim, I have an alternative to systemctl to start the slapd service.
My Dockerfile now looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable slapd
RUN systemctl start slapd;\
ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
But I have got the following error: ldap_bind: Invalid credentials (49)
(a) You could use the docker-systemctl-replacement to run your "systemctl.py start slapd", which fixes the obvious first error.
(b) Each RUN in a Dockerfile is a new container, so a process started in an earlier RUN cannot survive into the next one anyway. That's why the referenced Dockerfile example combines the commands with "&&" (see the sketch below).
And yeah, (c) I am using an openldap centos container myself. So go ahead, try again.
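As a sketch of (b), the failing pair of RUN lines from the question would collapse into a single layer, reusing the question's own paths and the systemctl.py replacement already copied into the image:
RUN systemctl start slapd \
&& ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif \
&& systemctl stop slapd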

PHP and redis in same docker image

I'm trying to add redis to a php:7.0-apache image, using this Dockerfile:
FROM php:7.0-apache
RUN apt-get update && apt-get -y install build-essential tcl
RUN cd /tmp \
&& curl -O http://download.redis.io/redis-stable.tar.gz \
&& tar xzvf redis-stable.tar.gz \
&& cd redis-stable \
&& make \
&& make install
COPY php.ini /usr/local/etc/php/
COPY public /var/www/html/
RUN chown -R root:www-data /var/www/html
RUN chmod -R 1755 /var/www/html
RUN find /var/www/html -type d -exec chmod 1775 {} +
RUN mkdir -p /var/redis/6379
COPY 6379.conf /etc/redis/6379.conf
COPY redis_6379 /etc/init.d/redis_6379
RUN chmod 777 /etc/init.d/redis_6379
RUN update-rc.d redis_6379 defaults
RUN service apache2 restart
RUN service redis_6379 start
It builds and runs fine, but redis is never started. When I run /bin/bash inside my container and manually input service redis_6379 start, it works, so I'm assuming my .conf and init.d files are okay.
While I'm aware it'd be much easier to use docker-compose, I'm specifically trying to avoid it for specific reasons.
There are multiple things wrong here:
Starting processes with RUN in a Dockerfile has no effect on the final image. A Dockerfile builds an image; the processes need to be started at container start time. This can be done with an entrypoint, defined in the Dockerfile using ENTRYPOINT. That entrypoint is typically a script that is executed when an actual container is started (see the sketch after this list).
There is no init process in docker by default, so issuing service calls will fail without further work. If you need to start multiple processes, look at the docs of the supervisord program.
Running both redis and a webserver in one container is not best practice. For a PHP application using redis you'd typically have two containers - one running redis and one running apache - and let them interact via the network.
I suggest you read the docker documentation before continuing. All this is described in depth there.
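As a sketch of point 1, the two service lines at the end of the Dockerfile would move into an entrypoint script along these lines, assuming the stock apache2-foreground helper that the php:7.0-apache base image already uses as its default command:
#!/bin/sh
# start redis in the background via the init script baked into the image
service redis_6379 start
# keep apache in the foreground as the container's main process
exec apache2-foreground
The Dockerfile would then end with ENTRYPOINT ["/entrypoint.sh"] instead of the two RUN service ... lines.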
I agree with @Richard. Use two or more containers according to your needs, then --link them in order to get things working!

How can I run a searchguard set up script after elasticsearch is up and running in docker?

I have been trying to make the searchguard setup script init_sg.sh run automatically after elasticsearch starts. I don't want to do it manually with docker exec. Here's what I have tried.
entrypoint.sh:
#! /bin/sh
elasticsearch
source init_sg.sh
Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
# Search Guard plugin
# https://github.com/floragunncom/search-guard/wiki
RUN elasticsearch-plugin install --batch com.floragunn:search-guard-6:6.1.0-20.1 \
&& chmod +x \
plugins/search-guard-6/tools/hash.sh \
plugins/search-guard-6/tools/sgadmin.sh \
&& chown -R elasticsearch config/sg/ \
&& chmod -R go= config/sg/
# This custom entrypoint script is used instead of
# the original's /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["bash","-c","entrypoint.sh"]
However, it throws a can not run elasticsearch as root error:
org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root
So I guess I cannot run elasticsearch directly in entrypoint.sh, which is confusing because there's no problem when the Dockerfile is like this:
FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.0
COPY config/ config/
COPY bin/ bin/
....
CMD ["elasticsearch"]
This thread's accepted answer doesn't work. There's no "/run/entrypoint.sh" in the container.
Solution:
Finally I've managed to get it done. Here's my custom entrypoint script that will run the searchguard setup script automatically:
source init_sg.sh
while [ $? -ne 0 ]; do
sleep 10
source init_sg.sh
done &
/bin/bash -c "source /usr/local/bin/docker-entrypoint.sh;"
If you have any alternative solutions, please feel free to answer.

How to fix permissions for an Alpine image writing files using Cron as non root user into accessible volume

I'm trying to create a multi-stage build in docker which simply runs a non-root crontab that writes to a volume accessible from outside the container. I have two permission problems: one with external access to the volume and one with cron:
the first build in the dockerfile creates a non-root user image, with an entrypoint and su-exec useful to fix permissions on the volume!
the second build in the same dockerfile uses the first image to run a crond process which normally writes to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
  scrap_service:
    build: .
    container_name: "flight_scrap"
    volumes:
      - /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by denis bertovic to an Alpine image:
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER, I hard-code the value to 1000, my current id
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permissions is:
#!/usr/bin/env bash
chown -R scrapy .
exec su-exec scrapy "$@"
The second stage (2) runs the cron service to write into the /backup folder mounted as a volume:
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file, which normally creates a test file in the /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase:
Logging into my image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information.
But first error: cron doesn't run correctly; when I cat /var/log/cron.log I get a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also have a second error when I try to write directly into the /backup folder using the command touch /backup/testFile. The /backup volume folder continues to be accessible only with root permissions; I don't know why.
crond or cron should be run as root, as described in this answer.
Check out instead aptible/supercronic, a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created.
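As a rough sketch, the tail of the stage-2 Dockerfile above could swap crond for supercronic, assuming the release binary supercronic-linux-amd64 has been downloaded next to the Dockerfile (as in the first question on this page):
COPY supercronic-linux-amd64 /usr/local/bin/supercronic
RUN chmod +x /usr/local/bin/supercronic
USER scrapy
ENTRYPOINT ["/usr/local/bin/supercronic", "/var/spool/cron/crontabs/scrapy"]
Because supercronic runs as whatever user starts it, the setuid machinery that makes busybox crond print Permission denied never comes into play.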
