Context
I'm trying to schedule some ingestion jobs in an Alpine container. It took me a while to understand why my cron jobs did not start: crond doesn't seem to be running:
rc-service -l | grep crond
According to Alpine's documentation, crond must first be started with OpenRC (Alpine's init system, roughly analogous to systemctl). Here is the Dockerfile:
FROM python:3.7-alpine
# set work directory
WORKDIR /usr/src/collector
RUN apk update \
&& apk add curl openrc
# ======>>>> HERE !!!!!
RUN rc-service crond start && rc-update add crond
# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/collector/Pipfile
RUN pipenv install --skip-lock --system --dev
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/collector/entrypoint.sh
# copy project
COPY . /usr/src/collector/
# run entrypoint.sh
ENTRYPOINT ["/usr/src/collector/entrypoint.sh"]
entrypoint.sh merely appends the jobs to the end of /etc/crontabs/root.
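For illustration, a minimal entrypoint.sh along those lines might look like the following (a hypothetical sketch; the actual script and job commands are not shown in the question):

#!/bin/sh
# Hypothetical sketch: register the ingestion jobs in root's crontab
echo '*/15 * * * * python /usr/src/collector/ingest.py' >> /etc/crontabs/root
# ...then start the container's main process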
Problem
I'm getting the following error:
* rc-service: service `crond' does not exist
ERROR: Service 'collector' failed to build: The command '/bin/sh -c rc-service crond start && rc-update add crond' returned a non-zero code: 1
Things are starting to feel a bit circular. How can rc-service not recognize a service when, at the same time:
sh seems to know the name crond,
there is a /etc/crontabs/root file?
What am I missing?
Some Alpine Docker containers are missing the busybox-initscripts package. Just append that to the end of your apk add command, and crond should run as a service.
You might also need to remove the following line from your Dockerfile since it seems as if busybox-initscripts runs crond as a service immediately after installation:
RUN rc-service crond start && rc-update add crond
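In other words, a sketch of the relevant Dockerfile lines, with the rc-service line above dropped and the rest of the original Dockerfile unchanged:

RUN apk update \
&& apk add curl openrc busybox-initscripts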
I was able to fix this by adding the crond command to the docker-entrypoint.sh, prior to the actual script commands.
e.g.:
#!/bin/sh
set -e
crond &
...(rest of the original script)
This way crond is started as a detached background process.
So all the steps needed were to:
Find and copy the entrypoint.sh to the build folder. I did this from a running container:
docker cp <running container name>:<path to script>/entrypoint.sh <path to Dockerfile folder>/entrypoint.sh
Modify the entrypoint.sh as stated above
Include the entrypoint.sh again in the Dockerfile used for building the custom image. Dockerfile example:
...
COPY docker-entrypoint.sh <path to script>/entrypoint.sh
RUN chmod +x <path to script>/entrypoint.sh
...
And then just build and use the new custom image:
docker build -t <registry if used>/<image name>:<tag> .
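Once a container from the new image is up, a quick sanity check that crond is alive (the container name is a placeholder, and busybox ps is assumed to be available in the image):

docker exec <running container name> ps | grep crond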
I have faced this issue as well. My solution for containers is to execute crond inside screen:
# apk add screen --no-cache
# screen -dmS crond crond -f -l 0
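Note that screen -dmS detaches immediately, so used directly as a container's main process the container would exit right away. A sketch of the relevant Dockerfile lines that keeps screen in the foreground instead (swapping -d for -D, which does not fork):

RUN apk add screen --no-cache
CMD ["screen", "-DmS", "crond", "crond", "-f", "-l", "0"]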
FROM alpine:latest
RUN echo '* * * * * echo "123"' > crontab.tmp \
&& crontab crontab.tmp \
&& rm -f crontab.tmp
CMD ["/usr/sbin/crond", "-f", "-d", "0"]
ref: https://gist.github.com/mhubig/a01276e17496e9fd6648cf426d9ceeec
Run this one: apk add openrc --no-cache
Related
I built a Docker image to run a crontab file:
RUN apt-get install -y cron
RUN touch /usr/local/learnintouch/cron.log
COPY learnintouch.cron /usr/local/learnintouch/
RUN chmod 0644 /usr/local/learnintouch/learnintouch.cron \
&& sudo crontab /usr/local/learnintouch/learnintouch.cron
It has an ENTRYPOINT to run a start.sh file which contains:
# Run the crontab
sudo service cron start
The learnintouch.cron file contains:
* * * * * echo "Hello cron" >> /usr/local/learnintouch/logs/cron.log 2>&1
But the log shows nothing.
Only if I connect to the container and run the start.sh file manually, that is, as the root user, does the log show the Hello cron message.
When logged into the container, the files are owned by the apache user:
root@72f59adb5324:/usr/local/learnintouch# ll
total 2852
drwxr-xr-x 1 apache apache 4096 May 11 19:42 ./
drwxr-xr-x 1 root root 4096 May 3 20:10 ../
-rwxr-xr-x 1 apache apache 0 May 11 18:56 cron.log*
-rwxr-xr-x 1 apache apache 1057 May 11 19:34 start.sh*
root@72f59adb5324:/usr/local/learnintouch# whoami
root
I reckon it's a user permissions issue.
UPDATE: I tried replicating the issue with a Dockerfile as in:
FROM ubuntu:20.10
RUN apt-get update \
&& apt-get install -y sudo \
&& apt-get autoremove -y && apt-get clean
RUN mkdir -p /usr/local/learnintouch/
RUN apt-get install -y cron
COPY learnintouch.cron /usr/local/learnintouch/
RUN chmod 0644 /usr/local/learnintouch/learnintouch.cron \
&& crontab /usr/local/learnintouch/learnintouch.cron
ENTRYPOINT ["/usr/sbin/cron", "tail", "-f", "/dev/null"]
After building the image:
docker build -t stephaneeybert/cronissue .
and running the container:
docker run --name cronissue -v ~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs stephaneeybert/cronissue
the cron started working fine and the issue would NOT show up.
So I reckoned the issue could lie within the docker-compose.yml file I use.
I thus tried running with the docker-compose.yml file:
version: "3.7"
services:
cronissue:
image: stephaneeybert/cronissue
volumes:
- "~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs"
with the Docker Swarm command:
docker stack deploy --compose-file docker-compose.yml cronissue
And again the cron started working fine and the issue would NOT show up.
So I finally added the user: "${CURRENT_UID}:${CURRENT_GID}" property that I also have in my project as in:
version: "3.7"
services:
cronissue:
image: stephaneeybert/cronissue
volumes:
- "~/dev/docker/projects/common/volumes/logs:/usr/local/learnintouch/logs"
user: "${CURRENT_UID}:${CURRENT_GID}"
And this time, the cron did NOT work and the issue showed up.
The issue shows up ONLY when I run the container with the host user.
As a side note, I also tried opening the file permissions but it did not change anything:
&& chmod a+x /usr/bin/crontab \
&& chmod a+x /usr/sbin/cron \
UPDATE: I ended up using supercronic instead of cron as it works fine in containers.
# Using supercronic as a cron scheduler
# See https://github.com/aptible/supercronic/
COPY supercronic-linux-amd64 /usr/local/learnintouch
COPY learnintouch.cron /usr/local/learnintouch/
RUN chmod 0644 /usr/local/learnintouch/learnintouch.cron
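The snippet above only copies the files into place; a sketch of how the image might actually invoke supercronic (the chmod on the binary and the CMD line are assumptions, mirroring the COPY destinations above):

RUN chmod +x /usr/local/learnintouch/supercronic-linux-amd64
CMD ["/usr/local/learnintouch/supercronic-linux-amd64", "/usr/local/learnintouch/learnintouch.cron"]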
First, the command in ENTRYPOINT doesn't make sense. You've got:
ENTRYPOINT ["/usr/sbin/cron", "tail", "-f", "/dev/null"]
It looks like you're trying to combine two commands here (cron and tail -f /dev/null), but you can't just mash commands together like that. It looks like you're using tail -f /dev/null to keep the container running after cron backgrounds itself, but that's unnecessary: looking at the man page, we can use the -f flag to keep cron in the foreground, so that becomes:
ENTRYPOINT ["/usr/sbin/cron", "-f"]
Fortunately, the way you've got things configured, cron just ignores the unknown arguments (you can run cron this is a test -f and it works), so you're accidentally doing the right thing.
The issue shows up ONLY when I run the container with the host user.
The cron daemon is designed to run as root. If I try starting your image as a non-root user, for example using your docker-compose.yml that sets the user, I get:
cronissue_1 | cron: can't open or create /var/run/crond.pid: Permission denied
And even if you fix that problem by changing the directory ownership, cron will still fail:
$ cron -f
seteuid: Operation not permitted
You'll need to run cron as root, or you'll need to find some other tooling if you really want to run a scheduler as a non-root user.
As a side note, I also tried opening the file permissions but it did not change anything:
&& chmod a+x /usr/bin/crontab \
&& chmod a+x /usr/sbin/cron \
Note that the above commands are no-ops; both commands are already
executable by everyone so you haven't changed anything.
I had built a Docker container from this Dockerfile previously and it worked fine:
FROM perl:5.32
MAINTAINER Matthew Jordan Oldach, moldach686@gmail.com
WORKDIR /usr/local/bin
# Install cpan modules
RUN cpanm install --force Cwd Getopt::Long POSIX File::Basename List::Util Bio::DB::Fasta Bio::Seq Bio::SeqUtils Bio::SeqIO Set::IntervalTree Set::IntSpan
RUN apt-get install tar
# Download CooVar-v0.07
RUN wget http://genome.sfu.ca/projects/coovar/CooVar-0.07.tar.gz
RUN tar xvf CooVar-0.07.tar.gz
RUN cd coovar-0.07; chmod +x scripts/* coovar.pl
# Set WORKDIR to /data -- predefined mount location.
RUN mkdir /data
WORKDIR /data
# Set Entrypoint
ENTRYPOINT ["perl", "/usr/local/bin/coovar-0.07/coovar.pl"]
The only issue was that there is a slight difference between what is on the repo and the coovar-0.07 on our server (specifically, in the extract-cdna.pl script).
In order to reproduce our pipeline I'll need to COPY CooVar locally into the container (rather than wget).
I've therefore tried the following Dockerfile:
FROM perl:5.32
MAINTAINER Matthew Jordan Oldach, moldach686@gmail.com
WORKDIR /usr/local/bin
# Install cpan modules
RUN cpanm install --force Cwd Getopt::Long POSIX File::Basename List::Util Bio::DB::Fasta Bio::Seq Bio::SeqUtils Bio::SeqIO Set::IntervalTree Set::IntSpan
# Download CooVar-v0.07
COPY coovar-0.07 /usr/local/bin/coovar-0.07
RUN cd coovar-0.07; chmod +x scripts/* coovar.pl
# Set WORKDIR to /data -- predefined mount location.
RUN mkdir /data
WORKDIR /data
# Set Entrypoint
ENTRYPOINT ["perl", "/usr/local/bin/coovar-0.07/coovar.pl"]
It appears I could run the main script (coovar.pl) from Docker (no Permission Denied error):
# pull the container
$ sudo docker pull moldach686/coovar-v0.07:latest
# force entry point of `moldach686/coovar-v0.07` to /bin/bash
## in order to investigate file system
$ sudo docker run -it --entrypoint /bin/bash moldach686/coovar-v0.07
root@c7459dbe216a:/data# perl /usr/local/bin/coovar-0.07/coovar.pl
USAGE: ./coovar.pl -e EXONS_GFF -r REFERENCE_FASTA (-t GVS_TAB_FORMAT | -v GVS_VCF_FORMAT) [-o OUTPUT_DIRECTORY] [--circos] [--feature_source] [--feature_type]
Program parameter details provided in file README.
However, when I tried to incorporate this into my Snakemake workflow I get the following Permission Denied error:
Workflow defines that rule get_vep_cache is eligible for caching between workflows (use the --cache argument to enable this).
Building DAG of jobs...
Using shell: /cvmfs/soft.computecanada.ca/nix/var/nix/profiles/16.09/bin/bash
Provided cores: 1 (use --cores to define parallelism)
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 coovar
1
[Tue Nov 3 21:56:51 2020]
rule coovar:
input: variant_calling/varscan/MTG470.vcf, refs/c_elegans.PRJNA13758.WS265.genomic.fa
output: annotation/coovar/varscan/MTG470/categorized-gvs.gvf, annotation/coovar/varscan/MTG470.annotated.vcf, annotation/coovar/varscan/filtration/MTG470_keep.tsv, annotation/coovar/varscan/filtration/MTG470_exclude.tsv
jobid: 0
wildcards: sample=MTG470
resources: mem=4000, time=10
Activating singularity image /scratch/moldach/COOVAR/cbc22e3a26af1c31fb0e4fcae240baf8.simg
Can't open perl script "/usr/local/bin/coovar-0.07/coovar.pl": Permission denied
The solution I found to work was adding the following line to the Dockerfile:
RUN echo "user ALL=NOPASSWD: ALL" >> /etc/sudoers
This adds the user to the sudoers file, granting it passwordless sudo permissions:
FROM perl:5.32
MAINTAINER Matthew Jordan Oldach, moldach686@gmail.com
USER root
WORKDIR /usr/local/bin
# Install cpan modules
RUN cpanm install --force Cwd Getopt::Long POSIX File::Basename List::Util Bio::DB::Fasta Bio::Seq Bio::SeqUtils Bio::SeqIO Set::IntervalTree Set::IntSpan
RUN echo "user ALL=NOPASSWD: ALL" >> /etc/sudoers
# Download CooVar-v0.07
COPY coovar-0.07 /usr/local/bin/coovar-0.07
RUN cd coovar-0.07; chmod a+rwx scripts/* coovar.pl
# Download Bedtools 2.27.1
ENV VERSION 2.27.1
ENV NAME bedtools2
ENV URL "https://github.com/arq5x/bedtools2/releases/download/v${VERSION}/bedtools-${VERSION}.tar.gz"
WORKDIR /tmp
RUN wget -q -O - $URL | tar -zxv && \
cd ${NAME} && \
make -j 4 && \
cd .. && \
cp ./${NAME}/bin/bedtools /usr/local/bin/ && \
strip /usr/local/bin/*; true && \
rm -rf ./${NAME}/
# Set WORKDIR to /data -- predefined mount location.
RUN mkdir /data
WORKDIR /data
# Set Entrypoint
ENTRYPOINT ["perl", "/usr/local/bin/coovar-0.07/coovar.pl"]
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.2
USER root
WORKDIR /usr/share/elasticsearch/
ENV ES_HOSTNAME elasticsearch
ENV ES_PORT 9200
RUN chown elasticsearch:elasticsearch config/elasticsearch.yml
RUN chown -R elasticsearch:elasticsearch data
# install security plugin
RUN bin/elasticsearch-plugin install -b com.floragunn:search-guard-5:5.5.2-16
COPY ./safe-guard/install_demo_configuration.sh plugins/search-guard-5/tools/
COPY ./safe-guard/init-sgadmin.sh plugins/search-guard-5/tools/
RUN chmod +x plugins/search-guard-5/tools/init-sgadmin.sh
ADD ./run.sh .
RUN chmod +x run.sh
RUN chmod +x plugins/search-guard-5/tools/install_demo_configuration.sh
RUN ./plugins/search-guard-5/tools/install_demo_configuration.sh -y
RUN chmod +x sgadmin_demo.sh
RUN yum install tree -y
#RUN curl -k -u admin:admin https://localhost:9200/_searchguard/authinfo
RUN usermod -aG wheel elasticsearch
USER elasticsearch
EXPOSE 9200
#ENTRYPOINT ["nohup", "./run.sh", "&"]
ENTRYPOINT ["/usr/share/elasticsearch/run.sh"]
#CMD ["echo", "hello"]
Once I add either CMD or ENTRYPOINT, the container exits with code 0. run.sh contains:
#!/bin/bash
exec "$@"
If I comment out ENTRYPOINT or CMD, all is great.
What am I doing wrong?
If you take a look at the official 5.6.9 elasticsearch Dockerfile, you will see the following at the bottom:
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["elasticsearch"]
If you do not know the difference between CMD and ENTRYPOINT, read this answer.
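In short, the two combine: the CMD array is appended to the ENTRYPOINT array as its arguments, so the official image effectively runs:

/docker-entrypoint.sh elasticsearch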
What you're doing is overwriting those two instructions with something else. What you really need is to extend CMD. What I usually do in my images is create an sh script that combines the different things I need, and then point CMD at that script. So, you need to run sgadmin_demo.sh, but you need to wait for elasticsearch first. Create a start.sh script:
#!/bin/bash
# start elasticsearch in the background so the script can continue
elasticsearch &
# give the node time to come up before running sgadmin
sleep 15
./sgadmin_demo.sh
# keep the container alive on the elasticsearch process
wait
Now, add your script to your image and run it on CMD:
FROM ...
...
COPY start.sh /tmp/start.sh
CMD ["/tmp/start.sh"]
Now it should be executed once you start a container. Make sure start.sh is executable before building (or add RUN chmod +x /tmp/start.sh). Don't forget to build :)
I'm trying to create a multi-stage build in Docker which simply runs a non-root crontab that writes to a volume accessible from outside the container. I have two permission problems: one with external access to the volume, and one with cron:
the first stage in the Dockerfile creates a non-root user image, with an entrypoint that uses su-exec to fix volume permissions;
the second stage in the same Dockerfile uses the first image to run a crond process which should write to the /backup folder.
The docker-compose.yml file to build the dockerfile:
version: '3.4'
services:
  scrap_service:
    build: .
    container_name: "flight_scrap"
    volumes:
      - /home/rey/Volumes/mongo/backup:/backup
In the first stage of the Dockerfile (1), I try to adapt the answer given by Denis Bertovic to an Alpine image:
############################################################
# STAGE 1
############################################################
# Create first stage image
FROM gliderlabs/alpine:edge as baseStage
RUN echo http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories
RUN apk add --update && apk add -f gnupg ca-certificates curl dpkg su-exec shadow
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
# ADD NON ROOT USER, i hard fix value to 1000, my current id
RUN addgroup scrapy \
&& adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]
My docker-entrypoint.sh to fix permissions is:
#!/usr/bin/env bash
chown -R scrapy .
exec su-exec scrapy "$@"
The second stage (2) runs the crond service to write into the /backup folder mounted as a volume:
############################################################
# STAGE 2
############################################################
FROM baseStage
MAINTAINER rey
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apk add busybox-suid
RUN apk add -f tini bash build-base curl
# CREATE FUTURE VOLUME FOLDER WRITEABLE BY SCRAPY USER
RUN mkdir /backup && chown scrapy:scrapy /backup
# INIT NON ROOT USER CRON CRONTAB
COPY crontab /var/spool/cron/crontabs/scrapy
RUN chmod 0600 /var/spool/cron/crontabs/scrapy
RUN chown scrapy:scrapy /var/spool/cron/crontabs/scrapy
RUN touch /var/log/cron.log
RUN chown scrapy:scrapy /var/log/cron.log
# Switch to user SCRAPY already created in stage 1
WORKDIR /home/scrapy
USER scrapy
# SET TIMEZONE https://serverfault.com/questions/683605/docker-container-time-timezone-will-not-reflect-changes
VOLUME /backup
ENTRYPOINT ["/sbin/tini"]
CMD ["crond", "-f", "-l", "8", "-L", "/var/log/cron.log"]
The crontab file which normally create a test file into /backup volume folder:
* * * * * touch /backup/testCRON
DEBUG phase :
Logging into my image with bash, it seems the image correctly runs as the scrapy user:
uid=1000(scrapy) gid=1000(scrapy) groups=1000(scrapy)
The crontab -e command also gives the correct information
But here is the first error: cron doesn't run correctly; when I cat /var/log/cron.log I have a permission denied error:
crond: crond (busybox 1.27.2) started, log level 8
crond: root: Permission denied
crond: root: Permission denied
I also have a second error when I try to write directly into the /backup folder using the command touch /backup/testFile. The /backup volume folder remains accessible only with root permissions; I don't know why.
crond or cron should be run as root, as described in this answer.
Check out instead aptible/supercronic, a crontab-compatible job runner designed specifically to run in containers. It will accommodate any user you have created.
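A minimal sketch of that approach adapted to this setup (the release URL, version, and paths are illustrative assumptions, not taken from the question; pin and checksum-verify the binary in a real build):

FROM alpine:edge
RUN apk add --no-cache curl ca-certificates
# fetch a supercronic release binary (URL/version illustrative)
RUN curl -fsSL -o /usr/local/bin/supercronic \
    https://github.com/aptible/supercronic/releases/download/v0.2.29/supercronic-linux-amd64 \
 && chmod +x /usr/local/bin/supercronic
RUN addgroup scrapy && adduser -h /home/scrapy -u 1000 -S -G scrapy scrapy
RUN mkdir /backup && chown scrapy:scrapy /backup
COPY crontab /home/scrapy/crontab
USER scrapy
# supercronic runs the crontab as the current, non-root user
CMD ["supercronic", "/home/scrapy/crontab"]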
Right now, I am using a docker-compose file that contains, amongst other stuff, a few lines like this. It executes without any sort of problem: it deploys perfectly and I'm able to access the web server inside through the browser.
container:
  command: bash -c "cd /code; chmod +x ./deploy/start_dev.sh; ./deploy/start_dev.sh;"
  image: python:3.6
As I needed to be able to connect to the container through SSH I created a Dockerfile that installs it and modifies the config file so it allows unsafe root connections:
FROM python:3.6
RUN apt-get update && apt-get install openssh-server -y
RUN sed -i "s/PermitRootLogin without-password/PermitRootLogin yes/g" /etc/ssh/sshd_config
RUN sed -i "s/PermitEmptyPasswords no/PermitEmptyPasswords yes/g" /etc/ssh/sshd_config
RUN service ssh restart
RUN echo "root:sshpassword" | chpasswd
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/bin/bash"]
After that I changed the docker-compose file to:
container:
  command: bash -c "cd /code; chmod +x ./deploy/start_dev.sh; ./deploy/start_dev.sh;"
  build:
    context: .
From this moment on, whenever I run docker-compose up I get the following output:
container exited with code 0
Is there something I am missing?
In your docker-compose.yaml file, add the following parameter (under the 'container' section):
tty: true
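For example, keeping the rest of the service definition from the question unchanged:

container:
  build:
    context: .
  command: bash -c "cd /code; chmod +x ./deploy/start_dev.sh; ./deploy/start_dev.sh;"
  tty: true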
Solved it by switching the last two lines of the Dockerfile
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["/bin/bash"]
to
CMD ["/bin/bash", "-c", "/bin/bash"]