I'm trying to build a very simple Alpine Docker container, where I want to execute a python script every minute using cron.
I can correctly create the container, copy the files, and create a cron entry; however, the main.log file is never created, so I suspect that cron is not executing the script, and I don't know why.
My crontab.txt:
* * * * * python main.py >> main.log
My python script:
print("Hello World")
My Dockerfile (I added the last line because my understanding is that otherwise the container would stop after setting the crontab):
FROM python:3.7-alpine
WORKDIR /code
COPY . .
RUN /usr/bin/crontab crontab.txt
ENTRYPOINT ["tail", "-f", "/dev/null"]
My Docker Compose file:
version: "3.8"
services:
app:
build: .
Inside the container I can do an ls and I see
/code # ls
Dockerfile crontab.txt docker-compose.yml main.py
Doing crontab -l correctly shows the crontab entry
/code # crontab -l
* * * * * python main.py >> main.log
And manually executing the script correctly creates the log file
/code # python main.py >> main.log && cat main.log
Hello World
However, if I don't manually execute the script, the log file is never created, which makes me think the script is not being run every minute.
Am I missing something?
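(For what it's worth, the usual culprit in setups like this: tail -f /dev/null keeps the container alive, but nothing ever starts the cron daemon itself, so the installed crontab never fires. Below is a minimal sketch of a Dockerfile whose main process is BusyBox crond — which Alpine images ship — running in the foreground. Note too that cron does not run from /code, so absolute paths in the crontab entry are safer.)

FROM python:3.7-alpine
WORKDIR /code
COPY . .
RUN /usr/bin/crontab crontab.txt
# Run BusyBox crond in the foreground (-f) so the daemon is actually up
# while the container runs; -l 2 sets the log level.
CMD ["crond", "-f", "-l", "2"]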
Problem Description
I have a Docker image which I build and run using docker-compose. Normally I develop on WSL2, and when running docker-compose up --build the image builds and runs successfully. On another machine, using Windows PowerShell, with an identical clone of the code, executing the same command successfully builds the image, but gives an error when running.
Error
[+] Running 1/1
- Container fastapi-service Created 0.0s
Attaching to fastapi-service
fastapi-service | exec /start_reload.sh: no such file or directory
fastapi-service exited with code 1
I'm fairly experienced using Docker, but am a complete novice with PowerShell and developing on Windows more generally. Is there a difference in Dockerfile construction in this context, or a difference in the execution of COPY and RUN statements?
Code snippets
Included are all parts of the code required to replicate the error.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.7
COPY requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY ./start.sh /start.sh
RUN chmod +x /start.sh
COPY ./start_reload.sh /start_reload.sh
RUN chmod +x /start_reload.sh
COPY ./data /data
COPY ./app /app
EXPOSE 8000
CMD ["/start.sh"]
docker-compose.yml
services:
  web:
    build: .
    container_name: "fastapi-service"
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    command: /start_reload.sh
start_reload.sh
This is a small shell script which runs a prestart.sh if present, and then launches gunicorn/uvicorn in "reload mode":
#!/bin/sh

# If there's a prestart.sh script in the /app directory, run it before starting
PRE_START_PATH=/app/prestart.sh

HOST=${HOST:-0.0.0.0}
PORT=${PORT:-8000}
LOG_LEVEL=${LOG_LEVEL:-info}

echo "Checking for script in $PRE_START_PATH"
if [ -f "$PRE_START_PATH" ] ; then
    echo "Running script $PRE_START_PATH"
    . "$PRE_START_PATH"
else
    echo "There is no script $PRE_START_PATH"
fi

# Start Uvicorn with live reload
exec uvicorn --host "$HOST" --port "$PORT" --log-level "$LOG_LEVEL" main:app --reload
The solution lies in a difference between UNIX and Windows systems and the way they end lines; see "Difference between CR LF, LF and CR line break types?" for a discussion.
When the script is checked out with Windows CR-LF line endings, the system tries to resolve a name with a trailing carriage return (effectively start_reload.sh followed by CR, or /bin/sh followed by CR in the script's shebang), while the file that actually exists is simply start_reload.sh, hence the no such file or directory error.
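(A hedged note on fixes: either keep the checkout in LF form — for example a .gitattributes rule like *.sh text eol=lf — or normalize the file during the build. A sketch of the build-time variant, stripping carriage returns before marking the script executable; it assumes GNU sed, which this Debian-based image provides:)

COPY ./start_reload.sh /start_reload.sh
# Strip Windows carriage returns, then mark executable
RUN sed -i 's/\r$//' /start_reload.sh && chmod +x /start_reload.sh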
In my development environment, I have a docker-compose.yml which builds an image from a Dockerfile.
I then push this to dockerhub, and my PROD servers use that image (using docker-compose-prod.yml). In PROD I'm using docker-swarm (currently three nodes).
When building the image, in the Dockerfile, it uses entrypoint.sh so I can run both apache and cron within the same container.
This is a requirement of the web app I'm working with (whilst I'd ideally split the cron out into a separate container, it's not practical).
The result is that the crontab runs no matter the environment; i.e. whenever I docker-compose up in my DEV environment, the crontab runs. For a variety of reasons, this isn't desired.
I already have different docker-compose.yml files on DEV vs PROD (*-prod.yml), but the task to start crontab is within entrypoint.sh which is embedded within the image.
So, is there any way I can pass some type of server tag / environment variable to entrypoint.sh so that on anything other than PROD it doesn't start/add the crontab?
For example:
new-entrypoint.sh
#!/bin/bash
if NODE IS LIVE   # pseudocode condition – this is the part I'm asking about
then
    # Setup a cron schedule
    echo "* * * * * /usr/local/bin/php /var/www/html/cron.php >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > /var/www/scheduler.txt
    crontab /var/www/scheduler.txt

    # cron
    service cron start
fi

/usr/sbin/apache2ctl -D FOREGROUND
If I were to use a .env file and variable, firstly I'm not sure how I'd reference it in entrypoint.sh, but also, how would I manage different .env files between DEV and PROD (bear in mind I'm running PROD as a docker swarm)?
Or, can I use a node tag (like I can for docker-compose > deploy > placement > constraints)?
Ideally it would, by default, NOT run the crontab, and only run it if a specific ENV var or tag is present.
Thanks!
FILES
docker-compose.yml
version: "3.5"
services:
example:
build:
context: .
dockerfile: ./example.Dockerfile
image: username/example
...
docker-compose-prod.yml
version: '3.5'
services:
  example:
    image: username/example
...
Dockerfile
FROM php:7.4-apache
RUN ....
# MULTIPLE ENTRYPOINTS to enable cron and apache
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT /entrypoint.sh
entrypoint.sh
#!/bin/bash
# Setup a cron schedule
echo "* * * * * /usr/local/bin/php /var/www/html/cron.php >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > /var/www/scheduler.txt
crontab /var/www/scheduler.txt
#cron
service cron start
/usr/sbin/apache2ctl -D FOREGROUND
You can use ENV vars this way:
Set an ENV var in docker-compose-prod.yml:
version: '3.5'
services:
  example:
    image: username/example
    environment:
      APP_ENV: prd
...
And use it in entrypoint.sh:
#!/bin/bash
if [ "$APP_ENV" == "prd" ]
then
    # Setup a cron schedule
    echo "* * * * * /usr/local/bin/php /var/www/html/cron.php >> /var/log/cron.log 2>&1
# This extra line makes it a valid cron" > /var/www/scheduler.txt
    crontab /var/www/scheduler.txt

    # cron
    service cron start
fi

/usr/sbin/apache2ctl -D FOREGROUND
In other environments (DEV, for example) the var is not set, so entrypoint.sh doesn't execute the code inside the if. Hope it helps.
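(One hedged addition for the swarm half of the question: docker stack deploy honours an environment: block like the one above, but unlike docker-compose it does not automatically load a .env file, so hard-coding APP_ENV: prd in the prod compose file, as shown, is the simplest route. Deploying it to the swarm would look something like this, with a hypothetical stack name:)

docker stack deploy -c docker-compose-prod.yml mystack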
I've created a simple Django application, and I want to set up a cron job. I'm using the django-cron package.
I tried 2 approaches. For the first one, without docker-compose, I used this approach, but then I realised it wasn't working, as the Alpine shell is BusyBox and it didn't have the necessary commands.
Then for the second way, I commented out a few commands in Dockerfile and followed the approach shown in this repository.
I've tried literally everything over 3 days, but every approach has some problem that I cannot fix.
Keep the following things in mind:
The Alpine version DOES NOT have the apt-get, service, or cron commands.
I don't want to use an Ubuntu base OS image, as it is very big (BUT IF YOU PROVIDE A PERFECT WORKING SOLUTION, I'M WILLING TO DO ANYTHING).
Dockerfile file
# syntax=docker/dockerfile:1
FROM python:3.10.2-alpine
ENV PYTHONUNBUFFERED=1
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Creating DB Tables
RUN python manage.py makemigrations
RUN python manage.py migrate
# Configuring CRONJOB
COPY bashscript /bin/bashscript
# COPY cronjob /var/spool/cron/crontabs/root
# RUN chmod +x /bin/bashscript
# RUN crond -l 2 -b # THIS ISN'T WORKING FOR IDK WHAT REASON
RUN python manage.py collectstatic --noinput
EXPOSE 8000
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
docker-compose.yml file
version: "3.9"
services:
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
mycron:
build: .
volumes:
- .:/code
entrypoint: sh /usr/src/app/crontab.sh
crontab.sh file
#!/usr/bin/env bash
# Ensure the log file exists
touch /var/log/crontab.log
# Ensure permission on the command
chmod a+x /bin/bashscript
# Added a cronjob in a new crontab
echo "*/1 * * * * python manage.py runcrons >> /var/log/crontab.log 2>&1" > /etc/crontab
# Registering the new crontab
crontab /etc/crontab
# Starting the cron
/usr/sbin/service cron start # CAN'T USE THIS BECAUSE service is not a command
# Displaying logs
# Useful when executing docker-compose logs mycron
tail -f /var/log/crontab.log
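(For reference, a sketch of an Alpine/BusyBox-friendly variant of this script, since the service command doesn't exist there; it assumes BusyBox crond, which Alpine ships, and uses Alpine's root crontab file:)

#!/bin/sh
touch /var/log/crontab.log
chmod a+x /bin/bashscript
# Alpine's root crontab lives in /etc/crontabs/root
echo "*/1 * * * * /bin/bashscript >> /var/log/crontab.log 2>&1" > /etc/crontabs/root
# BusyBox crond can itself run in the foreground (-f), keeping the container alive
exec crond -f -l 2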
bashscript file
#!/bin/sh
python manage.py runcrons # THIS IS THE COMMAND I WANT TO EXECUTE EVERY nth MINUTES
cronjob file
# do daily/weekly/monthly maintenance
# min hour day month weekday command
*/1 * * * * /bin/bashscript
You want to run a script in a container, but the cron job doesn't need to be configured in the container itself.
You can create a script in the container that does whatever you want. Then, on the host server, schedule a cron job that runs a docker exec command to execute that script inside the container. Solved.
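(A sketch of what that host-side entry could look like; the container name and log path are hypothetical and depend on your compose project:)

# On the host: crontab -e, then e.g.
# (docker exec runs in the image's WORKDIR, /usr/src/app, where manage.py lives)
*/1 * * * * docker exec myproject_mycron_1 python manage.py runcrons >> /var/log/runcrons.log 2>&1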
I've run into an issue with a docker-compose service that is built from a Dockerfile.
I provide a .env file in my app/ folder, and I want the TAG value from that .env file to be propagated/rendered into the config.ini file. I tried to achieve this using entrypoint.sh (which is launched after the volumes are mounted), but it failed.
Here is my docker-compose.yml file:
# file docker-compose.yml
version: "3.4"
app-1:
build:
context: ..
dockerfile: deploy/Dockerfile
image: my_image:${TAG}
environment:
- TAG=${TAG}
volumes:
- ../config.ini:/app/config.ini
And then my Dockerfile:
# file Dockerfile
FROM python:3.9
RUN apt-get update -y
RUN apt-get install -y python-pip
COPY ./app /app
WORKDIR /app
RUN pip install -r requirements.txt
RUN chmod 755 entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["python", "hello_world.py"]
In my case, I mount a config.ini file with configuration like:
# file config.ini
[APP_INFO]
name = HELLO_WORLD
version = {TAG}
And finally, in my app folder, I have a .env file containing the version of the app, which evolves over time.
# file .env
TAG=1.0.0
Finally, my entrypoint.sh:
#!/bin/bash
echo "TAG:${TAG}"
# awk -v passes the shell variable in; inside single quotes, ${TAG}
# would not be expanded by the shell.
awk -v tag="$TAG" '{sub("{TAG}", tag)}1' /app/config.ini > /app/final_config.ini
mv /app/final_config.ini /app/config.ini
exec "$@"   # hand off to the CMD
My entrypoint.sh is called before the Dockerfile's CMD and after the docker-compose volumes are mounted. With it, I want to overwrite the mounted file with a new one created using awk.
Unfortunately, while I do recover the tag and can create a final_config.ini file, I'm not able to overwrite config.ini with it.
I get this error:
mv: cannot move '/app/final_config.ini' to '/app/config.ini': Device or resource busy
How can I overwrite config.ini without getting this error? Is there a simpler solution?
Because /app/config.ini is a mountpoint, you can't replace it. You should be able to rewrite it, like this...
cat /app/final_config.ini > /app/config.ini
...but that would, of course, modify the original file on your host. For what you're doing, a better solution is probably to mount the template configuration in an alternate location, and then generate /app/config.ini. E.g., mount it on /app/template_config.ini:
volumes:
  - ../config.ini:/app/template_config.ini
And then modify your script to output to the final location:
#!/bin/bash
echo "TAG:${TAG}"
# awk -v passes the shell variable in; inside single quotes, ${TAG}
# would not be expanded by the shell.
awk -v tag="$TAG" '{sub("{TAG}", tag)}1' /app/template_config.ini > /app/config.ini
exec "$@"   # hand off to the CMD
I have a docker-compose file with a service called 'app'. When I try to run it, I don't see the service with docker ps, but I do with docker ps -a (i.e. the container exits right away).
I looked at the logs:
docker logs my_app_1
python: can't open file '//apps/index.py': [Errno 2] No such file or directory
In order to debug I wanted to be able to see the home directory and the files and dirs contained there when the app attempts to run.
Is there a command I can add to docker-compose that would show me the pwd and ls -l of the container when it attempts to run index.py?
My Dockerfile:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "apps/index.py"]
My docker-compose.yaml:
version: '3.1'
services:
  app:
    build:
      context: ./app
      dockerfile: ./Dockerfile
    depends_on:
      - db
    ports:
      - 8050:8050
My directory structure:
my_app:
* docker-compose.yaml
* app
    * Dockerfile
    * apps
        * index.py
You can add a RUN statement in the application Dockerfile to run these commands.
Example:
FROM python:3
COPY . .
RUN pip install -r requirements.txt
# Run your commands
RUN pwd && ls -l
CMD ["python", "apps/index.py"]
Then you can check the logs of the build process and view the results.
I hope this answer helps you.
If you're just trying to debug an image you've already built, you can docker-compose run an alternate command:
docker-compose run app \
  ls -l ./apps
You don't need to modify anything in your Dockerfile to be able to do this (assuming it uses CMD correctly; see below).
If you need to do more intensive debugging, you can docker-compose run app sh (or, if your image has it, bash) to get an interactive shell. The container will include any mounted volumes and be on the same Docker network as the named container, but won't have published ports.
Note that the command here replaces the CMD in the Dockerfile. If your image uses ENTRYPOINT for its main command, or if it has a complete command split between ENTRYPOINT and CMD (especially, if you have ENTRYPOINT ["python"]), these need to be combined into a single CMD for this to work. If your ENTRYPOINT is a wrapper script that does some first-time setup and then runs the CMD, this approach will work fine; the debugging ls or sh will run after the first-time setup happens.
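(For reference, the wrapper-script ENTRYPOINT pattern described above typically looks like this — a sketch, with a hypothetical setup step:)

#!/bin/sh
# One-time setup, then hand off to whatever command was supplied
# (the image's CMD, or the command passed to docker-compose run).
/docker-setup.sh   # hypothetical first-time setup
exec "$@"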