Custom shell script in crontab - docker

I have a simple shell script that executes a docker exec command inside a container.
The script is located at /var/www/mysite-nginx/nginx-reload.sh and its permissions are -rwxrwxr-x:
#!/bin/sh
docker exec -it mysite_nginx nginx -s reload
If I execute this script directly from the shell, it works. But if I add it to my crontab with the following line, it doesn't work:
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh
I suppose cron isn't executing the command correctly, but what is wrong?
In /var/log/syslog I have:
Jul 23 15:30:01 arrubiu CRON[29511]: (sergej) CMD (/var/www/mysite-nginx/nginx-reload.sh)
[EDIT] Solved in the way described here: docker exec is not working in cron

The issue seems to be that docker is not found. There are two ways around this:
You can enter the full paths of all applications in your crontab script; you can find them using e.g. locate docker, so that the script looks something like:
#!/bin/sh
# only docker needs the host's full path; nginx runs inside the container
# (note: cron provides no TTY, so the -t flag may also need to be dropped)
/usr/bin/docker exec -it mysite_nginx nginx -s reload
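To find the full path on your host, command -v (or which) is usually more reliable than locate, since it consults the actual $PATH; for example:
command -v docker
# prints e.g. /usr/bin/docker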
Alternatively, you can set $PATH and the other environment variables the same way they are set for a normal shell session. To do that, first back up what is saved in /etc/environment, and then append the currently available variables to it:
cp /etc/environment ~/my_etc_environment_backup
env >> /etc/environment
Related questions on SO
Where can I set environment variables that crontab will use?
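For completeness, cron also reads variable assignments placed at the top of a crontab, so a third option is to set PATH there instead of touching /etc/environment. A sketch, reusing the entry from the question:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 4 * * * /var/www/mysite-nginx/nginx-reload.sh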


Docker container environment variable file during runtime

I have a Docker image that schedules a cron job at a frequency defined when building the image, using the lines below.
COPY myjobtime /etc/cron.d/myjobtime
RUN chmod 0644 /etc/cron.d/myjobtime && \
    crontab /etc/cron.d/myjobtime
CMD cron
I have the cron entry in the file myjobtime:
*/10 * * * * /usr/local/bin/sh /app/myscript.py
I would like to be able to pass the cron schedule at runtime. That is, if someone wants a different frequency, they should be able to set it while running the container by passing an environment variable file with the new cron schedule in it. Can this be done?
The important detail is that you need to create and install the crontab file when the container starts up. I find an entrypoint wrapper script to be a useful pattern for this: set the image's ENTRYPOINT to a shell script that does whatever first-time setup is required, then have it exec "$@" to run the image's CMD.
If your image is ultimately based on a Linux distribution that uses the GNU toolset, then envsubst is a really helpful program here. It reads a text file, expands environment variable references, and writes out the result. I'll assume you have it available; on Alpine-based images you can do similar tricks with sed(1) (though escaping around the cron schedule becomes tricky).
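As a quick illustration of what envsubst does (it ships with the gettext package and reads standard input when given no file):
echo 'schedule is ${CRON_SCHEDULE}' | CRON_SCHEDULE='0 0 * * *' envsubst
# prints: schedule is 0 0 * * *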
This makes the entrypoint wrapper script something like:
#!/bin/sh
# entrypoint.sh

# Set a default schedule if the user didn't provide one
if [ -z "$CRON_SCHEDULE" ]; then
  export CRON_SCHEDULE='*/10 * * * *'
fi

# Run substitutions on the template file and install the result;
# "crontab -" reads the new crontab from standard input
envsubst < /app/myjobtime.cron.tmpl | crontab -

# Run the main container command
exec "$@"
Since the template isn't a "normal" crontab, it can't go in the "normal" crontab directory; putting it in the application directory is fine. That file has an environment variable reference where the schedule would go:
# myjobtime.cron.tmpl
${CRON_SCHEDULE} /app/myscript.py
In your image, set the wrapper script as the ENTRYPOINT, make sure the template file is in the right place, and leave the CMD unchanged:
# (assuming WORKDIR is /app and there's not a broad `COPY . .`)
COPY myjobtime.cron.tmpl .
COPY entrypoint.sh .
# ENTRYPOINT must use the JSON-array (exec) syntax, or the CMD won't be passed to it
ENTRYPOINT ["/app/entrypoint.sh"]
# the CMD is unchanged
CMD cron
This should allow you to override the cron schedule:
docker run -d --name hourly myappcron
docker run -d --name daily -e 'CRON_SCHEDULE=0 0 * * *' myappcron
Since the entrypoint wrapper script runs whatever command was provided, and you can override the command pretty easily, this also lets you double-check that the right schedule got set:
docker run --rm -e 'CRON_SCHEDULE=0 0 * * *' myappcron \
crontab -l # runs instead of the cron daemon

Path is different depending on how you connect to container

I have an Alpine Docker container and, depending on how I connect using ssh, the PATH is different. If I connect using a login shell:
ssh root@localhost sh -lc env | grep PATH
this prints:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
However, if I don't use a login shell:
ssh root@localhost sh -c env | grep PATH
this prints:
PATH=/bin:/usr/bin:/sbin:/usr/sbin
Why is this happening? What do I need to do so that the second command produces the same output as the first command?
With sh -l you start a login shell:
When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior.
...
A non-interactive shell invoked with the name sh does not attempt to read any other startup files.
From https://linux.die.net/man/1/sh
That is, you can probably edit the profile files to make the login shell behave like the non-login one, but going the other way around would be harder.
I'll answer my own question. This Stack Overflow post has the main info needed: Where to set system default environment variables in Alpine Linux?
Given that, there are two alternatives:
Declare PATH using the ENV option of the Dockerfile (see the sketch below)
Or add PermitUserEnvironment yes to the sshd_config file and define PATH in ~/.ssh/environment
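A minimal sketch of the first alternative, copying the login-shell PATH from above into the Dockerfile so that non-login shells see it too:
# Dockerfile
ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin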

How to run a cron job manually and check the logs in Ubuntu

I need to check the logs printed by my cron script on an Ubuntu/CentOS server. My code is below.
backup-mongodb.sh
#!/bin/bash
# note: no spaces are allowed around "=" in a shell assignment
name="uBot_sandbox"
export DATABASE_NAME="planets"
export BACKUP_LOCATION="/home/UBOT"

# docker ps -q prints the IDs (not names) of containers matching the filter
for container_name in $(docker ps --filter="name=$name" -q); do
  echo "container name : $container_name"
done
I set this up using crontab -e, like below:
* * * * * /bin/bash /home/subhrajp/UBOT/git/uBot/python3.7-alpine3.8/app/mean-stack/node-js/utils/backup-mongodb.sh
Here I need to see the echo output of that .sh file once the cron job has started. I also need to know the command to run the script manually, without starting the cron job, so that I can use it while developing. Please help me resolve my problem.
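A common approach (a sketch, not from the original thread) is to redirect the job's output to a file in the crontab entry, tail that file to watch the logs, and invoke the script with bash directly while developing:
# crontab entry with output captured to a log file
* * * * * /bin/bash /home/subhrajp/UBOT/git/uBot/python3.7-alpine3.8/app/mean-stack/node-js/utils/backup-mongodb.sh >> /tmp/backup-mongodb.log 2>&1
# watch the echo output as the job fires
tail -f /tmp/backup-mongodb.log
# run the script by hand, without cron, while developing
/bin/bash /home/subhrajp/UBOT/git/uBot/python3.7-alpine3.8/app/mean-stack/node-js/utils/backup-mongodb.sh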

Set environment variables from a shell script in a systemd service file

I am trying to use a ready-made bash script that sets environment variables.
This is the service file I am trying to use:
[Unit]
Description=myserver service
After=multi-user.target
[Service]
Type=simple
User=ec2-user
Group=ec2-user
WorkingDirectory=/home/ec2-user/myserver/
ExecStart=/bin/sh -c '/home/ec2-user/myserver/config/myserverVars.sh ;/home/ec2-user/venv/bin/python /home/ec2-user/myserver/myserver.py 2>&1 >> /home/ec2-user/myserver/logs/systemd_myserver.log'
StandardOutput=append:/home/ec2-user/myserver/logs/systemd_stdout.log
StandardError=append:/home/ec2-user/myserver/logs/systemd_stderr.log
[Install]
WantedBy=multi-user.target
myserverVars.sh:
#!/bin/bash
export APP1=foo@gmail.com
export APP2_BIND_PASS=xxxxxx
export APP3=xxxxxx
The variables in /home/ec2-user/myserver/config/myserverVars.sh are never set, and the server starts without them, which is wrong.
I am trying to avoid using the Environment key or EnvironmentFile.
When you run /home/ec2-user/myserver/config/myserverVars.sh, it runs in a new process which exits when it finishes, so all of its changes to the environment are lost. You need to ask the current shell to execute the script without starting a new process. This is done with the source command, which is also available as the "dot" command, ".". So use:
/bin/sh -c 'source /home/ec2-user/myserver/config/myserverVars.sh; ...'
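Applied to the unit above, the ExecStart would look something like this (a sketch; the dot form is used because /bin/sh is often dash, which only supports ".", and exec is an optional refinement so the Python process replaces the shell):
ExecStart=/bin/sh -c '. /home/ec2-user/myserver/config/myserverVars.sh; exec /home/ec2-user/venv/bin/python /home/ec2-user/myserver/myserver.py'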

Running cron in a Docker container on a Windows host

I am having some problems trying to make a container that runs a cron job. I can see cron running with top inside the container, but it doesn't write to the log file as the example below attempts to; the file stays empty.
I have read answers to the same question here:
How to run a cron job inside a docker container?
Output of `tail -f` at the end of a docker CMD is not showing
But I could not make any of the suggestions work. For example, I used the Dockerfile from https://github.com/Ekito/docker-cron/:
FROM ubuntu:latest
MAINTAINER docker@ekito.fr
# Add crontab file in the cron directory
ADD crontab /etc/cron.d/hello-cron
# Give execution rights on the cron job
RUN chmod 0644 /etc/cron.d/hello-cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
#Install Cron
RUN apt-get update
RUN apt-get -y install cron
# Run the command on container startup
CMD cron && tail -f /var/log/cron.log
crontab:
* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1
# Don't remove the empty line at the end of this file. It is required to run the cron job
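To check the example quickly on a given host (a sketch; the image name is arbitrary):
docker build -t cron-test .
docker run --rm cron-test
# after a minute or two, "Hello world" lines should start appearing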
It didn't work on my machine (Windows 10). Apparently there is a Windows-specific issue, also reported by someone else: https://github.com/Ekito/docker-cron/issues/3
To test whether it was just me doing something wrong, I tried the same thing in a virtual machine running Ubuntu (an Ubuntu host instead of my Windows host), and there it worked: the log file is extended as expected.
So what can I do to try to make this work?
I tried writing to a mounted (bind) folder and to a named volume. Neither worked.
rferalli's answer on the GitHub issue did the trick for me:
"Had the same issue. Fixed it by changing line ending of the crontab file from CRLF to LF. Hope this helps!"
I have this problem too.
My workaround is to use Task Scheduler to run a .bat file that starts a container instead.
Using Task Scheduler: https://active-directory-wp.com/docs/Usage/How_to_add_a_cron_job_on_Windows.html
hello.bat
docker run hello-world
Task Scheduler action:
cmd /c hello.bat >> hello.log 2>&1
Hope this helps :)
