How can I run a cron in a MariaDB container? - docker

I want to include a cron task in a MariaDB container based on the latest mariadb image, but I'm stuck.
I tried many things without success because I can't launch both MariaDB and cron.
Here is my current Dockerfile:
FROM mariadb:10.3
# DB settings
ENV MYSQL_DATABASE=beurre \
MYSQL_ROOT_PASSWORD=beurette
COPY ./data /docker-entrypoint-initdb.d
COPY ./keys/keys.enc home/mdb/
COPY ./config/encryption.cnf /etc/mysql/conf.d/encryption.cnf
# Installations
RUN apt-get update && apt-get -y install python cron
# Cron
RUN touch /etc/cron.d/bp-cron
RUN printf '* * * * * root echo "Hello world" >> /var/log/cron.log 2>&1\n#' >> /etc/cron.d/bp-cron
RUN touch /var/log/cron.log
RUN chmod 0644 /etc/cron.d/bp-cron
RUN cron
With these settings, the database starts correctly, but cron is not initialized. To make it work, I have to get into the container and execute the cron command manually, and then everything works perfectly.
So I'm looking for a way to launch both the db and cron from the Dockerfile used in my docker-compose.
If this is not possible, maybe there is another way to run scheduled tasks? The goal is to execute a script against the db.

Elaborating on @k0pernikus's comment, I would recommend using a separate container that runs cron. The cron jobs in that container can then work with your mysql database.
Here's how I would approach it:
1. Create a Cron Docker Container
You can set up a cron container fairly simply. Here's an example Dockerfile that should do the job:
FROM alpine
COPY ./crontab /etc/crontab
RUN crontab /etc/crontab
RUN touch /var/log/cron.log
CMD crond -f
Just put your crontab into a crontab file next to that Dockerfile and you should have a working cron container.
An example crontab file:
* * * * * mysql -h mysql --execute "INSERT INTO database.table VALUES 'v';"
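One caveat: the stock alpine image does not ship a MySQL client, so the cron Dockerfile above would also need to install one (depending on the Alpine release, the package is mariadb-client or mysql-client), and the crontab entry needs credentials. A sketch, reusing the root password from your question for illustration:
RUN apk add --no-cache mariadb-client
* * * * * mysql -h mysql -u root -pbeurette --execute "INSERT INTO database.table VALUES ('v');"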
2. Add the cron container to your docker-compose.yml as a service
Make sure you add your cron container to the docker-compose.yml, and put it in the same network as your mysql service:
networks:
  my_network:
services:
  mysql:
    image: mariadb
    networks:
      - my_network
  cron:
    image: my_cron
    depends_on:
      - mysql
    build:
      context: ./path/to/my/cron-docker-folder
    networks:
      - my_network
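Assuming the folder layout referenced above, both services can then be built and started together:
docker-compose up -d --build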

I recommend the solution provided by fjc. Treat this answer as nice-to-know background on why your approach is not working.
Docker's RUN commands are only executed during the image build, not on container startup.
It also has a CMD (or ENTRYPOINT) for executing specific scripts.
Since you are using mariadb, its CMD is:
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
CMD ["mysqld"]
(You can find the link to the actual Dockerfiles on Docker Hub.)
This tells docker to run:
docker-entrypoint.sh mysqld
on startup.
You'd have to override its docker-entrypoint.sh to allow for the startup of the cron job as well.
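For example, a minimal wrapper could look like this (a sketch, not the official image's mechanism: the wrapper name and location are made up, and docker-entrypoint.sh is assumed to be on the PATH as in the official image):
#!/bin/bash
# cron-entrypoint.sh -- hypothetical wrapper around the official entrypoint
# Start the cron daemon in the background first...
cron
# ...then hand off to the original entrypoint, which eventually execs mysqld.
exec docker-entrypoint.sh "$@"
And in the Dockerfile:
COPY cron-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/cron-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/cron-entrypoint.sh"]
CMD ["mysqld"]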
See the relevant part of the Dockerfile reference for the CMD instruction:
The CMD instruction has three forms:
CMD ["executable","param1","param2"] (exec form, this is the preferred form)
CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
CMD command param1 param2 (shell form)
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
Note: If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified with the JSON array format.
Note: The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').
Note: Unlike the shell form, the exec form does not invoke a command shell. This means that normal shell processing does not happen. For example, CMD [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: CMD [ "sh", "-c", "echo $HOME" ]. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
When used in the shell or exec formats, the CMD instruction sets the command to be executed when running the image.
If you use the shell form of the CMD, then the <command> will execute in /bin/sh -c:
FROM ubuntu
CMD echo "This is a test." | wc -
If you want to run your <command> without a shell then you must express the command as a JSON array and give the full path to the executable. This array form is the preferred format of CMD. Any additional parameters must be individually expressed as strings in the array:
FROM ubuntu
CMD ["/usr/bin/wc","--help"]
If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD. See ENTRYPOINT.
If the user specifies arguments to docker run then they will override the default specified in CMD.
Note: Don't confuse RUN with CMD. RUN actually runs a command and commits the result; CMD does not execute anything at build time, but specifies the intended command for the image.

Related

CMD and ENTRYPOINT with script, same Dockerfile

Trying to run a pod based on an image with this Dockerfile:
...
ENTRYPOINT [ "./mybashscript", ";", "flask" ]
CMD [ "run" ]
I would be expecting the full command to be ./mybashscript; flask run.
However, in this example, the pod / container executes ./mybashscript but not flask.
I also tried a couple of variations like:
...
ENTRYPOINT [ "/bin/bash", "-c", "./mybashscript && flask" ]
CMD [ "run" ]
Now, flask gets executed but run is ignored.
PS: I am trying to understand why this doesn't work and I am aware that I can fit all into the entrypoint or shove everything inside the bash script, but that is not the point.
In both cases you show here, you use the JSON-array exec form for ENTRYPOINT and CMD. This means no shell is run, except in the second case where you run it explicitly. The two parts are just combined together into a single command.
The first construct runs the script ./mybashscript, which must be executable and have a valid "shebang" line (probably #!/bin/bash). The script is passed three arguments, which you can see in the shell variables $1, $2, and $3: a semicolon ;, flask, and run.
The second construct runs /bin/sh -c './mybashscript && flask' run. sh -c takes a single argument, which is ./mybashscript && flask; the remaining argument run is interpreted as a positional argument, and the sh -c command would see it as $0.
The arbitrary split of ENTRYPOINT and CMD you show doesn't really make sense. The only really important difference between the two is that it is easier to change CMD when you run the container, for example by putting it after the image name in a docker run command. It makes sense to put all of the command in the command part, or none of it, but not really to put half of the command in one part and half in another.
My first pass here would be to write:
# no ENTRYPOINT
CMD ./mybashscript && flask run
Docker will insert a sh -c wrapper for you in bare-string shell form, so the && has its usual Bourne-shell meaning.
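In other words, the shell-form CMD above ends up stored in the image metadata roughly as:
CMD ["/bin/sh", "-c", "./mybashscript && flask run"]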
This setup looks like you're trying to run an initialization script before the main container command. There's a reasonably standard pattern of using an ENTRYPOINT for this. Since it gets passed the CMD as parameters, the script can end with exec "$@" to run the CMD (potentially as overridden in the docker run command). The entrypoint script could look like
#!/bin/sh
# entrypoint.sh
./mybashscript
exec "$#"
(If you wrote mybashscript, you could also end it with the exec "$@" line, and use that script as the entrypoint.)
In the Dockerfile, set this wrapper script as the ENTRYPOINT, and then whatever the main command is as the CMD.
ENTRYPOINT ["./entrypoint.sh"] # must be a JSON array
CMD ["flask", "run"] # can be either form
If you provide an alternate command, it replaces CMD, and so the exec "$@" line will run that command instead of what's in the Dockerfile, but the ENTRYPOINT wrapper still runs.
# See the environment the wrapper sets up
docker run --rm your-image env
# Double-check the data directory setup
docker run --rm -v $PWD/data:/data your-image ls -l /data
If you really want to use the sh -c form and the split ENTRYPOINT, then the command inside sh -c has to read "$@" to find its positional arguments (the CMD), plus you need to know that the first argument is $0 and not $1. The form you show would be functional if you wrote
# not really recommended but it would work
ENTRYPOINT ["/bin/sh", "-c", "./mybashscript && flask \"$#\"", "flask"]
CMD ["run"]

Override docker-compose's command then run original command

I'm using docker-compose. I need to insert some behavior into a container's startup lifecycle. (Creating a custom image is overkill and I want to avoid that.)
Ordinarily I'd override the entrypoint (run my commands then run the original entrypoint), but for this image I cannot as it performs important work.
The order I want:
entrypoint (I can't override this as it must run before my stuff)
the stuff I want to run
the original command (php-fpm)
So I tried this:
command: >
  my_extra_command_1
  my_extra_command_2
  php-fpm
and this:
command: my_extra_command_1; my_extra_command_2; php-fpm
and this:
command: ["my_extra_command_1", "my_extra_command_2", "php-fpm"]
None of these work because the container stops after my first command. It doesn't run all the commands.
What is the correct syntax?
BTW the image's Dockerfile is defined using exec form as follows:
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm"]
and the entrypoint script ends with exec "$@".
You need to arrange to have a shell run your command. Dockerfile syntax will automatically wrap commands in sh -c but Docker Compose won't.
command: sh -c 'my_extra_command_1; my_extra_command_2; php-fpm'
If you use the list syntax that manually breaks this into words, the argument to -c is a single word:
command:
  - sh
  - -c
  - my_extra_command_1; my_extra_command_2; php-fpm
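If you also want php-fpm to end up as the main container process (PID 1) rather than a child of the wrapper shell, a common variant is to exec the final command:
command: sh -c 'my_extra_command_1; my_extra_command_2; exec php-fpm'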

entrypoint: "entrypoint.sh" - docker compose

There is no file named entrypoint.sh in my workspace.
But the instruction below in docker-compose.yml refers to it:
builder:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - ../../target:/wheelhouse
  volumes_from:
    - cache
  entrypoint: "entrypoint.sh"
  command: ["pip", "wheel", "--non-index", "-f /build", "."]
where ../docker/dev/Dockerfile has
# Set defaults for entrypoint and command string
ENTRYPOINT ["test.sh"]
CMD ["python", "manage.py", "test", "--noinput"]
What does entrypoint: "entrypoint.sh" actually do?
entrypoint: "entrypoint.sh" overrides ENTRYPOINT ["test.sh"] from Dockerfile.
From the docs:
Setting entrypoint both overrides any default entrypoint set on the
service’s image with the ENTRYPOINT Dockerfile instruction, and clears
out any default command on the image - meaning that if there’s a CMD
instruction in the Dockerfile, it is ignored.
ENTRYPOINT ["test.sh"] is set in Dockerfile describing docker image
entrypoint: "entrypoint.sh" is set in docker-compose file which describes multicontainer environment while referencing the Dockerfile.
docker-compose build builder will build image and set entrypoint to ENTRYPOINT ["test.sh"] set in Dockerfile.
docker-compose up builder will start container with entrypoint entrypoint.sh pip wheel --no-index '-f /build' . set in docker-compose file
ENTRYPOINT is a command or script that is executed when you run the docker container.
If you specify entrypoint in the docker-compose.yaml, it overrides ENTRYPOINT from specified Dockerfile.
CMD is something that is passed as the parameters to the ENTRYPOINT
So if you just run the image built from dev/Dockerfile, it would execute
test.sh python manage.py test --noinput
If you overrode CMD in docker-compose.yaml as you did, it would execute
test.sh pip wheel --non-index -f /build .
But because you also overrode ENTRYPOINT in your docker-compose.yaml, it is going to execute
entrypoint.sh pip wheel --non-index -f /build .
So basically, entrypoint.sh is a script that will run inside your container builder when you execute the docker-compose up command.
Also, you can check this answer for more info: What is the difference between CMD and ENTRYPOINT in a Dockerfile?
Update:
If the base image has an entrypoint.sh, it will run that; but if you override it with your own entrypoint, then the container will run the overriding entrypoint.
If you want to override the default behaviour of the base image then you can change it; otherwise you do not need to override it from docker-compose.
What does entrypoint: "entrypoint.sh" actually do?
It totally depends on the script or command inside entrypoint.sh, but a few things can be considered.
ENTRYPOINT instruction allows you to configure a container that will
run as an executable. It looks similar to CMD, because it also allows
you to specify a command with parameters. The difference is ENTRYPOINT
command and parameters are not ignored when Docker container runs with
command line parameters. (There is a way to ignore ENTRYPOINT, but it
is unlikely that you will do it.)
In simple words, an entrypoint can be a complex bash script; for example, the mysql entrypoint is more than 200 LOC and does the following tasks:
start the MySQL server
wait for the MySQL server to come up
create the DB
perform DB migration or DB initialization
Such a complex task is not practical with CMD alone: you can run bash from CMD, but it is much more of a headache to make it work. Moving the complex work into the entrypoint also keeps the Dockerfile simple.
When there is an entrypoint, anything that is passed to CMD will be considered as arguments for the entrypoint.
In your case, CMD is CMD ["python", "manage.py", "test", "--noinput"]; it will be passed as arguments, and the best way to run it is to use
# ... a set of setup commands ...
# start the long-running process passed in from CMD at the end
exec "$@"
Finally, the exec shell construct is invoked, so that the final command given becomes the container's PID 1. $@ is a shell variable that means "all the arguments".
See also: Use a script to initialize stateful container data, and CMD vs ENTRYPOINT.

Add arguments to entrypoint/cmd for different containers

I have this simple node.js image:
FROM node:12
USER root
WORKDIR /app
COPY package.json .
COPY package-lock.json .
RUN npm i --production
COPY . .
ENTRYPOINT node dist/main.js
Ultimately, I just want to be able to pass different arguments to node dist/main.js like so:
docker run -d my-image --foo --bar=3
so that the executable when run is
node dist/main.js --foo --bar=3
I have read about CMD / ENTRYPOINT and I don't know how to do this, anybody know?
I would suggest writing a custom entrypoint script to handle this case.
In general you might find it preferable to use CMD over ENTRYPOINT in most cases. In particular, the debugging shell pattern of
docker run --rm -it myimage sh
is really useful, and using ENTRYPOINT to run your main application breaks this. The entrypoint script pattern I’m about to describe is also really useful in general and it’s easy to drop in if your main container process is described with CMD.
ENTRYPOINT ["/app/entrypoint.sh"]
CMD ["node", "dist/main.js"]
The script itself is an ordinary shell script that gets passed the CMD as command-line arguments. It will typically end with exec "$@" to actually run the CMD as the main container process.
Since the entrypoint script is a shell script, and it gets passed the command from the docker run command line as arguments, you can do dynamic switching on it, and meet both your requirement to just be able to pass additional options to your script and also my requirement to be able to run arbitrary programs instead of the Node application.
#!/bin/sh
if [ "$#" = 0 ]; then
  # no command at all: run the default application
  exec node dist/main.js
else
  case "$1" in
    # options like --foo are appended to the default command
    -*) exec node dist/main.js "$@" ;;
    # anything else replaces the command entirely
    *) exec "$@" ;;
  esac
fi
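Under this scheme the image behaves like this (illustrative invocations):
# extra options are appended to the default command:
docker run -d my-image --foo --bar=3    # -> node dist/main.js --foo --bar=3
# a bare command replaces it entirely, e.g. the debugging shell:
docker run --rm -it my-image sh         # -> sh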
This seems to work:
ENTRYPOINT ["node", "dist/main.js"]
CMD []
which appears to be equivalent to just:
ENTRYPOINT ["node", "dist/main.js"]
you can't use single quotes (double quotes are necessary), and you have to use the exec (JSON array) syntax rather than shell syntax; the shell form wraps the command in /bin/sh -c, which never appends the docker run arguments, so this style does not work:
ENTRYPOINT node dist/main.js
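A side-by-side sketch of the two forms (the --foo flag is just an illustration):
# exec (JSON array) form: docker run arguments are appended
ENTRYPOINT ["node", "dist/main.js"]
#   docker run my-image --foo   ->   node dist/main.js --foo
# shell form: wrapped in /bin/sh -c, so extra arguments are dropped
ENTRYPOINT node dist/main.js
#   docker run my-image --foo   ->   /bin/sh -c 'node dist/main.js'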

Execute a script before CMD

As per Docker documentation:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
I wish to execute a simple bash script (which processes a docker environment variable) before the CMD command (which is init in my case).
Is there any way to do this?
Use a custom entrypoint
Make a custom entrypoint which does what you want, and then execs your CMD at the end.
NOTE: if your image already defines a custom entrypoint, you may need to extend it rather than replace it, or you may change behavior you need.
entrypoint.sh:
#!/bin/sh
## Do whatever you need with env vars here ...
# Hand off to the CMD
exec "$#"
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Docker will run your entrypoint, using CMD as arguments. If your CMD is init, then:
/entrypoint.sh init
The exec at the end of the entrypoint script takes care of handing off to CMD when the entrypoint is done with what it needed to do.
Why this works
The use of ENTRYPOINT and CMD frequently confuses people new to Docker. In comments, you expressed confusion about it. Here is how it works and why.
The ENTRYPOINT is the initial thing run inside the container. It takes the CMD as an argument list. Therefore, in this example, what is run in the container is this argument list:
# ENTRYPOINT = /entrypoint.sh
# CMD = init
["/entrypoint.sh", "init"]
# or shown in a simpler form:
/entrypoint.sh init
It is not required that an image have an ENTRYPOINT. If you don't define one, Docker has a default: /bin/sh -c.
So with your original situation, no ENTRYPOINT, and using a CMD of init, Docker would have run this:
/bin/sh -c 'init'
^--------^ ^--^
| \------- CMD
\--------------- ENTRYPOINT
In the beginning, Docker offered only CMD, and /bin/sh -c was hard-coded as the ENTRYPOINT (you could not change it). At some point along the way, people had use cases where they had to do more custom things, and Docker exposed ENTRYPOINT so you could change it to anything you want.
In the example I show above, the ENTRYPOINT is replaced with a custom script. (Though it is still ultimately being run by sh, because it starts with #!/bin/sh.)
That ENTRYPOINT takes the CMD as its argument. At the end of the entrypoint.sh script is exec "$@". Since $@ expands to the list of arguments given to the script, this is turned into
exec "init"
And therefore, when the script is finished, it goes away and is replaced by init as PID 1. (That's what exec does - it replaces the current process with a different command.)
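You can see the same behaviour in any shell (a standalone illustration, unrelated to the image above):
sh -c 'echo before; exec sleep 1; echo after'
# prints "before", then sleep replaces the shell, and "after" is never printed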
How to include CMD
In the comments, you asked about adding CMD in the Dockerfile. Yes, you can do that.
Dockerfile:
CMD ["init"]
Or if there is more to your command, e.g. arguments like init -a -b, it would look like this:
CMD ["init", "-a", "-b"]
Dan's answer was correct, but I found it rather confusing to implement. For those in the same situation, here are code examples of how I implemented his explanation of the use of ENTRYPOINT instead of CMD.
Here are the last few lines in my Dockerfile:
#change directory where the mergeandlaunch script is located.
WORKDIR /home/connextcms
ENTRYPOINT ["./mergeandlaunch", "node", "keystone.js"]
Here are the contents of the mergeandlaunch bash shell script:
#!/bin/bash
#This script should be edited to execute any merge scripts needed to
#merge plugins and theme files before starting ConnextCMS/KeystoneJS.
echo Running mergeandlaunch script
#Execute merge scripts. Put in path to each merge script you want to run here.
cd ~/theme/rtb4/
./merge-plugin
#Launch KeystoneJS and ConnextCMS
cd ~/myCMS
exec "$#"
Here is how the code gets executed:
The ENTRYPOINT command kicks off the mergeandlaunch shell script
The two arguments 'node' and 'keystone.js' are passed along to the shell script.
At the end of the script, the arguments are passed on to the exec command.
The exec command then launches my node program the same way the Docker CMD command would.
Thanks to Dan for his answer.
Although I found I had to do something like this within the Dockerfile:
WORKDIR /
COPY startup.sh /
RUN chmod 755 /startup.sh
ENTRYPOINT sh /startup.sh /usr/sbin/init
NOTE: I named the script startup.sh as opposed to entrypoint.sh
The key here was that I needed to provide 'sh'; otherwise I kept getting "no such file..." errors coming out of 'docker logs -f container_name'.
See:
https://github.com/docker/compose/issues/3876
