Dockerfile entrypoint - docker

I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing that the Dockerfile for the erkules image already has ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I then created a separate cron-entrypoint.sh, added it in the Dockerfile, and tried something like ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"], but that also produced an error.
I also tried adding the cron job directly to the entrypoint.sh of the erkules image: when I put it at the beginning, the container runs the cron job but never executes the rest of entrypoint.sh; when I put it at the end, everything above it runs but the cron job never does.
How can I run both what's in the entrypoint.sh of the erkules image and my cron job at the same time through the Dockerfile?

You need to send the cron command to the background, so either append & or remove the -f flag (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
cron -f &
(
# the other commands here
)
Edit: I totally agree with @BMitch regarding how you should handle multiple processes inside the same container, which is generally not recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
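For example, a minimal sketch of a wrapper script, assuming it is copied into the image as /cron-entrypoint.sh and set as the new ENTRYPOINT in your Dockerfile (the name follows the question, not the upstream image):

#!/bin/bash
# /cron-entrypoint.sh: start cron, then hand off to the original entrypoint
cron -f &                     # -f keeps cron in the foreground, & puts it in the background of this script
exec /entrypoint.sh "$@"      # replace the shell with the original erkules entrypoint, passing CMD args through

Using exec keeps the original entrypoint as the container's main process, so it still receives stop signals and its exit status becomes the container's.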

The first thing to look at is whether you need multiple applications running in the same container. Ideally, the container would only run a single application. You may be able to run multiple containers for different apps and connect them together with the same networks or share a volume to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman in go. The advantage of using something like supervisord is that it will handle signals to shutdown the applications cleanly and if one process dies, you can configure it to automatically restart that app or consider the container failed and abend.
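As a rough illustration of the supervisord route (the file locations and program names here are assumptions, not taken from any specific image), the Dockerfile points CMD at supervisord and a config file declares each process:

# supervisord.conf
[supervisord]
nodaemon=true

[program:cron]
command=cron -f

[program:app]
command=/entrypoint.sh

# Dockerfile
CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]

supervisord then stays in the foreground as PID 1, can restart a program if it dies (depending on its autorestart setting), and stops both children on shutdown.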

Related

Is there an easy way to automatically run a script whenever I (re)start a container?

I have built a Docker image, copied a script into the image, and automatically execute it when I run the image, thanks to this Dockerfile command:
ENTRYPOINT ["/path/to/script/my_script.sh"]
(I had to make it executable with chmod in a RUN command to actually get it to run)
Now, I'm quite new to Docker, so I'm not sure if what I want to do is even good practice:
My basic idea is that I would rather not always have to create a new container whenever I want to run this script, but to instead find a way to re-execute this script whenever I (re)start the same container.
So, instead of having to type docker run my_image, accomplishing the same via docker (re)start container_from_image.
Is there an easy way to do this, and does it even make sense from a resource parsimony perspective?
docker run is fairly cheap, and the typical Docker model is generally that you always start from a "clean slate" and set things up from there. A Docker container doesn't have the same set of pre-start/post-start/... hooks that, for instance, a systemd job does; there is only the ENTRYPOINT/CMD mechanism. The way you have things now is normal.
Also remember that you need to delete and recreate containers for a variety of routine changes, with the most important long-term being that you have to delete a container to change the underlying image (because the installed software or the base Linux distribution has a critical bug you need a fix for). I feel like a workflow built around docker build/run/stop/rm is the "most Dockery" and fits well with the immutable-infrastructure pattern. Repeated docker stop/start as a workflow feels like you're trying to keep this specific container alive, and in most cases that shouldn't matter.
From a technical point of view you can think of the container environment and its filesystem, and the main process inside the container. docker run is actually docker create plus docker start. I've never noticed the "create" half of this taking substantial time, but if you're doing something like starting a JVM or loading a large dataset on startup, the "start" half will be slow whether or not it's coupled with creating a new container.
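To make the two halves concrete, a quick sketch with placeholder names (my_image and my_container are assumptions):

docker create --name my_container my_image   # "create" half: allocates the container and its filesystem
docker start -a my_container                 # "start" half: runs the ENTRYPOINT; -a attaches to its output
docker stop my_container
docker start -a my_container                 # the same ENTRYPOINT runs again on every start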
For the chmod issue you can do something like this (copy just the script rather than the whole build context, and execute permission is enough):
COPY my_script.sh /path/to/script/my_script.sh
RUN chmod +x /path/to/script/my_script.sh
For the rerun-script issue:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
It is triggered by either
docker run my_image
or
docker start container_from_image
So whenever your container starts, your ENTRYPOINT command will be executed.
You can refer to the Dockerfile reference on ENTRYPOINT for more detail.

What is the best way to do periodical cleanups inside a docker container?

I have a docker container that runs a simple custom download server using uwsgi on debian and a python script. The files are generated and saved inside the container for each request. Now, periodically I want to delete old files that the server generated for past requests.
So far, I achieved the cleanup via a cronjob on the host, that looks something like this:
*/30 * * * * docker exec mycontainer /path/on/container/delete_old_files.sh
But that has a few drawbacks:
Cron needs to be installed and running on the docker host
The user manually has to add a cronjob for each container they start
There is an extra cleanup script in the source
The fact that the cron job is needed needs to be documented
I would much prefer a solution that rolls out with the docker container and is also suitable for more general periodical tasks in the background of a docker container.
Any best practices on this?
Does python or uwsgi have an easy mechanism for periodical background tasks?
I'm aware that I could install cron inside the container and do something like CMD ["sh", "-c", "cron; uwsgi <uwsgi-options>... --wsgi-file server.py"], but that seems a bit clunky and against the Docker philosophy.
A solution like this in server.py:
import threading

def cleanup():
    # ... delete the old generated files here ...
    threading.Timer(30 * 60, cleanup).start()  # re-schedule itself; the interval is in seconds

cleanup()
# ... rest of the code here ...
Seems good, but I'm not sure how it interferes with uwsgi's own threading and processing.
It seems like a simple problem but isn't.
You should not store live data in containers. Containers can be a little bit fragile and need to be deleted and restarted routinely (because you forgot an option; because the underlying image has a critical security fix) and when this happens you will lose all of the data that's in the container.
What you can do instead is use a docker run -v option to cause the data to be stored in a path on the host. If they're all in the same place then you can have one cron job that cleans them all up. Running cron on the host is probably the right solution here, though in principle you could have a separate dedicated cron container that did the cleanup.
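As a sketch with made-up host paths and retention period:

# run the container with the generated files stored on the host
docker run -v /srv/downloads:/path/on/container/generated my_download_image
# one cron job in the host's crontab cleans everything older than a day, every 30 minutes
*/30 * * * * find /srv/downloads -type f -mmin +1440 -delete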

Docker dealing with processes that don't end?

I have a docker container that has services running on multiple ports.
When I try to start one of these processes mid-way through my Dockerfile it causes the build process to stall indefinitely.
RUN /opt/webhook/webhook-linux-amd64/webhook -hooks /opt/webhook/hooks.json -verbose
So the program is running as it should but it never moves on.
I've tried adding & to the end of the command to tell bash to run the next step in parallel but this causes the service to not be running in the final image. I also tried redirecting the output of the program to /dev/null.
How can I get around this?
You have a misconception here. The commands in the Dockerfile are executed to build the Docker image, before any container is run from it. One type of Dockerfile instruction is RUN, which lets you run an arbitrary shell command whose effects are baked into the image being created.
Therefore, the build process waits until the command terminates.
It seems you want to start the service when a container is started from the image. To do that, use the CMD instruction instead. It tells Docker what should be executed when the container starts.
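For the webhook example in the question, that means dropping the RUN line and instead ending the Dockerfile with an exec-form CMD (the flags below are copied from the question):

CMD ["/opt/webhook/webhook-linux-amd64/webhook", "-hooks", "/opt/webhook/hooks.json", "-verbose"]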

How to execute docker commands after a process has started

I wrote a Dockerfile for a service (I have a CMD pointing to a script that starts the process), but I cannot run any other commands after the process has started. I tried using & to run the process in the background so that the other commands would run after the process starts, but that isn't working. Any idea how to achieve this?
For example, consider I started a database server and wanted to run some scripts only after the database process has started, how do I do that?
Edit 1:
My specific use case is that I am running a RabbitMQ server as a service, and once the service starts in a container I want to create a new user, make them an administrator, and delete the default guest user. I can do this manually by logging into the docker container, but I wanted to automate it by appending these steps to the shell script that starts the rabbitmq service, and that isn't working.
Any help is appreciated!
Regards
Specifically for your problem with RabbitMQ: you can create a rabbitmq.config file and copy it into the image when building it.
In that file you can specify both a default_user and a default_pass that will be created when the database is set up from scratch; see https://www.rabbitmq.com/configure.html
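A minimal sketch of the classic Erlang-term rabbitmq.config, with placeholder credentials (check the configuration docs above for the format your RabbitMQ version expects):

% rabbitmq.config
[
  {rabbit, [
    {default_user, <<"myadmin">>},
    {default_pass, <<"mysecret">>}
  ]}
].

You would then copy it into the image, e.g. COPY rabbitmq.config /etc/rabbitmq/rabbitmq.config (the exact destination path depends on your base image).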
As for the general problem: you can change the entrypoint to a script that runs both whatever setup you need and the service itself, instead of pointing directly at the service's run script.
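A rough sketch of such an entrypoint for the RabbitMQ case (the credentials are placeholders and the readiness check may need tuning for your image):

#!/bin/bash
set -e

rabbitmq-server &                            # start the broker in the background

until rabbitmqctl status >/dev/null 2>&1; do # wait until the node responds
  sleep 1
done

rabbitmqctl add_user myadmin mysecret        # placeholder credentials
rabbitmqctl set_user_tags myadmin administrator
rabbitmqctl delete_user guest || true        # ignore the error if guest is already gone

wait                                         # keep the broker as the foreground process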
I only partially understood your question. Based on what I gathered, I would recommend adding a COPY instruction to the Dockerfile to copy the script you want to run into the image. Once you build the image and run the container, start the db service, then exec into the container and run the script manually.
If you have a CMD instruction in the Dockerfile, it will be overridden by any command you pass at execution time. So I don't think you have another option for running the script unless you drop the CMD from the Dockerfile.

Running a cronjob or task inside a docker cloud container

I'm stuck and need help. I have set up multiple stacks on Docker Cloud. The stacks run multiple containers such as data, mysql, web, elasticsearch, etc.
Now I need to run commands on the web containers. Before Docker I did this with a cronjob, e.g.:
*/10 * * * * php /var/www/public/index.php run my job
But my web Dockerfile ends with
CMD ["apache2-foreground"]
As I understand the Docker philosophy, running two processes in one container would be bad practice. But how would I schedule a job like the cronjob above?
Should I start cron in the CMD too, something like
CMD ["cron", "apache2-foreground"] (where cron should exit with 0 before apache starts)?
Should I make a start up script running both commands?
In my opinion the smartest solution would be to create another service like the dockercloud haproxy one, where other services are linked.
Then the cron service would exec commands that are defined in the Stackfile of the linked containers/stacks.
Thanks for your help
With docker in general I see 3 options:
run your cron process in the same container
run your cron process in a different container
run cron on the host, outside of docker
For running cron in the same container you can look into https://github.com/phusion/baseimage-docker
Or you can create a separate container where the only running process inside is the cron daemon. I don't have a link handy for this, but such images are out there. You then use the cron invocations to connect to the other containers and call what you want to run. With an apache container that should be easy enough: just expose some minimal HTTP endpoint that does what you want when it's called (make sure it's not vulnerable to any injection, i.e. don't pass any arguments; keep it simple, stupid).
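As a sketch of that pattern, with an invented service hostname and endpoint path, the crontab inside the dedicated cron container could just call the web container over the shared network:

*/10 * * * * curl -fsS http://web/internal/run-my-job >/dev/null 2>&1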
If you have control of the host as well then you can (ab)use the cron daemon running there (I currently do this with my containers). I don't know docker cloud, but something tells me that this might not be an option for you.
