Create a runit service that doesn't automatically start - docker

I'm working on a Docker container built on Phusion's baseimage which needs to have a number of services only started on demand. I'd like these services to remain as runit services, I'd just like them to not automatically start on boot.
As seen in their documentation, you can easily add a service by creating a folder in /etc/service with the name of your service, e.g. /etc/service/jboss. Next, you must create and chmod +x a file in that service directory called run, which executes the startup of your service.
How can I do this and ensure that the service will not start on boot? The goal is still to be able to do sv start jboss, but to not have it start on boot.

Add your services to /etc/sv/<SERVICE_NAME>/ and add the run executable just like you are doing now. When you are ready to run the service, simply symlink it to /etc/service and runit will pick it up and start running it automatically.
Here's a short (non-optimized) Dockerfile that shows a disabled service and an enabled service. The enabled service will start at Docker run. The disabled service will not start until it is symlinked to /etc/service, at which time runit will start it within five seconds.
FROM phusion/baseimage
# disabled service: lives only under /etc/sv, so runit won't start it at boot
RUN mkdir /etc/sv/disabled_service
ADD disabled_service.sh /etc/sv/disabled_service/run
RUN chmod 700 /etc/sv/disabled_service/run
# enabled service: symlinked into /etc/service, so runit starts it automatically
RUN mkdir /etc/sv/enabled_service
ADD enabled_service.sh /etc/sv/enabled_service/run
RUN chmod 700 /etc/sv/enabled_service/run
RUN ln -s /etc/sv/enabled_service /etc/service/enabled_service
CMD ["/sbin/my_init"]

With phusion/baseimage:0.9.17 (not sure in which version it was introduced) you can bake RUN touch /etc/service/jboss/down into your Dockerfile. It prevents runit from starting the service on boot, and you're still able to sv start jboss later.
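A minimal Dockerfile sketch of that approach; jboss.sh is a placeholder for whatever run script starts your service:
RUN mkdir -p /etc/service/jboss
ADD jboss.sh /etc/service/jboss/run
RUN chmod +x /etc/service/jboss/run
# the "down" file tells runit not to start this service automatically
RUN touch /etc/service/jboss/down
At runtime, sv start jboss still starts it on demand.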

I'm looking at exactly the same problem (when running Cassandra in a container) and I haven't found a clean answer. Here are the two hacky ways I've come up with.
- Have an early runlevel script that moves a file in and out of run, depending on whether you want something to start at boot.
- (Mis)use one of runit's service control commands to actually start your service, and use a dummy run command to bypass the automatic start.
Both methods are clearly less than ideal, but they've worked for some purposes.

Related

how to write entrypoint scripts on windows

I was asked to build an image for Python programs. For example, if we create 3 Python programs and build an image for them, then when we run that image a container is created, executes the program, and exits; for the second program another container would be created.
That's what usually happens. But here I was told that a single container should be created for all the programs, that it should stay in the running state continuously, and that if we give a program name in the run command it should execute that program (not the other two), starting and stopping based on the commands I give.
To make this happen I was given a hint/suggestion: if I create an entrypoint script and copy it in via the Dockerfile, it should work. But unfortunately, when I researched this on the internet, the entrypoint script examples are for Linux, and I'm using Windows here.
So, first, to explain why the container exits after you run it: containers are not like VMs. Docker (or the container runtime you choose) checks what is running in the container. This "what is running" is defined by the ENTRYPOINT in your Dockerfile. If you don't have an entrypoint, there's nothing running and Docker stops the container. Or it might be the case that something ran and the container stopped after it finished.
Now, the Windows Server base images don't have an entrypoint. If you just ask to run the container, it will start and stop immediately. That is a problem for background services like web servers, for example IIS. To solve that, Microsoft created a service called Service Monitor. If you look at the Dockerfile of the IIS image that Microsoft produces, you'll notice that the entrypoint is Service Monitor, which in turn checks the status of the IIS service. If IIS is running, Service Monitor will continue to run and thus the container keeps running indefinitely. (Here's the Dockerfile: https://github.com/Microsoft/iis-docker/blob/main/windowsservercore-ltsc2022/Dockerfile)
Now, for your case, what you need is a job on your python container. Look at the description on the link provided by Mihai: https://hub.docker.com/_/python
This is their example docker file:
FROM python:3
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Note the last line. Whether you give it as CMD or ENTRYPOINT, the Python app will run and exit, and that will stop the container. If you need the container to run indefinitely, you either leverage something like Service Monitor (but you'd need a service running in the background) or you create your own logic to keep something running, for example an infinite loop.
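For illustration, here's a minimal sketch of the "keep something running" idea, assuming Linux containers like the python image above (image, container, and program names are placeholders): the main process is a do-nothing loop, and individual programs are run on demand with docker exec.
FROM python:3
WORKDIR /usr/src/app
COPY . .
# placeholder main process: an infinite loop keeps the container alive
CMD ["bash", "-c", "while true; do sleep 3600; done"]
Then the container stays up and each program can be started on request:
docker run -d --name pyprograms my_python_image
docker exec pyprograms python program1.py
docker exec pyprograms python program2.py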
Does that help?

docker container "post start" activity

I'm new to Docker and I'm starting off building, deploying, and maintaining telemetry-like services (Grafana, Prometheus, ...). One thing I've come across is that I need to start up Grafana with some default/preconfigured settings (dashboards, users, orgs, datasources, ...). Grafana allows some startup configuration in its config file, but not for all of its features (users, orgs, ...). Outside of Docker (i.e., if I weren't using it) I use an Ansible script to configure the unsupported parts of Grafana. However, when I build my custom Grafana image (with the allowed startup config) and later start a Grafana container from that image, is there a way to specify "post-start" commands or steps in the Dockerfile? I imagine it to be something like: every time a container of my image is deployed, some steps are issued to configure that container.
Any suggestions? Would I still need to use Ansible or other tools like it to manage this?
This is trickier than it sounds. Continuing to use Ansible to configure it post-startup is probably a good compromise between being straightforward, code you already have, and using standard Docker tooling and images.
If this is for a test environment, one possibility is to keep a reference copy of Grafana's config and data directories. You'd have to distribute these separately from the Docker images.
mkdir grafana
docker run \
-v $PWD/grafana/config:/etc/grafana \
-v $PWD/grafana/data:/var/lib/grafana \
... \
grafana/grafana
...
tar cvzf grafana.tar.gz grafana
Once you have the tar file, you can restart the system from a known configuration:
tar xvzf grafana.tar.gz
docker run \
-v $PWD/grafana/config:/etc/grafana \
-v $PWD/grafana/data:/var/lib/grafana \
... \
grafana/grafana
Several of the standard Docker Hub database images have the ability to do first-time configuration via an entrypoint script; I'll refer to the mysql image's entrypoint script here. The basic technique, sketched after this list, involves:
1. Determine whether the command given to start the container is to actually start the server, and whether this is the first startup.
2. Start the server as a background process, recording its pid.
3. Wait for the server to become available.
4. Actually do the first-time initialization.
5. Stop the server that got launched as a background process.
6. Go on to exec "$@" as normal to launch the server "for real".
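Here is a minimal bash sketch of those steps; the server name, client command, marker file, and init script are all placeholders rather than any particular image's actual entrypoint:
#!/bin/bash
set -e
# 1. only do first-time setup when we're really starting the server and it hasn't been initialized yet
if [ "$1" = "myserver" ] && [ ! -f /var/lib/myserver/.initialized ]; then
    # 2. start a temporary copy of the server in the background, remembering its pid
    myserver --config /etc/myserver.conf &
    pid=$!
    # 3. wait until it accepts connections (placeholder health check)
    until myclient --ping >/dev/null 2>&1; do sleep 1; done
    # 4. do the actual first-time initialization
    myclient < /docker-entrypoint-init.d/setup.sql
    touch /var/lib/myserver/.initialized
    # 5. stop the temporary background server
    kill "$pid"
    wait "$pid" || true
fi
# 6. replace this script with the real server process
exec "$@"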
The basic constraint here is that you want the server process to be the only thing running in the container once everything is done. That means commands like docker stop will directly signal the server, and if the server fails, it's the main container process, so that will cause the container to exit. Once the entrypoint script has replaced itself with the server as the main container process (by exec-ing it), you can't do any more post-startup work. That leads to the sequence of starting a temporary copy of the server to do initialization work.
Once you've done this initialization work once, the relevant content is usually stored in persisted data directories or external databases.
SO questions have a common shortcut of starting a server process in the background, and then using something like tail -f /dev/null as the actual main container process. This means that docker stop will signal the tail process, but not tell the server that it's about to shut down; it also means that if the server does fail, since the tail process is still running, the container won't exit. I'd discourage this shortcut.

Is there an easy way to automatically run a script whenever I (re)start a container?

I have built a Docker image, copied a script into it, and it executes automatically when I run the image, thanks to this Dockerfile command:
ENTRYPOINT ["/path/to/script/my_script.sh"]
(I had to make it executable with chmod in a RUN command to actually get it to run)
Now, I'm quite new to Docker, so I'm not sure if what I want to do is even good practice:
My basic idea is that I would rather not always have to create a new container whenever I want to run this script, but to instead find a way to re-execute this script whenever I (re)start the same container.
So, instead of having to type docker run my_image, I'd accomplish the same via docker (re)start container_from_image.
Is there an easy way to do this, and does it even make sense from a resource parsimony perspective?
docker run is fairly cheap, and the typical Docker model is generally that you always start from a "clean slate" and set things up from there. A Docker container doesn't have the same set of pre-start/post-start/... hooks that, for instance, a systemd job does; there is only the ENTRYPOINT/CMD mechanism. The way you have things now is normal.
Also remember that you need to delete and recreate containers for a variety of routine changes, with the most important long-term being that you have to delete a container to change the underlying image (because the installed software or the base Linux distribution has a critical bug you need a fix for). I feel like a workflow built around docker build/run/stop/rm is the "most Dockery" and fits well with the immutable-infrastructure pattern. Repeated docker stop/start as a workflow feels like you're trying to keep this specific container alive, and in most cases that shouldn't matter.
From a technical point of view you can think of the container environment and its filesystem, and the main process inside the container. docker run is actually docker create plus docker start. I've never noticed the "create" half of this taking substantial time, but if you're doing something like starting a JVM or loading a large dataset on startup, the "start" half will be slow whether or not it's coupled with creating a new container.
For the chmod issue you can do something like this:
COPY my_script.sh /path/to/script/my_script.sh
RUN chmod +x /path/to/script/my_script.sh
For the rerun-script issue:
The ENTRYPOINT specifies a command that will always be executed when the container starts.
That applies whether you use
docker run my_image
or
docker start container_from_image
So whenever your container starts, your ENTRYPOINT command will be executed.
You can refer to this for more detail.
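A quick way to see that behaviour (image and container names are placeholders):
docker run --name my_container my_image      # ENTRYPOINT script runs on first start
docker stop my_container
docker start -a my_container                 # the same ENTRYPOINT script runs again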

Dockerfile entrypoint

I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing that the Dockerfile for the erkules image has an ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I tried to create a separate cron-entrypoint.sh, add it in the Dockerfile, and then test something like ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"], but I also get an error.
I tried to add the cron job to the entrypoint.sh of the erkules image. When I put it at the beginning, the container runs the cron job but doesn't execute the rest of entrypoint.sh. And when I put the cron script at the end of entrypoint.sh, the cron job doesn't run but everything above it gets executed.
How can I run what's in the entrypoint.sh of the erkules image and my cron job at the same time through the Dockerfile?
You need to send the cron command to the background, so either use & or remove the -f flag (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
cron -f &
(
# the other commands here
)
Edit: I totally agree with @BMitch regarding the way you should handle multiple processes inside the same container, which is something that's not really recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
The first thing to look at is whether you need multiple applications running in the same container. Ideally, the container would only run a single application. You may be able to run multiple containers for different apps and connect them together with the same networks or share a volume to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman in go. The advantage of using something like supervisord is that it will handle signals to shutdown the applications cleanly and if one process dies, you can configure it to automatically restart that app or consider the container failed and abend.
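For illustration, a minimal supervisord sketch, assuming a Debian/Ubuntu-based image (file locations and program names are placeholders, not a drop-in config):
supervisord.conf:
[supervisord]
nodaemon=true
[program:cron]
command=cron -f
[program:app]
command=/entrypoint.sh
and in the Dockerfile:
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]
With nodaemon=true, supervisord stays in the foreground as the container's main process and can restart either program if it dies, which is the signal handling and restart behaviour mentioned above.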

How to execute docker commands after a process has started

I wrote a Dockerfile for a service (I have a CMD pointing to a script that starts the process), but I cannot run any other commands after the process has started. I tried using '&' to run the process in the background so that the other commands would run after the process has started, but that's not working. Any ideas on how to achieve this?
For example, consider I started a database server and wanted to run some scripts only after the database process has started, how do I do that?
Edit 1:
My specific use case is that I am running a RabbitMQ server as a service and I want to create a new user, make them an administrator, and delete the default guest user once the service starts in a container. I can do it manually by logging into the Docker container, but I wanted to automate it by appending these commands to the shell script that starts the RabbitMQ service; however, that's not working.
Any help is appreciated!
Regards
Specifically around your problem with RabbitMQ: you can create a rabbitmq.config file and copy it over when creating the Docker image.
In that file you can specify both a default_user and a default_pass that will be created when the database is set up from scratch; see https://www.rabbitmq.com/configure.html
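A minimal sketch of that file in the classic rabbitmq.config format (the user name and password are placeholders):
[
  {rabbit, [
    {default_user, <<"admin">>},
    {default_pass, <<"s3cret">>}
  ]}
].
and in the Dockerfile:
COPY rabbitmq.config /etc/rabbitmq/rabbitmq.config
As noted above, these defaults only take effect when the database is created from scratch.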
As for the general problem: you can change the entrypoint to a script that runs whatever you need plus the service you want, instead of the run script of the service.
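A rough sketch of such a wrapper entrypoint for the RabbitMQ case; the user name and password are placeholders, and rabbitmqctl await_startup is an assumption to check against your RabbitMQ version (older versions may need a polling loop on rabbitmqctl status instead):
#!/bin/bash
set -e
# start the broker in the background
rabbitmq-server &
# wait until the broker is ready to accept rabbitmqctl commands
rabbitmqctl await_startup
# one-time user setup; ignore failures if a previous start already did this
rabbitmqctl add_user admin s3cret || true
rabbitmqctl set_user_tags admin administrator
rabbitmqctl delete_user guest || true
# keep the backgrounded broker as the container's long-running process
wait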
I partially understood your question. Based on what I perceived, I would recommend adding a COPY command to the Dockerfile to copy the script you want to run into the image. Once you build the image and run the container, start the db service. Then exec into the container and run the script manually.
If you have a CMD command in the Dockerfile, it will be overwritten by the command you specify during execution. So I don't think you have any other option to run the script unless you don't have a CMD in the Dockerfile.
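For example (container name and script path are placeholders):
docker exec -it my_container /path/to/setup_script.sh
# or open a shell in the container and run it from there
docker exec -it my_container bash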
