I wrote a Dockerfile for a service (I have a CMD pointing to a script that starts the process), but I cannot run any other commands after the process has started. I tried using '&' to run the process in the background so that the other commands would run after it started, but that's not working. Any idea on how to achieve this?
For example, suppose I start a database server and want to run some scripts only after the database process has started. How do I do that?
Edit 1:
My specific use case is that I am running a RabbitMQ server as a service, and once the service starts in a container I want to create a new user, make it an administrator, and delete the default guest user. I can do this manually by logging into the Docker container, but I wanted to automate it by appending these commands to the shell script that starts the RabbitMQ service, and that's not working.
Any help is appreciated!
Regards
Specifically regarding your problem with RabbitMQ: you can create a rabbitmq.config file and copy it over when building the Docker image.
In that file you can specify both a default_user and a default_pass that will be created when the database is set up from scratch; see https://www.rabbitmq.com/configure.html
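A minimal sketch of the classic Erlang-term config format; "admin" and "secret" are placeholder credentials, and the config path may vary by image:

%% rabbitmq.config
[
  {rabbit, [
    {default_user, <<"admin">>},
    {default_pass, <<"secret">>}
  ]}
].

# in the Dockerfile, copy the config to the path the server reads it from
COPY rabbitmq.config /etc/rabbitmq/rabbitmq.config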
As for the general problem: you can change the entrypoint to a script that runs whatever you need plus the service you want, instead of the service's own run script.
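A rough sketch of such a wrapper entrypoint for the RabbitMQ case (again, "admin"/"secret" are placeholder credentials, and the readiness loop assumes rabbitmqctl is available in the image):

#!/bin/bash
# custom-entrypoint.sh: start the service, wait until it answers, then run the setup commands
rabbitmq-server &

until rabbitmqctl status >/dev/null 2>&1; do
  sleep 1          # wait for the broker to come up
done

rabbitmqctl add_user admin secret
rabbitmqctl set_user_tags admin administrator
rabbitmqctl delete_user guest

wait               # keep the container attached to the server process

The Dockerfile would then point at it with something like ENTRYPOINT ["/custom-entrypoint.sh"].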
I only partially understood your question. Based on what I gathered from it, I would recommend using the COPY command in the Dockerfile to copy the script you want to run into the image. Once you build the image and run the container, start the DB service; then exec into the container and run the script manually.
If you have a CMD command in the Dockerfile, it will be overridden by any command you specify when running the container. So I don't think you have another way to run the script unless you drop the CMD from the Dockerfile.
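For example (the container and script names here are placeholders):

# in the Dockerfile, copy the setup script into the image
COPY setup.sh /usr/local/bin/setup.sh

# once the container is up, run the script by hand
docker exec -it my-container bash /usr/local/bin/setup.sh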
I'm somewhat new to Docker. I would like to be able to use Docker to distribute a CLI program, but run the program normally once it has been installed. To be specific, after running docker build on the system, I need to be able to simply run my-program in the terminal, not docker run my-program. How can I do this?
I tried something with a Makefile which runs docker build -t my-program . and then writes a shell script to ~/.local/bin/ called my-program that runs docker run my-program, but this adds another container every time I run the script.
EDIT: I realize this is the expected behavior of docker run, but it does not work for my use case.
Any help is greatly appreciated!
If you want to keep your script, add the remove flag --rm to the docker run command. The remove flag removes the container automatically after the entrypoint process has exited.
Additionally, I would personally prefer an alias for this. Simply add something like alias my-program="docker run --rm my-program" to your ~/.bashrc or ~/.zshrc file. This has the added advantage that any parameters after the alias (my-program param1 param2) are automatically forwarded to the entrypoint of your image without extra effort.
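If you prefer keeping the shell script in ~/.local/bin instead of an alias, a minimal sketch (the image name my-program is a placeholder):

#!/bin/sh
# ~/.local/bin/my-program: run the image, forward all arguments, and remove the container on exit
exec docker run --rm my-program "$@"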
I have two Dockerfiles, one for a database and one for a web server. The web server's Dockerfile has a RUN statement which requires a connection to the database container. The web server is unable to resolve the database's IP and errors out. But if I comment out the RUN line and then manually run the command inside the container, it successfully resolves the database. Should the web server be able to resolve the database during its build process?
# Web server
FROM tomcat:9.0.26-jdk13-openjdk-oracle
# The database container cannot be resolved when myscript runs. "Unable to connect to the database." is thrown.
RUN myscript
CMD catalina.sh run
# But if I comment out the RUN line then connect to web server container and run myscript, the database container is resolved
docker exec ... bash
# This works
./myscript
I ran into the same problem with database migrations and NuGet pushes. You may want to run something similar against your DB, like migrations, initial/test data and so on. It can be solved in two ways:
1. Move your DB operations to the ENTRYPOINT so that they're executed at runtime (when the DB container is up and reachable).
2. Build your image using docker build instead of something like docker-compose up --build, because docker build has a --network switch. You could create a network in your compose file, bring the DB up with docker-compose up -d db-container and then access it during the build with docker build --network db-container-network -t your-image .
I'd prefer #1 over #2 if possible, because:
- it's simpler: the network is only present in the docker-compose file, not in multiple places
- you can specify relations using depends_on and make sure they're respected properly without having to manage them manually
But depending on the action you want to execute, you need to take care that it's not executed multiple times, because it runs on every start and not just during the build (where the cache only gets purged by file changes).
However, I'd consider it best practice anyway, when running such automated DB operations, to expect that they may execute more than once and should still produce the expected result (e.g. by checking whether the migration version or change is already present).
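A rough sketch of option #1, assuming a Postgres database, a shell entrypoint, and a hypothetical run-migrations.sh step; the db-container host name stands in for your compose service name:

#!/bin/sh
# entrypoint.sh: wait for the DB, run the (idempotent) migrations, then start the app
until pg_isready -h db-container -U postgres; do
  sleep 1
done

./run-migrations.sh      # hypothetical migration step; must be safe to run repeatedly
exec ./start-app.sh      # hand PID 1 over to the application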
I'm using the postgres:latest image and creating backups with the following command:
pg_dump postgres -U postgres > /docker-entrypoint-initdb.d/backups/redmine-$(date +%Y-%m-%d-%H-%M).sql
and it's running periodically via crontab:
*/30 * * * * /docker-entrypoint-initdb.d/backup.sh
However, on occasion I might need to run
docker-compose down/up
for whatever reason
The problem
I always need to manually run /etc/init.d/cron start whenever I restart the container. This is a bit of a problem because it's difficult to remember to do, and if I (or anyone else) forget it, backups won't be made.
According to the documentation, scripts ending in *.sql and *.sh inside /docker-entrypoint-initdb.d/ are run on container startup (and they are).
However, if I put /etc/init.d/cron start inside an executable .sh file, the other commands inside that file are executed (I've verified that), but the cron service does not start, probably because /etc/init.d/cron start inside the executable file does not execute successfully.
I would appreciate any suggestion for a solution
You will want to keep your Docker containers as independent of other services as possible. Instead of running the cron job in the container, I would recommend running it on the host; that way it will run even if the container is restarted (whether automatically or manually).
If you really feel the need for it, I would build a new image with the postgres image as a base and add the cron job right there, so that it is in the container from the start, without any extra scripts. Or even create another image just to invoke the cron job and connect via the Docker network.
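A rough sketch of such a derived image; the crontab line is taken from the question, and installing cron this way assumes a Debian-based postgres image:

FROM postgres:latest

# install cron inside the image (assumes a Debian-based base image)
RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*

# register the backup job from the question
RUN echo '*/30 * * * * /docker-entrypoint-initdb.d/backup.sh' | crontab -

You would still need to make sure the cron daemon itself is started alongside postgres, for example from a small wrapper entrypoint.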
Expanding on #Jite's answer, you could run pg_dump remotely in a different container using the --host option.
This image, for example, provides a minimal environment with the psql client and dump/restore utilities.
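For example, from another container on the same Docker network (the container name postgres-container and the database/user names are placeholders; authentication depends on your setup):

# dump the database over the network instead of from inside the postgres container
pg_dump --host postgres-container -U postgres postgres > redmine-$(date +%Y-%m-%d-%H-%M).sql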
I'm trying to customize the docker image presented in the following repository
https://github.com/erkules/codership-images
I created a cron job in the Dockerfile and tried to run it with CMD, knowing the Dockerfile for the erkules image has an ENTRYPOINT ["/entrypoint.sh"]. It didn't work.
I tried to create a separate cron-entrypoint.sh, add it in the Dockerfile, and then test something like ENTRYPOINT ["/entrypoint.sh", "/cron-entrypoint.sh"], but that also gives an error.
I tried to add the cron job to the entrypoint.sh of the erkules image. When I put it at the beginning, the container runs the cron job but doesn't execute the rest of entrypoint.sh. When I put the cron script at the end of entrypoint.sh, the cron job doesn't run, but everything above it in entrypoint.sh gets executed.
How can I run both what's in the entrypoint.sh of the erkules image and my cron job at the same time through the Dockerfile?
You need to send the cron command to the background, so either use & or remove the -f (-f means: stay in foreground mode, don't daemonize).
So, in your entrypoint.sh:
#!/bin/bash
# run cron in the background so the rest of the entrypoint can continue
cron -f &
(
# the other commands here
)
Edit: I totally agree with #BMitch regarding the way you should handle multiple processes; running them inside the same container is not really recommended.
See examples here: https://docs.docker.com/engine/admin/multi-service_container/
The first thing to look at is whether you need multiple applications running in the same container. Ideally, the container would only run a single application. You may be able to run multiple containers for different apps and connect them together with the same networks or share a volume to achieve your goals.
Assuming your design requires multiple apps in the same container, you can launch some in the background and run the last in the foreground. However, I would lean towards using a tool that manages multiple processes. Two tools I can think of off the top of my head are supervisord and foreman in go. The advantage of using something like supervisord is that it will handle signals to shutdown the applications cleanly and if one process dies, you can configure it to automatically restart that app or consider the container failed and abend.
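A minimal supervisord sketch for this kind of setup (the program names and command paths are placeholders):

; supervisord.conf: run both the main service and cron in the foreground under one supervisor
[supervisord]
nodaemon=true

[program:main-service]
command=/entrypoint.sh

[program:cron]
command=cron -f

The image's CMD or ENTRYPOINT would then start supervisord with this config instead of the individual processes.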
I'm running a container with default env variables (like PORTS=1234,1235,1236) already defined in the Dockerfile.
With the help of this variable, a script executed at runtime starts the naming services on the defined ports.
Once the container is running, I want to start the naming service on ports 1237 and 1238 along with the existing ports, without stopping the existing container.
Let me know if anybody needs more info.
Please suggest the best approach
The idea behind containers is a single, self-contained process that runs an application. However, that doesn't always work, and sometimes you need to run multiple things in a single container. To kick the services off automatically, create a script file, use the ADD command in the Dockerfile to get it into the image, and then use the ENTRYPOINT instruction to execute that script.
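A minimal sketch (the base image and script name are placeholders):

# placeholder base image
FROM some-base-image

# copy in the script that starts the services
ADD start-services.sh /start-services.sh
RUN chmod +x /start-services.sh

ENTRYPOINT ["/start-services.sh"]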
If you really want this to happen at runtime instead of at container startup, you can do one of the following:
Add SSH capabilities to the container (bad idea)
Start the container with the -i "interactive" switch and the entrypoint as a shell, which allows you to attach to and detach from the container (not recommended).