I have a container with php-fpm as the main process. Is it possible to create another container with supervisor as the main process to run and control some daemon processes in the php container? For example, in the php container there is a consumer that consumes messages from RabbitMQ. I want to control those consumers with supervisor, but I don't want to run supervisor in the php container. Is it possible?
Q: I have a container running php-fpm as its main process. Is it possible to create another container with supervisor as the main process to run and control other daemon processes in the php container?
A: I have reconstructed your problem statement a little; let me know if it does not make sense.
Short answer: it is possible. However, you don't want to nest containers within one another, as this is considered an anti-pattern and is not the desired microservice architecture.
Typically you would run only one main process in a container, so that when the process dies the container stops and exits without bringing other working processes down with it.
An ideal architecture would be one container for RabbitMQ and another for the php process. The easiest way to spin them up on the same Docker network is through a docker-compose file.
You may be interested in the links/depends_on and expose attributes to make RabbitMQ's port reachable from your php container; a minimal compose sketch follows the links below.
https://docs.docker.com/compose/compose-file/#expose
https://docs.docker.com/compose/compose-file/#depends_on
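For illustration, a minimal docker-compose sketch of that layout could look like the following (the service names, image names, and consumer command are placeholders, not taken from the question):

version: "3"
services:
  rabbitmq:
    image: rabbitmq:3-management
    expose:
      - "5672"                      # reachable by other services on the compose network
  php:
    image: my-php-fpm-image         # placeholder for your php-fpm image
    depends_on:
      - rabbitmq
  consumer:
    image: my-php-fpm-image         # same image, different command
    command: php /app/consume.php   # placeholder for your consumer script
    depends_on:
      - rabbitmq
    restart: on-failure             # let Docker restart a crashed consumer

Running each consumer as its own service keeps one process per container, so Docker itself (rather than supervisor) notices and restarts a crashed consumer.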
Related
I'm trying to dockerize two .NET console applications, one of which depends on the other.
When I run the first container I need it to run another container on the host, write a parameter to its stdin, and wait for it to finish the job and exit.
What will be the best solution to achieve this?
Running a container inside a container seems to me like a bad solution.
I've also thought about managing another process with a web server (nginx or something) on the host that receives the request as an HTTP request and executes a docker run command on the host, but I'm sure there is a better solution (this way the web server would just run on the host and not inside a container).
There is also this solution but it seems to have major security issues.
I've also tried the Docker.DotNet library, but it does not help with my problem.
Any ideas for the best solution?
Thanks in advance.
EDIT:
I will be using docker compose, but the problem is that the second container is not running and listening at all times; similar to the hello-world container, it is called, performs its job, and exits.
EDIT2: FIX
I've implemented Redis as a message broker to communicate between the different services. While this changed the requirement a little (the containers now always run and listen to Redis), it helped me solve the issue; a sketch of the pattern follows below.
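For anyone hitting the same issue, here is a minimal sketch of that pattern (shown in Python for brevity; the redis host name, the "jobs" queue key, and the handle stub are all placeholders, not from the original post):

import redis  # requires the redis-py package

def handle(job: bytes) -> None:
    # Placeholder for the real work the second container performs.
    print("processing", job)

r = redis.Redis(host="redis")  # "redis" would be the compose service name

while True:
    # Block until a job is pushed to the "jobs" list, so the container
    # keeps running and listening instead of exiting after one run.
    _, job = r.blpop("jobs")
    handle(job)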
I am not sure I understand correctly what you need to do, but if you simply need to start two containers in parallel, the simplest way I can think of is docker-compose.
Another way is a python or bash/bat script that launches the containers independently (in Python you can either use the Docker API or do it manually with the subprocess module). This also allows you to perform other things, like writing to the stdin of one container, as you stated; a sketch follows below.
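A rough sketch of the subprocess variant (the image name and parameter are placeholders; it assumes the docker CLI is available to the script):

import subprocess

# Start the worker container with stdin kept open (-i), write a
# parameter to its stdin, and wait for it to finish; --rm removes
# the container afterwards.
result = subprocess.run(
    ["docker", "run", "--rm", "-i", "worker-image"],
    input="some-parameter\n",
    text=True,
)
print("worker exited with status", result.returncode)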
I'm using Docker to run a java REST service in a container. If I were outside of a container, I might use a process manager/supervisor to ensure that the java service restarts if it encounters a strange one-off error. I see some posts about using supervisord inside containers, but they seem focused mostly on running multiple services rather than just keeping one up.
What is the common way of managing services that run in containers? Should I just be using some built-in Docker features on the container itself rather than trying to include a process manager?
You should not use a process supervisor inside your Docker container for a single-service container. Using a process supervisor effectively hides the health of your service, making it more difficult to detect when you have a problem.
You should rely on your container orchestration layer (which may be Docker itself, or a higher level tool like Docker Swarm or Kubernetes) to restart the container if the service fails.
With Docker (or Docker Swarm), this means setting a restart policy on the container.
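With plain Docker, for example (the image name is a placeholder):

docker run -d --restart=on-failure:5 my-java-rest-service

Docker will then restart the container automatically whenever the java process exits with a non-zero status, up to five times.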
I have an application running in container A, and I'd like to write to the stdin of a process running in container B. I see that if I want to write to B's stdin from the host machine I can use docker attach. Effectively, I want to call docker attach B from within A.
Ideally I'd like to be able to configure this through docker-compose.yml. Maybe I could tell docker-compose to create a unix domain socket in A that pipes to B's stdin, or connect some magic port number to B's stdin.
I realize that if I have to, I can always put a small web server in B's container that redirects all input from an open port in B to the process, but I'd rather use an out-of-the-box solution if one exists.
For anyone interested in the details: I have a Python application running in container A and I want it to talk to Stockfish (a chess engine) in container B.
A process in one Docker container can't directly use the stdin/stdout/stderr of another container. This is one of the ways containers are "like VMs". Note that this is also pretty much impossible in ordinary Linux/Unix without a parent/child process relationship.
As you say, the best approach is to put an HTTP or other service in front of the process in the other container, or else to use only a single container and launch the thing that only communicates via stdin as a subprocess (as sketched below).
(There might be a way to make this work if you give the calling process access to the host's Docker socket, but you'd be giving it unrestricted access to the host and tying the implementation to Docker; both the HTTP and subprocess paths are straightforward to develop and test without Docker, can move into container land separately, and don't involve the possibility of taking over the host.)
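To illustrate the subprocess route, here is a minimal sketch assuming the stockfish binary is installed in the same image as the Python app:

import subprocess

# Run Stockfish as a child process of the Python app, in the same
# container, and speak the UCI protocol over its stdin/stdout.
engine = subprocess.Popen(
    ["stockfish"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

engine.stdin.write("uci\n")
engine.stdin.flush()

# Read the engine's identification output until it answers "uciok".
while True:
    line = engine.stdout.readline()
    if not line or line.strip() == "uciok":
        break

engine.stdin.write("quit\n")
engine.stdin.flush()
engine.wait()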
If it helps, you could try creating a socket, reading/writing to it, and mounting that socket into both containers, like:
docker run -d -v /var/run/app.sock:/var/run/app.sock:ro someapp1
docker run -d -v /var/run/app.sock:/var/run/app.sock someapp2
Disclaimer: this is just an idea; I have never done anything like it myself.
I am running a docker container which contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service: if the service is interrupted, the container is destroyed; if the service is requested/started, a container is created. That holds at least if you're using an orchestration platform like k8s, swarm, mesos, cattle, etc.
There are applications that exist to act as PID 1 rather than the service itself, but this goes against the design philosophy of microservices and containers. Here is an example of an init system that can run as PID 1 instead and allow you to kill and spawn processes within your container at will: https://github.com/Yelp/dumb-init (a usage sketch follows below).
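For instance, a Dockerfile sketch of the dumb-init approach (it assumes dumb-init is already installed in the image, and "node server.js" stands in for your actual server command):

# dumb-init becomes PID 1 and runs the server command as its child
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["node", "server.js"]

Note that dumb-init still exits when its child does, so the container follows the service's lifecycle; what it mainly gives you is proper signal forwarding and zombie reaping.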
Why do you want to restart the node server? To apply changes from a config file or something? If so, you're looking for a solution in the wrong direction. You should instead define a persistent volume so that when the container respawns, the service rereads said config file (an example follows the link below).
https://docs.docker.com/engine/admin/volumes/volumes/
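For example (a sketch; the volume and image names are placeholders):

# The named volume app-config outlives individual containers,
# so a respawned container sees the same config file.
docker run -d -v app-config:/etc/myapp my-node-image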
If you need to restart the process that's running in the container, then simply run:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations; consider it a debugging tool.
Rather than changing the script that gets run to automatically restart, I'd move that out to the docker engine so it's visible if your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, docker will restart it for you, unless you intentionally run a docker stop on the container.
As for why killing PID 1 in the container shuts it down: it's the same as killing PID 1 on a Linux server. If you kill init/systemd, the box goes down. Inside the container's namespace, similar rules apply and cannot be changed.
Say I am running a java web application inside of my docker container that runs on elastic beanstalk (or any other framework for that matter).
Am I still responsible for making sure my process has some kind of process management (i.e. supervisord or runit) to make sure it is running correctly?
Or is this something that EB will somehow manage?
When the process inside the container stops, so too does the container (which is designed to run that single process). So you don't have to manage the process inside your container; instead, rely on the system managing your containers to restart them. For example, "services" in Docker Swarm and Replication Controllers in Kubernetes are designed to keep a desired number of containers running; when one dies, a new one takes its place.
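For example, with Docker Swarm (a sketch; the service and image names are placeholders):

docker service create --name rest-api --replicas 2 my-java-rest-image

Swarm then keeps two replicas of the container running and schedules a replacement whenever one dies; a Kubernetes Replication Controller plays the same role.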