I have deployed a few microservices in AWS ECS (via CI/CD through Jenkins). Each microservice has its own service and task definition. Apache runs in the foreground, and the container is redeployed by its service if Apache crashes.
My dev team uses RabbitMQ to communicate between microservices. A few microservices need to listen for certain events on the RabbitMQ server (RabbitMQ is installed on a separate EC2 instance).
Should we run the RabbitMQ listener as a daemon? The dev team has asked me to run the following command at Docker deployment time so that it listens to the RabbitMQ server:
php /app/public/yii queue/listen
and to set up a cron job so that the listener is restarted when it crashes.
To the best of my knowledge, a Docker container can run only one process in the foreground; currently, that process is Apache. If I run the daemons (both cron and the RabbitMQ listener) in the background, the container won't restart when either of them crashes.
Is there a safer approach for this scenario? What is the best practice for running a RabbitMQ listener in a Docker container?
If your problem is running more than one process in a container, a more general approach is to create a script, e.g. start_service.sh, in your container and execute it in the CMD directive of your Dockerfile, like this:
#!/bin/bash
# Start long-running processes in the background
process1 ... &
process2 ... &
# A process that daemonizes itself needs no &
daemon-process1
# Keep the script, and therefore the container, alive
sleep infinity
The & lets the script continue after starting a process in the background, even if that process is not designed to run as a daemon. The sleep infinity at the end prevents the script from exiting, which would in turn stop the container.
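For the Apache-plus-listener case in the question, a minimal sketch of such a script could look like the following. The yii command is taken from the question; the restart loop and apache2-foreground (the foreground Apache entrypoint shipped with the official php:apache images) are assumptions:

#!/bin/bash
# Supervise the queue listener: restart it whenever it exits
(
  while true; do
    php /app/public/yii queue/listen
    echo "queue listener exited; restarting in 5s" >&2
    sleep 5
  done
) &

# Keep Apache in the foreground so the container stops (and the ECS service
# redeploys it) if Apache itself crashes
exec apache2-foreground

Because exec replaces the script with Apache, the container still dies when Apache dies, preserving the restart behavior described in the question.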
If you run several processes within one container, consider using an "init" process such as dumb-init. Read more here: https://github.com/Yelp/dumb-init/blob/master/README.md
I have a server running multiple web applications inside separate Docker containers, with Traefik as a reverse proxy. Whenever a container is idle for, say, 15 minutes, I stop it from the inside (I end the running process, which causes the container to stop). How can I restart the container on demand, i.e., when there is an incoming request for the stopped container?
As asked, I am not using any cluster manager or anything like that. Basically, I have an API server that uses the docker-py library to create images and containers. Traefik listens to Docker events and generates configuration whenever a container is created, routing URLs to the respective containers.
I tried systemd socket activation. Here are the socket and service files.
app.socket
[Unit]
Description=App Socket
[Socket]
ListenStream=3000
Accept=yes
[Install]
WantedBy=sockets.target
app@.service
[Unit]
Description=App Service
Requires=app.socket
[Service]
Type=simple
ExecStart=/usr/bin/npm start --prefix /path/to/dir
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
This is my current approach. My containers run node apps, so I end the node process inside the container. When ending the node process, I enable and start app.socket. Then, when there is incoming traffic on port 3000, my app should be started by socket activation.
But nothing happens when I try to access that port. I've confirmed that socket activation itself works: when I execute date | netcat 127.0.0.1 3000, the app seems to start and then immediately stops without any error.
Maybe socket activation doesn't work the way I expect it to. I can see that a process named init with PID 1 is listening on port 3000 after enabling app.socket. As soon as traffic arrives on port 3000, I want my node app inside the container to start. But how can the app start on port 3000 if there is already a process listening on that port?
Perhaps there is some way to do this with Traefik, since it is the reverse proxy I am using. Is there any functionality that can execute a command or script whenever a 404 occurs?
It would be more helpful if you could tell us how you are managing your Docker containers (k8s, Swarm, or anything else). But based on your initial input, I guess you are looking for inetd or systemd socket activation. This post can be helpful: https://www.reddit.com/r/docker/comments/72sdyf/startrun_a_container_when_incoming_traffic/
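One way to reconcile socket activation with an app that binds its own port is systemd-socket-proxyd: systemd holds the public port, and the first connection starts a small proxy that forwards traffic to the container's published port. A sketch, where the unit names, the container name (mycontainer), its published port (8080), and the proxyd path (which varies by distro) are all assumptions:

cat > /etc/systemd/system/app-proxy.socket <<'EOF'
[Unit]
Description=On-demand socket for the app container

[Socket]
# systemd, not the app, listens on the public port
ListenStream=3000

[Install]
WantedBy=sockets.target
EOF

cat > /etc/systemd/system/app-proxy.service <<'EOF'
[Unit]
Description=Start the app container and proxy traffic to it
Requires=app-proxy.socket

[Service]
# docker start is a no-op if the container is already running
ExecStartPre=/usr/bin/docker start mycontainer
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:8080
EOF

systemctl daemon-reload
systemctl enable --now app-proxy.socket

This sidesteps the port conflict described in the question, because the app never has to bind the port that systemd is holding.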
I am running a Docker container that contains a node server. I want to attach to the container, kill the running server, and restart it (for development). However, when I kill the node server, it kills the entire container (presumably because I am killing the process the container was started with).
Is this possible? This answer helped, but it doesn't explain how to kill the container's default process without killing the container (if possible).
If what I am trying to do isn't possible, what is the best way around this problem? Adding command: bash -c "while true; do echo 'Hit CTRL+C'; sleep 1; done" to each image in my docker-compose file, as suggested in the comments of the linked answer, doesn't seem like the ideal solution, since it forces me to attach to my containers after they are up and run the command manually.
This is by design in Docker. Each container is supposed to be a stateless instance of a service. If that service is interrupted, the container is destroyed; if that service is requested or started, a container is created. At least, that is the model if you're using an orchestration platform like k8s, Swarm, Mesos, or Cattle.
There are applications that exist to act as PID 1 rather than the service itself, although this goes against the design philosophy of microservices and containers. Here is an example of an init system that can run as PID 1 instead, allowing you to kill and spawn processes within your container at will: https://github.com/Yelp/dumb-init
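A rough sketch of that idea for the node case: run a tiny supervisor loop as the container command under dumb-init, so the node process can be killed and respawned without stopping the container. The paths, names, and the loop itself are assumptions, not part of dumb-init:

#!/bin/sh
# run.sh -- the container command, run under dumb-init as PID 1
# (Dockerfile: ENTRYPOINT ["/usr/bin/dumb-init", "--"] and CMD ["/app/run.sh"])
while true; do
  node /app/server.js
  echo "server exited; restarting" >&2
  sleep 1
done

With this in place, docker exec <container> pkill node ends only the node process; PID 1 (dumb-init running the loop) survives and respawns the server.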
Why do you want to restart the node server? To apply changes from a config file or something? If so, you're looking for a solution in the wrong direction. You should instead define a persistent volume so that when the container respawns, the service rereads that config file.
https://docs.docker.com/engine/admin/volumes/volumes/
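For example (a minimal sketch; the volume name, mount path, and image name are assumptions):

# Create a named volume and mount it where the app reads its config,
# so the data survives container respawns
docker volume create app-config
docker run -d -v app-config:/etc/myapp my-node-image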
If you need to restart the process the container is running, simply run:
docker restart $container_name_or_id
Exec'ing into a container shouldn't be needed for normal operations; consider it a debugging tool.
Rather than changing the startup script to restart the process automatically, I'd move that responsibility out to the Docker engine, so it's visible when your container is crashing:
docker run --restart=unless-stopped ...
When a container is run with the above option, Docker will restart it for you unless you intentionally run docker stop on the container.
As for why killing PID 1 in the container shuts it down: it's the same as killing PID 1 on a Linux server. If you kill init/systemd, the box goes down. Inside the namespace of the container, similar rules apply and cannot be changed.
I installed an Nginx container service through AWS ECS, which runs without any issue. However, every other container service, such as centos, ubuntu, mongodb, or postgres, installed through AWS ECS keeps restarting (de-registering, re-registering, or sitting in a pending state) in a loop. Is there a way to install these container services using AWS ECS without this issue on the ECS-optimized Linux AMI? Also, is there a way to register containers in AWS ECS that were manually pulled from Docker Hub and run?
Usually, if a container restarts over and over again, it's because it's failing the health check you set up. MongoDB, for example, does not speak HTTP, so if you set it up as an ECS service with an HTTP health check, it will never pass the check and will be killed off by ECS for failing it.
My recommendation would be to launch such services without a health check, either as standalone tasks or with your own health check mechanism.
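If you do roll your own mechanism, a bare TCP probe can stand in for an HTTP health check on services like MongoDB. A minimal sketch (the host and the use of nc are assumptions; 27017 is MongoDB's default port):

#!/bin/sh
# Exit 0 if the port accepts a TCP connection within 2 seconds, non-zero otherwise
nc -z -w 2 localhost 27017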
If the service you are trying to run does in fact have an HTTP interface and it's still failing the health check and getting killed, do some debugging to verify that the instance has the right security group rules to accept traffic from the load balancer, and that the ports defined in your task definition match the port of the health check.
I want to access the Docker socket on the host where my worker runs while executing a task.
When running a "regular" Docker container, I can just bind-mount the socket into the container. Can I somehow do the same when specifying a task in Concourse?
This goes against Concourse's principle that workers are stateless. Tasks should not make any assumptions about the worker they're running on, i.e., whether it has a Docker daemon running or where the socket lives. I would recommend running Docker inside your task container instead.
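A rough sketch of what running Docker inside the task can look like, assuming the task is marked privileged: true and uses an image that ships dockerd (e.g. docker:dind); the paths and the final build command are assumptions:

#!/bin/sh
# Start a Docker daemon local to this task, in the background
dockerd --data-root /tmp/docker >/tmp/dockerd.log 2>&1 &

# Wait until the daemon answers before using it
until docker info >/dev/null 2>&1; do
  sleep 1
done

# The docker CLI now talks to the task-local daemon
docker build -t my-image .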
Is it possible to postpone the startup of a container based on the availability of a separate HTTP service? For example, only start the container once something is listening on port 8080?
That sort of application-level service check isn't available in docker-compose. You would need to implement the necessary logic in your Docker images.
For example, if you have something that depends on a web service, you could have your CMD run a script that does something like this:
#!/bin/sh
# Block until the dependency responds over HTTP...
while ! curl -sf http://servicehost:8080/; do
    sleep 1
done
# ...then replace this shell with the real program
exec myprogram
Another option is to set a restart policy of always on your containers and have them fail fast if the target service isn't available. Docker will keep restarting your container until it stays running.
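For example (the image name is an assumption):

# Docker restarts the container after every failed start until it stays up
docker run -d --restart=always my-image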