Start docker container on incoming HTTP request

I have a server running multiple web applications inside separate Docker containers, with Traefik as a reverse proxy. Whenever a container is idle for, say, 15 minutes, I stop it from inside (I end the running process, which causes the container to stop). How can I restart the container on demand, i.e., when there is an incoming request for the stopped container?
To answer what has been asked: I am not using any cluster manager or anything like that. Basically, I have an API server which uses the docker-py library to create images and containers. Traefik listens to Docker events and generates configuration whenever a container is created, routing URLs to the respective containers.
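For concreteness, restarting a stopped container from such an API server would look roughly like this with docker-py (the container name and helper function are placeholders, not the actual code); the open question is what should trigger this call:

import docker

client = docker.from_env()

def start_if_stopped(name):
    # Look up the container by name; raises docker.errors.NotFound if it doesn't exist.
    container = client.containers.get(name)
    if container.status != "running":
        # Starting it again lets Traefik pick it up via Docker events and route to it.
        container.start()
    return container

# Hypothetical container name:
start_if_stopped("my-node-app")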
I tried systemd socket activation. Here are the socket and service files.
app.socket
[Unit]
Description=App Socket
[Socket]
ListenStream=3000
Accept=yes
[Install]
WantedBy=sockets.target
app@.service
[Unit]
Description=App Service
Requires=app.socket
[Service]
Type=simple
ExecStart=/usr/bin/npm start --prefix /path/to/dir
StandardInput=socket
StandardError=journal
TimeoutStopSec=5
[Install]
WantedBy=multi-user.target
This is my current approach: my containers run Node apps, so I end the Node process inside the container. When I end the Node process, I also enable and start app.socket, so that when traffic arrives on port 3000 the app is started again by socket activation.
But nothing happens when I try to access that port. I've confirmed that the socket activation itself triggers: when I execute date | netcat 127.0.0.1 3000, the app seems to start and then immediately stops, without any error.
Maybe socket activation doesn't work the way I'm expecting it to. I can see that init (PID 1) is listening on port 3000 after enabling app.socket. As soon as traffic comes in on port 3000, I want my Node app inside the container to start. But how can the app bind to port 3000 if there is already a process listening on that port?
Perhaps there's some way to do this with Traefik, since it is the reverse proxy I am using. Is there any functionality that would let me execute a command or script whenever a 404 occurs?

It would be more helpful if you could tell us how you are managing your Docker containers (Kubernetes, Swarm, or something else). But based on your initial input, I guess you are looking for inetd or systemd socket activation. This post may be helpful: https://www.reddit.com/r/docker/comments/72sdyf/startrun_a_container_when_incoming_traffic/
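To illustrate the systemd side: with Accept=yes (as in the unit files above) systemd spawns one service instance per connection and hands the connected socket over on stdin, inetd style, which plain HTTP servers generally don't expect. With Accept=no the service instead inherits the listening socket as file descriptor 3 and must accept connections on it rather than binding the port itself. A minimal Python sketch of that fd-3 protocol (illustrative only, not the Node app):

import os
import socket

SD_LISTEN_FDS_START = 3  # systemd passes inherited sockets starting at fd 3

def get_listen_socket():
    if int(os.environ.get("LISTEN_FDS", 0)) >= 1:
        # Reuse the listening socket systemd already bound (e.g. on port 3000).
        return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                             fileno=SD_LISTEN_FDS_START)
    # Fallback when started without socket activation.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(("0.0.0.0", 3000))
    sock.listen(5)
    return sock

sock = get_listen_socket()
conn, addr = sock.accept()
conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
conn.close()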

Related

Does Docker HEALTHCHECK disable container networking when unhealthy?

I need some clarification regarding the use of HEALTHCHECK on a Docker service.
Context:
We are experimenting with a multi-node MariaDB cluster, and by using HEALTHCHECK we would like the bootstrapping containers to remain unhealthy until bootstrapping is complete. We want this so that front-end users don't access that particular container in the service until it is fully online and synced with the cluster. The issue is that bootstrapping relies on the network between containers in order to do a state transfer, and it won't work when a container isn't accessible on the network.
Question:
When a container's status is either starting or unhealthy, does HEALTHCHECK completely kill network access to and from the container?
As an example, when a container is healthy I can run the command getent hosts tasks.<service_name>
inside the container, which returns the IP addresses of the other containers in the service. However, when the same container is unhealthy, that command does not return anything… hence my suspicion that HEALTHCHECK kills the network at the container level (as opposed to at the service/load-balancer level) when the container isn't healthy.
Thanks in advance
I ran some more tests and found my own answer. Basically, Docker does not kill container networking when the container is in either the starting or the unhealthy phase. The reason the getent hosts tasks.<service_name> command does not work during those phases is that it resolves container IP addresses through the service, and the service does not have the unhealthy container(s) assigned to it.
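For illustration, the same distinction can be seen from the host with docker-py: an unhealthy container keeps its network attachments and IP addresses, it is simply not included in the tasks.<service_name> DNS entry (the container name below is a placeholder):

import docker

client = docker.from_env()

# Placeholder name for one of the bootstrapping MariaDB containers.
container = client.containers.get("mariadb_node1")

# Health status reported by HEALTHCHECK: "starting", "healthy" or "unhealthy".
health = container.attrs["State"]["Health"]["Status"]

# Network attachments are still present even while the container is unhealthy.
for net_name, settings in container.attrs["NetworkSettings"]["Networks"].items():
    print(net_name, settings["IPAddress"], health)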

Flask in docker, access other flask server running locally

After finding a solution for this problem, I have another question: I am running a Flask app in a Docker container (my web map), and on this map I want to show tiles served by a (Flask-based) Terracotta tile server running in another Docker container. The two containers are on the same Docker network and can talk to each other; however, only the port where my web server is running is open to the public, and I'd like to keep it that way. Is there a way I can serve my tiles somehow "from local" without opening the port of the tile server? Maybe by setting up some redirects or something?
The main reason for this is that I need someone else to open ports for me, which takes ages.
If you are running your Docker containers on a remote machine like EC2, you need not worry about a port being open to the public, as ports are closed by default in EC2 and similar services. You just need to open the port on which you are running your app, which you can do from the AWS console.
If you are running your Docker container locally, or on some server for which you don't have console access, you can use some kind of firewall to open or close a port. I personally prefer UFW on Ubuntu systems. You can allow a port with a simple command such as sudo ufw allow 9000, which allows incoming TCP packets on port 9000. Similarly, you can deny incoming packets to a port. You can also open a port only to a certain IP (like your own) using sudo ufw allow from <ip address>.
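Regarding the original idea of serving the tiles "from local": since the two containers share a Docker network, another option is to let the public Flask app proxy tile requests to the tile server internally, so its port never needs to be published. A rough sketch, with a hypothetical internal hostname and route:

import requests
from flask import Flask, Response, request

app = Flask(__name__)

# Hypothetical internal address: the tile server container's name on the
# shared Docker network, reachable only from inside that network.
TILE_SERVER = "http://terracotta:5000"

@app.route("/tiles/<path:tile_path>")
def proxy_tile(tile_path):
    # Fetch the tile over the internal network and relay it to the client.
    upstream = requests.get(f"{TILE_SERVER}/{tile_path}", params=request.args)
    return Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))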

RabbitMQ listener in docker/microservice

I have deployed a few microservices in AWS ECS (via CI/CD through Jenkins). Each task has its own service and task definition. Apache runs in the foreground, and the container is redeployed by its service if Apache crashes.
My dev team is using RabbitMQ to communicate between microservices. A few microservices need to listen for certain events on the RabbitMQ server (RabbitMQ is installed on a separate EC2 instance).
To listen to the RabbitMQ server, should we run the listener as a daemon? The dev team has asked to run the following command at the time of Docker deployment so that it listens to the RabbitMQ server:
php /app/public/yii queue/listen
and to set up a cron job so that the listener is started again if it crashes.
To the best of my knowledge, a Docker container can run only one process in the foreground; currently, Apache is running in the foreground. If I run the daemons (both cron and the RabbitMQ listener) in the background, the container won't be restarted when either of those daemons crashes.
Is there a safer approach for this scenario? What is the best practice for running a RabbitMQ listener in a Docker container?
If your problem is running more than one process in a container, a more general approach is to create a script, e.g. start_service.sh, in your container and execute it from the CMD directive of your Dockerfile, like this:
#!/bin/bash
# Start background processes; the trailing & returns control to the script.
process1 ... &
process2 ... &
# A process that daemonizes itself returns control on its own.
daemon-process1
# Keep the script, and therefore the container, alive.
sleep infinity
The & makes the script continue after starting a process in the background, even if that process is not meant to run as a daemon. The sleep infinity at the end prevents the script from exiting, which would in turn stop the container.
If you run several processes within your container, consider using an "init" process such as dumb-init. Read more here: https://github.com/Yelp/dumb-init/blob/master/README.md

Docker reuse port in a consul TCP health check cycle

Example:
Moment 1:
Docker runs container A, which listens on 32781 (published port) -> 8000 (service port).
The Consul health check passes via a TCP connection (10-second cycle).
Moment 2:
Docker restarts container A and starts container B shortly afterwards (less than 10 seconds later).
Now port 32781 belongs to container B (the port was reused), and the new container A got a different port.
But on the next Consul health-check cycle, port 32781 still responds, so Consul considers container A healthy.
How can I solve this issue?
It seems to me that you have to deregister the service and its health checks on container restart. The Consul API provides this capability; you just have to use it in your microservices. How exactly to make it work depends on the way your services are built. Otherwise, there is no way for Consul to determine that a service was restarted on another port.
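As a rough sketch of that idea, the service can deregister its stale registration (together with its health checks) and register again with the port it actually got after the restart, using the Consul agent HTTP API (service ID, name and port below are placeholders):

import requests

CONSUL = "http://localhost:8500"

def reregister(service_id, name, port):
    # Deregistering removes the old registration and its health checks.
    requests.put(f"{CONSUL}/v1/agent/service/deregister/{service_id}")
    # Register again with the port the restarted container is published on now.
    requests.put(f"{CONSUL}/v1/agent/service/register", json={
        "ID": service_id,
        "Name": name,
        "Port": port,
        "Check": {"TCP": f"127.0.0.1:{port}", "Interval": "10s"},
    })

# Placeholder values for container A after its restart.
reregister("container-a", "container-a-service", 32790)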

Is there a way to "hibernate" a linux container

Say you had a bunch of WordPress containers running on a machine, with each application sitting behind a cache. Is there a way to stop a container and start it again only if the URL is not found in the cache?
systemd provides a socket activation feature that can start a service on an incoming TCP connection and proxy the connection in. Atlassian has a detailed article on using it with Docker.
I don't believe systemd has the ability to stop the service when there is no activity. You will need something that can shut the service down once there are no connections left being served. This could be done in the WordPress app container or externally, via systemd on the host.
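As a crude sketch of the external approach, a small watchdog could stop a container once it has seen no activity for a while; how "activity" is measured (cache hits, access logs, connection counts) depends on your setup, so the hook below is a placeholder:

import time
import docker

client = docker.from_env()
IDLE_LIMIT = 15 * 60  # seconds of inactivity before stopping the container

def watchdog(name, last_activity):
    # last_activity() is a placeholder hook: it should return the UNIX timestamp
    # of the most recent request for this app (from the cache, access logs, etc.).
    while True:
        container = client.containers.get(name)
        idle_for = time.time() - last_activity()
        if container.status == "running" and idle_for > IDLE_LIMIT:
            # Stop the idle container; socket activation (or the proxy in front)
            # brings it back on the next incoming connection.
            container.stop()
        time.sleep(60)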
Some more reading on socket activation from the systemd developer:
http://0pointer.de/blog/projects/socket-activated-containers.html
http://0pointer.de/blog/projects/socket-activation2.html
http://0pointer.de/blog/projects/socket-activation.html
