I have 2 EC2 instances, each with 64 GB RAM and 8 vCPUs. Currently I am running one container on each EC2 instance, with a single uvicorn (FastAPI) process inside the container. As I understand it, one process can handle one request at a time. I thought of using gunicorn inside the container so that multiple FastAPI worker processes could handle multiple requests concurrently, but after a little research I found it is generally recommended that one container should run one process, and that you should replicate/add more containers instead. Which approach should I take: using gunicorn to run more worker processes in the same container, or replicating the containers with a single process each?
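For reference, this is roughly what the gunicorn-inside-the-container option would look like; a minimal sketch, assuming the FastAPI app lives at app.main:app (a placeholder module path) and that gunicorn's uvicorn worker class is used:

    # gunicorn.conf.py -- minimal sketch; module path, port and worker count are placeholders
    import multiprocessing

    # gunicorn manages several ASGI (FastAPI) processes through uvicorn's worker class
    worker_class = "uvicorn.workers.UvicornWorker"

    # a common starting point; tune for the instance's 8 vCPUs
    workers = multiprocessing.cpu_count() * 2 + 1

    bind = "0.0.0.0:8000"

The container would then start with something like: gunicorn app.main:app -c gunicorn.conf.py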
Related
I would like to run 2 processes within the same Docker container or dyno.
Is this possible?
A Heroku dyno is very similar to a Docker container, and both follow the same main principle: run just one foreground process in each one.
Check this post to understand what foreground and background processes are.
The official Docker documentation says:
It is generally recommended that you separate areas of concern by using one service per container
With time, you might achieve your goal of running multiple services (APIs, in your case) in one Docker container, for example by using Linux services, by creating one parent process that launches the other child processes, or by some other workaround. In Heroku, however, this will not be possible, due to security restrictions and the limited set of OS commands available.
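For illustration only (not the recommended pattern), a minimal Python sketch of the "one parent process launches the children" workaround mentioned above; the two child commands are placeholders:

    # run_all.py -- a single foreground parent that starts and watches two child processes.
    # Illustrative sketch only; the child commands are placeholders.
    import subprocess
    import sys
    import time

    children = [
        subprocess.Popen(["uvicorn", "app.main:app", "--port", "8000"]),
        subprocess.Popen(["python", "consumer.py"]),
    ]

    try:
        # if any child dies, stop the rest so the container itself exits
        # and whatever manages the container can restart it
        while all(child.poll() is None for child in children):
            time.sleep(1)
    finally:
        for child in children:
            child.terminate()

    sys.exit(1)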
I have a container with php-fpm as its main process. Is it possible to create another container with supervisor as its main process, to run and control some daemon processes in the php container? For example, in the php container there is a consumer that consumes messages from RabbitMQ. I want to control that consumer with supervisor, but I don't want to run supervisor inside the php container. Is that possible?
Q: I have a container running php-fpm as its main process. Is it possible to create another container with supervisor as the main process to run and control other daemon processes in the php container?
A: I have restated your problem a little; let me know if it does not make sense.
Short answer: it is possible. However, you don't want to have nested containers within one, as this is considered an anti-pattern and is not the desired micro-service architecture.
Typically you would only have one main process running in a container, so that when the process dies the container stops and exits, without taking other working processes down with it.
An ideal architecture would be to have one container for RabbitMQ and another container for the php process. The easiest way to spin them up on the same Docker network is through a docker-compose file.
You may be interested in the links/depends_on and expose attributes to forward RabbitMQ's port into your php container; see the consumer sketch after the links below.
https://docs.docker.com/compose/compose-file/#expose
https://docs.docker.com/compose/compose-file/#depends_on
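As a loose illustration of the networking side (your consumer is PHP, but the idea is the same in any language), here is a minimal Python sketch of a consumer container reaching the RabbitMQ container over the shared compose network, assuming a compose service named rabbitmq and a queue named tasks (both placeholders):

    # consumer.py -- connects to the RabbitMQ container by its compose service name.
    # The service name "rabbitmq" and queue name "tasks" are placeholder assumptions.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="tasks", durable=True)

    def handle(ch, method, properties, body):
        print("received", body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="tasks", on_message_callback=handle)
    channel.start_consuming()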
I have a design question. I am using dockerized Celery workers on several hosts. I only have one instance of the Celery container running on each host, using Celery's default worker setting, which defaults to the number of cores on that host. I did not set any limits on the Docker containers. I deployed to the hosts with Rancher using the Cattle environment, but I guess my question applies equally to any Docker clustering, such as Swarm. I did not use the scaling features to run more than one container, because of the way Celery works: one container is already able to leverage all the cores by running multiple worker processes. The question is: are there any benefits to having more than one worker container per host? If so, would I need to limit each container to a single Celery worker process and let the cluster scale out to multiple containers? The only benefit I can imagine is from a high-availability perspective: if the Celery worker dies on one host then it is gone, but if I have more containers the others can take over the work. However, I think Celery can do the same thing by respawning workers, too. Am I missing something?
The only way to know for sure is to benchmark it with your particular workload, but your intuition is generally correct. If the application can consistently use all the cores, then running more containers will generally make things slightly slower because of context switching. Side benefits, like still being available if one of many workers fails, may or may not be worth the overhead to you.
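For reference, a minimal sketch of how the two layouts differ on the Celery side; the app name and broker URL are placeholders:

    # celery_app.py -- illustrative sketch; app name and broker URL are placeholders.
    from celery import Celery

    app = Celery("tasks", broker="amqp://guest:guest@rabbitmq:5672//")

    # One container per host (your current layout): leave worker_concurrency unset and
    # Celery forks one worker process per core by default.

    # One worker process per container, scaled out by the cluster instead:
    app.conf.worker_concurrency = 1
    # equivalently on the command line: celery -A celery_app worker --concurrency=1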
Is there any component that monitors the usage of a server, of a resource, or of other Docker instance(s), and starts new Docker containers when more resources are needed?
The Docker containers may or may not be deployed on the same server.
For example:
1) when a message queue grows too fast, additional Docker containers that listen to that queue are started to help consume the messages;
2) when too many requests are made to a server through a load balancer, additional Docker instances are run.
What you are describing here is part of orchestration. Several tools exist for that, the best-known being Kubernetes and Marathon.
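Just to make the idea concrete (a naive sketch of queue-based scaling only, not a substitute for what those tools do for you), this is roughly what "watch the queue and add consumers" looks like, assuming a Docker Swarm service named queue-consumer and a RabbitMQ queue named tasks (both placeholders):

    # naive_autoscaler.py -- illustration of the idea behind queue-based scaling;
    # real deployments would rely on the orchestrator's own autoscaling.
    # The service name, queue name and thresholds below are placeholder assumptions.
    import time

    import docker
    import pika

    client = docker.from_env()

    def queue_depth() -> int:
        # a passive declare returns the current message count without touching the queue
        connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
        depth = connection.channel().queue_declare(queue="tasks", passive=True).method.message_count
        connection.close()
        return depth

    while True:
        service = client.services.get("queue-consumer")
        replicas = service.attrs["Spec"]["Mode"]["Replicated"]["Replicas"]
        if queue_depth() > 100 and replicas < 10:
            service.scale(replicas + 1)  # queue is backing up: add one more consumer container
        time.sleep(30)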
Say I am running a Java web application inside my Docker container, which runs on Elastic Beanstalk (or any other framework for that matter).
Am I still responsible for making sure my process has some kind of process management, e.g. supervisord or runit, to make sure it keeps running correctly?
Or is this something that EB will somehow manage?
When the process inside the container stops, so does the container (it is designed to run that single process). So you don't have to manage the process inside your container; instead, rely on the system managing your containers to restart them. For example, "services" in Docker Swarm and ReplicationControllers in Kubernetes are designed to keep a desired number of containers running; when one dies, a new one takes its place.