I have a Flask web server running with docker-compose.
When the container first starts, it uses around 200 MB of memory, but after some use that climbs to about 1 GB (according to docker stats).
However, once consumption is that high, the container's memory usage does not decrease even when idle, and it eventually hits the limit, causing dead uWSGI workers and stopped processes.
Can someone explain what happens behind the curtain and how to have the container release unused memory?
Looks like this is a bug, and it is being tracked here:
https://github.com/kubernetes/kubernetes/issues/70179
Related
What is the best practice for selecting the memory size of a container running on EC2?
My EC2 instance has 8 GB of RAM and runs 2 containers:
PHP container
NGINX container
The NGINX container is set to 512 MB.
How large should I make the PHP container? It was set to 2 GB by some sort of default; I want to make it 6 GB, but I was interested to hear what the recommendations are.
The basis for this question is that our container ran out of memory and died. I believe we can alleviate this by upping the memory.
The best way to set memory limits is based on monitored metrics for memory usage. If you don't have metrics, increase the limit in increments and observe. Leave enough memory, say 1-2 GB, for the operating system itself.
Side note: set up the container to auto-restart via --restart=always, so that even if the container is OOM-killed, the app is restarted and continues to function.
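For example, a rough starting point could look like this (just a sketch; the image name php-app and the 6g figure are placeholders to adjust against your own metrics):
# Hard-limit the container to 6 GB and restart it automatically if it is OOM-killed.
docker run -d --restart=always -m 6g php-app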
I am debugging a possible memory leak in a web service I have running as a Docker network. The service has a JavaScript front end, a Flask REST API, a Dask worker pool, the spaCy natural language toolkit...the works. I see intermittent out-of-memory problems and I'm trying to get a handle on what could be going on.
I can run this system on my laptop, a MacBook Pro with 16 GB of memory, where I am using Docker Desktop. When there are no containers running, Activity Monitor shows com.docker.hyperkit using about 12 GB. Then I launch the Docker network, which ultimately runs 14 containers to house the various components. I perform a fairly large batch job in the Docker network. It runs for an hour, during which time com.docker.hyperkit's memory creeps up to around 18 GB. This is not surprising, since this is a memory-intensive service. But when I stop all the containers in the network, I would expect com.docker.hyperkit's memory usage to drop back to 12 GB. Instead it stays at 18 GB. The only way I can get it back to 12 GB is to restart Docker Desktop.
Is this expected behavior? It looks like a memory leak in Docker.
No, it should not release the memory, and yes, it is expected behavior.
There is no way to run Docker containers natively on macOS, so you run them inside a virtual machine. A VM gets memory assigned to it, which it then assigns to the processes running inside it. When those processes inside the VM exit, the resources are released back to the VM, but not back to the parent macOS. That's just how VMs work, and the fact that it didn't immediately take all of the memory up to the limit specified in the Docker preferences on startup is an impressive feat in itself.
The containers themselves are processes running within this VM, and they will release all of their memory back to the VM upon exit. If you run something like docker run --rm busybox free, you'll likely see the memory being used and freed within the VM.
For more details on this, there are several extensive threads in the GitHub issues. Most of the comments on these threads appear to be from users assuming macOS is running the containers, rather than a VM that runs the containers. Even completely idle, that VM will use some resources to run the kernel, container runtime daemons, volume sharing code, port forwarding code, etc. There's a lot of magic under the covers to make Docker not look like a VM to the user, so that you can just pass paths and connect to ports on the macOS side. The most helpful comment in the thread, to me, is here: https://github.com/moby/hyperkit/issues/231#issuecomment-448416559
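To see the distinction for yourself, you can compare what the VM reports against what a container reports (a small sketch; the exact numbers will differ per machine):
# Total memory Docker thinks it has, in bytes; on macOS this is the VM's allocation, not the host's.
docker info --format '{{.MemTotal}}'
# Memory as seen from inside a throwaway container, which again reflects the VM, not macOS.
docker run --rm busybox free -m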
I'm working on a project where I divided the application into multiple Docker images, and I'm running around 5 containers, each with its own image, following the "one process per container" rule.
For that I'm using a BeagleBone Black, which has only 480 MB of memory. Sometimes, after the application has been running for a while, it crashes due to an out-of-memory exception.
So I was wondering: if I make the images smaller, would they consume less memory? How is memory allocated for each container?
What if I group some images/containers into a single running container with more than one process? Would it use less memory?
When a process is killed with an OOM exception, this is not related to the Docker image size; it is about the amount of memory the process is trying to use.
You can specify some memory limits on each container when you run them.
For example, this will limit your container to 100 MB of memory.
docker run -m 100M busybox
However, if your applications exceed their allocated memory, they will be killed with an OOM exception. The problem you are having is likely that the applications you are running have a minimum memory requirement that is higher than what your BeagleBone Black can provide.
Grouping processes into one container will not help; they will still use the same amount of memory.
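If you want to see where the 480 MB is actually going, and to cap the biggest consumers, something along these lines can help (a sketch; the limit values and the busybox image are illustrative only):
# Show current memory usage per running container.
docker stats --no-stream
# Run a container with a hard limit plus a softer reservation that is enforced when the host runs low on memory.
docker run -m 100M --memory-reservation=50M busybox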
I am running a Java process as a Docker Swarm service, but that service eventually hogs my CPU. I tried a CPU limit of 1, and docker stats shows that container consistently at 100%, but I want the container to be failed at 95% and recreated. Is there any way I can accomplish this?
Thanks in advance.
CPU is a compressible resource, unlike memory. When memory requests exceed the limit, the kernel will kill the app. When CPU exceeds the limit, the kernel simply gives that process less time on the CPU and it runs slower.
There's no built-in capability to change this behavior. You would need to implement some form of external monitoring with the ability to kill the container when a threshold is exceeded.
More than likely, what you actually want is to set up a healthcheck for your container that detects the application becoming unresponsive. You will need to run the container in swarm mode so that a container with a failing healthcheck is automatically recreated.
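As a rough illustration of that approach, assuming the service exposes an HTTP health endpoint at /health (the endpoint, image name, and timings here are hypothetical):
# Recreate the task when the (hypothetical) health endpoint stops responding, rather than when CPU crosses a percentage.
docker service create --name myapp \
  --health-cmd "curl -f http://localhost:8080/health || exit 1" \
  --health-interval 30s --health-retries 3 --health-timeout 5s \
  --restart-condition any \
  myimage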
We're running several containers on a single Docker host, mainly to run R and Python apps for data analysis. When I load a big table into one of the containers, its memory footprint on the Docker host increases.
However, when I close the Jupyter Notebook or R session, the container's memory footprint appears to remain unchanged on the host. It seems that the memory consumption of a Docker container can only go up, not down.
I know that Linux in general uses memory that is not needed by applications for caching. However, how is this handled in the case of Docker containers? From an individual container's perspective there is plenty of memory (we don't want to limit the memory available to containers), so even if it is not needed inside this particular container, it would remain "occupied" by the container and therefore inaccessible to other containers. And the host doesn't know whether this memory is really needed or simply used for caching.
So how is this dealt with? I can imagine a situation where several people have started containers in which they have loaded or generated big data sets, but this was only temporary, and now the host's memory is all occupied because the memory is not freed.
I'm pretty sure that this is not how it works, so can someone explain this to me, please?
Many thanks,
Enno
In the Docker documentation, under resource constraints, there is an explanation of limiting the memory available to containers. While a container is running, its memory is not freed back to the host based on what the processes inside it are doing. The docs explain how the host system manages memory:
It is important not to allow a running container to consume too much of the host machine’s memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. Any process is subject to killing, including Docker and other important applications. This can effectively bring the entire system down if the wrong process is killed.
Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system...
Docker containers can use memory, but the Docker daemon takes steps to keep them from crashing the host system. The memory allotted to Docker containers can also be limited:
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine.
We do not want to limit the containers' memory, but there are options to do so, such as --memory=<value>, --memory-swap, and --memory-reservation. So no, the host cannot free up the memory of a running container, but it does mitigate the risk of all memory being occupied and the kernel potentially killing a crucial system process.
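For completeness, those flags look like this in practice (a sketch; the values and the busybox image are arbitrary):
# Hard cap of 1 GB, up to 1 GB of additional swap, and a 750 MB soft reservation.
docker run -m 1g --memory-swap 2g --memory-reservation 750m busybox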
Please excuse the formatting. Hope this helps; I also linked the related documentation. Also, not completely related, but maybe you can check this out about using a Java application in a container:
Why the docker container memory usage doesn't decrease?
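One more thing that may help with the caching part of your question: the kernel exposes a per-container breakdown of cache versus application memory. A minimal sketch, assuming a Linux host using cgroup v1 and a placeholder container name web:
# The "cache" line is reclaimable page cache; "rss" is the anonymous memory the apps actually hold.
docker exec web cat /sys/fs/cgroup/memory/memory.stat
# Compare against the usage and limit that docker stats reports for the same container.
docker stats --no-stream web
The cache portion is memory the kernel can reclaim under pressure, so a large usage figure does not necessarily mean that memory is lost to other containers.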