I'm running Docker Desktop Community 2.1.0.3 on macOS Mojave. I've allocated 8 GB of memory to Docker, which already seems like a lot (that's half my RAM). Somehow, even after quitting and then starting Docker for Mac again, which means no containers are running, Docker is already exceeding the memory allocation by 1 GB.
What is the expected memory usage for Docker with no containers running? Is there a memory leak in Docker for Mac or Docker's HyperKit?
As @GabLeRoux has shared in a comment, the "Real Memory" usage is much lower than what you see in the "Memory" column in Activity Monitor.
This document thoroughly explains memory usage on macOS with Docker Desktop; the information below is excerpted from there.
To see the "Real Memory" used by Docker, right-click the column headers in Activity Monitor and select "Real Memory". The value in this column is what's currently physically allocated to com.docker.hyperkit.
Alternate answer: I reduced the number of CPUs and the amount of memory Docker is allowed to use in Docker's Resources preferences. My computer is running faster and quieter now.
I just put this in place, so time will tell if this solution works for me. Before, Docker was making my computer max out on memory; now usage is significantly reduced.
Thank you for the note on Real Memory. I added that column to my Activity Monitor.
UPDATE: It's been a few days now, and my computer stays well below its memory limit, and my fan barely runs, if at all.
I think you shouldn't be using swap while RAM isn't full, both for SSD health and for speed.
I'm running a regular JVM application in containers on GCP.
The container_memory_working_set_bytes metric returns 4GB, while sum(jvm_memory_bytes_used) returns 2GB.
I'm trying to understand which processes are using the remaining 2GB.
In theory, what can be consuming this memory, and how can I investigate it via Prometheus or from a Linux shell via kubectl exec?
According to these docs, jvm_memory_bytes_used only covers heap memory.
The JVM can consume much more than that, and this has already been asked and answered several times.
One of the best answers that I know about is here: Java using much more memory than heap size (or size correctly Docker memory limit)
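If you want to see where the rest of the memory goes from inside the pod, here is a rough sketch of what I'd try. The pod name is a placeholder, the container needs sh and grep for the first command, and jcmd only reports native memory if the JVM was started with -XX:NativeMemoryTracking=summary:

    # RSS and swap of the container's main process, as the kernel sees it
    kubectl exec my-jvm-pod -- sh -c 'grep -E "VmRSS|VmSwap" /proc/1/status'

    # Heap, metaspace, thread stacks, GC and code cache as reserved/committed by the JVM
    kubectl exec my-jvm-pod -- jcmd 1 VM.native_memory summary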
I managed to find the difference: it was the JVM committed memory, which can be found via jvm_memory_bytes_committed.
I am investigating a topic, which I will call “Docker swarm and memory management”.
This article here states that Docker does not recommend using swap memory, but I can't find (by googling) a place where the disadvantages of using swap in a Docker context are explained.
Can a kind soul enlighten me? :-)
It is normal to disable swap memory for all applications or services used in production.
Swap uses the hard disk as a substitute when RAM is full. This may seem beneficial, but RAM bandwidth ranges from about 2.1 GB/s for the oldest modules to 25.6 GB/s for the newest, whereas HDDs average around 135 MB/s and newer M.2 SSDs reach about 1.2 GB/s.
As you can see, the service would be greatly slowed down if it were using swap.
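If you want to make sure a particular container never touches swap, one way (for a plain docker run, not a swarm service; the image and sizes below are only examples) is to set the swap limit equal to the memory limit:

    # 512 MB of RAM, and memory+swap also capped at 512 MB, i.e. no swap for this container
    docker run -d --memory=512m --memory-swap=512m nginx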
I created my Docker image (a Python Flask app).
How can I calculate what limits to set for memory and CPU?
Are there tools that run performance tests against a container with different limits and then advise which limits are best?
With an application already running inside a container, you can use docker stats to see its current CPU and memory utilization. While there is little harm in setting CPU limits too low (it will just slow down the app, but it will still run), be careful to keep memory limits above the worst-case scenario. When an app attempts to exceed its memory limit, it will be killed and usually restarted by a restart policy or orchestration tool. If the limit is set too low, you may find your app in a restart loop.
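As a rough workflow, something like this; the container name, image name, and limit values are just placeholders:

    # Watch current CPU and memory usage of the running container
    docker stats --no-stream my-flask-app

    # Re-run it with limits set comfortably above the worst case you observed
    docker run -d --name my-flask-app --memory=512m --cpus=1.0 my-flask-image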
This is more about the consumption of your specific Flask application; you could probably use the resource module in Python to measure it.
More information here and here.
I am running a data analysis program in Docker using pandas on macOS.
However, the program gets killed on a large memory allocation in a DataFrame (I know because it gets killed while my program is loading a huge dataset).
Without the container, my program runs alright on my laptop.
Why is this happening and how can I change this?
Docker on macOS runs inside a Linux VM, which has an explicit memory allocation. From the docs:
MEMORY
By default, Docker for Mac is set to use 2 GB runtime memory, allocated from the total available memory on your Mac. You can increase the RAM on the app to get faster performance by setting this number higher (for example to 3) or lower (to 1) if you want Docker for Mac to use less memory.
Those instructions are referring to the Preferences dialog.
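You can confirm how much memory the VM actually has (and therefore the most any single container can get) from the command line; for example:

    # Total memory available to the Docker VM, in bytes
    docker info --format '{{.MemTotal}}'

    # The same thing, seen from inside a throwaway container
    docker run --rm alpine free -m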
I'm pretty sure at this stage that Redis needs a certain amount of free memory on the OS in order to run. In the past few weeks, I've seen Redis (Linux) run out of memory with a couple of gigabytes of RAM still free, and on Windows, it refuses to start when you are using a lot of memory on the system but still have a bunch left free, as in the screenshot below.
The error on Windows gives a hint as to why this is happening (although I'm not assuming it's the same on Linux). However, my question is more generic. How much free memory does Redis need in order to operate?
Redis requires between 2x and 3x the size of your data in RAM. The maxheap flag is Windows-specific.
According to the Redis FAQ, without a specific Linux configuration it might need 2x the memory of your dataset. From the document:
Short answer: echo 1 > /proc/sys/vm/overcommit_memory :)
With this configuration, the forked process (responsible for saving the dataset to disk) will be able to share memory pages more easily with the original process, so it won't need that much memory.
You can read more about this here: https://redis.io/topics/faq#background-saving-fails-with-a-fork-error-under-linux-even-if-i-have-a-lot-of-free-ram
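To make that setting survive a reboot, the usual approach is to also persist it via sysctl (paths can vary slightly by distro):

    # Apply immediately
    sudo sysctl vm.overcommit_memory=1

    # Persist across reboots
    echo "vm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf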