Docker RAM Utilization

I am currently experiencing what looks like a memory leak in Docker Desktop. I am running only one container at the moment, but the 'Virtual Machine Service' process is sitting at around 8GB, and it does not change after restarting my PC.
Rebooting the machine did not fix the issue.

I doubt that this is a memory leak. In the Settings > Resources > Memory section of Docker Desktop, you will probably see that 8GB is allocated, and you can change this value. Mine was also 8GB; I am guessing Docker allocates half of the available memory (16GB in my case), but I could not find this written in the configuration options docs.
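If you prefer config files over the GUI, the same allocation should also show up (on Windows with the Hyper-V backend) in %APPDATA%\Docker\settings.json; a minimal sketch, assuming your Docker Desktop version exposes the memoryMiB key:
"cpus": 2,
"memoryMiB": 4096,
Restart Docker Desktop afterwards for the change to take effect.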
Hope that helps ;)


Docker desktop eats all memory and crashes

Using Docker Desktop (19.03.13) with 6 containers on Windows 10, on a machine with 16GB RAM.
In docker stats, each container consumes 20-500 MB; all together they consume about 1GB.
But in Task Manager, Docker eats ~10GB and crashes from lack of system memory.
How can I check what consumes so much memory in Docker?
And how can I prevent this?
Try creating a .wslconfig file at the root of your user folder (C:\Users\<my-user>) to adjust how much memory and how many processors Docker will use.
This is the content of the .wslconfig file:
[wsl2]
memory=2GB # Limits VM memory in WSL 2 up to 2GB
processors=2 # Makes the WSL 2 VM use two virtual processors
Then, restart the computer. You will find that the Vmmem process only takes the amount of RAM you defined.
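Alternatively, if you would rather not reboot the whole machine, shutting down the WSL 2 VM from an elevated PowerShell prompt and then restarting Docker Desktop should apply the new limits as well:
wsl --shutdown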
You can learn more here.
I guess you are using the new WSL 2 based engine. Try switching the Docker engine back to Hyper-V by opening Docker Settings -> General and unchecking 'Use the WSL 2 based engine'.
To explain:
I noticed it started happening to me when the WSL 2 engine was introduced; I switched to it automatically since it was the new engine, and the memory issues started arising from then on.
Restarting/closing Docker did not free the memory, and I noticed in Task Manager that Vmmem was the one eating all the memory, so I had to force-close it (which caused Docker to stop working).
The last thing I did was switch the Docker engine back to Hyper-V, which solved my high memory usage.
If you are using WSL 2, set the memory in .wslconfig to half of your RAM. I don't know why, but I had the same problem with 8GB RAM.
This is my .wslconfig:
[wsl2]
memory=4GB # I have 8GB RAM
processors=2
And the result was good: consumption stays reasonable. At the moment I have Docker running with 8 images.
Although this problem is already marked as SOLVED, there is still another possible cause in recently updated versions: you may have allocated too many resources to the Docker VM (hyperkit).
Go to Settings -> Resources -> Advanced and check whether you have reserved too much there.
My Docker now takes less than 2% CPU.
After updating .wslconfig to be:
[wsl2]
memory=8GB
swap=2000
processors=4
... and then restarting Docker, the CPU consumption was still over 80%, and there were 5 Docker Desktop processes (each taking 17-18%) in Windows Task Manager. I reset Docker to factory defaults and the CPU was still pegged at 80% or more.
I then deleted the .docker folder (on Windows the path is %USERPROFILE%\.docker) as suggested by jmichalek-fp. I took care to use Shift-DEL so as not to move it to the Recycle Bin, because I remember that in the past recycled items could still be found by processes holding a link to the file.
After the factory reset, increasing the .wslconfig resources, deleting the .docker folder, and restarting Docker, it is now running only one Docker Desktop process, and, with a Node.js app running in it, it consumes between 0.5% and 2% CPU.
I found "delete .docker folder" in this github issue: https://github.com/docker/for-win/issues/12266
As far as I know, docker stats does not show RAM reservations. Try setting RAM limits using the -m flag. There is information on how to control resources with Docker here:
https://docs.docker.com/config/containers/resource_constraints/
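A minimal sketch of such a limit (nginx here is just a placeholder image): -m caps the container's RAM, and --memory-swap caps RAM plus swap:
docker run -d --name web -m 512m --memory-swap 1g nginx
With these flags set, docker stats should show the 512 MiB ceiling in the MEM USAGE / LIMIT column.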
I am guessing that on Windows there is something similar to what exists on macOS.
Open your Docker app and go to the dashboard.
Click any container.
Click Stats.
You will get information on your CPU, RAM usage, disk read & write, and network usage.
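The CLI equivalent works the same way on both platforms; --no-stream takes a single snapshot instead of updating continuously:
docker stats --no-stream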
When I had memory issues, which I used to have frequently, I would set up alias scripts that I could chain together to stop/kill/restart containers and do whatever setup I needed, for example:
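A rough sketch of such aliases (macOS/Linux shell syntax; the names and the chaining are my own choices):
alias dstopall='docker stop $(docker ps -q)'   # stop every running container
alias drmall='docker rm $(docker ps -aq)'      # remove all stopped containers
alias dreset='dstopall; drmall'                # chain them to tear everything down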
There is no preventing Docker behaving the way it behaves unless you want to start contributing and making pull requests. This isn't an uncommon issue; Docker is a free service, and I recommend working around its shortcomings.

Should Docker release all memory when all containers are closed?

I am debugging a possible memory leak in a web service I have running as a Docker network. The service has a JavaScript front end, a Flask REST API, a Dask worker pool, the spaCy natural language toolkit...the works. I see intermittent out-of-memory problems and I'm trying to get a handle on what could be going on.
I can run this system on my laptop, a MacBook Pro with 16 GB of memory where I am using Docker Desktop. When there are no containers running, Activity Monitor shows com.docker.hyperkit using about 12 GB. Then I launch the Docker network, which ultimately runs 14 containers to house the various components. I perform a fairly large batch job in the Docker network. It runs for an hour, during which time com.docker.hyperkit's memory creeps up to around 18 GB. This is not surprising--this is a memory intensive service. But when I stop all the containers in the network, I would expect com.docker.hyperkit's memory usage to drop back to 12 GB. Instead it stays at 18 GB. The only way I can get it back to 12 GB is to restart the Docker Desktop.
Is this expected behavior? It looks like a memory leak in Docker.
No, it should not release the memory, and yes, this is expected behavior.
There is no way to run Docker containers natively on macOS, so you run them inside of a virtual machine. A VM gets memory assigned to it, which it assigns to processes running inside of that VM. When those processes inside of the VM exit, the resources are released back to the VM, but not back to the parent macOS. That's just how VMs work, and the fact that it didn't take all of the memory up to the limit specified in the Docker preferences immediately on startup is an impressive feat in itself.
The containers themselves are processes running within this VM, and they will release all of their memory back to the VM upon exit. If you run something like docker run --rm busybox free, you'll likely see the memory being used and freed within the VM.
For more details on this, there are several extensive threads in the GitHub issues. Most of the comments on these threads appear to be from users assuming macOS is running containers directly, rather than a VM that runs containers. Even completely idle, that VM will use some resources to run the kernel, container runtime daemons, volume sharing code, port forwarding code, etc. There's a lot of magic under the covers to make Docker not look like a VM to the user, so that you can just pass paths and connect to ports on the macOS side. The most helpful comment in the thread, to me, is here: https://github.com/moby/hyperkit/issues/231#issuecomment-448416559

Docker Desktop cannot set large disk size

I'm running Docker Desktop 2.2.0 on Windows 10. It appears that the disk size cannot be set beyond 64GB. I tried setting the diskSizeMiB value to 100GB in %APPDATA%\Docker\settings.json, but Docker appears to ignore it and sets the size to 64GB in the resulting Hyper-V VM:
"cpus": 6,
"diskSizeMiB": 102400,
The issue I'm having is older images being evicted when pulling new ones in. Even after manually expanding the Hyper-V disk to 100GB, docker pull deletes older images to make space for new ones.
The Docker for Windows docs don't seem to explicitly mention a limit, but 64GB ominously equals 2^16 MiB, which hints at it being a technical limit.
Does anyone know of a workaround for this limitation?
It looks like I was on the right track with increasing the virtual disk size directly in Hyper-V (see this guide). The only missing piece was restarting Docker (or Windows). Once restarted, I was able to use the full disk.
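For anyone scripting this, the expansion can be done with the Hyper-V PowerShell module from an elevated prompt; the .vhdx path below is an assumption and varies by Docker Desktop version:
Resize-VHD -Path "C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx" -SizeBytes 100GB
Then restart Docker (or Windows) as described above.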

Starting Docker for Windows takes so much RAM even without running a container. How do I prevent it?

When I start Docker for Windows, memory usage increases by almost 25% of 6 GB (that's 1.5 GB) without even running a container. I can't see the Docker process in Task Manager, but I figured out the memory usage by looking at the memory usage % before and after running the Docker for Windows program.
I'm running Windows 10. How can I prevent Docker from eating up all this RAM?
You can change it in Settings: choose the Advanced tab and decrease the memory allocation with the slider.
Other settings:
https://docs.docker.com/docker-for-windows/#docker-settings-dialog
The solution is to create a .wslconfig file in the Windows home directory (C:\Users\<Your Account Name>).
The contents of the file are as follows:
[wsl2]
memory=1GB
processors=1
The memory and processors values are the resources allocated to the WSL 2 process. You can change them according to your preference. This is my config on a 16GB i5 machine.
After that, restart the WSL 2 process:
Start PowerShell in admin mode and type: Restart-Service LxssManager
After that, you are good to go!
P.S.: Start Docker only when it is required.

How do I record RAM usage over time on Google Compute Engine?

Im using the "gci" container optimised vm image running on GCP.
My program has a spike in disk reads, and I think RAM, and then crashes.
The problem is I cannot see RAM usage, only disk and CPU.
I cannot install any utilities on the "gci" VM; I can only run tools inside a Debian-based container called "toolbox".
How do I record RAM usage?
There are several commands in Linux that can be used to check RAM, for example vmstat, top, free, and /proc/meminfo. See this link: https://www.linux.com/blog/5-commands-check-memory-usage-linux
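A minimal sketch for recording usage over time from inside the toolbox (vmstat and free come from procps, which the Debian-based toolbox should include; the log path is my own choice):
toolbox                        # enter the Debian-based debugging container on the COS host
vmstat -t 60 >> /tmp/ram.log   # append timestamped memory/CPU samples every 60 seconds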
