How to limit memory usage when I run Memgraph Platform within Docker? - docker

I've been running Memgraph for a few days now and everything is working as expected. This is the first time that I'm using Docker.
I've noticed that when I shut down Memgraph Platform, my RAM is still in use. I need to restart my computer to free up my RAM. Is there some switch that I can use to limit the memory that Memgraph Platform uses? Is there some way to release the memory after I shut it down?
If it is important, my OS is Windows 10 Professional and I have a 6-year-old laptop with 8GB of RAM.

The issue you are experiencing is not related to Memgraph but to Docker, or to WSL2 to be more precise. You say that you use Windows 10, so I presume your Docker is configured to use WSL2.
You didn't write which exact build of Windows 10 you are using, but depending on the build, WSL2 can use up to 80% of your RAM if you don't limit it.
When you run the Docker image, you will see a process called vmmem. When you shut down the running container, this process will still occupy your RAM. Restarting your computer frees up the RAM, which is what you are experiencing.
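If you want to confirm that it is vmmem holding the memory, you can check its working set from PowerShell while Docker/WSL2 is running:
# Shows how many GB the WSL2/Hyper-V VM process is currently holding
Get-Process vmmem | Select-Object Name, @{Name='WorkingSetGB'; Expression={[math]::Round($_.WS / 1GB, 1)}}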
The solution is not to change the configuration of your Memgraph but to configure Docker. You need to limit the amount of memory that WSL2 can use. Be careful, though: this change will affect all of your WSL2 instances, not just the Docker ones.
The exact steps you need to take are:
Shut down all of the WSL instances with wsl --shutdown
Edit the .wslconfig file (it is located in your user profile folder)
Add the following lines to it:
[wsl2]
memory=3GB
This will limit the RAM usage of WSL to 3GB. I hope that this will help you.
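Putting it all together, the whole procedure from PowerShell looks roughly like this (3GB is just the value from above; pick whatever fits your 8GB machine):
# Quit Docker Desktop first, then shut down every WSL instance
wsl --shutdown
# Create or edit the config file in your user profile folder
notepad $env:USERPROFILE\.wslconfig
# Contents of .wslconfig:
[wsl2]
memory=3GB
After saving the file, start Docker Desktop again; vmmem should now stay at or below the limit.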

Related

Ubuntu 20.04 memory leak with Docker and Tomcat 9

My setup is as follows:
An Ubuntu 20.04 server (16GB RAM) that runs Docker with an Elasticsearch 6.8.16 image in a container, started with the following env values: -e JAVA_OPTS="-Xmx2g -Xms1g -XX:MaxPermSize=1g".
It also hosts two apps on Tomcat 9, and I have set the same envs for Tomcat via setenv.sh in Tomcat's bin folder.
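To be concrete, the Elasticsearch container is started more or less like this (reconstructed from memory, so the exact image tag may differ), and memory is checked on the host with free and docker stats:
# Elasticsearch 6.8.16 container with the JVM flags above
docker run -d --name elasticsearch \
  -e JAVA_OPTS="-Xmx2g -Xms1g -XX:MaxPermSize=1g" \
  docker.elastic.co/elasticsearch/elasticsearch:6.8.16

# Host-side checks
free -h          # note that memory listed under buff/cache is reclaimable
docker stats     # live per-container memory usage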
However, after a few hours my remaining memory is less than 100MB, and this happens every day. It stabilizes after I reboot the server, but after a few hours it falls under 100MB again.
Does anyone know how I can fix this?
If anyone needs any additional information, I am more than happy to provide it.
P.S. For some reason, my CPU always has 100% usage on one core while the other one is below 10%.
Thanks in advance!

Increasing RAM for a Docker Container in Windows

I am following this tutorial https://docs.docker.com/docker-for-windows/#docker-settings-dialog to install Docker on Windows. I am stuck on the Settings section under the Resources tab. My view of Resources does not match what is shown at that link. Is there a way to increase my RAM so I can get ELK to run? I installed the Docker Desktop application with Hyper-V.
This is what I see in my settings.
What I should be seeing, but am not.
Though you mention using Hyper-V, because of your screenshot (notably the WSL Integration tab) I suspect you may be running Docker Desktop in WSL2 mode instead of Hyper-V mode. (WSL2, to my understanding, is the newer, faster option in many cases.)
With that assumption, to alter the RAM in your WSL2 VM, you have to create a C:\Users\username\.wslconfig file with the VM settings. The details are described on this page, which is actually linked from the page you mentioned.
This is an example of a .wslconfig file:
[wsl2]
memory=9GB # Limits VM memory
Note that this applies to all WSL2 VMs (I guess they are called distros?), which I'm not sure is exactly the right answer, since Docker seems to produce 2 distros by itself, plus whatever other distros you have (see wsl --list). Do you want to increase the RAM for all distros?
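For example, wsl --list --verbose typically shows something like this on a Docker Desktop machine (the Ubuntu entry is just an example; docker-desktop and docker-desktop-data are the two distros Docker creates):
PS C:\> wsl --list --verbose
  NAME                   STATE           VERSION
* Ubuntu-20.04           Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2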
However, to quote this page:
WSL 2's memory usage grows and shrinks as you use it. When a process frees memory this is automatically returned to Windows.
This sounds to me like the .wslconfig memory setting is a max size, which is only allocated when needed, so I assume setting it for all WSL distros won't cause all of them to balloon up to 9GB immediately upon distro startup unless those distros try to use all that memory.
They go on to say:
However, as of right now WSL 2 does not yet release cached pages in memory back to Windows until the WSL instance is shut down. If you have long running WSL sessions, or access a very large amount of files, this cache can take up memory on Windows. We are tracking the work to improve this experience on the WSL Github repository issue 4166
I have experienced this ballooning memory issue on large ML jobs, so just something to be aware of.
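Until that is improved, the only ways I know of to get that cached memory back are to shut the instance down or to drop the page cache from inside the distro:
# From Windows (PowerShell): stops every WSL2 VM and returns its memory
wsl --shutdown

# Or from inside the running distro: drop clean page cache without a shutdown
sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'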
So, the .wslconfig change has seemed to work for me. Another option that has helped me is increasing the swap size via .wslconfig, since my machine has limited memory.
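For reference, a .wslconfig that sets both looks like this (the sizes are only examples):
[wsl2]
memory=9GB  # Limits VM memory
swap=16GB   # Size of the WSL2 swap file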

Docker could not start because I do not have enough memory. How to solve it?

I got into an HTML/CSS/JavaScript course and I need Docker Desktop installed and functional on my laptop. The problem is that I cannot start it because I do not have enough memory; the error appears every time I try to start it. I have tried to solve it by lowering the Docker Engine settings, freeing up some memory with RAMMap, and switching Windows to performance mode, but unfortunately the error is still there.
The laptop that I work on has only 2 GB of RAM. Is there a solution to start Docker?

Docker in a Parallels' Virtual Windows 10 Pro Machine

I have a 2013 Mac Pro running the latest Parallels Desktop Pro v12.2.0 (41591).
On it is a Windows 10 Pro virtual machine with Docker version 17.03.1-ce-win10 (11972).
Docker can only run with 'Windows containers', because when trying to fire up the 'MobyLinux' instance in Hyper-V, it never starts, always bombing out at:
tsc: Fast TSC calibration failed
I understand this to be some time-dependent sync that has to happen at boot time, or this failure occurs. I bought a WD 1TB SSD on a Thunderbolt dock to speed up the run/boot time of the virtual machine (it was on my platter RAID cage before), to no avail. No difference.
Parallels IS set to 'enable nested virtualization', and I have started a virtual machine in Hyper-V on the Windows 10 Pro VM just fine, no errors. I have checked and unchecked 'PMU Virtualization', which I understand will provide statistics to the host but slow the VM.
I tried:
reducing the number of assigned cores to the VM, as suggested by another post, to no avail (2-6 cores tried)
reducing the cores to '1' for Docker (and mixing that with the above attempt)
increasing the number of cores for Docker
adding/reducing memory for the VM/Docker
playing with the C:\Program Files\Docker\Docker\resources\MobyLinux.ps1 file that loads the VM, changing something suggested in another post
verifying that "C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx" is the correct location for the .vhdx
verifying that the .iso is at "C:\Program Files\Docker\Docker\Resources\mobylinux.iso"
uninstalling Hyper-V/reinstalling Hyper-V manually and letting Docker do it automatically
...
I am at wit's end. I specifically bought this machine so I could do my MS/Visual Studio development along with iOS development on the same box. I have done so, this way, for the past 5-6 years with a 2009 Mac Pro before and now my 2013 MP, but never with Docker before...
So, I need one of two solutions:
a way to make Visual Studio 2015/2017 'look' at my host Mac's Docker instance in order to debug/move on to development (roughly sketched below)
a way to make this 'MobyLinux' Docker vm run.
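For the first option, what I have in mind is roughly the following; the port and the Mac's address as seen from the Parallels VM are placeholders I would still need to verify:
# On the Mac: expose the local Docker socket over TCP (socat from Homebrew; port 2375 is unauthenticated, trusted network only)
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock

# On the Windows 10 VM: point the Docker CLI (and Visual Studio's Docker tooling) at the Mac
set DOCKER_HOST=tcp://10.211.55.2:2375
docker version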
I was having the same issues. I had initially set the memory to the highest levels allotted, and Docker just flat-out would not run in the Windows box. After tinkering with it for a while, I realized that I had not applied any of the Windows updates in the box, so I ran all of those and logged back in, but was still getting the same issue of Docker not running. That is when I moved over to Parallels and made the changes shown below. Hopefully that helps!
result of docker version:
https://a.cl.ly/kpumLPz4
hyper v:
https://a.cl.ly/jkunldkm
settings in parallels:
https://a.cl.ly/QwuGKq1D
additional settings in parallels that I changed:
https://a.cl.ly/9ZuNElnb
command that I ran for hello_world:
docker run --rm busybox echo hello_world
Windows docs on Linux containers
Docker docs on Windows install

Docker not releasing memory when shut down, Windows 10

I have recently started using Docker for new development work; however, I am still required to switch back to working on our older on-premise offering from time to time. That is, I sometimes need to shut down Docker and spin up an installation of our on-premise server.
I find that when I do this with Docker installed, the performance of this server is terrible, essentially unusable; I need to uninstall Docker to get it to work again.
When I have Docker running I can see it using the memory (my machine has 32 GB of RAM, and I am telling Docker to use 16), and when I shut Docker down I can see the memory being released, according to Task Manager anyway; I can also see in Hyper-V Manager that the VM has been shut down. However, the performance of the on-premise server install continues to act as if the memory is in use. This is not a small performance hit: actions that should take 1 second take 20 or 30.
It would seem like Docker is not actually releasing the memory on shutdown and only does so when I actually uninstall it; when I do that, performance recovers completely.
Is this a known issue? Is there anything else I can try to see where the memory is going? I can find no other reports about it.
I am using Windows 10 with Docker version 17.03.1-ce-win5 (10743).
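For completeness, this is how I have been checking after shutting Docker down (elevated PowerShell; MobyLinuxVM is the name of the Hyper-V VM that my Docker install created):
# Confirm Docker's VM is off and holds no assigned memory
Get-VM MobyLinuxVM | Select-Object Name, State, MemoryAssigned

# Largest processes by working set, to see what is still holding RAM
Get-Process | Sort-Object WS -Descending |
  Select-Object -First 10 Name, @{Name='WorkingSetMB'; Expression={[math]::Round($_.WS / 1MB)}}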
