I am following this tutorial https://docs.docker.com/docker-for-windows/#docker-settings-dialog to install Docker on Windows. I am stuck on the Settings section under the Resources tab. My view of Resources does not match what is shown at that link. Is there a way to increase my RAM allocation so I can run ELK? I installed the Docker Desktop application with Hyper-V.
This is what I see in my settings.
What I should be seeing, but am not.
Though you mention using Hyper-V, because of your screenshot (notably the WSL Integration tab), I suspect you may be running Docker Desktop in WSL2 mode instead of Hyper-V mode. (WSL2, to my understanding, is the newer, faster option in many cases.)
With that assumption, to alter the RAM in your WSL2 VM, you have to create a C:\Users\username\.wslconfig file with the VM settings. The details are described on this page, which is actually linked from the page you mentioned.
This is an example of a .wslconfig file:
[wsl2]
memory=9GB # Limits VM memory
Note that this applies to all WSL2 VMs (distros, as WSL calls them), which I'm not sure is exactly the right answer, since Docker creates two distros by itself, plus whatever other distros you have (see wsl --list). Do you want to increase the RAM for all distros?
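If you're curious which distros exist on your machine (Docker Desktop's two are named docker-desktop and docker-desktop-data), you can list them along with their state:
wsl --list --verbose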
However, to quote this page:
WSL 2's memory usage grows and shrinks as you use it. When a process frees memory this is automatically returned to Windows.
This sounds to me like the .wslconfig memory setting is a max size, which is only allocated when needed, so I assume setting it for all WSL distros won't cause all of them to balloon up to 9GB immediately upon distro startup unless those distros try to use all that memory.
They go on to say:
However, as of right now WSL 2 does not yet release cached pages in memory back to Windows until the WSL instance is shut down. If you have long running WSL sessions, or access a very large amount of files, this cache can take up memory on Windows. We are tracking the work to improve this experience on the WSL Github repository issue 4166
I have experienced this ballooning memory issue on large ML jobs, so just something to be aware of.
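One workaround I've seen suggested for long-running sessions (run it inside the WSL distro; it asks the Linux kernel to drop its page cache so the memory can be handed back to Windows):
sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'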
So, the .wslconfig change has seemed to work for me. Another option that has helped me is increasing the swap size via .wslconfig, since my machine has limited memory.
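As a sketch, a .wslconfig that does both might look like this (the sizes are purely illustrative; tune them to your machine):
[wsl2]
memory=9GB # cap on how much RAM the WSL2 VM can claim
swap=16GB # larger swap file to compensate for limited physical RAM
Run wsl --shutdown after editing so the new settings apply on the next start.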
Related
I've been running Memgraph for a few days now and everything is working as expected. This is the first time that I'm using Docker.
I've noticed that when I shut down the Memgraph Platform, my RAM is still in use. I need to restart my computer to free it up. Is there some switch I can use to limit the memory that Memgraph Platform uses? Is there some way to release the memory after I shut it down?
If it is important, my OS is Windows 10 Professional and I have a six-year-old laptop with 8GB of RAM.
The issue you are experiencing is not related to Memgraph, but to Docker, or more precisely to WSL2. You say that you use Windows 10, so I presume your Docker is configured to use WSL2.
You didn't write which exact build of Windows 10 you are using, but depending on it WSL can use up to 80% of your RAM if you don't limit it.
When you run a Docker image you will see a process called vmmem. When you shut down the running Docker image, this process still occupies your RAM; restarting your computer frees it up, which is what you are experiencing.
The solution is not to change the configuration of your Memgraph, but to configure Docker. You need to limit the amount of memory that WSL2 can use. But be careful: this change will affect all of your WSL2 instances, not just the Docker ones.
The exact steps that you need to do are:
Shut down all of the WSL instances with wsl --shutdown
Edit the .wslconfig file (it is located in your user profile folder)
Add the following lines to it:
[wsl2]
memory=3GB
This will limit the RAM usage of WSL to 3GB. I hope that this will help you.
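For reference, the whole sequence from a command prompt might look like this (3GB is just an example value):
wsl --shutdown
notepad %USERPROFILE%\.wslconfig
wsl --list --verbose
The first command stops every running WSL2 instance, the second opens the config file so you can add the lines above, and the third lets you confirm that everything shows as Stopped. The limit applies the next time an instance starts.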
Before I heard about WSL, I was using VirtualBox and other software to run Linux. But WSL was amazing: instead of pre-allocating memory, WSL uses dynamic allocation, which was really useful. When I heard about the WSL2 backend in Docker I was really happy. But nothing went as expected. A process named vmmem starts with the WSL2 backend and uses almost 2 GB of RAM. Even when I use the Hyper-V backend, it does not allocate that much memory. Even with WSL2 Ubuntu or WSL2 Kali the same process starts, but it does not allocate that much memory. I searched YouTube, Quora, and Google, and I could not find any solutions.
I want to switch back to the Hyper-V backend or get rid of the vmmem process, so please help me.
Thanks to everyone who reads this kindly, and special thanks to whoever solves my problem.
I can give you any additional information if you want.
To answer your question, there is a checkbox in the Docker settings (right-click the Docker icon > Settings > General) labelled "Use the WSL 2 based engine"; unchecking it switches Docker Desktop back to the Hyper-V backend, which is what you are looking for.
However, if you want to give WSL another go, you can limit the amount of memory that WSL can allocate.
Create a file in your %userprofile% folder called .wslconfig and give it the contents:
[wsl2]
memory=1GB
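After saving the file, shut WSL down so that the limit takes effect the next time it starts:
wsl --shutdown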
However, there is an ongoing issue with WSL2 and Docker that appears to indicate a memory leak. Limiting the memory in this way may cause undesirable side effects.
I got into an HTML/CSS/JavaScript course and I need Docker Desktop installed and working on my laptop. The problem is that I cannot start it because I do not have enough memory; the error appears every time I try to start it. I have tried to solve it by lowering the settings of the Docker Engine, freeing up memory with RAMMap, and switching Windows to performance mode, but unfortunately the error is still there.
The laptop that I work on has only 2 GB of RAM. Is there a solution to start Docker?
Let's say that I make an image for an OS that uses a kernel of version 10. What behavior does Docker exhibit if I run a container for that image on a host OS running a kernel of version 9? What about version 11?
Does the backward compatibility of the versions matter? I'm asking out of curiosity because the documentation only talks about "minimum Linux kernel version", etc. This sounds like it doesn't matter what kernel version the host is running beyond that minimum. Is this true? Are there caveats?
Let's say that I make an image for an OS that uses a kernel of version 10.
I think this is a bit of a misconception, unless you are talking about specific software inside your Docker image that relies on newer kernel features, which should be pretty rare. Generally speaking, a Docker image is just a custom file/directory structure, assembled in layers via FROM and RUN instructions in one or more Dockerfiles, with a bit of metadata like which ports to open or which file to execute on container start. That's really all there is to it. The basic principle of Docker is very much like a classic chroot jail, only a bit more modern and with some candy on top.
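As a minimal illustration (the base image and package here are arbitrary examples, not anything from your setup), a Dockerfile is just a recipe for that file structure plus metadata:
# base layer: an existing file/directory tree
FROM ubuntu:22.04
# new layer: the files this command adds on top
RUN apt-get update && apt-get install -y curl
# metadata only: documents a port, adds no files
EXPOSE 8080
# metadata: the process to start when the container runs
CMD ["curl", "--version"]
Nothing in this file pins the image to a particular kernel; the layers are just files.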
What behavior does Docker exhibit if I run a container for that image on a host OS running a kernel of version 9? What about version 11?
If the kernel can run the Docker daemon it should be able to run any image.
Are there caveats?
As noted above, Docker images that include software which relies on bleeding-edge kernel features will not work on kernels that do not have those features, which should be no surprise. Docker will not stop you from running such an image on an older kernel, as it simply does not care what's inside an image, nor does it know what kernel was used to create the image.
The only other thing I can think of is software compiled manually with aggressive optimizations for a specific CPU, such as Intel or AMD. Such images will fail on hosts with a different CPU.
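For example, a hypothetical Dockerfile line like
RUN gcc -O3 -march=native -o /usr/local/bin/app app.c
bakes the build host's CPU features into the binary, which can then die with an "illegal instruction" error on a host whose CPU lacks those features.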
Docker's behaviour is no different: it doesn't concern itself (directly) with the behaviour of the containerized process. What Docker does do is set up various parameters (root filesystem, other mounts, network interfaces and configuration, separate namespaces or restrictions on what PIDs can be seen, etc.) for the process that let you consider it a "container," and then it just runs the initial process in that environment.
The specific software inside the container may or may not work with your host operating system's kernel. Using a kernel older than the software was built for is not infrequently problematic; more often it's safe to run older software on a newer kernel.
More often, but not always. On a host with kernel 4.19 (e.g. Ubuntu 18.04) try docker run centos:6 bash. You'll find it segfaults (exit code 139) because that old build of bash does something that greatly displeases the newer kernel. (On a 4.9 or lower kernel, docker run centos:6 bash will work fine.) However, docker run centos:6 ls will not die in the same way because that program is not dependent on particular kernel facilities that have changed (at least, not when run with no arguments).
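To reproduce this yourself on such a host (assuming kernel 4.19, e.g. stock Ubuntu 18.04):
docker run centos:6 bash
echo $?
docker run centos:6 ls
The first command dies immediately, the echo prints 139 (128 + signal 11, a segfault), and the ls runs normally.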
This sounds like it doesn't matter what kernel version the host is running beyond that minimum. Is this true?
As long as your kernel meets Docker's minimum requirements (which mostly involve having the necessary APIs to support the isolated execution environment that Docker sets up for each container), Docker doesn't really care what kernel you're running.
In many ways, this isn't entirely a Docker question: for the most part, user-space tools aren't tied particularly tightly to specific kernel versions. This isn't universally true; there are some tools that by design interact with a very specific kernel version, or that can take advantage of APIs in recent kernel versions for improved performance, but for the most part your web server or database just doesn't care.
Are there caveats?
The kernel version you're running may dictate things like which storage drivers are available to Docker, but this doesn't really have any impact on your containers.
Older kernel versions may have security vulnerabilities that are fixed in more recent versions, and newer versions may have fixes that offer improved performance.
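If you want to see what your host is running and what Docker reports (uname is standard Linux; the format string just pulls two fields out of docker info):
uname -r
docker info --format '{{.KernelVersion}} {{.Driver}}'
The second command prints the kernel version and the storage driver as Docker sees them.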
I have recently started using Docker for new development work; however, I am still required to switch back to working on our older on-premise offering from time to time. That is, I sometimes need to shut down Docker and spin up an installation of our on-premise server.
I find that when I do this with Docker installed, the performance of this server is terrible, essentially unusable; I need to uninstall Docker to get it working again.
When I have Docker running I can see it using the memory (my machine has 32 GB of RAM, and I am telling Docker to use 16), and when I shut Docker down I can see the memory being released, according to Task Manager anyway; I can also see in Hyper-V Manager that the VM has been shut down. However, the on-premise server install continues to act as if the memory is still in use. This is not a small performance hit: actions that should take 1 second take 20 or 30.
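One way to double-check the VM state outside Hyper-V Manager is from an elevated PowerShell prompt (MobyLinuxVM was the name Docker for Windows gave its Hyper-V VM around that version; adjust if yours differs):
Get-VM MobyLinuxVM | Select-Object Name, State, MemoryAssigned
That should confirm whether the VM is really Off and no longer holding assigned memory.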
It would seem that Docker is not actually releasing the memory on shutdown and only does so when I uninstall it; when I do that, performance recovers completely.
Is this a known issue? Is there anything else I can try to see where the memory is going? I can find no other reports about it.
I am using Windows 10 with Docker version 17.03.1-ce-win5 (10743).