After having read about the performance improvements when running Docker on wsl2, I have been waiting for the official release of Windows 10 that supports wsl2.
I updated Windows and Docker and switched on the Docker flag to use WSL 2, hoping for a performance boost for my Oracle Database running in a Docker container, but unfortunately the change slowed down the container and my laptop dramatically.
The performance of the container is about 10x slower and my laptop is pretty much stuck when starting the container.
It seems as if the memory consumption completely uses up my 8 GB and heavy memory swapping starts to take place.
Is there anything I can do to improve the performance of Docker on wsl2 or at least to better understand what's wrong in my setup?
My environment:
Processor Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz, 2 Core(s)
Installed Physical Memory (RAM) 8.00 GB
Microsoft Windows 10 Pro Version 10.0.19041 Build 19041
Docker version 19.03.8, build afacb8b
This comes from the "vmmem" process, which consumes as many resources as it can.
To solve the problem, go to your user folder,
for me:
C:\Users\userName
In this directory create a file named ".wslconfig" in which you configure how many resources WSL 2 may consume:
[wsl2]
memory=900MB # Limits VM memory in WSL 2 to 900MB
processors=1 # Makes the WSL 2 VM use one virtual processor
Now close Docker and wait for "vmmem" to disappear from the Task Manager.
Then you can restart Docker, and normally "vmmem" will not exceed the limit you have set (here 900MB).
If that doesn't work, restart your computer.
I hope this helps.
You probably have your code stored on the Windows machine in a folder similar to this...
C:\Users\YourName\projects\blahfu
But you are using Docker on WSL 2, which is a different (Linux) filesystem. So, when you do a Docker build, all of the code/context gets copied from the Windows filesystem to the Linux filesystem and then from there to the Docker container. This is what takes the most time and is incredibly slow.
Try to put your project into a folder like this...
/home/YourName/projects/blahfu
You should get quite a performance boost.
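For example, from a WSL shell the move might look roughly like this (the paths are just the illustrative ones from above, and the image tag is made up):

# Copy the project off the Windows mount into the Linux filesystem
mkdir -p ~/projects
cp -r /mnt/c/Users/YourName/projects/blahfu ~/projects/
cd ~/projects/blahfu
# The build context is now read from the Linux filesystem instead of the Windows share
docker build -t blahfu .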
WSL distros have their own filesystem, isolated from the Windows filesystem.
The basic idea is to copy your source code from the Windows file system to the WSL file system.
From Windows you can access the WSL filesystem and copy your project into it:
navigate with Explorer to \\wsl$
Rebuild the container from this location and this will do the trick!
If the data for the actual Docker container is stored on a Windows file system (i.e. NTFS) instead of on a native Linux filesystem (regardless of what the Docker container contents are, which are likely already Linux based), then I think you are going to see slow performance, because you're running WSL and using the Docker container from a mounted Windows file system (i.e. /mnt/c/...).
If you copy your Docker container to something like /usr/local or /home/<user>/docker on WSL, then you may see a 10x performance INCREASE. Try that and see if it works?
You need to limit the "vmmem" resources.
Just add a .wslconfig file at this path:
C:\Users\<yourUserName>\.wslconfig
Configure global options with .wslconfig
Available in Windows Build 19041 and later
You can configure global WSL options by placing a .wslconfig file into the root directory of your users folder: C:\Users\<yourUserName>\.wslconfig. Many of these settings are related to WSL 2; please keep in mind you may need to run
wsl --shutdown
to shut down the WSL 2 VM and then restart your WSL instance for these changes to take effect.
Here is a sample .wslconfig file:
[wsl2]
kernel=C:\\temp\\myCustomKernel
memory=4GB # Limits VM memory in WSL 2 to 4 GB
processors=2 # Makes the WSL 2 VM use two virtual processors
See https://learn.microsoft.com/en-us/windows/wsl/wsl-config
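As a quick sanity check, after wsl --shutdown and reopening your distro, you can verify from inside WSL that the limits were picked up (the expected values here are the ones from the sample above):

# Total memory reported by the WSL 2 VM should be close to the configured limit
free -h
# Number of virtual processors should match the "processors" setting
nproc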
Open your WSL 2 distribution (Ubuntu for example) and edit the ~/.docker/config.json file.
You only need to change:
{
"credsStore": "docker.exe"
}
"credsStore": "desktop.exe" : ultra-slow (over 2 minutes)
"credsStore": "wincred.exe" : fast
"credsStore": "" : fast
It works very well.
If you are using VS Code, there is a command named "Remote-Containers: Clone Repository in Container Volume..." which ensures you have full-speed file access.
From the documentation:
Repository Containers use isolated, local Docker volumes instead of binding to the local filesystem. In addition to not polluting your file tree, local volumes have the added benefit of improved performance on Windows and macOS.
As mentioned by Claudio above, setting the lines below in ~/.docker/config.json of the WSL Ubuntu server solved the problem for me.
{
"credsStore": "wincred.exe"
}
Earlier it was taking 5-10 min to build any simple image, now it is done in 1-2 seconds.
Downside: You have to make this change every time you open the server. I have tried every solution mentioned in https://github.com/docker/for-win/issues/9843 to solve this but nothing works for me.
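One possible workaround for that downside, untested on my side, is to re-apply the setting every time a shell opens, for example from ~/.profile in the WSL distro:

# Rewrite credsStore on login so the slow "desktop.exe" helper never sticks around
if [ -f ~/.docker/config.json ]; then
    sed -i 's/"credsStore": *"desktop.exe"/"credsStore": "wincred.exe"/' ~/.docker/config.json
fi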
Using Docker Desktop (19.03.13) with 6 containers in Windows 10. Having 16GB RAM.
In docker stats each container consumes 20-500 MB; all together they consume ~1 GB.
But in the Task Manager docker eats ~10gb and crashes from the lack of system memory.
How to check, what consumes so much memory in docker?
And how to prevent this?
Try creating a .wslconfig file at the root of your user folder C:\Users\<my-user> to adjust how much memory and how many processors Docker will use.
This is the content of the .wslconfig file.
[wsl2]
memory=2GB # Limits VM memory in WSL 2 up to 2GB
processors=2 # Makes the WSL 2 VM use two virtual processors
Then, restart the computer. You will find the vmmem process will only take the amount of RAM you defined previously.
You can learn more here.
I guess you are using the new WSL 2 based engine. Try switching the Docker engine back to Hyper-V by opening Docker settings -> General -> unchecking "Use the WSL 2 based engine".
To explain:
I noticed it started happening to me when the WSL 2 engine was introduced; I automatically switched to it since it was the new engine, and memory issues started arising then.
Restarting/closing Docker did not free the memory, and I noticed in Task Manager that vmmem was the one eating all the memory, so I had to force close it (which caused Docker not to work).
The last thing I did was switch the Docker engine back to Hyper-V, which solved my high memory usage.
If you are using WSL 2, set memory to half of your RAM in the .wslconfig. I don't know why, but I had the same problem with 8GB RAM.
This is my .wslconfig
[wsl2]
memory=4GB # I have 8GB RAM
processors=2
And the result was good: the consumption is reasonable now! At the moment I have Docker running with 8 images.
Although this problem is already marked as solved,
there is still another reason for it in recently updated versions:
you might have enabled too many resources for Docker's hyperkit.
Go to Settings -> Resources -> Advanced
and check whether you allocated too many resources there.
My Docker is taking less than 2% CPU now.
After updating .wslconfig to be:
[wsl2]
memory=8GB
swap=2000
processors=4
... and then restarting Docker, the CPU consumption was still over 80% and there were 5 Docker Desktop processes (each taking 17-18%) in Windows Task Manager. I reset Docker to factory defaults and still the CPU was pegged at 80% or more.
I then deleted the .docker folder (in Windows the path is %USERPROFILE%/.docker) as suggested by jmichalek-fp. I took care to do a Shift-DEL so as not to move it to the recycle bin, because I remember that in the past recycled items were still found by processes that held a link to the file.
After Factory Reset, then increasing .wslconfig resources, then deleting .docker folder and then restarting Docker, it is now running only one Docker Desktop process, and, with a NodeJs app running in it, it is consuming between 0.5% and 2% CPU.
I found "delete .docker folder" in this github issue: https://github.com/docker/for-win/issues/12266
As far as I know, docker stats does not show RAM reservations. Try setting RAM limits using the -m flag. There is some information on how to control resources using Docker here:
https://docs.docker.com/config/containers/resource_constraints/?spm=a2c41.12663380.0.0.59ed566dAqUZPu
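For example, a rough sketch of limiting a single container (the container name, image, and values are just placeholders):

# Cap the container at 256 MB of RAM and one CPU
docker run -d --name myapp -m 256m --cpus="1.0" nginx:alpine
# Confirm the limit shows up under MEM USAGE / LIMIT
docker stats --no-stream myapp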
I am guessing on Windows there is something similar to what exists on MacOS.
Open your docker app and go to the dashboard
Click any container
Click Stats
You will get information regarding CPU, RAM usage, disk read & write, and network usage.
When I had memory issues, which I used to frequently, I would setup alias scripts that I could chain together to stop/kill/restart and do what ever setup I needed on the containers.
There is no preventing Docker from behaving the way it behaves unless you want to start contributing and making pull requests. This isn't an uncommon issue. Docker is a free service; I recommend working around its shortcomings.
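The alias scripts mentioned above were along these lines; this is only a sketch of the idea, so adapt the names and the restart command to your own setup:

# Stop every running container, then remove all stopped containers
alias dstop='docker stop $(docker ps -q)'
alias dclean='docker rm $(docker ps -aq)'
# Chain them together and bring the stack back up (assumes a compose file in the current directory)
alias dreset='dstop; dclean; docker-compose up -d'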
When my Docker containers start, I receive the following notification that reads:
Docker Desktop has detected that you shared a Windows file into a WSL 2 container, which may perform poorly. Click here for more details.
My questions are:
What does this mean?
What is the better practice / how should this be avoided?
If the message has been closed, or I've clicked "Don't show again", how can I get to the details of this warning?
I am happy to share the Dockerfile or Docker-Compose setup if needed, but I simply cannot find anything either here on SO or by a Google search that points me in any direction, so I'm not sure where to start. I'm assuming the issue lies in the Dockerfile since that is where we running COPY to move some files around.
Docker Version: Docker Desktop 2.4.0.0 (48506) Community
Operating System: Windows 10 Pro (version 10.0.19041)
This error means that accessing files on the Windows host file system from a Linux container will perform a little slower than accessing files that are already in a Linux filesystem. Accessing Windows files from the Linux container will perform like accessing files on a remote file share.
Docker and Microsoft recommend avoiding this by storing your source files in a WSL2 distro's file system (which you can bind mount to the container) or building your container image to include all the files needed rather than storing your files in the Windows file system.
If you've clicked "Don't show again", you can get to the details of this message by going to Develop with Docker and WSL 2.
For more information, Docker for Windows Best Practices says:
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
Microsoft's Comparing WSL 1 and WSL 2 article has a whole section on Performance across OS file systems, and its opening paragraph says:
We recommend against working across operating systems with your files, unless you have a specific reason for doing so. For the fastest performance speed, store your files in the WSL file system if you are working in a Linux command line (Ubuntu, OpenSUSE, etc). If you're working in a Windows command line (PowerShell, Command Prompt), store your files in the Windows file system.
Also, the Docker blog article Docker Desktop: WSL 2 Best practices has an "Awesome mounts performance" section that says:
Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.
A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.
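To make the contrast concrete, using the same example paths from the quotes above (my-image stands in for whatever image you run):

# Slow: the project lives on the Windows drive and reaches the container via the Plan 9 share
docker run --rm -v /mnt/c/Users/Simon/windows-project:/sources my-image

# Fast: the project lives in the WSL 2 distro's own filesystem
docker run --rm -v ~/my-project:/sources my-image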
All of that advice is great if you want your primary development workflow to be in Linux. Docker wants you to go "all in" on Linux containers. But if you work primarily in Windows and just want to use a Linux container for a specialized task, then it's fine to click "Don't show again". As Microsoft said, "If you're working in a Windows command line, store your files in the Windows file system."
I run with my main development folder in Windows, and I bind mount it to a Linux container that's just used to execute unit tests. So my full build runs in Windows, then I run all my unit tests in Windows, and I finish by running all my unit tests in a Linux container too. Having Linux bind mount to my Windows folder works fast and great for this scenario where the "dotnet test" call in Linux is just loading and executing the required DLLs from my Windows volume.
This setup may sound like heresy to those that believe containers must be used everywhere, but I love containers for application deployment. I'm not convinced that you need to go all in and do all your development inside a container too. I'm happy with Windows (and VS 2019) as my development environment, and then I use Linux containers for application testing and deployment. So the Windows/WSL2 file system performance hit is a minimal impact to me.
I recently updated my Docker environment to run on WSL 2 on Windows.
For setting memory allocation limits on containers in previous versions, I had an option in the Docker Desktop GUI under Settings -> Resources -> Advanced -> Preferences to adjust memory and CPU allocation.
After WSL 2 integration, I am not able to find that option.
I assume I should run everything through my Linux distro from now on, so this is the solution I was able to find:
docker run -d -p 8081:80 --memory="256m" container_name
I don't want to have to set a flag each time I run a container. Is there a way to permanently set the memory allocation?
The Memory and CPU settings were removed for WSL2 integration. However, starting in Windows Build 18945, there is a workaround to limit WSL2 memory usage.
Create a %UserProfile%\.wslconfig file for configuring WSL2 settings:
[wsl2]
memory=6GB # Any size you feel like (must be an integer!)
swap=0
localhostForwarding=true
Run Get-Service LxssManager | Restart-Service in an admin Powershell (or reboot) and verify that the vmmem usage in Task Manager drops off.
For the complete list of settings, please visit Advanced settings configuration in WSL.
I just created the %UserProfile%\.wslconfig file with these two lines and left everything else untouched. It worked fine.
[wsl2]
memory=8GB
I did a full shutdown right after adding the file for WSL to pick up the new settings.
$ wsl --shutdown
See additional information from Microsoft here: Advanced settings configuration in WSL
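If you want to double-check that the VM actually went down before restarting Docker, you can list the running distributions from a Windows command prompt; after a successful shutdown the list should be empty:

wsl --list --running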
Scenario
Windows 10 Professional
Docker 18.06.1-ce running in Windows container mode
4GB of available memory on host system
using Hyper-V virtual machine
Problem
When trying to "switch to Linux containers" via Docker's taskbar item the process fails after a couple of seconds showing an error about "Not enough memory to start Docker".
Since the host system does not have that much memory, I'd like to reduce the maximum amount of memory the global Docker machine is allowed to use (I think 2 GB is the default here) down to just 1 GB.
When Docker is running in Windows container mode, there is no "advanced" section in Docker's settings that would allow reducing that memory assignment easily.
I was able to find the "MobyLinuxVM" using Windows' Hyper-V manager. However, when adjusting memory settings there, it is overwritten each time I start Docker and try again switching to Linux container mode.
Question
Is there a different way to define the maximum amount of memory for Docker without using the user interface (which won't work in this scenario due to the missing "advanced" section in Windows container mode - before being able to switch to Linux containers)?
After some searching I found out that the settings of Docker's user interface are stored in %APPDATA%\Docker\settings.json (e.g. C:\Users\olly\AppData\Roaming\Docker); the memory setting is defined in the memoryMiB property.
The following solved the problem in my environment:
quit Docker
modify the settings.json file, e.g. by entering notepad %APPDATA%\Docker\settings.json in the Run prompt (Windows key + R)
adjust value memoryMiB to 1024 (has been 2048 before)
in Docker versions 19.x and later the property is called memoryMiB
in Docker versions 18.x and before the property was called VmMemory
save settings.json
start Docker and finally be able to use "switch to Linux containers"
Property memoryMiB in Docker versions 19.x and later
Property VmMemory in Docker versions 18.x and before
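For reference, the relevant fragment of settings.json looked roughly like this after the change (only an excerpt; the real file contains many more keys):

{
  "memoryMiB": 1024
}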
This might seem like a stupid question, but here I am:
I'm running Ubuntu 16.04 and managed to install windows 10 in dual boot.
Running docker exclusively in linux so far, I decided to give it a try on Windows 10.
As I have already downloaded several Docker images on my Linux system, I would like to have a shared development environment. I must admit it would be a waste of time and disk space to download again, on my fresh Windows install, Docker images I already downloaded before (on Linux).
So my question is simple: can I use my Linux images/containers on Windows? I'm thinking of something like a global path variable, configured in Docker on Windows, pointing to my Linux images.
Any idea if this is possible, and if yes, what the pros, cons, and caveats are?
Thanks for helping me on this one.
Well, I would suggest creating your own local registry, pushing these images there, and then pulling them in your Windows Docker.
Sonatype Nexus (an artifact storage repository) can be used to store your Docker images. Check if this helps.
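A minimal sketch of that workflow, assuming the registry is reachable from both operating systems (for a dual-boot machine that usually means hosting it on another box or a NAS; the names, port, and IP below are placeholders):

# On the host that keeps the registry: start the official registry image
docker run -d -p 5000:5000 --name registry registry:2

# On Linux: tag an existing image for the registry and push it
docker tag myimage:latest 192.168.1.10:5000/myimage:latest
docker push 192.168.1.10:5000/myimage:latest
# (a plain-HTTP registry like this must be listed under "insecure-registries" in the Docker daemon settings)

# Later, on Windows: pull it back from the same registry
docker pull 192.168.1.10:5000/myimage:latest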
I guess it's not possible to share the same folder (to reduce disk usage) since the stored files are totally different:
Under Windows the file is:
C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx
the vhdx extension is specific to MS systems.
and under Linux it consists of 2 files:
/var/lib/docker/devicemapper/devicemapper/data
/var/lib/docker/devicemapper/devicemapper/metadata
see here for details
Where are Docker images stored on the host machine?
The technology underneath this is a file system layout that is specific to and optimal for Docker. Even if they used the same file system storage, it wouldn't be a good idea IMHO.
If the purpose is only to save time when reinstalling, just dump the list of all images from one system and re-pull them on the other one.
docker images --format "{{.Repository}}" > image-list.txt
then loop on the other OS
while read p; do
  docker pull "$p"
done < image-list.txt