Earlier I used Docker with Hyper-V.
Recently I installed WSL 2 for Docker usage.
After the WSL 2 installation I downloaded and installed Ubuntu 20.04 and set it in the Docker Desktop settings.
So the command wsl --list --verbose returns
  NAME                 STATE    VERSION
* Ubuntu-20.04         Running  2
  docker-desktop       Running  2
  docker-desktop-data  Running  2
In daily work I see that free space keeps disappearing on disk C.
I found a few files of huge size, and I'd like to know what they are and whether I can delete or move them.
Here are the files:
c:\Users\***\AppData\Local\Docker\wsl\data\ext4.vhdx (55 GB)
c:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx (45 GB)
c:\Users\All Users\DockerDesktop\vm-data\DockerDesktop.vhdx (the same size as the previous one. What is it??)
General question: how do I reduce disk usage, and what are these files?
This can be resolved by compacting the vhdx file through diskpart.
The file below eats up a lot of disk space.
C:\Users\xxx\AppData\Local\Docker\wsl\data\ext4.vhdx
Here are the commands to resolve this and reclaim your disk space.
Ensure that your Docker data is cleaned up:
docker system prune --all
Warning: this deletes all your stopped containers, unused images, networks, and other data created by Docker (pass --volumes as well if you also want to remove unused volumes).
Terminate the WSL distributions that are unused; the terminate command is shown after this list.
Run the command wsl -l -v and terminate all other entries except the ones below (the entry with * represents your current distribution):
docker-desktop
docker-desktop-data
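To terminate a distribution by name, wsl --terminate is the command to use (the name below is a placeholder; use the names printed by wsl -l -v):
wsl --terminate <distribution-name>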
Take a backup of your current distribution: wsl --export <dist> <path>
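For example (the target path here is just an illustration; any location with enough free space works):
wsl --export Ubuntu-20.04 D:\backup\ubuntu-20.04.tar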
diskpart
This opens up a new DISKPART> prompt. Issue the command select vdisk file="your-docker-data-file"
Example: select vdisk file="C:\Users\xxx\AppData\Local\Docker\wsl\data\ext4.vhdx"
Finally, run compact vdisk
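Put together, the whole session looks something like this (the path is the one from the question; run wsl --shutdown first so the vhdx is not in use):
wsl --shutdown
diskpart
rem at the DISKPART> prompt:
select vdisk file="C:\Users\xxx\AppData\Local\Docker\wsl\data\ext4.vhdx"
compact vdisk
exit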
I observed the same issue: WSL 2 got bigger and bigger, and even when I deleted files inside WSL 2 I couldn't see the free space reflected on my host machine.
After some research I found a nice step-by-step guide by Stephen Rees-Carter on how to shrink the space used by WSL 2: How to Shrink a WSL2 Virtual Disk
A rough explanation:
WSL 2 keeps its whole filesystem in one single image on your host machine called ext4.vhdx (located somewhere like C:\Users\<Username>\AppData\Local\Packages\<Distro>\LocalState)
to shrink that file and return unused space, diskpart can be used:
select vdisk file="pathTo_ext4.vhdx"
compact vdisk
In my case the wsl2 image size was reduced by ~30%.
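To verify the reduction, you can compare the image size before and after compacting, e.g. in PowerShell (the path uses the same placeholders as above):
(Get-Item "C:\Users\<Username>\AppData\Local\Packages\<Distro>\LocalState\ext4.vhdx").Length / 1GB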
I've seen this issue a number of times and usually use docker system prune to solve it temporarily, but I don't understand why it says there is no space on the device.
The main drive on my Mac currently has 170 GB of free space, and I also have a second drive with 900 GB free; the images I'm building take up a total of 900 MB when built. So what is Docker talking about? I have plenty of storage space!
Since you specified that the platform is Mac, your Docker runtime is running inside a VM, which has its own resources allocated.
Assuming you are using Docker for Mac, you should increase the allocated disk space for the Docker VM (in Docker Desktop this is under Preferences -> Resources -> Disk image size).
In case you don't want to increase the amount of docker engine storage as answered here, you can free some space by running:
docker image prune
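By default docker image prune only removes dangling images; its -a flag also removes images not referenced by any container, which frees considerably more space:
docker image prune -a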
I'm really new to Docker, and my friend told me that docker system prune run from an elevated cmd prompt is supposed to clean up pretty much everything. After running it, however, a message about "reclaiming 16.24 gb" was displayed, but my file explorer doesn't show any changes to disk C. Restarting Docker or the host machine didn't help, and pruning volumes yields the same result. How do I make it release the space, or display it correctly (as I don't really know which is the case)?
I'm not super familiar with the internals of Docker for Windows, but fairly recently it worked by having a small virtual machine with a virtual disk image. The reclaimed disk space is inside that virtual disk image, but the "file" for that image will still remain the same size on your physical disk. If you want to reclaim the physical disk space, there should be a "Reset Docker" button somewhere in the Docker for Windows control panel, which will essentially delete that disk image and create a new, empty one.
I recently updated my Docker environment to run on WSL 2 on Windows.
For setting memory allocation limits on containers in previous versions, I had an option in the Docker Desktop GUI under Settings -> Resources -> Advanced to adjust the memory and CPU allocation.
After WSL 2 integration, I am not able to find that option.
I assume I should run everything through my Linux distro from now on, so this is the solution I was able to find:
docker run -d -p 8081:80 --memory="256m" image_name
I don't want to have to set a flag every time I run a container. Is there a way to set the memory limit permanently?
The Memory and CPU settings were removed for WSL2 integration. However, starting in Windows Build 18945, there is a workaround to limit WSL2 memory usage.
Create a %UserProfile%\.wslconfig file for configuring WSL2 settings:
[wsl2]
memory=6GB # Any size you feel like (must be an integer!)
swap=0
localhostForwarding=true
Run Get-Service LxssManager | Restart-Service in an admin PowerShell (or reboot) and verify that the vmmem usage in Task Manager drops off.
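You can also confirm the new limit from inside the distro; free ships with Ubuntu and should report roughly the configured amount:
free -h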
For the complete list of settings, please visit Advanced settings configuration in WSL.
I just created the %UserProfile%\.wslconfig file with these two lines and left everything else untouched. It worked fine.
[wsl2]
memory=8GB
I did a full shutdown right after adding the file for WSL to pick up the new settings.
$ wsl --shutdown
See additional information from Microsoft here: Advanced settings configuration in WSL
After reading about the performance improvements when running Docker on WSL 2, I had been waiting for the official release of Windows 10 that supports WSL 2.
I updated Windows and Docker and switched on the Docker flag to use WSL 2, hoping for a performance boost for my Oracle database running in a Docker container. Unfortunately, the change slowed down the container and my laptop dramatically.
The performance of the container is about 10x slower, and my laptop is pretty much stuck when starting the container.
It seems as if the memory consumption completely uses up my 8 GB and heavy memory swapping starts to take place.
Is there anything I can do to improve the performance of Docker on wsl2 or at least to better understand what's wrong in my setup?
My environment:
Processor Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz, 2 Core(s)
Installed Physical Memory (RAM) 8.00 GB
Microsoft Windows 10 Pro Version 10.0.19041 Build 19041
Docker version 19.03.8, build afacb8b
This comes from the "vmmem" process, which consumes as much of your resources as it can.
To solve the problem, go to your user profile folder,
for me:
C:\Users\userName
In this directory create a file named ".wslconfig", in which you configure how many resources WSL 2 may consume:
[wsl2]
memory=900MB #Limits VM memory in WSL 2 to 900MB
processors=1 #Makes the WSL 2 VM use one virtual processor
Now close Docker and wait for "vmmem" to disappear from the Task Manager.
Then you can restart Docker, and normally "vmmem" will not exceed the limit you have set (here 900 MB).
If that doesn't work, restart your computer (or force the WSL VM down as shown below).
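If vmmem refuses to exit on its own, shutting WSL down from an admin prompt forces the VM to stop:
wsl --shutdown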
I hope it helped you.
You probably have your code stored on the Windows machine in a folder similar to this...
C:\Users\YourName\projects\blahfu
But you are using Docker on WSL 2, which is a different (Linux) filesystem. So when you do a Docker build, all of the code/context gets copied from the Windows filesystem to the Linux filesystem, and from there to the Docker container. This is what takes the most time and is incredibly slow.
Try to put your project into a folder like this...
/home/YouName/projects/blahfu
You should get quite a performance boost.
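If the project currently lives on the Windows side, you can copy it over from inside the distro via the /mnt/c mount (the paths are the examples from above):
cp -r /mnt/c/Users/YourName/projects/blahfu ~/projects/blahfu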
WSL distributions have their own filesystem, isolated from the Windows filesystem.
The basic idea is to copy your source code from the Windows filesystem to the WSL filesystem.
From Windows you can access the WSL filesystem and copy your project into a WSL distribution:
navigate with Explorer to \\wsl$
Rebuild the container from this location; this will do the trick!
If the data for the actual Docker container is stored on a Windows filesystem (i.e. NTFS) instead of on a native Linux filesystem (regardless of what the container's contents are, which are likely already Linux-based), then I think you are going to see slow performance, because you're running WSL and using the Docker container from a mounted Windows filesystem (i.e. /mnt/c/...).
If you copy your Docker container to something like /usr/local or /home/<user>/docker on WSL, then you may see a 10x performance INCREASE. Try that and see if it works?
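To check which filesystem a directory sits on from inside WSL, df can print the type; Windows mounts under /mnt typically show up as 9p:
df -Th .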
You need to limit the "vmmem" resources.
Just add a .wslconfig file at this path:
C:\Users\<yourUserName>\.wslconfig
Configure global options with .wslconfig
Available in Windows Build 19041 and later
You can configure global WSL options by placing a .wslconfig file into the root directory of your user folder: C:\Users\<yourUserName>\.wslconfig. Many of these settings are related to WSL 2; please keep in mind you may need to run
wsl --shutdown
to shut down the WSL 2 VM and then restart your WSL instance for these changes to take effect.
Here is a sample .wslconfig file:
[wsl2]
kernel=C:\\temp\\myCustomKernel
memory=4GB # Limits VM memory in WSL 2 to 4 GB
processors=2 # Makes the WSL 2 VM use two virtual processors
See https://learn.microsoft.com/en-us/windows/wsl/wsl-config for details.
Open your WSL 2 distribution (Ubuntu for example) and edit the ~/.docker/config.json file.
You only need to change:
{
"credsStore": "docker.exe"
}
"credsStore": "desktop.exe" : ultra-slow (over 2 minutes)
"credsStore": "wincred.exe" : fast
"credsStore": "" : fast
It works very well.
If you are using VS Code, there is a command named "Remote-Containers: Clone Repository in Container Volume..." which assures you have full speed file access.
From the documentation:
Repository Containers use isolated, local Docker volumes instead of binding to the local filesystem. In addition to not polluting your file tree, local volumes have the added benefit of improved performance on Windows and macOS.
As mentioned by Claudio above, setting the lines below in the ~/.docker/config.json of the WSL Ubuntu server solved the problem for me.
{
"credsStore": "wincred.exe"
}
Earlier it took 5-10 minutes to build even a simple image; now it is done in 1-2 seconds.
Downside: you have to make this change every time you open the server. I have tried every solution mentioned in https://github.com/docker/for-win/issues/9843 to solve this, but nothing works for me.
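One possible workaround (my own sketch, not from the linked issue) is to rewrite the file on every login, e.g. from ~/.profile; note this clobbers any other keys you keep in config.json:
echo '{ "credsStore": "wincred.exe" }' > ~/.docker/config.json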
When I am trying to build a Docker image, I get an out-of-disk-space error, and after investigating I find the following:
df -h
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/vda1   4G    3.8G  0      100%  /
How do I fix this out of space error?
docker system prune
https://docs.docker.com/engine/reference/commandline/system_prune/
This will clean up stopped containers, dangling images, unused networks, and the build cache (add --volumes to also remove unused volumes). We generally try to clean up old images when creating a new one, but you could also run this as a scheduled task on your Docker server every day.
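For the scheduled variant, a crontab entry like this (assuming cron is available and docker is on the cron PATH) prunes non-interactively every night at midnight; -f skips the confirmation prompt:
0 0 * * * docker system prune -f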
Use the command docker system prune -a.
This will clean up the total reclaimable size for images, networks, and the build cache; it removes all image-related reclaimable space not associated with any running container.
Run the docker system df command to view the reclaimable space.
If some reclaimable space remains and the command above does not work on the first go, run the same command again; the space should then be cleaned up.
I have been experiencing this behavior almost on a daily basis.
I plan to report this bug to the Docker community, but before that I want to reproduce it with the new release to see whether it has already been fixed in the latest one.
Open the Docker settings -> Resources -> Advanced and increase the amount of hard drive space Docker may use under "Disk image size".
If you are using Linux, then most probably Docker is filling up the directory /var/lib/docker/containers, because it writes container logs to a <CONTAINER_ID>-json.log file under this directory. You can use the command cat /dev/null > <CONTAINER_ID>-json.log to clear this file, or you can set the maximum log file size by editing /etc/sysconfig/docker. More information can be found in this RedHat documentation. In my case, I have created a crontab to clear the contents of the file every day at midnight. Hope this helps!
NB:
You can list the Docker containers with their full IDs using the following command
sudo docker ps --no-trunc
You can check the size of a container's log file using the command
du -sh $(docker inspect --format='{{.LogPath}}' CONTAINER_ID_FOUND_IN_LAST_STEP)
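For reference, the midnight crontab mentioned above might look like this when run from root's crontab (the glob clears every container's json log; truncating keeps the file handle Docker holds valid):
0 0 * * * truncate -s 0 /var/lib/docker/containers/*/*-json.log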
Nothing else worked for me. I changed the disk image max size in the Docker settings, and just after that a huge amount of space was freed.
Going to leave this here since I couldn't find the answer.
Go to the Docker GUI -> Preferences -> Reset -> Uninstall
Completely uninstall Docker.
Then install it fresh using this link
My Docker was using 20 GB of space when building an image; after the fresh install, it uses 3-4 GB max. Definitely helps!
Also, if you are using a MacBook, have a look at ~/Library/Containers/docker*
This folder was 60 GB for me and was eating up all the space on my Mac! Even though this may not be relevant to the question, I believe it is vital to leave this here.