This might seem a stupid question, but here I am:
I'm running Ubuntu 16.04 and managed to install Windows 10 as a dual boot.
Having run Docker exclusively on Linux so far, I decided to give it a try on Windows 10.
As I have already downloaded several Docker images on my Linux system, I would like to have a "shared" development environment. I must admit it would be a waste of time and disk space to re-download, on my fresh Windows install, Docker images I already downloaded before (on Linux).
So my question is simple: can I use my Linux images/containers on Windows? I'm thinking of something like a global path variable, configured in Docker on Windows, pointing to my Linux images.
Any idea if this is possible, and if yes, the pros, cons and caveats?
Thanks for helping me on this one.
Well, I would suggest creating a local registry, pushing these images to it, and then pulling them from Docker on Windows.
Sonatype Nexus (an artifact storage repository) can also be used to store your Docker images. Check if this helps.
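As a minimal sketch of that workflow (the address <registry-host> is a placeholder; if the registry is served over plain HTTP you may also need to add it to Docker's insecure-registries setting):
# run a throwaway local registry on the Linux side
docker run -d -p 5000:5000 --name registry registry:2
# tag and push one of the existing images into it
docker tag ubuntu:16.04 <registry-host>:5000/ubuntu:16.04
docker push <registry-host>:5000/ubuntu:16.04
# then, from Docker on Windows:
docker pull <registry-host>:5000/ubuntu:16.04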
I guess it's not possible to share the same folder (to reduce disk usage) since the stored files are totally different:
Under Windows the file is:
C:\Users\Public\Documents\Hyper-V\Virtual hard disks\MobyLinuxVM.vhdx
The .vhdx extension is specific to Microsoft systems.
and under Linux it consists of 2 files:
/var/lib/docker/devicemapper/devicemapper/data
/var/lib/docker/devicemapper/devicemapper/metadata
see here for details
Where are Docker images stored on the host machine?
The underlying technology relies on a filesystem layout specific to Docker. Even if both used the same filesystem storage, sharing it wouldn't be a good idea imho.
If the purpose is only to save time when reinstalling, just dump the list of images from one system and re-pull them on the other one:
docker images --format "{{.Repository}}:{{.Tag}}" > image-list.txt
then loop over the list on the other OS:
while read p; do
  docker pull "$p"
done < image-list.txt
After having read about the performance improvements when running Docker on WSL 2, I have been waiting for the official release of Windows 10 that supports WSL 2.
I updated Windows and Docker and switched on the Docker flag to use WSL 2, hoping for some performance boost for my Oracle Database running in a Docker container. Unfortunately, the change slowed down the container and my laptop dramatically.
The performance of the container is about 10x slower and my laptop is pretty much stuck when starting the container.
It seems as if the memory consumption completely uses up my 8 GB and heavy memory swapping starts to take place.
Is there anything I can do to improve the performance of Docker on wsl2 or at least to better understand what's wrong in my setup?
My environment:
Processor Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz, 2 Core(s)
Installed Physical Memory (RAM) 8.00 GB
Microsoft Windows 10 Pro Version 10.0.19041 Build 19041
Docker version 19.03.8, build afacb8b
This comes from the "vmmem" process, which consumes as many resources as it can.
To solve the problem, just go to your user folder, for me:
C:\Users\userName
In this directory, create a file named ".wslconfig" in which you configure how many resources WSL 2 can consume:
[wsl2]
memory=900MB # Limits VM memory in WSL 2 to 900MB
processors=1 # Makes the WSL 2 VM use one virtual processor
Now close Docker and wait for "vmmem" to disappear from the Task Manager.
Then you can restart Docker, and normally "vmmem" will not exceed the limit you have set (here 900 MB).
If that doesn't work, restart your computer.
I hope this helps.
You probably have your code stored on the Windows machine in a folder similar to this...
C:\Users\YourName\projects\blahfu
But you are using Docker on WSL 2 which is a different (Linux) filesystem. So, when you do a Docker build all of the code/context gets copied from the Windows filesystem to Linux filesystem and then from there to the Docker container. This is what takes the most time and is incredibly slow.
Try to put your project into a folder like this...
/home/YouName/projects/blahfu
You should get quite a performance boost.
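For example, a rough sketch with placeholder paths, assuming the Windows drive is mounted under /mnt/c inside WSL 2:
# copy the project from the Windows drive into the WSL 2 (ext4) filesystem
mkdir -p ~/projects
cp -r /mnt/c/Users/YourName/projects/blahfu ~/projects/
# build from the Linux-side copy so the build context never leaves the Linux filesystem
cd ~/projects/blahfu
docker build -t blahfu .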
The WSL containers have their own filesystem, isolated from the Windows filesystem.
The basic idea is to copy your source code from the Windows filesystem to the WSL filesystem.
From Windows you can access the WSL filesystem and copy your project into it:
navigate with Explorer to \\wsl$
Rebuild the container from this location; this will do the trick!
If the data for the actual Docker container is stored on a Windows filesystem (i.e. NTFS) instead of on a native Linux filesystem (regardless of what the Docker container contents are, which are likely already Linux-based), then I think you are going to see slow performance, because you're running WSL and using the Docker container from a mounted Windows filesystem (i.e. /mnt/c/...).
If you copy your Docker container to something like /usr/local or /home/<user>/docker on WSL, then you may see a 10x performance INCREASE. Try that and see if it works?
You need to limit the resources of "vmmem".
Just add a .wslconfig file at this path:
C:\Users\<yourUserName>\.wslconfig
Configure global options with .wslconfig
Available in Windows Build 19041 and later
You can configure global WSL options by placing a .wslconfig file into the root directory of your users folder: C:\Users\<yourUserName>\.wslconfig. Many of these settings are related to WSL 2; please keep in mind you may need to run
wsl --shutdown
to shut down the WSL 2 VM and then restart your WSL instance for these changes to take effect.
Here is a sample .wslconfig file:
[wsl2]
kernel=C:\\temp\\myCustomKernel
memory=4GB # Limits VM memory in WSL 2 to 4 GB
processors=2 # Makes the WSL 2 VM use two virtual processors
see this https://learn.microsoft.com/en-us/windows/wsl/wsl-config
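As a rough sketch of how to apply and verify the change (the first command runs in PowerShell/cmd on the Windows side, the others inside the WSL distribution):
wsl --shutdown
# reopen the distribution, then check that the limits took effect
free -h     # total memory should match the "memory" value from .wslconfig
nproc       # should match the "processors" value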
Open your WSL 2 distribution (Ubuntu for example) and edit the ~/.docker/config.json file.
You only need to change:
{
"credsStore": "docker.exe"
}
"credsStore": "desktop.exe" : ultra-slow (over 2 minutes)
"credsStore": "wincred.exe" : fast
"credsStore": "" : fast
It works very well.
If you are using VS Code, there is a command named "Remote-Containers: Clone Repository in Container Volume..." which assures you have full speed file access.
From the documentation:
Repository Containers use isolated, local Docker volumes instead of binding to the local filesystem. In addition to not polluting your file tree, local volumes have the added benefit of improved performance on Windows and macOS.
As mentioned by Claudio above, setting the lines below in ~/.docker/config.json of the WSL Ubuntu server solved the problem for me.
{
"credsStore": "wincred.exe"
}
Earlier it was taking 5-10 min to build any simple image, now it is done in 1-2 seconds.
Downside: You have to make this change every time you open the server. I have tried every solution mentioned in https://github.com/docker/for-win/issues/9843 to solve this but nothing works for me.
Our build setup is baked into a large Docker container (basically a 2 GB image coming with a complete x86 Linux in itself).
We have two ways to actually build: the official approach is a Jenkins environment (running on x86 hardware). But we also have a little "side x86 server" running RH 7. Developers can log into that RH server and kick off specific builds (using said Docker images) themselves.
Those RH servers will be shut down at some point, to be replaced with IBM Power8 machines (running RH 7 Little Endian for Power).
I am simply wondering: is there a chance that our existing build setup and Docker images simply work on Power8? Or are there fundamental technical issues that make it unlikely and not even worth trying?
You can probably use your existing build methodology and scripts close to unchanged, but you'll need to rebuild the actual images.
You can't directly run x86 binaries on Power (at a very low level, the bytes of machine code are just different). Docker doesn't contain any sort of virtualization layer; it does a bunch of setup to isolate the container from the host, but then runs the binaries in an image directly.
If your Jenkins setup has enough parameters for image names and version tags, then you should be able to run the x86 and Power setups side-by-side; you need to encode the architecture somewhere in the built image name or tag; for instance, repo.example.com/app/build:20180904-power. (I don't know that one or the other is considered better if you control all of the machinery.) If you have a private repo, you could encode it earlier in the path, winding up with image names like repo.example.com/power/build:20180904.
You'd need to double-check that everywhere that has a Docker image reference has it correctly parameterized (which is a good practice anyways). That would include any direct docker run commands; any Docker Compose or Kubernetes YAML files or similar artifacts; and the FROM line of any Dockerfiles.
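As a hedged sketch of that parameterization (the build arg name BASE_IMAGE is a made-up example; the Dockerfile would declare ARG BASE_IMAGE before its FROM ${BASE_IMAGE} line):
# build the Power variant of the image on a ppc64le host,
# encoding the architecture in the tag as suggested above
docker build \
  --build-arg BASE_IMAGE=ppc64le/ubuntu:18.04 \
  -t repo.example.com/app/build:20180904-power .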
Existing build setup? Not sure!
Docker images? NO, don’t even try.
Docker images are actually multiple layers which are stored on the filesystem through the corresponding storage driver and backing filesystem (shown in the output of docker info).
If the storage driver/backing filesystem has changed, which is likely to be true when the OS changes, older Docker images will no longer be valid, meaning they must be rebuilt for sure.
If I have two Docker containers running on the same host machine do they each have their own page cache or do they use the page cache of the host machine?
The page cache is managed by the kernel and used by all the containers.
See more at moby/moby issue 21759
Docker makes it easy to spawn a lot of containers and get better density, but it also makes it easy to run too many services on one machine or to run services which require way too much RAM.
The official documentation lists devicemapper (direct-lvm) as a production-ready storage driver, but it doesn't have very efficient memory usage (and the official documentation doesn't state otherwise). Multiple identical containers will increase memory usage for the page cache.
In order to make this better and get better performance, the following should help, in a similar way to how it helps outside of Docker and containers in general:
make containers smaller for long running services & applications (e.g. smaller binaries, smaller images, optimize memory usage, etc)
VERY IMPORTANT: use volumes and bind mounts, instead of storing data inside the container (see the sketch after this list)
VERY IMPORTANT: make sure to run a system with a maintained kernel, up to date Docker and devicemapper libraries (e.g. fully updated CentOS 7 / RHEL 7 / Ubuntu 14.04 / Ubuntu 16.04)
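To illustrate the point above about volumes and bind mounts, here is a minimal sketch (the image name my-app-image and the paths are placeholders):
# keep persistent data in a named volume and configuration in a read-only bind mount,
# instead of writing into the container's own filesystem layer
docker volume create app-data
docker run -d \
  --name app \
  -v app-data:/var/lib/app \
  -v /srv/config:/etc/app:ro \
  my-app-image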
Current behaviour (January 2020) is that by default containers on the same host share the same page cache.
Current docker documentation explains:
OverlayFS is a modern union filesystem that is similar to AUFS, but faster and with a simpler implementation. Docker provides two storage drivers for OverlayFS: the original overlay, and the newer and more stable overlay2.
The overlay2 driver is supported on Docker Engine - Community, and Docker EE 17.06.02-ee5 and up, and is the recommended storage driver.
Page Caching. OverlayFS supports page cache sharing. Multiple containers accessing the same file share a single page cache entry for that file. This makes the overlay and overlay2 drivers efficient with memory and a good option for high-density use cases such as PaaS
https://docs.docker.com/storage/storagedriver/overlayfs-driver/
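A quick way to check which storage driver your own daemon is using (on a reasonably recent default install this should print overlay2):
docker info --format '{{.Driver}}'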
I can't understand one thing. I read about images and the AUFS file system and I think that I got it. However, when I look at the ISO file on the Ubuntu site it is meaningfully more than 100 MB. Where is the key difference? In the graphical environment (e.g. KDE)?
Docker images are minimal, meaning they contain only a small number of (needed) libraries. They don't include a kernel, because containers use the Docker host's kernel.
You can download and inspect the official Ubuntu cloud image (the source of library/ubuntu yakkety) from here.
Another thing to note: Base images usually don't include window managers and desktop environments.
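If you want to check this yourself, a quick look at the image and its layers (sizes will vary with the tag you pull):
docker pull ubuntu:16.04
docker images ubuntu:16.04    # shows the size of the image on disk
docker history ubuntu:16.04   # lists the layers and what each one adds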
I am totally new to Docker and have only, so far, used the images available on Docker repos.
I have already tested and been using Docker for some aspects of my daily activities and it works great for me, but in some specific cases I need a "virtual" image of Linux with graphics support (X in Ubuntu or CentOS), and so far I have only encountered images on Docker repos that by default don't have X support.
Should I in this case use a standard Virtual Box or VMWare image? Is it possible to run a visual version of Linux in a docker container? I haven't tried it yet.
If you run your containers in privileged mode they can access the host's resources (and anything else for that matter), so in essence it is possible, though I'd be willing to bet it turns out to be more trouble than it's worth, because the containers won't be quite as portable as ones that don't require such outside resources.
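As an illustration of the kind of host access involved, a commonly used sketch on a Linux host is to share the X socket with the container (the image name some-gui-image is a placeholder for an image that actually has an X application installed):
# allow local (non-network) clients to connect to the X server
xhost +local:
# run the container with access to the host's display
docker run --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  some-gui-image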