I'm using Docker Desktop (4.x) on Windows 10 Pro. We are building Windows applications and using Windows containers.
On our setup, the folder C:\ProgramData\Docker (images/windowsfilter/tmp and the like) can grow to hundreds of GB, and I need to move this folder to an alternative location.
Again, I am using WINDOWS CONTAINERS (I do not care about WSL2- or Hyper-V-specific solutions).
I tried moving the folder and creating a junction from C:\ProgramData\Docker => D:\DockerData, but the Windows containers backend does not start.
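Specifically, the move and junction were done roughly like this, from an elevated Command Prompt with Docker Desktop fully stopped (the paths are the ones above):

    rem move the existing data, then junction the old path to the new location
    robocopy C:\ProgramData\Docker D:\DockerData /E /MOVE
    mklink /J C:\ProgramData\Docker D:\DockerData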
If I switch back to Linux containers, everything works fine (and I know how to move the WSL2 vhdx, if needed, but again, I DO NOT NEED THAT information).
Moving the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\ProgramData location BEFORE installing Docker Desktop works, but it is not an acceptable solution.
I tried configuring the data-root directory in %USERPROFILE%\.docker\windows-daemon.json, but it does not work either; the Windows containers backend does not start.
Please give me a reliable way to move the C:\ProgramData\Docker folder to another location.
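For reference, the data-root setting I tried looks like this (same D:\DockerData target; JSON needs the backslashes escaped):

    {
      "data-root": "D:\\DockerData"
    }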
Unfortunately, when using Windows containers, it is not currently feasible to relocate the C:\ProgramData\Docker folder to another location: the Docker for Windows service is effectively hard-coded to use this directory for container images and other data.
As a workaround, you might try using a symbolic link to redirect C:\ProgramData\Docker to another location. This may not be a reliable approach, though: the Docker for Windows service might not handle the symbolic link correctly, which would prevent the service from starting.
Related
I'm completely new to Docker and DDEV and have just started learning. My main purpose in using Docker and DDEV is to work on my CMS projects. However, I noticed that by default Docker gets installed on the C drive (in my case it's almost full). Therefore, I want to learn how to create my projects on the D drive using DDEV.
For example I would like to have them organized in one single folder like
D:\Myprojects\Drupalsites\Mysite1
Something like that.
How do I do that?
Your problem isn't DDEV and your projects, it's Docker using up your space, at least as far as I understand your question.
So what you really want to do is move your Docker WSL2 data distro over to the new drive. As far as I know, Docker and WSL2 don't provide a simple way to do this, but these two links will tell you how to do it.
https://dev.to/kimcuonthenet/move-docker-desktop-data-distro-out-of-system-drive-4cg2
How can I change the location of docker images when using Docker Desktop on WSL2 with Windows 10 Home?
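In outline, both links describe the same procedure: export the docker-desktop-data distro, unregister it, and re-import it on the other drive (the D:\ paths below are examples):

    wsl --shutdown
    wsl --export docker-desktop-data D:\docker-desktop-data.tar
    wsl --unregister docker-desktop-data
    wsl --import docker-desktop-data D:\wsl\docker-desktop-data D:\docker-desktop-data.tar --version 2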
I have not tried this.
One note though: most people these days are doing the recommended thing and running DDEV inside WSL2 (in /home/<youruser>). If you already have trouble with disk space in WSL2, though, you're going to have trouble with that as well. Still, WSL2 should be your future; see https://ddev.readthedocs.io/en/stable/#installation-or-upgrade-windows-wsl2
Is it possible to run code someFile.js from inside a docker container, and have it open in VS Code?
Why do I want to do this? Because vue dev tools allows you to open a vue component from within the browser. This is especially helpful for new devs that want to quickly track down components and open them in the editor.
Unfortunately, since my dev server is running inside a docker container, this functionality doesn't work, because the editor is opened from within the dev server.
Might be worth noting, I'm using Visual Studio Code Remote - Containers.
So to narrow the question further:
How can I launch VS Code from a docker container, so that vue dev tools can open that file in my local editor?
Yes, if you don't mind running your vue tools inside the docker container as well. You have to set up a .devcontainer.json file specifying the Dockerfile, image, or docker-compose file to use to build the container. It will create the container for you and automatically mount your project directory by default, but there are a lot of alternative configuration options as well.
This means you'd open VS Code and basically your whole IDE would be in the docker container. You could call vue tools from the VS Code terminal, including calls to code.
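A minimal .devcontainer.json might look like this (the image and port are illustrative, not specific to your project):

    {
      "name": "vue-dev",
      "image": "node:18",
      "forwardPorts": [8080]
    }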
I've been doing this with some tensorflow stuff for the last 6 weeks or so. It was a little confusing at first, but now I really like it.
One challenge I've encountered so far is that if you are deploying your image as a deliverable, using a container as a dev environment can cause some dev-tool creep in the image (like including dev tools in your Dockerfile that you need in development but don't want in the deployed image). There are probably good ways to deal with this, but I haven't explored them all yet.
Another note: I can't seem to find the docs, but I think the recommended way is to use WSL2-backed docker and then do all your docker mounting and docker client invocations from the WSL2 filesystem instead of from Windows. I guess since WSL2 and docker share the same VM, bind mounts are faster between WSL2 and Docker than between Windows and Docker. This has worked well for me so far...
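For example, invoking docker from a WSL2 shell with a Linux-side path (the project path, image, and script are illustrative):

    # run from a WSL2 shell, not PowerShell, so the bind mount stays on the Linux side
    docker run --rm -v ~/projects/my-app:/app -w /app python:3.11 python train.py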
I've managed to adapt this dockerized version of VS Code to our restrictive runtime environment (OpenShift), although it does assume an internet connection, so extensions and the IntelliSense ML model had to be preinstalled:
https://hub.docker.com/r/codercom/code-server
When my Docker containers start, I receive a notification that reads:
Docker Desktop has detected that you shared a Windows file into a WSL 2 container, which may perform poorly. Click here for more details.
My questions are:
What does this mean?
What is the better practice / how should this be avoided?
If the message has been closed, or I've clicked "Don't show again", how can I get to the details of this warning?
I am happy to share the Dockerfile or docker-compose setup if needed, but I simply cannot find anything, either here on SO or via a Google search, that points me in any direction, so I'm not sure where to start. I'm assuming the issue lies in the Dockerfile, since that is where we are running COPY to move some files around.
Docker Version: Docker Desktop 2.4.0.0 (48506) Community
Operating System: Windows 10 Pro (version 10.0.19041)
This warning means that accessing files on the Windows host file system from a Linux container performs more slowly than accessing files that are already in a Linux filesystem. Accessing Windows files from the Linux container is like accessing files on a remote file share.
Docker and Microsoft recommend avoiding this by storing your source files in a WSL2 distro's file system (which you can bind mount to the container) or building your container image to include all the files needed rather than storing your files in the Windows file system.
If you've clicked "Don't show again", you can get to the details of this message by going to Develop with Docker and WSL 2.
For more information, Docker for Windows Best Practices says:
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
Microsoft's Comparing WSL 1 and WSL 2 article has a whole section on Performance across OS file systems, and its opening paragraph says:
We recommend against working across operating systems with your files, unless you have a specific reason for doing so. For the fastest performance speed, store your files in the WSL file system if you are working in a Linux command line (Ubuntu, OpenSUSE, etc). If you're working in a Windows command line (PowerShell, Command Prompt), store your files in the Windows file system.
Also, the Docker blog article Docker Desktop: WSL 2 Best practices has an "Awesome mounts performance" section that says:
Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.
A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.
All of that advice is great if you want your primary development workflow to be in Linux. Docker wants you to go "all in" on Linux containers. But if you work primarily in Windows and just want to use a Linux container for a specialized task, then it's fine to click "Don't show again". As Microsoft said, "If you're working in a Windows command line, store your files in the Windows file system."
I run with my main development folder in Windows, and I bind mount it to a Linux container that's just used to execute unit tests. So my full build runs in Windows, then I run all my unit tests in Windows, and I finish by running all my unit tests in a Linux container too. Having Linux bind mount to my Windows folder works fast and great for this scenario where the "dotnet test" call in Linux is just loading and executing the required DLLs from my Windows volume.
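Concretely, that last step looks something like this (the host path and SDK image tag are illustrative):

    # from PowerShell on Windows; Docker Desktop handles the Windows-style path
    docker run --rm -v C:\src\MyApp:/app -w /app mcr.microsoft.com/dotnet/sdk:6.0 dotnet test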
This setup may sound like heresy to those that believe containers must be used everywhere, but I love containers for application deployment. I'm not convinced that you need to go all in and do all your development inside a container too. I'm happy with Windows (and VS 2019) as my development environment, and then I use Linux containers for application testing and deployment. So the Windows/WSL2 file system performance hit is a minimal impact to me.
Motivation
As of now, we are using five docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as it is the preferred platform for many developers.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but they do seem to require us to have mirrored files. We do also use WSL.
As @FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and docker inside a Linux VM on Windows, and I get a 96% time saving in things like running up a server and watching code for changes, making this setup my preferred way now.
The standardisation of devcontainer.json, and being able to use GitHub Codespaces if you're away from your normal dev machine, make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good, because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
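As a sketch, a devcontainer.json using bind mounts from the docker host might look like this (the image, paths, and Redis data location are all illustrative):

    {
      "image": "ubuntu:22.04",
      "workspaceMount": "source=/srv/src/myapp,target=/workspace,type=bind",
      "workspaceFolder": "/workspace",
      "mounts": [
        "source=/srv/data/redis,target=/data,type=bind"
      ]
    }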
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
I want to ask: if I have one folder that contains the application server (Axis2, Tomcat, WSO2, MongoDB, and a JMS consumer), can that be used as a container?
Is Docker an application installer, one that packages the entire application so that a single installer file can be used, for example server.exe for Windows or server.deb for Ubuntu?
Could you help explain it?
Docker as an application installer?
No, docker is a platform which manages containers (isolated user/process/disk machines running with the host kernel), around building, shipping and running them (Containers as a Service).
The best practice is to isolate each part of your global service in its own container, both because of the PID1 zombie reaping issue (detailed in "Use of Supervisor in docker") and in terms of ease of management and updates.
If each container only holds one component (a Tomcat, a MongoDB, ...), each one is easier to manage and debug than one giant container.
Also, you can stop/update one without necessarily impacting all the other ones.
The installation-like part is rather the description of your environment (both in terms of the OS and of the applications you want to add to a container) with the Dockerfile: a description of what your environment will need to run.
That helps build an image (a sort of archive of all the files you need), from which you docker run a container.
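A minimal Dockerfile for one such component might look like this (the Tomcat tag and WAR file name are illustrative):

    # one image per component; here, a Tomcat web application
    FROM tomcat:9
    COPY ./mywebapp.war /usr/local/tomcat/webapps/

Then docker build -t mywebapp . produces the image, and docker run -p 8080:8080 mywebapp starts a container from it.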
Right now, those containers only run as Linux machines on Linux kernel hosts (or on Windows, through a Linux VM).
You don't yet have pure Windows images/containers that run on Windows (it is in progress, with Windows Server 2016).
So can you just take what you have in one giant folder and put it in a docker container?
Not directly. The goal of a Dockerfile is to describe how you would install what you need.
Then you docker build, and from the image you get, you docker run.
But in order for docker to manage the lifecycle of that container correctly, it is best if the container is limited to one process (instead of trying to run everything, like a webapp server, a MongoDB, and so on, in the same container space).
That means:
describing each component of your system in a separate Dockerfile (building separate images)
running those containers so that they can see and communicate with each other, as in the compose sketch below.
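For instance, a docker-compose file can express both points; the service names and images here are illustrative:

    # docker-compose.yml: one container per component, on a shared default network
    services:
      webapp:
        build: ./webapp
        ports:
          - "8080:8080"
      mongodb:
        image: mongo:6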
You have an example of a complex multi-component system in my project: b2d.