How to set the shared drives in Docker for Windows? - docker

How do I set shared drives in Docker for Windows? I am using the latest version (18), both Stable and Edge. My settings screen is shown below. It's missing some options such as Shared Drives, Advanced, and Network, which appear in the second image. Why am I missing these options?
My settings:
Screen from a website:

It seems you are running Docker for Windows with Windows containers. If you switch to Linux containers you'll see the "Shared Drives" option. Take a look at this video.
According to the Docker documentation, shared drives are not implemented for Windows containers:
Volume mounting requires shared drives for Linux containers (not for Windows containers).
Update:
Since 2018, Docker Desktop has been using a new UI. I recorded a new video showing how to solve this problem.
Update:
If you are using WSL 2 you will run into the same problem. Take a look at this video.
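If you prefer the command line to the whale menu, the same switch can also be made with DockerCli.exe; this is a sketch, assuming the default install path:
& "C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon   # toggles the daemon between Windows and Linux containers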

In the newer UI these options are placed under Resources.

I ended up here because "Shared Drives" was missing from my Docker settings. If you are missing it too, but Docker is set to Linux containers, then the reason is WSL 2.
If you are using Docker on WSL 2 there is no such option; instead you can attach volumes directly from the filesystem with docker run -v c:\...\your-folder:/mount ... without specifying anything in the Docker settings.
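For example, a sketch with a hypothetical Windows path, mount point and image:
docker run -it -v C:\Users\me\my-project:/mount/my-project ubuntu bash   # /mount/my-project then shows the Windows folder inside the container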

Related

Is it possible to mount a docker container in VS Code?

I saw it somewhere but I can't remember where. It looked like browsing inside Linux, with the Docker container mounted in VS Code on Mac OS.
Is it possible to mount a Docker container in VS Code?
By "mount", I presume you're talking about opening VSCode inside the container instead of Docker volumes and bind mounts. You can install the Remote Containers extension to do this.
Also see the docs mentioned by #AttilaViniczai for how to create development containers/dev contianers for VSCode: https://code.visualstudio.com/docs/remote/create-dev-container
Install the VSCode extension
Select the Remote Explorer tab on the left side of VSCode and make sure you have Containers selected in the drop-down menu.
Double-click the container you want to work on, or right-click it and select Attach to Container. This will attach VSCode to that Docker container and automatically install the required tools inside it.
For more information about this, you can see the VSCode documentation here and a tutorial here.
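Attaching through the extension is roughly the GUI equivalent of opening a shell in the running container yourself, which you can also do from a terminal (the container name below is hypothetical):
docker ps                            # find the name or ID of the running container
docker exec -it my-container bash    # use sh instead if the image has no bash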

Docker Desktop - Filesharing notification about poor performance

When my Docker containers start, I receive the following notification:
Docker Desktop has detected that you shared a Windows file into a WSL 2 container, which may perform poorly. Click here for more details.
My questions are:
What does this mean?
What is the better practice / how should this be avoided?
If the message has been closed, or I've clicked "Don't show again", how can I get to the details of this warning?
I am happy to share the Dockerfile or Docker Compose setup if needed, but I simply cannot find anything, either here on SO or via a Google search, that points me in any direction, so I'm not sure where to start. I'm assuming the issue lies in the Dockerfile, since that is where we run COPY to move some files around.
Docker Version: Docker Desktop 2.4.0.0 (48506) Community
Operating System: Windows 10 Pro (version 10.0.19041)
This notification means that accessing files on the Windows host file system from a Linux container will be somewhat slower than accessing files that are already in a Linux filesystem. Accessing Windows files from the Linux container performs like accessing files on a remote file share.
Docker and Microsoft recommend avoiding this either by storing your source files in a WSL 2 distro's file system (which you can bind mount into the container) or by building your container image to include all the files it needs, rather than keeping the files on the Windows file system.
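For example, a sketch run from inside your WSL 2 distro's shell, with hypothetical paths and image name:
cp -r /mnt/c/Users/me/my-project ~/my-project    # one-time copy into the Linux filesystem
docker run -v ~/my-project:/sources my-image     # bind mount the Linux copy, not the /mnt/c path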
If you've clicked "Don't show again", you can get to the details of this message by going to Develop with Docker and WSL 2.
For more information, Docker for Windows Best Practices says:
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
Microsoft's Comparing WSL 1 and WSL 2 article has a whole section on Performance across OS file systems, and its opening paragraph says:
We recommend against working across operating systems with your files, unless you have a specific reason for doing so. For the fastest performance speed, store your files in the WSL file system if you are working in a Linux command line (Ubuntu, OpenSUSE, etc). If you're working in a Windows command line (PowerShell, Command Prompt), store your files in the Windows file system.
Also, the Docker blog article Docker Desktop: WSL 2 Best practices has an "Awesome mounts performance" section that says:
Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.
A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.
All of that advice is great if you want your primary development workflow to be in Linux. Docker wants you to go "all in" on Linux containers. But if you work primarily in Windows and just want to use a Linux container for a specialized task, then it's fine to click "Don't show again". As Microsoft said, "If you're working in a Windows command line, store your files in the Windows file system."
I run with my main development folder in Windows, and I bind mount it to a Linux container that's just used to execute unit tests. So my full build runs in Windows, then I run all my unit tests in Windows, and I finish by running all my unit tests in a Linux container too. Having Linux bind mount to my Windows folder works fast and great for this scenario where the "dotnet test" call in Linux is just loading and executing the required DLLs from my Windows volume.
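A rough sketch of that test step (the path, image tag, and options are hypothetical, not my exact setup):
docker run --rm -v C:\src\my-app:/app -w /app mcr.microsoft.com/dotnet/sdk:5.0 dotnet test --no-build   # reuses the DLLs already built on Windows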
This setup may sound like heresy to those who believe containers must be used everywhere, but I love containers for application deployment. I'm not convinced that you need to go all in and do all your development inside a container too. I'm happy with Windows (and VS 2019) as my development environment, and I use Linux containers for application testing and deployment, so the Windows/WSL 2 file system performance hit has minimal impact on me.

Show configuration of Docker container

So, I started a container from a Docker image with certain settings a while ago. In the meantime I have updated my container's settings via docker update.
Now I want to see which options/configurations (e.g. cpuset, stack, swap) are currently in effect for my container.
Is there a docker command to check this?
If not, (why the hell isn't there, and) where exactly can I find this information?
I am running Docker 18.03.1-ce on Debian 9.4.
Greetings,
Johannes
I found it out by myself.
To get detailed information about a container's settings you can use:
docker inspect [OPTIONS] <container-id>
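For example, to dump the whole configuration, or to pull out just the resource settings that docker update changes (the container name is hypothetical; the field names follow the HostConfig section of the inspect output):
docker inspect my-container
docker inspect --format '{{.HostConfig.CpusetCpus}} {{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' my-container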

How do I pass DOCKER_OPTS into the docker image running from Docker for Mac?

My root problem is that I need to support a local Docker registry, self-signed certs and whatnot, and after upgrading to Docker for Mac I haven't quite been able to figure out how to pass in options, or persist options, in the docker/alpine image running via the new and shiny xhyve that was installed with Docker for Mac.
I do have the functional piece of my problem solved, but it's very manual:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
log in as root
vi /etc/init.d/docker
Append --insecure-registry foo.local.machine:5000 to DOCKER_OPTS; write file; quit vi.
/etc/init.d/docker restart
Now, if Docker is restarted from the perspective of the main OS / OSX -- say by a simple reboot of the computer -- this change and option is of course lost, and I have to go through this process again.
So, what can I do to automate this?
Am I missing where DOCKER_OPTS may be set? The /etc/init.d/docker file doesn't overwrite the env var internally, it appends to it, so this seems like it should be possible.
Am I missing where files may be persisted in the new docker image? I admit I'm not as familiar with it as with the older image, which I believe was boot2docker based, where I could have a persisted volume attached and an entry point from which to start these modifications.
Thank you for any help, assistance, and inspiration.
Go to the Docker preferences (you can find the icon in the main panel)
Advanced -> Insecure docker registry
Advanced settings pictures
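To confirm the setting survived a restart, you can also check the daemon's view of it from a terminal; the "Insecure Registries" section of the output should list foo.local.machine:5000:
docker info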

Docker images with visual(X) support? Or should I use VBox instead in this case?

I am totally new to Docker and have only, so far, used the images available on Docker repos.
I have already been testing and using Docker for some aspects of my daily activities and it works great for me, but in some specific cases I need a "virtual" Linux image with graphics support (X in Ubuntu or CentOS), and so far the images I have come across on Docker repos don't have X support by default.
Should I use a standard VirtualBox or VMware image in this case? Is it possible to run a graphical version of Linux in a Docker container? I haven't tried it yet.
If you run your containers in privileged mode they can access the host's resources (and pretty much anything else, for that matter), so in essence it is possible, though I'd be willing to bet it turns out to be more trouble than it's worth, because the containers won't be as portable as ones that don't require such outside resources.
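If you do want to experiment, a commonly used alternative to going fully privileged is to share just the host's X socket with the container. A sketch, assuming a Linux host with an X server already running and using a placeholder image name:
docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix some-gui-image   # the app inside draws on the host's X server
You may also need to run xhost +local: on the host to allow the connection.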
