Windows Container on Windows 10 Insider Build and Docker for Windows - docker

I'm on the latest insider preview build for Windows 10 (14955.rs_prerelease.161020-1700). I'm also using the most current Docker for Windows (1.12.3-beta29.2 (8280)). I have a Docker build for a simple nanoserver image. When I run the image I get the following error.
docker: Error response from daemon: container e0fa9da740f6ce0534516ededcce3d6f8f4c07ee656ce12e7b76b73077c52f38 encountered an error during Start: failure in a Windows system call: The system cannot find the path specified. (0x3): Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
Outside of using the 1.13 version of the Docker engine as outlined here, is there a way to get this to work?
I would rather stick with Docker for Windows since I go back and forth between Linux and Windows and the tooling makes it easy.

Related

Docker Desktop - Filesharing notification about poor performance

When my Docker containers start, I receive a notification that reads:
Docker Desktop has detected that you shared a Windows file into a WSL 2 container, which may perform poorly. Click here for more details.
My questions are:
What does this mean?
What is the better practice / how should this be avoided?
If the message has been closed, or I've clicked "Don't show again", how can I get to the details of this warning?
I am happy to share the Dockerfile or Docker Compose setup if needed, but I simply cannot find anything either here on SO or in a Google search that points me in any direction, so I'm not sure where to start. I'm assuming the issue lies in the Dockerfile, since that is where we are running COPY to move some files around.
Docker Version: Docker Desktop 2.4.0.0 (48506) Community
Operating System: Windows 10 Pro (version 10.0.19041)
This notification means that accessing files on the Windows host file system from a Linux container will be somewhat slower than accessing files that are already in a Linux filesystem. Accessing Windows files from the Linux container performs like accessing files on a remote file share.
Docker and Microsoft recommend avoiding this by storing your source files in a WSL2 distro's file system (which you can bind mount to the container) or building your container image to include all the files needed rather than storing your files in the Windows file system.
If you've clicked "Don't show again", you can get to the details of this message by going to Develop with Docker and WSL 2.
For more information, Docker for Windows Best Practices says:
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
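To make that concrete, here is a minimal sketch (the image name and paths are just hypothetical examples): first the slow form, then the recommended one where the project is copied into the WSL 2 distro first.
# slow: the project lives on the Windows drive and is exposed to the container through /mnt/c
docker run -v /mnt/c/Users/me/my-project:/sources my-image
# faster: copy the project into the WSL 2 distro's own file system, then bind mount from there
cp -r /mnt/c/Users/me/my-project ~/my-project
docker run -v ~/my-project:/sources my-image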
Microsoft's Comparing WSL 1 and WSL 2 article has a whole section on Performance across OS file systems, and its opening paragraph says:
We recommend against working across operating systems with your files, unless you have a specific reason for doing so. For the fastest performance speed, store your files in the WSL file system if you are working in a Linux command line (Ubuntu, OpenSUSE, etc). If you're working in a Windows command line (PowerShell, Command Prompt), store your files in the Windows file system.
Also, the Docker blog article Docker Desktop: WSL 2 Best practices has an "Awesome mounts performance" section that says:
Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.
A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.
All of that advice is great if you want your primary development workflow to be in Linux. Docker wants you to go "all in" on Linux containers. But if you work primarily in Windows and just want to use a Linux container for a specialized task, then it's fine to click "Don't show again". As Microsoft said, "If you're working in a Windows command line, store your files in the Windows file system."
I run with my main development folder in Windows, and I bind mount it to a Linux container that's just used to execute unit tests. So my full build runs in Windows, then I run all my unit tests in Windows, and I finish by running all my unit tests in a Linux container too. Having Linux bind mount to my Windows folder works fast and great for this scenario where the "dotnet test" call in Linux is just loading and executing the required DLLs from my Windows volume.
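As a rough sketch of that test-only container (the image tag and Windows path are placeholders, not my exact setup), the Linux test pass amounts to something like:
docker run --rm -v C:\src\MyApp:/src -w /src mcr.microsoft.com/dotnet/sdk:5.0 dotnet test --no-build
The --no-build flag is the point here: the DLLs were already built on Windows, so the container only loads and executes them from the mounted volume.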
This setup may sound like heresy to those that believe containers must be used everywhere, but I love containers for application deployment. I'm not convinced that you need to go all in and do all your development inside a container too. I'm happy with Windows (and VS 2019) as my development environment, and then I use Linux containers for application testing and deployment. So the Windows/WSL2 file system performance hit is a minimal impact to me.

Google Cloud Engine pull public docker image causes manifest unknown error

I'm trying to run a Docker image on a GCE VM instance. I changed the core count and checked the box for deploying a container image. In the container image box I put docker.io/urw7rs/spiralpp:latest.
When I create the VM, I get an error: Attempting next endpoint for pull after error: manifest unknown: Failed to fetch "latest" from request "/v2/urw7rs/spiralpp/manifests/latest".
I tried changing docker.io to registry.hub.docker.com and checked "Allow HTTP traffic" and "Allow HTTPS traffic" in the firewall settings. I also tried running without any command or command arguments.
(screenshot: container options)
I've reproduced your issue in my project and I found a workaround for this:
Please create a new instance using a Container-Optimized OS image.
Container-Optimized OS is an operating system image for your Compute Engine VMs that is optimized for running Docker containers; please refer to this guide to learn how to install it.
I suggest skipping steps 4, 5, and 6, so that you can use docker commands within the instance to pull the image.
Don't forget to choose the latest version of Container-Optimized OS during VM instance creation. To do this, once you've clicked the "Create instance" button, find the "Boot disk" section and click the "Change" button; then go to the operating system option, select "Container-Optimized OS", adjust the Size (GB) of the disk, and leave the version at its default.
Once the VM instance is ready, SSH into the instance.
Once you're inside the instance, please run this command:
docker pull urw7rs/spiralpp:latest
Then run this command to check that the image was downloaded to your instance.
docker images
Please let me know the results.
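For reference, a rough gcloud equivalent of those console steps (the instance name and zone are placeholders) could look like this:
gcloud compute instances create spiralpp-test --zone=us-central1-a --image-family=cos-stable --image-project=cos-cloud
gcloud compute ssh spiralpp-test --zone=us-central1-a
docker pull urw7rs/spiralpp:latest
docker images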

Running Actinia on docker container

I have recently heard about Actinia and I would like to try it out (I am a remote sensing analyst and I am not used to using the command line).
I use Windows 10. I have cloned Actinia from GitHub and am trying to run it in a Docker container. I switched from Windows containers to Linux containers. When I run the following in Git Bash:
docker-compose build --pull
It stops at step 16/49 while trying to connect to GRASS GIS, failing repeatedly with the same problem:
GRASS GIS libgis version and date number not available
ERROR: Cannot open URL:
followed by the URL it is trying to reach.
Thus, I wonder if there is a configuration I am missing.
Source: https://github.com/mundialis/actinia_core/tree/master/docker

Google Cloud VM Image to docker image

I have a Google Cloud VM with my application installed on it. The installation step is completed, and I:
Turned off the VM instance.
Exported the disk to disk image called MY_CUSTOM_IMAGE_1
My wish now is to use MY_CUSTOM_IMAGE_1 as the starting image of my docker image build. For building the images I'm using Google Cloud Build.
My Dockerfile would look like this:
FROM MY_CUSTOM_IMAGE_1 AS BUILD_ENV
...
When I tried to use this image I got the build error:
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
ERROR
pull access denied for MY_CUSTOM_IMAGE_1, repository does not exist or may require 'docker login'
Step 1/43 : FROM MY_CUSTOM_IMAGE_1 AS BUILD_ENV
The reason is that VM images are not the same as Docker images.
Is it possible to make this transformation (GCP VM image -> Docker image) without external tools (outside GCP, like private Docker repositories)?
Thanks!
If you know everything that is installed on your VM (and all the commands used), do the same thing in a Dockerfile. Use the same OS version as your current VM as the base image. Perform some tests and it should quickly be equivalent.
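As a rough sketch of that approach (the base image, package, and paths are placeholders for whatever is actually installed on your VM):
# base image matching the VM's OS (placeholder: adjust to your actual OS and version)
FROM debian:10
# re-run the same installation commands you used when setting up the VM (placeholder package)
RUN apt-get update && apt-get install -y --no-install-recommends python3 && rm -rf /var/lib/apt/lists/*
# copy your application files into the image
COPY ./app /opt/app
CMD ["python3", "/opt/app/main.py"]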
If your VM application has stateful files, it's a little more complex: you have to mount a disk in your container and update your application's configuration to write to the mounted folder. It's more "complex", but there are plenty of examples on the internet!
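For the stateful case, a minimal sketch (the disk mount point, container path, and image name are hypothetical) is to bind mount the directory where the persistent disk is attached on the host and point the application's data directory at it:
docker run -d -v /mnt/disks/app-data:/var/lib/myapp my-rebuilt-image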
No, this is not possible without a tool to extract your application out of the virtual machine image and recreate it in a container. To the best of my knowledge, no such general-purpose tool exists.
There is a big difference between a container image and a virtual machine image. Container images do not contain a full operating system (they share the host's kernel), while virtual machine images contain a complete operating system and device data. The two are conceptually similar, but very different in how they are implemented at the software and hardware level.

Resources