How does Docker for Windows decide what image to use?

I have two build servers building Docker containers, one running Windows 10 1709 and one running Server 2016 LTS. They both build containers based on microsoft/dotnet-framework:latest.
These containers are then deployed by our CD system to a test Server 2016 LTS host. The host is a VM and is restored to a checkpoint prior to each deployment. The checkpoint has the latest microsoft/dotnet-framework:latest image pulled and stored in it; the checkpoint was updated today.
When the container from the Server 2016 LTS build server deploys, it just pulls our part of the image and is up and running in under 60 seconds.
When the image from the Windows 10 1709 build server deploys, it takes ~10 minutes to deploy an image that contains ~10 MB of code. It pulls a different base image (I'm assuming one based on 1709). Once the pull completes, the image fails to run with the following error:
2018-02-15T23:15:57.3170769Z failed to register layer: re-exec error: exit status 1: output: time="2018-02-15T23:15:48Z" level=error msg="hcsshim::ImportLayer failed in Win32: The system cannot find the file specified. (0x2) layerId=\\?\C:\ProgramData\docker\windowsfilter\d7defcca1ec427b77fca7528840e442a596598002140b30afb4b5bb52311c8c6 flavour=1 folder=C:\Windows\TEMP\hcs025707919"
2018-02-15T23:15:57.3171830Z hcsshim::ImportLayer failed in Win32: The system cannot find the file specified. (0x2) layerId=\?\C:\ProgramData\docker\windowsfilter\d7defcca1ec427b77fca7528840e442a596598002140b30afb4b5bb52311c8c6 flavour=1 folder=C:\Windows\TEMP\hcs025707919
My assumption was that all the microsoft/dotnet-framework:latest images are LTS-based, and that you have to explicitly specify 1709 to get that base.
So why do my two Docker images, which both have the same FROM in their Dockerfiles, behave so differently?
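One way to check what a tag actually resolves to per platform is to query the registry's manifest list directly; the daemon picks the entry whose os.version matches the host. A minimal Python sketch, assuming the repository is still served from Docker Hub (these images have since moved to mcr.microsoft.com):

import requests

repo, tag = "microsoft/dotnet-framework", "latest"

# Docker Hub hands out anonymous pull tokens per repository.
token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io",
            "scope": f"repository:{repo}:pull"},
).json()["token"]

# Ask for the manifest *list* rather than a single image manifest.
resp = requests.get(
    f"https://registry-1.docker.io/v2/{repo}/manifests/{tag}",
    headers={"Authorization": f"Bearer {token}",
             "Accept": "application/vnd.docker.distribution.manifest.list.v2+json"},
).json()

for entry in resp.get("manifests", []):
    plat = entry["platform"]
    # os.version is what separates Server 2016 (10.0.14393.*) from 1709 (10.0.16299.*).
    print(plat.get("os"), plat.get("architecture"),
          plat.get("os.version"), entry["digest"])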
Thanks.

Related

Pulling / saving a few docker images in parallel - failure

I am trying to pull and save a few Docker images in parallel (they are quite big and parallelism can save a lot of time; this is done within a Python script that runs docker pull and then docker save in each thread). However, it fails every time with a message like this:
Client Error: Not Found ("open /var/lib/docker/overlay2/41693d132695cd5ada8cf37f210d5b70bc1bac1b2cedfa5a4f352efa5ff00fc6/merged/some_file_name: no such file or directory")
The specific file it complains about ('no such file or directory') varies.
In /var/log/messages (even after adding the debug flag to the Docker daemon options) I can't see anything valuable.
e.g.
level=error msg="Handler for GET /v1.35/images/xxx/xxx:xxx/get returned error: open /var/lib/docker/overlay2/41693d132695cd5ada8cf37f210d5b70bc1bac1b2cedfa5a4f352efa5ff00fc6/merged/opt/external/postgresql-42.2.5/postgresql-42.2.5.jar: no such file or directory"
Important (probably) note: the images share many layers, as they are built from the same parent images (is this the reason for the collision in the overlay FS?).
Running the same thing sequentially (number of parallel threads set to 1) works perfectly.
OS: CentOS 7.9
Docker:
Server Version: 1.13.1
Storage Driver: overlay2
Backing Filesystem: xfs
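One possible workaround, sketched with the docker-py SDK (the image list is a placeholder): keep the pulls parallel, but serialize the save step with a lock, since the concurrent saves appear to be what collides on the shared overlay2 layers on this old engine.

import threading
import docker  # docker-py SDK: pip install docker

client = docker.from_env()
save_lock = threading.Lock()  # allow only one save at a time

def pull_and_save(repo, tag, out_path):
    client.images.pull(repo, tag=tag)            # pulls run safely in parallel
    image = client.images.get(f"{repo}:{tag}")
    with save_lock:                              # saves do not: serialize them
        with open(out_path, "wb") as f:
            for chunk in image.save():           # streamed tar archive
                f.write(chunk)

images = [("alpine", "3.18", "alpine.tar"),      # placeholder image list
          ("busybox", "latest", "busybox.tar")]
threads = [threading.Thread(target=pull_and_save, args=args) for args in images]
for t in threads:
    t.start()
for t in threads:
    t.join()

The pulls still overlap, which is where most of the time goes for big images, so the lock should cost little.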

How can I clone my Google Cloud Instance so I can download it and host it locally using Docker [duplicate]

I have a Google Cloud VM with my application installed on it. The installation step is complete, and I:
Turned off the VM instance.
Exported the disk to a disk image called MY_CUSTOM_IMAGE_1.
My wish now is to use MY_CUSTOM_IMAGE_1 as the starting image of my Docker image build. For building the images I'm using Google Cloud Build.
My Dockerfile should look like this:
FROM MY_CUSTOM_IMAGE_1 AS BUILD_ENV
...
When I tried to use this image I got the build error:
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
ERROR
pull access denied for MY_CUSTOM_IMAGE_1, repository does not exist or may require 'docker login'
Step 1/43 : FROM MY_CUSTOM_IMAGE_1 AS BUILD_ENV
The reason is that VM images are not the same as Docker images.
Is it possible to make this transformation (GCP VM image -> Docker image) without external tools (outside GCP, like private Docker repositories)?
Thanks!
If you know everything that is installed on your VM (and all the commands used), do the same thing in a Dockerfile. Use the same OS version as your current VM as the base image. Perform some tests and you should quickly have an equivalent.
If you have stateful files in your VM application, it's a little more complex: you have to mount a disk in your container and update your application's configuration to write to the mounted folder. It's more "complex", but there are plenty of examples on the internet!
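A minimal sketch of that mount with the docker-py SDK (the image name, host disk path, and in-container data path are all hypothetical):

import docker

client = docker.from_env()
container = client.containers.run(
    "my-rebuilt-image:latest",           # hypothetical rebuilt image
    detach=True,
    volumes={
        "/mnt/disks/app-data": {         # persistent disk mounted on the host
            "bind": "/var/lib/myapp",    # path the app is configured to write to
            "mode": "rw",
        }
    },
)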
No, this is not possible without a tool that extracts your application out of the virtual machine image and recreates it in a container. To the best of my knowledge, no general-purpose tool exists.
There is a big difference between a container image and a virtual machine image. Container images do not ship a full operating system; virtual machine images are a complete operating system plus device data. The two are conceptually similar, but extremely different in how they are implemented at the software and hardware level.

Docker volume mount windows container

I am getting the following error while trying to mount a volume in a Windows Docker container.
===============
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container 1234567ebcdh encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)
================
I have tried almost all the possible combinations of C:/app in the Dockerfile, but I still get the error when starting the container itself, even without the -v option.
-----------
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]
WORKDIR /application
COPY . .
VOLUME C:/application
CMD cmd
-----------
OS: Windows 10
Docker: Docker for Windows 2.0.0
Do you have any idea what went wrong here?
This seems to be tracked in docker/for-win issue 676, which includes:
I was also having this exact issue:
docker: Error response from daemon: container XYZ encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106).
I found 2 solutions for my case:
I was able to successfully build and run the image by reducing the number of layers in the history. (For me this number happened to be a max of 37 layers in history.) (If your dockerfile is based on a 2nd dockerfile, you may need to reduce the number of steps in the 2nd dockerfile.)
How to debug: I was able to debug this by cutting the number of steps in half until the image ran, then re-adding steps until I discovered how many steps the history could have before breaking the image.
I was able to successfully build and run the image without reducing the number of layers by making sure that the root image was a specific version of windowsservercore:1709 (specifically, the 10.0.16299.904_en-us build, which no longer appears to be pullable; it might also work with the latest version of windowsservercore:1709, but I haven't tried).
I didn't debug this, I discovered this by blind luck.
Note: the same issue reports that mounting can be problematic.
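If you want to check an image against that empirical ~37-step limit before trimming your Dockerfile, here is a minimal sketch with the docker-py SDK (the image tag is hypothetical):

import docker

client = docker.from_env()
image = client.images.get("my-windows-image:latest")  # hypothetical tag

history = image.history()                  # roughly one entry per Dockerfile step
layers = image.attrs["RootFS"]["Layers"]   # actual filesystem layers
print(f"history entries: {len(history)}, filesystem layers: {len(layers)}")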

Windows Container on Windows 10 Insider Build and Docker for Windows

I'm on the latest Insider Preview build for Windows 10 (14955.rs_prerelease.161020-1700). I'm also using the most current Docker for Windows (1.12.3-beta29.2 (8280)). I have a Docker build for a simple nanoserver image. When I run the image I get the following error.
docker: Error response from daemon: container e0fa9da740f6ce0534516ededcce3d6f8f4c07ee656ce12e7b76b73077c52f38 encountered an error during Start: failure in a Windows system call: The system cannot find the path specified. (0x3): Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
Outside of using the 1.13 version of the Docker engine as outlined here, is there a way to get this to work?
I would rather stick with Docker for Windows, since I go back and forth between Linux and Windows and the tooling makes it easy.
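Since process-isolated Windows containers must match the host build, a quick sanity check is to compare the two. A minimal sketch with the docker-py SDK (assuming a Windows daemon; exact field availability varies by engine version):

import docker

client = docker.from_env()
info = client.info()                                  # daemon / host facts
image = client.images.get("microsoft/nanoserver:latest")

# KernelVersion is the most portable host field; OSVersion exists on newer engines.
print("host build: ", info.get("OSVersion") or info.get("KernelVersion"))
print("image build:", image.attrs.get("OsVersion"))   # Windows images report OsVersion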
