Docker volume mount windows container

I am getting the following error while trying to mount a volume in a Windows Docker container.
===============
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: container 1234567ebcdh encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106)
================
I have tried almost all possible combinations of C:/app paths in the Dockerfile, but I still get this error when starting the container, even without the -v option.
-----------
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]
WORKDIR /application
COPY . .
VOLUME C:/application
CMD cmd
-----------
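For context, the bind mount I eventually want to pass at run time looks something like this (the host path and image name are only placeholders):
-----------
docker run -it -v C:\Users\me\app:C:\application my-windows-image cmd
-----------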
OS: Windows 10
Docker: Docker for Windows 2.0.0
Do you have any idea what went wrong here?

This seems to be related to docker/for-win issue 676, which includes:
I was also having this exact issue:
docker: Error response from daemon: container XYZ encountered an error during Start: failure in a Windows system call: The compute system exited unexpectedly. (0xc0370106).
I found 2 solutions for my case:
I was able to successfully build and run the image by reducing the number of layers in the image history. (For me the limit happened to be 37 layers.) (If your Dockerfile is based on a second Dockerfile, you may need to reduce the number of steps in that second Dockerfile as well.) A sketch of this is at the end of this answer.
How to debug: I was able to debug this by cutting the number of steps in half until the image ran, then re-adding steps until I discovered how many steps the history could have before breaking the image.
I was also able to successfully build and run the image without reducing the number of layers by making sure that the root image was a specific version of windowsservercore:1709 (specifically the 10.0.16299.904_en-us build of 1709, which no longer appears to be pullable; it might also work with the latest version of windowsservercore:1709, but I haven't tried).
I didn't debug this one; I discovered it by blind luck.
Note: the same issue reports that mounting can be problematic.
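As a sketch of the layer-reduction idea from the first solution, chaining steps into a single RUN collapses several history entries into one; the PowerShell commands here are only placeholders:
-----------
# Each instruction adds its own layer to the image history:
# RUN powershell -Command New-Item -ItemType Directory C:\logs
# RUN powershell -Command New-Item -ItemType Directory C:\data
# RUN powershell -Command New-Item -ItemType Directory C:\cache

# Chained into a single RUN, the same work produces one layer:
RUN powershell -Command "New-Item -ItemType Directory C:\logs; New-Item -ItemType Directory C:\data; New-Item -ItemType Directory C:\cache"
-----------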

Related

Large image pull using Docker Desktop in Windows 10 always fails at the end with the error "unauthorized: failed authentication"

I am able to pull small images using Docker Desktop, but I am having issues with a large image of around 7 GB. Pulling large images with Docker Desktop on Windows 10 always gets stuck at the end and fails with the error "unauthorized: failed authentication".
I tried restarting the Docker service and adding the parameter below:
"max-concurrent-downloads": 1
I also tried enabling the "Use containerd for pulling and storing images" feature and reinstalling, but with no success.
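For reference, that parameter goes into the daemon configuration (Docker Desktop: Settings > Docker Engine); the surrounding keys shown here are roughly the defaults Docker Desktop displays there:
-----------
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "max-concurrent-downloads": 1
}
-----------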
Docker desktop version: 20.10.21
Is this a bug in this version? Is there any workaround?
If the image download seems stuck during a large image pull, press Ctrl+C once or twice.
If the pull exits after Ctrl+C, start the download again; it should resume from the same layer. Use Ctrl+C again whenever it gets stuck.
This works for me and seems to be a workaround for this problem.

Visual Studio build fails while copying files to the bin directory due to file locks by vmwp.exe

I'm running my development environment in Docker containers. Since I did some updates, I'm now experiencing difficulties when trying to rebuild the project that runs in my Docker container.
My project is running in a Windows Server Core Docker container running IIS, and I'm running the project from a shared volume on my host. I'm able to build the project before starting the docker container, but after the docker container is started the build fails with the following error:
Could not copy "C:\path\to\dll\name.dll" to "bin\name.dll". Exceeded retry count of 10. Failed. The file is locked by: "vmwp.exe (22604), vmmem (10488)"
It seems that the Hyper-V process is locking the DLL files. This clearly wasn't the case before and this seems to be related to some Docker or Windows updates I have done. How can I solve this issue? Do I need to change the process of building the application and running it in my Docker containers?
I have been searching for a while now, and I can't find much about this specific issue. Any help would be appreciated. Thanks in advance!
I've run into a similar problem. I solved it by stopping/removing the running application container from the Docker for Windows interface. docker rm -f will also do.
Potential solution:
If you use Docker Windows containers, make sure you have at least Windows 10.0.1809 in both environments (your physical machine and the container); run cmd in each and the version is shown at the top.
Use process isolation when you run docker: --isolation process (see the sketch below).
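A rough sketch of both steps; the image name used for the build container is only a placeholder:
-----------
REM Check the Windows build on the host and inside a container
ver
docker run --rm mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

REM Run the build with process isolation instead of the default Hyper-V isolation
docker run --isolation process my-build-image
-----------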
On the physical machine, two vmxxx processes (I don't remember the exact name; one with a lower PID and one with a higher PID) were holding a *.dll file (the build was running on the Docker side, where Build Tools 2019 was used).
Short description:
The first MSBuild error occurred because MSBuild tried to delete the file and got access denied; probably one of those vm processes had a handle on the file.
The second MSBuild error (caused by the first vmxxx process) showed that copying the same DLL file from one location to another was not possible due to a system lock (4).
Both vmxxx processes held the same DLL file during the build on Docker. This was visible in Process Explorer (use the full version from Sysinternals); a command-line alternative is sketched at the end of this answer.
The vmxxx process with the lower PID locked the DLL file and did not release it before the second process, with the higher PID, tried to do something with it.
And it was a random DLL file (or files) that was held by the two different processes.
Also, restricting MSBuild to a single CPU with no parallel build did not solve the issue, and neither did managing CPU and memory on the Docker side. In the end, process isolation on Docker solved the case.
Process isolation should take care of these processes when you build the project from a Docker container.
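If you prefer the command line over Process Explorer, the Sysinternals handle tool lists the processes holding a file open; the DLL name here is just an example, and it should be run from an elevated prompt:
-----------
handle.exe name.dll
-----------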

Docker build command interrupted, unable to tag images / Direct execution

I have nearly the same problem as discussed here:
Docker build command with --tag unable to tag images
But in my case there is no error during the build (as the OP figured out); instead, a server is executed in the process. The last command of my Dockerfile is a RUN command that starts a server. Because of this, every time I build the image, the build never finishes (it goes into the server loop waiting for requests), and when I stop the container, an image tagged <none>:<none> is created instead.
Is there any way to get this straight? The answer provided in the linked SO post just advises changing the image name afterwards with a shell command.
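For what it's worth, the pattern described above can be sketched like this (the base image and server command are only placeholders): a RUN instruction executes while the image is being built, so a long-running server keeps docker build from finishing and tagging the image, whereas CMD only records the command to execute when a container starts.
-----------
FROM python:3.11-slim
WORKDIR /app
COPY . .

# Build-time: this would start the server during `docker build` and never return,
# so the build hangs and only an untagged <none>:<none> image is left behind.
# RUN python server.py

# Run-time: the server starts when the container runs, so the build completes
# and the -t tag is applied normally.
CMD ["python", "server.py"]
-----------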

`docker build` command hangs for a very long time, other commands work fine

Simple question: After using Docker for about a week, my docker build command gets bogged down and hangs (before anything executes) for about a minute. After staying in this hanging state, it will execute the docker build command with no issues at all and at the expected speed.
Other Docker commands (like docker run) do not suffer from this "hanging" issue.
Docker Installation info:
Version 18.06.1-ce-win73
Channel: stable
Things I have tried:
docker system prune - This does clear up space, but doesn't speed up my docker build command
Reinstalling Docker on my machine - This does fix the issue, but it reappeared after about a week of using Docker again.
Does anyone else suffer from this issue?
I had the same issue. I solved it by moving the Dockerfile to an empty folder; then I executed the docker build command and it worked perfectly.
On some other forums people created a .dockerignore file with entries for .git and many other files, but that approach didn't work for me (a sketch of it is below).
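For completeness, the .dockerignore approach mentioned above looks roughly like this; the entries are only examples:
-----------
# .dockerignore in the build context root: listed paths are not sent to the daemon
.git
node_modules
*.log
-----------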
Here was the issue:
The very first line of my Dockerfile (the FROM command) was failing. The "hanging" was caused by a timeout during the attempt to download the base image. I was attempting to download the base image from a location that I needed to set a proxy on my machine for.
So I was mistaken in my original post: The Docker build command wasn't running as expected. It was failing to download the base image due to a missing proxy setting.
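One way to confirm this kind of failure is to pull the base image on its own, since the FROM image is fetched by the daemon rather than the client (the image name is just an example):
-----------
docker pull microsoft/windowsservercore
-----------
If that also stalls, the daemon cannot reach the registry and needs the proxy configured (in Docker Desktop this is done through the Proxies section of the settings).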
Two suggestions:
1. If you are building many Docker images for hours, restart your router if possible; sometimes the router chokes under heavy data traffic.
2. Increase the RAM, CPU, and swap allocated to the Docker engine, restart Docker, and try to build again.

Docker fails on changed GCP virtual machine?

I have a problem with Docker that seems to happen when I change the machine type of a Google Compute Platform VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty (sketched below). Then, after a restart, pull and run work again. But I'm now not sure that they always will.
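For what it's worth, the "delete and recreate" reset I mean looks roughly like this; it assumes the default /var/lib/docker data root and the overlay2 storage driver, and it throws away all local images, layers, and containers, so treat it as a last resort:
-----------
sudo systemctl stop docker
sudo rm -rf /var/lib/docker/image /var/lib/docker/overlay2 /var/lib/docker/containers
sudo systemctl start docker
-----------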
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
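For reference, installing those agents at the time went along these lines; the script URLs are from the Stackdriver-era documentation, and the agents have since been superseded by the Ops Agent, so check Google's current docs:
-----------
curl -sSO https://dl.google.com/cloudagents/install-monitoring-agent.sh
sudo bash install-monitoring-agent.sh
curl -sSO https://dl.google.com/cloudagents/install-logging-agent.sh
sudo bash install-logging-agent.sh
-----------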
