Visual Studio build fails while copying files to the bin directory due to file locks by vmwp.exe - Docker

I'm running my development environment in Docker containers. Since applying some updates, I'm now experiencing difficulties when trying to rebuild the project that runs in my Docker container.
My project runs in a Windows Server Core Docker container running IIS, and I run the project from a shared volume on my host. I'm able to build the project before starting the Docker container, but after the container has started, the build fails with the following error:
Could not copy "C:\path\to\dll\name.dll" to "bin\name.dll". Exceeded retry count of 10. Failed. The file is locked by: "vmwp.exe (22604), vmmem (10488)"
It seems that the Hyper-V process is locking the DLL files. This clearly wasn't the case before and this seems to be related to some Docker or Windows updates I have done. How can I solve this issue? Do I need to change the process of building the application and running it in my Docker containers?
I have been searching for a while now, and I can't find much about this specific issue. Any help would be appreciated. Thanks in advance!

I've run into a similar problem. I solved it by stopping/removing the running application container from the Docker for Windows interface; docker rm -f will also do.
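For example, from a terminal (the container name is a placeholder):

    docker ps                    # list running containers to find the name or ID
    docker rm -f my-app          # force-remove the running application container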

Potential solution:
If you use Docker Windows containers, make sure you have at least Windows 10.0.1809 in both environments (on your physical machine and in the container); run cmd in each and the version is shown at the top.
Use process isolation when you run the container: pass --isolation process to docker run.
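A minimal sketch of both steps (the IIS Server Core image is an assumption; use whatever image you actually run):

    REM in cmd, on the host and in the container; build 17763 corresponds to 1809
    ver
    REM start the container with process isolation instead of the default Hyper-V isolation
    docker run -d --isolation process mcr.microsoft.com/windows/servercore/iis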
On the physical machine, two vmxxx processes (one with a lower PID and one with a higher PID; I don't remember the exact name) were keeping hold of the *.dll files (the build itself ran on the Docker side, where Build Tools 2019 was used).
Short description:
The first MSBuild error occurred because MSBuild tried to delete the file and got access denied; probably one of the vm processes was holding a handle on the file.
The second MSBuild error (caused by the first one) showed that copying the same DLL file from one location to another was not possible due to a system lock (4).
Both vmxxx processes kept a hold on one DLL file during the build in Docker. This was visible in Process Explorer (use the full version from Sysinternals).
The vmxxx process with the lower PID locked the DLL file and did not release it before the process with the higher PID tried to do something with it.
And it was a random DLL file (or files) that was held by the two different processes each time.
Also, restricting MSBuild to a single CPU with no parallel builds did not solve the issue, and neither did managing the CPU and memory on the Docker side. In the end, process isolation on Docker solved the case.
Process isolation should take care of these processes when you build the project from a Docker container, as in the sketch below.
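If the project build happens during docker build rather than inside a running container, the same flag exists there too (the image tag is a placeholder):

    docker build --isolation process -t my-project .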

Related

Docker install of AZCore results in authserver+worldserver doesn't exist error

I'm trying to spin up a fresh server using the AzerothCore Docker installation guide. I have completed all of the early installation steps, up until running the containers. Upon running the containers (for worldserver and authserver), I see the following output from the containers. It appears the destination of the world and auth servers in dist/bin is missing; how may I resolve this issue?
Check your Docker settings and make sure you have allocated enough memory. If the containers have too little memory, they will not finish the compile. Also check whether you have build issues.
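One quick way to see how much memory the engine actually has (how much the compile needs depends on your build):

    docker info --format "{{.MemTotal}}"    # memory available to the Docker engine, in bytes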

"Building Image" task hangs in VS Code Dev Container when using a large directory

I'm using Visual Studio Code on a Windows machine. I'm trying to set up a Python Dev Container using a directory that contains a large set of CSV files (about 200GB). When I click to launch the remote container in VS Code, the application hangs, saying "Starting Dev Container (show log): Building image".
I've been looking through the docs, and having read the Advanced Container Configuration page, I've tried modifying the devcontainer.json file by adding workspaceMount and workspaceFolder entries:
"workspaceMount" : "source=//c/path/to/folder,target=/workspace,type=bind,consistency=delegated"
"workspaceFolder" : "/workspace"
But to no avail. Is there a solution to launching Dev Containers on Windows using folders which contain large files?
I had a slightly different problem, but the solution might help you or someone else. I was trying to run docker-compose inside a docker-in-docker image (provided by VS Code). In my case, my container was able to start, but nothing inside the container was able to run.
To solve my issue, I updated VS Code, and now there is a new option, Remote-Containers: Clone Repository in Container Volume.... If your code is a git repo, you can do this:
Steps #1-#3: (shown as screenshots in the original answer, walking through the Remote-Containers: Clone Repository in Container Volume... prompts)
Follow the steps provided by VS Code, and you should have your repository in the container as a volume. This reduced my build times from about 30 minutes to 3 minutes (within the running container), because I brought the heavy files into the container after it was up and running.
Assuming the 200GB is ignored by your .gitignore, what you could try is this: once the container has started, copy the 200GB worth of CSV files into the container. I expect this to help because I did a similar thing by bringing in all my node_modules after running the container.
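A sketch of that copy step, with placeholder names for the host path and the container:

    docker cp C:\data\csv\. my-dev-container:/workspace/data    # copy the directory contents into the running container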

Docker Desktop - Filesharing notification about poor performance

When my Docker containers start, I receive the following notification that reads:
Docker Desktop has detected that you shared a Windows file into a WSL 2 container, which may perform poorly. Click here for more details.
My questions are:
What does this mean?
What is the better practice / how should this be avoided?
If the message has been closed, or I've clicked "Don't show again", how can I get to the details of this warning?
I am happy to share the Dockerfile or docker-compose setup if needed, but I simply cannot find anything, either here on SO or via a Google search, that points me in any direction, so I'm not sure where to start. I'm assuming the issue lies in the Dockerfile, since that is where we run COPY to move some files around.
Docker Version: Docker Desktop 2.4.0.0 (48506) Community
Operating System: Windows 10 Pro (version 10.0.19041)
This message means that accessing files on the Windows host file system from a Linux container will perform a little slower than accessing files that are already in a Linux filesystem. Accessing Windows files from the Linux container will perform like accessing files on a remote file share.
Docker and Microsoft recommend avoiding this by storing your source files in a WSL2 distro's file system (which you can bind mount to the container) or building your container image to include all the files needed rather than storing your files in the Windows file system.
If you've clicked "Don't show again", you can get to the details of this message by going to Develop with Docker and WSL 2.
For more information, Docker for Windows Best Practices says:
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem. For example, some web development workflows rely on inotify events for automatic reloading when files have changed.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
Microsoft's Comparing WSL 1 and WSL 2 article has a whole section on Performance across OS file systems, and its opening paragraph says:
We recommend against working across operating systems with your files, unless you have a specific reason for doing so. For the fastest performance speed, store your files in the WSL file system if you are working in a Linux command line (Ubuntu, OpenSUSE, etc). If you're working in a Windows command line (PowerShell, Command Prompt), store your files in the Windows file system.
Also, the Docker blog article Docker Desktop: WSL 2 Best practices has an "Awesome mounts performance" section that says:
Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.
A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.
All of that advice is great if you want your primary development workflow to be in Linux. Docker wants you to go "all in" on Linux containers. But if you work primarily in Windows and just want to use a Linux container for a specialized task, then it's fine to click "Don't show again". As Microsoft said, "If you're working in a Windows command line, store your files in the Windows file system."
I run with my main development folder in Windows, and I bind mount it to a Linux container that's just used to execute unit tests. So my full build runs in Windows, then I run all my unit tests in Windows, and I finish by running all my unit tests in a Linux container too. Having Linux bind mount to my Windows folder works fast and great for this scenario where the "dotnet test" call in Linux is just loading and executing the required DLLs from my Windows volume.
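A minimal sketch of that kind of test-only container (the image tag and paths are assumptions):

    docker run --rm -v C:\src\MyApp:/app -w /app mcr.microsoft.com/dotnet/sdk:6.0 dotnet test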
This setup may sound like heresy to those that believe containers must be used everywhere, but I love containers for application deployment. I'm not convinced that you need to go all in and do all your development inside a container too. I'm happy with Windows (and VS 2019) as my development environment, and then I use Linux containers for application testing and deployment. So the Windows/WSL2 file system performance hit is a minimal impact to me.

`docker build` command hangs for a very long time, other commands work fine

Simple question: after using Docker for about a week, my docker build command gets bogged down and hangs (before anything executes) for about a minute. After staying in this hanging state, it executes the docker build command with no issues at all and at the expected speed.
Other Docker commands (like docker run) do not suffer from this "hanging" issue.
Docker Installation info:
Version 18.06.1-ce-win73
Channel: stable
Things I have tried:
docker system prune - This does clear up space, but doesn't speed up my docker build command
Reinstalling Docker on my machine - This does fix the issue, but it reappeared after about a week of using Docker again.
Does anyone else suffer from this issue?
I had the same issue. I solved it by moving the Dockerfile to an empty folder; then I executed the docker build command and it worked perfectly. (docker build sends the whole folder, the build context, to the daemon before the first step runs, so a large context can cause a long delay.)
On some other forums, people created a .dockerignore file excluding the .git directory and many other files, but that approach didn't work for me.
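For reference, a minimal .dockerignore along those lines (the entries are illustrative):

    .git
    node_modules
    *.log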
Here was the issue:
The very first line of my Dockerfile (the FROM instruction) was failing. The "hanging" was caused by a timeout while attempting to download the base image: I was downloading it from a location that required a proxy to be configured on my machine.
So I was mistaken in my original post: the docker build command wasn't running as expected. It was failing to download the base image due to a missing proxy setting.
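One way to confirm this kind of hang is to pull the base image directly and see whether it stalls (substitute whatever your FROM line references):

    docker pull <base-image-from-your-FROM-line>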
Two other possible reasons:
1. If you are building many images for hours, restart your router if possible; under heavy data traffic, the router can sometimes collapse.
2. Increase the RAM, CPU, and swap allocated to the Docker engine, then restart Docker and try the build again.

Do the various caches in Docker on Mac get corrupted?

I am trying to troubleshoot a Dockerfile I found on the web. As it is failing in a weird way, I am wondering whether failed docker builds or docker runs from various subsets of that file, or from other files I have been experimenting with, might corrupt some part of Docker's own state.
In other words, would it possibly help to restart Docker itself, reboot the computer, or run some other Docker command, to eliminate that possibility?
Sometimes just rebooting things helps and it's not wrong to try restarting Docker for Mac or do a full reboot, but I can't think of a specific symptom it would fix and it's not something I need to do routinely.
I've only really run into two classes of problems that sound like what you're describing.
If you have a Dockerfile step that consistently succeeds, but produces inconsistent results:
RUN curl http://may.not.exist.example.com/ || true
You can wind up in a situation where the underlying command failed or produced the wrong output, but the RUN step as a whole succeeded. docker build --no-cache will re-run the build ignoring the cached layers, and an extremely aggressive docker rmi sequence (deleting every build, current and past, of the image in question) will clean it up too.
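For example (the image name is assumed):

    docker build --no-cache -t my-image .    # rebuild every layer, ignoring the cache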
The other class of problem I've encountered involves some level of corruption in /var/lib/docker. This usually has very obvious symptoms generally involving "file not found" or "failed mounting directory" type errors on a setup that you otherwise know works. I've encountered it more on native Linux than Docker for Mac, probably because the DfM Linux installation is a little more controlled and optimized for Docker (it definitely isn't running a 3-year-old kernel with arbitrary vendor patches). On Linux you can work around this by stopping Docker, deleting everything in /var/lib/docker, and starting Docker again; in Docker for Mac, on the preferences window, there's a "Reset" page with various destructive cleanup options and "Reset to factory defaults" is closest to this.
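The Linux workaround described above looks roughly like this (destructive: it deletes all images, containers, and volumes; systemd is assumed):

    sudo systemctl stop docker
    sudo rm -rf /var/lib/docker
    sudo systemctl start docker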
I would first attempt using Docker's 'Diagnose and Feedback' option. This generally runs tests on the health of Docker and the Docker engine.
Docker Desktop also has options for various troubleshooting scenarios under 'Preferences' > 'Reset', which have helped me in the past.
A brief look through previous Docker release notes suggests that it has certainly been possible in the past to corrupt the Docker engine, and there is evidence that the engine has been iteratively fixed since.
