I am required to create a container (docker run), run an application such as GAMS, and then destroy the container. For a load test I repeat this 1000 times. By the end of the test, RHEL7 complains about a 'read-only filesystem' or a 'segmentation fault' on ls. The only fix so far has been a disk reset. I tried increasing the ulimit and resetting $LD_LIBRARY_PATH; neither worked.
What could be a good diagnosis?
Solutions so far:
upgraded from Docker 1.12.6 to 17.x
set a 5-minute interval between runs (see the sketch below)
no disk reset has been needed since implementing the two changes above; waiting for the next test to complete to confirm
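For reference, a minimal sketch of the adjusted load loop; the image name and GAMS invocation are assumptions, and --rm makes Docker remove each container as soon as it exits:

    for i in $(seq 1 1000); do
        docker run --rm my-gams-image gams model.gms   # image and command are hypothetical
        sleep 300                                      # 5-minute interval between runs
    done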
Update
The issue cropped up again when copying files from the master node to a compute node through Java code: "bash: cannot create temp file for here-document: Read-only file system"
Related
I used to have a working Dockerfile which I had built multiple times (with docker compose), and it worked perfectly. Recently (for the last 2 days), it has stopped building and will not rebuild. The Dockerfile and docker compose file are used by a Dev Container to set up the development environment for VS Code. The contents of the Dockerfile are below:
# the only line in the file; it sets up an image that runs Azure Functions
FROM mcr.microsoft.com/azure-functions/python:4-python3.8-core-tools
The error I am getting:
failed commit on ref "layer-sha256:627f1702b7f112be5cefb32841e7d97268d106a402689b1bbc2e419641737372": "layer-sha256:627f1702b7f112be5cefb32841e7d97268d106a402689b1bbc2e419641737372" failed size validation: 503316480 != 622344404: failed precondition
My observations so far (building manually with docker build .):
The layer layer-sha256:627f1702b7f112be5cefb32841e7d97268d106a402689b1bbc2e419641737372 starts downloading very early on and takes nearly 2000 s to reach 503.32 MB, at which point it hangs. After a while it resets to 0 and restarts the download. This repeats around 5 times and then finally fails with the error above.
Can someone guide me on what the issue is and how to fix it?
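One thing worth trying (a guess based on the symptom, not a confirmed fix): pull the base image on its own first, so the large layer download can be retried independently of the build:

    docker pull mcr.microsoft.com/azure-functions/python:4-python3.8-core-tools
    docker build .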
I'm running my development environment in Docker containers. Since doing some updates, I'm now experiencing difficulties when trying to rebuild the project that runs in my Docker container.
My project runs in a Windows Server Core Docker container running IIS, and I run the project from a shared volume on my host. I'm able to build the project before starting the Docker container, but after the container is started, the build fails with the following error:
Could not copy "C:\path\to\dll\name.dll" to "bin\name.dll". Exceeded retry count of 10. Failed. The file is locked by: "vmwp.exe (22604), vmmem (10488)"
It seems that a Hyper-V process is locking the DLL files. This clearly wasn't the case before, and it seems to be related to some Docker or Windows updates I have done. How can I solve this issue? Do I need to change the process of building the application and running it in my Docker containers?
I have been searching for a while now, and I can't find much about this specific issue. Any help would be appreciated. Thanks in advance!
I've run into a similar problem. I solved it by stopping/removing the running application container from the Docker-for-Windows interface. docker rm -f will also do.
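For example (the container name here is hypothetical):

    docker ps                       # find the ID or name of the running container
    docker rm -f my-app-container   # force-stop and remove it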
Potential solution:
If you use Docker Windows containers, make sure you have at least Windows 10.0.1809 in both environments (your physical machine and inside the container); run cmd in each and the version is shown at the top of the window.
Use the process isolation flag when you run Docker: --isolation process.
On the physical machine, two vmxxx processes (one with a lower and one with a higher PID; I don't remember the exact name) were holding a *.dll file (the build was running on the Docker side, where Build Tools 2019 was used).
Short description:
The first MSBuild error occurred because MSBuild tried to delete the file and got access denied; probably one of the vmxxx processes was holding the file.
The second MSBuild error (caused by the first vmxxx process) showed that copying the same DLL file from one location to another was not possible due to a system lock (error 4).
Both vmxxx processes held the same DLL file during the build on Docker. This was visible in Process Explorer (use the full version from Sysinternals).
The vmxxx process with the lower PID locked the DLL file and did not release it before the process with the higher PID tried to do something with it.
And it was a random DLL file (or files) that was held by the two different processes.
Also, limiting MSBuild to one CPU with no parallelism did not solve the issue, nor did managing the CPU and memory on the Docker side. In the end, process isolation on Docker solved the case.
Process isolation should take care of these processes when you build the project from a Docker container.
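A minimal sketch of the flag in use; the image tag and command are assumptions, and process isolation requires the image's Windows version to match the host build:

    docker run --isolation process mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver

With process isolation the container shares the host kernel directly instead of running inside a lightweight Hyper-V VM, so no vmwp.exe/vmmem process sits between the build and the files.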
I had started Hyperledger Composer from fabric-dev-server, so all the images were running as usual.
Now, after two weeks, I have noticed that my HDD space is being occupied by Docker containers.
So, here are some screenshots of my HDD space:
[Screenshot: Day 1]
[Screenshot: Day 2]
In 2 days, the available HDD space went from 9.8G to 9.3G.
So, how can I resolve this issue?
I think the problem is that the peer0 Docker container is generating too many logs: if the container runs continuously, it generates more log output every time you access the fabric network.
You can check the size of the log file for a particular Docker container:
Find the container ID of peer0.
Go to the directory /var/lib/docker/containers/<container_id>/.
There should be a file named <container_id>-json.log.
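A minimal sketch of those steps as shell commands; filtering on the name peer0 is an assumption about how your container is named:

    id=$(docker ps -q --no-trunc --filter name=peer0)    # full container ID
    ls -lh /var/lib/docker/containers/$id/$id-json.log   # check the log file size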
So in my case:
My fabric network had been running for 2 weeks, and the log file was at (example):
/var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
I checked the size of that file; it was nearly 6.5 GB.
Solution (Temporary):
Run the command below, which empties that file without deleting it (example):
> /var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log
Solution (Permanent):
What you can do is write a script that runs every day and empties that log file.
You can use crontab, which gives you the ability to run a script at a specific time, on specific days, etc.
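For example, a minimal cron entry using the container ID from the example above; the daily midnight schedule and the use of truncate are assumptions (add it with crontab -e as root):

    0 0 * * * truncate -s 0 /var/lib/docker/containers/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830/a50ea6b441ee327587a73e2a0efc766ff897bed2e187575fd69ff902b56a5830-json.log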
Simple question: after using Docker for about a week, my docker build command gets bogged down and hangs (before anything executes) for about a minute. After this hanging state, it executes the docker build command with no issues at all and at the expected speed.
Other Docker commands (like docker run) do not suffer from this "hanging" issue.
Docker Installation info:
Version 18.06.1-ce-win73
Channel: stable
Things I have tried:
docker system prune - This does clear up space, but doesn't speed up my docker build command
Reinstalling Docker on my machine - this fixed the issue, but it reappeared after about a week of using Docker again.
Does anyone else suffer from this issue?
I had the same issue. I solved it by moving the Dockerfile to an empty folder and executing the docker build command there; it worked perfectly. (The likely reason: the docker client sends the entire contents of the build directory to the daemon as the build context before the build starts, so a large folder makes every build slow to begin.)
On some other forums, people created a .dockerignore file excluding the .git directory and many other files, but that approach didn't work for me.
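For reference, a minimal .dockerignore sketch; the entries are illustrative, not taken from those posts:

    # .dockerignore - keep large or irrelevant paths out of the build context
    .git
    node_modules
    *.log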
Here was the issue:
The very first line of my Dockerfile (the FROM instruction) was failing. The "hanging" was caused by a timeout while attempting to download the base image, which I was pulling from a location that required a proxy to be set on my machine.
So I was mistaken in my original post: the docker build command wasn't running as expected; it was failing to download the base image due to a missing proxy setting.
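On Docker for Windows the proxy is configured under Settings > Proxies; on a Linux host, a common approach is a systemd drop-in for the daemon. A minimal sketch, with the proxy URL as a placeholder:

    # /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxy.example.com:3128"
    Environment="HTTPS_PROXY=http://proxy.example.com:3128"

    # then reload and restart the daemon:
    systemctl daemon-reload
    systemctl restart docker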
Two possible reasons:
1. If you are building many Docker images for hours, restart your router if possible; heavy packet traffic can sometimes overwhelm it.
2. Increase the RAM, CPU, and swap allocated to the Docker engine, restart Docker, and try the build again.
So, I am trying to create a Docker image from a Dockerfile. It involves copying a 10 GB binary file into the image. Due to connectivity issues, my download stopped at about 90% a total of 3 times; after each failure I simply ran the docker build command again.
I am not sure whether Docker deletes the old files on its own, or whether I have now used about 30 GB of space.
I am using Windows 10 with Hyper-V.
Temp files live in the TEMP directory. Docker, like any other well-written application, removes its temp files after use.
Of course, when a build is not ended normally, things can go wrong. If unsure, check your TEMP directory; you should be able to spot any 10 GB files by their time and date.
As specified, the docker daemon cleans up all traces of the build process when it loses contact with the docker client for any reason.
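If you want to verify and reclaim space anyway, a hedged sketch (exact output varies by Docker version):

    docker system df      # show disk usage by images, containers, and build cache
    docker system prune   # remove stopped containers, dangling images, and other unused data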