We have a .NET 6 app deployed on Kubernetes with autoscaling. Very occasionally a pod will start and, after the container begins receiving requests, it will repeatedly throw an exception with the message "Bad binary signature. (0x80131192)".
This is the Dockerfile instruction that publishes the app:
RUN dotnet publish -c Release -r linux-x64 -o /app -p:PublishReadyToRun=true
I gather from my research that some referenced assembly may be targeting 32-bit, and that we need to find it, but that still leaves me puzzled.
What I'd like to know is, why doesn't this happen to every pod?
Alternatively, as far as I understand, a file could be "corrupt". That leads me to ask, why does it only happen to this application?
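For hunting down a 32-bit culprit, one possible sketch (assuming the publish output lives in /app, as in the publish command above): the file utility reports 32-bit PE images as "PE32" and 64-bit ones as "PE32+", so a scan over the published DLLs can surface any 32-bit assembly.

```shell
# Scan the published output for 32-bit assemblies. `file` prints "PE32"
# (no plus sign) for 32-bit PE images and "PE32+" for 64-bit ones.
find /app -name '*.dll' -print0 2>/dev/null \
  | xargs -0 -r file \
  | grep 'PE32 executable' \
  || echo "no 32-bit assemblies found"
```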
I am running a docker build command with a Dockerfile, but this is being held up by a slow, and sometimes aborted, download of a certain package (Google's BoringSSL, as it happens).
I would like to install squid near the start of the Dockerfile, so that subsequent git clones, apt-gets, and so on (every kind of download) are cached to a directory outside the Docker image, by defining a volume in the docker build command.
I'm fairly familiar with Docker and understand the concept of layers, so a reply dealing solely with those will not be useful to me (although it possibly will be to others). Sometimes one has to make a Dockerfile change that disrupts subsequent layers, and a build error in a subsystem within a layer means all the web fetches for that layer must be repeated on the next build. So layers are not the answer to everything cache-related.
Thanks in anticipation!
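One non-squid approach worth noting, since it survives exactly the layer-busting cases described above: recent BuildKit versions support cache mounts, which persist a cache directory across builds even when the RUN layer itself is re-executed. A minimal sketch (base image and package choices are purely illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bullseye
# Keep apt's downloaded .deb files in a cache that outlives this layer.
# Removing docker-clean stops apt from deleting the cache after install.
RUN rm -f /etc/apt/apt.conf.d/docker-clean
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    apt-get update && apt-get install -y git
```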
I have a docker container where a server takes ~20s to boot up initially because it needs to do a lot of setup, but then it serves requests quickly.
This has been fine so far deploying webapps on AWS App Runner, as the startup time is a one-off, but we are considering transitioning the service to Lambda to increase scalability, and we are hitting an issue where that 20s boot happens on every Lambda cold start.
As all of the work for the boot stage can be done without serving any requests, I'd like to be able to do the boot part during docker build and then resume the executable during docker run to actually process and serve requests.
Is this sort of pattern/setup possible? Thanks!
Nothing in Docker directly supports this. If your application is able to create a snapshot of itself then this could be theoretically possible, but it would work the same way inside Docker as outside it.
As a specific example of this, the Emacs text editor preloads much of its Lisp source at build time and then dumps the processed code, so that it doesn't have to reload the Lisp source at every startup. This is a very long-standing feature of Emacs, dating back decades before Docker existed. The mechanics of this are very specific to the program, though.
I have a docker container where a server takes ~20s to boot up initially...
I wouldn't consider this exceptional; it's in the space where it would be nice if it started faster, but probably won't cause operational problems. I normally expect the standard postgres or mysql database images to take 30-60 seconds to start up, and I routinely work with runtimes that take about that long to load.
I'm loading around 50mb of pre-compiled files at startup time...
You might see if you can restructure your application to preprocess this data into a form that's faster to load. Binary formats like protobuf or CBOR can be faster than reading JSON or XML or parsing text files, for example.
In Docker space this could be a reasonable use of a multi-stage build. Here is a hypothetical example of a Java application that precompiles its data files somehow:
FROM openjdk AS app
COPY app.jar /
FROM app AS precompile
WORKDIR /data
COPY *.json .
RUN java -cp /app.jar com.example.Preprocess -o data.bin *.json
FROM app
COPY --from=precompile /data/data.bin /data.bin
CMD java -jar /app.jar -precompiled-data /data.bin
I'm running my development environment in Docker containers. Since I have done some updates I'm now experiencing some difficulties when trying to rebuild my project that's running in my Docker container.
My project is running in a Windows Server Core Docker container running IIS, and I'm running the project from a shared volume on my host. I'm able to build the project before starting the docker container, but after the docker container is started the build fails with the following error:
Could not copy "C:\path\to\dll\name.dll" to "bin\name.dll". Exceeded retry count of 10. Failed. The file is locked by: "vmwp.exe (22604), vmmem (10488)"
It seems that a Hyper-V process is locking the DLL files. This clearly wasn't the case before, and it seems to be related to some Docker or Windows updates I have done. How can I solve this issue? Do I need to change the process of building the application and running it in my Docker containers?
I have been searching for a while now, and I can't find much about this specific issue. Any help would be appreciated. Thanks in advance!
I've run into a similar problem. I solved it by stopping/removing the running application container from the Docker for Windows interface; docker rm -f will also do.
Potential solution:
If you use Docker Windows containers, make sure you have at least Windows 10.0.1809 in both environments (your physical machine and the container); run cmd in each and you will see the version at the top.
Use the process isolation flag when you run Docker: --isolation process.
On the physical machine, two vmxxx processes (one with a lower PID, one with a higher; I don't remember the name exactly) were holding the *.dll file (the build was running on the Docker side, where Build Tools 2019 was used).
Short description:
The first MSBuild error occurred because MSBuild tried to delete the file and got access denied; probably one of the vm processes held a handle to the file.
The second MSBuild error (caused by the first vmxxx process) showed that copying the same DLL file from one location to another was not possible due to a system lock (error 4).
Both vmxxx processes held one DLL file during the build on Docker. This was visible in Process Explorer (use the full version from Sysinternals).
The vmxxx process with the lower PID locked the DLL file and did not release it before the process with the higher PID tried to do something with it.
It was a random DLL file (or files) each time that was held by the two different processes.
Also, restricting MSBuild to a single CPU with no parallel build did not solve the issue, and neither did managing CPU and memory on the Docker side. In the end, process isolation on Docker solved the case.
Process isolation should take care of these processes when you build the project from a Docker container.
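For example (image name purely illustrative), the build container would be started with:

```shell
docker run --isolation process my-build-tools-image
```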
I'm running into an issue where my Docker container exits with code 137 after about a day of running. The container's logs contain no information indicating that an error has occurred. Additionally, attempts to restart the container fail with an error that the application's PID file already exists.
The container is built using the sbt docker plugin (sbt docker:publishLocal) and is then run using
docker run --name=the_app --net=the_app_nw -d the_app:1.0-SNAPSHOT.
I'm also running 3 other Docker containers, which together use 90% of the available memory, but it's only ever this particular container that exits.
I'm looking for any advice on where to look next.
Exit code 137 (128 + 9) means the process was killed (as with kill -9 yourApp) by something. That something can be a lot of things: maybe Docker killed it for using too many resources, maybe the system ran out of memory, etc.
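The 128 + 9 arithmetic is easy to reproduce in any shell, no Docker involved:

```shell
# Kill a background process with SIGKILL and observe the 128 + 9 = 137 status.
sleep 30 &
pid=$!
kill -9 "$pid"
wait "$pid" && status=$? || status=$?
echo "exit status: $status"   # prints "exit status: 137"
```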
Regarding the PID problem, you can add this to your build.sbt:
javaOptions in Universal ++= Seq(
"-Dpidfile.path=/dev/null"
)
Basically, this instructs Play not to create a RUNNING_PID file. If it does not work, you can try passing that option directly in Docker using the JAVA_OPTS environment variable.
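For the JAVA_OPTS route, the run command from the question would become something like this (a sketch; whether JAVA_OPTS is honored depends on the image's start script):

```shell
docker run --name=the_app --net=the_app_nw -d \
  -e JAVA_OPTS="-Dpidfile.path=/dev/null" the_app:1.0-SNAPSHOT
```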
I've been experimenting with docker recently but can't get my head around what I think is a fairly important/useful requirement:
The ability to download a NEW copy of a web site for running, when a container is run. NOT at build time, but at run time.
I have seen countless examples of Dockerfiles where Java, Tomcat, and a copy of a WAR are installed and added to an image at build time, but none where that WAR is downloaded fresh each time "docker run -d me/myimage" is executed on the command line.
I think it might involve adding a CMD statement at the end of the Dockerfile, but I wonder if people out there more experienced with Docker than me have some advice? Perhaps I shouldn't even be attempting this and should rebuild my images each time my web app has a new release? But that would mean I would have to distribute my new image via a private Docker Hub or something, right? I am not willing to put my source in a public GitHub repo and have the Dockerfile pull it and build it during an image build.
Thanks.
As Mark O'Connor said in his comment, it's certainly possible. A Docker container is just a process tree running on your Linux host, and with a few exceptions (generally involving privileged access to the kernel) can do anything you can do outside of a container.
So sure, you could put together an image that, when run, would download the most recent version of an application and run it.
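As a sketch of what that could look like (the APP_WAR_URL variable is hypothetical, and this assumes curl is available in the base image):

```dockerfile
FROM tomcat:9
# Fetch the newest WAR at container start, not at build time.
# APP_WAR_URL must be supplied with `docker run -e APP_WAR_URL=...`.
CMD curl -fSL "$APP_WAR_URL" -o /usr/local/tomcat/webapps/ROOT.war \
    && catalina.sh run
```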
The reason this is considered a bad idea is that it suddenly becomes difficult to run an older version of the application (or, more generally, a specific version). What if you redeploy your container and end up with a new version of the application that requires manual database schema upgrades before it will operate? Now instead of an application you have a brick.
Similarly, what if the newest version of the application is simply buggy? If you were performing the download and install at build time, you could simply redeploy an image containing an older version of the application.
Performing the download and install at run time makes the container unpredictable and less manageable.