Copy error log on a failed Docker image build

I run a Docker image build task in a CI environment. The image build failed, and the detailed error information is stored in a file inside the temporary build container. Is there a good approach I can use to copy the error log out of the container so that I can view it as a CI build artifact?
I have seen approaches like [1], which basically comment out the Dockerfile contents from the point of build failure onwards and then run the image build again manually to get at the log. However, this is very time consuming when I am not building the image locally but on a CI server. I am looking for a better approach to this issue. Thanks.
[1] https://pythonspeed.com/articles/debugging-docker-build/
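For reference, this is roughly the kind of workflow I am after; a rough sketch, assuming the classic (non-BuildKit) builder and a hypothetical log path /app/build-error.log:
# With the classic builder, the container of the failed RUN step is left
# behind after the build aborts, so docker cp can pull files out of it.
DOCKER_BUILDKIT=0 docker build -t myimage . || {
  failed=$(docker ps -lq)   # ID of the most recently created container
  docker cp "$failed":/app/build-error.log ./build-error.log
}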

Related

Get multistage dockerfile from image

I have tried docker history and dfimage to get the Dockerfile from a Docker image.
From what I can see, none of the information about the earlier stages of a multi-stage Dockerfile is there. As I think about it, that makes sense: the final Docker image just knows that files were copied in; it probably does not keep a reference to the stage that was used to construct it.
But I thought I would ask just to be sure. (It would be really helpful.)
For example: I have a multi-stage Dockerfile that, in the first stage, builds a .NET Core application, and in the second stage copies the files from that build into an Nginx container.
Is there any way, given the final image, to get the dockerfile used to do the build?
Unfortunately this won't be possible, since your final Docker image won't contain anything from the "builder" stage. Basically, the builder stage is a completely different image: it was built, the files were copied from it during the build of the final image, and then it was discarded.
The intermediate images from the builder stage will live on in your build cache, and you could even tag them to run some kind of Docker image analyzer against them. However, this does not help if you only have access to the final image...
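For example, if you still have the build context and cache, you can build and tag just the builder stage and inspect it directly (a rough sketch; the stage and image names are illustrative):
# Build only the named stage, tag it, then list its layers:
docker build --target builder -t myapp:builder .
docker history myapp:builder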
No, it is not possible. A Docker image only carries its own history, not that of the stages that may have been used to build it.

How do I pass a docker image from one TeamCity build to another?

I need to split a TeamCity build, which builds a Docker image and pushes it into a Docker registry, into two separate builds:
a) The one that builds the docker image and publishes it as an artifact
b) The one that accepts the docker artifact from the first build and pushes it into a registry
The log says that these three commands are running:
docker build -t thingy -f /opt/teamcity-agent/work/55abcd6/docker/thingy/Dockerfile /opt/teamcity-agent/work/55abcd6
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest
There's plenty of other stuff going on, but I figured that this is the important part.
So I have copied the initial build twice, with the first command in the first build and the other two in the second build.
I set the first build as a snapshot dependency of the second build and ran it. What I get is:
FileNotFoundError: [Errno 2] No such file or directory: 'docker': 'docker'
This is probably because some files are missing.
Now, I did want to publish the Docker image as an artifact and make the first build an artifact dependency, but I can't find where Docker puts its files, and every search containing "docker" and "file" just leads to a bunch of articles about what a Dockerfile is.
So what can I do to make the second build use the resulting image and/or environment from the first build?
In all honesty, I didn't understand what exactly you are trying to do here.
However, this might help you:
You can save the image as a tar file:
docker save -o <image_file_name>.tar <image_tag>
This archive can then be moved and imported somewhere else.
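On the second agent you can then restore the image before tagging and pushing it; a rough sketch, reusing the tag from your log (the archive name is illustrative):
# Import the archived image (and its tags) into the local daemon:
docker load -i thingy.tar
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest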
You can get a lot of information about an image or a container with "docker inspect":
docker inspect <image_tag>
Hope this helps.

How can I see what's in the Docker layers?

I am running a docker build and one of the layers is big and always takes a long time to download. Is there a way to see where the layer is coming from and what it does? I would like to check it while it's downloading but it would still be useful to examine it after download. Is either possible?
This is the command I am running:
docker build \
-t registry.gear.ge.com/predix_edge/edge-agent-i386 \
-f docker-runners/.dockerfile-build-i386 docker-runners
Sending build context to Docker daemon 8.027MB
Step 1/13 : FROM registry.gear.ge.com/predix_edge/edge-agent-build-i386:20180920
20180920: Pulling from predix_edge/edge-agent-build-i386
10c05d2b2fbf: Pull complete
3f9f2d6d7ae5: Pull complete
a2f288eed9a5: Pull complete
8fadaaf1d0d3: Pull complete
5c746e81cede: Pull complete
20d91e41d92e: Downloading [===============> ] 113.4MB/366.3MB
c0701269de1c: Download complete
e6a6642f6692: Download complete
ccac838d533e: Download complete
0e3809b7d911: Download complete
e0b7e3addbed: Verifying Checksum
I would like to see what's in layer 20d91e41d92e.
docker history will give you a listing of all of the layers in an image, each layer's size, and the command that was run to create it. That could be a shell command or a Dockerfile directive.
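For example, to see what is in that big base image (a sketch using the image name from your build output; note that the IDs shown during the pull are compressed-layer digests, so they won't match the history rows directly, but a 366 MB layer should be easy to spot by its size):
docker pull registry.gear.ge.com/predix_edge/edge-agent-build-i386:20180920
# --no-trunc prints the full command behind each layer:
docker history --no-trunc registry.gear.ge.com/predix_edge/edge-agent-build-i386:20180920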
In practice, a very large layer will probably be either COPYing some artifact into Docker land, or a software installation of some sort. Depending on what it is that’s being installed, working around this may or may not be tricky. I see a lot of Dockerfiles go by on SO that install a full C toolchain and library header files just to produce a runnable Python library, for instance; that can be split into a multi-stage build that will have a much smaller runtime artifact, but that involves reengineering the build process.
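As a generic illustration of that pattern (a sketch, not your actual build; the package name is a placeholder):
# Stage 1: full build toolchain, used only to produce the artifact
FROM python:3.11 AS builder
RUN pip install --prefix=/install some-package
# Stage 2: slim runtime image; only the built artifact is copied in
FROM python:3.11-slim
COPY --from=builder /install /usr/local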

aws codebuild failed in DOWNLOAD_SOURCE

I am trying to run CodeBuild using my Docker image to build a .NET Framework application,
but when I run the project it fails in the "DOWNLOAD_SOURCE" step with the message:
"Build container found dead before completing the build. Build container died because it was out of memory, or the Docker image is not supported."
The source is CodeCommit.
The compute type is 15 GB.
The Docker image is the same as the one here: https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments-for-the-net-framework/
I've tried the same image with a lightweight project and it works.
Any suggestions? Is there a way to get more logs?
Thanks.
Did you try setting the git-clone-depth to 1 to do a shallow clone, since the same image worked for a lightweight project?
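If it helps, the clone depth can also be set on the project itself; a rough sketch via the AWS CLI (the project name and repository URL are placeholders):
# gitCloneDepth=1 makes CodeBuild do a shallow clone of the source:
aws codebuild update-project \
  --name my-dotnet-project \
  --source "type=CODECOMMIT,location=https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo,gitCloneDepth=1"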

Coverity scan while building in Docker container

I have a custom Docker container in which I build and test a project; it is integrated with Travis CI. Now I want to run the Coverity scan analysis from within Travis CI as well, but the tricky part is (if I understand the Coverity docs correctly) that Coverity needs to intercept the build. The build, however, runs in the container.
Now, according to cov-build --help:
The cov-build or cov-build-sbox command intercepts all calls to the
compiler invoked by the build system and captures source code from the
file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable, considering the Docker philosophy) and emits no files.
What I do not want (at least while there is hope for a better solution):
- to install locally all the stuff necessary to build in the container, only to be able to run the Coverity scan;
- to run cov-build from within the container, since:
  - I believe this would increase the Docker image size significantly;
  - I use the Travis CI addon for the Coverity scan, and this would complicate things a lot.
The Travis CI part is just FWIW; I tried all of that locally and it doesn't work either.
I am grateful for any suggestions to the problem. Thank you.
Okay, I sort of solved the issue.
1. I downloaded and modified (just a few modifications to fit my environment) the script that Travis uses to download and run the Coverity scan.
2. I installed Coverity on the host machine (in my case the Travis CI machine).
3. I ran the Docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the Docker image size.
4. I executed the cov-build command and uploaded the analysis using another part of the script, directly from the Docker container.
Hope this helps someone struggling with a similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile since your compiler is already running), and should result in a build capture.
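A sketch of that wrapper idea (the cov-translate form is taken from the answer above; the --dir flag, the paths, and the use of gcc are assumptions for illustration):
# Hypothetical wrapper that routes every compile through cov-translate:
cat > /usr/local/bin/cc-wrapper <<'EOF'
#!/bin/sh
# --dir is assumed to point at the Coverity intermediate directory
exec /opt/coverity/bin/cov-translate --dir /src/cov-int --run-compile gcc "$@"
EOF
chmod +x /usr/local/bin/cc-wrapper
export CC=/usr/local/bin/cc-wrapper   # point the build at the wrapper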
Here is the solution I use:
In "script", "after_script", or another phase of the Travis job's lifecycle you want:
1. Download the Coverity tool archive using wget (the complete command to use can be found in your Coverity Scan account)
2. Untar the archive into a coverity_tool directory
3. Start your Docker container as usual, without needing to mount the coverity_tool directory as a volume (in case you've created coverity_tool inside the directory from which the Docker container is started)
4. Build the project using the cov-build tool inside Docker
5. Archive the generated cov-int directory
6. Send the result to Coverity using a curl command
Step 6 should be feasible inside the container but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN must be encrypted and exported as an environment variable.
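For reference, steps 5 and 6 typically look something like this (a sketch; the project name and email are placeholders, and the form fields follow the Coverity Scan upload instructions):
# Archive the analysis results, then upload them to Coverity Scan:
tar czf cov-int.tgz cov-int
curl --form token="$COVERITY_SCAN_TOKEN" \
     --form email=dev@example.com \
     --form file=@cov-int.tgz \
     --form version="1.0" \
     --form description="CI build" \
     "https://scan.coverity.com/builds?project=my-project"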
A concrete example is often more understandable than a long text; here is a commit that applies the above steps to build and send results to Coverity Scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c
