I'm trying to run CodeBuild using my Docker image to build a .NET Framework application,
but when I run the project it fails in the "DOWNLOAD_SOURCE" step with the message:
"Build container found dead before completing the build. Build container died because it was out of memory, or the Docker image is not supported."
The source is CodeCommit.
The compute type is 15 GB.
The Docker image is the same as the one here: [https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments-for-the-net-framework/]
I've tried the same image with a lightweight project and it works.
Any suggestions?
Is there a way to get more logs?
Thanks.
Did you try setting the git-clone-depth to 1 to do a shallow clone, since the same image worked for a lightweight project?
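If you prefer doing this from the CLI rather than the console, a rough sketch with the AWS CLI (the project name and repository URL below are placeholders) would be:
aws codebuild update-project --name my-codebuild-project \
  --source "type=CODECOMMIT,location=https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo,gitCloneDepth=1"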
I run a Docker image build task in a CI environment. The Docker image build failed, and the detailed error information is stored in a file inside the temporary container. I wonder if there is a good approach I can use to copy the error log out of the container so that I can view it as a CI build artifact.
I have seen approaches like [1], which basically temporarily comment out the Dockerfile contents from the point of the build failure onward and then re-run the image build manually to get the log. However, this is very time consuming if I am not building the image locally but on a CI server. I am looking for a better approach to this issue. Thanks.
[1] https://pythonspeed.com/articles/debugging-docker-build/
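For context, the workaround described in [1] boils down to something like the following (the image name, log path, and failing command are placeholders):
# comment out the failing step and everything after it in the Dockerfile, then:
docker build -t partial-image .
# re-run the failing command by hand in a container based on the partial image:
docker run --name debug partial-image sh -c "<failing build command>"
# copy the log out of the (now stopped) container so it can be collected as a CI artifact:
docker cp debug:/path/to/error.log ./error.log
docker rm debug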
I am building Docker containers using gcloud:
gcloud builds submit --timeout 1000 --tag eu.gcr.io/$PROJECT_ID/dockername Dockerfiles/folder_with_dockerfile
The last 2 steps of the Dockerfile contain this:
COPY script.sh .
CMD bash script.sh
Many of the changes I want to test are in the script, so the Dockerfile stays intact. Building this image on Linux with docker-compose results in a very quick build because it detects that nothing has changed. However, doing this on gcloud, I notice the complete image being rebuilt even though only a minor change to script.sh has been made.
Any way to prevent this behavior?
Your local build is fast because you already have all remote resources cached locally.
It looks like using the Kaniko cache would speed up your build a lot (see https://cloud.google.com/cloud-build/docs/kaniko-cache#kaniko-build).
To enable the cache on your project run
gcloud config set builds/use_kaniko True
The first time you build the container it will populate the cache (kept for 6 hours by default), and subsequent builds will be faster since dependencies will be cached.
If you need to further speed up your build, I would use two containers and have both in my local GCP container registry:
The first one as a cache with all remote dependencies (OS / language / framework / etc.).
The second one is the one you need, with just the COPY and CMD, using the cache container as its base (see the sketch below).
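A minimal sketch of the second Dockerfile, assuming the dependency image has already been built and pushed (the image name is a placeholder):
FROM eu.gcr.io/my-project/base-with-deps:latest
COPY script.sh .
CMD bash script.sh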
Actually, gcloud has a lot to do:
The gcloud builds submit command:
compresses your application code, Dockerfile, and any other assets in the current directory as indicated by .;
uploads the files to a storage bucket;
initiates a build using the uploaded files as input;
tags the image using the provided name;
pushes the built image to Container Registry.
Therefore the complete build process can be time consuming.
There are recommended practices for speeding up builds such as:
building leaner containers;
using caching features;
using a custom high-CPU VM;
excluding unnecessary files from upload (e.g. via a .gcloudignore file; see the sketch below).
Those could optimize the overall build process.
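For the last point, Cloud Build respects a .gcloudignore file in the upload directory (gitignore-style syntax); a minimal sketch with example entries:
# .gcloudignore
.git
node_modules/
*.log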
I need to split a TeamCity build, which builds a Docker image and pushes it to a Docker registry, into two separate builds.
a) The one that builds the docker image and publishes it as an artifact
b) The one that accepts the docker artifact from the first build and pushes it into a registry
The log says, that there are these three commands running:
docker build -t thingy -f /opt/teamcity-agent/work/55abcd6/docker/thingy/Dockerfile /opt/teamcity-agent/work/55abcd6
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest
There's plenty of other stuff going on, but I figured that this is the important part.
So I have copied the initial build configuration into two new builds, with the first command in the first build and the next two in the second build.
I have set the first build as a snapshot dependency for the second build, and run it. And what I get is:
FileNotFoundError: [Errno 2] No such file or directory: 'docker': 'docker'
Which probably is because some of the files are missing.
Now, I did want to publish the Docker image as an artifact and make the first build an artifact dependency, but I can't find where Docker puts its files, and all searches containing "docker" and "file" just lead to a bunch of articles about what a Dockerfile is.
So what can I do to make it so that the second build can use the resulting image and/or environment from the first build?
In all honesty, I didn't understand exactly what you are trying to do here.
However, this might help you:
You can save the image as a tar file:
docker save -o <image_file_name>.tar <image_tag>
This archive can then be moved and imported somewhere else.
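Using the names from your build log, the handover between the two builds could look roughly like this (assuming thingy.tar is published as an artifact of the first build and lands in the working directory of the second):
# first build:
docker save -o thingy.tar thingy
# second build, after pulling in thingy.tar via the artifact dependency:
docker load -i thingy.tar
docker tag thingy docker.thingy.net/thingy/thingy:latest
docker push docker.thingy.net/thingy/thingy:latest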
You can get a lot of information about an image or a container with "docker inspect":
docker inspect <image_tag>
Hope this helps.
I have a custom Docker container in which I perform build and test of a project. It is somehow integrated with Travis CI. Now I want to run the Coverity scan analysis from within the Travis CI as well, but the tricky part is (if I understand the Coverity docs correctly), that I need to run the build. The build, however, runs in the container.
Now, according to the cov-build --help
The cov-build or cov-build-sbox command intercepts all calls to the
compiler invoked by the build system and captures source code from the
file system.
What I've tried:
cov-build --dir=./cov docker exec -ti container_name sh -c "<custom build commands>"
With this approach, however, Coverity apparently does not catch the calls to the compiler (which is quite understandable considering the Docker philosophy) and emits no files.
What I do not want (at least while there is hope for a better solution):
to install locally all the stuff necessary to build in the container, only to be able to run the Coverity scan;
to run cov-build from within the container, since:
I believe this would increase the Docker image size significantly, and
I use the Travis CI addon for the Coverity scan, and this would complicate things a lot.
The Travis CI part is just FWIW; I tried all of that locally and it doesn't work either.
I would welcome any suggestions for this problem. Thank you.
Okay, I sort of solved the issue.
I downloaded and modified (with just a few modifications to fit my environment) the script that Travis uses to download and run the Coverity scan.
Then I installed Coverity on the host machine (in my case, the Travis CI machine).
I ran the Docker container and mounted the directory where Coverity is installed using docker run -dit -v <coverity-dir>:<container-dir>:ro .... This way I avoided increasing the Docker image size.
Finally, I executed the cov-build command and uploaded the analysis using another part of the script, directly from the Docker container.
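Putting those steps together, the container part might look roughly like this (the image name, mount paths, and build command are placeholders):
docker run -dit --name builder -v /opt/coverity:/coverity:ro my-build-image
docker exec builder /coverity/bin/cov-build --dir /tmp/cov-int sh -c "<custom build commands>"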
Hope this helps someone struggling with a similar issue.
If you're amenable to adjusting your build, you can change your "compiler" to be cov-translate <args> --run-compile <original compiler command line>. This is effectively what cov-build does under the hood (minus the run-compile since your compiler is already running), and should result in a build capture.
Here is the solution I use:
In "script", "after_script" or another phase in Travis job's lifecycle you want
Download coverity tool archive using wget (the complete Command to use can be found in your coverity scan account)
Untar the archive into a coverity_tool directory
Start your docker container as usual without needing to mount coverity_tool directory as a volume (in case you've created coverity_tool inside the directory from where the docker container is started)
Build the project using cov-build tool inside docker
Archive the generated cov-int directory
Send the result to coverity using curl command
Step 6 should be feasible inside the container but I usually do it outside.
Also, don't forget that COVERITY_SCAN_TOKEN needs to be encrypted and exported as an environment variable.
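For illustration, steps 5 and 6 usually boil down to something like this (the email, version string, and project name are placeholders you get from your Coverity Scan account):
tar czf cov-int.tgz cov-int
curl --form token=$COVERITY_SCAN_TOKEN \
     --form email=you@example.com \
     --form file=@cov-int.tgz \
     --form version="1.0" \
     --form description="Travis CI build" \
     "https://scan.coverity.com/builds?project=<project-name>"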
A concrete example is often more understandable than a long text; here is a commit that applies the above steps to build and send results to Coverity Scan:
https://github.com/BoubacarDiene/NetworkService/commit/960d4633d7ec786d471fc62efb85afb5af2bed7c
I've got Jenkins set up to do 2 things in 2 separate jobs:
Build an executable jar and push to Ivy repo
Build a docker image, pulling in the jar from the Ivy repo, and push the image to a private docker registry
During step 1 the jar will have some version which will be appended to the filename (e.g. my-app-0.1-SNAPSHOT, my-app-1.0-RELEASE, etc.). The problem that I'm facing is that in the Dockerfile we have to pull in the correct jar file based on the version number from the upstream build. Additionally, I would ideally like the docker image to be tagged with that same version number.
Would love to hear from the community about any possible solutions to this problem.
Thanks in advance!!
Obviously you need a unique version from (1) to refer to in (2).
0.1 -> 0.2 -> 0.3 -> ...
Not too complicated in terms of how things work together from a build / Docker point of view. I guess the far bigger challenge is to give up SNAPSHOT builds in the development workflow.
With your current Jenkins: release every build you create a container for.
Much better alternative: choose a CI / CD server that uses build pipelines. And if you haven't already done so, take a look at the underlying concept here.
You could use the Groovy Postbuild Plugin to extract, with a regular expression, the exact name of the generated .jar file at the end of step 1.
Then for step 2, you could have a Dockerfile template, replace a placeholder in it with the exact jar name, build the image, and push it to your registry.
Or, if you don't use a Dockerfile, you could have in your Docker registry a premade Docker image which has everything but the jar file, and add the jar to it with these steps (sketched after the list):
create a container from the image
add the jar file into the container using the docker cp command
commit the container into a new image
push the new image to your docker registry
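A rough sketch of those steps, with placeholder registry/image names and the jar name taken from the question:
docker create --name tmp my-registry.example.com/my-app-base:latest
docker cp my-app-1.0-RELEASE.jar tmp:/opt/app/my-app.jar
docker commit tmp my-registry.example.com/my-app:1.0-RELEASE
docker rm tmp
docker push my-registry.example.com/my-app:1.0-RELEASE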
My customer had the same need. We ended up putting placeholders in the Dockerfile, which are replaced using sed just before the docker build.
This way, you can use the version in multiple locations, whether in the FROM line or in any filenames.
Example:
FROM custom:#placeholder#
ENV VERSION #placeholder#
RUN wget ***/myjar-${VERSION}.jar && ...
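For reference, the replacement just before the build might look roughly like this, assuming the version comes in as a Jenkins job parameter (the registry name is a placeholder):
sed -i "s/#placeholder#/${VERSION}/g" Dockerfile
docker build -t docker.example.com/my-app:${VERSION} .
docker push docker.example.com/my-app:${VERSION}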
Regarding consistency, a single unique version is used:
from a job parameter (Jenkins)
to build the artifact (Maven)
to tag the Docker image (Docker)
to tag the Git repository containing the Dockerfile (Git)