In a multistage Docker build I execute unit tests, generate a coverage report, and build an executable in a build stage, and then copy the executable over to a run stage:
FROM golang:1.13 AS build-env
COPY . /build
WORKDIR /build
# execute tests
RUN go test ./... -coverprofile=cover.out
# execute build
RUN go build -o executable
FROM gcr.io/distroless/base:latest
COPY --from=build-env /build/executable /executable
ENTRYPOINT ["/executable"]
How can I extract the cover.out in a Jenkins environment to publish the coverage report?
With Docker 17.05, the only options I know of have been posted: refactoring to run the step with a run command and volume mount, or as you've done, docker cp. Each option requires building the specific build target with the --target option, which I suspect was your original question.
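For reference, the volume-mount variant mentioned above might look roughly like this (a sketch only; the stage name and cover.out come from your Dockerfile, the /out path is illustrative):
docker build --target build-env -t myapp:build .
docker run --rm -v "$PWD:/out" myapp:build cp /build/cover.out /out/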
With BuildKit in 19.03, there's an option to build something other than an image. That would look like:
# import code
FROM golang:1.13 AS code
COPY . /build
WORKDIR /build
# execute tests
FROM code as test
RUN go test ./... -coverprofile=cover.out
# execute build
FROM code as build
RUN go build -o executable
# add a stage for artifacts you wish to extract from the build
FROM scratch as artifacts
COPY --from=test /build/coverage.xml /
COPY --from=test /build/report.xml /
# and finally the release stage
FROM gcr.io/distroless/base:latest as release
COPY --from=build /build/executable /executable
ENTRYPOINT ["/executable"]
Then your build command looks like:
mkdir -p artifacts
DOCKER_BUILDKIT=1 docker build -o type=local,dest=artifacts/. --target artifacts .
DOCKER_BUILDKIT=1 docker build -t myapp .
The important detail there is the --output or -o option that specifies what BuildKit should do with the resulting image. By default it is imported back into the local docker engine. But in this case, it writes the result out to the local filesystem.
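With recent Docker releases, passing a bare directory to -o implies the local exporter, so the extraction command can also be written more briefly:
DOCKER_BUILDKIT=1 docker build -o artifacts --target artifacts .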
You should run tests using a Docker Compose file so that it's easy to extract outputs non-interactively, and so that build logic is separated from the rest of the pipeline (not to mention it's faster).
You should use a docker-compose.yml with a bind mount of $PWD. Here's a snippet from such a Docker Compose file which will mirror the outputs to your host machine:
version: '3.7'
services:
  app:
    image: golang:1.13
    working_dir: /app
    command: go test ./... -coverprofile=cover.out
    volumes:
      - type: bind
        source: .
        target: /app
The artifacts can now be saved as you normally would in your CI pipeline.
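For example, a minimal non-interactive invocation might look like this (assuming the file above is saved as docker-compose.yml and the service keeps the name app):
docker-compose run --rm app
ls cover.out   # the report now sits in the working directory, ready for the CI job to archive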
I've solved this for now by using the --target flag during the docker build:
docker build --target build-env -t myapp:build .
docker create --name myapp-build myapp:build
docker cp myapp-build:/build/coverage.xml .
docker cp myapp-build:/build/report.xml .
docker rm myapp-build
# proceed with normal build
docker build -t myapp .
However, the docker-compose suggestion looks cleaner to me.
There is a library written in Go. Its folder structure is:
lib/
  golang/
    go.mod
    lib.go
  example/
    golang/
      proto/
        protofile
      go.mod
      main.go
      Dockerfile
example is a folder that shows how to use this lib, so it has a main.go which can be built and run. In order to use the lib, the example/golang/go.mod is:
module golang
go 1.15
require (
    lib/golang v0.0.0
    other stuff...
)
replace lib/golang => ../../golang
Now I want to pack the runnable example into a Docker image; the Dockerfile is:
FROM golang:1.15
WORKDIR /go/src/app
COPY . .
RUN go env -w GO111MODULE=auto
RUN go env -w GOPROXY=https://goproxy.io,direct
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
Then I cd into example/golang and run docker build -t example .; the error log is:
open /go/golang/go.mod: no such file or directory
It seems it cannot access the lib/golang/go.mod file. I then cd into the lib folder and run docker build -t server -f example/golang/Dockerfile ., but this breaks the import in main.go:
import "golang/proto"
The error log is:
golang/proto: unrecognized import path "golang/proto"
How should I fix this to build the Docker image?
==========================================
After I spent some time reading the Docker docs, here is a summary of this problem:
The docker build command takes a PATH argument, which is the dot . at the end. It controls the build context, i.e. which files docker build can access. So the cause of the first error is that the pwd is lib/example/golang and the build context is ., so docker build cannot access the parent directories of lib/example/golang, which main.go needs because it imports the lib.
The command should therefore be docker build -t example ../../ However, docker build looks for the Dockerfile only at the root of the context path, so use -f to tell it where the Dockerfile is located: docker build -t example -f ./Dockerfile ../../
If the pwd is lib/, the command is docker build -t server -f example/golang/Dockerfile .
In short:
If the Dockerfile is not located at the project root, use -f together with the PATH argument of docker build to give it access to all the files it needs. If using Go modules, make sure the PATH contains a go.mod file.
If main.go is located in a subfolder, make sure the WORKDIR in the Dockerfile matches that subfolder after COPYing everything needed, or else go build or go install will fail to compile.
Your Dockerfile is too nested. Since your go build relies on relative paths - paths that are in parent directories - a docker build . will not see any parent-directory files.
Move the Dockerfile to the top, e.g.
Dockerfile
lib/
and update to build the nested build directory:
FROM golang:1.15
WORKDIR /go/src/app
# just need to copy lib tree
COPY lib .
# working from here now
WORKDIR /go/src/app/example/golang
RUN go env -w GO111MODULE=auto
RUN go env -w GOPROXY=https://goproxy.io,direct
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
You can run go mod vendor before building with Docker; it collects all your module dependencies in a vendor folder.
I created a new file called build.sh with this inside:
#! /bin/sh
go mod vendor
docker build . -t myapp/myservice
rm -rf ./vendor
and run it whenever I need to build. By deleting vendor afterwards I can still use go run *.go with fresh versions of my libraries.
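A minimal sketch of what the Dockerfile side might look like once vendor/ exists (paths and the binary name are illustrative, not the author's exact setup):
FROM golang:1.15
WORKDIR /go/src/app
COPY . .
# vendor/ was produced by build.sh on the host, so no module downloads are needed here
RUN go build -mod=vendor -o /usr/local/bin/myservice .
CMD ["myservice"]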
I have created a Jenkins slave with a working directory. I then have a Maven Java application with a Dockerfile.
Dockerfile
#### BUILD image ###
FROM maven:3-jdk-11 as builder
RUN mkdir -p /build
WORKDIR /build
COPY pom.xml /build
RUN mvn -B dependency:resolve dependency:resolve-plugins
COPY /src /build/src
RUN mvn package
### RUN ###
FROM openjdk:11-slim as runtime
EXPOSE 8080
ENV APP_HOME /app
ENV JAVA_OPTS=""
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/config
RUN mkdir $APP_HOME/log
RUN mkdir $APP_HOME/src
VOLUME $APP_HOME/log
VOLUME $APP_HOME/config
WORKDIR $APP_HOME
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
COPY --from=builder /build/target/*.jar app.jar
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar" ]
The Jenkins slave sees this Dockerfile and executes it. It builds the target folder. In the target folder, I have JaCoCo to show code coverage.
The Jenkins slave workspace is unable to see that target folder to show code coverage on the Jenkins JaCoCo page. How can I make this visible? I tried volumes in my docker-compose file as shown below.
docker-compose.yml
version: '3'
services:
  my-application-service:
    image: 'my-application-image:v1.0'
    build: .
    container_name: my-application-container
    ports:
      - 8090:8090
    volumes:
      - /home/bob/Desktop/project/jenkins/workspace/My-Application:/app
However, I am not able to get the target folder into the workspace. How can I let the Jenkins slave see this JaCoCo code coverage?
It looks like you have mapped a volume from /home/bob/Desktop/project/jenkins/workspace/My-Application on the slave to /app in the docker image, so normally, if you copy your JaCoCo reports back to /app/..., they should appear on the Jenkins slave.
But the volume mapping only works for the first stage of your docker build process. As soon as you start your 2nd stage (after FROM openjdk:11-slim as runtime), there is no longer a mapped volume. So these lines copy the data into the target image's /app dir, not the original mapped directory:
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
I think you could make it work if you do this in the first stage:
RUN cp -r /build/target /app/target
In your second stage you probably only need the built jar file, so you only need:
COPY --from=builder /build/target/*.jar app.jar
An alternative could be to extract the data from the docker image itself after it has been built, but that needs some command line kung fu:
docker cp $(docker ps -a -f "label=image=$IMAGE" -f "label=build=$BUILDNR" -f "status=exited" --format "{{.ID}}"):/app/target .
See also https://medium.com/@cyril.delmas/how-to-keep-the-build-reports-with-a-docker-multi-stage-build-on-jenkins-507200f4007f
If I am understanding correctly, you want the contents of $APP_HOME/target to be visible from your Jenkins slave? If so, you may need to re-think your approach.
Jenkins is running a Docker build which builds your app and outputs your code coverage report under $APP_HOME/target; however, since you are building it in Docker, those files won't be available to the slave, only inside the image itself.
I'd consider running code coverage outside the Docker build, or you may have to do something hacky: run this build, then copy the files from the image to your ${WORKSPACE}, which I believe requires running the image in a container first, copying the files out, and destroying the container afterwards, roughly as sketched below.
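A rough sketch of that copy-out approach (the image tag, container name, and paths are illustrative):
docker build -t my-application-image:v1.0 .
docker create --name coverage-extract my-application-image:v1.0
docker cp coverage-extract:/app/target "${WORKSPACE}/target"
docker rm coverage-extract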
I have a script used in the preparation of a Docker image. I have this in the Dockerfile:
COPY my_script /
RUN bash -c "/my_script"
The my_script file contains secrets that I don't want in the image (it deletes itself when it finishes).
The problem is that the file remains in the image despite being deleted because the COPY is a separate layer. What I need is for both COPY and RUN to affect the same layer.
How can I COPY and RUN a script so that both actions affect the same layer?
Take a look at multi-stage builds:
Use multi-stage builds
With multi-stage builds, you use multiple FROM statements in your
Dockerfile. Each FROM instruction can use a different base, and each
of them begins a new stage of the build. You can selectively copy
artifacts from one stage to another, leaving behind everything you
don’t want in the final image. To show how this works, let’s adapt the
Dockerfile from the previous section to use multi-stage builds.
Dockerfile:
FROM golang:1.7.3
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]
As of 18.09 you can use docker build --secret to use secret information during the build process. The secrets are mounted into the build environment and aren't stored in the final image.
RUN --mount=type=secret,id=script,dst=/my_script \
    bash -c /my_script
$ DOCKER_BUILDKIT=1 docker build --secret id=script,src=my_script.sh .
The script wouldn't need to delete itself.
This can be handled by BuildKit:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=bind,target=/my_script,source=my_script,rw \
bash -c "/my_script"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image .
This also sounds like you are trying to inject secrets into the build, e.g. to pull from a private git repo. BuildKit also allows you to specify:
# syntax=docker/dockerfile:experimental
FROM ...
RUN --mount=type=secret,target=/creds,id=creds \
bash -c "/my_script -i /creds"
You would then build with:
DOCKER_BUILDKIT=1 docker build -t my_image --secret id=creds,src=./creds .
With both of the BuildKit options, the mount command never actually adds the file to your image. It only makes the file available as a bind mount during that single RUN step. As long as that RUN step does not output the secret to another file in your image, the secret is never injected in the image.
For more on the BuildKit experimental syntax, see: https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md
I guess you can use a workaround to do this:
Put my_script on a local HTTP server, for example one started with python -m SimpleHTTPServer; the file can then be fetched from http://http_server_ip:8000/my_script
Then, in the Dockerfile, use:
RUN curl http://http_server_ip:8000/my_script > /my_script && chmod +x /my_script && bash -c "/my_script"
This workaround ensures the file is added and deleted in the same layer; of course, you may need to install curl in the Dockerfile.
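For completeness, serving the script might look like this on the machine hosting it (the directory and port are illustrative):
cd /path/to/secret-scripts
python -m SimpleHTTPServer 8000   # Python 3: python3 -m http.server 8000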
I think RUN --mount=type=bind,source=my_script,target=/my_script bash /my_script in BuildKit can solve your problem.
First, prepare BuildKit
export DOCKER_CLI_EXPERIMENTAL=enabled
export DOCKER_BUILDKIT=1
docker buildx create --name mybuilder --driver docker-container
docker buildx use mybuilder
Then, write your Dockerfile.
# syntax = docker/dockerfile:experimental
FROM debian
## something
RUN --mount=type=bind,source=my_script,target=/my_script bash -c /my_script
The first line must be # syntax = docker/dockerfile:experimental because it's an experimental feature.
This method does not work in Play with Docker, but it works on my computer.
My computer is Ubuntu 20.04 with Docker 19.03.12.
Then, build it with
docker buildx build --platform linux/amd64 -t user/imgname -f ./Dockerfile . --push
I am using Docker to run unit tests, to generate Cobertura code coverage results, and then to generate HTML reports on these (using ReportGenerator). I then publish BOTH the code coverage results file and the HTML reports to VSTS DevOps.
Here are the commands I need to run:
# Generates coverage.cobertura.xml for use in the next step.
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=codecoveragereports/
# Generates HTML reports from coverage.cobertura.xml file.
dotnet reportgenerator -reports:app/test/MyApplication.UnitTests/codecoveragereports/coverage.cobertura.xml -targetdir:codecoveragereports -reportTypes:htmlInline
And now in dockerfile:
WORKDIR ./app/test/MyApplication.UnitTests/
RUN dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=codecoveragereports/
ENTRYPOINT ["/bin/bash", "-c", "dotnet reportgenerator -reports:codecoveragereports/*.xml -targetdir:codecoveragereports -reportTypes:htmlInline"]
And to build the image:
docker build -t myapplication.tests -f dockerfile --target tester .
And to run it:
docker run --rm -it -v $PWD/codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports myapplication.tests:latest
The problem:
The results file from dotnet test does get generated (I can verify this with RUN dir), but it seems to disappear when I specify a volume (using -v) on docker run.
Is it not possible to mount a volume over files which are generated in the image during docker build?
The life of your container can be very roughly represented like this:
docker build
  dotnet test --> codecoveragereports/
docker run -v
  docker mounts the volume $PWD/codecoveragereports onto codecoveragereports, which hides the codecoveragereports produced at build time
  your entrypoint script runs
So you need to output the dotnet test results to a temp folder, then copy them to your mount point at runtime (in the entrypoint).
dockerfile
COPY init.sh /
RUN dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=/temp/
ENTRYPOINT ["/bin/bash", "/init.sh"]
init.sh
#!/bin/bash
cp -r /temp/. /app/test/MyApplication.UnitTests/codecoveragereports/
exec dotnet reportgenerator -reports:codecoveragereports/*.xml -targetdir:codecoveragereports -reportTypes:htmlInline
How can I mount a volume to store my .m2 repo so I don't have to download the internet on every build?
My build is a Multi stage build:
FROM maven:3.5-jdk-8 as BUILD
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jdk
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
CMD ["/bin/bash", "-c", "find -type f -name '*.jar' | xargs java -jar"]
You can do that with Docker >18.09 and BuildKit. You need to enable BuildKit:
export DOCKER_BUILDKIT=1
Then you need to enable the experimental Dockerfile frontend features by adding this as the first line of the Dockerfile:
# syntax=docker/dockerfile:experimental
Afterwards you can call the RUN command with cache mount. Cache mounts stay persistent during builds:
RUN --mount=type=cache,target=/root/.m2 \
mvn --batch-mode -f /usr/src/app/pom.xml clean package
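The build command itself stays the same once BuildKit is enabled, for example (the image tag is illustrative):
DOCKER_BUILDKIT=1 docker build -t myapp .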
Although the answer from @Marek Obuchowicz is still valid, here is a small update.
First add this line to Dockerfile:
# syntax=docker/dockerfile:1
You can set the DOCKER_BUILDKIT inline like this:
DOCKER_BUILDKIT=1 docker build -t mytag .
I would also suggest splitting the dependency resolution and packaging phases, so you can take full advantage of Docker layer caching (if nothing changes in pom.xml it will use the cached layer with already downloaded dependencies). The full Dockerfile could look like this:
# syntax=docker/dockerfile:1
FROM maven:3.6.3-openjdk-17 AS MAVEN_BUILD
COPY ./pom.xml ./pom.xml
RUN --mount=type=cache,target=/root/.m2 mvn dependency:go-offline -B
COPY ./src ./src
RUN --mount=type=cache,target=/root/.m2 mvn package
FROM openjdk:17-slim-buster
EXPOSE 8080
COPY --from=MAVEN_BUILD /target/myapp-*.jar /app.jar
ENTRYPOINT ["java","-jar","/app.jar","-Xms512M","-Xmx2G","-Djava.security.egd=file:/dev/./urandom"]