Run with volume for files generated during docker build - docker

I am using Docker to run unit tests, to generate Cobertura code coverage results, and then to generate HTML reports from them (using ReportGenerator). I then publish both the code coverage results file and the HTML reports to Azure DevOps (VSTS).
Here are the commands I need to run:
# Generates coverage.cobertura.xml for use in the next step.
dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=codecoveragereports/
# Generates HTML reports from coverage.cobertura.xml file.
dotnet reportgenerator -reports:app/test/MyApplication.UnitTests/codecoveragereports/coverage.cobertura.xml -targetdir:codecoveragereports -reportTypes:htmlInline
And now in dockerfile:
WORKDIR ./app/test/MyApplication.UnitTests/
RUN dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=codecoveragereports/
ENTRYPOINT ["/bin/bash", "-c", "dotnet reportgenerator -reports:codecoveragereports/*.xml -targetdir:codecoveragereports -reportTypes:htmlInline"]
And to build the image:
docker build -t myapplication.tests -f dockerfile --target tester .
And to run it:
docker run --rm -it -v $PWD/codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports myapplication.tests:latest
The problem:
The results file from dotnet test does get generated (I can verify this with RUN dir), but it seems to disappear when I specify a volume (using -v) on docker run.
Is it not possible to create a volume on files which are generated in the image during docker build?

The life of your container can be roughly represented like this:
docker build
dotnet test --> codecoveragereports/
docker run -v
Docker mounts the volume $PWD/codecoveragereports onto codecoveragereports, which hides the codecoveragereports generated at build time
your entrypoint script runs
So you need to write the dotnet test output to a temp folder, then copy it to your mount point at runtime (in the entrypoint).
dockerfile
COPY init.sh /
RUN dotnet test /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=/temp/
ENTRYPOINT ["/bin/bash", "/init.sh"]
init.sh
#!/bin/bash
# copy the build-time coverage output into the mounted directory
cp -r /temp/. /app/test/MyApplication.UnitTests/codecoveragereports/
exec dotnet reportgenerator -reports:codecoveragereports/*.xml -targetdir:codecoveragereports -reportTypes:htmlInline
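With that in place, the docker run command from the question stays the same; the coverage files are copied into the mounted directory before ReportGenerator runs, so they show up under $PWD/codecoveragereports on the host:
docker run --rm -it -v $PWD/codecoveragereports:/app/test/MyApplication.UnitTests/codecoveragereports myapplication.tests:latest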

Related

Why does the ENTRYPOINT log output but not the CMD or RUN in Dockerfile

In this Dockerfile I have an ENTRYPOINT that calls a script that simply logs an echo "testing". This output works locally when I build and run the Dockerfile. It also logs to CloudWatch when I use it in conjunction with a docker-compose file for AWS.
However, the RUN and CMD commands do not output anything to the console or CloudWatch. How do I see their output? I would expect at least some errors.
ENTRYPOINT bash -c "/migrate.sh"
WORKDIR /
RUN yarn
CMD ["yarn migration:run", "dist/src/main"]
I'm building with docker build -t test:test . and then running with docker run <imagename>
The RUN statement in the Dockerfile is only invoked when you build the image (at which point you should see the output of yarn in this case). When you docker run the container it just executes the ENTRYPOINT and/or CMD; in this case only the ENTRYPOINT's output appears, because a shell-form ENTRYPOINT causes the CMD to be ignored.
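If you need to see the build-time output again (for example when the layer is cached or BuildKit collapses the log), rebuilding without the cache and with plain progress output helps; this assumes a Docker version with BuildKit:
docker build --no-cache --progress=plain -t test:test .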

Dockerfile and Docker run -w / workdir

Let's take the sample Python Dockerfile as an example.
FROM python:3
WORKDIR /project
COPY . /project
and then the run command to run the tests in that container:
docker run --rm -v $(pwd):/project -w /project mydocker:1.0 pytest tests/
We are declaring the WORKDIR both in the Dockerfile and in the run command.
Am I right in saying
The WORKDIR in the Dockerfile is the directory in which the subsequent commands in the Dockerfile are run? But this has no impact when we run the docker run command?
Instead we need to pass in -w /project to have pytest run in the /project directory, or rather for pytest to look for the tests directory in /project.
My setup.cfg
[tool:pytest]
addopts =
    --junitxml=results/unit-tests.xml
In the example you give, you shouldn't need either the -v or -w option.
Various options in the Dockerfile give defaults that can be overridden at docker run time. CMD in the Dockerfile, for example, will be overridden by anything in a docker run command after the image name. (...and it's better to specify it in the Dockerfile than to have to manually specify it on every docker run invocation.)
Specifically to your question, WORKDIR affects the current directory for subsequent RUN and COPY commands, but it also specifies the default directory when the container runs; if you don't have a docker run -w option it will use that WORKDIR. Specifying -w to the same directory as the final image WORKDIR won't have an effect.
You also COPY the code into your image in the Dockerfile (which is good). You don't need a docker run -v option to overwrite that code at run time.
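Under those assumptions (the code is already copied into the image and its WORKDIR is /project), the run command can be reduced to:
docker run --rm mydocker:1.0 pytest tests/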
More specifically looking at pytest, it won't usually write things out to the filesystem. If you are using functionality like JUnit XML or code coverage reports, you can set it to write those out somewhere other than your application directory:
docker run --rm \
-v $PWD/reports:/reports \
mydocker:1.0 \
pytest --cov=myapp --cov-report=html:/reports/coverage.html tests

Extract file from multistage Docker build

In a multistage docker build I execute unit tests, generate a coverage report, and build an executable in a build stage, and then copy the executable over to a run stage:
FROM golang:1.13 AS build-env
COPY . /build
WORKDIR /build
# execute tests
RUN go test ./... -coverprofile=cover.out
# execute build
RUN go build -o executable
FROM gcr.io/distroless/base:latest
COPY --from=build-env /build/executable /executable
ENTRYPOINT ["/executable"]
How can I extract the cover.out in a Jenkins environment to publish the coverage report?
With Docker 17.05, the only options I know of have been posted: refactoring to run the step with a run command and volume mount, or as you've done, docker cp. Each option requires building the specific build target with the --target option, which I suspect was your original question.
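For reference, the volume-mount variant of that pre-BuildKit approach could look roughly like this (a sketch, assuming the build-env stage from your Dockerfile and a reports/ directory on the host):
docker build --target build-env -t myapp:build .
docker run --rm -v "$PWD/reports:/reports" myapp:build go test ./... -coverprofile=/reports/cover.out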
With BuildKit in 19.03, there's an option to build something other than an image. That would look like:
# import code
FROM golang:1.13 AS code
COPY . /build
WORKDIR /build
# execute tests
FROM code as test
RUN go test ./... -coverprofile=cover.out
# execute build
FROM code as build
RUN go build -o executable
# add a stage for artifacts you wish to extract from the build
FROM scratch as artifacts
COPY --from=test /build/coverage.xml /
COPY --from=test /build/report.xml /
# and finally the release stage
FROM gcr.io/distroless/base:latest as release
COPY --from=build /build/executable /executable
ENTRYPOINT ["/executable"]
Then your build command looks like:
mkdir -p artifacts
DOCKER_BUILDKIT=1 docker build -o type=local,dest=artifacts/. --target artifacts .
DOCKER_BUILDKIT=1 docker build -t myapp .
The important detail there is the --output or -o option that specifies what BuildKit should do with the resulting image. By default it is imported back into the local docker engine. But in this case, it writes the result out to the local filesystem.
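As a shorthand, -o <directory> is equivalent to --output type=local,dest=<directory>, so the first command can also be written as:
DOCKER_BUILDKIT=1 docker build -o artifacts --target artifacts .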
You should run tests using a Docker Compose file so that it's easy to non-interactively extract outputs, and so that build logic is separated from the rest of the pipeline (not to mention it's faster).
You should use a docker-compose.yml with a bind mount to $PWD. Here's a snippet from such a Docker Compose file which will mirror the outputs to your host machine:
version: '3.7'
services:
  app:
    image: golang:1.13
    working_dir: /app
    command: go test ./... -coverprofile=cover.out
    volumes:
      - type: bind
        source: .
        target: /app
The artifacts can now be saved as you normally would in your CI pipeline.
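For example (assuming the compose file above is saved as docker-compose.yml next to your Go module), a single command produces cover.out in the current directory on the host:
docker-compose run --rm app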
I've solved this for now by using the --target flag during the docker build:
docker build --target build-env -t myapp:build .
docker create --name myapp-build myapp:build
docker cp myapp-build:/build/coverage.xml .
docker cp myapp-build:/build/report.xml .
docker rm myapp-build
#proceed with normal build
docker build -t myapp .
However, the docker-compose suggestion looks cleaner to me.

Containerised Applications (Docker) and Jenkins Slave CICD

I have created a Jenkins slave with a working directory. I then have a Maven Java application with a Dockerfile.
Dockerfile
#### BUILD image ###
FROM maven:3-jdk-11 as builder
RUN mkdir -p /build
WORKDIR /build
COPY pom.xml /build
RUN mvn -B dependency:resolve dependency:resolve-plugins
COPY /src /build/src
RUN mvn package
### RUN ###
FROM openjdk:11-slim as runtime
EXPOSE 8080
ENV APP_HOME /app
ENV JAVA_OPTS=""
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/config
RUN mkdir $APP_HOME/log
RUN mkdir $APP_HOME/src
VOLUME $APP_HOME/log
VOLUME $APP_HOME/config
WORKDIR $APP_HOME
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
COPY --from=builder /build/target/*.jar app.jar
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar" ]
The Jenkins slave sees this Dockerfile and executes it. It builds the target folder, and in the target folder I have JaCoCo to show code coverage.
The Jenkins slave workspace is unable to see that target folder to show code coverage on the Jenkins JaCoCo page. How can I make it visible? I tried volumes in my docker-compose file as shown:
docker-compose.yml
version: '3'
services:
  my-application-service:
    image: 'my-application-image:v1.0'
    build: .
    container_name: my-application-container
    ports:
      - 8090:8090
    volumes:
      - /home/bob/Desktop/project/jenkins/workspace/My-Application:/app
However, I am not able to get the target folder into the workspace. How can I let the Jenkins slave see this JaCoCo code coverage?
It looks like you have a mapped volume from /home/bob/Desktop/project/jenkins/workspace/My-Application on the slave to /app in the docker image, so normally, if you copy your JaCoCo reports back to /app/..., they should appear on the Jenkins slave.
But the volume mapping only works for the first stage of your docker build process. As soon as you start your 2nd stage (after FROM openjdk:11-slim as runtime), there is no more mapped volume. So these lines copy the data to the target image's /app dir, not the original mapped directory.
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
I think you could make it work if you do this in the first stage:
RUN cp -r /build/target /app/target
In your second stage you probably only need the built jar file, so you only need:
COPY --from=builder /build/target/*.jar app.jar
An alternative could be to extract the data from the docker image itself after it has been built, but that needs some command-line kung fu:
docker cp $(docker ps -a -f "label=image=$IMAGE" -f "label=build=$BUILDNR" -f "status=exited" --format "{{.ID}}"):/app/target .
See also https://medium.com/@cyril.delmas/how-to-keep-the-build-reports-with-a-docker-multi-stage-build-on-jenkins-507200f4007f
If I am understanding correctly, you want the contents of $APP_HOME/target to be visible from your Jenkins slave? If so, you may need to re-think your approach.
Jenkins is running a Docker build which builds your app and outputs your code coverage report under $APP_HOME/target. However, since you are building it inside Docker, those files won't be available to the slave, only inside the image itself.
I'd consider running code coverage outside the Docker build, or you may have to do something hacky: run this build, create a container from the image, copy the files from it into your ${WORKSPACE}, and then destroy the container.
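A rough sketch of that copy-out approach, using the image tag from the docker-compose.yml above (the container name myapp-cov is just an example):
docker create --name myapp-cov my-application-image:v1.0
docker cp myapp-cov:/app/target "${WORKSPACE}/target"
docker rm myapp-cov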

how to copy file from docker container to host using shell script?

I have created an image for an automation project. When I run the container it executes all the tests inside the container and then generates the test report. I want to take this report out before deleting the container.
FROM maven:3.6.0-ibmjava-8-alpine
COPY ./pom.xml .
ADD ./src $HOME/src
COPY ./test-execution.sh /
RUN mvn clean install -Dmaven.test.skip=true -Dassembly.skipAssembly=true
ENTRYPOINT ["/test-execution.sh"]
CMD []
Below is the shell file:
#!/bin/bash
echo parameters you provided : "$#"
mvn test "$#"
cp api-automation:target/*.zip /Users/abcd/Desktop/docker_report
You will want to use the docker cp command. See here for more details.
However, it appears docker cp does not support standard Unix globbing patterns (i.e. the * in your src path).
So instead you will want to run:
docker cp api-automation:target/ /Users/abcd/Desktop/docker_report
However, then you will have to have a final step to remove all the non-zip files from your docker_report directory.
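That cleanup step could look something like this (a sketch, assuming the report directory above):
# keep only the zip reports in the copied-out directory
find /Users/abcd/Desktop/docker_report -type f ! -name '*.zip' -delete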
