I have created a Jenkins slave with a working directory. I also have a Maven Java application with a Dockerfile.
Dockerfile
### BUILD image ###
FROM maven:3-jdk-11 as builder
RUN mkdir -p /build
WORKDIR /build
COPY pom.xml /build
RUN mvn -B dependency:resolve dependency:resolve-plugins
COPY /src /build/src
RUN mvn package
### RUN ###
FROM openjdk:11-slim as runtime
EXPOSE 8080
ENV APP_HOME /app
ENV JAVA_OPTS=""
RUN mkdir $APP_HOME
RUN mkdir $APP_HOME/config
RUN mkdir $APP_HOME/log
RUN mkdir $APP_HOME/src
VOLUME $APP_HOME/log
VOLUME $APP_HOME/config
WORKDIR $APP_HOME
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
COPY --from=builder /build/target/*.jar app.jar
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar app.jar" ]
The Jenkins slave sees this Dockerfile and executes it. The build produces the target folder, and in the target folder I have JaCoCo data to show code coverage.
The Jenkins slave workspace is unable to see that target folder, so it cannot show the coverage on the Jenkins JaCoCo page. How can I make it visible? I tried volumes in my docker-compose file, as shown below.
docker-compose.yml
version: '3'
services:
  my-application-service:
    image: 'my-application-image:v1.0'
    build: .
    container_name: my-application-container
    ports:
      - 8090:8090
    volumes:
      - /home/bob/Desktop/project/jenkins/workspace/My-Application:/app
However, I am not able to get the target folder into the workspace. How can I let the Jenkins slave see this JaCoCo code coverage?
It looks like you have mapped a volume from /home/bob/Desktop/project/jenkins/workspace/My-Application on the slave to /app in the Docker image, so normally, if you copy your JaCoCo reports back to /app/..., they should appear on the Jenkins slave.
But the volume mapping only applies to the first stage of your Docker build process. As soon as you start your second stage (after FROM openjdk:11-slim as runtime), there is no longer a mapped volume. So these lines copy the data to the target image's /app dir, not to the original mapped directory:
COPY --from=builder /build/src $APP_HOME/src
COPY --from=builder /build/target $APP_HOME/target
I think you could make it work if you do this in the first stage:
RUN cp -r /build/target /app/target
In your second stage you probably only need the built jar file, so you only need:
COPY --from=builder /build/target/*.jar app.jar
An alternative could be to extract the data from the Docker image itself after it has been built, but that needs some command-line kung fu:
docker cp $(docker ps -a -f "label=image=$IMAGE" -f "label=build=$BUILDNR" -f "status=exited" --format "{{.ID}}"):/app/target .
See also https://medium.com/@cyril.delmas/how-to-keep-the-build-reports-with-a-docker-multi-stage-build-on-jenkins-507200f4007f
If I am understanding correctly, you want the contents of $APP_HOME/target to be visible from your Jenkins slave? If so, you may need to rethink your approach.
Jenkins is running a Docker build which builds your app and writes your code coverage report under $APP_HOME/target. However, since you are building inside Docker, those files end up in the image itself, not on the slave.
I'd consider running code coverage outside the Docker build, or you may have to do something hacky: run this build, then copy the files from the image into your ${WORKSPACE}, which I believe requires creating a container from the image first, copying the files out, and destroying the container afterwards.
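A minimal sketch of that round-trip, assuming the image from the docker-compose file above (the container name coverage-tmp is illustrative):
docker build -t my-application-image:v1.0 .
docker create --name coverage-tmp my-application-image:v1.0
docker cp coverage-tmp:/app/target "${WORKSPACE}/target"
docker rm coverage-tmp
docker create only creates the container without starting it, which is enough for docker cp to read files out of it.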
Related
I have a git repository as my build context for a Docker image, and I want to execute a Gradle build, copy the jar file into the Docker image, and run it on entrypoint.
I know that we can copy the entire project into the image and then build and run it; however, I want to run the build first and copy only the executable jar into the image.
Is it possible to execute commands in the build context before copying the files?
Current way:
FROM azul/zulu-openjdk-alpine:8
COPY ./ .
RUN ./gradlew clean build
ENTRYPOINT ["/bin/sh","-c","java -jar oms-core-app-*-SNAPSHOT.jar"]
What I want:
FROM azul/zulu-openjdk-alpine:8
RUN ./gradlew clean build
COPY ./oms-core-app-*-SNAPSHOT.jar .
ENTRYPOINT ["/bin/sh","-c","java -jar oms-core-app-*-SNAPSHOT.jar"]
I cannot run ./gradlew clean build before copying the project, because the files don't exist in the image when the command runs. But I want to run the build against the source itself and then copy only the jar.
Any help would be highly appreciated, thank you.
Conceptually this isn't that far off from what a multi-stage build does. Docker can't create files or run commands on the host, but you can build your application in an intermediate image and then copy the results out of that image.
# First stage: build the jar file.
FROM azul/zulu-openjdk-alpine:8 AS build
# Avoid working in the root directory
WORKDIR /app
# Same as the "current way": copy the source and run the build
COPY ./ .
RUN ./gradlew clean build
# Give the jar file a fixed name
RUN mv oms-core-app-*-SNAPSHOT.jar oms-core-app.jar

# Second stage: actually run the jar file.
# (A JRE-only image would also work here.)
FROM azul/zulu-openjdk-alpine:8
WORKDIR /app
COPY --from=build /app/oms-core-app.jar .
CMD java -jar oms-core-app.jar
So the first stage runs the build from source; you don't need to run ./gradlew build on the host system. The second stage starts over from just the plain Java installation and COPYs in only the jar file. The second stage is what eventually runs, and if you docker push the image, it will not contain the source code.
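For completeness, building and running the final image would look something like this (the oms-core-app tag is illustrative, not from the question):
docker build -t oms-core-app .
docker run --rm oms-core-app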
I have written the following Dockerfile to containerize a Maven build:
FROM maven AS builder
WORKDIR /build
COPY . /build
RUN mvn -Dmaven.repo.local=.m2/repository clean test clover:instrument clover:clover clover:check findbugs:check package site
FROM amazoncorretto:8
COPY --from=builder /build/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
The second stage copies the jar file built by the first stage so that it can be used by its ENTRYPOINT, but I'd also like the Docker host to get a copy of the entire target directory from the container, because it holds other build artefacts like code coverage and analysis reports.
Specifically, I'd like the copy to happen as part of docker build -t fakefirehose . Is this possible?
I am trying to build jar files in a Docker Gradle stage, then have a Docker OpenJDK stage copy the jar files and run them.
But I am hitting this error:
Step 10/14 : COPY --from=builder /test-command/build/libs/*.jar /app/test-command.jar
ERROR: Service 'test-command' failed to build: COPY failed: no source files were specified
Dockerfile
FROM gradle:5.6.3-jdk8 as builder
COPY --chown=gradle:gradle . /test-command
ADD --chown=gradle . /app
WORKDIR /app
RUN gradle build
FROM ubuntu
FROM openjdk:8-alpine
WORKDIR /app
VOLUME ["/app"]
COPY --from=builder /test-command/build/libs/*.jar /app/test-command.jar
COPY --from=builder /test-command/docker/startup.sh /app/startup.sh
#RUN sh -c 'touch /app/test-command.jar'
RUN chmod +x /app/startup.sh
RUN chmod +x /app/test-command.jar
ENTRYPOINT ["/bin/sh", "/app/startup.sh"]
While the Docker Gradle stage is building the files, I can view them using the docker command below, and I can see the path and the jar files in the container.
But once the build is completed, I no longer see the container.
Could that be the reason why the Docker OpenJDK stage cannot find the files / source path?
docker exec -it name-of-container bash
In a multi-stage Docker build I execute unit tests, generate a coverage report, and build an executable in a build stage, and then copy the executable over to a run stage:
FROM golang:1.13 AS build-env
COPY . /build
WORKDIR /build
# execute tests
RUN go test ./... -coverprofile=cover.out
# execute build
RUN go build -o executable
FROM gcr.io/distroless/base:latest
COPY --from=build-env /build/executable /executable
ENTRYPOINT ["/executable"]
How can I extract the cover.out in a Jenkins environment to publish the coverage report?
With Docker 17.05, the only options I know of have been posted: refactoring to run the step with a docker run command and a volume mount, or, as you've done, docker cp. Each option requires building the specific build target with the --target option, which I suspect was your original question.
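For reference, a rough sketch of the run-command-with-volume-mount variant, using the build-env stage from the question (the myapp:build tag is illustrative):
docker build --target build-env -t myapp:build .
docker run --rm -v "$PWD:/out" myapp:build cp /build/cover.out /out/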
With BuildKit in 19.03, there's an option to build something other than an image. That would look like:
# import code
FROM golang:1.13 AS code
COPY . /build
WORKDIR /build
# execute tests
FROM code as test
RUN go test ./... -coverprofile=cover.out
# execute build
FROM code as build
RUN go build -o executable
# add a stage for artifacts you wish to extract from the build
FROM scratch as artifacts
COPY --from=test /build/coverage.xml /
COPY --from=test /build/report.xml /
# and finally the release stage
FROM gcr.io/distroless/base:latest as release
COPY --from=build /build/executable /executable
ENTRYPOINT ["/executable"]
Then your build command looks like:
mkdir -p artifacts
DOCKER_BUILDKIT=1 docker build -o type=local,dest=artifacts/. --target artifacts .
DOCKER_BUILDKIT=1 docker build -t myapp .
The important detail there is the --output or -o option that specifies what BuildKit should do with the resulting image. By default it is imported back into the local docker engine. But in this case, it writes the result out to the local filesystem.
You should run tests using a Docker Compose file so that it's easy to extract outputs non-interactively, and so that the build logic is separated from the rest of the pipeline (not to mention it's faster).
You should use a docker-compose.yml with a bind mount to $PWD. Here's a snippet from such a Docker Compose file which will mirror the outputs to your host machine:
version: '3.7'
services:
  app:
    image: golang:1.13
    # working_dir is needed so the tests run against the mounted source
    working_dir: /app
    command: go test ./... -coverprofile=cover.out
    volumes:
      - type: bind
        source: .
        target: /app
The artifacts can now be saved as you normally would in your CI pipeline.
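For example, assuming the service is named app as in the snippet above, the tests can be run non-interactively with:
docker-compose run --rm app
cover.out then appears in the working directory on the host via the bind mount.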
I've solved this for now by using the --target flag during the docker build:
docker build --target build-env -t myapp:build .
docker create --name myapp-build myapp:build
docker cp myapp-build:/build/coverage.xml .
docker cp myapp-build:/build/report.xml .
docker rm myapp-build
#proceed with normal build
docker build -t myapp .
However, the docker-compose suggestion looks cleaner to me.
How can I mount a volume to store my .m2 repo so I don't have to download the internet on every build?
My build is a Multi stage build:
FROM maven:3.5-jdk-8 as BUILD
COPY . /usr/src/app
RUN mvn --batch-mode -f /usr/src/app/pom.xml clean package
FROM openjdk:8-jdk
COPY --from=BUILD /usr/src/app/target /opt/target
WORKDIR /opt/target
CMD ["/bin/bash", "-c", "find -type f -name '*.jar' | xargs java -jar"]
You can do that with Docker >18.09 and BuildKit. You need to enable BuildKit:
export DOCKER_BUILDKIT=1
Then you need to enable experimental Dockerfile frontend features by adding this as the first line of your Dockerfile:
# syntax=docker/dockerfile:experimental
Afterwards you can call the RUN command with a cache mount. Cache mounts persist across builds:
RUN --mount=type=cache,target=/root/.m2 \
mvn --batch-mode -f /usr/src/app/pom.xml clean package
Although the answer from @Marek Obuchowicz is still valid, here is a small update.
First add this line to Dockerfile:
# syntax=docker/dockerfile:1
You can set the DOCKER_BUILDKIT inline like this:
DOCKER_BUILDKIT=1 docker build -t mytag .
I would also suggest splitting the dependency resolution and packaging phases, so you can take full advantage of Docker layer caching (if nothing changes in pom.xml, the cached layer with the already-downloaded dependencies is reused). The full Dockerfile could look like this:
# syntax=docker/dockerfile:1
FROM maven:3.6.3-openjdk-17 AS MAVEN_BUILD
COPY ./pom.xml ./pom.xml
RUN --mount=type=cache,target=/root/.m2 mvn dependency:go-offline -B
COPY ./src ./src
RUN --mount=type=cache,target=/root/.m2 mvn package
FROM openjdk:17-slim-buster
EXPOSE 8080
COPY --from=MAVEN_BUILD /target/myapp-*.jar /app.jar
ENTRYPOINT ["java","-jar","/app.jar","-Xms512M","-Xmx2G","-Djava.security.egd=file:/dev/./urandom"]