The Dockerfile contains several steps that build and create a Docker image, e.g. RUN dotnet restore, RUN dotnet build and RUN dotnet publish.
Is it possible to export the results/logging of each step to a separate file which can be displayed/formatted in several Jenkins stages?
You can also export the Jenkins build output to a log file using this plugin: https://github.com/cboylan/jenkins-log-console-log.
But if you want to view each step's log in your Jenkins console log, try it this way.
Create a job, build your Docker image from a bash script, and run that script from Jenkins.
docker build --compress --no-cache --build-arg DOCKER_ENV=staging --build-arg DOCKER_REPO="${docker_name}" -t "${docker_name}:${docker_tag}" .
If you run this command from Jenkins, or put it in a bash file, you will see each step's logs as shown below. If you are looking for something more, let me know.
Building in workspace /var/lib/jenkins/workspace/testlog
[testlog] $ /bin/sh -xe /tmp/jenkins8370164159405243093.sh
+ cd /opt/containers/
+ ./scripts/abode_docker.sh build alpine base
verifying docker name: alpine
Docker name verified
verify_config retval= 0
comparing props
LIST: alpine:3.7
Now each step will be displayed under
http://jenkins.domain.com/job/testlog/1/console
Step 1/5 : FROM alpine:3.7
Step 2/5 : COPY id_rsa /root/.ssh/id_rsa
---> 6645bd2838c9
Step 3/5 : COPY supervisord.conf /etc/supervisord.conf
---> 635e37d9503e
.....
Step 5/5 : ONBUILD RUN ls /usr/share/zoneinfo && cp /usr/share/zoneinfo/Europe/Brussels /etc/localtime && echo "US/Eastern" > /etc/timezone && date
---> Running in 7b8517d90264
Removing intermediate container 7b8517d90264
---> 3ead0f40b7b4
Successfully built 3ead0f40b7b4
Successfully tagged alpine:3.7
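If you also want each stage's output to end up in its own file (as the question asks), here is a minimal sketch of a wrapper script you could call from separate Jenkins stages. The stage name argument, the logs directory and the docker_name/docker_tag variables are assumptions, not part of the original setup:
#!/bin/bash
# Hypothetical wrapper: run the build for one stage and tee its output to a per-stage log file
stage_name="$1"                                  # e.g. "restore", "build", "publish"
log_dir="${WORKSPACE:-.}/logs"                   # WORKSPACE is set by Jenkins; fall back to the current dir
mkdir -p "${log_dir}"
docker build --compress --no-cache \
  --build-arg DOCKER_ENV=staging \
  --build-arg DOCKER_REPO="${docker_name}" \
  -t "${docker_name}:${docker_tag}" . 2>&1 | tee "${log_dir}/${stage_name}.log"
exit "${PIPESTATUS[0]}"                          # keep docker build's exit code despite the pipe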
I'm building a Docker image with node:10.20.1-buster-slim as the base image, and the Makefile runs fine: it creates the image and runs it.
However, this Docker build is also used in CI pipelines on Jenkins, which is installed on a single machine. Sometimes the same image (node:10.20.1-buster-slim) is built by two processes simultaneously (e.g. two Jenkins users run two acceptance tests at the same time), so occasionally I end up getting this kind of error when trying to run the container after building the image:
failed to get digest sha256:3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: open /var/lib/docker/image/overlay2/imagedb/content/sha256/3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: no such file or directory
Makefile targets:
define down
docker-compose down
endef
define prune
rm -rf build/dist/*.js.map build/dist/*.js build/dist/*.css
docker container prune -f
docker volume prune -f
docker image prune -f
endef
build/dist/main.js: $(SRC)
$(call down)
$(call prune)
chmod -R a+rw build
docker build . --target=prod -t web_app_release
docker run --mount type=bind,source="$(PWD)"/web_app/build,target=/opt/web_app/build --rm web_app_release npm run build-prod
docker run --mount type=bind,source="$(PWD)"/web_app/build,target=/opt/web_app/build --rm web_app_release npm run build-css
Error (reproducible sometimes):
Compiling web_app ...
docker-compose down
Removing network webapp_default
Network webapp_default not found.
rm -rf build/dist/*.js.map build/dist/*.js build/dist/*.css
docker container prune -f
Total reclaimed space: 0B
docker volume prune -f
Total reclaimed space: 0B
docker image prune -f
Total reclaimed space: 0B
chmod -R a+rw build
docker build . --target=prod -t web_app_release
Sending build context to Docker daemon 122.5MB
Step 1/10 : FROM node:10.20.1-buster-slim AS base
---> cfd03db45bdc
Step 2/10 : ENV WORKPATH /opt/web_app/
---> Using cache
---> e1e61766fc45
Step 3/10 : WORKDIR $WORKPATH
---> Using cache
---> e776ef1390bd
Step 4/10 : RUN chown -R node $WORKPATH
---> Using cache
---> 1a5bb0d392a3
Step 5/10 : USER node
---> Using cache
---> bc682380a352
Step 6/10 : COPY --chown=node:node package* $WORKPATH
---> Using cache
---> 71b1848cad1e
Step 7/10 : RUN npm install --only=prod --no-progress
---> Using cache
---> de4ef62ab81e
Step 8/10 : CMD ["npm", "start"]
---> Using cache
---> 319f69778fb6
Step 9/10 : FROM base AS prod
---> 319f69778fb6
Step 10/10 : COPY --chown=node:node . $WORKPATH
---> 3d3f18764dcb
Successfully built 3d3f18764dcb
failed to get digest sha256:3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: open /var/lib/docker/image/overlay2/imagedb/content/sha256/3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: no such file or directory
Makefile:91: recipe for target 'build/dist/main.js' failed
make[1]: *** [build/dist/main.js] Error 1
Makefile:94: recipe for target 'web_app' failed
make: *** [web_app] Error 2
[Pipeline] error
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Build failed
Finished: FAILURE
How could I make it so that users can run tests concurrently without builds of the same image overlapping each other?
Could I use some tool included with Docker, or check manually with bash scripts or in the Makefile?
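One possible approach, purely as a hedged sketch and not part of the original setup: serialize builds of the same image on the shared Jenkins host with flock, so two jobs never run docker build for the same tag at once. It assumes every job goes through this wrapper and that the lock file location is writable:
#!/bin/bash
# Hypothetical wrapper: take an exclusive per-tag lock before building
image="web_app_release"                       # tag used in the Makefile above
lock="/tmp/docker-build-${image}.lock"
(
  flock -x 200                                # blocks until no other build of this tag is running
  docker build . --target=prod -t "${image}"
) 200>"${lock}"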
I need to run a command that will create my home folder within a Docker container. So, if my username on my Linux box is josecz, then I could use it from within a Dockerfile to run a command like:
RUN mkdir /home/${GetMyUsername}
and get the folder /home/josecz after the Dockerfile is processed.
The only way, as folks have commented, is to use ARG; below is a minimal working example.
Dockerfile:
FROM alpine:3.14.0
ARG GetMyUsername
RUN echo ${GetMyUsername}
RUN mkdir -p /home/${GetMyUsername}
Execution:
cake#cake:~/3$ docker build --build-arg GetMyUsername=`whoami` -t abc:1 . --no-cache
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM alpine:3.14.0
---> d4ff818577bc
Step 2/4 : ARG GetMyUsername
---> Running in 4d87a0970dbd
Removing intermediate container 4d87a0970dbd
---> 8b67912b3788
Step 3/4 : RUN echo ${GetMyUsername}
---> Running in 2d68a7e93715
cake
Removing intermediate container 2d68a7e93715
---> 100428a1c526
Step 4/4 : RUN mkdir -p /home/${GetMyUsername}
---> Running in 938d10336daa
Removing intermediate container 938d10336daa
---> 939729b76f09
Successfully built 939729b76f09
Successfully tagged abc:1
Explanation:
When running docker build, you can use whoami to get the username of whoever runs the build and pass it as the build arg GetMyUsername. Then, in the Dockerfile, you can use ${GetMyUsername} to read the value.
I have a Dockerfile like this:
FROM python:2.7
RUN echo "Hello World"
When I build this the first time with docker build -f Dockerfile -t test ., or build it with the --no-cache option, I get this output:
Sending build context to Docker daemon 40.66MB
Step 1/2 : FROM python:2.7
---> 6c76e39e7cfe
Step 2/2 : RUN echo "Hello World"
---> Running in 5b5b88e5ebce
Hello World
Removing intermediate container 5b5b88e5ebce
---> a23687d914c2
Successfully built a23687d914c2
My echo command executes.
If I run it again without busting the cache, I get this:
Sending build context to Docker daemon 40.66MB
Step 1/2 : FROM python:2.7
---> 6c76e39e7cfe
Step 2/2 : RUN echo "Hello World"
---> Using cache
---> a23687d914c2
Successfully built a23687d914c2
Successfully tagged test-requirements:latest
The cache is used for Step 2/2, and Hello World is not executed. I could get it to execute again by using --no-cache. However, each time, even when I am using --no-cache, it uses a cached python:2.7 base image (although, unlike when the echo command is cached, it does not say ---> Using cache).
How do I bust the cache for the FROM python:2.7 line? I know I can do FROM python:latest, but that also seems to just cache whatever the latest version is the first time you build the Dockerfile.
If I understood the context correctly, you can use --pull with docker build to get the latest base image:
$ docker build -f Dockerfile.test -t test . --pull
So using both --no-cache and --pull will create a completely fresh image from the Dockerfile:
$ docker build -f Dockerfile.test -t test . --pull --no-cache
Issue - https://github.com/moby/moby/issues/4238
FROM pulls an image from the registry (DockerHub in this case).
After the image is pulled to produce your build, you will see it if you run docker images.
You may remove it by running docker rmi python:2.7.
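For completeness, a minimal sketch of that remove-and-rebuild sequence, using the image and build command from the question; note that rmi refuses to remove the base image while other local images still depend on it, in which case the --pull flag above is the simpler route:
# Hypothetical sequence: drop the locally cached base image, then rebuild so it is pulled fresh
docker rmi python:2.7
docker build -f Dockerfile -t test .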
Hi, I am new to Docker and trying to wrap my head around how to clone a private repo from GitHub, and found an interesting link, issues/6396.
I followed one of the posts and my Dockerfile looks like:
FROM python:2.7 as builder
# Deploy app's code
#RUN set -x
RUN mkdir /code
RUN mkdir /root/.ssh/
RUN ls -l /root/.ssh/
# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
RUN echo "${SSH_PRIVATE_KEY}"
# Set up root user SSH access for GitHub
ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth
# Test SSH access (this returns false even when successful, but prints results)
RUN git clone git@github.com:***********.git
COPY . /code
WORKDIR /code
ENV PYTHONPATH /datawarehouse_process
# Setup app's virtualenv
RUN set -x \
&& pip install tox \
&& tox -e luigi
WORKDIR /datawarehouse_process
# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*
#FROM python:2.7 as runtime
#COPY --from=builder /code /code
When I run docker build . from the correct location I get the error below. Any clue would be appreciated.
c:\Domain\Project\Docker-Images\datawarehouse_process>docker build .
Sending build context to Docker daemon 281.7MB
Step 1/15 : FROM python:2.7 as builder
---> 43c5f3ee0928
Step 2/15 : RUN mkdir /code
---> Running in 841fadc29641
Removing intermediate container 841fadc29641
---> 69fdbcd34f12
Step 3/15 : RUN mkdir /root/.ssh/
---> Running in 50199b0eb002
Removing intermediate container 50199b0eb002
---> 6dac8b120438
Step 4/15 : RUN ls -l /root/.ssh/
---> Running in e15040402b79
total 0
Removing intermediate container e15040402b79
---> 65519edac99a
Step 5/15 : ARG SSH_PRIVATE_KEY=C:\\Users\\MyUser\\.ssh\\id_rsa
---> Running in 10e0c92eed4f
Removing intermediate container 10e0c92eed4f
---> 707279c92614
Step 6/15 : RUN echo "${SSH_PRIVATE_KEY}"
---> Running in a9f75c224994
C:\Users\MyUser\.ssh\id_rsa
Removing intermediate container a9f75c224994
---> 96e0605d38a9
Step 7/15 : ADD ${SSH_PRIVATE_KEY} /root/.ssh/id_rsa
ADD failed: stat /var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
From the Documentation:
ADD obeys the following rules:
The path must be inside the context of the build; you cannot ADD
../something /something, because the first step of a docker build is
to send the context directory (and subdirectories) to the docker
daemon.
You are passing an absolute path to ADD, but you can see from the error:
/var/lib/docker/tmp/docker-builder142890167/C:\Users\MyUser\.ssh\id_rsa: no such file or directory
It is being looked for within the build context. Again from the documentation:
Traditionally, the Dockerfile is called Dockerfile and located in the
root of the context.
So, you need to place the RSA key somewhere in the directory tree whose root is the path that you specify in your docker build command. If you are entering docker build ., your ARG statement would change to something like:
ARG SSH_PRIVATE_KEY=./.ssh/id_rsa
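In other words, the key has to be copied into the build context before the build. A minimal sketch on Windows, assuming the Dockerfile lives in the current directory and using the paths from the question:
:: Hypothetical: copy the key into the build context first (and make sure .ssh is not listed in .dockerignore)
mkdir .ssh
copy C:\Users\MyUser\.ssh\id_rsa .ssh\id_rsa
docker build --build-arg SSH_PRIVATE_KEY=./.ssh/id_rsa .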
I am setting up an automated build from which I would like to produce 2 images.
The use case is building and distributing a library:
- one image with the dependencies which will be reused for building and testing on Travis
- one image to provide the built software libs
Basically, I need to be able to push an image of the container at a certain point (before building) and one later (after building and installing).
Is this possible? I did not find anything relevant in the Dockerfile docs.
You can do that using Docker multi-stage builds. Have two Dockerfiles:
Dockerfile
FROM alpine
RUN apk update && apk add gcc
RUN echo "This is a test" > /tmp/builtfile
Dockerfile-prod
FROM myapp:testing as source
FROM alpine
COPY --from=source /tmp/builtfile /tmp/builtfile
RUN cat /tmp/builtfile
build.sh
docker build -t myapp:testing .
docker build -t myapp:production -f Dockerfile-prod .
To explain: we build the image with the dependencies first. Then, in Dockerfile-prod, we include a FROM of our previously built image and copy the built file into the production image.
Truncated output from my build
vagrant#vagrant:~/so$ ./build.sh
Step 1/3 : FROM alpine
Step 2/3 : RUN apk update && apk add gcc
Step 3/3 : RUN echo "This is a test" > /tmp/builtfile
Successfully tagged myapp:testing
Step 1/4 : FROM myapp:testing as source
Step 2/4 : FROM alpine
Step 3/4 : COPY --from=source /tmp/builtfile /tmp/builtfile
Step 4/4 : RUN cat /tmp/builtfile
This is a test
Successfully tagged myapp:production
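Since the question also mentions distributing the results, both tags can then be pushed. A minimal sketch, assuming a registry/namespace you control (myregistry.example.com here is hypothetical):
# Hypothetical push of both images to your own registry
docker tag myapp:testing myregistry.example.com/myapp:testing
docker tag myapp:production myregistry.example.com/myapp:production
docker push myregistry.example.com/myapp:testing
docker push myregistry.example.com/myapp:production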
For more information refer to https://docs.docker.com/engine/userguide/eng-image/multistage-build/#name-your-build-stages