Docker simultaneous build on the same machine

I'm building a Docker image with node:10.20.1-buster-slim as the base image, and the Makefile runs fine: it creates the image and runs it.
However, I use this Docker build in CI pipelines in Jenkins, which is installed on a single machine. Sometimes the same image (node:10.20.1-buster-slim) is built by two processes simultaneously (e.g. two Jenkins users run two acceptance tests at the same time), so occasionally I end up getting this kind of error when trying to run the container after building the image:
failed to get digest sha256:3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: open /var/lib/docker/image/overlay2/imagedb/content/sha256/3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: no such file or directory
Makefile targets:
define down
	docker-compose down
endef

define prune
	rm -rf build/dist/*.js.map build/dist/*.js build/dist/*.css
	docker container prune -f
	docker volume prune -f
	docker image prune -f
endef

build/dist/main.js: $(SRC)
	$(call down)
	$(call prune)
	chmod -R a+rw build
	docker build . --target=prod -t web_app_release
	docker run --mount type=bind,source="$(PWD)"/web_app/build,target=/opt/web_app/build --rm web_app_release npm run build-prod
	docker run --mount type=bind,source="$(PWD)"/web_app/build,target=/opt/web_app/build --rm web_app_release npm run build-css
Error (sometimes reproducible):
Compiling web_app ...
docker-compose down
Removing network webapp_default
Network webapp_default not found.
rm -rf build/dist/*.js.map build/dist/*.js build/dist/*.css
docker container prune -f
Total reclaimed space: 0B
docker volume prune -f
Total reclaimed space: 0B
docker image prune -f
Total reclaimed space: 0B
chmod -R a+rw build
docker build . --target=prod -t web_app_release
Sending build context to Docker daemon 122.5MB
Step 1/10 : FROM node:10.20.1-buster-slim AS base
---> cfd03db45bdc
Step 2/10 : ENV WORKPATH /opt/web_app/
---> Using cache
---> e1e61766fc45
Step 3/10 : WORKDIR $WORKPATH
---> Using cache
---> e776ef1390bd
Step 4/10 : RUN chown -R node $WORKPATH
---> Using cache
---> 1a5bb0d392a3
Step 5/10 : USER node
---> Using cache
---> bc682380a352
Step 6/10 : COPY --chown=node:node package* $WORKPATH
---> Using cache
---> 71b1848cad1e
Step 7/10 : RUN npm install --only=prod --no-progress
---> Using cache
---> de4ef62ab81e
Step 8/10 : CMD ["npm", "start"]
---> Using cache
---> 319f69778fb6
Step 9/10 : FROM base AS prod
---> 319f69778fb6
Step 10/10 : COPY --chown=node:node . $WORKPATH
---> 3d3f18764dcb
Successfully built 3d3f18764dcb
failed to get digest sha256:3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: open /var/lib/docker/image/overlay2/imagedb/content/sha256/3d3f18764dcb8026243f228a3eace39dabf06677e4468edaa47aa6e888122fd7: no such file or directory
Makefile:91: recipe for target 'build/dist/main.js' failed
make[1]: *** [build/dist/main.js] Error 1
Makefile:94: recipe for target 'web_app' failed
make: *** [web_app] Error 2
[Pipeline] error
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: Build failed
Finished: FAILURE
How could I make it so that users can run tests concurrently without builds of the same image overlapping each other?
Could I use some tool included with Docker, or check manually with bash scripts or in the Makefile?
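A likely culprit (my reading of the logs, not confirmed): the unsynchronized docker image prune -f. One job's prune can delete intermediate layers that the other job's in-flight build has just written, leaving the freshly built image ID dangling exactly as in the error above. A minimal sketch of one fix, assuming flock(1) from util-linux is available on the Jenkins host; the lock path and the locked-build helper target are illustrative, not from the original Makefile:

# Sketch only: serialize the prune/build critical section across
# concurrent `make` invocations on the same host using flock(1).
LOCKFILE := /tmp/web_app_build.lock

build/dist/main.js: $(SRC)
	flock $(LOCKFILE) $(MAKE) locked-build

locked-build:
	$(call down)
	$(call prune)
	chmod -R a+rw build
	docker build . --target=prod -t web_app_release

An alternative that avoids locking altogether: tag each build uniquely (e.g. -t web_app_release:$(BUILD_TAG), using the BUILD_TAG environment variable Jenkins sets per build) and drop the global docker image prune -f from the build path, pruning on a schedule instead.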

Related

Kaniko multistage builds > Error building image: could not save file: copying file: write /kaniko/0/dev/full: no space left on device

I am using kaniko to build my Docker images and stumbled upon a strange issue when doing multistage builds.
When using COPY --from= in the second stage, kaniko seems to eat up all the disk space in /var/lib/docker (using docker-ce):
$ df -h
...
/disks/docker 2T 2T 53M 100% /var/lib/docker
...
The console output when doing a kaniko build looks like:
$ docker run -v $(pwd):/workspace gcr.io/kaniko-project/executor:latest --dockerfile=./Dockerfile --context=/workspace --no-push
INFO[0000] Resolved base name registry.access.redhat.com/ubi8/ubi-minimal:8.6-902 to dependencies
INFO[0000] Retrieving image manifest registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
INFO[0000] Retrieving image registry.access.redhat.com/ubi8/ubi-minimal:8.6-902 from registry registry.access.redhat.com
INFO[0000] Retrieving image manifest registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
INFO[0000] Returning cached image manifest
INFO[0000] Built cross stage deps: map[0:[.]]
INFO[0000] Retrieving image manifest registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
INFO[0000] Returning cached image manifest
INFO[0000] Executing 0 build triggers
INFO[0000] Building stage 'registry.access.redhat.com/ubi8/ubi-minimal:8.6-902' [idx: '0', base-idx: '-1']
INFO[0000] Unpacking rootfs as cmd RUN echo "Hello stage 1" && touch A_FILE_PATH requires it.
INFO[0003] ARG USER=nobody
INFO[0003] ARG A_FILE_PATH=/usr/bin/a_file
INFO[0003] RUN echo "Hello stage 1" && touch A_FILE_PATH
INFO[0003] Initializing snapshotter ...
INFO[0003] Taking snapshot of full filesystem...
INFO[0004] Cmd: /bin/sh
INFO[0004] Args: [-c echo "Hello stage 1" && touch A_FILE_PATH]
INFO[0004] Running: [/bin/sh -c echo "Hello stage 1" && touch A_FILE_PATH]
Hello stage 1
INFO[0004] Taking snapshot of full filesystem...
INFO[0004] Saving file . for later use
error building image: could not save file: copying file: write /kaniko/0/dev/full: no space left on device
This does not happen when I build the image with docker-ce:
$ docker build - < Dockerfile
Sending build context to Docker daemon 2.048kB
Step 1/9 : FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902 AS dependencies
8.6-902: Pulling from ubi8/ubi-minimal
a96e4e55e78a: Pull complete
67d8ef478732: Pull complete
Digest: sha256:6e79406e33049907e875cb65a31ee2f0575f47afa0f06e3a2a9316b01ee379eb
Status: Downloaded newer image for registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
---> c9882b8114e3
Step 2/9 : ARG USER=nobody
---> Running in 1aa898089bf3
Removing intermediate container 1aa898089bf3
---> da20079ed534
Step 3/9 : ARG A_FILE_PATH=/usr/bin/a_file
---> Running in fd2f43ef6a26
Removing intermediate container fd2f43ef6a26
---> b96ec1468dbd
Step 4/9 : RUN echo "Hello stage 1" && touch A_FILE_PATH
---> Running in 0b98f322dda8
Hello stage 1
Removing intermediate container 0b98f322dda8
---> 2c74725b6be9
Step 5/9 : FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
---> c9882b8114e3
Step 6/9 : ARG USER=nobody
---> Using cache
---> da20079ed534
Step 7/9 : COPY --from=dependencies ${A_FILE_PATH} ${A_FILE_PATH}
---> 2220a4d5ab88
Step 8/9 : RUN echo "Hello stage 2"
---> Running in 0efdca439c1e
Hello stage 2
Removing intermediate container 0efdca439c1e
---> aabbf5dabd6c
Step 9/9 : USER nobody
---> Running in 67b47ba45a95
Removing intermediate container 67b47ba45a95
---> b4ca6fa04f00
Successfully built b4ca6fa04f00
The Dockerfile to reproduce this looks like:
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902 AS dependencies
ARG USER=nobody
ARG A_FILE_PATH=/usr/bin/a_file
RUN echo "Hello stage 1" \
&& touch A_FILE_PATH
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6-902
ARG USER=nobody
COPY --from=dependencies ${A_FILE_PATH} ${A_FILE_PATH}
RUN echo "Hello stage 2"
USER nobody
The fact that kaniko makes Docker eat up all the disk space causes other builds and containers on the same machine to fail, which is a pretty severe side effect.
I also posted my findings in https://github.com/GoogleContainerTools/kaniko/issues/2203, but there seems to be no activity in the project to analyze this.
I had the same problem with gcr.io/kaniko-project/executor:v1.9.0-debug within a GitLab CI pipeline (using a multistage Dockerfile).
I tried with gcr.io/kaniko-project/executor:v1.7.0-debug and the build was successful.
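For reference, pinning the working executor version in a GitLab CI job might look like the sketch below; the job name and flags are illustrative assumptions, and --no-push mirrors the reproduction above rather than a real registry push:

build-image:
  image:
    name: gcr.io/kaniko-project/executor:v1.7.0-debug   # v1.9.0-debug exhibited the disk-space issue
    entrypoint: [""]
  script:
    - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --no-push

One observation from the logs above (my inference, not confirmed by the project): ARG is stage-scoped in a Dockerfile, and A_FILE_PATH is never re-declared in the second stage, so ${A_FILE_PATH} expands to an empty string there. Kaniko's "Built cross stage deps: map[0:[.]]" and "Saving file . for later use" lines suggest it resolves the COPY source to "." and tries to save the entire root filesystem, which would explain the disk usage.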

Shell script failing on mkdir in Dockerfile

I have the following instructions in a Dockerfile:
ENV HOME=/home
RUN mkdir $HOME
The build fails with this error:
mkdir: missing operand
Try 'mkdir --help' for more information.
Build step 'Docker Build and Publish' marked build as failure
I just tried with a Dockerfile like this:
FROM scratch
ENV HOME=/home
RUN mkdir $HOME
and got this:
$ docker build .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM scratch
--->
Step 2/3 : ENV HOME=/home
---> Running in e9d03ee00aa7
Removing intermediate container e9d03ee00aa7
---> 16f6e2ba2f09
Step 3/3 : RUN mkdir $HOME
---> Running in 6847b9b34717
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown
$
Then I edited the file to look like this:
FROM mariadb
ENV HOME=/home
RUN mkdir $HOME
and got this:
$ docker build .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM mariadb
---> ea81af801379
Step 2/3 : ENV HOME=/home
---> Running in 7d903215da74
Removing intermediate container 7d903215da74
---> 77276b970083
Step 3/3 : RUN mkdir $HOME
---> Running in f120f966144f
mkdir: cannot create directory '/home': File exists
The command '/bin/sh -c mkdir $HOME' returned a non-zero code: 1
$
From the error message in the second build I can see that the $HOME variable expands correctly, so the variable itself is not the issue; it should have worked on your side as well.
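As an aside (my suggestion, not part of the original answer): if the goal is simply to guarantee the directory exists, mkdir -p succeeds even when the directory is already present in the base image:

FROM mariadb
ENV HOME=/home
# -p: create parents as needed; do not fail if the directory already exists
RUN mkdir -p $HOME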

Docker not able to find/run binary

I have what I believe is a pretty simple setup.
I build a binary file outside of Docker and then try to add it using this Dockerfile:
FROM alpine
COPY apps/dist/apps /bin/
RUN chmod +x /bin/apps
RUN ls -al /bin | grep apps
CMD /bin/apps
I think this should work: the binary on its own runs fine on my host machine, and I don't understand why it wouldn't in the Docker image.
Anyway, the output I get is this:
docker build -t apps -f app.Dockerfile . && docker run apps
Sending build context to Docker daemon 287.5MB
Step 1/5 : alpine
---> d05cf6536f67
Step 2/5 : COPY apps/dist/apps /bin/
---> Using cache
---> c54d6d57154e
Step 3/5 : RUN chmod +x /bin/apps
---> Using cache
---> aa7e6adb0981
Step 4/5 : RUN ls -al /bin | grep apps
---> Running in 868c5e235d68
-rwxr-xr-x 1 root root 68395166 Dec 20 13:35 apps
Removing intermediate container 868c5e235d68
---> f052c06269b0
Step 5/5 : CMD /bin/apps
---> Running in 056fd02733e1
Removing intermediate container 056fd02733e1
---> 331600154cbe
Successfully built 331600154cbe
Successfully tagged apps:latest
/bin/sh: /bin/apps: not found
Does this make sense, and am I just missing something obvious?
Your binary likely has dynamic links to libraries that don't exist inside the image's filesystem. You can check those dynamic links with ldd apps/dist/apps.
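A common cause with Alpine specifically (an assumption here, since the question doesn't say how the binary was built): Alpine ships musl libc rather than glibc, so a binary linked against glibc fails with exactly this misleading "not found" message because its interpreter (e.g. /lib64/ld-linux-x86-64.so.2) is missing from the image. A quick check and two possible fixes, sketched:

# On the build host: list the binary's dynamic dependencies
ldd apps/dist/apps

# Fix 1: produce a fully static binary (for a Go program, for example:
#   CGO_ENABLED=0 go build -o apps/dist/apps .)
# Fix 2: switch to a glibc-based base image, e.g.:
#   FROM debian:bookworm-slim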

Detecting username in Dockerfile

I need to run a command that creates my home folder within a Docker container. So, if my username on my Linux box is josecz, I want to use it from within a Dockerfile to run a command like:
RUN mkdir /home/${GetMyUsername}
and get the folder /home/josecz after the Dockerfile is processed.
The only way, as folks commented, is to use ARG; the following gives you a workable minimal example:
Dockerfile:
FROM alpine:3.14.0
ARG GetMyUsername
RUN echo ${GetMyUsername}
RUN mkdir -p /home/${GetMyUsername}
Execution:
cake#cake:~/3$ docker build --build-arg GetMyUsername=`whoami` -t abc:1 . --no-cache
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM alpine:3.14.0
---> d4ff818577bc
Step 2/4 : ARG GetMyUsername
---> Running in 4d87a0970dbd
Removing intermediate container 4d87a0970dbd
---> 8b67912b3788
Step 3/4 : RUN echo ${GetMyUsername}
---> Running in 2d68a7e93715
cake
Removing intermediate container 2d68a7e93715
---> 100428a1c526
Step 4/4 : RUN mkdir -p /home/${GetMyUsername}
---> Running in 938d10336daa
Removing intermediate container 938d10336daa
---> 939729b76f09
Successfully built 939729b76f09
Successfully tagged abc:1
Explanation:
When running docker build, you can use whoami to get the username of the user running the build and pass it in via the GetMyUsername build arg. Then, in the Dockerfile, ${GetMyUsername} expands to that value.

When build beego docker image with default docker file, show error: `godep: No Godeps found (or in any parent directory)`

I'm new to Go and Beego.
When I build a Docker image with Beego's default Dockerfile, it shows this error:
godep: No Godeps found (or in any parent directory)
The build output is:
Sending build context to Docker daemon 13.6MB
Step 1/9 : FROM library/golang
---> 138bd936fa29
Step 2/9 : RUN go get github.com/tools/godep
---> Running in 9003355d967f
---> bae9e4289f9b
Removing intermediate container 9003355d967f
Step 3/9 : RUN CGO_ENABLED=0 go install -a std
---> Running in 63d367bd487e
---> 3ce0b2d47c0a
Removing intermediate container 63d367bd487e
Step 4/9 : ENV APP_DIR $GOPATH/src/TestProject
---> Running in 53ddc4661a07
---> 528794352eb0
Removing intermediate container 53ddc4661a07
Step 5/9 : RUN mkdir -p $APP_DIR
---> Running in 37718f358f5c
---> ef9332ca086c
Removing intermediate container 37718f358f5c
Step 6/9 : ENTRYPOINT (cd $APP_DIR && ./TestProject)
---> Running in 059c06321914
---> 8538ea070871
Removing intermediate container 059c06321914
Step 7/9 : ADD . $APP_DIR
---> df129482c662
Step 8/9 : RUN cd $APP_DIR && CGO_ENABLED=0 godep go build -ldflags '-d -w -s'
---> Running in 50b29d1307b5
godep: No Godeps found (or in any parent directory)
The command '/bin/sh -c cd $APP_DIR && CGO_ENABLED=0 godep go build -ldflags '-d -w -s'' returned a non-zero code: 1
The solution is very simple: run godep save locally in your project, and you will get a new folder Godeps in the project; it contains the file Godeps.json. After this, run docker build . again and you will get your Docker image.
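Concretely, the steps might look like this sketch (the project path follows the APP_DIR from the build log above; the image tag is an arbitrary choice):

# Install godep and record the project's dependencies locally
go get github.com/tools/godep
cd $GOPATH/src/TestProject
godep save ./...

# Godeps/Godeps.json now exists, so the Dockerfile's `godep go build` step can find it
docker build -t testproject .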
