It's commonly known that you can run docker commit against a failed build to take a snapshot of the container for debugging purposes. The container ID is gleaned from the Running in <ID> text. However, this text is not emitted during builds that use Docker's newer BuildKit (buildx) functionality.
I tried using --progress plain on the docker build command, but that hasn't shown me the container IDs. Also, I cannot run a new container from the image layer IDs (the sha256 hashes) that are printed.
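For reference, this is roughly what that legacy workflow looks like (the step and container ID here are illustrative):
$ docker build .
Step 2/4 : RUN make
 ---> Running in 1234567890ab
...the step fails...
$ docker commit 1234567890ab debug-image
$ docker run -ti --rm debug-image /bin/sh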
Sample BuildKit Output
Using this command:
#1 [internal] load build definition from Dockerfile
#1 sha256:0e70418d547c3ccb20da7b100cf4f69564bddc416652e3e2b9b514e9a732b4aa
#1 transferring dockerfile: 32B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:396b2cfd81ff476a70ecda27bc5d781bd61c859b608537336f8092e155dd38bf
#2 transferring context: 34B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/node:latest
#3 sha256:1c0b05b884068c98f7acad32e4f7fd374eba1122b4adcbb1de68aa72d5a6046f
#3 DONE 0.0s
#4 [1/4] FROM docker.io/library/node
#4 sha256:5045d46e15358f34ea7fff145af304a1fa3a317561e9c609f4ae17c0bd3359df
#4 DONE 0.0s
#5 [internal] load build context
#5 sha256:49d7a085caed3f75e779f05887e53e0bba96452e3a719963993002a3638cb8a3
#5 transferring context: 35.17kB 0.0s done
#5 DONE 0.1s
#6 [2/4] ADD [trevortest/*, /app/]
#6 sha256:6da32965a50f6e13322efb20007ff49fb0546e2ff55799163b3b00d034a62c57
#6 CACHED
Question: How can I obtain the container IDs of each step of the build process, specifically when using Docker BuildKit?
BuildKit works differently from the legacy docker build system. At the moment, there is no direct way to spawn a container from an intermediate build step and troubleshoot it.
To get the most out of BuildKit, the best approach is to organize the build into smaller logical stages. Once the build is organized this way, you can tell Docker to stop at a certain stage by passing --target. When a target is specified, Docker creates an image with the results of the build up to that stage. You can use that image to troubleshoot in the same way that was possible with the old build system.
Take this example, which has 4 stages, 2 of which are parallel:
FROM debian:9.11 AS stage-01
# Prepare for installation
RUN apt update && \
    apt upgrade -y

FROM stage-01 AS stage-02
# Install build tools
RUN apt install -y build-essential

FROM stage-02 AS stage-02a
RUN echo "Build 0.1" > /version.txt

FROM stage-02 AS stage-03
RUN apt install -y cmake gcc g++
Now you can use the --target option to tell Docker to stop at stage-02, as follows:
$ docker build -f test-docker.Dockerfile -t test . --target stage-02
[+] Building 67.5s (7/7) FINISHED
=> [internal] load build definition from test-docker.Dockerfile 0.0s
=> => transferring dockerfile: 348B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:9.11 0.0s
=> [stage-01 1/2] FROM docker.io/library/debian:9.11 0.0s
=> CACHED [stage-01 2/2] RUN apt update && apt upgrade -y 0.0s
=> [stage-02 1/1] RUN apt install -y build-essential 64.7s
=> exporting to image 2.6s
=> => exporting layers 2.5s
=> => writing image sha256:ac36b95184b79b6cabeda3e4d7913768f6ed73527b76f025262d6e3b68c2a357 0.0s
=> => naming to docker.io/library/test 0.0s
Now you have an image named test, and you can spawn a container from it to troubleshoot:
docker run -ti --rm --name troubleshoot test /bin/bash
root@bbdb0d2188c0:/# ls
Using multiple stages facilitates troubleshooting, and it also speeds up the build, since the parallel branches can be built concurrently. On top of that, the readability of the build file is significantly improved.
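As a further illustration, you can stop at any of the other stages the same way. For example (using the same test-docker.Dockerfile as above; the tag name is arbitrary), building stage-02a and checking the file it writes:
$ docker build -f test-docker.Dockerfile -t test:stage-02a . --target stage-02a
$ docker run --rm test:stage-02a cat /version.txt
Build 0.1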
Related
Scenario:
I made a working Dockerfile, and I want to test it from scratch. However, the remove command only removes the image temporarily; running the build command again makes it reappear as if it had never been removed in the first place.
Example:
The ***_seis image is removed using the docker rmi ***_seis command, and running docker images afterwards confirms that the ***_seis image was deleted.
However, when I run the following build command:
docker build -f dockerfile -t ***_seis:latest .
It builds successfully, but even though the image was removed seconds ago, the build takes less than a minute and the created date indicates that the image was made 3 days ago.
Log:
This is what my build log looks like:
docker build -f dockerfile -t ***_seis:latest .
[+] Building 11.3s (14/14) FINISHED
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/jupyter/base-notebook:latest 11.2s
 => [1/9] FROM docker.io/jupyter/base-notebook:latest@sha256:bc9ad73498f21ae716ba0e58d660063eae1677f6dd2bd5b669248fd0bf22dc79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 32B 0.0s
=> CACHED [2/9] RUN apt update && apt install --no-install-recommends -y software-properties-common git zip unzip wget v 0.0s
=> CACHED [3/9] RUN conda install -c conda-forge jupyter_contrib_nbextensions jupyter_nbextensions_configurator jupyter-resource-usage 0.0s
=> CACHED [4/9] RUN mkdir /home/jovyan/environment_ymls 0.0s
=> CACHED [5/9] COPY seis.yml /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [6/9] RUN conda env create -f /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [7/9] RUN python -m ipykernel install --name seis--display-name "seis" 0.0s
=> CACHED [8/9] WORKDIR /home/jovyan/***_seis 0.0s
=> CACHED [9/9] RUN chown -R jovyan:users /home/jovyan 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:16a8e90e47c0adc1c32f28e32ad17a8bc72795c3ca9fc39e792fa383793c3bdb 0.0s
=> => naming to docker.io/library/***_seis:latest
Troubleshooting: So far, I've tried different ways of removing the image, such as:
docker rmi <image_name>
docker image prune
and manually removing from docker desktop.
I made sure that no containers were left over by checking:
docker ps -a
Expected result: If successful, it should rebuild from scratch, take longer than a minute to build, and have a creation date that reflects the time it was actually built.
Question:
I would like to know why the image is not being deleted completely. Why does Docker recreate the image from the past rather than starting a new build?
Thank you in advance for your help.
It's building from the cache. Since none of the inputs appear to have changed, the build engine reuses the steps from the previous build, including the image creation date.
You can delete the build cache, but I'd recommend instead running:
docker build --pull --no-cache -f dockerfile -t ***_seis:latest .
The --pull option pulls a newer base image if the version you have locally is stale, and the --no-cache option skips the cache for every step (in particular any RUN step that fetches the latest external dependency).
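If you do want to clear the build cache itself rather than bypass it, BuildKit's cache can be removed with docker builder prune:
$ docker builder prune        # remove dangling build cache
$ docker builder prune --all  # remove all build cache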
I have a simple application that uses github.com/go-sql-driver/mysql to connect to a MySQL database and execute simple queries. This all works fine on my local machine, but when I try to build it using docker build I get the following output:
[+] Building 4.1s (9/10)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 104B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/golang:onbuild 1.3s
=> [auth] library/golang:pull token for registry-1.docker.io 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 5.63kB 0.0s
 => CACHED [1/2] FROM docker.io/library/golang:onbuild@sha256:c0ec19d49014d604e4f62266afd490016b11ceec103f0b7ef44 0.0s
=> [2/2] COPY . /go/src/app 0.1s
=> [3/2] RUN go-wrapper download 2.0s
=> ERROR [4/2] RUN go-wrapper install 0.6s
------
> [4/2] RUN go-wrapper install:
#8 0.465 + exec go install -v
#8 0.535 github.com/joho/godotenv
#8 0.536 github.com/go-sql-driver/mysql
#8 0.581 # github.com/go-sql-driver/mysql
#8 0.581 ../github.com/go-sql-driver/mysql/driver.go:88: undefined: driver.Connector
#8 0.581 ../github.com/go-sql-driver/mysql/driver.go:99: undefined: driver.Connector
#8 0.581 ../github.com/go-sql-driver/mysql/nulltime.go:36: undefined: sql.NullTime
------
executor failed running [/bin/sh -c go-wrapper install]: exit code: 2
My Go version is up to date, and I am using the following Dockerfile:
FROM golang:onbuild
To my knowledge, this should go get all the packages it requires. I've also tried it this way:
FROM golang:onbuild
RUN go get "github.com/go-sql-driver/mysql"
This had the same output.
Note that in my code I import the package like this:
import _ "github.com/go-sql-driver/mysql"
I also use other packages from github, these seem to work fine.
The Docker community has generally been steering away from the Dockerfile ONBUILD directive, since it makes it very confusing what will actually happen in derived images (see the various comments around "is that really the entire Dockerfile?"). If you search Docker Hub for the golang:onbuild image you'll discover that this is Go 1.7 or 1.8; Go modules were introduced in Go 1.11.
You'll need to update to a newer base image, and that means writing out the Dockerfile steps by hand. For a typical Go application this would look like
FROM golang:1.18 AS build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY ./ ./
RUN go build -o myapp .
FROM ubuntu:20.04
COPY --from=build /app/myapp /usr/local/bin
CMD ["myapp"]
(In the final stage you may need to RUN apt-get update && apt-get install ... a MySQL client library or other tools.)
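A sketch of what that final stage could look like (the package name default-mysql-client is an assumption; substitute whatever your application actually needs):
FROM ubuntu:20.04
# Hypothetical example: install a MySQL client package in the runtime image
RUN apt-get update && \
    apt-get install -y --no-install-recommends default-mysql-client && \
    rm -rf /var/lib/apt/lists/*
COPY --from=build /app/myapp /usr/local/bin
CMD ["myapp"]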
I just installed Docker Desktop on my Windows box, but it uses the new output style. I'd like to switch back to the old style, but I'm having trouble finding the exact command or profile setting to change.
What I have
docker build .
[+] Building 0.8s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/php:7.4.12-fpm-buster 0.5s
 => [1/6] FROM docker.io/library/php:7.4.12-fpm-buster@sha256:07db4f537d7ea591cd9cecda712aed03ac1aaba8f243961c396 0.0s
=> CACHED [2/6] RUN apt-get update && apt-get upgrade -y && apt-get install git zip -y 0.0s
=> CACHED [3/6] WORKDIR /var/www 0.0s
=> CACHED [4/6] RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename= 0.0s
=> CACHED [5/6] RUN composer --version 0.0s
=> CACHED [6/6] RUN composer require google/cloud google/auth phpseclib/phpseclib 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:ee8e9007493a15d9ba26d4cf46cdbc7c618a9ab949c7ff9c5e5e2ce717f039d5 0.0s
What I want
docker build .
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM php:7.4.12-fpm-buster
---> 15d55c4fd75d
Step 2/7 : RUN apt-get update && apt-get upgrade -y && apt-get install git zip -y
---> Running in 6d719912d1e1
...
The new logging style comes from BuildKit.
You can disable this in the Docker Desktop GUI:
select the Docker Engine tab
set "features": { "buildkit": false } in the configuration JSON
Then if you want to use BuildKit again, you can run with DOCKER_BUILDKIT=1. I believe you can run with DOCKER_BUILDKIT=0 to selectively disable it, but I haven't tested that yet.
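For a one-off build that should look like this (untested, per the above):
$ DOCKER_BUILDKIT=0 docker build .   # classic builder and output
$ DOCKER_BUILDKIT=1 docker build .   # BuildKit builder and output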
Of course, be aware of what BuildKit adds to Docker before you turn it off:
Docker images created with BuildKit can be pushed to Docker Hub just like images created with the legacy builder
a Dockerfile that works with the legacy builder will also work with BuildKit builds
the new --secret command-line option allows you to pass secret information for building new images with a specified Dockerfile
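As a quick illustration of that last point (a minimal sketch; mysecret and secret.txt are placeholder names), the Dockerfile mounts the secret only for the RUN step that needs it:
# syntax=docker/dockerfile:1
FROM alpine
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
and the build (with BuildKit enabled) is invoked with:
$ docker build --secret id=mysecret,src=secret.txt .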