Docker CLI: use older logging? - docker

I just installed Docker Desktop on my Windows box, but it uses the new output style. I'd like to switch back to the old style, but I'm having trouble finding the exact command or config setting to change.
What I have
docker build .
[+] Building 0.8s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 32B 0.0s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/php:7.4.12-fpm-buster 0.5s
=> [1/6] FROM docker.io/library/php:7.4.12-fpm-buster@sha256:07db4f537d7ea591cd9cecda712aed03ac1aaba8f243961c396 0.0s
=> CACHED [2/6] RUN apt-get update && apt-get upgrade -y && apt-get install git zip -y 0.0s
=> CACHED [3/6] WORKDIR /var/www 0.0s
=> CACHED [4/6] RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename= 0.0s
=> CACHED [5/6] RUN composer --version 0.0s
=> CACHED [6/6] RUN composer require google/cloud google/auth phpseclib/phpseclib 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.0s
=> => writing image sha256:ee8e9007493a15d9ba26d4cf46cdbc7c618a9ab949c7ff9c5e5e2ce717f039d5 0.0s
What I want
docker build .
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM php:7.4.12-fpm-buster
---> 15d55c4fd75d
Step 2/7 : RUN apt-get update && apt-get upgrade -y && apt-get install git zip -y
---> Running in 6d719912d1e1
...

The new logging style comes from BuildKit.
You can disable it in the Docker Desktop GUI:
select the Docker Engine tab
set the following in the engine's JSON configuration:
{
  "features": {
    "buildkit": false
  }
}
Then, if you want the BuildKit logging again for a single build, you can run with DOCKER_BUILDKIT=1. Conversely, I believe you can run with DOCKER_BUILDKIT=0 to selectively disable it for one build, but I haven't tested that yet.
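For a per-build switch instead of a daemon-wide one, the documented DOCKER_BUILDKIT environment variable can be set on the command line (the PowerShell form applies on Windows, where this question originated):

# bash/sh: build once with the legacy output
DOCKER_BUILDKIT=0 docker build .

# bash/sh: build once with BuildKit output
DOCKER_BUILDKIT=1 docker build .

# PowerShell equivalent of the first command
$env:DOCKER_BUILDKIT=0; docker build .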
Of course, be aware that you'll lose out on the features that BuildKit adds to Docker:
Docker images created with BuildKit can be pushed to Docker Hub just like images created with the legacy builder
a Dockerfile that works with the legacy builder will also work with a BuildKit build
the new --secret command-line option lets the user pass secret information when building an image without storing it in the final image (see the sketch below)
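A minimal sketch of that --secret flow (the id mysecret and the file mysecret.txt are placeholders, not from the original answer):

# syntax=docker/dockerfile:1
FROM alpine
# The secret is mounted at /run/secrets/<id> for this RUN step only;
# it never ends up in an image layer
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

built with:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .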

Related

Dockerfile cannot build: conda env create

Hi there, I'm new to Docker and Dockerfiles in general.
However, I need to create one in order to load an application on a server using WDL. With that said, there are a few important aspects of this Dockerfile:
it requires creating a Conda environment
in there I have to install Snakemake (through Mamba)
finally, I need to git clone a repository and follow the steps to generate an executable for the application, later invoked by Snakemake
Luckily, it seems most of the pieces are already on Docker Hub; correct me if I'm wrong based on the script (see below)
1 # getting ubuntu base image & anaconda3 loaded
2 FROM ubuntu:latest
3 FROM continuumio/anaconda3:2021.05
4 FROM condaforge/mambaforge:latest
5 FROM snakemake/snakemake:stable
6
7 FROM node:alpine
8 RUN apk add --no-cache git
9 RUN apk add --no-cache openssh
10
11 MAINTAINER Name <email>
12
13 WORKDIR /home/xxx/Desktop/Pangenie
14
15 ## ACTUAL PanGenIe INSTALLATION
16 RUN git clone https://github.com/eblerjana/pangenie.git /home/xxx/Desktop/Pangenie
17 # create the environment
18 RUN conda env create -f environment.yml
19 # build the executable
20 RUN conda activate pangenie
21 RUN mkdir build; cd build; cmake .. ; make
First, I think that loading Mamba and Snakemake as well would allow me to simply launch the application, as the tools are already set up by the Dockerfile. Then, I would ideally like to build the executable from the repository, but I get an error at line 18 when I try to create the Conda environment. This is what I get:
[+] Building 1.7s (10/10) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 708B 0.1s
=> [internal] load .dockerignore 0.1s
=> => transferring context: 2B 0.1s
=> [internal] load metadata for docker.io/library/node:alpine 1.4s
=> [auth] library/node:pull token for registry-1.docker.io 0.0s
=> [stage-4 1/6] FROM docker.io/library/node:alpine@sha256:1a04e2ec39cc0c3a9657c1d6f8291ea2f5ccadf6ef4521dec946e522833e87ea 0.0s
=> CACHED [stage-4 2/6] RUN apk add --no-cache git 0.0s
=> CACHED [stage-4 3/6] RUN apk add --no-cache openssh 0.0s
=> CACHED [stage-4 4/6] WORKDIR /home/mat/Desktop/Pangenie 0.0s
=> CACHED [stage-4 5/6] RUN git clone https://github.com/eblerjana/pangenie.git /home/mat/Desktop/Pangenie 0.0s
=> ERROR [stage-4 6/6] RUN conda env create -f environment.yml 0.1s
[stage-4 6/6] RUN conda env create -f environment.yml:
#10 0.125 /bin/sh: conda: not found
executor failed running [/bin/sh -c conda env create -f environment.yml]: exit code: 127
Now, I'm not really experienced, as I said, and I spent some time looking for a solution and tried different things, but nothing worked out... if anyone has an idea or even suggestions on how to fix this Dockerfile, please let me know.
Thanks in advance!
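The error is explained by the Dockerfile structure: each FROM starts a new build stage, and only the final stage (node:alpine, which does not ship conda) runs the later steps, which is why the log shows only [stage-4 ...] steps and /bin/sh: conda: not found. Note also that RUN conda activate cannot work in a Dockerfile, because each RUN step starts a fresh shell. A minimal single-stage sketch, assuming the condaforge/mambaforge base and that environment.yml names its environment pangenie (as the original conda activate pangenie suggests):

# One stage only: a base image that actually ships conda/mamba
FROM condaforge/mambaforge:latest

# Toolchain needed to compile the executable
RUN apt-get update && apt-get install -y git cmake build-essential

# Fetch the sources and create the Conda environment from the repository
RUN git clone https://github.com/eblerjana/pangenie.git /opt/pangenie
WORKDIR /opt/pangenie
RUN conda env create -f environment.yml

# 'conda activate' does not persist across RUN steps; use 'conda run'
# to execute the build inside the environment instead
RUN conda run -n pangenie bash -c "mkdir build && cd build && cmake .. && make"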

Removed Docker image is reappearing again upon new build command

Scenario:
I made a working Dockerfile, and I want to test it from scratch. However, the remove command only removes the image temporarily, meaning that running the build command again makes it reappear as if it was never removed in the first place.
Example:
This is what my terminal looks like (screenshot not included here; the first two images it lists are irrelevant to this question):
The ***_seis image is removed using the docker rmi ***_seis command, and as a result, running docker images shows that the ***_seis image was deleted.
However, when I run the following build command:
docker build -f dockerfile -t ***_seis:latest .
It builds successfully, but gives this result: even though the image was removed seconds ago, the build took less than a minute, and the created date indicates that it was made 3 days ago.
Log:
This is what my build log looks like:
docker build -f dockerfile -t ***_seis:latest .
[+] Building 11.3s (14/14) FINISHED
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/jupyter/base-notebook:latest 11.2s
=> [1/9] FROM docker.io/jupyter/base-notebook:latest@sha256:bc9ad73498f21ae716ba0e58d660063eae1677f6dd2bd5b669248fd0bf22dc79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 32B 0.0s
=> CACHED [2/9] RUN apt update && apt install --no-install-recommends -y software-properties-common git zip unzip wget v 0.0s
=> CACHED [3/9] RUN conda install -c conda-forge jupyter_contrib_nbextensions jupyter_nbextensions_configurator jupyter-resource-usage 0.0s
=> CACHED [4/9] RUN mkdir /home/jovyan/environment_ymls 0.0s
=> CACHED [5/9] COPY seis.yml /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [6/9] RUN conda env create -f /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [7/9] RUN python -m ipykernel install --name seis --display-name "seis" 0.0s
=> CACHED [8/9] WORKDIR /home/jovyan/***_seis 0.0s
=> CACHED [9/9] RUN chown -R jovyan:users /home/jovyan 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:16a8e90e47c0adc1c32f28e32ad17a8bc72795c3ca9fc39e792fa383793c3bdb 0.0s
=> => naming to docker.io/library/***_seis:latest
Troubleshooting: So far, I've tried different ways of removing the image, such as:
docker rmi <image_name>
docker image prune
and manually removing it from Docker Desktop.
I made sure that all containers are deleted by using:
docker ps -a
Expected result: If successful, it should rebuild from scratch, take longer than a minute to build, and the creation date should reflect the time it was actually created.
Question:
I would like to know what the issue is here, in terms of the image not being deleted completely. Why does it recreate the image from the past rather than just starting a new build?
Thank you in advance for your help.
It's building from the cache. Since no inputs appear to have changed, the build engine still has the steps from the previous build and reuses them, including the image creation date.
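A quick way to confirm this is to check the creation timestamp on the rebuilt image (a hypothetical check, not from the original post): a cached rebuild reports the old timestamp, while a genuinely fresh build reports the current time.

docker image inspect --format '{{ .Created }}' ***_seis:latest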
You can delete the build cache. But I'd recommend instead to run:
docker build --pull --no-cache -f dockerfile -t ***_seis:latest .
The --pull option pulls a new base image in case you have an old version cached locally, and the --no-cache option skips the cache for the various steps (in particular, a RUN step that fetches the latest external dependency).
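If you do want to clear BuildKit's build cache itself, the CLI has a dedicated command for it:

docker builder prune

By default this removes only dangling build cache; add --all to remove all of it.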

Unable to run intermediate image after updating [duplicate]

It's commonly known that you can run docker commit against the container of a failed build step to take a snapshot for debugging purposes. The container ID is gleaned from the Running in <ID> text. However, this text is not emitted during builds that happen with Docker's newer BuildKit (buildx) functionality.
I tried using --progress plain on the docker build command, but that hasn't shown me the container IDs. Also, I cannot run a new container from the image layer IDs (SHA hashes) that are printed.
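For reference, the legacy-builder workflow being described is roughly this (the container ID and image name here are placeholders):

# The legacy builder printed lines like:
#   Step 5/7 : RUN make
#    ---> Running in 1a2b3c4d5e6f
# Commit that container and start a shell in the snapshot:
docker commit 1a2b3c4d5e6f debug-snapshot
docker run -it --rm debug-snapshot /bin/sh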
Sample BuildKit Output
Using --progress plain, the output looks like this:
#1 [internal] load build definition from Dockerfile
#1 sha256:0e70418d547c3ccb20da7b100cf4f69564bddc416652e3e2b9b514e9a732b4aa
#1 transferring dockerfile: 32B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 sha256:396b2cfd81ff476a70ecda27bc5d781bd61c859b608537336f8092e155dd38bf
#2 transferring context: 34B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/node:latest
#3 sha256:1c0b05b884068c98f7acad32e4f7fd374eba1122b4adcbb1de68aa72d5a6046f
#3 DONE 0.0s
#4 [1/4] FROM docker.io/library/node
#4 sha256:5045d46e15358f34ea7fff145af304a1fa3a317561e9c609f4ae17c0bd3359df
#4 DONE 0.0s
#5 [internal] load build context
#5 sha256:49d7a085caed3f75e779f05887e53e0bba96452e3a719963993002a3638cb8a3
#5 transferring context: 35.17kB 0.0s done
#5 DONE 0.1s
#6 [2/4] ADD [trevortest/*, /app/]
#6 sha256:6da32965a50f6e13322efb20007ff49fb0546e2ff55799163b3b00d034a62c57
#6 CACHED
Question: How can I obtain the container IDs of the build process, during each step, specifically when using Docker BuildKit?
BuildKit works differently from the legacy docker build system. At the moment, there is no direct way to spawn a container from a step in the build and troubleshoot it.
To use BuildKit to its full potential, the best approach is to organize the build into smaller logical stages. Once the build is organized this way, you can specify that you want to stop at a certain stage by using --target. When a target is specified, Docker creates an image with the results of the build up to that stage, and you can use it to troubleshoot in the same way that was possible with the old build system.
Take this example. Here I have 4 stages out of which 2 are parallel stages:
FROM debian:9.11 AS stage-01
# Prepare for installation
RUN apt update && \
    apt upgrade -y

FROM stage-01 AS stage-02
# Install build tools
RUN apt install -y build-essential

FROM stage-02 AS stage-02a
RUN echo "Build 0.1" > /version.txt

FROM stage-02 AS stage-03
RUN apt install -y cmake gcc g++
Now you can use the --target option to tell Docker that you want to stop at stage-02, as follows:
$ docker build -f test-docker.Dockerfile -t test . --target stage-02
[+] Building 67.5s (7/7) FINISHED
=> [internal] load build definition from test-docker.Dockerfile 0.0s
=> => transferring dockerfile: 348B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:9.11 0.0s
=> [stage-01 1/2] FROM docker.io/library/debian:9.11 0.0s
=> CACHED [stage-01 2/2] RUN apt update && apt upgrade -y 0.0s
=> [stage-02 1/1] RUN apt install -y build-essential 64.7s
=> exporting to image 2.6s
=> => exporting layers 2.5s
=> => writing image sha256:ac36b95184b79b6cabeda3e4d7913768f6ed73527b76f025262d6e3b68c2a357 0.0s
=> => naming to docker.io/library/test 0.0s
Now you have the image with the name test and you can spawn a container to troubleshoot.
docker run -ti --rm --name troubleshoot test /bin/bash
root@bbdb0d2188c0:/# ls
Using multiple stages facilitates troubleshooting, and it also speeds up the build process, since parallel branches can be built on different instances. The readability of the build file is significantly improved as well.

How do I inspect the last good layer from a failed Docker build? [duplicate]


Get container ID from Docker buildkit for interactive debugging

