How to save the logs for docker-compose up -d --build - docker

The logs are printed to the console when I run docker-compose up -d --build. How can I save them to a file so I can go through them later?
These are the logs:
=> => transferring dockerfile: 771B                                       0.0s
=> [internal] load .dockerignore                                           0.0s
=> => transferring context: 2B                                             0.0s
=> [internal] load metadata for docker.io/library/python:3.8.12-bullseye   0.0s
=> [auth] library/python:pull token for registry-1.docker.io               1.5s
=> [internal] load build context                                           0.0s
=> => transferring context: 847.24kB                                       2.0s
=> [1/7] FROM docker.io/library/python:3.8.12-bullseye@sha256:b39ab988ac2fa749273cf6fdeb89fb3635850824419dc61  1.9s
=> => resolve docker.io/library/python:3.8.12-bullseye@sha256:b39ab988ac2fa749273cf6fdeb89fb3635850824419dc61  0.0s
=> CACHED [2/7] WORKDIR /usr/src/app                                       0.0s

Docker already saves a container's runtime logs in a JSON file. To find its path, you can run:
docker inspect --format='{{.LogPath}}' [container id]
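For example, to follow that JSON log file directly (a sketch; my_container is a placeholder name, and reading the file typically requires root on Linux hosts):
sudo tail -f "$(docker inspect --format='{{.LogPath}}' my_container)"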
Or, if you just want the output of the docker logs command, do this:
docker-compose logs --no-color > logs.txt
For docker-compose up --build, did you try this:
docker-compose up --build &> logs.txt
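If you also want to watch the output live while it is saved, piping through tee is a common variation (logs.txt is just an example name; 2>&1 captures stderr as well):
docker-compose up --build 2>&1 | tee logs.txt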

This can be achieved using docker-compose build with the --progress flag, which accepts one of the following options: auto, tty, plain, or quiet (default "auto").
So to persist the logs in your terminal you would build with:
docker-compose build --progress plain
See --help:
$ docker-compose build --help
Usage:  docker compose build [OPTIONS] [SERVICE...]

Build or rebuild services

Options:
      --build-arg stringArray   Set build-time variables for services.
      --no-cache                Do not use cache when building the image
      --progress string         Set type of progress output (auto, tty, plain, quiet) (default "auto")
      --pull                    Always attempt to pull a newer version of the image.
  -q, --quiet                   Don't print anything to STDOUT
      --ssh string              Set SSH authentications used when building service images. (use
                                'default' for using your default SSH Agent)
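Combining this with the redirect from the previous answer gives a complete plain-text log file (a sketch; build.log is an arbitrary name, and 2>&1 captures the progress output, which typically goes to stderr):
docker-compose build --progress plain 2>&1 | tee build.log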

Related

unable to build VSC devcontainer for Haskell on OS X Ventura

I'm trying to use the official Haskell devcontainer for VS Code on OS X Ventura.
After moving the .devcontainer folder to the root of my project and clicking "Reopen in Container", this is what I get:
[2022-12-27T12:44:59.593Z] Dev Containers 0.266.1 in VS Code 1.74.2 (e8a3071ea4344d9d48ef8a4df2c097372b0c5161).
[2022-12-27T12:44:59.593Z] Start: Resolving Remote
[2022-12-27T12:44:59.610Z] Setting up container for folder or workspace: /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:44:59.613Z] Start: Check Docker is running
[2022-12-27T12:44:59.613Z] Start: Run: docker version --format {{.Server.APIVersion}}
[2022-12-27T12:44:59.765Z] Stop (152 ms): Run: docker version --format {{.Server.APIVersion}}
[2022-12-27T12:44:59.766Z] Server API version: 1.41
[2022-12-27T12:44:59.766Z] Stop (153 ms): Check Docker is running
[2022-12-27T12:44:59.766Z] Start: Run: docker volume ls -q
[2022-12-27T12:44:59.885Z] Stop (119 ms): Run: docker volume ls -q
[2022-12-27T12:44:59.898Z] Start: Run: docker ps -q -a --filter label=vsch.local.folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --filter label=vsch.quality=stable
[2022-12-27T12:45:00.021Z] Stop (123 ms): Run: docker ps -q -a --filter label=vsch.local.folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --filter label=vsch.quality=stable
[2022-12-27T12:45:00.022Z] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.150Z] Stop (128 ms): Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.150Z] Start: Run: /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /Users/samuelebonini/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --workspace-folder /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --workspace-mount-consistency cached --id-label devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --log-level debug --log-format json --config /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-27T12:45:00.311Z] (node:24087) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
[2022-12-27T12:45:00.311Z] (Use `Code Helper --trace-deprecation ...` to show where the warning was created)
[2022-12-27T12:45:00.312Z] #devcontainers/cli 0.25.2. Node.js v16.14.2. darwin 22.1.0 arm64.
[2022-12-27T12:45:00.312Z] Start: Run: docker buildx version
[2022-12-27T12:45:00.477Z] Stop (165 ms): Run: docker buildx version
[2022-12-27T12:45:00.477Z] github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554
[2022-12-27T12:45:00.477Z]
[2022-12-27T12:45:00.477Z] Start: Resolving Remote
[2022-12-27T12:45:00.479Z] Start: Run: git rev-parse --show-cdup
[2022-12-27T12:45:00.490Z] Stop (11 ms): Run: git rev-parse --show-cdup
[2022-12-27T12:45:00.491Z] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.604Z] Stop (113 ms): Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.606Z] Start: Run: docker inspect --type image debian:bullseye-slim
[2022-12-27T12:45:00.720Z] Stop (114 ms): Run: docker inspect --type image debian:bullseye-slim
[2022-12-27T12:45:02.021Z] local container features stored at: /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/node_modules/vscode-dev-containers/container-features
[2022-12-27T12:45:02.022Z] Start: Run: tar --no-same-owner -x -f -
[2022-12-27T12:45:02.032Z] Stop (10 ms): Run: tar --no-same-owner -x -f -
[2022-12-27T12:45:02.033Z] Start: Run: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f /var/folders/3_/gmcg3yrd7d3d7q4vfw0jyqkm0000gn/T/devcontainercli/container-features/0.25.2-1672145102020/Dockerfile-with-features -t vsc-1copy-35a5c23ba93fc94a4cdbfe5ffc09ab01 --target dev_containers_target_stage --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer
[2022-12-27T12:45:02.415Z] [+] Building 0.0s (0/0)
[2022-12-27T12:45:02.565Z] [+] Building 0.0s (0/0)
[2022-12-27T12:45:02.665Z] [+] Building 0.0s (1/2)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 0.0s
[2022-12-27T12:45:02.816Z]
[2022-12-27T12:45:02.816Z] [+] Building 0.2s (2/3)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.1s
[2022-12-27T12:45:02.967Z] [+] Building 0.3s (2/3)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.3s
[2022-12-27T12:45:03.118Z] [+] Building 0.5s (2/3)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.4s
[2022-12-27T12:45:03.183Z] [+] Building 0.6s (3/3) FINISHED
=> [internal] load build definition from Dockerfile-with-features 0.0s
[2022-12-27T12:45:03.183Z] => => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.5s
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to process "\"${templateOption:installZsh}\"": unsupported modifier (i) in substitution
[2022-12-27T12:45:03.200Z] Stop (1167 ms): Run: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f /var/folders/3_/gmcg3yrd7d3d7q4vfw0jyqkm0000gn/T/devcontainercli/container-features/0.25.2-1672145102020/Dockerfile-with-features -t vsc-1copy-35a5c23ba93fc94a4cdbfe5ffc09ab01 --target dev_containers_target_stage --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer
[2022-12-27T12:45:03.200Z] Error: Command failed: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f /var/folders/3_/gmcg3yrd7d3d7q4vfw0jyqkm0000gn/T/devcontainercli/container-features/0.25.2-1672145102020/Dockerfile-with-features -t vsc-1copy-35a5c23ba93fc94a4cdbfe5ffc09ab01 --target dev_containers_target_stage --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer
[2022-12-27T12:45:03.200Z] at Doe (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1894:1669)
[2022-12-27T12:45:03.200Z] at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
[2022-12-27T12:45:03.201Z] at async EF (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1893:1978)
[2022-12-27T12:45:03.201Z] at async uT (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1893:901)
[2022-12-27T12:45:03.201Z] at async Poe (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1899:2128)
[2022-12-27T12:45:03.201Z] at async Zf (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1899:3278)
[2022-12-27T12:45:03.201Z] at async aue (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:2020:15276)
[2022-12-27T12:45:03.201Z] at async oue (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:2020:15030)
[2022-12-27T12:45:03.202Z] Stop (3052 ms): Run: /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /Users/samuelebonini/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --workspace-folder /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --workspace-mount-consistency cached --id-label devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --log-level debug --log-format json --config /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-27T12:45:03.203Z] Exit code 1
[2022-12-27T12:45:03.205Z] Command failed: /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /Users/samuelebonini/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --workspace-folder /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --workspace-mount-consistency cached --id-label devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --log-level debug --log-format json --config /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-27T12:45:03.205Z] Exit code 1
I was following the instructions on how to run a devcontainer in VS Code, but now I'm stuck and have no clue how to proceed. How do I fix this error?
Looks like you are using a Community Template which is no longer maintained. Also, have you copy-pasted the template's source by hand? The failing "${templateOption:installZsh}" substitution suggests the template's option placeholders were never replaced, which is what happens when the files are copied manually. Can you instead use the Dev Containers: Add Dev Container Configuration Files... command to add the Haskell Template to your repo?
As the Haskell Template is no longer maintained, it will only receive security updates (if needed). A list of all Templates is here.
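For context, the error likely comes from an unsubstituted placeholder line in the template's Dockerfile, something like the following (reconstructed from the error message, not the template's actual source). BuildKit parses ${templateOption:installZsh} as a shell-style substitution and rejects "i" as a modifier, producing exactly the "unsupported modifier (i)" error above:
# Hypothetical line left behind when the template is copied by hand;
# applying the template through the command would replace it with a value
ARG INSTALL_ZSH="${templateOption:installZsh}"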

docker buildx fails to show result in image list

The following commands do not show the output ubuntu1 image:
docker buildx build -f 1.dockerfile -t ubuntu1 .
docker image ls | grep ubuntu1
# no output
1.dockerfile:
FROM ubuntu:latest
RUN echo "my ubuntu"
Plus, I cannot use the image in FROM statements in other Dockerfiles (both builds are on my local Windows box):
2.dockerfile:
FROM ubuntu1
RUN echo "my ubuntu 2"
docker buildx build -f 2.dockerfile -t ubuntu2 .
#error:
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.8s (4/4) FINISHED
=> [internal] load build definition from 2.dockerfile 0.0s
=> => transferring dockerfile: 84B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ubuntu1:latest 1.8s
=> [auth] library/ubuntu1:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/ubuntu1:latest:
------
2.dockerfile:1
--------------------
1 | >>> FROM ubuntu1:latest
2 | RUN echo "my ubuntu 2"
3 |
--------------------
error: failed to solve: ubuntu1:latest: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (did you mean ubuntu:latest?)
Any idea what's going on? How can I see what buildx prepared, and how can I reference one image in another Dockerfile?
OK, found a partial solution: I need to add --output type=docker, as per the docs. This puts the image in the image list. But I still cannot use it in the second Dockerfile.
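A hedged sketch of what usually resolves both halves of this, assuming the docker-container driver is active (--load is shorthand for --output type=docker, and "default" is the name of the built-in docker-driver builder):
# Load the first build result into the local image store
docker buildx build --load -f 1.dockerfile -t ubuntu1 .
# The docker-container driver resolves FROM lines against registries rather
# than the local store, so switch to the default docker driver before the
# second build (or push ubuntu1 to a registry the builder can reach)
docker buildx use default
docker buildx build --load -f 2.dockerfile -t ubuntu2 .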

Removed Docker image is reappearing again upon new build command

Scenario:
I made a working dockerfile, and I want to test it from scratch. However, the remove command only removes the image temporarily: running the build command again makes it reappear as if it was never removed in the first place.
Example:
This is what my terminal looks like:
*Note: the first two images are irrelevant to this question.
The ***_seis image is removed using the docker rmi ***_seis command, and as a result, running docker images shows that the ***_seis image was deleted.
However, when I run the following build command:
docker build -f dockerfile -t ***_seis:latest .
It builds successfully, but gives this result:
Even though the image was removed seconds ago, the build took less than a minute, and the created date indicates that it was made 3 days ago.
Log:
This is what my build log looks like:
docker build -f dockerfile -t ***_seis:latest .
[+] Building 11.3s (14/14) FINISHED
=> [internal] load build definition from dockerfile 0.0s
=> => transferring dockerfile: 38B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/jupyter/base-notebook:latest 11.2s
=> [1/9] FROM docker.io/jupyter/base-notebook:latest@sha256:bc9ad73498f21ae716ba0e58d660063eae1677f6dd2bd5b669248fd0bf22dc79 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 32B 0.0s
=> CACHED [2/9] RUN apt update && apt install --no-install-recommends -y software-properties-common git zip unzip wget v 0.0s
=> CACHED [3/9] RUN conda install -c conda-forge jupyter_contrib_nbextensions jupyter_nbextensions_configurator jupyter-resource-usage 0.0s
=> CACHED [4/9] RUN mkdir /home/jovyan/environment_ymls 0.0s
=> CACHED [5/9] COPY seis.yml /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [6/9] RUN conda env create -f /home/jovyan/environment_ymls/seis.yml 0.0s
=> CACHED [7/9] RUN python -m ipykernel install --name seis--display-name "seis" 0.0s
=> CACHED [8/9] WORKDIR /home/jovyan/***_seis 0.0s
=> CACHED [9/9] RUN chown -R jovyan:users /home/jovyan 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:16a8e90e47c0adc1c32f28e32ad17a8bc72795c3ca9fc39e792fa383793c3bdb 0.0s
=> => naming to docker.io/library/***_seis:latest
Troubleshooting: So far, I've tried different ways of removing the image, such as:
docker rmi <image_name>
docker image prune
and manually removing from docker desktop.
I made sure that all containers are deleted by using:
docker ps -a
Expected result: If successful, it should rebuild from scratch, take longer than a minute to build, and the creation date should reflect the time it was actually created.
Question:
I would like to know why the image is not being deleted completely. Why does it recreate the image from the past rather than just starting a new build?
Thank you in advance for your help.
It's building from the cache. Since no inputs appear to have changed from the build engine's perspective, and it still has the steps from the previous build, those steps are reused, including the image creation date.
You can delete the build cache, but I'd recommend instead running:
docker build --pull --no-cache -f dockerfile -t ***_seis:latest .
The --pull option pulls a fresh base image in case you have an old version locally, and the --no-cache option skips the cache for each step (in particular, a RUN step that may fetch the latest external dependency).
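If you do want to clear the cache itself rather than bypass it for one build, the BuildKit build cache can be pruned (note this affects all builds on the machine):
docker builder prune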

Docker Go image: starting container process caused: exec: "app": executable file not found in $PATH: unknown

I have been reading a lot of similar issues for different languages, but none of them cover Go.
I just created a Dockerfile following the instructions on the official Docker Hub page:
FROM golang:1.17.3
WORKDIR /go/src/app
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
CMD ["app"]
This is my folder structure:
users-service
|-> .gitignore
|-> Dockerfile
|-> go.mod
|-> main.go
|-> README.md
If anyone needs to see some code, this is what my main.go looks like:
package main

import "fmt"

func main() {
    fmt.Println("Hello, World!")
}
I ran docker build -t users-service .:
$ docker build -t users-service .
[+] Building 5.5s (11/11) FINISHED
=> [internal] load build definition from Dockerfile 0.1s
=> => transferring dockerfile: 154B 0.1s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/golang:1.17.3 3.3s
=> [auth] library/golang:pull token for registry-1.docker.io 0.0s
=> [1/5] FROM docker.io/library/golang:1.17.3@sha256:6556ce40115451e40d6afbc12658567906c9250b0fda250302dffbee9d529987 0.3s
=> [internal] load build context 0.1s
=> => transferring context: 2.05kB 0.0s
=> [2/5] WORKDIR /go/src/app 0.1s
=> [3/5] COPY . . 0.1s
=> [4/5] RUN go get -d -v ./... 0.6s
=> [5/5] RUN go install -v ./... 0.7s
=> exporting to image 0.2s
=> => exporting layers 0.1s
=> => writing image sha256:1f0e97ed123b079f80eb259dh3e34c90a48bf93e8f55629d05044fec8bfcaca6 0.0s
=> => naming to docker.io/library/users-service 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
Then I ran docker run users-service, but I got this error:
$ docker run users-service
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "app": executable file not found in $PATH: unknown.
I remember I had some trouble with the GOPATH environment variable in Visual Studio Code on Windows; maybe it's related... Any suggestions?
The official Docker documentation has useful instructions for building a Go image: https://docs.docker.com/language/golang/build-images/
In summary, you need to build your Go binary and you need to configure the CMD appropriately, e.g.:
FROM golang:1.17.3
WORKDIR /app
COPY main.go .
COPY go.mod ./
RUN go build -o /my-go-app
CMD ["/my-go-app"]
Build the container:
$ docker build -t users-service .
Run the docker container:
$ docker run --rm -it users-service
Hello, World!
Your "app" executable binary should be available in your $PATH to call globally without any path prefix. Otherwise, you have to supply your full path to your executable like CMD ["/my/app"]
Also, I recommend using an ENTRYPOINT instruction. ENTRYPOINT indicates the direct path to the executable, while CMD indicates arguments supplied to the ENTRYPOINT.
Combining RUN instructions keeps your image layers to a minimum, so your overall image size becomes a little smaller compared to using multiple RUN instructions.
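A hedged sketch combining both suggestions with the Dockerfile from the earlier answer (/my-go-app is an arbitrary output path, as above):
FROM golang:1.17.3
WORKDIR /app
# One COPY instruction and one build step keep the layer count down
COPY go.mod main.go ./
RUN go build -o /my-go-app
# ENTRYPOINT names the executable; CMD could supply default arguments
ENTRYPOINT ["/my-go-app"]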

Check if stage exists in Dockerfile

I have a CI script that builds Dockerfiles. My plan is that unit tests should be run in a test stage in each Dockerfile, for example:
FROM alpine AS build
WORKDIR /app
COPY src .
...
FROM build AS test
RUN mvn clean test
FROM build AS package
COPY --from=build ...
So, for a given Dockerfile, I would like to check if it has a test stage and, if so, run docker build --target test .... If it doesn't have a test stage, I don't want to run docker build (which would fail).
How can I check if a Dockerfile contains a certain stage without actually building it?
I do realize this question has some XY problem vibes to it, so feel free to enlighten me. But I also think the question can be generally useful anyway.
I'm going to shy away from trying to parse the Dockerfile since there are a lot of ways to inject false positives or negatives. E.g.
RUN echo \
FROM base as test
or
FROM base \
as test
So instead, I'm going to favor letting docker do the hard work, and modify the file so it doesn't fail on a missing test stage. This can be done by adding a test stage to the file even when it already has a test stage. Whether you want to put this at the beginning or the end of the Dockerfile depends on whether you are running BuildKit: the classic builder resolves --target to the first stage with a given name, while BuildKit resolves it to the last, as the following demonstrates:
$ cat df.dup-target
FROM busybox as test
RUN exit 1
FROM busybox as test
RUN exit 0
$ DOCKER_BUILDKIT=0 docker build --target test -f df.dup-target .
Sending build context to Docker daemon 20.99kB
Step 1/2 : FROM busybox as test
---> be5888e67be6
Step 2/2 : RUN exit 1
---> Running in 9f96f42bc6d8
The command '/bin/sh -c exit 1' returned a non-zero code: 1
$ DOCKER_BUILDKIT=1 docker build --target test -f df.dup-target .
[+] Building 0.1s (6/6) FINISHED
=> [internal] load build definition from df.dup-target 0.0s
=> => transferring dockerfile: 114B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [test 1/2] FROM docker.io/library/busybox 0.0s
=> CACHED [test 2/2] RUN exit 0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8129063cb183c1c1aafaf3eef0c8671e86a54f795092fa7a918145c14da3ec3b 0.0s
Then you can prepend or append the always-successful test stage (beginning for BuildKit, end for the classic builder), passing the modified Dockerfile on stdin for docker build to process:
$ cat df.simple
FROM busybox as build
RUN exit 0
$ cat - df.simple <<EOF | DOCKER_BUILDKIT=1 docker build --target test -f - .
FROM busybox as test
RUN exit 0
EOF
[+] Building 0.1s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 109B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 34B 0.0s
=> [internal] load metadata for docker.io/library/busybox:latest 0.0s
=> [test 1/2] FROM docker.io/library/busybox 0.0s
=> CACHED [test 2/2] RUN exit 0 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:8129063cb183c1c1aafaf3eef0c8671e86a54f795092fa7a918145c14da3ec3b 0.0s
This is a simple grep invocation:
egrep -i -q '^FROM .* AS test$' Dockerfile
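A sketch of how that check could gate the build in the CI script from the question (assuming the stage is literally named test, as in the examples above):
# Build the test stage only when the Dockerfile declares one
if egrep -i -q '^FROM .* AS test$' Dockerfile; then
  docker build --target test .
fi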
You also might consider running your unit tests outside of Docker, before you start building containers. (Or, if your CI system supports running steps inside containers, use a container to get a language runtime, but not necessarily run the Dockerfile.) You'll still need a Docker-based setup to run larger integration tests, but you can run these on your built production-ready containers.
