I'm trying to use the official Haskell dev container for VS Code on macOS Ventura.
After moving the .devcontainer folder to the root of my project and clicking Reopen in Container, this is what I get:
[2022-12-27T12:44:59.593Z] Dev Containers 0.266.1 in VS Code 1.74.2 (e8a3071ea4344d9d48ef8a4df2c097372b0c5161).
[2022-12-27T12:44:59.593Z] Start: Resolving Remote
[2022-12-27T12:44:59.610Z] Setting up container for folder or workspace: /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:44:59.613Z] Start: Check Docker is running
[2022-12-27T12:44:59.613Z] Start: Run: docker version --format {{.Server.APIVersion}}
[2022-12-27T12:44:59.765Z] Stop (152 ms): Run: docker version --format {{.Server.APIVersion}}
[2022-12-27T12:44:59.766Z] Server API version: 1.41
[2022-12-27T12:44:59.766Z] Stop (153 ms): Check Docker is running
[2022-12-27T12:44:59.766Z] Start: Run: docker volume ls -q
[2022-12-27T12:44:59.885Z] Stop (119 ms): Run: docker volume ls -q
[2022-12-27T12:44:59.898Z] Start: Run: docker ps -q -a --filter label=vsch.local.folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --filter label=vsch.quality=stable
[2022-12-27T12:45:00.021Z] Stop (123 ms): Run: docker ps -q -a --filter label=vsch.local.folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --filter label=vsch.quality=stable
[2022-12-27T12:45:00.022Z] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.150Z] Stop (128 ms): Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.150Z] Start: Run: /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /Users/samuelebonini/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --workspace-folder /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --workspace-mount-consistency cached --id-label devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --log-level debug --log-format json --config /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-27T12:45:00.311Z] (node:24087) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
[2022-12-27T12:45:00.311Z] (Use `Code Helper --trace-deprecation ...` to show where the warning was created)
[2022-12-27T12:45:00.312Z] #devcontainers/cli 0.25.2. Node.js v16.14.2. darwin 22.1.0 arm64.
[2022-12-27T12:45:00.312Z] Start: Run: docker buildx version
[2022-12-27T12:45:00.477Z] Stop (165 ms): Run: docker buildx version
[2022-12-27T12:45:00.477Z] github.com/docker/buildx v0.8.1 5fac64c2c49dae1320f2b51f1a899ca451935554
[2022-12-27T12:45:00.477Z]
[2022-12-27T12:45:00.477Z] Start: Resolving Remote
[2022-12-27T12:45:00.479Z] Start: Run: git rev-parse --show-cdup
[2022-12-27T12:45:00.490Z] Stop (11 ms): Run: git rev-parse --show-cdup
[2022-12-27T12:45:00.491Z] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.604Z] Stop (113 ms): Run: docker ps -q -a --filter label=devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy
[2022-12-27T12:45:00.606Z] Start: Run: docker inspect --type image debian:bullseye-slim
[2022-12-27T12:45:00.720Z] Stop (114 ms): Run: docker inspect --type image debian:bullseye-slim
[2022-12-27T12:45:02.021Z] local container features stored at: /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/node_modules/vscode-dev-containers/container-features
[2022-12-27T12:45:02.022Z] Start: Run: tar --no-same-owner -x -f -
[2022-12-27T12:45:02.032Z] Stop (10 ms): Run: tar --no-same-owner -x -f -
[2022-12-27T12:45:02.033Z] Start: Run: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f /var/folders/3_/gmcg3yrd7d3d7q4vfw0jyqkm0000gn/T/devcontainercli/container-features/0.25.2-1672145102020/Dockerfile-with-features -t vsc-1copy-35a5c23ba93fc94a4cdbfe5ffc09ab01 --target dev_containers_target_stage --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer
[2022-12-27T12:45:02.415Z] [+] Building 0.0s (0/0)
[2022-12-27T12:45:02.565Z] [+] Building 0.0s (0/0)
[2022-12-27T12:45:02.665Z] [+] Building 0.0s (1/2)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 0.0s
[2022-12-27T12:45:02.816Z]
[2022-12-27T12:45:02.816Z] [+] Building 0.2s (2/3)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.1s
[2022-12-27T12:45:02.967Z] [+] Building 0.3s (2/3)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.3s
[2022-12-27T12:45:03.118Z] [+] Building 0.5s (2/3)
=> [internal] load build definition from Dockerfile-with-features 0.0s
=> => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.4s
[2022-12-27T12:45:03.183Z] [+] Building 0.6s (3/3) FINISHED
=> [internal] load build definition from Dockerfile-with-features 0.0s
[2022-12-27T12:45:03.183Z] => => transferring dockerfile: 2.24kB 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/debian:bullseye-slim 0.5s
error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to process "\"${templateOption:installZsh}\"": unsupported modifier (i) in substitution
[2022-12-27T12:45:03.200Z] Stop (1167 ms): Run: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f /var/folders/3_/gmcg3yrd7d3d7q4vfw0jyqkm0000gn/T/devcontainercli/container-features/0.25.2-1672145102020/Dockerfile-with-features -t vsc-1copy-35a5c23ba93fc94a4cdbfe5ffc09ab01 --target dev_containers_target_stage --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer
[2022-12-27T12:45:03.200Z] Error: Command failed: docker buildx build --load --build-arg BUILDKIT_INLINE_CACHE=1 -f /var/folders/3_/gmcg3yrd7d3d7q4vfw0jyqkm0000gn/T/devcontainercli/container-features/0.25.2-1672145102020/Dockerfile-with-features -t vsc-1copy-35a5c23ba93fc94a4cdbfe5ffc09ab01 --target dev_containers_target_stage --build-arg _DEV_CONTAINERS_BASE_IMAGE=dev_container_auto_added_stage_label /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer
[2022-12-27T12:45:03.200Z] at Doe (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1894:1669)
[2022-12-27T12:45:03.200Z] at process.processTicksAndRejections (node:internal/process/task_queues:96:5)
[2022-12-27T12:45:03.201Z] at async EF (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1893:1978)
[2022-12-27T12:45:03.201Z] at async uT (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1893:901)
[2022-12-27T12:45:03.201Z] at async Poe (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1899:2128)
[2022-12-27T12:45:03.201Z] at async Zf (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:1899:3278)
[2022-12-27T12:45:03.201Z] at async aue (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:2020:15276)
[2022-12-27T12:45:03.201Z] at async oue (/Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js:2020:15030)
[2022-12-27T12:45:03.202Z] Stop (3052 ms): Run: /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /Users/samuelebonini/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --workspace-folder /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --workspace-mount-consistency cached --id-label devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --log-level debug --log-format json --config /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-27T12:45:03.203Z] Exit code 1
[2022-12-27T12:45:03.205Z] Command failed: /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper.app/Contents/MacOS/Code Helper --ms-enable-electron-run-as-node /Users/samuelebonini/.vscode/extensions/ms-vscode-remote.remote-containers-0.266.1/dist/spec-node/devContainersSpecCLI.js up --user-data-folder /Users/samuelebonini/Library/Application Support/Code/User/globalStorage/ms-vscode-remote.remote-containers/data --workspace-folder /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --workspace-mount-consistency cached --id-label devcontainer.local_folder=/Users/samuelebonini/Desktop/APROG/second_mid_term/1copy --log-level debug --log-format json --config /Users/samuelebonini/Desktop/APROG/second_mid_term/1copy/.devcontainer/devcontainer.json --default-user-env-probe loginInteractiveShell --mount type=volume,source=vscode,target=/vscode,external=true --skip-post-create --update-remote-user-uid-default on --mount-workspace-git-root true
[2022-12-27T12:45:03.205Z] Exit code 1
I'm following the instructions on how to run a dev container in VS Code, but I'm stuck and have no clue how to proceed. How do I fix this error?
Looks like you are using a Community Template which is no longer maintained. Also, have you copy-pasted the source code? Instead, can you use the Dev Containers: Add Dev Container Configuration Files... command to add the Haskell Template to your repo?
As the Haskell Template is no longer maintained, it will only receive security updates (if needed). The list of all Templates is here.
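For context, the build fails because the copied Dockerfile still contains raw Template placeholders such as ${templateOption:installZsh}; BuildKit tries to expand that as a variable substitution and reports "unsupported modifier (i)". A hedged illustration of the kind of line involved (the exact contents of the Haskell Template may differ) and what it looks like once the Template is applied properly:

# Copied straight from the Template source (placeholder left in; illustrative line)
ARG INSTALL_ZSH="${templateOption:installZsh}"
# After the Template is applied via the command above, the placeholder is a concrete value
ARG INSTALL_ZSH="true"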
Related
The following commands do not show the resulting ubuntu1 image:
docker buildx build -f 1.dockerfile -t ubuntu1 .
docker image ls | grep ubuntu1
# no output
1.dockerfile:
FROM ubuntu:latest
RUN echo "my ubuntu"
Plus, I cannot use the image in FROM statements in other Dockerfiles (both builds are on my local Windows box):
2.dockerfile:
FROM ubuntu1
RUN echo "my ubuntu 2"
docker buildx build -f 2.dockerfile -t ubuntu2 .
#error:
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.8s (4/4) FINISHED
=> [internal] load build definition from 2.dockerfile 0.0s
=> => transferring dockerfile: 84B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ubuntu1:latest 1.8s
=> [auth] library/ubuntu1:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/ubuntu1:latest:
------
2.dockerfile:1
--------------------
1 | >>> FROM ubuntu1:latest
2 | RUN echo "my ubuntu 2"
3 |
--------------------
error: failed to solve: ubuntu1:latest: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (did you mean ubuntu:latest?)
Any idea what's going on? How can I see what buildx produced, and how can I reference one image in another Dockerfile?
OK, I found a partial solution: I need to add --output type=docker, as per the docs. This puts the image in the image list. But I still cannot use it in the second Dockerfile.
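A sketch of the rest of the workaround, assuming the default docker builder is still available alongside the docker-container one: load the first build result into the local image store (--load is shorthand for --output type=docker), then build the second Dockerfile with the default builder, because a docker-container builder does not resolve FROM against images that only exist in the local store:

docker buildx build --load -f 1.dockerfile -t ubuntu1 .
docker image ls | grep ubuntu1               # ubuntu1 now shows up
docker build -f 2.dockerfile -t ubuntu2 .    # classic builder can see the local ubuntu1

If you need to stay on the docker-container driver for the second build, pushing ubuntu1 to a registry and using its fully qualified name in FROM is the usual alternative.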
I'm new to Dockerfiles and I'm having a hard time running a simple check command present in an image.
When I run a container with this image, I'm seeing this behavior from the shell (/bin/sh):
$ /bin/bash -c foamVersion
/bin/bash: foamVersion: command not found
$ /bin/bash
nextfoam@3ab61c950023:/home$ foamVersion
OpenFOAM-6
I'm running it from /bin/sh because this image seems to use it by default (Ubuntu 18).
My goal is to get the foamVersion result. Here's the Dockerfile:
FROM nextfoam/baram6:latest
SHELL ["/bin/bash", "-c"]
RUN foamVersion
On host (Ubuntu 21.10), I run:
docker build -t mybaram:latest .
[+] Building 0.3s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 384B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/nextfoam/baram6:latest 0.0s
=> CACHED [1/2] FROM docker.io/nextfoam/baram6:latest 0.0s
=> ERROR [2/2] RUN foamVersion 0.3s
------
> [2/2] RUN foamVersion:
#5 0.242 /bin/bash: foamVersion: command not found
Inside a running container, type foamVersion shows that it is a shell function:
type foamVersion
foamVersion is a function
foamVersion ()
{
if [ "$1" ]; then
foamInstDir=$FOAM_INST_DIR;
. $WM_PROJECT_DIR/etc/config.sh/unset;
. $foamInstDir/OpenFOAM-$1/etc/bashrc;
cd $WM_PROJECT_DIR;
echo "Changed to OpenFOAM-$1" 1>&2;
else
echo "OpenFOAM-$WM_PROJECT_VERSION" 1>&2;
fi
}
Any idea how to get foamVersion working from the build process in the Dockerfile?
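For what it's worth, the root cause is that foamVersion is a bash function defined by the OpenFOAM environment scripts, which only get loaded for interactive shells (via the image's .bashrc); RUN uses a non-interactive shell, so the function never exists there. A minimal sketch of one workaround, assuming this image's .bashrc is what sets up the OpenFOAM environment for interactive shells:

FROM nextfoam/baram6:latest
# -i makes bash read .bashrc, which appears to define foamVersion in this image
# (you may see harmless "no job control in this shell" warnings during the build)
SHELL ["/bin/bash", "-i", "-c"]
RUN foamVersion

Alternatively, source the OpenFOAM bashrc explicitly in the RUN step, e.g. RUN source /opt/OpenFOAM/OpenFOAM-6/etc/bashrc && foamVersion (the exact path inside this image is a guess).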
The logs are printed to the console when I run docker-compose up -d --build. How can I have them saved to a file so I can go through them?
These are the logs:
0.0s => => transferring dockerfile: 771B
0.0s => [internal] load .dockerignore
0.0s => => transferring context: 2B
0.0s => [internal] load metadata for docker.io/library/python:3.8.12-bullseye
1.5s => [auth] library/python:pull token for registry-1.docker.io
0.0s => [internal] load build context
2.0s => => transferring context: 847.24kB
1.9s => [1/7] FROM docker.io/library/python:3.8.12-bullseye@sha256:b39ab988ac2fa749273cf6fdeb89fb3635850824419dc61
0.0s => => resolve docker.io/library/python:3.8.12-bullseye@sha256:b39ab988ac2fa749273cf6fdeb89fb3635850824419dc61
0.0s => CACHED [2/7] WORKDIR /usr/src/app
Docker already saves the container logs in a JSON file. To find it you can do:
docker inspect --format='{{.LogPath}}' [container id]
Or if you just want the output of the docker logs command, just do this:
docker-compose logs --no-color > logs.txt
For docker-compose up --build, did you try this:
docker-compose up --build &> logs.txt
This can be achieved using docker-compose build with the --progress flag, specifying one of the following options: auto, tty, plain, or quiet (the default is "auto").
So to persist the logs in your terminal you would build with:
docker-compose build --progress plain
See --help:
$ docker-compose build --help
Usage: docker compose build [OPTIONS] [SERVICE...]
Build or rebuild services
Options:
--build-arg stringArray Set build-time variables for services.
--no-cache Do not use cache when building the image
--progress string Set type of progress output (auto, tty, plain, quiet) (default "auto")
--pull Always attempt to pull a newer version of the image.
-q, --quiet Don't print anything to STDOUT
--ssh string Set SSH authentications used when building service images. (use
'default' for using your default SSH Agent)
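To actually capture that plain output in a file rather than only the terminal, a small sketch combining the suggestions above (redirecting stderr too, since the build progress may be written there):

docker-compose build --progress plain 2>&1 | tee build.log
docker-compose up -d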
How do I make Docker (or Podman, for that matter; I'm interested in a solution for both or just one) re-run the CMD of a stopped container?
I've got this barebones Dockerfile:
FROM alpine
CMD ["date"]
I build it:
$ podman build -t reruncmd .
STEP 1: FROM alpine
STEP 2: CMD ["date"]
STEP 3: COMMIT reruncmd
--> 32ef88d23c0
Successfully tagged localhost/reruncmd:latest
32ef88d23c04eeb8b8bbafb1dc2851e9ce046fb88dbfddea020c16c3a1944461
Then I run it:
$ podman run --name re-run-cmd reruncmd
Fri Jun 18 06:20:33 UTC 2021
Now it's obviously stopped:
$ podman ps -a --filter 'name=^/?re-run-cmd$'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2795e08162e1 localhost/reruncmd:latest date 2 minutes ago Exited (0) 2 minutes ago re-run-cmd
But when I restart the container, the CMD isn't run again:
$ podman container restart re-run-cmd
2795e08162e1089eb639098a804ec7d8743ed274d4d7acbdc97f6b07ec1ecdfe
What do I need to change?
I used podman in my examples above; I get the exact same behaviour with docker:
$ docker build -t reruncmd .
[+] Building 14.4s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 69B 0.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 10.7s
=> [auth] library/alpine:pull token for registry-1.docker.io 0.0s
=> [1/1] FROM docker.io/library/alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0 3.1s
=> => resolve docker.io/library/alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0 0.0s
=> => sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0 1.64kB / 1.64kB 0.0s
=> => sha256:1775bebec23e1f3ce486989bfc9ff3c4e951690df84aa9f926497d82f2ffca9d 528B / 528B 0.0s
=> => sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 1.47kB / 1.47kB 0.0s
=> => sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b 2.81MB / 2.81MB 1.0s
=> => extracting sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b 1.9s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:c5faac680f09542b5efbfc9f9f9fe40265ea17e0654a47ac67040cf2f14473fc 0.0s
=> => naming to docker.io/library/reruncmd 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
$ docker run --name re-run-cmd reruncmd
Fri Jun 18 06:25:47 UTC 2021
$ docker ps -a --filter 'name=^/?re-run-cmd$'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a3bfcc42814 reruncmd "date" 11 seconds ago Exited (0) 8 seconds ago re-run-cmd
$ docker container restart re-run-cmd
re-run-cmd
$ docker container start re-run-cmd
re-run-cmd
Actually, CMD is executed at every start. You have been misled by the fact that docker start does not show you the output of CMD. If you run docker start with the -a or --attach flag, you will see the output.
❯ docker run --name test debian echo hi
hi
❯ docker start test
test
❯ docker start -a test
hi
❯ docker logs test
hi
hi
hi
As you see from the last command, there were exactly three runs.
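The same applies to podman, which the question started with; a quick check, assuming the re-run-cmd container from the question still exists:

$ podman start -a re-run-cmd     # prints a fresh date each time, just like docker start -a
$ podman logs re-run-cmd         # should show one date line per run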
I am trying to cross-compile a Rust app to run on my Raspberry Pi cluster. I saw that buildx from Docker was supposed to make this possible. I have a minimal Dockerfile right now; it is as follows:
FROM rust
RUN apt-get update
ENTRYPOINT ["echo", "hello world"]
I try to build this by running the command: docker buildx build --platform=linux/arm/v7 some/repo:tag .
When I do, I get the following error:
[+] Building 0.9s (5/5) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 102B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/rust:latest 0.7s
=> CACHED [1/2] FROM docker.io/library/rust@sha256:65e254fff15478af71d342706b1e73b26fd883f3432813c129665a97a74e2278 0.0s
=> ERROR [2/2] RUN apt-get update 0.2s
------
> [2/2] RUN apt-get update:
#5 0.191 standard_init_linux.go:219: exec user process caused: exec format error
------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c apt-get update]: exit code: 1
I feel like I'm missing something pretty basic here, and I'm hoping someone can tell me why such a simple thing isn't working for me.
I am running Docker version 20.10.1 on Ubuntu.
Thanks in advance!
output of docker buildx inspect --bootstrap:
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/386
output of ls -l /proc/sys/fs/binfmt_misc/:
total 0
--w------- 1 root root 0 Dec 19 07:29 register
-rw-r--r-- 1 root root 0 Dec 19 07:29 status
Cross-compiling requires qemu-user-static and binfmt-support.
$ sudo apt install -y qemu-user-static binfmt-support
qemu-user-static provides user-mode QEMU emulation, and binfmt_misc switches execution to QEMU when a binary for another architecture is run. Then, tell Docker to use them:
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
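Whether the --reset registration above took effect can be confirmed from the binfmt_misc directory that was empty in the question; a quick check (the exact entry names depend on the qemu-user-static version):

$ ls /proc/sys/fs/binfmt_misc/ | grep qemu    # expect entries like qemu-arm, qemu-aarch64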
You may be wary of running an unknown image as privileged, but the content is safe. Next, create a builder in Docker for building images:
$ docker buildx create --name sofia # name as you like
$ docker buildx use sofia
$ docker buildx inspect --bootstrap
If it succeeds, BuildKit will be pulled:
[+] Building 9.4s (1/1) FINISHED
=> [internal] booting buildkit 9.4s
=> => pulling image moby/buildkit:buildx-stable-1 8.7s
=> => creating container buildx_buildkit_sofia0 0.7s
Name: sofia
Driver: docker-container
Nodes:
Name: sofia0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
The available target platforms expand!
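With the new builder active, the original build should now run under emulation; a minimal sketch, assuming you want the single-platform result loaded into the local Docker image store rather than pushed (the tag is the one from the question):

$ docker buildx build --platform linux/arm/v7 -t some/repo:tag --load .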
Reference:
Building Multi-Architecture Docker Images With Buildx | by Artur Klauser | Medium