docker buildx "exec user process caused: exec format error" - docker

I am trying to cross-compile a Rust app to run on my Raspberry Pi cluster. I read that Docker's buildx was supposed to make this possible. I have a minimal Dockerfile right now; it is as follows:
FROM rust
RUN apt-get update
ENTRYPOINT ["echo", "hello world"]
I try to build this by running: docker buildx build --platform=linux/arm/v7 -t some/repo:tag .
When I do, I get the following error:
[+] Building 0.9s (5/5) FINISHED
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 102B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for docker.io/library/rust:latest  0.7s
 => CACHED [1/2] FROM docker.io/library/rust@sha256:65e254fff15478af71d342706b1e73b26fd883f3432813c129665a97a74e2278  0.0s
 => ERROR [2/2] RUN apt-get update  0.2s
------
 > [2/2] RUN apt-get update:
#5 0.191 standard_init_linux.go:219: exec user process caused: exec format error
------
error: failed to solve: rpc error: code = Unknown desc = executor failed running [/bin/sh -c apt-get update]: exit code: 1
I feel like I'm missing something pretty basic here; I'm hoping someone can tell me why such a simple thing isn't working for me.
I am running Docker version 20.10.1 on Ubuntu.
Thanks in advance!
output of docker buildx inspect --bootstrap:
Name: default
Driver: docker
Nodes:
Name: default
Endpoint: default
Status: running
Platforms: linux/amd64, linux/386
output of ls -l /proc/sys/fs/binfmt_misc/:
total 0
--w------- 1 root root 0 Dec 19 07:29 register
-rw-r--r-- 1 root root 0 Dec 19 07:29 status

Cross-compiling like this requires qemu-user-static and binfmt-support:
$ sudo apt install -y qemu-user-static binfmt-support
qemu-user-static provides QEMU's user-mode emulation, and binfmt_misc makes the kernel hand foreign-architecture binaries to QEMU. Then tell Docker to use them:
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
You may be wary of running an unknown image as privileged, but the content is safe. Next, create a builder for building images:
$ docker buildx create --name sofia # name as you like
$ docker buildx use sofia
$ docker buildx inspect --bootstrap
If it succeeds, BuildKit will be pulled:
[+] Building 9.4s (1/1) FINISHED
=> [internal] booting buildkit 9.4s
=> => pulling image moby/buildkit:buildx-stable-1 8.7s
=> => creating container buildx_buildkit_sofia0 0.7s
Name: sofia
Driver: docker-container
Nodes:
Name: sofia0
Endpoint: unix:///var/run/docker.sock
Status: running
Platforms: linux/amd64, linux/arm64, linux/riscv64, linux/ppc64le, linux/s390x, linux/386, linux/arm/v7, linux/arm/v6
The available target platforms have expanded!
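With the sofia builder active, the build from the question should now run under emulation. A minimal sketch reusing the question's tag; --load imports the single-platform result into the local image store (without it, the docker-container driver keeps the result only in the build cache):
$ docker buildx build --platform=linux/arm/v7 -t some/repo:tag --load .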
Reference:
Building Multi-Architecture Docker Images With Buildx | by Artur Klauser | Medium

Related

Migrating local Docker images to buildx

I have been using several locally built docker images that I am trying to migrate to building with docker buildx. Essentially I have a local container to build something from source, and then a prod container that references the local build container.
For example, I have two Dockerfiles in a directory, Dockerfile.builder and Dockerfile.prod
# Dockerfile.builder
FROM maven:3-eclipse-temurin-17
ARG VERSION
# clone git repository, do building things
# Dockerfile.prod
ARG BUILDER_TAG
FROM builder:$BUILDER_TAG as builder
# pull in build artifacts from builder container, do other things
Then from that working directory I would build the containers like so:
docker build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder-container:${BUILD_VERSION} -f Dockerfile.builder .
docker build --no-cache --build-arg BUILDER_TAG=$BUILD_VERSION -t prod-container:${BUILD_VERSION} -f Dockerfile.prod .
I'm trying to adapt this to docker buildx but am struggling with the extra overhead and complexity.
I think this would be the closest to what I'm wanting to do:
docker buildx build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder:${BUILD_VERSION} - < Dockerfile.builder
However, when I try that, I get the following:
[+] Building 4.3s (2/2) FINISHED
=> ERROR [internal] load .dockerignore 4.0s
=> => transferring context: 0.0s
=> ERROR [internal] load build definition from Dockerfile 4.3s
=> => transferring dockerfile: 30B 0.0s
------
> [internal] load .dockerignore:
------
------
> [internal] load build definition from Dockerfile:
------
ERROR: failed to solve: failed to read dockerfile: failed to remove: /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs: unlinkat /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs/snapshot: operation not permitted
So that is telling me it's not reading the Dockerfile I'm trying to supply via STDIN, and the path /var/lib/docker/zfs/graph/hiqgpytehhglat0nn1a06dop1/.zfs/snapshot doesn't exist.
Am I invoking docker buildx correctly for my use case?
Do I need to start with a fresh graph directory in order to start building my own images with buildx, or is there something I need to do with docker buildx create first?
I'm finding Docker's documentation on buildx very lacking in terms of how it differs conceptually from the legacy docker build, and I think that's part of my problem.
buildx config:
docker buildx inspect
Name: default
Driver: docker
Last Activity: 2023-02-15 03:27:29 +0000 UTC
Nodes:
Name: default
Endpoint: default
Status: running
Buildkit: 23.0.1
Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386
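Not a verified fix for the ZFS error, but one conceptual difference worth noting: piping the Dockerfile in with - < Dockerfile.builder sends no build context at all, so nothing else from the working directory is available to the build. A sketch that keeps the context by passing -f instead, mirroring the legacy commands:
$ docker buildx build --no-cache --build-arg VERSION=$BUILD_VERSION -t builder:${BUILD_VERSION} -f Dockerfile.builder --load .
$ docker buildx build --no-cache --build-arg BUILDER_TAG=$BUILD_VERSION -t prod-container:${BUILD_VERSION} -f Dockerfile.prod --load .
--load is redundant with the default docker driver shown in the config above, but it is required with a docker-container builder to get the result into the local image store at all.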

docker buildx fails to show result in image list

The following commands do not show the output ubuntu1 image:
docker buildx build -f 1.dockerfile -t ubuntu1 .
docker image ls | grep ubuntu1
# no output
1.dockerfile:
FROM ubuntu:latest
RUN echo "my ubuntu"
Plus, I cannot use the image in FROM statements in other Dockerfiles (both builds are on my local Windows box):
2.dockerfile:
FROM ubuntu1
RUN echo "my ubuntu 2"
docker buildx build -f 2.dockerfile -t ubuntu2 .
#error:
WARNING: No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 1.8s (4/4) FINISHED
=> [internal] load build definition from 2.dockerfile 0.0s
=> => transferring dockerfile: 84B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ubuntu1:latest 1.8s
=> [auth] library/ubuntu1:pull token for registry-1.docker.io 0.0s
------
> [internal] load metadata for docker.io/library/ubuntu1:latest:
------
2.dockerfile:1
--------------------
1 | >>> FROM ubuntu1:latest
2 | RUN echo "my ubuntu 2"
3 |
--------------------
error: failed to solve: ubuntu1:latest: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed (did you mean ubuntu:latest?)
Any idea what's going on? How can I see what buildx prepared and reference one image in another dockerfile?
OK, I found a partial solution: I need to add --output type=docker, as per the docs. This puts the image in the image list. But I still cannot use it in the second Dockerfile.
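For completeness, a sketch of the whole flow with --load (shorthand for --output type=docker):
docker buildx build --load -f 1.dockerfile -t ubuntu1 .
docker buildx build --load -f 2.dockerfile -t ubuntu2 .
The second build still fails under a docker-container builder because the containerized BuildKit resolves FROM ubuntu1 against the registry, not the daemon's local image store. Switching back to the default driver (docker buildx use default) or pushing ubuntu1 to a registry are the usual ways around that.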

Docker use local image with buildx

I am building an image for a docker container running on a different architecture. As I don't have internet access all the time, I usually pull the image when I do, and docker then uses the local image instead of pulling a new one. Since I started building the image with buildx, this no longer seems to work. Is there any way to tell docker to only use the local image? When I have a connection, docker may check whether a newer version is available, but it should use the local (or cached) image, just as I would expect without an internet connection.
$ docker image ls
REPOSITORY   TAG        IMAGE ID       CREATED       SIZE
ros          galactic   bac817d14f26   5 weeks ago   626MB
$ docker image inspect ros:galactic
...
"Architecture": "arm64",
"Variant": "v8",
"Os": "linux",
...
Example build command
$ docker buildx build . --platform linux/arm64
WARN[0000] No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load
[+] Building 0.3s (3/3) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 72B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> ERROR [internal] load metadata for docker.io/library/ros:galactic 0.0s
------
> [internal] load metadata for docker.io/library/ros:galactic:
------
Dockerfile:1
--------------------
1 | >>> FROM ros:galactic
2 | RUN "echo hello"
3 |
--------------------
error: failed to solve: failed to fetch anonymous token: Get "https://auth.docker.io/token?scope=repository%3Alibrary%2Fros%3Apull&service=registry.docker.io": proxyconnect tcp: dial tcp 127.0.0.1:3333: connect: connection refused
My workaround for this is to explicitly state the registry in the Dockerfile's FROM lines, be it your own private registry or Docker Hub.
For example, to use the Docker Hub ubuntu:latest image, instead of just writing FROM ubuntu:latest I would put this in the Dockerfile:
FROM docker.io/library/ubuntu:latest
To use myprivateregistry:5000 I would use:
FROM myprivateregistry:5000/ubuntu:latest
You must also pass the --pull=false flag to docker buildx build (or to DOCKER_BUILDKIT=1 docker build). When you have internet access again, you can switch back to --pull=true.
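Combining the two pieces of the workaround into one sketch, assuming ros:galactic is already present in the local image store:
FROM docker.io/library/ros:galactic
RUN echo hello
$ docker buildx build . --platform linux/arm64 --pull=false --load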

Restart stopped container and execute CMD again

How do I make Docker (or podman, for that matter - interested in a solution for both or just one) re-run the CMD of a stopped container?
I've got this barebone Dockerfile:
FROM alpine
CMD ["date"]
I build it:
$ podman build -t reruncmd .
STEP 1: FROM alpine
STEP 2: CMD ["date"]
STEP 3: COMMIT reruncmd
--> 32ef88d23c0
Successfully tagged localhost/reruncmd:latest
32ef88d23c04eeb8b8bbafb1dc2851e9ce046fb88dbfddea020c16c3a1944461
Then I run it:
$ podman run --name re-run-cmd reruncmd
Fri Jun 18 06:20:33 UTC 2021
Now it's obviously stopped:
$ podman ps -a --filter 'name=^/?re-run-cmd$'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2795e08162e1 localhost/reruncmd:latest date 2 minutes ago Exited (0) 2 minutes ago re-run-cmd
But when I restart the container, the CMD isn't run again:
$ podman container restart re-run-cmd
2795e08162e1089eb639098a804ec7d8743ed274d4d7acbdc97f6b07ec1ecdfe
What do I need to change?
I used podman in my examples above; I get the exact same behaviour with docker:
$ docker build -t reruncmd .
[+] Building 14.4s (6/6) FINISHED
=> [internal] load build definition from Dockerfile 0.2s
=> => transferring dockerfile: 69B 0.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:latest 10.7s
=> [auth] library/alpine:pull token for registry-1.docker.io 0.0s
=> [1/1] FROM docker.io/library/alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0 3.1s
=> => resolve docker.io/library/alpine@sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0 0.0s
=> => sha256:234cb88d3020898631af0ccbbcca9a66ae7306ecd30c9720690858c1b007d2a0 1.64kB / 1.64kB 0.0s
=> => sha256:1775bebec23e1f3ce486989bfc9ff3c4e951690df84aa9f926497d82f2ffca9d 528B / 528B 0.0s
=> => sha256:d4ff818577bc193b309b355b02ebc9220427090057b54a59e73b79bdfe139b83 1.47kB / 1.47kB 0.0s
=> => sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b 2.81MB / 2.81MB 1.0s
=> => extracting sha256:5843afab387455b37944e709ee8c78d7520df80f8d01cf7f861aae63beeddb6b 1.9s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:c5faac680f09542b5efbfc9f9f9fe40265ea17e0654a47ac67040cf2f14473fc 0.0s
=> => naming to docker.io/library/reruncmd 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
$ docker run --name re-run-cmd reruncmd
Fri Jun 18 06:25:47 UTC 2021
$ docker ps -a --filter 'name=^/?re-run-cmd$'
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a3bfcc42814 reruncmd "date" 11 seconds ago Exited (0) 8 seconds ago re-run-cmd
$ docker container restart re-run-cmd
re-run-cmd
$ docker container start re-run-cmd
re-run-cmd
Actually, CMD is executed at every start. You have been misled by the fact that docker start does not show you the output of CMD. If you run docker start with the -a or --attach flag, you will see the output.
❯ docker run --name test debian echo hi
hi
❯ docker start test
test
❯ docker start -a test
hi
❯ docker logs test
hi
hi
hi
As you see from the last command, there were exactly three runs.
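Applied to the container from the question, the same approach makes the restarted CMD visible:
$ docker start -a re-run-cmd
$ docker logs re-run-cmd
start -a attaches to the container so the newly printed date is shown, and logs lists the accumulated output of every run so far.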

Building alpine based images for PowerPC (PPC64le) fails when trying to run apk add

Adding any apk packages whilst building docker images for target platform linux/ppc64le results in a "bad signature" error.
#6 0.470 (1/1) Installing sudo (1.8.27-r0)
#6 0.537 ERROR: sudo-1.8.27-r0: BAD signature
I have tried many packages and all of them result in this error. I have, however, been successful with Alpine version 3.8 and below.
I'm doing a docker build using buildx on my MacBook Pro (x86). I can successfully build Docker images for Ubuntu and Debian from my MacBook for ppc64le, but not for Alpine version 3.9 and above.
Dockerfile
FROM alpine
RUN apk update
RUN apk add sudo
Docker build command
docker buildx build -t alpine_test . --platform=linux/ppc64le --load
I expect this simple build script to build a simple alpine docker image for linux/ppc64le architecture with the sudo package installed.
However I get the following error during the build process:
[+] Building 3.6s (6/6) FINISHED
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 81B  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [internal] load metadata for docker.io/library/alpine:latest  2.9s
 => [1/3] FROM docker.io/library/alpine@sha256:72c42ed48c3a2db31b7dafe17d275b634664a708d901ec9fd57b1529280f01fb  0.0s
 => => resolve docker.io/library/alpine@sha256:72c42ed48c3a2db31b7dafe17d275b634664a708d901ec9fd57b1529280f01fb  0.0s
 => CACHED [2/3] RUN apk update  0.0s
=> ERROR [3/3] RUN apk add sudo 0.7s
> [3/3] RUN apk add sudo:
#6 0.452 (1/1) Installing sudo (1.8.27-r0)
#6 0.566 ERROR: sudo-1.8.27-r0: BAD signature
#6 0.577 1 error; 6 MiB in 14 packages
failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to build LLB: executor failed running [/bin/sh -c apk add sudo]: exit code: 1
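One commonly suggested remedy, offered here as an assumption rather than a verified fix: a BAD signature under emulation is frequently a QEMU bug rather than a genuinely corrupt package, so refreshing the QEMU binfmt handlers with an up-to-date qemu-user-static (the same command used in the first answer above) is worth trying before anything else:
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes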
