By coincidence, I discovered today that two binaries cross-compiled for two different architectures on my MacBook Pro 2017 both work inside the same ubuntu:latest Docker container. Here is what happened:
I first compiled a hello-world program by running env GOOS=linux GOARCH=amd64 go build; let's call this binary A. I then compiled the same hello-world program by running env GOOS=linux GOARCH=arm64 go build; let's call this binary B.
I checked the md5sums of A and B and made sure that they were different binaries. I copied both of these binaries into the same Docker container running ubuntu:latest as its base, expecting B to fail when executed. However, both executed perfectly.
On the other hand, a binary compiled using env GOOS=linux GOARCH=ppc64 go build will not execute inside the same container. Does anyone know why this is?
For reference, the output of uname -sm on my MacBook gives Darwin x86_64. The output of uname -sm inside my Docker container running ubuntu:latest gives Linux x86_64.
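A minimal sketch of the steps above (binary names are placeholders; assumes Go and Docker are installed on the Mac):

# cross-compile the same hello-world program for two architectures
env GOOS=linux GOARCH=amd64 go build -o hello-amd64 .
env GOOS=linux GOARCH=arm64 go build -o hello-arm64 .

# confirm the binaries differ and target different architectures
md5 hello-amd64 hello-arm64    # md5sum on Linux
file hello-amd64               # ELF 64-bit LSB executable, x86-64, ...
file hello-arm64               # ELF 64-bit LSB executable, ARM aarch64, ...

# run both inside the same ubuntu:latest container
docker run --rm -v "$PWD":/test -w /test ubuntu:latest ./hello-amd64
docker run --rm -v "$PWD":/test -w /test ubuntu:latest ./hello-arm64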
It looks like this functionality was added way back in 1.13, but Docker announced this week that there's a lot more being done to make it seamless for developers. From the Docker for Mac release notes:
Support for arm, aarch64, ppc64le architectures using qemu
What's happening in this scenario is that binfmt_misc and qemu are being used to allow programs built for other architectures to be executed. This requires changes on the host, which is why you often get errors when trying to run commands for other architectures on Linux.
Docker only supports ppc64le (little-endian), which is why your GOARCH=ppc64 binary fails to run. More info here: https://docs.docker.com/docker-for-mac/multi-arch/
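If you want to see which emulators are registered on a host (or register more), the tonistiigi/binfmt helper image can do both; this is a separate tool referenced in the buildx docs, not something the linked page covers:

# print the qemu emulators currently registered through binfmt_misc (JSON output)
docker run --privileged --rm tonistiigi/binfmt

# register qemu handlers for additional architectures on a Linux host
docker run --privileged --rm tonistiigi/binfmt --install all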
I'm trying to build a Go project with Docker on a MacBook M1. The project depends on the library confluent-kafka-go, so I have to use one of the approaches below.
Both work, but what's the difference?
From what I can see, the first one uses a multi-platform image, while the second one uses a special amd64/golang image.
docker run --platform linux/amd64 --rm -v "$PWD":/build -w /build -e GOOS=linux -e GOARCH=amd64 golang:1.17-buster go build
docker run --rm -v "$PWD":/build -w /build -e GOOS=linux -e GOARCH=amd64 amd64/golang:1.17 go build
That would be the difference between "Shared" and "Simple" tags:
"Simple Tags" are instances of a "single" Linux or Windows image.
It is often a manifest list that can include the same image built for other architectures; for example, mongo:4.0-xenial currently has images for amd64 and arm64v8.
The Docker daemon is responsible for picking the appropriate image for the host architecture.
"Shared Tags" are tags that always point to a manifest list which includes some combination of potentially multiple versions of Windows and Linux images across all their respective images' architectures -- in the mongo example, the 4.0 tag is a shared tag consisting of (at the time of this writing) all of 4.0-xenial, 4.0-windowsservercore-ltsc2016, 4.0-windowsservercore-1709, and 4.0-windowsservercore-1803.
So:
The "Simple Tags" enable docker run mongo:4.0-xenial to "do the right thing" across architectures on a single platform (Linux in the case of mongo:4.0-xenial).
The "Shared Tags" enable docker run mongo:4.0 to roughly work on both Linux and as many of the various versions of Windows that are supported (such as Windows Server Core LTSC 2016, where the Docker daemon is again responsible for determining the appropriate image based on the host platform and version).
In your case, the Docker Golang image offers both kinds of tags:
golang:1.17-buster is a shared tag, from which you are extracting the linux/amd64 variant through the --platform option.
If you did not specify the platform, you would get a warning like:
The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
amd64/golang:1.17 is a simple tag, already tailored to linux/amd64.
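You can see the difference by inspecting what each tag resolves to (a sketch; the exact platform lists depend on the tag and change over time):

# the multi-platform tag exposes a manifest list with several platforms
docker buildx imagetools inspect golang:1.17-buster

# the architecture-specific tag resolves to linux/amd64 only
docker buildx imagetools inspect amd64/golang:1.17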
I have a Dockerfile which creates an image, and I run it using docker compose together with a container built from a Postgres image. (This is to set up a local Airflow environment - we use the mwaa local runner.)
Recently I got a new M1 Pro machine and I'm running into issues with the container.
The problem, from my understanding, is that the image is being built and then run on my machine, which has a different kind of CPU architecture, and this causes pip to look for wheels for that architecture. My colleague has an Intel Mac and says he doesn't experience any issues.
The build phase is OK, but when I run the container, we've set docker compose to run an entrypoint script that also installs some Airflow providers and other dependencies, one of which is plyvel, which fails to install and causes other packages not to install as well. When I remove plyvel from the requirements.txt file, the installation completes, but some of my Airflow providers are missing files or attributes, which creates its own issues.
I tried forcing Docker to build and run the image and container as amd64 by changing the build command to:
docker build --platform linux/amd64 --rm --compress $3 -t amazon/mwaa-local:2.2 ./docker, which runs, but very slowly.
I also added platform: linux/amd64 to both the postgres and the local-runner containers in the docker-compose file.
Then, when I spin up the container, it takes a long time to reach a working state where I can access the Airflow UI in the web browser, and the UI itself is very slow - every link takes a few seconds to process and redirect me to the new page. I believe this is due to some kind of emulation.
I then found this article:
https://medium.com/nttlabs/buildx-multiarch-2c6c2df00ca2
It says there is a faster way to run without emulation, but I didn't understand how to implement it.
In addition, I found this Reddit thread:
https://www.reddit.com/r/docker/comments/qlrn3s/docker_on_m1_max_horrible_performance/
They suggest building and running the container inside a virtual machine; I'm not sure if that is the way to go in my situation.
I tried both Docker Desktop and Rancher Desktop (with dockerd), but both show the same symptoms.
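For reference, a quick way to confirm whether a container is running under emulation (a generic check, not specific to mwaa) is to compare uname -m with and without a forced platform:

# without --platform on an M1, Docker pulls the native arm64 variant
docker run --rm ubuntu:latest uname -m                          # prints aarch64

# forcing linux/amd64 runs the container under qemu emulation
docker run --rm --platform linux/amd64 ubuntu:latest uname -m   # prints x86_64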
Why are some docker images incompatible with platforms like Raspberry Pi (linux/arm/v7)?
Furthermore, can you modify a Dockerfile or another config file such that it is compatible?
Thanks for any suggestions!
So far I've installed docker and docker-compose, followed the puckel/docker-airflow readme (skipping the optional build), and then tried to run the container with:
docker run -d -p 8080:8080 puckel/docker-airflow webserver
got this warning:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
found this issue and ran:
docker run -d -p 8080:8080 --platform linux/arm/v7 puckel/docker-airflow:latest webserver
then, this error:
docker: Error response from daemon: image with reference puckel/docker-airflow:latest was found but does not match the specified platform: wanted linux/arm/v7, actual: linux/amd64.
See 'docker run --help'.
Executable files, i.e. binary files, depend on the computer's architecture (amd64, arm, ...). A Docker image contains binary files. That is, a Docker image is architecture-dependent.
Therefore, if you look at a Docker registry, the OS and architecture of each image are specified. If you check the dockerhub/puckel/docker-airflow image you used, you can see that it only supports linux/amd64. In other words, it doesn't work on the arm architecture.
If you want to run this on the arm architecture there are several ways, but they all come down to one point: build the original code into a Docker image for arm, not amd64.
The guidelines for building are well documented in github.com/puckel/docker-airflow.
First, if you look at the Dockerfile provided in that GitHub repo, it starts from the image FROM python:3.7-slim-buster. The corresponding python:3.7-slim-buster image supports linux/arm/v5, linux/arm/v7, and linux/arm64/v8: dockerhub/python/3.7-slim-buster
In other words, you can build it for the arm architecture.
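For example, the simplest route is to build directly on the Raspberry Pi, where Docker automatically pulls the arm variant of the base image (a sketch; the tag name docker-airflow:arm is arbitrary):

# on the Raspberry Pi itself
git clone https://github.com/puckel/docker-airflow.git
cd docker-airflow
docker build -t docker-airflow:arm .
docker run -d -p 8080:8080 docker-airflow:arm webserver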
I have experience creating images for multiple architectures through the docker buildx command. Of course, other methods exist, but I will only briefly introduce the commands below.
dockerhub/buildx
docker buildx is an experimental feature, and it is still recommended that experimental features not be used in production environments.
docker buildx build --platform linux/arm/v5,linux/arm/v7 .
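In practice you usually need a builder first, and for a multi-platform build you generally add a tag and --push so the result lands in a registry (the repository name below is a placeholder):

# one-time builder setup
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap   # starts the builder and prints the supported platforms

# build the arm variants and push the resulting manifest list
docker buildx build --platform linux/arm/v7,linux/arm64/v8 \
  -t <your-registry>/docker-airflow:latest --push .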
I have a Dockerfile that pulls FROM hivemq/hivemq-ce. This works well on "standard" platforms but not on the Raspberry Pi. So I built the image for arm64 myself directly on the RasPi following the tutorial in the official HiveMQ repo and pushed it to my private docker registry. The Dockerfile works well on RasPi if I change the FROM line to FROM my-private-registry/hivemq-ce.
So now I have images that work on different platforms in different sources. But how can I make my Dockerfile work on all platforms? Is there any way to pull from different sources for different architectures?
As outlined here, Docker supports multiple CPU architectures and will select the correct image for the current platform. So you could build a non-arm64 image for frederikheld/hivemq-ce and push it to the same location without affecting the arm64 image.
You should be able to run docker manifest inspect frederikheld/hivemq-ce to see the available architectures for a given image.
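If you would rather keep building each image natively (arm64 on the Pi, amd64 elsewhere), you can also stitch existing single-arch tags into one multi-arch tag with docker manifest; the :amd64 and :arm64 tags below are placeholders for tags you have already pushed:

docker manifest create frederikheld/hivemq-ce:latest \
  frederikheld/hivemq-ce:amd64 \
  frederikheld/hivemq-ce:arm64
docker manifest push frederikheld/hivemq-ce:latest

# afterwards, every platform can use the same FROM line
docker manifest inspect frederikheld/hivemq-ce:latest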
I went with this approach:
start.sh:
...
if [ "$(uname -m)" = "aarch64" ]; then
docker-compose -f docker-compose.aarch64.yml up -d --build --force-recreate
else
docker-compose up -d --build --force-recreate
fi
...
This requires one standard docker-compose.yml plus an additional docker-compose.<architecture>.yml for each architecture that has different needs.
It's not great, but it works in my environment.
I'm still open to better solutions though!
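A small variant of the same idea (just a sketch, not tested in this setup) is to derive the compose file name from uname -m, which avoids growing the if/else as more architectures get special treatment:

# requires a docker-compose.<arch>.yml for every architecture you run on (x86_64, aarch64, ...)
docker-compose -f "docker-compose.$(uname -m).yml" up -d --build --force-recreate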
I have setup a build pipeline on an ARM device that is building a .NET Core application. The last step of the build pipeline would be to store the compiled .NET Core app in a docker image.
Is it possible to store the app in the .NET Core runtime image for X86?
My hope is that the .NET Core app does not care about the system architecture as long as the .NET Core runtime is present, and that Docker does not need to start the x86 image to generate the new image:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
COPY /my-application/build/ /app/
EXPOSE 80/tcp
WORKDIR /app
ENTRYPOINT ["dotnet", "app.dll"]
If I understand your question correctly, you have an ARM machine running the pipeline and you want it to build both the ARM and the x86 images?
Buildx - for cross-platform image building
Sure you can. You can use buildx to manage the cross-compiling for you. So go ahead and install buildx.
After you have set up and configured buildx, you can just run:
docker buildx build \
--platform linux/amd64,linux/386,linux/arm/v7 \
--push \
-t docker_user/docker_image:latest \
.
As long as the base image supports them, this will work for every platform you want. You can change the platforms you want to build for.
What buildx does is emulate the target platform and execute all the steps in your regular Dockerfile as if it were running on that platform. Buildx also tags the image (the -t parameter) and pushes it to the Docker registry of your choice if you specify --push.
Actually, it pushes one image per platform plus a manifest list joining those images. When another Docker client wants to run the image, the manifest is loaded and the platform it needs is selected.
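You can check the result of such a push yourself (using the example tag from above):

# shows the manifest list and the per-platform images behind the tag
docker buildx imagetools inspect docker_user/docker_image:latest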
Compiling inside Docker
For this to work, you'll need to compile the application inside the Docker build. That is recommended anyway, because compiling it locally and then copying it into the container will result in different images depending on the software installed on the machine building the image.
Follow the instructions here to create the needed Dockerfile.
Requirements
For this to work, the base image also has to support multiple architectures. You can check this in the Docker registry; it is the case for the .NET Core images. If your base image doesn't support the platform, it probably won't work. However, recompiling the entire image should work (as long as the base image supports that platform).
See it in action
There is also a GitHub Action for installing buildx in a GitHub runner. I use this for several of my libraries; see this workflow file or the result here.