I'm trying to build an ARM (arm32v7) container, but using an x86_64 host. While I know there are some pretty cool things like Resin using QEMU shenanigans, and Multiarch for cross-building generic containers, I have a slight issue: the container I'm trying to build starts off as multi-arch, so Docker always chooses the x86 image in the FROM instruction.
I want to build an ARM container from a multi-arch Rust image on an x86 host. The problem is, I can't find any documentation that explains how to explicitly say I want to start with the ARM image and build from that, not the x86 one. Additionally, the tags on the image don't disambiguate between architectures, so I can't use those to select the starting image.
I've tried editing the /etc/docker/daemon.json file to contain:
{
  "labels": [ "os=linux", "arch=arm32v7" ],
  "experimental": true
}
but that hasn't helped at all: docker pull still retrieves the x86 images. The purpose of all this is to cut compile times for containers ultimately running on Raspberry Pi; compile times are super slow as it stands.
Are there any ways to explicitly say that I want to build starting with the ARM image?
It is possible to build simple Docker containers for another architecture ("cross-compile") by using an appropriate base image for that architecture. By simple, I mean images that don't need a RUN command in their Dockerfile to be built. This is because Docker doesn't have the ability to actually run commands in a container for another architecture. While this sounds restrictive, it can be quite powerful when combined with multi-stage builds to cross-compile code.
Let's walk through this step by step. First off, let's turn on experimental mode for our Docker client so that docker manifest is available, by adding the following option to ~/.docker/config.json:
{
  "experimental": "enabled"
}
We can then use docker manifest inspect debian:stretch to show the fat manifest that contains a digest for the image in the architecture we want to build for. For example, the arm32v7 image has "architecture": "arm" and "variant": "v7" specified under the platform key. Using jq, we can extract the digest for this image programmatically:
docker manifest inspect debian:stretch | jq -r '.manifests[] | select(.platform.architecture == "arm" and .platform.variant == "v7") | .digest'
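For reference, the portion of the fat manifest that this jq filter selects from looks roughly like this (digests elided, unrelated entries omitted):
{
  "manifests": [
    {
      "digest": "sha256:...",
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "digest": "sha256:...",
      "platform": { "architecture": "arm", "os": "linux", "variant": "v7" }
    }
  ]
}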
This digest can then be used in the FROM command in a Dockerfile:
FROM debian@sha256:d01d682bdbacb520a434490018bfd86d76521c740af8d8dbd02397c3415759b1
It is then possible to COPY a cross-compiled binary into the image. This binary could come from a cross-compiler on your machine or from another container in a multi-stage build. To get rid of the hard-coded digest in the Dockerfile's FROM line, it's possible to externalise it through a Docker build argument (ARG), as sketched below.
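For illustration, a minimal sketch of that pattern, assuming a Rust binary cross-compiled for the armv7-unknown-linux-gnueabihf target (the ARG name, binary name, and paths are placeholders):
# ARG declared before FROM so it can be used in the FROM line
ARG BASE_DIGEST
FROM debian@${BASE_DIGEST}
COPY target/armv7-unknown-linux-gnueabihf/release/myapp /usr/local/bin/myapp
The digest can then be supplied at build time:
DIGEST=$(docker manifest inspect debian:stretch | jq -r '.manifests[] | select(.platform.architecture == "arm" and .platform.variant == "v7") | .digest')
docker build --build-arg BASE_DIGEST=$DIGEST -t myapp:arm32v7 .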
I wonder if it is possible to make Docker automatically mount volumes during the build phase or when running a container. With podman it is easy, using /usr/share/containers/mounts.conf, but I need to use Docker CE.
If it is not, can I somehow use the host's RHEL subscription during the Docker build phase? I need to use a RHEL UBI image and I have to use my company's Satellite.
A container image build in docker is designed to be self-contained and portable: it shouldn't matter whether you run the build on your host or on a CI server in the cloud. To achieve that, builds rely on the build context and args passed to the build command, rather than other settings on the host, where possible.
buildah seems to have taken a different approach with their tooling, allowing you to use components from the host in your build, giving you more flexibility, but also fragility.
That's a long way of saying the "feature" doesn't exist in docker, and if it gets created, I doubt it would look like what you're describing. Instead, with buildkit, they allow you to inject secrets from the build command line, which are mounted into the steps where they are required. An example of this is available in the buildkit docs:
# syntax = docker/dockerfile:1.3
FROM python:3
RUN pip install awscli
RUN --mount=type=secret,id=aws,target=/root/.aws/credentials aws s3 cp s3://... ...
And to build that Dockerfile, you would pass the secret as a CLI arg:
$ docker build --secret id=aws,src=$HOME/.aws/credentials .
I want to build a Docker image for AMD and ARM Graviton2 processors. I already know about the multi-arch CLI command docker buildx build --platform linux/amd64,linux/arm64, about manifests, and about the fact that Docker will pull the right image variant matching the architecture.
I wonder whether, for ARM, I have to use arm64v8/ubuntu:20.04 as the parent in my Dockerfile, or whether it's fine to use ubuntu:20.04 for both. Will it work the same way on both architectures? What's the purpose of the official arm64v8 Docker Hub repo?
There is a significant difference in build times - 5min with FROM ubuntu:20.04 vs 30min with FROM arm64v8/ubuntu:20.04.
OK, so I figured out that ubuntu:20.04 and arm64v8/ubuntu:20.04 resolve to exactly the same SHA. ubuntu:20.04 is simply the multi-arch parent of all these per-arch images; if you run docker manifest inspect ubuntu:20.04 you will see it all.
So it's clear that the arm64v8/ubuntu:20.04 repo is only for the case where you want to build an ARM image on a different architecture (if you don't want to use the multi-arch buildx command). In that case you have to start your Dockerfile with FROM arm64v8/ubuntu:20.04.
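To see this for yourself, you can list the per-architecture digests behind the multi-arch tag, reusing the jq pattern from earlier:
docker manifest inspect ubuntu:20.04 | jq -r '.manifests[] | "\(.platform.architecture)\t\(.digest)"'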
I have a Dockerfile that pulls FROM hivemq/hivemq-ce. This works well on "standard" platforms but not on the Raspberry Pi. So I built the image for arm64 myself directly on the RasPi following the tutorial in the official HiveMQ repo and pushed it to my private docker registry. The Dockerfile works well on RasPi if I change the FROM line to FROM my-private-registry/hivemq-ce.
So now I have images that work on different platforms in different sources. But how can I make my Dockerfile work on all platforms? Is there any way to pull from different sources for different architectures?
As outlined here, docker supports multiple CPU architectures and will select the correct image for the current platform. So you could build a non-arm64 image for frederikheld/hivemq-ce and push it to the same location without affecting the arm64 image.
You should be able to run docker manifest inspect frederikheld/hivemq-ce to see the available architectures for a given image.
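One way to combine separately built images under a single multi-arch tag is docker manifest create. A sketch, assuming you have pushed per-arch tags such as :amd64 and :arm64 (those tag names are hypothetical):
docker manifest create frederikheld/hivemq-ce:latest \
  frederikheld/hivemq-ce:amd64 \
  frederikheld/hivemq-ce:arm64
docker manifest push frederikheld/hivemq-ce:latest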
I went with this approach:
start.sh:
...
if [ "$(uname -m)" = "aarch64" ]; then
docker-compose -f docker-compose.aarch64.yml up -d --build --force-recreate
else
docker-compose up -d --build --force-recreate
fi
...
This requires one standard docker-compose.yml plus an additional docker-compose.<architecture>.yml for each architecture that has different needs, along the lines of the sketch below.
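For illustration, the per-architecture override might look something like this (the service name and Dockerfile name are hypothetical; Dockerfile.aarch64 would be the variant whose FROM line points at my-private-registry/hivemq-ce):
# docker-compose.aarch64.yml
version: "3"
services:
  broker:
    build:
      context: .
      dockerfile: Dockerfile.aarch64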
It's not great, but it works in my environment.
I'm still open for better solutions though!
I'm using centos:6 and need to build an image (using a Dockerfile) which has a number of RPMs installed (an Oracle client, in fact). I don't want to COPY/ADD the RPMs into the image, as that would make the image bulky (and I'd have to remove the RPMs after installing, anyway).
Is there a way to mount a folder on the host (itself CentOS) containing the RPMs into the image, via the Dockerfile and/or any option of the docker build command, during the build phase?
There's no way, according to the docs for build and run, as well as from my experience.
Mounting things is done when you're running a container, rather than when building an image.
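For what it's worth, the run-time equivalent of what you're after looks like this (the host path is a placeholder); you could install from the mount in a running container and then docker commit the result, at the cost of reproducibility:
docker run -it -v /path/to/rpms:/mnt/rpms:ro centos:6 /bin/bash
# then, inside the container:
yum localinstall -y /mnt/rpms/*.rpm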
Can images in Docker be built from source code? What I mean is, I want to build my environment with several components and their dependencies, and I want to build the components from their source code. Does Docker allow me to do something like that?
Sounds like you want a dynamic Docker build process. For this you need Docker 1.9 or later; use --build-arg to pass argument variables. You can build multiple images from a single Dockerfile, passing in different argument values each time.
Obviously this suffers from the reproducibility issue discussed.
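A minimal sketch of that pattern (the ARG name, repository URL, and build steps are placeholders):
FROM ubuntu
ARG GIT_BRANCH=master
# clone and build a different branch depending on the build argument
RUN apt-get update && apt-get install -y git build-essential \
    && git clone --branch ${GIT_BRANCH} https://example.com/project.git /src \
    && make -C /src
Two images from the same Dockerfile:
docker build --build-arg GIT_BRANCH=master -t project:stable .
docker build --build-arg GIT_BRANCH=dev -t project:dev .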
Yes, it allows you to do that. You need to start with a base image. For example Ubuntu:
docker pull ubuntu
docker run -t -i ubuntu /bin/bash
After that you will have a bash shell running inside your container. Then you can apt-get stuff, run code, change configurations, clone repos, and whatever else you want. Afterwards, to convert your container into an image, you need to commit the container.
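The commit step looks like this (the container ID and image name are placeholders):
docker ps -a                      # find the ID of the container you just worked in
docker commit <container-id> myenv:v1
docker run -t -i myenv:v1 /bin/bash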
Be aware that this is not the Docker way of building infrastructure. The correct way is to create a recipe for building your images, using other base images and standard Dockerfile instructions. This will allow your infrastructure to be stateless, faster to build, and more reproducible.
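For illustration, the recipe equivalent of the interactive session above might look like this (package names and repository URL are placeholders):
FROM ubuntu
RUN apt-get update && apt-get install -y git build-essential
RUN git clone https://example.com/yourproject.git /src
RUN make -C /src && make -C /src install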