Quick and easy way to get around Docker's image architecture specifications?

One of the reasons we switched to Docker a few months back was to eliminate the need to maintain VMs with our latest tools. We figured it would be a lot easier to simply pull down an image and get going. However, it's become quite a pain lately.
It doesn't run in a Windows VM (no nested-virtualization support, which took days of troubleshooting), getting it running on RHEL has become quite painful (with some saying Docker and RHEL don't work well together), and now I'm running into a platform-support issue with the Raspberry Pi 4.
When I try to run my container on the Raspberry Pi 4, it now tells me that the container's architecture doesn't match the host's:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
A little confusing, because I was hoping Docker would give us a lot more flexibility and compatibility across our customer platforms, but it's turning out to be quite painful.
From my understanding, we have to rebuild the image per architecture and push ARM images to ARM systems and amd64 images to everything else?
Is there not just a quick and dirty workaround that I can use to get the container to run and ignore the architecture?

Shared vs. Simple Tags
To indirectly address the issue you mentioned: you can use a shared tag that is multi-platform. The Docker Hub official-images documentation explains the difference between the two.
An example:
The "Simple Tags" enable docker run mongo:4.0-xenial to "do the right thing" across architectures on a single platform (Linux in the case of mongo:4.0-xenial).
The "Shared Tags" enable docker run mongo:4.0 to roughly work on both Linux and as many of the various versions of Windows that are supported
Force platform
To directly address the issue you mentioned: this warning HAS happened to others before; see the bitwarden Docker discussions on it. The fix is to force the platform when using docker run, like so:
docker run --platform linux/arm64 image_name_or_id
Make sure the --platform argument comes before the image name/ID.
Building multi-platform images
The docker run --platform flag was added at some point but never made it into the docs! It does show up when you run docker run --help:
--platform string   Set platform if server is multi-platform capable
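For example, a minimal multi-arch build with buildx might look like this (builder and image names are placeholders):
# one-time: create and select a builder that can target multiple platforms
docker buildx create --name multiarch --use
# build for both platforms at once and push the resulting manifest list
docker buildx build --platform linux/amd64,linux/arm/v7 -t yourrepo/yourimage:latest --push .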
Further reading: "Multi-Platform Docker Builds" and the buildx documentation.


Docker & Kubernetes & architecture: understanding platform differences

Intro
There is a --platform option for running a Docker image, and a platform config key for docker-compose.
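For instance, the compose equivalent of docker run --platform (a minimal sketch; service and image names are made up):
services:
  app:
    image: registry.example.com/myapp:latest
    platform: linux/amd64   # force this service's image platform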
Also, almost all official Docker images on hub.docker.com support several architectures under a single tag.
For example, the official Ubuntu image lists linux/amd64, linux/arm/v7, linux/arm64/v8 and several more.
Most servers (including Kubernetes nodes) are linux/amd64.
I upgraded my MacBook to a new one with Apple's own silicon chip (M1/M2...), and Docker Desktop now shows me a warning that the image's platform doesn't match the host's.
For official images (the ones shown without the yellow note) it automatically downloads the needed platform variant (I guess).
But for custom-built images (in a private repository like Nexus or Artifactory) I have no influence. Yes, I can build appropriate images for different platforms (e.g. with buildx) and push them to the private repository, but in companies where the repos are managed by DevOps this is tricky to do. They say that the server architecture is linux/amd64, and if I develop web-oriented software (PHP etc.) on a different platform, even if the version (tag) is the same, then the environment is different and there is no guarantee that it will work on the server.
I assumed it was only a difference in how instructions are translated between the software and the hardware.
I would like to understand the subject better; there is a lot of superficial information on the web, but few details.
Questions
what "platform/architecture" for Docker image does it really means? Like core basics.
Will you really get different code for interpreted programming languages?
It seems to me that if the wrong platform is specified, containers run very slowly. But how can this be measured (script performance, interaction with the host file system, etc.)?
TLDR
Build multi-arch images supporting multiple architectures
Always ensure that the image you're trying to run has a compatible architecture
what "platform/architecture" for docker image does it really means? Like core basics. Links would be appreciated.
It means that some of the compiled binary code within the image contains CPU instructions exlusive to that specific architecture.
If you run that image on the incorrect architecture, it'll either be slower due to the incompatible code needing to run through an emulator, or it might even not work at all.
Some images are "multi-arch", where your Docker installation selects the most suitable architecture of the image to run.
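You can check which architectures a tag provides without pulling it, e.g. for the official ubuntu image:
# prints one entry per supported platform (amd64, arm64, arm/v7, ...)
docker manifest inspect ubuntu:22.04 | grep architecture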
Will you really get different code for interpreted programming languages?
Different machine code, yes. But it will be functionally equivalent.
It seems to me that if the wrong platform is specified, containers run very slowly. But how can this be measured (script performance, interaction with the host file system, etc.)?
I recommend always ensuring you're running images built for your machine's architecture.
For the sake of science, you could do an experiment.
You can build an image that runs a simple batch job for two different architectures, then run both on your machine and compare how long the containers take to finish.
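A rough sketch of that experiment on an arm64 (M1) host, assuming a trivial Dockerfile in the current directory (tag names are made up):
# build the same image natively and for the foreign platform (needs BuildKit)
docker build --platform linux/arm64 -t bench:native .
docker build --platform linux/amd64 -t bench:emulated .
# compare wall-clock times; the emulated run should be noticeably slower
time docker run --rm bench:native
time docker run --rm bench:emulated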
Sources:
https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/

Quarkus container with state hangs on Apple M1

I am building a Quarkus-based (Keycloak 18) container in the following way:
Start a container from this image quay.io/keycloak/keycloak:18.0.0
Fill the running container with state (users, roles, clients) using Terraform
Commit the running container with docker commit running-container my-filled-keycloak-image
The whole process runs in a GitHub Actions pipeline.
The image is used regularly and runs quite normally. Only users on an Apple M1 seem to have problems with it: in most cases, starting a container simply hangs until a Docker timeout occurs. Sometimes the container manages to start, but with very poor performance.
The problem seems to be related to the Apple M1 architecture and up to now we do not have an idea how to fix this. Any help on this is greatly appreciated.
It looks like you are building images on your CI, which runs on the amd64 (Intel) architecture. Unfortunately, that architecture is not natively supported on the M1, which uses arm64, so your M1 users fall back to emulation (search for Rosetta), which is not perfect. The better option is to build a multi-arch image (one that contains amd64, arm64, ... variants), so users run the native architecture on their machines.
There are plenty of resources on how to build multi-arch (multi-platform) Docker images. A random one: https://jitsu.com/blog/multi-platform-docker-builds
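Since the pipeline already runs in GitHub Actions, the usual pattern is QEMU plus buildx. A minimal sketch (action versions and image name are assumptions; the docker commit step would need to be reworked into a Dockerfile build so both architectures get the state):
steps:
  - uses: actions/checkout@v3
  - uses: docker/setup-qemu-action@v2       # registers emulators for foreign architectures
  - uses: docker/setup-buildx-action@v2     # creates a multi-platform-capable builder
  - uses: docker/build-push-action@v4
    with:
      platforms: linux/amd64,linux/arm64
      tags: myorg/keycloak-filled:latest    # placeholder image name
      push: true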

Does Docker image include OS?

I have the Dockerfile below:
FROM openjdk:12.0.2
EXPOSE 8080
ADD ./build/libs/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The resulting Docker image encapsulates a Java program. When I deploy this image to a Windows Server or Linux host, does the image always include an OS (like Linux) that runs on top of the host OS (Windows Server or Linux)?
I am asking in the sense of the Docker image being a physical box that contains other boxes (one being openjdk): does this box also contain a Linux OS box that I could pull out of it (assuming that were possible) and install as a Linux OS on an empty machine?
That depends on what you call the "OS". It will always contain stuff from the distribution image it is built on.
For example, a Debian-based image will include apt and other Debian-specific tools. But most of the stuff you need on a "complete" (as in non-container) machine will have been removed to keep the image as small as possible.
It will not contain the kernel, as the container runs on the host machine and is controlled by the host's kernel.
The "official" OpenJDK images from the Docker Hub are available in variants based on a number of different Linux distributions. There is a cut-down Debian, an Alpine, and others. There are advantages and disadvantages to each.
The image will need to contain enough operating system dependencies to allow the JVM to run. It may also include basic diagnostic and management tools -- enough to carry out rudimentary troubleshooting in the container, anyway. You can expect all the images to contain at least basic console shell tools like "cp" and "cat", although they differ in implementation. For example, the Alpine variant gets these utilities from BusyBox, not from a conventional GNU/Linux installation.
It's possible to create a Docker image that contains no platform dependencies at all, but there's little incentive to be that minimal -- you'd just have to build more stuff into the application program itself.
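One quick way to see both halves of this in practice, using the image from the question:
# the userland (distribution files) comes from the base image
docker run --rm openjdk:12.0.2 cat /etc/os-release
# ...but the kernel version reported is the host's, since no kernel ships in the image
docker run --rm openjdk:12.0.2 uname -r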
It doesn't include the entire operating system, but the image will be dependent on either Linux or Windows; you can't build an image that runs on both from one Dockerfile.
The reason for the dependency is that a Docker container shares resources with its host machine in a carefully fenced-off way, and this mechanism is different on Windows and Linux (though to you, as a Docker user, the difference is invisible).

Can I run a docker container doing a x86 build on a IBM Power system?

Our build setup is baked into a large Docker container (basically a 2 GB image containing a complete x86 Linux in itself).
We have two ways to actually build: the official approach is a Jenkins environment (running on x86 hardware). But we also have a little "side x86 server" running RHEL 7. Developers can log into that server and kick off specific builds (using said Docker images) themselves.
Those servers will be shut down at some point, to be replaced with IBM Power8 machines (running RHEL 7 Little Endian for Power).
I am simply wondering: is there a chance that our existing build setup and Docker images will simply work on Power8? Or are there fundamental technical issues that make it unlikely and not even worth trying?
You can probably use your existing build methodology and scripts close to unchanged, but you'll need to rebuild the actual images.
You can't directly run x86 binaries on Power (at a very low level, the bytes of machine code are just different). Docker doesn't contain any sort of virtualization layer; it does a bunch of setup to isolate the container from the host, but then runs the binaries in an image directly.
If your Jenkins setup has enough parameters for image names and version tags, then you should be able to run the x86 and Power setups side-by-side; you need to encode the architecture somewhere in the built image name or tag; for instance, repo.example.com/app/build:20180904-power. (I don't know that one or the other is considered better if you control all of the machinery.) If you have a private repo, you could encode it earlier in the path, winding up with image names like repo.example.com/power/build:20180904.
You'd need to double-check that everywhere that has a Docker image reference has it correctly parameterized (which is a good practice anyways). That would include any direct docker run commands; any Docker Compose or Kubernetes YAML files or similar artifacts; and the FROM line of any Dockerfiles.
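On the Dockerfile side, the FROM line can take a build argument; a sketch reusing the example name above (the BASE_IMAGE argument name is made up):
# choose the per-architecture build image at build time
ARG BASE_IMAGE=repo.example.com/app/build:20180904-power
FROM ${BASE_IMAGE}
Running docker build --build-arg BASE_IMAGE=repo.example.com/app/build:20180904-amd64 . would then select the x86 variant instead.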
Existing build setup? Not sure!
Docker images? NO, don’t even try.
Docker images are actually multiple layers, which are stored on the filesystem through the corresponding storage driver and backing filesystem (both shown in the output of docker info).
If the storage driver/backing filesystem changes, which will likely be the case when the OS changes, older Docker images may no longer be valid, meaning they must be rebuilt for sure.
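Both values are easy to check on a given host, e.g.:
# storage driver in use (e.g. overlay2)
docker info --format '{{.Driver}}'
# the backing filesystem appears in the full output
docker info | grep -i 'backing filesystem'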

Docker swarm for usb devices

I'm trying to build a distributed Python application that connects several hosts with Android devices over USB. These hosts then connect over TCP to a central broker for job disbursement. I'm currently tackling the problem of supporting multiple Python builds for developers (Linux/Windows) as well as production (which runs an older OS requiring its own build of Python). On the surface, Docker seems like a good fit here, as it would allow me to support a single Python build.
However, Docker doesn't seem well suited to working with external hardware. There is the --device option to pass a specific device, but that requires the device to be present before the docker run command, and it doesn't persist across device reboots. I can get around that problem with --privileged, but Docker swarm currently does not support that (see issue 24862), so I'd have to manually set up the service on each host, which would not only be a pain, but would also lose the niceness of swarm's automatic deployment and rollout.
Does anyone have any suggestions on how to make something like this work with docker, or am I just barking up the wrong tree here?
You can try modifying the Docker source code and building Docker from source to support your requirement.
There is a hack for how to do that at the end of this issue:
https://github.com/docker/swarmkit/issues/1244
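For reference, the two approaches weighed in the question look roughly like this (device paths and the image name are examples):
# pass one specific device; it must exist when `docker run` starts
docker run --device /dev/bus/usb/001/004 my-adb-image
# privileged + bind-mounting the whole bus survives re-enumeration,
# but swarm services cannot use --privileged (issue 24862)
docker run --privileged -v /dev/bus/usb:/dev/bus/usb my-adb-image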
