I am building a Quarkus-based (Keycloak 18) container image in the following way:
Start a container from this image quay.io/keycloak/keycloak:18.0.0
Fill the running container with state (users, roles, clients) using Terraform
Commit the running container with docker commit running-container my-filled-keycloak-image
The whole process runs in a GitHub Actions pipeline.
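Roughly, that flow looks like the following sketch (the container/image names and admin credentials are placeholders, and the start-dev mode plus the KEYCLOAK_ADMIN variables are assumptions about how the instance is bootstrapped):

# Start Keycloak 18 with a bootstrap admin user (placeholder credentials)
docker run -d --name running-container -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:18.0.0 start-dev
# ... wait until Keycloak is up, then create users, roles and clients via Terraform ...
terraform apply -auto-approve
# Snapshot the filled container as a new image
docker commit running-container my-filled-keycloak-image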
The image is used on a regular basis and normally runs fine. Only users of an Apple M1 seem to have problems with it. In most cases, starting a container simply gets stuck and hangs until a Docker timeout occurs. Sometimes the container does manage to start, but with very low performance, running very slowly.
The problem seems to be related to the Apple M1 architecture, and so far we have no idea how to fix it. Any help on this is greatly appreciated.
It looks like you are building images on your CI, which runs on the amd64 (Intel) architecture. Unfortunately, this architecture is not natively supported on the M1, which uses arm64, so your users with M1 machines fall back to emulation (see Rosetta), which is not perfect. The better option is to build a multi-arch image (an image that contains amd64, arm64, ... variants), so users will use the native architecture on their machines.
There are plenty of resources on how to build multi-arch (multi-platform) Docker images. A random one: https://jitsu.com/blog/multi-platform-docker-builds
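As a rough sketch of what those guides describe (the builder name, registry and image name here are placeholders), a buildx-based multi-arch build and push looks something like this:

# One-time setup of a builder that can target multiple platforms
docker buildx create --name multiarch --use
# Build for amd64 and arm64 in one go and push the combined manifest
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/my-keycloak-image:latest --push .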
Related
Intro
There is a --platform option for running a Docker image, and a platform config option for docker-compose.
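For reference, a minimal sketch of both (the image name, service name and registry are just examples):

# Force a specific platform for a single run
docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m

# docker-compose.yml fragment
services:
  app:
    image: my-private-registry/my-app:latest
    platform: linux/amd64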
Also, almost all official Docker images on hub.docker.com list several supported architectures under a single tag.
For example, the official Ubuntu image provides amd64, arm64/v8 and several other architecture variants for the same tag.
Most servers (including those in Kubernetes clusters) are linux/amd64.
I upgraded my MacBook to a new one with Apple's own silicon chip (M1/M2...), and now Docker Desktop shows me a warning (a yellow note) on images built for a different architecture.
For official images (you can see they have no yellow note) Docker automatically downloads the needed platform variant (I guess).
But for custom-built images (in a private repository like Nexus or some other artifact store) I have no influence. Yes, I can build appropriate images for different platforms (e.g. with buildx) and push them to the private repository, but in companies where the repos are managed by DevOps, that is tricky to do. They say the server architecture is linux/amd64, and that if I develop web-oriented software (PHP etc.) on a different platform, then even if the version (tag) is the same, the environment is different and there is no guarantee that it will work on the server.
I assumed that the only difference was in how the hardware interprets the software's instructions.
I would like to understand the subject better. There is a lot of superficial information on the web, but few details.
Questions
what "platform/architecture" for Docker image does it really means? Like core basics.
Will you really get different code for interpreted programming languages?
It seems to me that containers run very slowly if the wrong platform is specified. But how can this be measured (script performance, interaction with the host file system, etc.)?
TLDR
Build multi-arch images supporting multiple architectures
Always ensure that the image you're trying to run has compatible architecture
what "platform/architecture" for docker image does it really means? Like core basics. Links would be appreciated.
It means that some of the compiled binary code within the image contains CPU instructions exlusive to that specific architecture.
If you run that image on the incorrect architecture, it'll either be slower due to the incompatible code needing to run through an emulator, or it might even not work at all.
Some images are "multi-arch", where your Docker installation selects the most suitable architecture of the image to run.
Will you really get different code for interpreted programming languages?
Different machine code, yes. But it will be functionally equivalent.
It seems to me that containers run very slowly if the wrong platform is specified. But how can this be measured (script performance, interaction with the host file system, etc.)?
I recommend always ensuring you're running images built for your machine's architecture.
For the sake of science, you could do an experiment.
You can build an image that is meant to run a simple batch job for two different architectures, and then you can try running them both on your machine. Compare the time it takes the containers to finish.
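A minimal sketch of such an experiment, assuming an arm64 (Apple Silicon) host with emulation available and using the public python image as the batch job:

# Native arm64 run
time docker run --rm --platform linux/arm64 python:3.11-slim python -c "print(sum(i*i for i in range(10**7)))"
# Same workload under amd64 emulation; expect it to be noticeably slower
time docker run --rm --platform linux/amd64 python:3.11-slim python -c "print(sum(i*i for i in range(10**7)))"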
Sources:
https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific#:~:text=Docker%20images%20obviously%20contain%20processor,which%20makes%20them%20architecture%20dependent.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/
What are the "best practices" workflow for developing and testing an image (locally I guess) that is going to be deployed into a K8s cluster, and that has different hardware than my laptop?
To explain the context a bit, I'm running some deep learning code that needs gpus and my laptop doesn't have any so I launch a "training job" into the K8s cluster (K8s is probably not meant to be used this way, but is the way that we use it where I work) and I'm not sure how I should be developing and testing my Docker images.
At the moment I'm creating a container that has the desired gpu and manually running a bunch of commands till I can make the code work. Then, once I got the code running, I manually copy all the commands from history that made the code work and then copy them to a local docker file on my computer, compile it and push it to a docker hub, from which the docker image is going to be pulled the next time I launch a training job into the cluster, that will create a container from it and train the model.
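Concretely, each iteration of that loop looks roughly like this (the image, registry and job names are placeholders):

# Transcribe the working commands from the GPU container into a Dockerfile, then:
docker build -t my-registry/training-image:dev .
docker push my-registry/training-image:dev
# Relaunch the training job so the cluster pulls the new tag, then debug via:
kubectl logs -f job/my-training-job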
The problem with this approach is that if there's a bug in the image, I only find out that my Dockerfile is wrong once it has been deployed to the cluster, and then I have to start the whole process over again to fix it. Also, finding bugs from the output of kubectl logs is very cumbersome.
Is there a better way to do this?
I was thinking of installing Docker inside the container and using IntelliJ (or any other IDE) to attach to the container via SSH so I can develop and test the image remotely, but I've read in many places that this is not a good idea.
What would you recommend then instead?
Many thanks!!
One of the reasons we switched to Docker a few months back was to eliminate the need to maintain VMs with our latest tools, etc. We figured it would be a lot easier to simply pull down the image and get going. However, it's become quite a pain lately.
It doesn't run in a Windows VM (because of the lack of nested-VM support, which cost days of troubleshooting), getting it running on RHEL has become quite painful (with some saying Docker and RHEL don't work well together), and now I'm running into a platform support issue with the Raspberry Pi 4.
When trying to run my container on the Raspberry Pi 4, it's now telling me that the container's architecture doesn't match the host's architecture.
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v7) and no specific platform was requested
This is a little confusing, because I was hoping Docker would give us a lot more flexibility and compatibility across our customer platforms, but it's turning out to be quite painful.
From my understanding, we have to rebuild the entire image and push out ARM images to ARM systems and amd64 images to the others?
Is there not just a quick and dirty workaround that I can use to get the container to run and ignore the architecture?
Shared vs. Simple Tags
To indirectly address the issue you mentioned, you can use a shared tag that is multi-platform. You can see the difference here.
An example:
The "Simple Tags" enable docker run mongo:4.0-xenial to "do the right thing" across architectures on a single platform (Linux in the case of mongo:4.0-xenial).
The "Shared Tags" enable docker run mongo:4.0 to roughly work on both Linux and as many of the various versions of Windows that are supported
Force platform
To directly address the issue you mentioned: this warning HAS happened to others before. Check out the following Bitwarden Docker discussion. The fix is to force the platform when using docker run, like so:
docker run --platform linux/arm64 image_name_or_id
Make sure the --platform argument comes before the image name/ID.
Building multi-platform images
The docker run --platform parameter was probably added here, but never made it into the docs! It is shown when you run docker run --help:
--platform string   Set platform if server is multi-platform capable
Multi-Platform Docker Builds
buildx
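As a minimal sketch of how a single Dockerfile can serve several platforms with buildx, the automatic TARGETOS/TARGETARCH build arguments can be used to pick per-architecture artifacts (the base image choice, tool name and download URL are placeholders):

FROM alpine:3.18
ARG TARGETOS
ARG TARGETARCH
# buildx sets TARGETOS/TARGETARCH per platform, e.g. linux/amd64 and linux/arm64
RUN wget -O /usr/local/bin/mytool "https://example.com/mytool-${TARGETOS}-${TARGETARCH}" && chmod +x /usr/local/bin/mytool

Built with docker buildx build --platform linux/amd64,linux/arm64 -t ... --push ., this produces a single tag that resolves to the right variant on each host.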
We are planning to shift our releases to Docker; i.e. the software that we release will be based on Docker. We also have an HPC cluster available.
I tried searching the internet but could not find a reference on making docker build faster by utilising GPUs. If anyone is doing the same or knows how it can be achieved, kindly share.
Edit: I am not talking about accessing the GPU from inside the container; I want to use the GPU while running docker build.
EDIT: I am not sure why the question was marked as a duplicate. Do we not understand the difference between docker build and docker run? And how do people who don't even understand the topic get the power to mark questions as duplicates?
I'm trying to build a distributed Python application that connects several hosts with Android devices over USB. These hosts then connect over TCP to a central broker for job disbursement. I'm currently tackling the problem of supporting multiple Python builds for developers (Linux/Windows) as well as production (which runs an older OS that requires its own build of Python). On the surface, Docker seems like a good fit here, as it would allow me to support a single Python build.
However, Docker doesn't seem well suited to working with external hardware. There is the --device option to pass a specific device, but that requires the device to be present before the docker run command, and it doesn't persist across device reboots. I can get around that problem with --privileged, but docker swarm currently does not support that (see issue 24862), so I'd have to manually set up the service on each of the hosts, which would not only be a pain, but would also mean losing the niceness of swarm's automatic deployment and rollout.
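For illustration, the two approaches I mean look like this (the device path and image name are just examples):

# Pass one specific device that must already exist when the container starts
docker run --device=/dev/bus/usb/001/004 my-adb-image
# Privileged workaround: mount the whole USB bus so re-plugged devices stay visible,
# but swarm services don't allow --privileged
docker run --privileged -v /dev/bus/usb:/dev/bus/usb my-adb-image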
Does anyone have any suggestions on how to make something like this work with docker, or am I just barking up the wrong tree here?
You can try developing on the Docker source code and building Docker from source to support your requirement.
There is a hack for how to do that at the end of this issue:
https://github.com/docker/swarmkit/issues/1244