Running a Docker container as amd64 using an M1 Mac - docker

I have a Dockerfile that creates an image, which I then run with docker compose together with a container built from a Postgres image. (This is to set up a local Airflow environment - we use the mwaa local runner.)
Recently I got a new M1 pro machine and I’m getting into issues running the container.
From my understanding, the problem is that the image is built and then run on my machine, which has a different CPU architecture, so pip looks for wheels for that architecture. My colleague has an Intel Mac and says he doesn't experience any issues.
The build phase is OK, but when I run the container, docker compose is set up to run an entrypoint script that also installs some Airflow providers and other dependencies, one of which is plyvel. plyvel fails to install and causes other packages not to install as well. When I remove plyvel from the requirements.txt file, the installation completes, but some of my Airflow providers are missing files or attributes, which creates its own issues.
I tried forcing Docker to build and run the image and container as amd64 by changing the build command to:
docker build --platform linux/amd64 --rm --compress $3 -t amazon/mwaa-local:2.2 ./docker
which works, but runs very slowly.
I also added platform: linux/amd64 in the docker-compose file to both the postgres and the local-runner services.
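Roughly, the compose change looks like this (image tags are approximate; the service names match my setup):

services:
  postgres:
    image: postgres
    platform: linux/amd64
  local-runner:
    image: amazon/mwaa-local:2.2
    platform: linux/amd64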
Then, when I spin up the container, it takes a long time to reach a working state where I can access the Airflow UI in the web browser, and the UI itself is very slow - every link takes a few seconds to process and redirect me. I believe this is due to emulation.
I then found this article:
https://medium.com/nttlabs/buildx-multiarch-2c6c2df00ca2
It says there is a faster way to run without emulation, but I didn't understand how to implement it.
In addition, found this Reddit thread:
https://www.reddit.com/r/docker/comments/qlrn3s/docker_on_m1_max_horrible_performance/
They suggest building and running the container inside a virtual machine; I'm not sure if that is the way to go in my situation.
I tried both Docker Desktop and Rancher Desktop (with dockerd), but both show the same symptoms.

Related

Using Docker and Cypress in Same Docker Image

Fair warning: I'm new to all of this, so there might be some mistakes in my thinking process.
I want to system test an application we are developing, and we ship this application via Docker, so that's what I want to test.
For GitLab CI, this means creating a Docker image which has Docker in Docker and Cypress, since that is what I'd like to use.
So just from checking the Docker docs I can see that Docker can be installed on a multitude of Linux distros, but not on Alpine - yet the official docker image is Alpine-based.
The Cypress docs, on the other hand, show that Cypress cannot be installed on Alpine. Only the package managers "apt-get" and "yum" are supported, which correspond to Ubuntu and Fedora, respectively.
So as far as I can tell, it's not possible to have both of these at once? Which would be absolutely baffling (but so is the package manager chaos I just learned about).
What I tried:
used the Docker image as a base and tried to install Cypress (does not work because there is no installation manual and the packages you need to install via apt-get don't exist for apk)
used the Cypress image as a base and tried to install Docker (does not work because the Cypress images don't work)
used another image and tried to install both (does not work because installing Docker inside the Docker container does not work, that's why they have the image provided)
used DinD with another distro (cruizba/ubuntu-dind, fails with "dockerd is not running after max time")
So... what am I missing? Is there any way to get to the point where I can use both Cypress and DinD in the same image?
There is an image named blackholegalaxy/cypress-dind which combines DinD and Cypress.
Sadly it's really old and there is no way to update Docker to the newest version easily.
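One direction that might work (an untested sketch - it assumes the cypress/base image is Debian-based and follows Docker's standard apt install steps):

FROM cypress/base:16
# Install Docker Engine + CLI from Docker's apt repository (Debian base assumed)
RUN apt-get update && apt-get install -y ca-certificates curl gnupg lsb-release \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg \
    && echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" > /etc/apt/sources.list.d/docker.list \
    && apt-get update && apt-get install -y docker-ce docker-ce-cli containerd.io

For real DinD the container still has to run with --privileged and start dockerd itself; alternatively, mount the host's /var/run/docker.sock and skip the inner daemon entirely.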

ng command very slow on docker volume (Windows)

I use Docker Desktop on Windows 10 (WSL) and need to use Angular on a Docker Volume (with the -v option). Everything works correctly, but the "ng" command seems very slow when it's run from the volume.
I first noticed this running ng serve: the command hangs for more than 1 minute with no log (even in verbose mode) before beginning the compilation. But even ng --version hangs for 15 seconds when it's run from any directory in the volume (the version is 8.1.2) - without any error message (and no docker log). If I run ng --version from any other folder in the container (not in the volume), the version is displayed immediately.
Would you know the reason for this delay, or any way to understand and solve it?
I suspect that the main issue is due to the fact that ng commands are read/write intensive. That being said, the Visual Studio Code devcontainer doc indicates:
While using this approach to bind mount the local filesystem into a container is convenient, it does have some performance overhead on Windows and macOS. There are some techniques that you can apply to improve disk performance, or you can open a repository in a container using an isolated container volume instead.
Therefore, instead of mounting the current directory, it would be better in that case to clone the repository in an isolated container volume.
To do so, in VS Code, open the command palette by pressing F1 and select Remote-Containers: Clone Repository in Container Volume. This will create a unique volume for your container with your repository inside.
The techniques mentioned in the quote can be found here.
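If you'd rather not use the VS Code command, a rough manual equivalent is to clone into a named volume and work from there (volume name, repository URL, and Node version are placeholders):

docker volume create ng-workspace
docker run -it --rm -v ng-workspace:/workspace -w /workspace node:12 bash
# then, inside the container:
git clone https://github.com/your-org/your-app.git .
npm install
npx ng version   # should now respond quickly, since /workspace lives on a native Linux volume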

Enable gpu support by default on docker containers

I'm using a platform (Cytomine) on Ubuntu 18.04 to run some containerized deep learning applications (this platform handles the Docker images and containers automatically, so I only need to create the image and provide its download URL to the platform). So far it's working well, but now I need to enable GPU support to run the model efficiently. So I did some local tests with nvidia-docker to manually run the model container with GPU support; it was really easy to get working because I just had to add one option to the run command:
docker run --gpus all
However, because I cannot add this option to the code on the Cytomine platform, I need to find a way of enabling that option by default for all the containers run by Docker.
I tried adding this option to the files /etc/docker/daemon.json and /etc/docker/key.json and then restarted Docker with sudo systemctl restart docker. However, it didn't work.
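(For reference, the daemon.json change that is usually suggested for this - making nvidia the default runtime, as described in the nvidia-container-runtime docs - looks roughly like the following; I may have the details wrong:)

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}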
Also, I found how to create docker config files (docker config); however, this seems to work only with Docker Swarm and I'm not going to use a Swarm for this project.
Thus, I'm looking for a straightforward solution that can be deployed properly. Is there any way to enable this option (--gpus all) by default when running any Docker container? (Like somehow including it in the Dockerfile?)
Thanks!

Rapidly modifying Python app in K8S pod for debugging

Background
I have a large Python service that runs on a desktop PC, and I need to have it run as part of a K8S deployment. I expect that I will have to make several small changes to make the service run in a deployment/pod before it will work.
Problem
So far, if I encounter an issue in the Python code, it takes a while to update the code and get it deployed for another round of testing. For example, I have to:
1. Modify my Python code.
2. Rebuild the Docker container (which includes my Python service).
3. scp the Docker container over to the Docker Registry server.
4. docker load the image, update tags, and push it to the Registry back-end DB.
5. Manually kill off currently-running pods so the deployment restarts all pods with the new Docker image.
This involves a lot of lead time each time I need to debug a minor issue. Ideally, I'd prefer being able to just modify the copy of my Python code already running on a pod, but I can't kill it (since the Python service is the default app that is launched, with PID=1), and K8S doesn't support restarting a pod (to my knowledge). Alternatively, if I kill/start another pod, it won't have my local changes from the pod I was previously working on (which is by design, of course, but doesn't help with my debug efforts).
Question
Is there a better/faster way to rapidly deploy (experimental/debug) changes to the container I'm testing, without having to spend several minutes recreating container images, re-deploying/tagging/pushing them, etc.? If I could find and mount (read-write) the Docker image, that might help, as I could edit the data within it directly (i.e. the new Python changes) and just kill pods so the deployment re-creates them.
There are two main options: one is to use a tool that reduces or automates that flow, the other is to develop locally with something like Minikube.
For the first, there are a million and a half tools but Skaffold is probably the most common one.
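For example, a minimal skaffold.yaml is roughly the following (the image name and manifest path are placeholders); skaffold dev then watches your source, rebuilds the image, and redeploys on every change:

apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: myimagename
deploy:
  kubectl:
    manifests:
      - k8s/deployment.yaml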
For the second, you do something like ( eval $(minikube docker-env) && docker build -t myimagename . ) which will build the image directly in the Minikube docker environment so you skip steps 3 and 4 in your list entirely. You can combine this with a tool which detects the image change and either restarts your pods or updates the deployment (which restarts the pods).
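A rough version of that loop (deployment name is a placeholder) could be:

# Build straight into Minikube's Docker daemon, then bounce the deployment
eval $(minikube docker-env)
docker build -t myimagename .
kubectl rollout restart deployment/my-python-service
# Note: the pod spec needs imagePullPolicy: IfNotPresent (or Never) so k8s picks up the locally built image.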
Also, FWIW, using scp and docker load is very non-standard; generally those steps would be combined into a single docker push.
I think your pain point is that the container build depends on the Python source code. You can find a way to exclude the source code from the Docker image build phase.
In my experience, I would create a Docker image that only includes the Python package dependencies, and use a volume to map the source code directory into the container, so you don't need to rebuild the image unless dependencies are added or removed.
Example
I don't have much experience with k8s, but I believe it must be more or less the same as with docker run.
Dockerfile
# Bake only the Python package dependencies into the image; the source code is mounted in at run time
FROM python:3.7-stretch
COPY ./python/requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
# bash as the entrypoint so you can start (and restart) the service manually inside the container
ENTRYPOINT ["bash"]
Docker container
Deploy your code to the server with scp, then map the host source path to the container source path like this:
docker run -it -d -v /path/to/your/python/source:/path/to/your/server/source --name python-service your-image-name
With volume mapping, your container no longer depends on the source code being baked into the image; you can change your source code without rebuilding the image.
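If the k8s side really does work the same way, the rough equivalent would be a hostPath volume in the pod spec (untested; names and paths are placeholders, and the source has to exist on the node the pod is scheduled to):

apiVersion: v1
kind: Pod
metadata:
  name: python-service
spec:
  containers:
    - name: python-service
      image: your-image-name
      volumeMounts:
        - name: source
          mountPath: /path/to/your/server/source
  volumes:
    - name: source
      hostPath:
        path: /path/to/your/python/source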

Can I build Docker containers for a Raspberry Pi on an AMD64 machine?

I am exploring using Docker containers on a Raspberry PI to help with managing upgrades to my application and the versions of NodeJS that it runs with.
I am wondering what the best way to build the containers would be. I could build the containers on the production machine, but it would be much more convenient if I could start with (say) the latest armhf nodejs image and build a new image with the application sources added (along with the npm modules and bower components the application needs) on my home desktop (Debian AMD64), laptop (OSX), or the Windows 7 machine I have available at work. I don't need to run the containers, just build them.
One slight niggle is that the code needs to be kept confidential, so I can't put the resulting containers in any public repository. Can I ensure the containers have manageable names, and can I just copy them around between machines?
AFAIK containers are architecture agnostic. You should be able to modify them on a host with a different architecture, but you will be unable to enter them. Entering basically means executing a program (e.g. a shell) in the container's context. Since the container's shell is not executable on your host, this won't work. Consequently, cross-compiling within the container is also not an option.
However, if you cross-compile on the outside, you should be able to add your executables to the image, move it over to your pi and run it.
You can move Docker images without any public repository, either by running a private registry or by using docker save IMAGE > image.tar to store an image in a tarball, moving it to the Pi, and using docker load -i image.tar to restore it.
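For example, the tarball route might look like this (image name and host are placeholders):

# on the build machine
docker save myapp:latest > myapp.tar
scp myapp.tar pi@raspberrypi.local:/home/pi/
# on the Raspberry Pi
docker load -i /home/pi/myapp.tar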
