I am running 30+ containers (Redis, MySQL, RabbitMQ, .NET Core, Flask, and others) on Ubuntu 20.04 Server.
Lately, at random intervals, Docker builds as well as other Docker commands run really slowly.
For example, a Flask app that normally builds in under 10 seconds sometimes takes 30 minutes. I know it's not a cache issue, because the build stays stuck on COPY directives that are supposed to copy a single .py file.
The same is true of Docker commands like ps, stats, and logs.
Whenever this happens I monitor resource usage, and more than 70% of RAM and CPU is available.
My Docker version is 23.0.1, build a5ee5b1, and my containerd version is containerd.io 1.6.16.
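For reference, the kind of checks that narrow this down look roughly like the sketch below; it assumes systemd and the default /var/lib/docker data root, and the exact device names will differ per system:
# Watch per-device I/O latency and utilisation while a slow build runs;
# high await/%util with an idle CPU usually points at storage rather than compute
iostat -xz 2
# See how much space images, containers and volumes are taking up
docker system df
df -h /var/lib/docker
# Look for daemon-side warnings or timeouts around the time of the slowdown
journalctl -u docker.service --since "1 hour ago" | tail -n 50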
We put together a very powerful machine at my company for CI using GitHub Actions. Because the machine has plenty of capacity, we run ~15 GitHub Actions runners directly on the Linux host. We also run all of our jobs using the container syntax, i.e.
build-js-servers:
  name: Build JS Servers
  runs-on: [ self-hosted, docker ]
  container:
    image: node:16
We're now running into an issue where we create so many dangling containers that the device runs out of space within a few hours. We have a nightly job that runs docker system prune --all -f on the machine, but it doesn't appear to be enough.
All of the containers and images that GitHub Actions creates are throwaway: they waste space and unnecessarily wear out our SSD. Is there a way to configure these temporary containers so they are never saved to disk at all? That would be the ideal solution.
You can use a cronjob to run docker system prune --all -f every day at midnight.
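For example, an entry like this in root's crontab is a minimal sketch of that nightly cleanup (the log path is just an example):
# Edit with: sudo crontab -e
# m h dom mon dow  command
0 0 * * * /usr/bin/docker system prune --all -f >> /var/log/docker-prune.log 2>&1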
Another option is to use a post-job script to clean up the workspace after job execution. That feature is still in beta, so in the meantime consider adding a step at the very end of your workflow that runs rm -rf to remove the files from the GitHub workspace.
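As a sketch, that final cleanup step can be as simple as the shell command below; GITHUB_WORKSPACE is set by the runner, and the :? guard stops rm -rf from ever expanding to an empty path if the variable is somehow unset:
# Remove everything the job left behind in the runner's workspace
rm -rf "${GITHUB_WORKSPACE:?}"/*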
Also consider mounting a secondary device over NFS, such as Amazon EFS or Google Cloud Filestore, which offers elastic capacity; you can then point the Docker daemon at the mount by setting the data-root parameter in its configuration.
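A minimal sketch of that daemon change, assuming the new device is mounted at /mnt/docker-data and there is no existing /etc/docker/daemon.json to preserve:
sudo systemctl stop docker
# Point the daemon's storage at the mounted device (the path is an example)
echo '{ "data-root": "/mnt/docker-data" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker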
I run numerous containers on a single Docker instance, a few of which need to move files between local and remote file systems that are mounted on the host as CIFS shares and then presented to the containers as volumes. In the past I haven't had any issues transferring data between these volumes; recently, however, performance has dropped and transfers run at less than 10 MB/s when they used to reach around 60 MB/s.
What are some things I can check in the configuration of either the Docker engine on my server or the containers themselves?
I'm using the docker-ce package on CentOS 7 for these containers.
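For anyone debugging something similar, one way to narrow it down (not a fix, just a comparison) is to measure raw throughput on the host's CIFS mount and then repeat the same test inside a container that has the share mounted as a volume; the container name and paths below are placeholders:
# On the host, write directly to the CIFS mount
dd if=/dev/zero of=/mnt/cifs-share/testfile bs=1M count=512 conv=fsync
# Inside a running container, write to the same share presented as a volume
docker exec -it mycontainer dd if=/dev/zero of=/data/testfile bs=1M count=512 conv=fsync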
I have a Dockerfile that creates an image, which I then run with Docker Compose together with a container based on a Postgres image. (This sets up a local Airflow environment; we use the mwaa local runner.)
Recently I got a new M1 Pro machine and I'm running into issues with the container.
The problem, as I understand it, is that the image is built and run on a machine with a different CPU architecture, which causes pip to look for wheels for that architecture. My colleague has an Intel Mac and says he doesn't experience any issues.
The build phase is fine, but when I run the container, Docker Compose runs an entrypoint script that also installs some Airflow providers and other dependencies. One of them is plyvel, which fails to install and causes other packages not to install as well. When I remove plyvel from the requirements.txt file the installation completes, but some of my Airflow providers are missing files or attributes, which creates its own issues.
I tried forcing Docker to build and run the image and container as amd64 by changing the build command to:
docker build --platform linux/amd64 --rm --compress $3 -t amazon/mwaa-local:2.2 ./docker
which runs, but very slowly.
I also added platform: linux/amd64 to both the postgres and local-runner services in the docker-compose file.
Then, when I spin up the container, it takes a long time to reach a working state where I can access the Airflow UI in the browser, and the UI itself is very slow; every link takes a few seconds to process and redirect me. I believe this is due to emulation.
I then found this article:
https://medium.com/nttlabs/buildx-multiarch-2c6c2df00ca2
It says there is a faster way to run without emulation, but I didn't understand how to implement it.
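From what I could piece together, the idea seems to be roughly the sketch below: build a native arm64 image so nothing runs under qemu emulation. I'm not sure this helps when a dependency such as plyvel has no arm64 wheels and has to compile from source; the image name and context path are the ones from my build command above:
# Build a native arm64 image on the M1 and load it into the local daemon
docker buildx build --platform linux/arm64 -t amazon/mwaa-local:2.2 --load ./docker
# Run it without --platform so no emulation layer is involved
docker run --rm -it amazon/mwaa-local:2.2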
In addition, I found this Reddit thread:
https://www.reddit.com/r/docker/comments/qlrn3s/docker_on_m1_max_horrible_performance/
They suggest building and running the container inside a virtual machine; I'm not sure whether that is the way to go in my situation.
I tried both Docker Desktop and Rancher Desktop (with dockerd), but both show the same symptoms.
I really love the idea of using Docker so that the host computer doesn't need any development tooling installed: Node and yarn/npm for the frontend, nginx, PHP and MySQL for the backend, and then all the supporting services like Mailhog, Redis, etc. Just take any computer, install Docker, and you have a practically zero-config environment to start developing.
That said, I haven't seen many good examples of how to work like that.
So I've started to wonder whether it's even possible to have an environment with no dependencies on the host, or whether it's just a crazy idea of mine. I'd like to hear some thoughts and see some examples.
At the moment I've built a docker-compose file with 3 Vue.js frontend projects running my development command, command: sh -c 'yarn run serve'. If I check docker stats, I see about 150 MB of RAM for each container and essentially no CPU usage. The issue is that I hear my fans spinning heavily when I run docker-compose up -d, and Docker itself eats ~33% of the host CPU all the time.
Computer specs:
MacBook Pro (15-inch, 2017)
2,8 GHz Quad-Core Intel Core i7
16 GB 2133 MHz LPDDR3
Well, that's about it; maybe you have some good examples or suggestions.
One thing I haven't tried yet is not running the frontend containers together with all the services, and instead spinning them up only when necessary while developing.
I also use Docker for development on my Mac, and I had the same problems as you with excessive memory consumption. The solution I found was to add the :delegated flag to the volumes.
Read more about volumes here.
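As a minimal sketch, with plain docker run it looks like this (the image, paths, and command are placeholders; the same :delegated suffix goes on each bind mount in a compose file):
# Bind-mount the project with the delegated consistency mode (macOS only)
docker run --rm -it -v "$(pwd)":/app:delegated -w /app node:16 sh -c 'yarn run serve'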
Or, you can use NFS:
Set Up Docker For Mac with Native NFS
NFS With Docker on macOS Catalina
Revisiting Docker for Mac's performance with NFS volumes
I'm trying to get VirtualBox to run inside Docker. I'm well past "is it possible to run VirtualBox inside a Docker container", because I can start VBoxManage, but it unfortunately spins at 100% CPU (despite working perfectly on the host) for several minutes before finally settling down.
This is the Dockerfile I'm running: https://github.com/fommil/freeslick/tree/docker-build, which includes a Windows XP VirtualBox image built using these instructions: https://github.com/fommil/freeslick/tree/docker-base
My host has dkms running (and VirtualBox and that image work there), and I'm starting the container in privileged mode (to keep things simple):
docker run -i -t --privileged=true fommil/freeslick:build
But when I start a headless VM with
VBoxManage startvm --type headless MSSQL
(which works when run directly on the host), VBox just consumes 100% CPU and the services I expect sometimes never start up (I'm trying to get connections to MSSQL via tsql; see the await-mssql.sh script in the same repo).
To keep things nice and simple, my host and container are both running Debian Jessie (although I eventually want to run Jessie on an externally hosted Ubuntu VPS... let's leave that for another day)
Is there something additional I need to do to be able to run VirtualBox under Docker?
There is nothing untoward in the log files when running inside the container, except perhaps this (almost 3 minutes to complete command 0x30):
00:00:03.176907 Guest Log: BIOS: Booting from Hard Disk...
00:02:48.776941 PIIX3 ATA: execution time for ATA command 0x30 was 164 seconds
There is no such 0x30 command when running on the host.
UPDATE: ATA command 0x30 is WRITE SECTORS. I wonder if Docker is doing a copy-on-write of my 6 GB Windows drive simply as part of the Windows boot process. Hmm.
Answering my own question: it really was the copy-on-write behaviour. My VPS has really slow hard drives that get even slower under Docker. The workaround is to use a faster volume for my images and create fresh copies in that space before starting the VM.
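Concretely, the workaround looks roughly like the sketch below: keep a pristine copy of the VM on the fast device, copy it into a scratch directory before each run, and bind-mount that directory into the container so VirtualBox's writes bypass the storage driver's copy-on-write layer. The /fast paths are placeholders, and I'm assuming the VM lives in the default /root/VirtualBox VMs directory inside the container:
# Refresh a scratch copy of the pristine VM on the fast volume
mkdir -p /fast/vm-scratch
rm -rf "/fast/vm-scratch/MSSQL"
cp -a "/fast/vm-images/MSSQL" /fast/vm-scratch/
# Bind-mount the scratch copy so VBox writes hit the fast disk directly
docker run -i -t --privileged=true \
  -v "/fast/vm-scratch:/root/VirtualBox VMs" \
  fommil/freeslick:build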