Why is virtualbox running so slowly under docker? - docker

I'm trying to get VirtualBox to run inside Docker. I'm well past "is it possible to run virtualbox inside a docker container": I can start VBoxManage, but it spins at 100% CPU for several minutes before finally settling, despite working perfectly on the host.
This is the Dockerfile I'm running: https://github.com/fommil/freeslick/tree/docker-build which includes a Windows XP virtualbox image built using these instructions https://github.com/fommil/freeslick/tree/docker-base
My host has the dkms-built VirtualBox kernel modules running (and VirtualBox and that image work there), and I'm starting the container in privileged mode (to keep things simple):
docker run -i -t --privileged=true fommil/freeslick:build
But when I start a headless VM with
VBoxManage startvm --type headless MSSQL
(which works fine when run directly on the host), VBox just consumes 100% CPU and the services I expect sometimes never start up (I'm trying to get connections to MSSQL via tsql; see the await-mssql.sh script in the same repo).
To keep things nice and simple, my host and container are both running Debian Jessie (although I eventually want to run Jessie on an externally hosted Ubuntu VPS... let's leave that for another day).
Is there something additional that I need to do in order to be able to run virtualbox under docker?
There is nothing untoward in the log files when run inside the container, except perhaps this (almost 3 minutes to do command 0x30)
00:00:03.176907 Guest Log: BIOS: Booting from Hard Disk...
00:02:48.776941 PIIX3 ATA: execution time for ATA command 0x30 was 164 seconds
There is no such 0x30 command when running on the host.
UPDATE: ATA command 0x30 is WRITE SECTORS. I wonder if Docker is doing a "copy on write" of my 6GB Windows drive, simply as part of the Windows bootup process. Hmm.

Answering my own question: it really was the copy-on-write behaviour. My VPS has really slow hard drives that get even slower under Docker. The workaround is to use a faster volume for my images and create fresh copies in that space before starting the VM.
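A hedged sketch of that workaround (the fast-disk path, VM disk location, and storage controller name below are placeholders, not from the repo; you may also need to detach the old medium or reset the copy's UUID first):

# Expose a faster host disk to the container as a scratch volume.
docker run -i -t --privileged=true -v /mnt/fast-disk/vbox:/scratch fommil/freeslick:build

# Inside the container: take a fresh copy of the VM disk on the fast volume so
# the guest's writes land there instead of in the image's copy-on-write layer,
# point the VM at the copy, then boot it.
cp "$HOME/VirtualBox VMs/MSSQL/MSSQL.vdi" /scratch/MSSQL.vdi
VBoxManage storageattach MSSQL --storagectl "IDE Controller" --port 0 --device 0 \
  --type hdd --medium /scratch/MSSQL.vdi
VBoxManage startvm --type headless MSSQL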

Related

Docker is very slow

I am running some 30+ containers (Redis, MySQL, RabbitMQ, .Net Core, Flask and others) on Ubuntu 20.04 Server.
Lately, at random intervals, docker builds as well as other docker commands seem to run really slowly.
For example, it sometimes takes 30 minutes to build a Flask app that at other times builds in 10 seconds. I know it's not a cache issue because it stays stuck on a COPY directive that is supposed to copy a single .py file.
The same is the case with docker commands like ps, stats, and logs.
Whenever this happens, I have monitored the resource usage and I have more than 70% RAM and CPU available.
My Docker version is 23.0.1, build a5ee5b1, and my containerd version is containerd.io 1.6.16.

ng command very slow on docker volume (Windows)

I use Docker Desktop on Windows 10 (WSL) and need to use Angular on a Docker Volume (with the -v option). Everything works correctly, but the "ng" command seems very slow when it's run from the volume.
I first noticed this running ng serve: the command hangs for more than 1 minute with no log (even in verbose mode) before beginning the compilation. But even ng --version hangs for 15 seconds when it's run from any directory in the volume (the version is 8.1.2) - without any error message (and no docker log). If I run ng --version from any other folder in the container (not in the volume), the version is displayed immediately.
Would you know the reason for this delay, or any way to understand and solve it?
I suspect the main issue is that ng commands are read/write intensive. That being said, the Visual Studio Code devcontainer doc indicates:
While using this approach to bind mount the local filesystem into a container is convenient, it does have some performance overhead on Windows and macOS. There are some techniques that you can apply to improve disk performance, or you can open a repository in a container using an isolated container volume instead.
Therefore, instead of mounting the current directory, it would be better in that case to clone the repository in an isolated container volume.
To do so, in VS Code, open the command palette by pressing F1 and select Remote-Containers: Clone Repository in Container Volume. This will create a unique volume for your container with your repository inside.
The techniques mentioned in the quote can be found here.
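If you cannot use that VS Code command, a rough hand-rolled equivalent looks like this (the volume name, Node image tag, and repository URL are placeholders, not from the question):

# Create a named volume that lives inside Docker's own filesystem rather than
# on the Windows side, then clone and work entirely inside it.
docker volume create ng-workspace
docker run -it --rm -v ng-workspace:/workspace -w /workspace node:12 bash
# inside the container:
git clone https://github.com/example/angular-app.git app
cd app && npm ci && npx ng serve --host 0.0.0.0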

Frontend development on Docker

I really love the idea of using Docker so that the host computer doesn't need any development stuff: for the frontend node and yarn/npm, for the backend nginx, php and mysql, and then all the services like mailhog, redis, etc. Just take any computer, install Docker, and you have a practically zero-config environment to start developing.
Although, I haven't seen too many good examples of how to work like that.
And then I start to wonder whether it is even possible to have an environment without dependencies on the host, or whether that is just a crazy idea of mine. I want to hear some thoughts, some examples.
At the moment I've built a docker-compose file with 3 VueJS frontend projects running my development command sh -c 'yarn run serve', but if I check docker stats, I see that RAM is about 150MB for each container and CPU usage is nothing. The issue is that I hear my fans spinning too much when I run docker-compose up -d. I see that Docker itself eats ~33% of CPU all the time on the host.
Computer specs:
MacBook Pro (15-inch, 2017)
2,8 GHz Quad-Core Intel Core i7
16 GB 2133 MHz LPDDR3
Well, that's about it; maybe you have some good examples or suggestions.
One thing I haven't yet tried is not running the frontend containers when I start all the services, but spinning them up only when necessary during development.
I also use Docker for development on my Mac, and I had the same problems as you with excessive memory consumption. The solution I found was to add the :delegated flag to the volumes.
Read more about volumes here.
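A minimal sketch of the flag with docker run (the image, paths, and command are illustrative, not taken from the question; in a docker-compose file the same :delegated suffix goes on each bind-mount entry):

# Bind-mount the project with "delegated" consistency: the container's view of
# the mount may lag slightly behind the host, in exchange for faster file I/O
# on Docker for Mac.
docker run -it --rm -v "$(pwd)":/app:delegated -w /app node:lts sh -c 'yarn run serve'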
Or, you can use NFS:
Set Up Docker For Mac with Native NFS
NFS With Docker on macOS Catalina
Revisiting Docker for Mac's performance with NFS volumes

Where are Docker volumes located when running WSL using Docker Desktop?

I am running Windows Subsystem for Linux (WSL) with Ubuntu as the client OS under Windows 10. I have installed Docker Desktop on the Windows host and enabled the WSL integration in the Docker settings. That works fine so far; I can access the Docker daemon running on the Windows host from my WSL Ubuntu client.
Now I am wondering where all the Docker volumes and other data is stored in this setup. Usually these are under /var/lib/docker, but it seems when using WSL this is not the case. When running df -h I can see the following Docker-related lines:
/dev/sdd 251G 3.1G 236G 2% /mnt/wsl/docker-desktop-data/isocache
/dev/sdc 251G 120M 239G 1% /mnt/wsl/docker-desktop/shared-sockets
/dev/loop0 244M 244M 0 100% /mnt/wsl/docker-desktop/cli-tools
So they are somewhere on the Windows host it seems.
... but where?
When I create a volume named shared_data in docker, I can find it under
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\shared_data\_data
You can find WSL2 volumes under a hidden network share. Open Windows Explorer, and type \\wsl$ into the location bar. Hit enter, and it should display your WSL volumes, including the ones for Docker for Windows.
If you are wondering where on the Windows host the docker volumes are located, for me they seem to be at:
C:\Users\username\AppData\Local\Docker\wsl\data\ext4.vhdx
and
C:\Users\username\AppData\Local\Docker\wsl\distro\ext4.vhdx
presumably, these are docker-desktop-data and docker-desktop respectively.
In theory, these WSL2 instances can be relocated to an alternate drive to free disk space as per this post, using the standard method of exporting the instance, unregistering it, and re-importing it from the new location. This process is also described here (with regard to standard WSL instances).
(Caveat: I haven't done this with the Docker WSL2 instances myself yet, only for Ubuntu using the method in the second link.)
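For reference, the export/unregister/import sequence those posts describe looks roughly like this for docker-desktop-data (an untested sketch; the target drive and paths are placeholders, and Docker Desktop should be stopped first):

wsl --shutdown
wsl --export docker-desktop-data D:\backup\docker-desktop-data.tar
wsl --unregister docker-desktop-data
wsl --import docker-desktop-data D:\wsl\docker-desktop-data D:\backup\docker-desktop-data.tar --version 2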
Windows 10 + WSL2
I run Docker Desktop on Windows 10 + WSL2. Just make sure Docker Desktop is running, so that the path is accessible as a network share.
I found my volume data under
\\wsl$\docker-desktop-data\data\docker\volumes
Note that you need to have Docker Desktop running before you will be able to discover those network directories.
Docker Desktop's WSL2 feature creates two new WSL2 distros, docker-desktop and docker-desktop-data, which can be seen with the command wsl -l -v:
NAME STATE VERSION
* Ubuntu-18.04 Running 2
docker-desktop Running 2
docker-desktop-data Running 2
This is where the docker daemon actually runs and where you can find the data you are looking for.
The volumes in the wsl2 kernel are mapped as follows:
docker run -ti -v host_dir:/app amazing-container will get mapped to /mnt/wsl/docker-desktop-data/data/docker/volumes/host_dir/_data/
The above is the right path, even though docker volume inspect host_dir will tell you differently (/var/lib/docker/volumes/).
To conclude, the volumes are mapped to: /mnt/wsl/docker-desktop-data/data/docker/volumes/
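A quick hedged check of that mapping from a WSL distribution (the volume name is arbitrary, and the exact path under /mnt/wsl varies between Docker Desktop versions, as other answers here show):

docker volume create demo_vol
docker run --rm -v demo_vol:/data alpine sh -c 'echo hello > /data/hello.txt'
cat /mnt/wsl/docker-desktop-data/data/docker/volumes/demo_vol/_data/hello.txt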
Most answers on this topic are about the location as seen from the Windows side. I needed to access the container log files (the issue is the same as for volumes) from my WSL distribution, so the Windows path \\wsl$ was not an option.
The files could be found on Windows in \\wsl$\docker-desktop-data\version-pack-data\community\docker\containers.
From the WSL distribution, I could go to /mnt/wsl/docker-desktop-data/version-pack-data but it was empty.
Finally found a solution here:
From Windows, create a disk for docker-desktop-data:
net use w: \\wsl$\docker-desktop-data
From your WSL distribution, mount it to docker:
sudo mkdir /mnt/docker
sudo mount -t drvfs w: /mnt/docker
Now you can get to everything you want; in my case, the log files:
ls -l /mnt/docker/version-pack-data/community/docker/containers/
total 0
drwxrwxrwx 4 root root 512 May 19 15:06 3f41ade0891c06725e828853524d73f185b415d035262f9c51d6b6e03654d505
In my case, I installed Docker Desktop on WSL2, Windows 10 Home, and I found my image files in
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2
All image files are stored there, separated into several folders with long string names. When I look into each folder, I can find the real image files in its "diff" folder.
Although the terminal shows the path /var/lib/docker, that folder doesn't exist and the actual files are not stored there. I think there is no error; /var/lib/docker is just linked or mapped to the real folder, something like that.
In Windows we also use mklink to link two folders; it is similar, right?
You can find volumes and other data when using Docker with WSL under docker-desktop-data.
If you are running Docker on a Windows host using Docker Desktop, you can access the volumes at \\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\ (open this path in Windows Explorer and make sure the Docker engine is running).
When running the Docker Desktop app, the app creates its own Linux VM or uses WSL to run the Docker containers, and the path /var/lib/docker/volumes/ is from within that VM, I think. The volumes are created as a mountable .vhdx file at
C:\Users\username\AppData\Local\Docker\wsl\distro\
but accessing this directly is tricky.
Ref: Google how to access WSL files from Windows.
Windows 10 + WSL2, Docker Desktop v4.13.1, free service tier, 2022-11-03:
I found my volumes at \\wsl$\docker-desktop-data\data\docker\volumes

Detach container from host console

I am creating a Docker container with the ubuntu:16.04 image using the Python docker package. I am passing tty as True and detach as True to the client.containers.run() function, and the container starts with the /sbin/init process. The container is created successfully. But the problem is that the login prompt on my host machine's console is replaced with the container's login prompt. As a result, I am not able to log in on the machine's console. SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options like setting tty to False, setting stdout to False, and setting the environment variable TERM to xterm in the container, but nothing helps.
It would be really great if someone could suggest a solution to this problem.
My script is very simple:
import docker
client = docker.from_env()
container = client.containers.run('ubuntu:16.04', '/sbin/init', privileged=True,
                                  detach=True, tty=True, stdin_open=True,
                                  stdout=False, stderr=False,
                                  environment=['TERM=xterm'])
I am not using any dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. When I do, the /sbin/init process launches /sbin/agetty processes, which cause /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init such that it does not create /sbin/agetty processes.
/sbin/init in Ubuntu is a service called systemd. If you look at the linked page, it does a ton of things: it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and so on. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities) then tini is another popular option.
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here, where a --privileged container is taking over the host's console; it's also very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful with standard security concerns.
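As a small hedged illustration of the first recommendation (the image and command are arbitrary): run a single foreground process and, if you need a PID-1 init at all, let Docker supply one with --init instead of booting systemd.

# No systemd and no --privileged: one foreground process, with Docker's
# built-in minimal init reaping children as PID 1.
docker run --rm --init ubuntu:16.04 sleep 300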
