KVM virsh attach-disk does not honour device letter

I use the following command to attach a disk to a running KVM virtual server:
virsh attach-disk vps_89 /dev/nbd31 --target vdc --driver tap
The disk is attached correctly. However, the disk is not attached as vdc inside the virtual server. I know the documentation says that the --target parameter only gives a "hint" to the virtual server, but I can't really believe there is NO way to force the disk to be attached as a certain device (at least on Linux).
I am controlling my virtual servers through scripts and I must trust that the disk is attached to the exact device as I tell it to.
When I detach the disk using this:
virsh detach-disk vps_89 --target vdc
Then, when I re-attach the same disk, the device ID seems to be incremented each time, i.e. vdc, vdd, vde, vdf, etc. (totally ignoring my --target param).
Does anyone know a good (and reliable) way of attaching disks to KVM in a predictable way?

According to Red Hat [1] and libvirt [2], this is not a bug, it's a feature:
The actual device name specified is not guaranteed to map to the
device name in the guest OS. Treat it as a device ordering hint.
The only available solution is to use UUIDs (/dev/disk/by-uuid/) when handling disks inside the virtual machine, as in the example below.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=693372
[2] http://libvirt.org/formatdomain.html
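For example, inside the guest you can address the attached disk by its filesystem UUID instead of by device letter (the UUID and mount point below are illustrative):
# Inside the guest: find the filesystem UUID of the newly attached disk
blkid /dev/vdd
# /dev/vdd: UUID="2f6b0c8a-6fd2-4a39-9a7e-55f0d8e6a2b1" TYPE="ext4"
# Mount by UUID so the device letter no longer matters
mount /dev/disk/by-uuid/2f6b0c8a-6fd2-4a39-9a7e-55f0d8e6a2b1 /mnt/data
# Or make the mount persistent in /etc/fstab
echo 'UUID=2f6b0c8a-6fd2-4a39-9a7e-55f0d8e6a2b1 /mnt/data ext4 defaults 0 2' >> /etc/fstab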

Related

How can I manipulate storage devices outside of Docker?

I'd like to spin up an Ubuntu image with certain tools like testdisk for disk recovery. How can I manage all detected volumes on the host machine with testdisk inside a Docker container?
The O'Reilly info worked for Windows, with the noted limitations (inability to repartition). I'm assuming that if you use Disk Management to see the disk number (0, 1, 2, etc.), it will correspond to the sd# you have to reference. Supposedly, with Windows Server editions, you can use the device flag and specify a device class GUID to share a device inside Docker. But as previously mentioned, it isn't raw access but rather a shared device.
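On a Linux host, by contrast, a minimal sketch looks like the following (the device path is an example, and passing a raw disk through effectively gives the container full access to that disk):
# Pass the host's block device through to the container and run testdisk on it
docker run -it --device=/dev/sdb ubuntu \
    bash -c "apt-get update && apt-get install -y testdisk && testdisk /dev/sdb"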

Docker doesn't release/display space after running system prune and reporting 16 GB reclaimed, on Windows 10 Home edition

I'm really new to Docker, and my friend told me that docker system prune, run from an elevated command prompt, is supposed to clean pretty much everything. After running it, a message about "reclaiming 16.24 GB" was displayed, but my file explorer doesn't show any change on drive C. Restarting Docker or the host machine didn't help, and pruning volumes yields the same result. How do I make Docker release the space, or display it correctly (as I don't really know which is the case)?
I'm not super familiar with the internals of Docker for Windows, but until fairly recently it worked by running a small virtual machine with a virtual disk image. The reclaimed disk space is inside that virtual disk image, but the "file" for that image will still remain the same size on your physical disk. If you want to reclaim the physical disk space, there should be a "Reset Docker" button somewhere in the Docker for Windows control panel, which will essentially delete that disk image and create a new, empty one.
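To confirm what Docker itself thinks it is using (as opposed to what the Windows file system shows), you can ask Docker for its own accounting:
# Shows how much space images, containers, volumes and build cache occupy
docker system df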

How to bypass memory caching while using FIO inside of a docker container?

I am trying to benchmark I/O performance on my host and in a Docker container using the flexible I/O tester (fio) with O_DIRECT enabled in order to bypass memory caching. The results are very suspicious: Docker performs almost 50 times better than my host machine, which is impossible. It seems like Docker is not bypassing the cache at all, even when I run it in --privileged mode. This is the command I ran inside the container. Any suggestions?
fio --name=seqread --rw=read --direct=1 --ioengine=libaio --bs=4k --numjobs=1 --size=10G --runtime=600 --group_reporting --output-format=json >/home/docker/docker_seqread_4k.json
The results are very suspicious: Docker performs almost 50 times better than my host machine, which is impossible. It seems like Docker is not bypassing the cache at all.
If your best-case latencies are suspiciously small compared to your worst-case latencies, it is highly likely your suspicions are well founded and that kernel caching is still happening. Asking for O_DIRECT is a hint, not an order, and the filesystem can choose to ignore it and use the cache anyway (see the fio documentation's note beginning "You're asking for direct I/O to a file in a filesystem but...").
If you have the option and you're interested in disk speed, it is better to run any such test outside of a container (with all the caveats that implies). Another option, when you can't or don't want to disable caching, is to ensure that you do I/O that is at least two to three times the size of RAM (both in the amount of data and the region being used), so that the majority of I/O can't be satisfied by buffers/cache (and if you're doing write I/O, then also do something like end_fsync=1).
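As a sketch of that second option, assuming a host with 8 GiB of RAM (sizes and the file path are illustrative):
# 24G of data is roughly three times RAM, so most I/O cannot come from cache;
# end_fsync=1 forces the written data to be flushed before the job finishes
fio --name=seqwrite --rw=write --bs=4k --size=24G --end_fsync=1 --filename=/tmp/fio.test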
In summary, the filesystem being used by Docker may make it impossible to accurately do what you're requesting (measure the disk speed while bypassing the cache on whatever your default Docker filesystem is).
Why a Docker benchmark may not give the results you expect
The Docker engine uses, by default, the OverlayFS [1][2] driver for data storage in containers. It assembles all of the different layers from the images and makes them readable. Writing is always done to the "top" layer, which is the container storage.
When performing reads and writes to the container's filesystem, you're passing through Docker's overlay2 driver, through the OverlayFS kernel driver, through your filesystem driver (e.g. ext4), and onto your block device. Additionally, as Anon mentioned, DIRECT/O_DIRECT is just a hint and may not be respected by any of the layers you're passing through.
Getting more accurate results
To get accurate benchmarks within a Docker container, you should write to a volume mount or change your storage driver to one that is not overlaid, such as the Device Mapper driver or the ZFS driver.
Both the Device Mapper driver and the ZFS driver require a dedicated block device (you'll likely need a separate hard drive), so using a volume mount might be the easiest way to do this.
Use a volume mount
Use the -v option with a directory that sits on a block device on your host.
docker run -v /absolute/host/directory:/container_mount_point alpine
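For example (paths are placeholders, and the image is assumed to already contain fio):
# /mnt/bench sits directly on the host's block device, so /data bypasses overlay2
docker run -v /mnt/bench:/data my-fio-image \
    fio --name=seqread --rw=read --direct=1 --bs=4k --size=10G --filename=/data/fio.test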
Use a different Docker storage driver
Note that the storage driver must be changed on the Docker daemon (dockerd) and cannot be set per container. From the documentation:
Important: When you change the storage driver, any existing images and containers become inaccessible. This is because their layers cannot be used by the new storage driver. If you revert your changes, you can access the old images and containers again, but any that you pulled or created using the new driver are then inaccessible.
With that disclaimer out of the way, you can change your storage driver by editing daemon.json and restarting dockerd.
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/sd_",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20",
    "dm.directlvm_device_force=false"
  ]
}
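After editing daemon.json, restart the daemon and check that the new driver is active (commands assume a systemd-based Linux host):
sudo systemctl restart docker
docker info --format '{{.Driver}}'
# devicemapper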
Additional container benchmark notes - kernel
If you are trying to compare different flavors of Linux, keep in mind that Docker is still running on your host machine's kernel.

Docker Desktop cannot set large disk size

I'm running Docker Desktop 2.2.0 on Windows 10. It appears that the disk size cannot be set beyond 64 GB. I tried setting the diskSizeMiB value to 100 GB in %APPDATA%\Docker\settings.json, but Docker appears to ignore it and sets the size to 64 GB in the resulting Hyper-V VM.
"cpus": 6,
"diskSizeMiB": 102400,
The issue I'm having is older images being evicted when pulling new ones. Even after manually expanding the Hyper-V disk to 100 GB, docker pull deletes older images to make space for new ones.
The Docker for Windows docs don't seem to explicitly mention a limit, but 64 GB ominously equals 2^16 MiB, which hints at it being a technical limit.
Does anyone know of a workaround for this limitation?
It looks like I was on the right track with increasing the virtual disk size directly in Hyper-V (see this guide). The only missing piece was restarting Docker (or Windows). Once restarted, I was able to use the full disk.
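As a rough sketch, from an elevated PowerShell prompt (the VHDX path differs between Docker Desktop versions, so treat it as a placeholder):
# Grow Docker Desktop's virtual disk, then restart Docker so it picks up the new size
Resize-VHD -Path 'C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx' -SizeBytes 100GB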

How to reduce default VM memory for Docker Linux containers on Windows

Scenario
Windows 10 Professional
Docker 18.06.1-ce running in Windows container mode
4GB of available memory on host system
using Hyper-V virtual machine
Problem
When trying to "switch to Linux containers" via Docker's taskbar item, the process fails after a couple of seconds with an error about "Not enough memory to start Docker".
Since the host system does not have that much memory, I'd like to reduce the maximum amount of memory the global Docker machine is allowed to use (I think 2 GB is the default here). Thus, I'd like to reduce that to just 1 GB.
When Docker is running in Windows container mode, there is no "advanced" section in Docker's settings that would allow me to easily reduce that memory assignment.
I was able to find the "MobyLinuxVM" using Windows' Hyper-V Manager. However, when I adjust the memory settings there, they are overwritten each time I start Docker and try switching to Linux container mode again.
Question
Is there a different way to define the maximum amount of memory for Docker without using the user interface (which won't work in this scenario, due to the missing "advanced" section in Windows container mode, before being able to switch to Linux containers)?
After some searching I found out that the settings from Docker's user interface are stored in %APPDATA%\Docker\settings.json (e.g. C:\Users\olly\AppData\Roaming\Docker); the memory setting is defined in the memoryMiB property.
The following solved the problem in my environment:
quit Docker
modify the settings.json file, e.g. via notepad %APPDATA%\Docker\settings.json from the Run prompt (Windows key + R)
adjust the memoryMiB value to 1024 (it was 2048 before)
in Docker versions 19.x and later the property is called memoryMiB
in Docker versions 18.x and earlier the property was called VmMemory
save settings.json
start Docker; "switch to Linux containers" now works
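For reference, the relevant fragment of settings.json after the change (other properties omitted):
{
  "memoryMiB": 1024
}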
