Kubernetes persistent volume on Docker Desktop (Windows)

I'm using Docker Desktop on Windows 10. For the purposes of development, I want to expose a local folder to a container. When running the container in Docker, I do this by specifying the volume flag (-v).
How do I achieve the same when running the container in Kubernetes?

You should use the hostPath volume type in your pod's spec to mount a file or directory from the host node's filesystem. The hostPath.path field should be in one of the following formats to accept Windows-style paths:
/W/fooapp/influxdb
//W/fooapp/influxdb
/////W/fooapp/influxdb
Please check this GitHub issue explaining the peculiarities of Kubernetes volumes on Windows.
I also assume that you have enabled the Shared Drives feature in your Docker for Windows installation.
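For illustration only (the pod name, image, and mount path are placeholders, not taken from the question), a pod spec using hostPath might look like this:
apiVersion: v1
kind: Pod
metadata:
  name: influxdb
spec:
  containers:
    - name: influxdb
      image: influxdb:1.8            # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/influxdb
  volumes:
    - name: data
      hostPath:
        # one of the Windows-style path formats listed above
        path: /W/fooapp/influxdb
        type: DirectoryOrCreate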

Using k8s 1.21.5, the following type of path worked for me:
/run/desktop/mnt/host/c/PATH/TO/FILE
Digging through this GitHub issue helped me figure out which path to use:
https://github.com/kubernetes/kubernetes/issues/59876
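Based on that, a hostPath volume in a pod spec might be declared along these lines (keep the placeholder sub-path and replace it with your own):
volumes:
  - name: data
    hostPath:
      path: /run/desktop/mnt/host/c/PATH/TO/FILE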

The explanation is in the GitHub link above.
The folder mount for /run/desktop/mnt/host/c does not exist on the distro you installed in WSL2 - on that WSL2 distro, the mount point to your C:\ drive is the more obvious /mnt/c.
Realize that Kubernetes and Docker are not installed in your installed WSL2 distro. Instead, Docker Desktop for Windows creates its own WSL2 VM called docker-desktop and installs Docker and Kubernetes on that VM. Then Docker Desktop for Windows installs the docker and kubectl CLIs on your WSL2 distro (and also on your Windows machine) and configures them all to point to the Docker and Kubernetes instances it created on the docker-desktop VM. This docker-desktop VM is hosting Docker and Kubernetes and also contains the /run/desktop/mnt/host/c mount point to your Windows C:\ drive and that can be used by your containers to persist data.
You can remote into the docker-desktop VM and see the /run/desktop/mnt/host/c mount point and folder structure by following the instructions (and discussion) at https://stackoverflow.com/a/62117039/11057678:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
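Once inside that shell, you can confirm the mount point described above, for example:
ls /run/desktop/mnt/host/c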

Related

Pointing PowerShell to the correct Docker

I am new to Docker and MiniKube.
On my Windows laptop I have installed Docker Desktop and MiniKube.
I created two nodes in MiniKube and they are up and running.
I have been using PowerShell to work with images and containers in Docker Desktop with no issues.
Now I realize that Minikube is using its own installation of Docker, and I cannot see the containers created by Minikube in PowerShell.
How do I get PowerShell to point to the Docker used by Minikube?
How do I reverse that change to work with Docker Desktop again?
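One common approach (not from this thread, so treat it as a hint to verify) is minikube's docker-env helper, which prints the environment variables that point the docker CLI at Minikube's daemon:
# point the current PowerShell session at Minikube's Docker daemon
minikube docker-env --shell powershell | Invoke-Expression
# undo it (or just open a new PowerShell window) to go back to Docker Desktop
minikube docker-env --shell powershell --unset | Invoke-Expression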

How can I access the WSL2 used by Docker Desktop?

I want to transfer a Docker image from my Windows 10 PC to another machine running Fedora, using rsync. I can't use WSL 1; I need WSL2, as the error message says:
ubu@DESKTOP-QL4RO3V:/mnt/c/Windows/system32$ docker images
The command 'docker' could not be found in this WSL 1 distro.
We recommend to convert this distro to WSL 2 and activate
the WSL integration in Docker Desktop settings.
For details about using Docker Desktop with WSL 2, visit:
https://docs.docker.com/go/wsl2/
But I think that, as I have Docker Desktop, it is using WSL2:
But I don't know how to access the WSL2 instance that Docker is using myself.
PS C:\Users\antoi> wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu                 Running         1
  docker-desktop-data    Running         2
  docker-desktop         Running         2
Docker Desktop images, containers, and volumes are stored in the special docker-desktop-data distribution. As noted in this Super User question and my answer there, docker-desktop-data is not bootable (by design).
If you really had to get to the filesystem, I've documented a way to do so there. But in general, you should not need to do this.
Instead, use the normal docker commands (from WSL2, PowerShell, or CMD) to save the image to a tar file as documented in this answer:
docker save -o <image.tar> <image_name>
Then transfer the file using rsync or other means, and on the destination machine, import it via:
docker load -i <image.tar>
Again, that's from WSL2, PowerShell, or CMD. But in your case, the Ubuntu instance is WSL1. That won't work for Docker. You'll need to convert it to WSL2.
Just in case, I always recommend backing up your instance before converting it. From PowerShell:
wsl --export Ubuntu ubuntu_backup.tar
Then, once you have the backup:
wsl --set-version Ubuntu 2
wsl --set-default-version 2 # if desired
After conversion, you shouldn't see that error when running docker in Ubuntu.
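Putting the pieces together, once docker works in your Ubuntu instance the transfer itself could look like this (image name, user, and host are placeholders):
docker save -o myimage.tar myimage:latest
rsync -avP myimage.tar user@fedora-host:/tmp/
# then, on the Fedora machine:
docker load -i /tmp/myimage.tar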
Side note -- Docker Desktop "injects" the docker command into any WSL2 instance that you set in the "WSL Integration" tab in Settings. This should default to your "default" WSL2 instance, which (from your screenshot) is Ubuntu. The "real" docker command is inside docker-desktop, but it's linked into Ubuntu for you.
So by default, you should have all docker functionality directly in your Ubuntu instance. Neither docker-desktop nor docker-desktop-data are designed to be used directly by the end-user.
You can access the docker-desktop WSL distribution using the following command:
wsl -d docker-desktop

How can I access a shell on the VM Linux host when using the Docker Windows Beta

I have set up Docker for Windows (Hyper-V Beta) on my laptop.
My intention is to experiment with some setups for containers I intend to install on my real server later. I am fairly new to Docker (but know the basics), so I wanted to experiment with volumes and volume images a bit.
However, all anonymous volumes end up on the virtual Linux host. I would like to access the filesystem of that host directly, not from within a container.
I cannot easily access it from within a container due to (well-founded) security constraints. Neither can I find a way to access it from the Windows prompt.
(Using Docker for Windows version 1.12.0-beta21)
I know that it is possible to mount volumes using the C share made by Docker for Windows, but that adds complexity for me. My intent is to use Docker tutorials unmodified and inspect the results in the host filesystem, preferably through a (bash) shell in the host VM or with Windows file access into the virtual machine.
Later on I would also like to copy volume contents into the VM's volumes, although that could be solved using a volume against the C drive.
After researching on my own, I have deduced the following technique to create a privileged container that works as if it were the Linux host's root. This is the best I have been able to pinpoint so far.
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
Docker-machine will allow you to ssh to the default machine by typing:
"docker-machine ssh"
You'll be logged into the VM that is running docker.

How is docker able to mount a volume from docker client into a docker container running on docker host?

I am using docker toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command - docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash - to run a container with a volume mount. I wonder how Docker is able to mount a volume from the docker client (in this case, the Mac) into a docker container running on the docker host (in this case, the VM running on the Mac)?
The toolbox VM includes a shared directory from the client. /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note, though, that if you add for example /tmp as a volume, it will be /tmp inside the toolbox VM.
The main problem is that VirtualBox shares only your home folder with the docker machine, so at the moment you can only share content inside that directory. It's inconvenient, but the only way I have found to resolve this is with the bootlocal.sh file: you can write this file inside your docker-machine to mount new directories after boot (see the sketch after the link below):
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
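As a rough sketch along those lines (the share name and mount point are placeholders; uid=1000/gid=50 are the boot2docker defaults), /var/lib/boot2docker/bootlocal.sh could contain:
#!/bin/sh
# mount an extra VirtualBox shared folder after boot
mkdir -p /mnt/myshare
mount -t vboxsf -o uid=1000,gid=50 myshare /mnt/myshare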
Yesterday during DockerCon they announced a public beta of "Docker for Mac". I think you can replace docker-machine with this tool; it provides the best experience with Docker on macOS, and it resolves this problem:
https://www.docker.com/products/docker

How to use kvm in a Centos 6 docker container via docker machine

I'm trying to use KVM in a CentOS 6 docker container, via docker-machine. My docker-machine VM (VMware Fusion based) supports nested VMs, but in my docker container I'm seeing:
modprobe kvm
FATAL: Could not load /lib/modules/4.1.12-boot2docker/modules.dep: No such file or directory
modprobe kvm_intel
FATAL: Could not load /lib/modules/4.1.12-boot2docker/modules.dep: No such file or directory
Any idea what I'm missing?
Docker isn't a virtual machine; it is a way to package your application.
So I think that running KVM (Kernel-based Virtual Machine) is not possible inside a docker container.
You can read about the difference between Docker and other kinds of virtualization on this page:
https://www.docker.com/what-docker
You may need to load the kvm and kvm_intel modules on the docker host before trying to run the container; see for example:
https://github.com/boot2docker/boot2docker/issues/1138#issuecomment-183199287
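A rough sketch of what that could look like (assuming an Intel CPU and that the host kernel actually ships these modules; the image and flags are illustrative):
# on the docker host (the docker-machine/boot2docker VM), not inside the container
modprobe kvm
modprobe kvm_intel
# then expose the KVM device to the container
docker run -it --device /dev/kvm centos:6 /bin/bash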
You can use a simple KVM container from Docker Hub. The source code is available on GitHub and has been tested on Docker hosts running Ubuntu 16.04, CentOS 7, CentOS Atomic 7.2, and RancherOS.
