I have the following scenario:
Docker CLI is running on my machine.
There is a remote VM running the Docker daemon inside. The API is exposed, and my local Docker CLI connects to the Docker daemon inside the VM.
I have my project folder on my local machine. When I run a docker build, the files from my local folder are correctly sent to the Docker daemon in the VM and used to build an image.
However, when I run the image, I also want to mount my local project folder into the container running in the VM so I can make changes and see the results. This seems to be impossible - I only seem to be able to mount folders from the VM into the container.
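For context, a minimal sketch of the asymmetry described above (the daemon address is hypothetical):

# Point the local CLI at the daemon inside the VM:
export DOCKER_HOST=tcp://vm-host:2375
# Works: the build context (the local folder) is streamed over the API.
docker build -t myapp .
# Does not do what you want: -v paths are resolved by the daemon,
# i.e. on the VM's filesystem, not on the local machine.
docker run -v "$PWD":/app myapp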
I am new to Docker and Minikube.
On my Windows laptop I have installed Docker Desktop and Minikube.
I created two nodes in Minikube and they are up and running.
I have been using PowerShell to work with images and containers in Docker Desktop with no issues.
Now I realize that Minikube is using its own installation of Docker, and I cannot see the containers created by Minikube in PowerShell.
How do I get PowerShell to point to the Docker instance used by Minikube?
How do I reverse that change to work with Docker Desktop again?
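For reference, minikube's docker-env subcommand covers both directions; a sketch for PowerShell, assuming the default profile name "minikube":

# Point the docker CLI in the current PowerShell session at minikube's daemon:
& minikube -p minikube docker-env --shell powershell | Invoke-Expression
# Undo it in the same session to work with Docker Desktop again:
& minikube -p minikube docker-env --shell powershell --unset | Invoke-Expression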
I am trying to build an image with Docker and then upload it to Docker Hub. After passing the quality tests, I receive the following error: docker: not found. How can my Docker service (localhost) communicate with the Jenkins container?
Important: I have Docker Desktop installed locally, and I have installed Jenkins in a local container, also on Windows 10 Pro.
Error: https://imgur.com/q1SrKGe
Pipeline: https://imgur.com/nQWL1HR
You have 2 options to do this:
Install Docker (at least the docker CLI) inside your Jenkins container and also add a bind mount for the Docker socket from your host; otherwise the Docker commands inside your container won't work, because there is no daemon for them to talk to. On Linux this socket is /var/run/docker.sock, so the bind mount would look like -v /var/run/docker.sock:/var/run/docker.sock (see the sketch after this list).
Use a different slave agent for the Build Image stage where you have Docker installed. For example, you could use Docker-in-Docker (https://hub.docker.com/_/docker) as a slave agent for Jenkins (connected via SSH) and run your docker build inside this slave agent.
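A minimal sketch of option 1, assuming a Linux host and a hypothetical image myjenkins built on jenkins/jenkins:lts with the docker CLI added:

# Run Jenkins with the host's Docker socket bind-mounted into the container:
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  myjenkins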
I'm testing Docker running on my Windows 7 PC. I can mount directories under C:\Users to containers without issue, e.g.
docker run --rm -it -v //c/Users/someuser/:/data/ alpine ash
but when I try to attach a networked location like //server1/data, e.g.
docker run --rm -it -v //server1/data/:/data/ alpine ash
the /data directory in the container appears empty. How do I pass a directory not under C:\Users\ to my Docker containers?
Because my PC was running Windows 7, I'd installed Docker Toolbox, which uses VirtualBox instead of Hyper-V. My understanding is that this means Docker is running inside a VM on my system, so that VM needs to have access to any data I intend to pass to Docker.
To attach network directories (or anything local outside C:\Users) I needed to add them as shared folders in VirtualBox:
VM (default in my case) => Settings => Shared Folders => +
After navigating the file explorer and adding //server1/data to the list of folders shared with the VM 'default', I was able to pass it to the container as a volume using the second command outlined in my original question.
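For reference, the same share can also be added from the command line while the VM is powered off (the share name "data" is arbitrary):

# Share //server1/data with the VirtualBox VM 'default' and auto-mount it:
VBoxManage sharedfolder add "default" --name "data" --hostpath "\\server1\data" --automount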
I'm using Docker Desktop on Windows 10. For the purposes of development, I want to expose a local folder to a container. When running the container in Docker, I do this by specifying the volume flag (-v).
How do I achieve the same when running the container in Kubernetes?
You should use the hostPath volume type in your pod's spec to mount a file or directory from the host node's filesystem. The hostPath.path field should use one of the following formats to accept Windows-like paths:
/W/fooapp/influxdb
//W/fooapp/influxdb
/////W/fooapp/influxdb
Please check this GitHub issue explaining the peculiarities of Kubernetes volumes on Windows.
I also assume that you have enabled the Shared Drives feature in your Docker for Windows installation.
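A minimal pod spec sketch using one of the path formats above (the image and mount path are illustrative placeholders), applied inline with kubectl:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: influxdb
spec:
  containers:
  - name: influxdb
    image: influxdb:1.8
    volumeMounts:
    - name: data
      mountPath: /var/lib/influxdb
  volumes:
  - name: data
    hostPath:
      path: /W/fooapp/influxdb
EOF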
Using k8s 1.21.5 the following type of path worked for me:
/run/desktop/mnt/host/c/PATH/TO/FILE
Digging through this GitHub issue helped me work out which path to use:
https://github.com/kubernetes/kubernetes/issues/59876
The explanation is in the discussion on the GitHub link above.
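The same hostPath pattern as in the earlier answer works with this path format; a sketch (the pod name, image, and the PATH/TO/FILE placeholder are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: demo
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-data
      mountPath: /data
  volumes:
  - name: host-data
    hostPath:
      path: /run/desktop/mnt/host/c/PATH/TO/FILE
EOF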
The folder mount for /run/desktop/mnt/host/c does not exist on the distro you installed in WSL2 - on that WSL2 distro, the mount point to your C:\ drive is the more obvious /mnt/c.
Note that Kubernetes and Docker are not installed in the WSL2 distro you installed. Instead, Docker Desktop for Windows creates its own WSL2 VM called docker-desktop and installs Docker and Kubernetes on that VM. It then installs the docker and kubectl CLIs on your WSL2 distro (and also on your Windows machine) and configures them all to point to the Docker and Kubernetes instances it created on the docker-desktop VM. This docker-desktop VM hosts Docker and Kubernetes and also contains the /run/desktop/mnt/host/c mount point to your Windows C:\ drive, which your containers can use to persist data.
You can remote into the docker-desktop VM and see the /run/desktop/mnt/host/c mount point and folder structure by following the instructions (and discussion) at https://stackoverflow.com/a/62117039/11057678:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
I am using Docker Toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command - docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash - to run a container with a volume mount. I wonder: how is Docker able to mount a volume from the docker client (in this case, the Mac) into a docker container running on the docker host (in this case, the VM running on the Mac)?
The Toolbox VM includes a shared directory from the client: /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note though that if you add, for example, /tmp as a volume, it will be the /tmp of the Toolbox VM, not the client's.
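A sketch of the distinction, reusing the command from the question (the username is a placeholder):

# Under the shared /Users folder: the container sees the Mac's files.
docker run -it -v /Users/someuser/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash
# Outside the share: the container sees the Boot2Docker VM's /tmp, not the Mac's.
docker run -it -v /tmp:/dir_inside_container ubuntu:14.04 /bin/bash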
The main problem is that VirtualBox only shares your home folder with the docker machine, so at the moment you can only share content inside this directory. It's inconvenient, but the only way I found to solve this problem is with the bootlocal.sh file: you can write this file inside your docker machine to mount a new directory after boot (a sketch follows the link below).
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
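A minimal bootlocal.sh sketch, assuming a shared folder named "data" has already been added to the VM in VirtualBox (the share name and mount point are placeholders; the file must be made executable):

#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh runs at boot inside the docker machine;
# mount the VirtualBox shared folder "data" for the docker user (uid 1000).
mkdir -p /mnt/data
mount -t vboxsf -o uid=1000,gid=50 data /mnt/data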
Yesterday during DockerCon they announced a public beta of "Docker for Mac". I think you can replace docker-machine with this tool; it provides the best experience with Docker on macOS, and it resolves this problem:
https://www.docker.com/products/docker