I want to transfer a Docker image from my Windows 10 PC to another machine running Fedora, using rsync. I can't use WSL 1; I need WSL 2, as the error message says:
ubu@DESKTOP-QL4RO3V:/mnt/c/Windows/system32$ docker images
The command 'docker' could not be found in this WSL 1 distro.
We recommend to convert this distro to WSL 2 and activate
the WSL integration in Docker Desktop settings.
For details about using Docker Desktop with WSL 2, visit:
https://docs.docker.com/go/wsl2/
But I think that since I have Docker Desktop, it is already using WSL 2. I just don't know how to run the WSL 2 distro Docker is using myself:
PS C:\Users\antoi> wsl -l -v
  NAME                   STATE           VERSION
* Ubuntu                 Running         1
  docker-desktop-data    Running         2
  docker-desktop         Running         2
Docker Desktop images, containers, and volumes are stored in the special docker-desktop-data. As noted in this Super User question and my answer there, docker-desktop-data is not bootable (by design).
If you really had to get to the filesystem, I've documented a way to do so there. But in general, you should not need to do this.
Instead, use the normal docker commands (from WSL2, PowerShell, or CMD) to save the image to a tar file as documented in this answer:
docker save -o <image.tar> <image_name>
Then transfer the file using rsync or other means, and on the destination machine, import it via:
docker load -i <image.tar>
Again, that's from WSL2, PowerShell, or CMD. But in your case, the Ubuntu instance is WSL1. That won't work for Docker. You'll need to convert it to WSL2.
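Putting the save / transfer / load steps together as one sketch. The image name myapp:latest, the archive name, and the destination host below are placeholders, not from the question; substitute your own. The script only runs the Docker steps when the image actually exists locally, so it is safe to paste and adapt:

```shell
#!/bin/sh
set -eu

# Placeholders -- replace with your image name and destination.
IMAGE="myapp:latest"
ARCHIVE="myapp.tar"
DEST="user@fedora-box:/tmp/"

# Only run the export/transfer if docker is available and the image exists.
if command -v docker >/dev/null 2>&1 && docker image inspect "$IMAGE" >/dev/null 2>&1; then
    # 1. Export the image (all layers and tags) to a tarball.
    docker save -o "$ARCHIVE" "$IMAGE"

    # 2. Copy it to the Fedora machine.
    rsync -avP "$ARCHIVE" "$DEST"
fi

# 3. On the Fedora machine:  docker load -i /tmp/myapp.tar
echo "transfer plan: $IMAGE -> $ARCHIVE -> $DEST"
```

Note that docker save preserves the image's tags, so after docker load on the Fedora side the image appears under the same name.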
Just in case, I always recommend backing up your instance before converting it. From PowerShell:
wsl --export Ubuntu ubuntu_backup.tar
Then, once you have the backup:
wsl --set-version Ubuntu 2
wsl --set-default-version 2 # if desired
After conversion, you shouldn't see that error when running docker in Ubuntu.
Side note -- Docker Desktop "injects" the docker command into any WSL2 instance that you set in the "WSL Integration" tab in Settings. This should default to your "default" WSL2 instance, which (from your screenshot) is Ubuntu. The "real" docker command is inside docker-desktop, but it's linked into Ubuntu for you.
So by default, you should have all docker functionality directly in your Ubuntu instance. Neither docker-desktop nor docker-desktop-data are designed to be used directly by the end-user.
You can access docker desktop WSL using the following command
wsl -d docker-desktop
Trying to wrap my head around Docker, WSL2, Distros, Images and Containers. What is the difference between a WSL distro and a Docker image? Looking at the following two snapshots, it looks like those are different things:
List of installed distros in WSL:
List of images in Docker Desktop:
Alpine and Ubuntu are listed in the list of additional distros but do not show up in the list of images.
How am I supposed to run one of the installed WSL distros (Alpine or Ubuntu) as a Container and get to its terminal? Lastly, can I launch Ubuntu's desktop UI from within that container?
Docker images and WSL distributions are two completely different things from the usage standpoint.
In the context of over-simplifying to compare and explain the two:
WSL Distributions contain the tools that you use to interact with and develop applications using Linux. This includes your shell (bash by default in Ubuntu) and the docker client (provided by Docker Desktop).
Docker images are what you use as a starting point for your Docker containers.
The third screenshot you provided is the Settings dialog that allows you to choose which WSL distributions should be integrated with Docker. Try the following:
Turn off Ubuntu in that setting
Apply and Restart Docker Desktop
Start your Ubuntu WSL distribution by running it from the Start Menu (or preferably Windows Terminal, if you have it installed).
Try running the docker command
You should find it isn't there.
Turn Ubuntu back on in Docker Desktop's settings
Apply and Restart Docker Desktop
You don't need to restart Ubuntu, but the docker command should now be there.
Docker Desktop actually injects a link for the docker command:
From the docker-desktop distribution
Into the "user" distributions you select in that Settings option.
In general, it's fine just to leave it on for all distributions.
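You can see the injection from inside Ubuntu. A quick check (the symlink target mentioned in the comment is a Docker Desktop implementation detail that varies by version, so treat it as illustrative):

```shell
#!/bin/sh
# Where does the docker command actually live in this distro?
docker_path="$(command -v docker || true)"

if [ -n "$docker_path" ]; then
    # On a Docker Desktop-integrated distro this is typically a symlink
    # into a /mnt/wsl/docker-desktop/... mount shared from the
    # docker-desktop distro.
    ls -l "$docker_path"
else
    echo "docker not found in PATH (integration disabled, or not a WSL2 distro)"
fi
```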
Now I know your next question:
So how do I run a Docker container based on Alpine or Ubuntu docker images?
You need to actually pull the Docker images onto your computer first:
Make sure you've enabled all WSL2 distributions again (not necessarily required, but you don't want to leave any off by mistake).
Start your Ubuntu distribution
Run:
docker run --rm -it alpine
Docker will detect that you don't have the Docker Alpine image installed, pull it, and run it. This is a bit of Docker shorthand for two steps, actually:
docker pull alpine
docker run --rm -it alpine
The -i and -t options stand for "interactive" (keep STDIN open) and "tty" (allocate a pseudo-terminal).
At this point, you'll be at the BusyBox shell prompt (Alpine's default) that is running inside your Ubuntu WSL distribution.
Before exiting, go back to Docker Desktop and examine the list of containers and images. You'll see:
An "alpine" image
A randomly-named container that is running based on the alpine image
If you start another terminal with Ubuntu open, you can run:
docker ps to show you the container information
docker images to show you the image information
This is essentially the same info that you see in Docker Desktop.
Go back to the first Ubuntu WSL2 terminal, which is running the Alpine container with the BusyBox prompt. Try running docker there -- it won't work, because this is a container based on the Docker Alpine image, not your WSL Alpine distribution.
Type exit or Ctrl+D to exit that prompt, and you'll now be back at the bash prompt for Ubuntu.
At this point, you'll notice that your Docker container is now gone, since we specified the --rm option that removes it when the process ends. If we hadn't done that, it would still show as a "Stopped" container in Docker.
You'll find that the Alpine Docker image, however, is still there. Once pulled onto your machine, Docker images stay until you remove them.
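To reclaim the space later, remove the image explicitly. A small sketch, assuming the alpine image from the example above (guarded so it only touches Docker when a daemon is actually available):

```shell
#!/bin/sh
IMG="alpine"   # the image pulled in the example above

if command -v docker >/dev/null 2>&1; then
    docker images "$IMG"   # still listed, even after the --rm container exited
    docker rmi "$IMG"      # remove the image when you no longer need it
    docker images "$IMG"   # now gone (unless another tag or container references it)
fi
```

docker rmi will refuse to remove an image that a container (even a stopped one) still uses; remove the container first with docker rm.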
Docker images and containers can take a little bit to understand, and I can understand the added confusion in WSL distributions. Play around with them a bit, read some Docker tutorials, and the information above will start to make sense shortly.
Side note: Come back and read this after the above makes sense.
Docker containers and WSL2 distributions share one big architectural similarity, at least -- they are both container technologies, just different ones.
WSL2 distributions are actually containers running in their own namespace inside the (hidden) WSL2 Hyper-V virtual machine. They share the same kernel and network, but each WSL2 distribution/instance has its own, separate, isolated PID and user namespaces.
This is, at the core, the same concept that Docker containers use. So with WSL2 and Docker, we are really running:
A WSL2 distribution "container" inside the WSL2 VM
A Docker container inside the WSL2 distribution container
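If you want to see that isolation for yourself, each process's namespaces are exposed under /proc on any Linux system, including a WSL2 distro:

```shell
#!/bin/sh
# Print the PID namespace of the current shell. Run this in two different
# WSL2 distros and the inode numbers will differ (separate namespaces);
# run it twice in the same distro and they will match.
readlink /proc/self/ns/pid

# The full set of namespaces (pid, user, net, mnt, ...) for this process:
ls /proc/self/ns/
```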
I installed minikube and now I want to create my Docker containers, but how do I run the docker commands? I tried the following from the command prompt:
But it does not recognize docker as a command.
I also tried from PowerShell, with the same result: docker is not recognized.
I currently only have minikube installed on my workstation because I was given the impression from comments to a previous question that I did not need Docker Desktop (see Unable to connect to running docker containers (minikube docker daemon))
In this SO question there is an answer that shows three ways to make minikube and Docker work together on Windows:
The scenarios are:
1) Use Docker and minikube with Hyper-V (you will find instructions in the answer above). Enable Hyper-V, install Docker, and start minikube with:
minikube start --vm-driver hyperv --hyperv-virtual-switch "<created Hyper-V switch name>"
At the same time, you will be able to interact with Docker in the normal way. Use kubectl/minikube commands for your Kubernetes cluster and docker commands for Docker.
2) Use VirtualBox for Kubernetes and Docker Toolbox for Docker:
minikube start --vm-driver=virtualbox
3) Use Docker for Windows and Kubernetes in Docker.
I believe this will solve your issue. Please, let me know if that helped.
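One more option worth knowing if you stay minikube-only: minikube can hand your shell the environment variables needed to talk to the Docker daemon running inside its VM. This sketch assumes you have (or install) a docker client binary; without one, minikube ssh gives you a shell inside the VM where docker is already available:

```shell
#!/bin/sh
# Point this shell's docker CLI at the Docker daemon inside the minikube VM.
# (PowerShell equivalent: & minikube docker-env | Invoke-Expression)
if command -v minikube >/dev/null 2>&1; then
    eval "$(minikube docker-env)"   # sets DOCKER_HOST etc. for this shell only
    docker ps                       # lists containers inside the minikube VM
fi
```

This is also handy for local development: images you docker build in such a shell land directly in minikube's image cache, with no registry push needed.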
I'm using Docker Desktop on Windows 10. For the purposes of development, I want to expose a local folder to a container. When running the container in Docker, I do this by specifying the volume flag (-v).
How do I achieve the same when running the container in Kubernetes?
You should use the hostPath volume type in your pod's spec to mount a file or directory from the host node's filesystem. The hostPath.path field should be in one of the following formats to accept Windows-style paths:
/W/fooapp/influxdb
//W/fooapp/influxdb
/////W/fooapp/influxdb
Please check this GitHub issue explaining the peculiarities of Kubernetes volumes on Windows.
I also assume that you have enabled the Shared Drives feature in your Docker for Windows installation.
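A minimal pod spec sketch using hostPath with one of the path formats above. The pod and volume names, the influxdb image, and the W:\fooapp\influxdb folder are illustrative placeholders; adjust them to your setup:

```shell
#!/bin/sh
# Write a minimal hostPath pod spec; apply it with:
#   kubectl apply -f hostpath-pod.yaml
cat > hostpath-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: influxdb-hostpath-demo   # placeholder name
spec:
  containers:
  - name: influxdb
    image: influxdb
    volumeMounts:
    - name: data
      mountPath: /var/lib/influxdb
  volumes:
  - name: data
    hostPath:
      path: /W/fooapp/influxdb   # one of the Windows-style formats above
      type: DirectoryOrCreate    # create the directory if it doesn't exist
EOF
```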
Using k8s 1.21.5 the following type of path worked for me:
/run/desktop/mnt/host/c/PATH/TO/FILE
Digging through this GitHub issue helped me work out which path to use; the explanation is there:
https://github.com/kubernetes/kubernetes/issues/59876
The folder mount for /run/desktop/mnt/host/c does not exist on the distro you installed in WSL2 - on that WSL2 distro, the mount point to your C:\ drive is the more obvious /mnt/c.
Realize that Kubernetes and Docker are not installed in your WSL2 distro. Instead, Docker Desktop for Windows creates its own WSL2 VM called docker-desktop and installs Docker and Kubernetes on that VM. It then installs the docker and kubectl CLIs on your WSL2 distro (and also on your Windows machine) and configures them all to point to the Docker and Kubernetes instances it created on the docker-desktop VM.
This docker-desktop VM hosts Docker and Kubernetes and also contains the /run/desktop/mnt/host/c mount point to your Windows C:\ drive, which your containers can use to persist data.
You can remote into the docker-desktop VM and see the /run/desktop/mnt/host/c mount point and folder structure by following the instructions (and discussion) at https://stackoverflow.com/a/62117039/11057678:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
I used Docker with docker-machine (I can access the container server at 192.168.99.100). I would like to stop using docker-machine so I can access my containers directly via localhost (127.0.0.1). I shut down docker-machine (docker-machine stop) and tried to build an image and container, but it said 'no daemon'. How should I completely shut down docker-machine and use the local Docker?
I think what you want is to unset all docker-machine environment variables so that you use your host's Docker daemon. This can be achieved with this command:
eval $(docker-machine env -u)
There are two different installs for docker on Mac. Both use a VM running Linux under the covers.
The older method includes docker toolbox and docker machine to manage the VM in virtualbox. When you use docker machine to stop this VM, the docker commands have no host to run on and will error out as you've seen.
The newer install uses xhyve to run the VM and various other tricks to make it appear seamless. This is a completely different install that you download and run from Docker, and it requires that your Mac be running OS X 10.10.3 Yosemite or later.
See this install page for more details: https://store.docker.com/editions/community/docker-ce-desktop-mac?tab=description
I have set up Docker for Windows (Hyper-V beta) on my laptop.
My intention is to experiment with some setups for containers that I intend to install on my real server later. I am fairly new to Docker (but know the basics), so I wanted to experiment with volumes and volume images a bit.
However, all anonymous volumes end up on the virtual Linux host. I would like to access the filesystem of that host directly, not from within a container.
I cannot easily access it from within a container due to (well-founded) security constraints, and I cannot find a way to access it from the Windows prompt either.
(Using Docker for Windows version 1.12.0-beta21)
I know that it is possible to mount volumes using the C: share created by Docker for Windows, but that raises the complexity for me. My intent is to follow Docker tutorials unmodified and inspect the results in the host filesystem, preferably through a (bash) shell in the host VM or with Windows file access into the virtual machine.
Later on, I would also like to copy volume contents into the VM's volumes, although that could be solved using a volume against the C: drive.
After researching on my own, I have worked out the following technique: create a privileged container that behaves as if it were the Linux root host. This is the best approach I have been able to pinpoint so far.
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
docker-machine will allow you to SSH into the default machine by typing:
docker-machine ssh
You'll be logged into the VM that is running Docker.