How do I use docker from a local to a remote machine?

I've noticed that boot2docker runs docker on a VM as a daemon on port 2375.
Then I use the local Mac OS X 'docker' command and it executes all calls on the VM.
These are the commands I use:
boot2docker start
export DOCKER_HOST=tcp://:2375
And then 'docker images' (for example) runs on the VM.
How can I do the same with a physical machine rather than a VM?

boot2docker is meant for dev purposes. It will spawn a VM. For bare metal, simply install docker on the host and start the docker daemon with docker -d -H tcp://0.0.0.0:4243.
WARNING: This is very dangerous. Anyone will have root access to your host. In order to secure this, you should change 0.0.0.0 to 127.0.0.1 and either use an SSH tunnel or an nginx/apache frontend with authentication.
On your Mac, then just export DOCKER_HOST=tcp://<host ip>:4243
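For illustration, a minimal sketch of the SSH-tunnel variant (user@remote-host is a placeholder; the port follows the answer above):
# on the remote host: bind the daemon to loopback only
docker -d -H tcp://127.0.0.1:4243
# on the Mac: forward a local port to the remote daemon over SSH
ssh -N -L 4243:127.0.0.1:4243 user@remote-host &
export DOCKER_HOST=tcp://127.0.0.1:4243
docker images   # now runs against the remote daemon, encrypted via SSH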

Related

How to use docker daemon running on host machine in minikube

I have installed minikube on my laptop, and I see that minikube uses a docker daemon running within the cluster.
Is it possible to run minikube to use the host machine docker daemon?
I tried
export DOCKER_HOST="tcp://localhost:2376"
then ran minikube start
and minikube start --docker-env=DOCKER_HOST="tcp://localhost:2376"
Neither worked.
Is it possible to run minikube to use the host machine docker daemon?
No. Minikube runs in a VM, and can't connect to the host's /var/run/docker.sock file. (The setup you show requires a non-default host Docker configuration with significant risk of just outright getting the host rooted, and from the VM's point of view, localhost is the VM.)
You can do the opposite, though: point your local docker client at minikube's Docker daemon with
eval $(minikube docker-env)
(Also remember that Kubernetes is designed for multi-host deployments based around immutable images. If you're trying to do live development inside a Kubernetes pod, it is rather complicated and translates poorly to production environments. Use plain Docker, or better still, install a development environment directly on your host. If you're just trying to test out deployment wiring, minikube, or the Kubernetes included in Docker Desktop, or other tools like kind work just fine.)
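As a concrete sketch of that workflow (the image name myapp is hypothetical; an image built against minikube's daemon needs no registry push):
eval $(minikube docker-env)
# images built now are immediately visible inside the cluster
docker build -t myapp:dev .
kubectl run myapp --image=myapp:dev --image-pull-policy=Never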
@David Maze, it's not completely true what you wrote in your answer:
No. Minikube runs in a VM, and can't connect to the host's
/var/run/docker.sock file.
Let's say it can be true only in a particular case. So to the question:
Is it possible to run minikube to use the host machine docker daemon?
I would answer: yes, it is. While a typical Minikube instance runs in a separate VM, it is still possible to run it directly on the host. You can read more about that in the minikube installation guide in the official Kubernetes documentation:
Note: Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment but not a hypervisor. It is recommended to use the apt installation of docker from Docker, when using the none driver. The snap installation of docker does not work with minikube.
@Sunil Gajula, adding the following flag:
--vm-driver=none
when running your Minikube instance should actually resolve your problem: the driver is not set to none by default, and it seems to be the missing element in your attempts to run Minikube on your local machine. By default Minikube runs in a VM, using one of the available hypervisors (if you don't specify the above-mentioned flag).
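A minimal sketch of that invocation (assuming a Linux host with Docker installed; the none driver needs root):
sudo minikube start --vm-driver=none
kubectl get nodes   # the host itself shows up as the single node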
I got this working on macOS, using fish:
# install docker-cli
brew install docker
brew install minikube hyperkit
# run minikube without kubernetes enabled
minikube start --memory 6144 --cpus 4 --docker-opt=bip=172.17.42.1/16 --no-kubernetes
# point the fish shell at minikube's daemon (put this in your config to persist it)
minikube -p minikube docker-env | source
# for bash/zsh use: eval $(minikube docker-env)
And if you want to run a minikube k8s cluster, you can:
minikube start --addons=registry --cni=calico --driver=hyperkit --cpus=8 --memory=8g
(or some simpler command)
You may need to install docker-machine-driver-hyperkit as well.
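For illustration, one way to install the driver (assuming Homebrew; the chown/chmod step mirrors the minikube documentation, since the driver must run with root privileges):
brew install docker-machine-driver-hyperkit
# the driver binary must be owned by root and setuid to manage the hypervisor
sudo chown root:wheel /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit
sudo chmod u+s /usr/local/opt/docker-machine-driver-hyperkit/bin/docker-machine-driver-hyperkit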
With everything set up, you can use the docker CLI to interact with the Docker daemon in minikube.

Decision rule to use docker-machine or not on docker run

When I use docker-machine in a Windows environment (installed with docker-toolbox), every docker run command uses that docker-machine as the docker daemon.
However, when I use docker-machine in a Linux environment, which has a native docker daemon installed along with docker-machine, the docker run command uses the native docker daemon even if there is a running docker-machine instance.
Questions are:
How does the docker run command decide which daemon to use?
Is there any method to list running containers on a docker-machine instance?
For the second one, I know I can SSH into the docker-machine instance and run docker ps in it, but I want to check it from outside the instance.
Thanks in advance.
The Docker Machine stack works by firing up a VM, and then setting the DOCKER_HOST environment variable to point at it. In particular, it also does the required setup to TLS-encrypt the connection and to set up a TLS client certificate to authenticate the caller. (Without this setup, a remote DOCKER_HOST is extremely dangerous.)
So: docker run and every other Docker command uses the DOCKER_HOST environment variable to decide where to run things. If DOCKER_HOST points at a Docker Machine VM, docker ps will list the containers there; you won’t usually need to docker-machine ssh (though it’s a useful tool when you really need it).
On a native Linux host it’s far easier to just directly use a local Docker daemon. If you do have both a local daemon and a docker-machine VM, you can
# switch to the Docker Machine VM
eval $(docker-machine env default)
# switch back to the host Docker
eval $(docker-machine env -u)
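If you only want a one-off command against the VM without switching your shell, docker-machine config prints the connection flags directly (assuming a machine named default and no spaces in the certificate paths):
# expands to the -H/--tls* flags for that machine
docker $(docker-machine config default) ps   # lists containers on the VM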

Connect with ssh to docker daemon on Windows

I installed Docker Desktop for Windows on Windows 10 following https://docs.docker.com/docker-for-windows/install/#install-docker-for-windows. It does not use VirtualBox and a default VM to host docker.
I am able to run containers, but how do I connect to the docker VM with ssh?
docker-machine ls does not show my docker host.
I tried to connect to docker@10.0.75.1, but it requires a password, and tcuser (the password used for the boot2docker VM) does not match:
ssh docker@10.0.75.1
Could not create directory '/home/stan/.ssh'.
The authenticity of host '10.0.75.1 (10.0.75.1)' can't be established.
RSA key fingerprint is ....
Are you sure you want to continue connecting (yes/no)? yes
Failed to add the host to the list of known hosts (/home/stan/.ssh/known_hosts).
docker@10.0.75.1's password:
Write failed: Connection reset by peer
Run this:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
Just run this from your CLI and it'll drop you into a container with full permissions on the Moby VM. It only works for the Moby Linux VM (it doesn't work for Windows Containers). Note this also works on Docker for Mac.
Reference:
https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/
As far as I know you can't connect to the docker VM using SSH and you cannot connect to the console/terminal using Hyper-V Manager either. https://forums.docker.com/t/how-can-i-ssh-into-the-betas-mobylinuxvm/10991/17

Not use docker-machine

I used docker with docker-machine (I could access the container server at 192.168.99.100). I would like to stop using docker-machine, so I can access my containers directly via localhost (127.0.0.1). I shut down docker-machine (docker-machine stop) and tried to build an image and container, but it said 'no daemon'. How should I completely shut down docker-machine and use the local docker?
I think what you want is to unset all docker-machine environment variables, so that the client uses your host's Docker daemon. This can be achieved with this command:
eval $(docker-machine env -u)
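As a quick sanity check after unsetting the variables (nothing docker-machine-specific here):
env | grep DOCKER   # should print nothing once the variables are unset
docker info         # should now describe the local daemon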
There are two different installs for docker on Mac. Both use a VM running Linux under the covers.
The older method includes docker toolbox and docker machine to manage the VM in virtualbox. When you use docker machine to stop this VM, the docker commands have no host to run on and will error out as you've seen.
The newer install uses xhyve to run the VM and various other tricks to make it appear seamless. This is a completely different install that you download and run from Docker, and it requires your Mac to be running at least macOS 10.10.3 (Yosemite).
See this install page for more details: https://store.docker.com/editions/community/docker-ce-desktop-mac?tab=description

Access host docker-machine from within container

I have an image that I'm using to run my CI/CD builds (using GitLab CE). I'd like to deploy my app doing something like this from within the container:
eval "$(docker-machine env manager)"
sudo docker stack deploy --compose-file docker-stack.yml web
However, I'd like the docker-machine to access machines defined on the host system since the container will be destroyed and I don't want to include access details in the image.
I've tried a few things
Accessing the Remote Host via docker-machine
Create the docker-machine on the host and mount the MACHINE_STORAGE_PATH so that it is available to the container
Connect to the remote docker-machine manually from within the container and setting the MACHINE_STORAGE_PATH equal to a mounted volume
Mounting the docker socket
In both cases, I can see the machine storage is persisted, but whenever I create a new container and run docker-machine ls none of the machines are listed.
Accessing the Remote Host via DOCKER_HOST
Forward the remote machine's docker port to the host docker port: docker-machine ssh manager-1 -N -L 2376:localhost:2376
export DOCKER_HOST=tcp://localhost:2376
Tell docker to use the same certs that are used by docker-machine: export DOCKER_TLS_VERIFY=1 and export DOCKER_CERT_PATH=/Users/me/.docker/machine/machines/manager-1
Test with docker info
This gives me error during connect: Get https://localhost:2376/v1.26/info: x509: certificate signed by unknown authority
Any ideas on how I can perform a remote deployment from within a container?
Thanks
EDIT
Here is a diagram to try and help better communicate the scenario.
Don't use docker-machine for this.
Docker-machine stores files in $HOME/.docker/machine, so when you restart with a fresh copy of this folder, all previously defined machines will be removed. You could store this folder as a volume, but there's a much easier way for your purposes.
The solution is to mount the docker socket and, either as root or as a user with the same gid as the docker socket (note that group names inside and outside the container may not match, so the gid is what matters), run your docker ... commands as normal. You can skip the docker-machine eval completely since you are running the commands against the local docker socket.
If you need to run commands remotely, I find it easier to define the DOCKER_HOST and DOCKER_TLS_VERIFY variables manually rather than using docker-machine.
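For illustration, a sketch of that manual setup from inside a container (the host name manager and the mount path /certs are assumptions; the certificates come from the machine's folder under $HOME/.docker/machine on the host):
export DOCKER_HOST=tcp://manager:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/certs   # bind-mount the machine's cert folder here
docker info                      # should report the remote daemon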
In case you want to communicate from your CI container to the Docker host, you can simply mount the Docker socket when starting the CI container:
docker run -v /var/run/docker.sock:/var/run/docker.sock <gitlab-image>
Now you can run docker commands on the host from within the CI container.
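Tying this to the gid note above, a minimal sketch (the gid 999 and the image name gitlab-runner-image are assumptions, and the image is assumed to contain the docker CLI; find the real gid with stat -c %g /var/run/docker.sock on the host):
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add 999 \
  gitlab-runner-image \
  docker ps   # lists the containers running on the host daemon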
