Docker Machine to a remote server

I understand that when I create a VM with Docker Machine using the VirtualBox driver, it creates a local VM running the boot2docker distribution. I can then create my containers on it using, for instance, Docker Compose.
But what exactly happens when you point Docker Machine at a remote server? Does it create a VM on that remote server?
Does it differ between a known provider (using, say, the AWS driver) and an unknown provider (using the generic driver)?

When you use DigitalOcean, AWS, etc., you give Docker Machine an API key, which it uses to create a VM. It then installs the Docker daemon and any dependencies and configures remote access. So you don't give it a remote server - it creates one.
If you use the generic driver, you give Docker Machine SSH access to an existing IP, where I presume it again installs Docker and configures remote access (so it effectively skips the creation step).
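For illustration, the two flows might look like the following (driver flags per docker-machine's docs; the keys, IP, and machine names are placeholders):

# cloud driver: docker-machine creates the VM, then provisions Docker on it
docker-machine create --driver amazonec2 \
  --amazonec2-access-key <access-key> \
  --amazonec2-secret-key <secret-key> \
  aws-box

# generic driver: no VM is created; an existing host is provisioned over SSH
docker-machine create --driver generic \
  --generic-ip-address 203.0.113.10 \
  --generic-ssh-user ubuntu \
  --generic-ssh-key ~/.ssh/id_rsa \
  my-server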

Related

Docker desktop networking Windows and Linux nodes

I have a Windows service within a Docker container that needs to access a MySQL database in a Linux container on the same machine (currently a dev machine).
I thought of creating an overlay network across the two "nodes" on the same machine, but this isn't possible: creating the swarm worker fails on Windows after creating the swarm master on Linux.
Is this possible, and if not, what is the easiest way of doing this? The purpose of the Windows container is simply to deploy to a test environment to gather data. Do I need to deploy the Linux container to the cloud or perhaps another machine, so the Windows container can communicate with it?
You can simply use Docker Compose; it will create the network automatically. Replace the MySQL host with the MySQL service name you defined in the compose YAML file. For details, refer to the docker-compose documentation.
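As a minimal sketch (the service and image names here are just examples), the app container would then reach MySQL at the hostname db:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    build: .
    environment:
      DB_HOST: db          # the service name doubles as the DNS hostname
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
docker-compose up -d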

Is it possible to manage macOS Docker Desktop with Docker Machine?

I have Docker Desktop installed on my Mac (not Docker Toolbox), and I installed docker-machine according to the official documentation.
I'm trying to add my localhost Docker engine as a node under docker-machine, with no success.
The steps that I made were:
Enable sshd in localhost (ssh localhost works)
Add localhost Docker to Docker Machine:
docker-machine create --driver generic --generic-ip-address 127.0.0.1 --generic-ssh-user <ssh_username> <node_name>
Running pre-create checks...
Creating machine...
(localhost) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Error creating machine: Error detecting OS: Error getting SSH command: ssh command error:
command : cat /etc/os-release
err : exit status 1
output : cat: /etc/os-release: No such file or directory
Output of docker-machine ls
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
localhost - generic Running tcp://127.0.0.1:2376 Unknown Unable to query docker version: Cannot connect to the docker engine endpoint
Sorry for my English, I'm not native.
docker-machine is dangerous. I wouldn't recommend it for managing production servers, as it requires passwordless sudo and makes it very easy to damage your Docker installation. I managed to completely remove all containers and images from a server, not realizing that the command I ran was not merely connecting to the server but initializing it from scratch.
If you want to control multiple Docker daemons from a single CLI, try Docker Contexts.
Edit:
docker-machine's purpose is provisioning and managing machines with the Docker daemon.
It can be used both with local VMs and with various cloud providers. With a single command it can create and start a VM, then install and configure Docker on that new VM (including generating TLS certificates).
It can create an entire Docker Swarm cluster.
It can also install Docker on a physical machine, given SSH access with passwordless sudo (that is what the generic driver you tried to use is for).
Once a machine is fully provisioned with Docker, it can also set environment variables that configure the Docker CLI to send commands to the remote Docker daemon installed on that machine - see here for details.
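For example, assuming a machine named my-server (the IP below is illustrative), docker-machine prints the variables and a shell can evaluate them:

docker-machine env my-server
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="tcp://203.0.113.10:2376"
# export DOCKER_CERT_PATH="$HOME/.docker/machine/machines/my-server"
# export DOCKER_MACHINE_NAME="my-server"
eval $(docker-machine env my-server)
docker ps    # now talks to the remote daemon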
Finally, one can also add machines with Docker manually configured by not using any driver - as described here. The only purpose of that is to allow for a unified workflow when switching between various remote machines.
However, as I stated before, docker-machine is dangerous: it can also remove existing VMs and, in the case of physical machines, reprovision them, thereby removing all existing images, containers, etc. A simple mistake can wipe a server clean. Not to mention it requires both key-based SSH and passwordless sudo, so if an unauthorized person gets their hands on an SSH key for a production server, then that's it - they have full root access to everything.
It is possible to use docker-machine with preexisting Docker installations safely - you need to add them without using any driver, as described here. In this scenario, however, most docker-machine commands won't work, so the only benefit is easy generation of those environment variables for the Docker CLI I mentioned before.
Docker Contexts are a new way of telling the Docker CLI which Docker daemon it's supposed to communicate with. They are essentially meant to replace all those environment variables docker-machine generates.
Since the Docker CLI only communicates with the Docker daemon, there is no risk of accidentally deleting a VM or reprovisioning an already configured physical machine. And since contexts are part of the Docker CLI, there is no need to install additional software.
On the other hand, Docker contexts cannot be used to create or provision new machines - one needs to either do that manually or use some other mechanism or tool (like Vagrant or some kind of template provided by the cloud provider).
So if you really need a tool that'll let you easily create, provision, and remove Docker-enabled machines, then use docker-machine. If, however, all you want is a list of all your Docker-enabled machines in one place and a way to easily choose which one your local Docker CLI talks to, Docker Contexts are a much safer alternative.
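A minimal sketch of the context workflow (the name and address are placeholders; SSH-based contexts need a reasonably recent Docker on both ends):

docker context create my-server --docker "host=ssh://user@203.0.113.10"
docker context use my-server
docker ps                    # commands now go to the remote daemon
docker context use default   # switch back to the local daemon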

Trying to get Xdebug session initiated in a docker inside a VM to reach my remote computer

I have a Docker container running my PHP app.
This container needs to run inside a VM in a remote datacenter.
I work from a computer that can connect to the mentioned VM.
My intention is to have the Xdebug session that is initiated inside the container reach my computer (more precisely, my PhpStorm).
Both the container and the VM are running CentOS (company approved/installed images).
The development computer is OS X.
I am able to use ssh remote forward (aka: tunnel) to forward any requests from the VM to my computer.
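(For reference, that remote forward looks something like this, assuming Xdebug 2's default port 9000; Xdebug 3 uses 9003:)

ssh -R 9000:localhost:9000 me@vm-host
# connections to port 9000 on the VM now reach port 9000 on my computer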
I want to either:
- be able to open a tunnel from my computer directly to the docker container in the VM
- or extend the current tunnel from the VM into the container.
I have found no way to do the first and have run into a lot of issues trying to do the second.
Any suggestions?

What is the Docker Engine?

When people talk about the 'Docker Engine', do they mean both the Client and the Daemon? Or is it something else entirely?
As I see it, there is a Docker Client and a Docker Daemon. The Client runs locally and connects to the Daemon, which does the actual running of the containers. The Client can connect to a remote Daemon. Are these both together the Engine? Thanks
The Docker Engine is the Docker Daemon running on a single host, installed with the Docker Client CLI. Here are the docs that answer this specific question.
On top of that, you can have a Swarm running that joins multiple hosts to horizontally scale and provide fault tolerance. And there are numerous other projects from Docker, like their Registry, Docker Cloud, and Universal Control Plane, that are each separate from the engine.
Docker Engine is a client-server application which comprises three components:
1. Client: the Docker CLI, the command-line tool through which we interact.
2. REST API: the client communicates with the server over a REST API; the commands issued by the client are sent to the server as REST calls, which is why the server can be on either the local or a remote machine.
3. Server: the local or remote host machine running a daemon process, which receives the commands and creates, manages, and destroys Docker objects like images, containers, and volumes.
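You can see that REST layer directly; for instance, on a Linux host the daemon can be queried over its Unix socket:

curl --unix-socket /var/run/docker.sock http://localhost/containers/json
# returns the same JSON that the CLI formats when you run: docker ps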

Use real server instead of docker-machine for OSX

I have a Linux server in the cloud with the Docker service installed on it. How can I use my virtual server (VS) in the cloud instead of docker-machine on my OS X machine? That is, instead of installing VirtualBox and creating a VM on it with docker-machine, I want to use my cloud server as the Docker server.
To access a remote Docker daemon simply pass the -H flag to your docker commands:
docker -H=tcp://192.168.0.100:2375 images
You need to ensure that the remote Docker daemon is listening on the appropriate network interface. Be aware, though, that doing this on an external server is highly insecure: anyone who can reach the port effectively has root access on the server. At the very least, read this article on securing the Docker daemon.
Personally, I would only recommend accessing the remote Docker daemon through an SSH tunnel.
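A sketch of that approach (the host and user are placeholders; forwarding to the Unix socket requires OpenSSH 6.7+):

# forward a local TCP port to the remote daemon's Unix socket
ssh -nNT -L 2375:/var/run/docker.sock user@cloud-server &
# point the client at the tunnel instead of exposing the daemon publicly
docker -H tcp://localhost:2375 images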
You might get a solution from docker-machine's generic driver. Just start the virtual server in the cloud, set up proper SSH keys, and get started :) It should work just the same as with a VM within VirtualBox.
I'm not sure how to get the VS auto-started if it is shut down, though. Via a cloud-vendor-specific command-line program?
Edit: I should have read the docs better; the first cloud example actually shows the usage of the DigitalOcean driver. If the server is already running, then just use the generic driver.
