Connect to remote docker host / deploy a stack - docker

I created a Docker stack to deploy to a swarm. Now I'm a bit confused about what the proper way to deploy it to a real server looks like.
Of course I can
scp my docker-stack.yml file to a node of my swarm
ssh into the node
run docker stack deploy -c docker-stack.yml stackname
So there is the docker-machine tool, I thought.
I tried
docker-machine create -d none --url=tcp://<RemoteHostIp>:2375 node1
which only seems to work if you open the port without TLS?
I received the following:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.178.49:2375": dial tcp 192.168.178.49:2375: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I already tried to generate a certificate & copy it over to the host:
ssh-keygen -t rsa
ssh-copy-id myuser@node1
Then I ran
docker-machine --tls-ca-cert PathToMyCert --tls-client-cert PathToMyCert create -d none --url=tcp://192.168.178.49:2375 node1
With the following result:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "node1:2375": There was an error reading certificate
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried it with the generic driver
$ docker-machine create -d generic --generic-ssh-port "22" --generic-ssh-user "MyRemoteUser" --generic-ip-address 192.168.178.49 node1
Running pre-create checks...
Creating machine...
(node1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Error creating machine: Error detecting OS: OS type not recognized
How do I properly add the remote Docker host to docker-machine with TLS?
Or is there a better way to deploy stacks to a server / into production?
I often read that you shouldn't expose the Docker port, but never how to do it properly instead. And I can't believe there is no simple way to do this.
Update & Solution
I think both answers have their merits.
I found the official "Deploy to Azure" documentation (it's the same for AWS).
The answer from @Tarun Lalwani pointed me in the right direction, and it's almost the official solution. That's the reason I accepted his answer.
For me the following commands worked:
ssh -fNL localhost:2374:/var/run/docker.sock myuser@node1
Then you can run either:
docker -H localhost:2374 stack deploy -c stack-compose.yml stackname
or
export DOCKER_HOST=localhost:2374
docker stack deploy -c stack-compose.yml stackname
The answer from @BMitch is also valid, and the security concern he mentioned shouldn't be ignored.
Update 2
The answer from @bretf is an awesome way to connect to your swarm, especially if you have more than one. It's still in beta but works for swarms that are reachable from the internet and don't run on an ARM architecture.

I would prefer not opening/exposing the Docker port, even with TLS in mind. I would rather use an SSH tunnel and then do the deployment:
ssh -L 2375:127.0.0.1:2375 myuser@node1
And then use
export DOCKER_HOST=tcp://127.0.0.1:2375
docker stack deploy -c docker-stack.yml stackname

You don't need docker-machine for this. Docker has detailed steps to configure TLS in its documentation. The steps involve (a hedged command sketch follows the list):
creating a CA
creating and signing a server certificate
configuring the dockerd daemon to use this cert and to validate client certs
creating and signing a client certificate
copying the CA and client certificate files to your client machine's .docker folder
setting some environment variables to use the client certificates and the remote Docker host
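A condensed, hedged sketch of those steps, based on Docker's "Protect the Docker daemon socket" guide; node1 and 192.168.178.49 are the host values from the question, and the validity period and file paths are just examples:
# 1. Create a CA
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
# 2. Create and sign a server certificate (include the host name and IP as SANs)
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=node1" -sha256 -new -key server-key.pem -out server.csr
echo "subjectAltName = DNS:node1,IP:192.168.178.49,IP:127.0.0.1" > extfile.cnf
echo "extendedKeyUsage = serverAuth" >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
# 3. Run dockerd with TLS client verification on the standard TLS port 2376
#    (in practice these flags go into the daemon's service configuration)
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376
# 4. Create and sign a client certificate
openssl genrsa -out key.pem 4096
openssl req -subj "/CN=client" -new -key key.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
# 5. On the client: copy ca.pem, cert.pem and key.pem into ~/.docker and point the CLI at the host
export DOCKER_HOST=tcp://node1:2376 DOCKER_TLS_VERIFY=1
docker stack deploy -c docker-stack.yml stackname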
I wouldn't use the SSH tunnel method in a multi-user environment, since any user with access to 127.0.0.1 would have root access to the remote Docker host without a password or any auditing.

If you're using Docker for Windows or Docker for Mac, Docker Cloud has a more automated way to set up your TLS certs and get you securely connected to a remote host for free. Under "Swarms" there's "Bring your own Swarm", which runs an agent container on your Swarm managers to let you easily use your local docker CLI without manual cert setup. It still requires the Swarm port to be open to the internet, but this setup ensures TLS mutual auth is enabled.
Here's a YouTube video showing how to set it up. It also supports group permissions for adding/removing other admins from remotely accessing the Swarm.

Related

Cannot connect to the Docker daemon. Is the Docker daemon running?

I'm using Jenkins in Docker on my local Mac machine.
And I'm running another Docker on an Ubuntu VirtualBox VM. So there are two Docker machines: one on my Mac and one on my Ubuntu VirtualBox machine. Jenkins runs in Docker on the Mac. Now, in the Jenkins pipeline, I want to build an image on my Ubuntu machine.
I've configured a Jenkins Docker cloud, and the Docker host URL is connected to the Ubuntu Docker machine.
But while building a new image, I'm getting the error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I've even tried adding ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock
in /lib/systemd/system/docker.service.
When I check ps -aux,
Can someone please help me out?
Help is appreciated.
First, personally, if I had a setup like that I would not bother connecting to the remote Docker daemon; I would just install a Jenkins agent on the Ubuntu machine and have it talk to the Jenkins master.
But if you want to do it the way you have it set up right now, with Jenkins talking from inside one Docker host to another Docker host, I suggest looking into the following:
Your Jenkins master and the Ubuntu machine are very isolated; they might as well be on different machines, not even in the same room. Unix domain sockets, the ones identified by unix://*, are made for communication within a single local OS kernel; trying to bridge them to a remote machine will lead to disaster.
So the only way Jenkins can communicate with the remote host is via a network protocol like TCP. Most of the time, when you install Docker with the default settings, it doesn't listen on TCP at all, mostly for security reasons.
The first thing you should do is configure the Docker daemon on the Ubuntu machine to listen on a TCP port and accept connections from remote hosts. You can use netstat -nat to see if anything is listening on TCP 4243. When things are configured correctly you will see a line that starts with 0.0.0.0:4243, or something like that, in the netstat output.
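A minimal sketch of one way to do that, assuming a systemd-based Ubuntu host; port 4243 matches the question, and binding to 0.0.0.0 without TLS should only be done on a trusted network:
sudo mkdir -p /etc/systemd/system/docker.service.d
printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243\n' | sudo tee /etc/systemd/system/docker.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart docker
netstat -nat | grep 4243    # expect a LISTEN entry for 0.0.0.0:4243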
Second, you need to make sure the firewall/iptables/netfilter configuration on the Ubuntu host lets connections in from outside. A good test is to run telnet <ubuntu-ip> 4243 from a terminal session on your Mac.
Then you need to make sure that Docker networking is configured correctly, so that connections from inside the container running Jenkins end up on your Ubuntu box. To test this, docker exec -it into your Jenkins container and repeat the telnet test. On modern Linuxes telnet is usually not installed, so you can use curl -vvv instead; it will always end with an error, so look at the verbose output to see whether the error is because things cannot communicate (timeout, connection reset, etc.) or because curl actually reached Docker and got an HTTP response back. In the latter case you can consider things to be set up correctly.
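For example, something along these lines (the container name, IP and port are placeholders; curl may need to be installed inside the Jenkins image first):
docker exec -it <jenkins-container> curl -vvv http://<ubuntu-ip>:4243/version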
Finally, you need to tell the Docker client used by Jenkins to communicate with the remote Docker daemon via TCP. Usually that is given on the command line, via the -H flag or the DOCKER_HOST environment variable, to your docker run, docker ps, or docker exec commands.
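A hedged example of what that looks like; the IP and port are placeholders matching the question:
docker -H tcp://<ubuntu-ip>:4243 ps
# or, once per shell session:
export DOCKER_HOST=tcp://<ubuntu-ip>:4243
docker build -t myimage .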
I've configured it by defining the slave label in my Jenkins Pipeline.
Jenkins agents can run in a variety of environments, such as physical machines, virtual machines, Kubernetes clusters, and Docker images.
In your Jenkins Pipeline or Jenkinsfile, you have to set the agent according to what you're using, whether a Docker image or a virtual machine.
Also, thank you so much @Vlad; everything you told me was really helpful.

Docker Socket over SSH

Can I run the Docker socket over SSH?
I'm trying to use unix:///var/run/docker.sock, but I'm getting the error "Cannot connect to daemon service. Is daemon service running?"
Quoting an earlier answer: "Jenkins master and the Ubuntu machine are very isolated; they might as well be on different machines, not even in the same room. Unix domain sockets, the ones identified by unix://*, are made for communication within a single local OS kernel; trying to bridge them to a remote machine will lead to disaster."
How can I use the Docker socket over SSH?
Stephen proposed a solution, but I find this one better suited to your use case.
You can simply use
ssh xxx "docker run yyy"
or you can use environment variables.
Make sure that SSH key authentication is active,
and call all your Docker commands with this environment variable defined:
DOCKER_HOST=ssh://remoteservername
Docker will use the SSH connection to run the commands.
You can also use -H (it works the same way).
See more here:
https://betterprogramming.pub/docker-tips-access-the-docker-daemon-via-ssh-97cd6b44a53
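A minimal sketch of that workflow, assuming Docker 18.09 or newer on both ends and working key-based SSH; the user and host names are placeholders:
export DOCKER_HOST=ssh://myuser@remoteservername
docker ps     # runs against the remote daemon over the SSH connection
# the per-command equivalent:
docker -H ssh://myuser@remoteservername ps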

Is it possible to manage macOS Docker Desktop with Docker Machine?

I have Docker Desktop installed on my Mac (not Docker Toolbox), and I installed docker-machine according to the official documentation.
I'm trying to add my localhost Docker Engine as a docker-machine node, with no success.
The steps that I made were:
Enable sshd in localhost (ssh localhost works)
Add localhost Docker to Docker Machine:
docker-machine create --driver generic --generic-ip-address 127.0.0.1 --generic-ssh-user <"ssh_username"> <node_name>
Running pre-create checks...
Creating machine...
(localhost) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Password:
Detecting the provisioner...
Password:
Error creating machine: Error detecting OS: Error getting SSH command: ssh command error:
command : cat /etc/os-release
err : exit status 1
output : cat: /etc/os-release: No such file or directory
Output of docker-machine ls
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
localhost - generic Running tcp://127.0.0.1:2376 Unknown Unable to query docker version: Cannot connect to the docker engine endpoint
Sorry for my English, I'm not native.
docker-machine is dangerous. I wouldn't recommend it for managing production servers, as it requires passwordless sudo and makes it very easy to damage your Docker installation. I once managed to completely remove all containers and images from a server, not realizing that the command I ran was not merely connecting to the server but initializing it from scratch.
If you want to control multiple Docker daemons from a single CLI, try Docker Contexts.
Edit:
docker-machine's purpose is provisioning and managing machines with a Docker daemon.
It can be used both with local VMs and with various cloud providers. With a single command it can create and start a VM, then install and configure Docker on that new VM (including generating TLS certificates).
It can create an entire Docker Swarm cluster.
It can also install Docker on a physical machine, given SSH access with passwordless sudo (that is what the generic driver you tried to use is for).
Once a machine is fully provisioned with Docker, it can also set environment variables that configure the Docker CLI to send commands to the remote Docker daemon installed on that machine - see here for details.
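For illustration, roughly like this (node1 is a placeholder machine name):
eval "$(docker-machine env node1)"   # exports DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH
docker ps                            # now talks to the remote daemon on node1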
Finally, one can also add machines with Docker manually configured by not using any driver - as described here. The only purpose of that is to allow for a unified workflow when switching between various remote machines.
However, as I stated before, docker-machine is dangerous - it can also remove existing VMs and, in the case of physical machines, reprovision them, thereby removing all existing images, containers, etc. A simple mistake can wipe a server clean. Not to mention it requires both key-based SSH and passwordless sudo, so if an unauthorized person gets their hands on an SSH key for a production server, then that's it - they have full root access to everything.
It is possible to use docker-machine with preexisting Docker installations safely - you need to add them without using any driver as described here. In this scenario, however, most docker-machine commands won't work, so the only benefit is easy generation of those environment variables for Docker CLI I mentioned before.
Docker Contexts are a new way of telling Docker CLI which Docker daemon it's supposed to communicate with. They essentially are meant to replace all those environment variables docker-machine generates.
Since Docker CLI only communicates with Docker daemon, there is no risk of accidentally deleting a VM or reprovisioning already configured physical machine. And since they are a part of Docker CLI, there is no need to install additional software.
On the other hand, Docker contexts cannot be used to create or provision new machines - one needs to either do that manually or use some other mechanism or tool (like Vagrant or some kind of template provided by the cloud provider).
So if you really need a tool that will let you easily create, provision and remove Docker-enabled machines, then use docker-machine. If, however, all you want is a list of all your Docker-enabled machines in one place and a way to easily set which one your local Docker CLI is supposed to talk to, Docker Contexts are a much safer alternative.
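As a rough illustration of the context workflow (Docker 19.03 or newer; the context name, user and host are placeholders):
docker context create node1 --docker "host=ssh://myuser@node1"
docker context use node1
docker ps                    # talks to the remote daemon selected by the context
docker context use default   # switch back to the local daemon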

docker-machine ssh into Vagrant VM failing

I have two Vagrant VMs running Ubuntu 16.04 on VirtualBox, with Docker installed. I want to create an overlay network for the Docker containers running on these two VMs, so I followed the tutorial here.
I have created the VMs and tried to run eval "$(docker-machine env mh-keystore)". However, it failed with the following error:
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "172.28.128.5:2376": dial tcp 172.28.128.5:2376: getsockopt: connection refused
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I then tried to regenerate the certificates as mentioned in the error. However, it fails to establish an SSH connection to the VM.
Regenerating TLS certificates
Waiting for SSH to be available...
Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
I can still vagrant ssh into the VMs. Can somebody help me use the Vagrant VMs with docker-machine?
I faced a similar "waiting for SSH to be available" issue, and it turned out to be caused by unsigned drivers in the network stack, installed by corporate proxy interception software called ProxyCap, which made VirtualBox error while setting up port forwarding from the local machine into the boot2docker VM. Check your VM logs and look for an error message during port forwarding setup. It should also list the unsigned drivers causing the errors; then you just need to uninstall the corresponding application.

accessing docker private registry

I have my private Docker registry running on a remote machine, secured by TLS and using HTTPS. Now I want to access it from my local docker-machine installation on Windows 7. I have copied the certificates to /etc/docker/certs.d/ in the docker-machine VM and restarted Docker.
After this I can successfully log in to my private registry using credentials, but when I try to push an image to it, I get a "certificate signed by unknown authority" error. After researching a little, I restarted the Docker daemon with docker -d --insecure-registry https://<registry-host>, and it worked.
My question is: if I have copied my certificates to the host machine, why do I need to start the daemon with the --insecure-registry option?
Even from another host that has the certificates, I can only access the registry after restarting Docker with --insecure-registry, which looks a little wrong to me.
Docker version: 1.8.3
Any pointers on this would be really helpful.
certificate signed by unknown authority
The error message gives it away - your certificates are self-signed (as in not trusted by a known CA).
See here.
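If you instead want to keep HTTPS and make the client trust your self-signed CA, the documented layout is roughly the following sketch (the registry host and port are the example values used below; the file must be named ca.crt):
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt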
If you would like to access your registry over HTTP, follow the instructions here.
Basically (do this on the machine from which you try to access the registry):
edit the file /etc/default/docker so that there is a line that reads DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (or add that to the existing DOCKER_OPTS)
restart your Docker daemon: on Ubuntu, this is usually service docker stop && service docker start
