Share SSH Key to Docker Machine

I have an existing VM with Docker installed (CoreOS), and I can connect to it with the following PowerShell commands.
docker-machine create --driver generic --generic-ip-address=$IP --generic-ssh-key=$keyPath --generic-ssh-user=user vm
docker-machine env vm | Invoke-Expression # Set Environment Variables
Everything worked fine. I was able to build and run containers.
Then I told my build server to run the PowerShell script, and it ran successfully. But then I lost the connection on my dev machine and got the following error:
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "$IP": x509: certificate signed by unknown authority
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
When I remove the machine with docker-machine rm vm and recreate it, everything works again.
How can I share an SSH key to a remote docker host without recreating the docker-machine?

Related

Docker-Machine creation failing

I am trying to create a docker-machine on my Windows 10 Enterprise machine.
I am creating it with the hyperv driver, but the machine creation fails with the error Error creating machine: Error detecting OS: OS type not recognized
>>docker-machine create --driver hyperv loc-machine1
I can see loc-machine1 under docker-machine ls.
But when I try to communicate from the local client with docker-machine env loc-machine1, it throws an error:
Error checking TLS connection: Error checking and/or regenerating the certs:
There was an error validating certificates for host "
[fe80::215:5dff:fe17:100c]:2376": dial tcp [fe80::215:5dff:fe17:100c]:2376:
connectex: A socket operation was attempted to an unreachable network.
You can attempt to regenerate them using 'docker-machine regenerate-certs
[name]'.
Be advised that this will trigger a Docker daemon restart which might stop
running containers.
I tried regenerate-certs but it's not working.
Docker version: 17.03.1-ce
As described in the Docker troubleshooting documentation, you can try:
Regenerating the certificates and then restarting the Docker host,
or
Creating a new docker-machine.
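Concretely, the two suggestions can be run as follows (a sketch assuming the machine name loc-machine1 from the question; -f skips the interactive confirmation prompts):

```shell
# Option 1: regenerate the TLS certs (this restarts the Docker daemon
# on the machine, which may stop running containers):
docker-machine regenerate-certs -f loc-machine1
docker-machine env loc-machine1

# Option 2: start over with a fresh machine:
docker-machine rm -f loc-machine1
docker-machine create --driver hyperv loc-machine1
```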

Connect with ssh to docker daemon on Windows

I installed Docker Desktop for Windows on Windows 10 following https://docs.docker.com/docker-for-windows/install/#install-docker-for-windows. It does not use VirtualBox and a default VM to host Docker.
I am able to run containers, but how do I connect to the Docker host with SSH?
docker-machine ls does not show my docker host.
I tried to connect to docker@10.0.75.1, but it requires a password, and tcuser (the password used for the boot2docker VM) does not match:
ssh docker@10.0.75.1 Could not create directory '/home/stan/.ssh'. The
authenticity of host '10.0.75.1 (10.0.75.1)' can't be established. RSA
key fingerprint is .... Are you sure you want to continue connecting
(yes/no)? yes Failed to add the host to the list of known hosts
(/home/stan/.ssh/known_hosts). docker@10.0.75.1's password: Write
failed: Connection reset by peer
Run this:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
Just run this from your CLI and it'll drop you into a container with
full permissions on the Moby VM. It only works for the Moby Linux VM (it doesn't
work for Windows Containers). Note this also works on Docker for Mac.
Reference:
https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/
As far as I know you can't connect to the docker VM using SSH and you cannot connect to the console/terminal using Hyper-V Manager either. https://forums.docker.com/t/how-can-i-ssh-into-the-betas-mobylinuxvm/10991/17

Cannot connect to remote docker host via SSH forwarded domain socket

I want to be able to use docker-compose with a remote daemon. I created a local forward to the remote daemon's socket like so:
#!/bin/sh
export SOCKET_DIR=$HOME/.remote-sockets
mkdir -p $SOCKET_DIR
echo "Creating SSH Docker Socket Tunnel"
socat "UNIX-LISTEN:$SOCKET_DIR/docker.sock,reuseaddr,fork" \
EXEC:'ssh freebsd@107.170.216.79 socat STDIO UNIX-CONNECT\:/var/run/docker.sock'
With that script running, I export the following environment variables:
export DOCKER_API_VERSION=1.19
export COMPOSE_API_VERSION=1.19
export DOCKER_HOST=unix://$HOME/.remote-sockets/docker.sock
With those variables set, I can verify running docker ps shows me the remote containers and not my local daemon's containers. Furthermore, docker-compose ps also seems to connect to the remote daemon, as it returns an empty list. If I shut down the SSH tunnel, it fails saying it can't connect to the docker daemon.
The trouble is when I try to run docker-compose up. I've also tried docker-compose -H unix://$HOME/.remote-sockets/docker.sock up. Both commands give me the following:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
Local Versions:
Docker version 1.11.0, build 4dc5990
docker-compose version 1.8.0, build 94f7016
(Gentoo Linux)
Remote Versions:
Docker version 1.7.0-dev, build 582db78
(FreeBSD 11.0-RELEASE-p1)
Why won't docker-compose up work with a different socket when the other commands seem to communicate with it fine?
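As an aside, the socat relay can be replaced entirely if the local OpenSSH is 6.7 or newer, which can forward UNIX domain sockets directly with -L. This is only a sketch of an alternative setup (using the host and paths from the question), not a diagnosis of the docker-compose failure:

```shell
# Forward the remote Docker socket to a local socket over plain ssh.
SOCKET_DIR=$HOME/.remote-sockets
mkdir -p "$SOCKET_DIR"
rm -f "$SOCKET_DIR/docker.sock"   # ssh refuses to bind over an existing socket

# -n: no stdin, -N: no remote command, -T: no tty; run in the background.
ssh -nNT \
  -L "$SOCKET_DIR/docker.sock:/var/run/docker.sock" \
  freebsd@107.170.216.79 &

export DOCKER_HOST=unix://$SOCKET_DIR/docker.sock
docker ps           # should list the remote containers
docker-compose ps
```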

Docker Swarm and self-signed Docker Registry

Does Docker Swarm support usage of Docker Registry with self-signed certificate?
I've created my cluster based on step from official Docker documentation, it uses swarm master/nodes running inside containers.
It works well, but as soon as I try to login to my Docker Registry I'm getting error message:
$ docker -H :4000 login https://...:443
...
Error response from daemon: Get https://.../v1/users/: x509: certificate signed by unknown authority
Is there an additional option which needs to be set, like --insecure-registry? Or do I need to somehow update Docker Swarm container?
You need to add your self-signed cert or personal CA to the list of trusted certificates on the host. For some reason, Docker doesn't use the certificates on the daemon for this authentication. Here are the commands for a Debian host:
sudo mkdir -p /usr/local/share/ca-certificates
sudo cp ca.pem /usr/local/share/ca-certificates/ca-local.crt
sudo update-ca-certificates
sudo systemctl restart docker
The docker restart at the end is required for the daemon to reload the OS certificates.
As luka5z noted in the latest documentation, you can also add the certs directly to each Docker engine by copying the cert to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt. This avoids trusting the self-signed CA on the entire OS.
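The per-registry variant can be sketched as follows (myregistrydomain.com:5000 is the placeholder from the docs; substitute your registry's host:port). Per the registry documentation, certs.d changes do not require a daemon restart:

```shell
# Trust the CA only for this one registry, not system-wide:
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.pem /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt

# Retry the login; no restart of the Docker daemon should be needed:
docker login myregistrydomain.com:5000
```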
is there a way I could update it with required certificates?
Docker 17.06 will bring the command docker swarm ca (PR 48).
Meaning a docker swarm ca --rotate will be enough.
root@ubuntu:~# docker swarm ca --help
Usage: docker swarm ca [OPTIONS]
Manage root CA
Options:
--ca-cert pem-file Path to the PEM-formatted root CA certificate to use for the new cluster
--ca-key pem-file Path to the PEM-formatted root CA key to use for the new cluster
--cert-expiry duration Validity period for node certificates (ns|us|ms|s|m|h) (default 2160h0m0s)
-d, --detach Exit immediately instead of waiting for the root rotation to converge
--external-ca external-ca Specifications of one or more certificate signing endpoints
--help Print usage
-q, --quiet Suppress progress output
--rotate Rotate the swarm CA - if no certificate or key are provided, new ones will be generated
Here is a demo.
I also encountered your problem.
I was not able to identify the root cause of this, or what sets this limitation.
But I managed a workaround:
If the registry is insecure, make sure you start each Docker daemon accordingly on each host.
You can find info on how to change daemon options here: https://docs.docker.com/engine/admin/systemd/
E.g. from my config: --insecure-registry <private registry>. After that:
systemctl daemon-reload
systemctl restart docker
docker login <private registry>
on each Docker host and pull the needed images.
After that you have all the images, and it will not try to pull them anymore.
I know this is not the best solution :(
PS: I also had to add these parameters to each Docker daemon:
--cluster-advertise=<host:ip> --cluster-store=consul://<consul ip:consul port>
Without these I could not run containers on different hosts; they all ran on one randomly chosen host.

Connect docker container

I've been searching on Google but I cannot find an answer.
Is it possible to connect to a VirtualBox docker container that I just started up? I have the IP of the virtual machine, but if I try to connect by SSH it of course asks me for a password.
Regards.
See https://github.com/BITPlan/docker-stackoverflowanswers/tree/master/33232371 to repeat the steps.
On my Mac OS X machine
docker-machine env default
shows
export DOCKER_HOST="tcp://192.168.99.100:2376"
So I added an entry
192.168.99.100 docker
to my /etc/hosts
so that ping docker works.
As a Dockerfile i am using:
# Ubuntu image
FROM ubuntu:14.04
which I am building with
docker build -t bitplan/sshtest:0.0.1 .
and testing with
docker run -it bitplan/sshtest:0.0.1 /bin/bash
Now ssh docker will respond with:
The authenticity of host 'docker (192.168.99.100)' can't be established.
ECDSA key fingerprint is SHA256:osRuE6B8bCIGiL18uBBrtySH5+iGPkiHHiq5PZNfDmc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'docker,192.168.99.100' (ECDSA) to the list of known hosts.
wf@docker's password:
But here you are connecting to the docker machine, not your image!
SSH listens on port 22. You need to redirect it to another port and configure your image to support SSH for root or a valid user.
See e.g. https://docs.docker.com/examples/running_ssh_service/
Are you trying to connect to a running container or trying to connect to the virtualbox image running the docker daemon?
If the first: you cannot just SSH into a running container unless that container is running an SSH daemon. The easiest way to get a shell in a running container is with docker exec -ti <container name/id> /bin/sh. Do a docker ps to see running containers.
If the second: if your host was created with docker-machine, then you can ssh into it with docker-machine ssh <machine name>. You can see all of your running machines with docker-machine ls.
If this doesn't help, can you clarify your question a little and provide details on how you're creating your image and starting the container?
You can use SSH keys for passwordless access.
Here's some intro
https://wiki.archlinux.org/index.php/SSH_keys
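The key setup from that intro boils down to a few commands. A sketch, assuming the user docker and the VM IP 192.168.99.100 mentioned earlier in this thread (yours may differ):

```shell
# Generate a key pair if you don't already have one:
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""

# Install the public key on the VM; you enter the password one last time:
ssh-copy-id docker@192.168.99.100

# Subsequent logins should no longer prompt for a password:
ssh docker@192.168.99.100
</imports>
```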
