accessing docker private registry - docker

I have my private docker registry running on a remote machine, which is secured by TLS and uses HTTPS. Now I want to access it from my local docker-machine installed on Windows 7. I have copied the certificates to "/etc/docker/certs.d/" in the docker-machine VM and restarted docker.
After this I can successfully log in to my private registry using credentials, but when I try to push an image to it, it gives me a certificate signed by unknown authority error. After researching a little, I restarted the Docker daemon with docker -d --insecure-registry https://<registry-host>, and it worked.
My question is: if I have copied my certificates to the host machine, why do I need to start the registry with the --insecure-registry option?
I can only access the registry from another host by installing the certificates and additionally restarting Docker with --insecure-registry, which looks a little wrong to me.
Docker version: 1.8.3
Any pointers on this would be really helpful.

certificate signed by unknown authority
The error message gives it away - your certificates are self-signed (as in not trusted by a known CA).

If you would like to access your registry with HTTP, follow the instructions in Docker's documentation on insecure registries.
Basically (do this on the machine from which you try to access the registry):
edit the file /etc/default/docker so that there is a line that reads: DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (or add that to existing DOCKER_OPTS)
restart your Docker daemon: on Ubuntu, this is usually service docker stop && service docker start
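Alternatively, to keep TLS and avoid --insecure-registry entirely: the Docker daemon expects the CA certificate in a directory named after the registry's host and port, not directly under /etc/docker/certs.d/. A minimal sketch, assuming the registry answers at myregistrydomain.com:5000 (replace with your own host and port):
mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
cp ca.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt   # CA that signed the registry cert
service docker restart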

Related

Docker remote access

I followed the Docker documentation in order to access a remote Docker daemon (installed on server B) from server A.
So, all certificates were generated on server B and copied to the Docker client machine, server A.
I had already tested remote access by running the following command:
docker --tlsverify -H=$MY_HOST:$MY_PORT \
    --tlscacert=$MY_PATH/ca.pem \
    --tlscert=$MY_PATH/client-cert.pem \
    --tlskey=$MY_PATH/client-key.pem
Everything looked good so far, and I successfully accessed the remote Docker daemon.
However, when I tried to access it by exporting the Docker environment variables
export DOCKER_HOST=tcp://MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
things didn't turn out as expected (tls: bad certificate):
The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get https://MY_HOST:MY_PORT/v1.40/containers/json?all=1: remote error: tls: bad certificate
Does anyone know how to fix this?
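One likely cause, offered here as an assumption since the question is left open above: when DOCKER_CERT_PATH is set, the Docker client looks for files named exactly ca.pem, cert.pem and key.pem in that directory, while the files above are named client-cert.pem and client-key.pem. A sketch of the fix, reusing the variable names from the question:
mkdir -p ~/certs
cp $MY_PATH/ca.pem ~/certs/ca.pem
cp $MY_PATH/client-cert.pem ~/certs/cert.pem   # client cert must be named cert.pem
cp $MY_PATH/client-key.pem ~/certs/key.pem     # client key must be named key.pem
export DOCKER_HOST=tcp://$MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
docker ps   # should now reach the remote daemon without "tls: bad certificate"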

Self-signed docker registry in CircleCI

I'm using CircleCI 2.0 and I have a private Docker registry with a self-signed certificate. I'm able to configure my local Docker just as documented; the problem is in CircleCI:
I'm using remote Docker, so when I try to log in to the registry it fails with Error response from daemon: Get https://docker-registry.mycompany.com/v2/: x509: certificate signed by unknown authority.
Is there a way to install the certificate in a remote Docker? I don't have access to the Docker host's shell. I don't want to use the machine executor type.
It's not possible. It could only be accomplished with CircleCI's Enterprise-level offering.

Connect to remote docker host / deploy a stack

I created a docker stack to deploy to a swarm. Now I'm a bit confused about what the proper way to deploy it to a real server looks like.
Of course I can
scp my docker-stack.yml file to a node of my swarm
ssh into the node
run docker stack deploy -c docker-stack.yml stackname
So there is the docker-machine tool, I thought.
I tried
docker-machine create -d none --url=tcp://<RemoteHostIp>:2375 node1
which only seems to work if you open the port without TLS.
I received the following:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "192.168.178.49:2375": dial tcp 192.168.178.49:2375: connectex: No connection could be made because the target machine actively refused it.
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I already tried to generate a certificate & copy it over to the host:
ssh-keygen -t rsa
ssh-copy-id myuser@node1
Then I ran
docker-machine --tls-ca-cert PathToMyCert --tls-client-cert PathToMyCert create -d none --url=tcp://192.168.178.49:2375 node1
With the following result:
$ docker-machine env node1
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host "node1:2375": There was an error reading certificate
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which might stop running containers.
I also tried it with the generic driver
$ docker-machine create -d generic --generic-ssh-port "22" --generic-ssh-user "MyRemoteUser" --generic-ip-address 192.168.178.49 node1
Running pre-create checks...
Creating machine...
(node1) No SSH key specified. Assuming an existing key at the default location.
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Error creating machine: Error detecting OS: OS type not recognized
How do I add the remote docker host with docker-machine properly with TLS?
Or is there a better way to deploy stacks to a server/into production?
I often read that you shouldn't expose the Docker port, but never once how to do it properly. And I can't believe that they don't provide a simple way to do this.
Update & Solution
I think both answers have their merits.
I found the official Deploy to Azure doc (it's the same for AWS).
The answer from @Tarun Lalwani pointed me in the right direction and it's almost the official solution. That's the reason I accepted his answer.
For me the following commands worked:
ssh -fNL localhost:2374:/var/run/docker.sock myuser@node1
Then you can run either:
docker -H localhost:2374 stack deploy -c stack-compose.yml stackname
or
export DOCKER_HOST=localhost:2374
docker stack deploy -c stack-compose.yml stackname
The answer from @BMitch is also valid and the security concern he mentioned shouldn't be ignored.
Update 2
The answer from @bretf is an awesome way to connect to your swarm, especially if you have more than one. It's still beta but works for swarms which are reachable from the internet and don't run on an ARM architecture.
I would prefer not opening/exposing the Docker port, even with TLS in mind. I would rather use an SSH tunnel and then do the deployment:
ssh -L 2375:127.0.0.1:2375 myuser@node1
And then use
export DOCKER_HOST=tcp://127.0.0.1:2375
docker stack deploy -c docker-stack.yml stackname
You don't need docker-machine for this. Docker has the detailed steps to configure TLS in their documentation; a condensed sketch follows the list below. The steps involve:
create a CA
create and sign a server certificate
configure the dockerd daemon to use this cert and validate client certs
create and sign a client certificate
copy the CA and client certificate files to your client machine's .docker folder
set some environment variables to use the client certificates and the remote docker host
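A condensed sketch of those steps, closely following Docker's "Protect the Docker daemon socket" guide; hostnames, IPs and validity periods below are placeholders:
# on the server: create a CA, then a signed server certificate
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=node1.example.com" -sha256 -new -key server-key.pem -out server.csr
echo "subjectAltName = DNS:node1.example.com,IP:192.168.178.49" > extfile.cnf
echo "extendedKeyUsage = serverAuth" >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out server-cert.pem -extfile extfile.cnf
# create and sign a client certificate
openssl genrsa -out key.pem 4096
openssl req -subj "/CN=client" -new -key key.pem -out client.csr
echo "extendedKeyUsage = clientAuth" > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem \
    -CAcreateserial -out cert.pem -extfile extfile-client.cnf
# start the daemon with TLS verification on port 2376
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem \
    --tlskey=server-key.pem -H=0.0.0.0:2376
# on the client: install ca.pem, cert.pem and key.pem, then deploy
mkdir -p ~/.docker && cp ca.pem cert.pem key.pem ~/.docker/
export DOCKER_HOST=tcp://node1.example.com:2376 DOCKER_TLS_VERIFY=1
docker stack deploy -c docker-stack.yml stackname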
I wouldn't use the ssh tunnel method on a multi-user environment since any user with access to 127.0.0.1 would have root access to the remote docker host without a password or any auditing.
If you're using Docker for Windows or Docker for Mac, Docker Cloud has a more automated way to set up your TLS certs and get you securely connected to a remote host for free. Under "Swarms" there's "Bring your own Swarm", which runs an agent container on your Swarm managers to let you easily use your local docker CLI without manual cert setup. It still requires the Swarm port to be open to the internet, but this setup ensures it has TLS mutual auth enabled.
Here's a YouTube video showing how to set it up. It also supports group permissions for adding/removing other admins from remotely accessing the Swarm.

docker is using the v1 registry api when it should use v2

I'm trying to use a self-hosted Docker registry v2. Pushing a Docker image does work locally on the host server (CoreOS) running the registry v2 container. However, on a separate machine (also CoreOS, same version), when I try to push to the registry, it tries to push to v1, giving this error:
Error response from daemon: v1 ping attempt failed with error: Get
https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout.
If this private registry supports only HTTP or HTTPS with an unknown CA
certificate, please add `--insecure-registry 172.22.22.11:5000` to the
daemon's arguments. In the case of HTTPS, if you have access to the registry's
CA certificate, no need for the flag; simply place the CA certificate at
/etc/docker/certs.d/172.22.22.11:5000/ca.crt
Both machines' docker executable is v1.6.2. Why is it that one works and pushes to v2, but the other uses v1?
Here's the repo for the registry: https://github.com/docker/distribution
You need to secure the registry before you can access it remotely, or explicitly allow all your Docker daemons to access insecure registries.
To secure the registry the easiest choice is to buy an SSL certificate for your server, but you can also self-sign the certificate and distribute to clients.
To allow insecure access add the argument --insecure-registry myregistrydomain.com:5000 to all the daemons who need to access the registry. (Obviously replace the domain name and port with yours).
The full instructions (including an example of your error message) are available at: https://github.com/docker/distribution/blob/master/docs/deploying.md
Regarding the error message, I guess Docker tries to use v2 first, fails because of the security issue, then tries v1 and fails again.
This may be due to an env variable being set. I had a very similar issue when using a system with this env variable set.
export DOCKER_HOST="tcp://hostname:5000"
Running docker login http://hostname:5000 did not work and gave the same v1 behaviour. I did not expect the env variable to take precedence over an argument passed directly to the command.
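A quick way to check for and clear such a leftover variable, sketched here assuming a bash shell and the hostname from above:
env | grep DOCKER          # shows DOCKER_HOST and friends if set
unset DOCKER_HOST          # stop the client talking to the wrong endpoint
docker login hostname:5000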
Go to /etc/docker/daemon.json. If the file is not present, create it and add the following:
{
  "insecure-registries": ["hosted-registry-IP:port"]
}
After that, restart the Docker service with
service docker restart
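To confirm the daemon picked the setting up, recent Docker versions list it in docker info:
docker info    # look for your registry under the "Insecure Registries:" section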

docker client can't read from both "docker private registry" and "online docker registry" if custom ssl certificates are used

Docker version 1.2.0, build 2a2f26c/1.2.0,
docker registry 0.8.1
I set up a private Docker registry on CentOS 7 and created my own custom SSL cert. When I tried to access my Docker registry using HTTPS I got x509: certificate signed by unknown authority. I found a solution for this by placing the cert file under /etc/pki/tls/certs, then running
update-ca-trust
service docker restart
Now it reads my certificate; I can log in, pull and push to my private registry at
https://localdockerregistry
But when I try to read from the online Docker registry (https://index.docker.io/v1/search?q=centos), like
docker search centos
I get
Error response from daemon: Get https://index.docker.io/v1/search?q=centos: x509: certificate signed by unknown authority
I exported the docker.io cert from the Firefox browser and put it under /etc/pki/tls/certs, then ran update-ca-trust and service docker restart, but I get the same error. It looks like the Docker client can't decide which cert to use for which repository.
Any ideas how we can fix x509: certificate signed by unknown authority for the online Docker registry while using our own private Docker registry?
The correct place to put the certificate is on the machine running your docker daemon (not the client) in this location: /etc/docker/certs.d/my.registry.com:5000/ca.crt where my.registry.com:5000 is the address of your private registry and :5000 is the port where your registry is reachable. If the path /etc/docker/certs.d/ does not exist, you should create it -- that is where the Docker daemon will look by default.
This way you can have a private certificate per private registry and not affect the public registry.
This is per the docs on http://docs.docker.com/reference/api/registry_api/
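To see which certificate chain a registry actually presents, an openssl probe helps; the host and port here are the assumed ones from the answer above:
openssl s_client -connect my.registry.com:5000 -showcerts </dev/null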
I had the problem with a Docker registry running in a container behind an Nginx proxy with a StartSSL certificate.
In that case you have to append the intermediate CA certs to the Nginx SSL certificate, see https://stackoverflow.com/a/25006442/1130611
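A sketch of that fix, with file names assumed for illustration. Nginx serves whatever chain is in ssl_certificate, so the intermediate certs get concatenated onto the server certificate:
cat registry.crt intermediate.ca.pem > registry-chained.crt
# then point the nginx server block at the combined file:
#   ssl_certificate     /etc/nginx/ssl/registry-chained.crt;
#   ssl_certificate_key /etc/nginx/ssl/registry.key;
service nginx restart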
