I'm having an issue connecting to a private Docker registry.
Setup:
Windows 10 (using Hyper-V)
Docker for Windows version: 18.03.0-ce
Steps I did:
Placed client certificates in C:/ProgramData/Docker/certs.d/<private_registry_url>/
Placed client certificates in C:/Users//.docker/certs.d/<private_registry_url>/
Imported the root certificate into the Windows "Trusted Root Certification Authorities" store (otherwise I get "x509: certificate signed by unknown authority")
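For reference, a rough sketch of the layout these steps aim for (the file names ca.crt, client.cert, and client.key are the conventional ones Docker looks for; <private_registry_url> stays a placeholder):
    C:/ProgramData/Docker/certs.d/<private_registry_url>/
        ca.crt         # CA that signed the registry's server certificate
        client.cert    # client certificate presented to the registry
        client.key     # private key for that client certificate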
Result: executing docker login <private_registry_url> and entering the correct credentials gives me:
Error response from daemon: Get https://<private_registry_url>/v2/: remote error: tls: bad certificate
which means that Docker is not sending the correct client certificates. (When I execute curl --cert <file.cert> --key <file.key> https://<private_registry_url>/v2/ with the client certificates, I'm able to connect.)
When I connect to the running Hyper-V machine, the /etc/docker/certs.d/ folder is missing. If I manually create /etc/docker/certs.d/<private_registry_url> and put my client certificates inside (see the sketch below), everything starts to work! But after restarting Docker for Windows, the certs.d folder in the Hyper-V machine is gone again and I'm no longer able to connect to the private registry.
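The manual workaround, roughly, as run inside the Hyper-V VM (the source paths for the certificates are placeholders):
    mkdir -p /etc/docker/certs.d/<private_registry_url>
    cp /path/to/client.cert /path/to/client.key /etc/docker/certs.d/<private_registry_url>/
    # works until Docker for Windows restarts and recreates the VM without certs.d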
Any ideas what I'm doing wrong?
I followed the Docker documentation in order to access a remote Docker daemon (installed on server B) from server A.
So all certificates were generated on server B and copied to the Docker client machine, server A.
I had already tested the remote access by running the following command:
docker --tlsverify -H=$MY_HOST:$MY_PORT \
  --tlscacert=$MY_PATH/ca.pem \
  --tlscert=$MY_PATH/client-cert.pem \
  --tlskey=$MY_PATH/client-key.pem
Everything looked good so far, and I successfully accessed the remote Docker daemon.
However, when I tried to access it by exporting the Docker environment variables
export DOCKER_HOST=tcp://MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
things didn't turn out as expected (tls: bad certificate):
The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get https://MY_HOST:MY_PORT/v1.40/containers/json?all=1: remote error: tls: bad certificate
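For comparison, a minimal sketch of the environment-variable setup. The Docker CLI expects the files inside DOCKER_CERT_PATH to be named exactly ca.pem, cert.pem, and key.pem, so the copy below (source names taken from the command above) renames them accordingly:
    mkdir -p ~/certs
    cp $MY_PATH/ca.pem ~/certs/ca.pem
    cp $MY_PATH/client-cert.pem ~/certs/cert.pem
    cp $MY_PATH/client-key.pem ~/certs/key.pem
    export DOCKER_HOST=tcp://$MY_HOST:$MY_PORT
    export DOCKER_TLS_VERIFY=1
    export DOCKER_CERT_PATH=~/certs
    docker ps    # should now behave like the explicit --tlsverify command above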
Does anyone know how to fix this?
I have a website running on a DigitalOcean droplet, which I want to continuously deploy via Docker and a TeamCity build server running on my home server. I want to enable HTTPS on my Docker repo with a self-signed certificate and without a domain name.
Let's say my home IP address is 10.10.10.10 and the Docker repo is running on port 5000.
I followed the steps here; however, Docker on my website complained that it cannot connect to the Docker repo on my home server because the certificate doesn't specify an IP in the SAN extension.
Okay. So I created a new certificate without the CN field and with only an IP in the SAN, and now my cert config on my website looks like...
/etc/docker/certs.d/10.10.10.10:5000/ca.crt
I also added the cert to my system's general trusted certs (Ubuntu 16.04, by the way).
Then I try to pull the image from my home server to my website...
docker pull 10.10.10.10:5000/personal_site:latest
However, I'm getting this error.
Error response from daemon: Get https://10.10.10.10:5000/v1/_ping: x509:
certificate signed by unknown authority (possibly because of "x509:
invalid signature: parent certificate cannot sign this kind of
certificate" while trying to verify candidate authority certificate "serial:xxx")
I thought that by adding my cert under /etc/docker/... Docker would accept a self-signed cert. Does anyone have any advice here?
You can't use a self-signed certificate for this; it needs to be a CA certificate. Follow the same steps required to create a certificate for a Docker host and store your CA in /etc/docker/certs.d/.... Alternatively, you can define 10.10.10.10 as an insecure registry as part of the Docker daemon startup (dockerd --insecure-registry 10.10.10.10:5000 ...) and Docker should ignore any certificate issues.
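A rough sketch of that first approach, assuming OpenSSL 1.1.1 or newer for the -addext flag (file names are made up; the IP and port come from the question):
    # On the home server: generate a self-signed certificate with the registry's IP
    # in the SAN, following the same recipe the registry TLS docs use
    openssl req -newkey rsa:4096 -nodes -sha256 \
      -keyout registry.key -x509 -days 365 -out registry.crt \
      -subj "/CN=10.10.10.10" \
      -addext "subjectAltName = IP:10.10.10.10"
    # (the registry container itself serves registry.crt/registry.key via its REGISTRY_HTTP_TLS_* settings)
    # On the droplet: trust that certificate for this registry only
    sudo mkdir -p /etc/docker/certs.d/10.10.10.10:5000
    sudo cp registry.crt /etc/docker/certs.d/10.10.10.10:5000/ca.crt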
I just did the same thing with these instructions: "create private repo with password without domain and ssl". That will require you to add the certificate on the client and a domain entry in the hosts file (if you'd like to have a domain of your own without registering a new one); see the sketch below.
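Roughly what that looks like, with made-up names (myregistry.home is an arbitrary domain; the htpasswd and TLS settings are the standard ones for the registry:2 image, and the certificate files are assumed to have been generated as in the previous sketch):
    # On the client: point the made-up domain at the home server
    echo "10.10.10.10 myregistry.home" | sudo tee -a /etc/hosts
    # On the registry host: create a credentials file and run the registry with basic auth over TLS
    mkdir -p auth certs
    docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > auth/htpasswd
    docker run -d --name registry -p 5000:5000 \
      -v "$(pwd)/auth:/auth" \
      -v "$(pwd)/certs:/certs" \
      -e REGISTRY_AUTH=htpasswd \
      -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
      -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
      -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.crt \
      -e REGISTRY_HTTP_TLS_KEY=/certs/registry.key \
      registry:2
    # On the client, also trust the cert at /etc/docker/certs.d/myregistry.home:5000/ca.crt, then:
    docker login myregistry.home:5000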
I have installed Docker on a Windows 7 machine. If I connect to the Internet outside my company network, everything works fine, but when I connect to the Internet from my company network and try to pull an image from Docker Hub, I just get: "docker: Network timed out while trying to connect to .... You may want to check your internet connection or if you are behind a proxy..".
I edited the /var/lib/boot2docker/profile file by adding the following two lines:
export "HTTP_PROXY=http://me:mypassword#proxyhost:proxyport"
export "HTTPS_PROXY=http://me:mypassword#proxyhost:proxyport"
rebooted the Docker machine, tried to pull an image, and got the following error:
Error while pulling image: Get https://index.docker.io/v1/repositories/library/ubuntu/images: x509: certificate signed by unknown authority
edit: CA certification details
The problem is that your corporate proxy is using its own SSL certificate, which Docker doesn't trust. What you're going to have to do is download a copy of that CA certificate and trust it on any machine you want to use behind the firewall. Check this answer for how to trust a certificate:
Docker behind proxy that changes ssl certificate
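For example, on a Debian/Ubuntu Docker host that would look roughly like this (the CA file name is made up; boot2docker/Tiny Core keeps its persistent certificates elsewhere, so the paths need adapting there):
    # add the corporate proxy's CA certificate to the system trust store
    sudo cp corporate-proxy-ca.crt /usr/local/share/ca-certificates/corporate-proxy-ca.crt
    sudo update-ca-certificates
    # restart the daemon so it picks up the updated trust store
    sudo service docker restart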
I have my private Docker registry running on a remote machine, secured by TLS and using HTTPS. Now I want to access it from my local docker-machine installation on Windows 7. I have copied the certificates to "/etc/docker/certs.d/" in the docker-machine VM and restarted Docker.
After this I can successfully log in to my private registry using my credentials, but when I try to push an image to it, I get a "certificate signed by unknown authority" error. After researching a little, I restarted the Docker daemon with docker -d --insecure-registry https://<registry-host>, and it worked.
My question is: if I have copied my certificates to the host machine, why do I need to start the daemon with the --insecure-registry option?
I can only access the registry from another host if I both have the certificates in place and restart Docker with --insecure-registry, which looks a little wrong to me.
Docker version: 1.8.3
Any pointers on this would be really helpful.
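For what it's worth, a quick way to check which CA actually signed the certificate the registry serves (host and port are placeholders):
    # print the issuer and subject of the registry's certificate
    openssl s_client -connect <registry-host>:443 -showcerts </dev/null 2>/dev/null \
      | openssl x509 -noout -issuer -subject
    # the issuer should match the ca.crt copied into /etc/docker/certs.d/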
certificate signed by unknown authority
The error message gives it away: your certificates are self-signed (as in, not trusted by a known CA).
See here.
If you would like to access your registry with HTTP, follow the instructions here
Basically (do this on the machine from which you try to access the registry):
edit the file /etc/default/docker so that there is a line that reads: DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (or add that to existing DOCKER_OPTS)
restart your Docker daemon: on Ubuntu, this is usually service docker stop && service docker start
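On newer Docker installations the same setting usually lives in /etc/docker/daemon.json instead of /etc/default/docker; a rough sketch (merge with any existing daemon.json rather than overwriting it):
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "insecure-registries": ["myregistrydomain.com:5000"]
    }
    EOF
    sudo systemctl restart docker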
Docker version 1.2.0, build 2a2f26c/1.2.0,
docker registry 0.8.1
I set up a private Docker registry on CentOS 7 and created my own custom SSL cert. When I try to access my Docker registry using HTTPS, I get "x509: certificate signed by unknown authority". I found a solution for this by placing the cert file under "/etc/pki/tls/certs" and then running
"update-ca-trust"
"service docker restart"
Now it reads my certificate; I can log in, pull, and push to my private Docker registry at
"https://localdockerregistry".
Now when I try to query the online Docker registry (https://index.docker.io/v1/search?q=centos), for example with
"docker search centos"
I get
"Error response from daemon: Get https://index.docker.io/v1/search?q=centos: x509: certificate signed by unknown authority"
I exported the docker.io cert from the Firefox browser, put it under "/etc/pki/tls/certs", and ran "update-ca-trust" and "service docker restart", but I get the same error. It looks like the Docker client can't decide which cert to use for which registry.
Any ideas how to fix "x509: certificate signed by unknown authority" for the online Docker registry while using your own private Docker registry?
The correct place to put the certificate is on the machine running your docker daemon (not the client) in this location: /etc/docker/certs.d/my.registry.com:5000/ca.crt where my.registry.com:5000 is the address of your private registry and :5000 is the port where your registry is reachable. If the path /etc/docker/certs.d/ does not exist, you should create it -- that is where the Docker daemon will look by default.
This way you can have a private certificate per private registry and not affect the public registry.
This is per the docs on http://docs.docker.com/reference/api/registry_api/
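Roughly, for the registry from the question (the hostname comes from the question; the source file name is made up):
    # scope the private CA to the private registry only; Docker Hub keeps using the system trust store
    sudo mkdir -p /etc/docker/certs.d/localdockerregistry
    sudo cp /path/to/my-registry-ca.crt /etc/docker/certs.d/localdockerregistry/ca.crt
    sudo service docker restart
    # if images are referenced with an explicit port (e.g. localdockerregistry:5000),
    # the directory name must include that port as well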
I had this problem with a Docker registry running in a container behind an Nginx proxy with a StartSSL certificate.
In that case you have to append the intermediate CA certs to the Nginx SSL certificate; see https://stackoverflow.com/a/25006442/1130611 and the sketch below.
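A rough sketch with made-up file names (the intermediate bundle is whatever your CA, StartSSL in this case, ships for your certificate class):
    # append the intermediate CA chain to the server certificate...
    cat registry.example.com.crt intermediate-ca.pem > registry-bundle.crt
    # ...and point Nginx at the bundled file instead, inside the server block:
    #   ssl_certificate     /etc/nginx/ssl/registry-bundle.crt;
    #   ssl_certificate_key /etc/nginx/ssl/registry.example.com.key;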