I followed the Docker documentation in order to access a remote Docker daemon (installed on server B) from server A.
So all certificates were generated on server B and copied to the Docker client machine, server A.
I had already tested the remote access by running the following command:
docker --tlsverify -H=$MY_HOST:$MY_PORT \
    --tlscacert=$MY_PATH/ca.pem \
    --tlscert=$MY_PATH/client-cert.pem \
    --tlskey=$MY_PATH/client-key.pem version
Everything looked good so far, and I successfully accessed the remote Docker daemon.
However, when I tried to access it by exporting the Docker environment variables
export DOCKER_HOST=tcp://$MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
things didn't turn out as expected (tls: bad certificate):
The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get https://MY_HOST:MY_PORT/v1.40/containers/json?all=1: remote error: tls: bad certificate
Does anyone know how to fix this?
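One thing worth checking (my assumption, not something stated in the post): when DOCKER_CERT_PATH is set, the Docker CLI looks for files named exactly ca.pem, cert.pem, and key.pem in that directory, so certificates kept under their original names (client-cert.pem, client-key.pem) would not be picked up. A minimal sketch of laying them out under the expected names:
# The CLI expects these exact file names under DOCKER_CERT_PATH
mkdir -p ~/certs
cp $MY_PATH/ca.pem ~/certs/ca.pem
cp $MY_PATH/client-cert.pem ~/certs/cert.pem
cp $MY_PATH/client-key.pem ~/certs/key.pem
export DOCKER_HOST=tcp://$MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
docker ps    # should now present the client certificate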
Related
After following the installation instructions provided on the official page to install Docker, I ran into the following error when I tried to run a container:
docker: error pulling image configuration: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/fc/fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e/data?verify=1549989486-DEdrDDaoZskZzHXF84y4VY%2FxRpw%3D: x509: certificate signed by unknown authority
I am not finding information about solving this issue. Please note I am behind a corporate proxy.
I have set the proxy in the file
/etc/systemd/system/docker.service.d/http-proxy.conf
with the following content:
[Service]
Environment="HTTP_PROXY=http://proxyurl:8080/" "HTTPS_PROXY=http://proxyurl:8080/"
First, are you sure your HTTPS_PROXY is http://proxyurl:8080/? Check that the port is configured properly; it is more likely to be 443.
Second, your proxy can work in man-in-the-middle mode, which means it establishes two separate connections: one with you and one with the targeted server, deciphering and re-enciphering all traffic. In this case it signs the data it sends to you with its own SSL certificate, and you have to obtain that certificate and add it to your system's trusted ones.
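To make that concrete, a rough sketch of trusting the proxy's CA on a Debian/Ubuntu host (the file name corporate-proxy-ca.crt is a placeholder for whatever your proxy team provides):
# Add the proxy's CA certificate to the system trust store
sudo cp corporate-proxy-ca.crt /usr/local/share/ca-certificates/
sudo update-ca-certificates
# Restart the daemon so it picks up the new trust store
sudo systemctl restart docker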
It seems that the image you are trying to pull is stored within a private registry. Have you logged in to that registry?
Meanwhile, try to pull the hello-world image to check that the proxy is not blocking outgoing connections from your Docker host.
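One more general point, in case it applies here: systemd only re-reads drop-in files like http-proxy.conf after a reload, so the pull test is worth running after:
# Reload units and restart the daemon after editing the drop-in
sudo systemctl daemon-reload
sudo systemctl restart docker
# Simple connectivity check through the proxy
docker pull hello-world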
I'm having an issue connecting to a private Docker registry.
Setup:
Windows 10 (using Hyper-V)
Docker for Windows version: 18.03.0-ce
Steps I did:
Placed client certificates in C:/ProgramData/Docker/certs.d/<private_registry_url>/
Placed client certificates in C:/Users/<username>/.docker/certs.d/<private_registry_url>/
Imported the root certificate into the Windows "Trusted Root Certification Authorities" store (otherwise I get "X509: certificate signed by unknown authority")
Result: Executing docker login <private_registry_url> and entering the correct credentials gives me:
Error response from daemon: Get https://<private_registry_url>/v2/: remote error: tls: bad certificate
which means that Docker is not sending the correct client certificates. (When I execute curl --cert <file.cert> --key <file.key> https://<private_registry_url>/v2/ with the client certificates, I am able to connect.)
When I connect to the running Hyper-V machine, the /etc/docker/certs.d/ folder is missing. If I manually create /etc/docker/certs.d/<private_registry_url> and put my client certificates inside, everything starts to work! But after restarting Docker for Windows, the certs.d folder in the Hyper-V machine is missing again and I'm not able to connect to the private registry.
Any ideas on what I am doing wrong?
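Not a confirmed fix, but the manual workaround described above can at least be scripted. This sketch assumes the usual trick of reaching the VM's filesystem through a bind mount (the registry URL is a placeholder, and C: drive sharing must be enabled in Docker for Windows):
# Re-create the certs.d folder inside the Hyper-V VM after each restart
docker run --rm \
  -v /etc/docker/certs.d:/vm-certs \
  -v C:/ProgramData/Docker/certs.d:/win-certs \
  alpine sh -c "cp -r /win-certs/. /vm-certs/"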
From reading the Docker Remote API documentation:
Docker Daemon over SSL
Ruby Docker-API
It appears that the correct way to connect to remote Docker machines is to let the application know the location of the certificates and connect using SSL/TLS with those certificates.
Is there a way to avoid having a user hand over the certificate, key, and CA? Those certificates would give whoever holds them root access to the Docker machine.
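For context, the certificate-based connection described above boils down to pointing the client at a certificate directory via the standard Docker variables (which, as far as I know, the Ruby Docker-API gem also reads). A minimal sketch with placeholder paths:
# Standard Docker TLS environment; the cert dir must hold ca.pem, cert.pem, key.pem
export DOCKER_HOST=tcp://remote-machine:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/path/to/certs
docker version    # or start the application so it inherits these variables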
I have my private docker registry running on a remote machine, which is secured by TLS and uses HTTPS. Now I want to access it from my local docker-machine installed on Windows 7. I have copied the certificates to "/etc/docker/certs.d/" in the docker-machine VM and restarted docker.
After this I can successfully log in to my private registry using credentials, but when I try to push an image to it, it gives me a certificate signed by unknown authority error. After researching a little, I restarted the Docker daemon with docker -d --insecure-registry https://<registry-host>, and it worked.
My question is: if I have copied my certificates to the host machine, why do I need to start the Docker daemon with the --insecure-registry option?
From another host I can only access the registry with certificates and by restarting Docker with --insecure-registry, which looks a little wrong to me.
Docker version: 1.8.3
Any pointers on this would be really helpful.
certificate signed by unknown authority
The error message gives it away - your certificates are self-signed (as in not trusted by a known CA).
If you would like to access your registry with HTTP, follow the instructions here
Basically (do this on the machine from which you try to access the registry):
edit the file /etc/default/docker so that there is a line that reads: DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (or add that to existing DOCKER_OPTS)
restart your Docker daemon: on Ubuntu, this is usually service docker stop && service docker start
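If you would rather keep HTTPS than fall back to an insecure registry, the alternative hinted at elsewhere in this thread is to trust the registry's CA per host instead of disabling verification (host name and port below are placeholders):
# Trust the registry's CA on the client; no --insecure-registry needed
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
sudo service docker restart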
I'm trying to use a self-hosted Docker registry v2. I should be able to push a Docker image, and this does work locally on the host server (CoreOS) running the registry v2 container. However, on a separate machine (also CoreOS, same version), when I try to push to the registry, it tries to push to v1, giving this error:
Error response from daemon: v1 ping attempt failed with error: Get
https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout.
If this private registry supports only HTTP or HTTPS with an unknown CA
certificate, please add `--insecure-registry 172.22.22.11:5000` to the
daemon's arguments. In the case of HTTPS, if you have access to the registry's
CA certificate, no need for the flag; simply place the CA certificate at
/etc/docker/certs.d/172.22.22.11:5000/ca.crt
Both machines' Docker executable is v1.6.2. Why is it that one works and pushes to v2, while the other falls back to v1?
Here's the repo for the registry: https://github.com/docker/distribution
You need to secure the registry before you can access it remotely, or explicitly allow all your Docker daemons to access insecure registries.
To secure the registry, the easiest choice is to buy an SSL certificate for your server, but you can also self-sign the certificate and distribute it to clients.
To allow insecure access add the argument --insecure-registry myregistrydomain.com:5000 to all the daemons who need to access the registry. (Obviously replace the domain name and port with yours).
The full instructions (including an example of your error message) are available at: https://github.com/docker/distribution/blob/master/docs/deploying.md
Regarding the error message, I guess Docker tries to use v2 first, fails because of the security issue, then tries v1 and fails again.
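To make the self-signing option concrete, here is a rough sketch (the domain is a placeholder; the -addext flag needs OpenSSL 1.1.1 or newer, and the TLS variables come from the registry:2 image):
# Generate a self-signed certificate for the registry host
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout domain.key -x509 -days 365 -out domain.crt \
  -subj "/CN=myregistrydomain.com" \
  -addext "subjectAltName=DNS:myregistrydomain.com"
# Run the v2 registry with TLS enabled
docker run -d -p 5000:5000 -v $(pwd):/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
# Distribute domain.crt to each client as
# /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt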
This may be due to an env variable being set. I had a very similar issue when using a system with this env variable set.
export DOCKER_HOST="tcp://hostname:5000"
Running docker login http://hostname:5000 did not work and gave the same v1 behaviour. I did not expect the env variable to take precedence over an argument passed directly to the command.
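A quick way to spot this kind of leftover configuration (plain shell, nothing specific to the setup above):
# See which Docker variables are set in the current shell
env | grep DOCKER
# Clear the offending variable for this session
unset DOCKER_HOST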
Go to /etc/docker/daemon.json. If the file is not present, create it and add the following:
{
  "insecure-registries": ["hosted-registry-IP:port"]
}
After that, restart the Docker service with:
service docker restart
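To confirm the daemon actually picked up the change, docker info lists the configured insecure registries:
# The registry should now appear under "Insecure Registries"
docker info | grep -A 3 'Insecure Registries'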