I'm trying to use a self-hosted Docker registry v2. I should be able to push a Docker image, and this does work locally on the host server (CoreOS) running the registry v2 container. However, on a separate machine (also CoreOS, same version), when I try to push to the registry, it tries to push to v1 and gives this error:
Error response from daemon: v1 ping attempt failed with error: Get
https://172.22.22.11:5000/v1/_ping: dial tcp 172.22.22.11:5000: i/o timeout.
If this private registry supports only HTTP or HTTPS with an unknown CA
certificate, please add `--insecure-registry 172.22.22.11:5000` to the
daemon's arguments. In the case of HTTPS, if you have access to the registry's
CA certificate, no need for the flag; simply place the CA certificate at
/etc/docker/certs.d/172.22.22.11:5000/ca.crt
Both machines' Docker executable is v1.6.2. Why does one work and push to v2, while the other falls back to v1?
Here's the repo for the registry: https://github.com/docker/distribution
You need to secure the registry before you can access it remotely, or explicitly allow all your Docker daemons to access insecure registries.
To secure the registry, the easiest choice is to buy an SSL certificate for your server, but you can also self-sign the certificate and distribute it to clients.
To allow insecure access, add the argument --insecure-registry myregistrydomain.com:5000 to all the daemons that need to access the registry. (Obviously replace the domain name and port with yours.)
The full instructions (including an example of your error message) are available at: https://github.com/docker/distribution/blob/master/docs/deploying.md
Regarding the error message, I guess Docker tries v2 first, fails because of the security issue, then falls back to v1 and fails again.
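To make the first option more concrete, here is a minimal sketch of running the registry container with TLS enabled; the /opt/certs path and the domain.crt/domain.key file names are placeholders rather than anything from the question:
docker run -d -p 5000:5000 --name registry \
  -v /opt/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
# With a self-signed certificate, each client also needs the cert placed at
# /etc/docker/certs.d/172.22.22.11:5000/ca.crt, as the error message above suggests.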
This may be due to an env variable being set. I had a very similar issue when using a system with this env variable set.
export DOCKER_HOST="tcp://hostname:5000"
Running docker login http://hostname:5000 did not work and gave the same v1 behaviour. I did not expect the env variable to take precedence over an argument passed directly to the command.
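If you suspect the same problem, a quick check is to look at the environment the client actually sees (a generic sketch, not tied to any particular setup):
env | grep -i docker    # shows DOCKER_HOST and related variables if they are set
unset DOCKER_HOST       # clear it for the current shell, then retry docker login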
Go to /etc/docker/daemon.json. If the file is not present, create it and add the following:
{
"insecure-registries": ["hosted-registry-IP:port"]
}
After that, restart the Docker service with
service docker restart
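On recent Docker versions you can confirm the setting was picked up by inspecting the daemon info (the grep pattern assumes the usual output format):
docker info | grep -A 3 'Insecure Registries'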
I followed the Docker documentation in order to access a remote Docker daemon (installed on server B) from server A.
So, all certificates were generated on server B and copied to the Docker client machine, server A.
I had already tested the remote access by running the following command:
docker --tlsverify -H=$MY_HOST:$MY_PORT \
    --tlscacert=$MY_PATH/ca.pem \
    --tlscert=$MY_PATH/client-cert.pem \
    --tlskey=$MY_PATH/client-key.pem
Everything looked good so far, and I successfully accessed the remote Docker daemon.
However, when I tried to access it by exporting the Docker environment variables
export DOCKER_HOST=tcp://MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
things did not turn out as expected (tls: bad certificate):
The server probably has client authentication (--tlsverify) enabled. Please check your TLS client certification settings: Get https://MY_HOST:MY_PORT/v1.40/containers/json?all=1: remote error: tls: bad certificate
Does anyone know how to fix this?
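A detail worth double-checking (an assumption, since the contents of ~/certs are not shown): when DOCKER_CERT_PATH is used, the docker client looks for files named exactly ca.pem, cert.pem and key.pem in that directory, so client-cert.pem and client-key.pem would not be picked up. Something like this would make the environment-variable form match the flags that worked:
cp $MY_PATH/ca.pem ~/certs/ca.pem
cp $MY_PATH/client-cert.pem ~/certs/cert.pem
cp $MY_PATH/client-key.pem ~/certs/key.pem
export DOCKER_HOST=tcp://$MY_HOST:$MY_PORT DOCKER_TLS_VERIFY=1 DOCKER_CERT_PATH=~/certs
docker ps   # should now succeed over verified TLS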
After following the official installation instructions for Docker, I ran into the following error when I tried to run a container:
docker: error pulling image configuration: Get https://production.cloudflare.docker.com/registry-v2/docker/registry/v2/blobs/sha256/fc/fce289e99eb9bca977dae136fbe2a82b6b7d4c372474c9235adc1741675f587e/data?verify=1549989486-DEdrDDaoZskZzHXF84y4VY%2FxRpw%3D: x509: certificate signed by unknown authority
I cannot find information about solving this issue. Please note I am behind a corporate proxy.
I have set the proxy in the file
/etc/systemd/system/docker.service.d/http-proxy.conf
with the following content
[Service]
Environment="HTTP_PROXY=http://proxyurl:8080/" "HTTPS_PROXY=http://proxyurl:8080/"
First, are you sure your HTTPS_PROXY is really http://proxyurl:8080/? Check that the port is configured properly; it is more likely to be 443.
Second, your proxy may work in man-in-the-middle mode, which means it establishes two separate connections: one with you and one with the target server, decrypting and re-encrypting all traffic. In that case it signs the data it sends to you with its own SSL certificate, and you have to obtain that certificate and add it to the trusted ones on your system.
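If that is the case, here is a sketch of adding the proxy's CA to the system trust store on a Debian/Ubuntu host (the proxy-ca.crt file name is a placeholder for whatever certificate your proxy team provides; other distros use a different mechanism):
sudo cp proxy-ca.crt /usr/local/share/ca-certificates/proxy-ca.crt
sudo update-ca-certificates
# Docker reads the system trust store at startup, so restart the daemon afterwards:
sudo systemctl restart docker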
It seems that the image you are trying to pull is stored within a private registry. Have you logged in to that registry?
Meanwhile, try to pull a hello-world image to check that the proxy is not blocking outgoing connections from your Docker host.
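For that connectivity check, something as simple as the following should be enough (hello-world is a tiny public image on Docker Hub):
docker pull hello-world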
I'm running Docker on Windows 10, and I noticed that by default, DOCKER_TLS_VERIFY=0.
I have a few questions about it:
Does it mean that I'm pulling images from Docker Hub insecurely?
Does Docker pull images through https when it is set to 0?
Is it recommended to set it to 1?
Communication with external registry servers like Docker Hub defaults to TLS; this option is for something very different.
DOCKER_TLS_VERIFY tells the docker client (aka the docker command) whether to verify TLS when communicating with the docker daemon (dockerd). If set to 1, the server needs a key pair signed by a private CA, and the client also needs a key pair signed by the same CA. This setting tells the client to verify that the server certificate it receives is signed by the private CA. The daemon/server has a similar setting to verify client certificates.
If you are communicating with a remote docker engine over the network and leave this at 0, that would be very bad, since it implies the remote engine allows anyone to hit the API (which gives root-level access) without any client credentials. When communicating over a local socket that is protected by filesystem permissions, this feature is not needed.
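For illustration, this is roughly what the client side looks like when verification is enabled against a remote daemon (the host name, port and certificate location are placeholders; on Windows the same variables can be set in PowerShell):
export DOCKER_HOST=tcp://my-docker-host:2376   # 2376 is the conventional TLS port for dockerd
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker              # must contain ca.pem, cert.pem and key.pem
docker version                                 # the server certificate is now checked against the CA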
This variable is documented here: https://docs.docker.com/engine/reference/commandline/cli/
Steps to configure the daemon and clients with TLS keys are documented here: https://docs.docker.com/engine/security/https/
For Windows, many of the above steps would need to be translated (e.g. different locations for the daemon.json file). Have a look at the following:
https://docs.docker.com/engine/reference/commandline/dockerd/
https://docs.docker.com/docker-for-windows/
I am trying to use Jenkins to build and push docker images to private registry. However, while trying docker login command, I am getting this error:
http: server gave HTTP response to HTTPS client
I know that this might be happening because the private registry is not added as an insecure registry. But how can I resolve this in the CI pipeline?
Jenkins is set up on a Kubernetes cluster and I am trying to automate the deployment of an application on the cluster.
This has nothing to do with the Jenkins CI pipeline or Kubernetes. Jenkins will not be able to push your images until you follow one of the steps below.
You have two options here:
1) Configure your Docker client to use the secure registry over HTTPS. This involves setting up self-signed certificates or getting certificates from your local certificate authority (a sketch of this option follows these steps).
2) Use your registry over an unencrypted HTTP connection.
For option 2, if you are running Docker on your Kubernetes nodes, you will have to configure the daemon.json file at /etc/docker/daemon.json.
PS: This file might not exist. You will have to create it.
Then add the content below. Make sure you change the URL to match your Docker registry:
{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
Then restart Docker using systemctl restart docker or /etc/init.d/docker restart, depending on the Linux distro installed on your cluster nodes.
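For option 1 mentioned above, a rough sketch of the self-signed route (the domain name, file paths and validity period are placeholders, and your local certificate authority process may differ):
openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout domain.key -x509 -days 365 \
  -subj "/CN=myregistrydomain.com" -out domain.crt
# Serve the registry with domain.crt/domain.key, then make every Docker host trust it:
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp domain.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt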
Let me know if you have any questions
I have my private docker registry running on a remote machine, which is secured by TLS and uses HTTPS. Now I want to access it from my local docker-machine installed on Windows 7. I have copied the certificates to "/etc/docker/certs.d/" in the docker-machine VM and restarted docker.
After this I can successfully login to my private registry using credentials, but when I try to push an image to it, it gives me a certificate signed by unknown authority error. After researching a little I restarted the docker daemon with docker -d --insecure-registry https://<registry-host>, and it worked.
My question is: if I have copied my certificates to the host machine, why do I need to start the registry with the --insecure-registry option?
I can only access the registry from another host by having the certificates in place and additionally restarting Docker with --insecure-registry, which looks a little wrong to me.
Docker version: 1.8.3
Any pointers on this would be really helpful.
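One thing that is easy to get wrong with the certs.d approach (an assumption here, since the exact layout is not shown in the question): the directory under /etc/docker/certs.d/ has to be named exactly like the registry host and port you push to, and the trusted certificate has to be present as ca.crt, for example:
# registry-host:5000 is a placeholder for the real <registry-host>:<port>
sudo mkdir -p /etc/docker/certs.d/registry-host:5000
sudo cp ca.crt /etc/docker/certs.d/registry-host:5000/ca.crt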
certificate signed by unknown authority
The error message gives it away - your certificates are self-signed (as in not trusted by a known CA).
See here.
If you would like to access your registry with HTTP, follow the instructions here
Basically (do this on the machine from which you try to access the registry):
edit the file /etc/default/docker so that there is a line that reads: DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000" (or add that to existing DOCKER_OPTS)
restart your Docker daemon: on Ubuntu, this is usually service docker stop && service docker start
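Put together, the two steps above look roughly like this on an Ubuntu host (the registry address is the placeholder from the answer, not your real one):
echo 'DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"' | sudo tee -a /etc/default/docker
sudo service docker stop && sudo service docker start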