Private docker registry works in curl, but not in docker: x509: certificate signed by unknown authority - docker-registry

I followed the docker manuals for setting up a private registry, and acquired a Let's Encrypt certificate. This is my docker-compose.yml:
version: '2'
services:
  registry:
    restart: always
    image: registry:2.3.1
    ports:
      - 5000:5000
    environment:
      REGISTRY_HTTP_TLS_CERTIFICATE: /certs/live/git.xxxx.com/fullchain.pem
      REGISTRY_HTTP_TLS_KEY: /certs/live/git.xxxx.com/privkey.pem
      REGISTRY_AUTH: htpasswd
      REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
      REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
    volumes:
      - ./data:/var/lib/registry
      - /etc/letsencrypt:/certs
      - ./auth:/auth
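As an aside, the registry only accepts bcrypt-hashed entries in that htpasswd file. A minimal sketch for generating it, assuming the username raarts and a placeholder password (older registry:2 images bundle the htpasswd binary; newer ones do not, so the httpd image is shown as an alternative):
mkdir -p auth
# registry images up to 2.6 ship htpasswd:
docker run --rm --entrypoint htpasswd registry:2.3.1 -Bbn raarts 'changeme' > auth/htpasswd
# or, with images that no longer include it:
docker run --rm httpd:2 htpasswd -Bbn raarts 'changeme' > auth/htpasswd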
This is my curl command and result:
curl https://git.xxxx.com:5000/v2/
<htpassword auth succeeds>
{}
Chrome and Firefox also show the certificate as valid and can reach this endpoint without errors.
But docker login keeps failing:
docker login https://git.xxxx.com:5000/v2/
Username: raarts
Password:
Email:
Error response from daemon: invalid registry endpoint https://git.xxxx.com:5000/v2/: Get https://git.xxxx.com:5000/v2/: x509: certificate signed by unknown authority. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add `--insecure-registry git.xxxx.com:5000` to the daemon's arguments. In the case of HTTPS, if you have access to the registry's CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/git.xxxx.com:5000/ca.crt
Using docker 1.10.3

I fixed the problem, and it's embarrassing; I wouldn't mention it at all if the error message weren't so misleading.
On my own laptop I had pointed git.xxxx.com at a different IP, so Docker never actually reached the registry server: the connections were refused.
But the error message really pointed me in the wrong direction and cost me several hours of my time.
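For anyone who hits the same misleading message: before chasing certificates, it is worth checking what the Docker daemon's host actually resolves and connects to. A quick sanity check, assuming the registry host is git.xxxx.com:
# does the name resolve to the IP you expect (watch out for stale /etc/hosts entries)?
getent hosts git.xxxx.com
grep git.xxxx.com /etc/hosts
# is the port reachable, and which certificate is actually being served there?
openssl s_client -connect git.xxxx.com:5000 -servername git.xxxx.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates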

Related

Docker private registry token authentication failed with status: 400 Bad Request

The following is the docker-compose.yml file for my private Docker registry, hosted at registry.MY-DOMAIN.com:
version: "3.9"
services:
registry:
image: registry:latest
environment:
REGISTRY_HTTP_SECRET: b8f62d22-9a3f-4c73-bf5e-e0864b400bc8
#S3 bucket as docker storage
REGISTRY_STORAGE: s3
REGISTRY_STORAGE_S3_ACCESSKEY: XXXXXXXXX
REGISTRY_STORAGE_S3_SECRETKEY: XXXXXXXXX
REGISTRY_STORAGE_S3_REGION: us-east-1
REGISTRY_STORAGE_S3_BUCKET: docker-registry
#Docker token based authentication
REGISTRY_AUTH: token
REGISTRY_AUTH_TOKEN_REALM: "https://api.MY-DOMAIN.com/api/developer-auth/login"
REGISTRY_AUTH_TOKEN_SERVICE: Authentication
REGISTRY_AUTH_TOKEN_ISSUER: "Let's Encrypt"
REGISTRY_AUTH_TOKEN_AUTOREDIRECT: false
#Letsencrupt certificate
REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: "/certs/live/registry.MY-DOMAIN.com/fullchain.pem"
REGISTRY_HTTP_TLS_CERTIFICATE: "/certs/live/registry.MY-DOMAIN.com/fullchain.pem"
REGISTRY_HTTP_TLS_KEY: "/certs/live/registry.MY-DOMAIN.com/privkey.pem"
ports:
- 5000:5000
restart: always
volumes:
- "/etc/letsencrypt:/certs"
When I try to log in (which goes through my API server), it returns the following error:
❯ docker login registry.MY-DOMAIN.com
Username: Elda86@yahoo.com
Password:
Error response from daemon: login attempt to https://api.MY-DOMAIN.com/v2/ failed with status: 400 Bad Request
My NodeJS API (which talks to a MongoDB database) has no username field. Can I pass the email address instead of a username?
I want to implement Docker Registry token authentication against my custom NodeJS (ExpressJS) API, so that users can run docker login registry.mydomain.com and push images once authenticated. I want the same experience as Docker Hub; I am building a Docker Hub-like registry for my product that acts as a Docker store.
How can I fix this issue?
It looks like your token authentication service is not implemented correctly. You should implement it according to the specification; a rough sketch of the expected handshake follows the links below. See the following for more information:
Token Authentication Specification
Token Authentication Implementation
Token Scope Documentation
OAuth2 Token Authentication
I would also recommend looking at actual existing implementations, such as:
https://github.com/cesanta/docker_auth
https://github.com/adigunhammedolalekan/registry-auth
https://github.com/twosigma/docker-repo-auth-demo
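To make that concrete, here is a rough sketch of the handshake docker login performs against the compose file above, reproduced with curl; the credentials and query parameters are placeholders, and the realm URL and service name are taken from the question:
# 1. An unauthenticated request to the registry must return 401 plus a challenge header
#    (adjust the host:port to whatever you expose):
curl -i https://registry.MY-DOMAIN.com:5000/v2/
#    WWW-Authenticate: Bearer realm="https://api.MY-DOMAIN.com/api/developer-auth/login",service="Authentication"

# 2. The Docker client then calls that realm with HTTP Basic credentials and the challenge parameters:
curl -i -u 'user@example.com:password' \
  'https://api.MY-DOMAIN.com/api/developer-auth/login?account=user%40example.com&client_id=docker&offline_token=true&service=Authentication'

# 3. The realm must answer 200 with a JSON body along the lines of:
#    {"token": "<JWT signed by the key matching REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE>", "expires_in": 3600, "issued_at": "2021-01-01T00:00:00Z"}
#    Any other response (for example a 400 because a query parameter or the Basic Authorization
#    header is not handled) is surfaced by the daemon as "login attempt ... failed with status: ...".
The registry does not interpret the username at all, so sending an email address is fine as long as your Express handler accepts it as the Basic-auth user. Also note that the iss claim in the JWT must match REGISTRY_AUTH_TOKEN_ISSUER, and REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE must point at the certificate for the key your service signs tokens with, which is normally not the Let's Encrypt certificate used for HTTPS.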

How to make Drone Docker Plugin use self-signed certs?

I'm facing the same problem as here: I have set up a private Docker registry with TLS (certificates generated via Certbot), and I can interact with it directly via curl etc. (thus proving that the certificate is correct), but the Docker plugin in my Drone flow gives the error x509: certificate signed by unknown authority.
As per this StackOverflow answer, I believe that putting the certificate at /etc/docker/certs.d/<my_registry_address:port>/ca.crt should fix this problem, but it doesn't appear to (and neither does adding the certificate to the standard /etc/ssl/certs/ca-certificates.crt location).
Demonstration that the certificates work as-expected, having already built the Docker Drone Plugin locally as per https://github.com/drone-plugins/drone-docker:
$ docker run --rm -v <path_to_directory_containing_pems>:/custom-certs -it --entrypoint /bin/sh plugins/docker
/ # ls /custom-certs
accounts archive csr keys live renewal renewal-hooks
/ # apk add curl
...
OK: 28 MiB in 56 packages
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /custom-certs/live/docker-registry.scubbo.org/fullchain.pem
{"repositories":[...]}
/ # cat /custom-certs/live/docker-registry.scubbo.org/fullchain.pem >> /etc/ssl/certs/ca-certificates.crt
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
{"repositories":[...]}
Here's my .drone.yml, for a Runner instantiated with --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,<path_to_directory_containing_pems>:/custom-certs:
kind: pipeline
name: hello-world
type: docker
platform:
  os: linux
  arch: arm64
steps:
  - name: copy-cert-into-place
    image: busybox
    volumes:
      - name: docker-cert-persistence
        path: /etc/docker/certs.d/
    commands:
      # https://stackoverflow.com/a/56410355/1040915
      # Note that we need to mount the whole `custom-certs` directory into the workflow and then copy the file to `/etc/...`,
      # rather than mounting the file directly into `/etc/...`, because the original file is a symlink and it's not possible (AFAIK)
      # to instruct Docker to "mount the eventual-target-of this symlink into <location>"
      - mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
      - cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
  - name: check-cert-persists-between-stages
    image: alpine
    volumes:
      - name: docker-cert-persistence
        path: /etc/docker/certs.d/
    commands:
      - apk add curl
      # The command below would fail if the cert was unavailable or invalid
      - curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
  - name: build-image
    # ...contents irrelevant to this question...
  - name: push-built-image
    image: plugins/docker
    volumes:
      - name: docker-cert-persistence
        path: /etc/docker/certs.d/
    settings:
      repo: docker-registry.scubbo.org:8843/scubbo/blog_nginx
      tags: built_in_ci
      debug: true
      launch_debug: true
volumes:
  - name: docker-cert-persistence
    temp: {}
This gives these logs from the push-built-image step, ending in:
+ /usr/local/bin/docker tag 472d41d9c03ee60fe9c1965ad9cfd36a1cdb6cbf docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
+ /usr/local/bin/docker push docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
The push refers to repository [docker-registry.scubbo.org:8843/scubbo/blog_nginx]
Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
exit status 1
How should I go about providing the CA Certificate to my Drone Docker Plugin step to permit it to communicate over TLS with a secure Docker registry? This answer suggests simply reverting to insecure integration, which works but is unsatisfactory.
EDIT: After re-reading this documentation, I extended the copy-cert-into-place commands to copy all 3 certificate-related files:
commands:
- mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
- cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
- cp -L /custom-certs/live/docker-registry.scubbo.org/privkey.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.key
- cp -L /custom-certs/live/docker-registry.scubbo.org/cert.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.cert
but that did not resolve the problem - same x509: certificate signed by unknown authority error.
EDIT2: I confirmed directly on a host (outside the context of a plugin or Docker container) that adding the certificate at the path used above is sufficient to permit interaction with the registry:
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
Error response from daemon: Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
$ sudo cp -L <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem /etc/docker/certs.d/docker-registry.scubbo.org\:8843/ca.crt
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
built_in_ci: Pulling from scubbo/blog_nginx
Digest: sha256:3a17f86f23050303d94443f24318b49fb1a5e2d0cc9228270678c8aa55b4d2c2
Status: Image is up to date for docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
This isn't a complete answer, but I was able to get secure registry access working by switching from mounting a directory to mounting the file directly:
I changed the docker run option to --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,$(readlink -f <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem):/registry_cert.crt
I changed the commands in copy-cert-into-place to:
- mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
- cp /registry_cert.crt /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
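For context, that option lives on the runner's own docker run command rather than in the pipeline. A hypothetical runner launch, where the image tag and the DRONE_RPC_* values are placeholders and only the DRONE_RUNNER_VOLUMES line is the relevant change, might look like:
docker run --detach \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --env=DRONE_RPC_PROTO=https \
  --env=DRONE_RPC_HOST=drone.example.com \
  --env=DRONE_RPC_SECRET=<shared-secret> \
  --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,$(readlink -f <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem):/registry_cert.crt \
  --name=drone-runner \
  drone/drone-runner-docker:1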
I don't consider this a complete answer (and would love further input or advice!), because:
I don't know why copying the file out of the mounted directory into /etc/docker/... (as in the original question) didn't work, but mounting the file directly from the host filesystem worked. (Note that the check-cert-persists-between-stages stage confirms that the certificate is correct, so it's not a mistake of copying a wrong or empty file)
I don't know how to mount the file directly into an in-stage path that contains a colon - this answer indicates how to mount a path containing a colon directly into a container, but in this case we're passing the path to DRONE_RUNNER_VOLUMES

GitLab and Docker registry on separate servers

I'm a little bit desperate and starting to go mad.
I tried to configure my GitLab instance (Omnibus) to work with an external private Docker image registry. Initially I thought it would be a relatively easy task, but now I'm totally confused.
My initial installation looked like this:
generate a self-signed certificate
a clean instance of the Docker registry on Ubuntu 18.04 with docker-compose and nginx, secured with Let's Encrypt on registry.domain.com
I use the following compose file:
version: '3'
services:
  registry:
    restart: always
    image: registry:2
    ports:
      - "5000:5000"
    environment:
      REGISTRY_AUTH: token
      REGISTRY_AUTH_TOKEN_REALM: https://registry.domain.com:5000/auth
      REGISTRY_AUTH_TOKEN_SERVICE: "Docker registry"
      REGISTRY_AUTH_TOKEN_ISSUER: "gitlab-issuer"
      REGISTRY_AUTH_TOKEN_ROOTCERTBUNDLE: /etc/gitlab/registry-certs/registry-auth.crt
      REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /data
    volumes:
      - ./auth:/auth
      - ./data:/data
a clean instance of GitLab on Ubuntu 18.04, secured with Let's Encrypt on gitlab.domain.com
some changes in gitlab.rb, like:
registry_external_url 'https://registry.domain.com/'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "registry.domain.com"
gitlab_rails['registry_port'] = "5000"
gitlab_rails['registry_api_url'] = "htps://registry.prismstudio.space:5000"
gitlab_rails['registry_key_path'] = "/etc/gitlab/registry-certs/registry-auth.key"
gitlab_rails['registry_issuer'] = "gitlab-issuer"
After gitlab-ctl reconfigure I receive a Let's Encrypt error:
letsencrypt_certificate[gitlab.domain.net] (letsencrypt::http_authorization line 5) had an error: RuntimeError: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 25) had an error: RuntimeError: ruby_block[create certificate for gitlab.domain.net] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/acme/resources/certificate.rb line 108) had an error: RuntimeError: [gitlab.domain.com] Validation failed, unable to request certificate
I have literally tried everything, but nothing helps.
Is there any straightforward way to set up a GitLab server with an external Docker registry? How do I configure it properly? I'm open to burning everything to the ground and building it again with a working configuration.
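Not an answer to the registry wiring itself, but the reconfigure failure above is an ACME (Let's Encrypt HTTP-01) validation error: Omnibus tries to obtain a certificate for the host in external_url and the challenge cannot be completed. A couple of quick checks, assuming gitlab.domain.com is meant to be publicly reachable:
# does the name resolve to this server's public IP?
dig +short gitlab.domain.com
# HTTP-01 validation needs port 80 reachable from the internet
# (a 404 here is fine; a timeout or connection refused is not):
curl -i http://gitlab.domain.com/.well-known/acme-challenge/test
If the GitLab certificate is managed outside of Omnibus, the built-in integration can also be switched off with letsencrypt['enable'] = false in gitlab.rb, which makes gitlab-ctl reconfigure skip this step entirely.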

Ensuring docker containers have certificate

I have an issue where a self-signed certificate has been added to a testing environment.
This means my Selenium grid, which is hosted in Docker containers, is unable to reach that environment because of the certificate.
I get this error when executing tests:
Message: OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://xxx.xx.x.x:4444/wd/hub/session/0ee03d72bff0d5527cff926121b496bb/url timed out after 60 seconds.
----> System.Net.WebException : The request was aborted: The operation has timed out.
TearDown : OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://xxx.xx.x.x:4444/wd/hub/session/0ee03d72bff0d5527cff926121b496bb/screenshot timed out after 60 seconds.
----> System.Net.WebException : The operation has timed out
The Docker environment is set up with docker-compose, using the chrome and hub images.
The compose file is this:
version: "3"
services:
selenium-hub:
image: selenium/hub:latest
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome:latest
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
I added the certificates to the host, hoping this would be enough, but obviously it isn't, as each container is isolated.
My question is: how do I install the certificates into each Chrome node that spins up?
More information
When running a curl command from within the container I get the following error:
root@b94ed81b0110:/etc# curl https://xxxx.xxxx.co.uk
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
But I have installed the required certificates in the container:
root@b94ed81b0110:/etc# update-ca-certificates
Updating certificates in /etc/ssl/certs...
2 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Adding debian:admin.pem
Adding debian:assessor.pem
done.
done.
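One possible approach, sketched under the assumption that the self-signed certificate is mounted into each node container at /certs/testenv.crt: update-ca-certificates (as run above) only covers OpenSSL-based clients such as curl, while Chrome on Linux reads the NSS database in the user's home directory, so the certificate usually has to be imported there as well with certutil:
# run as root inside each selenium/node-chrome container, e.g. via: docker-compose exec -u root chrome bash
cp /certs/testenv.crt /usr/local/share/ca-certificates/testenv.crt
update-ca-certificates                              # covers curl/openssl clients

apt-get update && apt-get install -y libnss3-tools  # provides certutil
mkdir -p /home/seluser/.pki/nssdb
certutil -d sql:/home/seluser/.pki/nssdb -N --empty-password 2>/dev/null || true   # create the db if missing
certutil -d sql:/home/seluser/.pki/nssdb -A -t "C,," -n testenv -i /certs/testenv.crt
chown -R seluser:seluser /home/seluser/.pki
Since compose recreates the nodes, baking these steps into a small image derived from selenium/node-chrome is more robust than running them by hand after each start. Note also that the timeout in the test output above is a WebDriver timeout against the hub, so it is worth confirming separately that the grid itself is healthy.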

Docker Registry behind TLS enabled reverse proxy (Traefik) - Remote Error: Bad Certificate

So I am doing everything Dockerized here. Traefik is running in a container, as is my Docker Registry instance. I am able to push/pull just fine from the registry if I hit it directly at mydomain.com:5000/myimage.
The problem comes when I try to hit it through 443 using mydomain.com/myimage. The setup here is the Traefik reverse proxy listening on 443 at mydomain.com and forwarding that request internally to :5000 of my Registry instance.
When I push/pull through the Traefik URL, it hangs and counts down, waiting to retry in a loop. When I look at the Registry logs I can see the instance IS in fact in communication with the Traefik reverse proxy; however, I get this error in the log over and over (on each push retry from the client side):
2018/05/31 21:10:43 http: TLS handshake error from proxy_container_ip:port: remote error: tls: bad certificate
Docker Registry is really tight and strict when it comes to TLS. I'm using all self-signed certs here, as I'm still in development. Any idea what is causing this error? I'm assuming that either the Traefik proxy detects that the certificate offered by Registry is not to be trusted (self-signed) and therefore does not complete the push request, or the other way around: Registry, when sending the response back through the Traefik proxy, detects that the proxy is not to be trusted.
I can provide additional information if needed. The current setup is that both Traefik and Registry have their own set of .crt and .key files, both (of course) with TLS enabled.
Thanks.
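One way to narrow down which side is rejecting the certificate is to look at the handshake the way a TLS client would. A quick check, assuming the registry is still reachable directly on :5000 as described above:
# show the certificate the registry presents on its direct port:
openssl s_client -connect mydomain.com:5000 -servername mydomain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
A self-signed certificate shows an identical subject and issuer. In Go programs such as the registry, a log line of the form "remote error: tls: bad certificate" means the peer sent that alert, so here it is most likely Traefik, acting as the TLS client when it forwards the request, refusing the registry's self-signed certificate because it has not been told to trust it (or to skip verification).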
Here is a working solution with a self-signed certificate that you can try out on https://labs.play-with-docker.com
Server
Add a new instance node1 in your Docker playground. We configure it as our server. Create a directory for the certificates:
mkdir /root/certs
Create wildcard certificate *.domain.local:
$ openssl req -newkey rsa:2048 -nodes -keyout /root/certs/domain.local.key -x509 -days 365 -out /root/certs/domain.local.crt
Generating a 2048 bit RSA private key
...........+++
...........+++
writing new private key to '/root/certs/domain.local.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) []:
State or Province Name (full name) []:
Locality Name (eg, city) []:
Organization Name (eg, company) []:
Organizational Unit Name (eg, section) []:
Common Name (eg, fully qualified host name) []:*.domain.local
Email Address []:
Create two files docker-compose.yml and traefik.toml in directory /root. You can download them using:
wget https://gist.github.com/maiermic/cc9c9aab939f7ea791cff3d974725e4a/raw/8c5d787998d33c752f2ab369a9393905780d551c/docker-compose.yml
wget https://gist.github.com/maiermic/cc9c9aab939f7ea791cff3d974725e4a/raw/8c5d787998d33c752f2ab369a9393905780d551c/traefik.toml
docker-compose.yml
version: '3'
services:
  frontproxy:
    image: traefik
    command: --api --docker --docker.swarmmode
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./certs:/etc/ssl:ro
      - ./traefik.toml:/etc/traefik/traefik.toml:ro
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events
    deploy:
      labels:
        - traefik.port=8080
        - traefik.frontend.rule=Host:traefik.domain.local
  docker-registry:
    image: registry:2
    deploy:
      labels:
        - traefik.port=5000 # default port exposed by the registry
        - traefik.frontend.rule=Host:registry.domain.local
        - traefik.frontend.auth.basic=user:$$apr1$$9Cv/OMGj$$ZomWQzuQbL.3TRCS81A1g/ # user:password, see https://docs.traefik.io/configuration/backends/docker/#on-containers
traefik.toml
defaultEntryPoints = ["http", "https"]
# Redirect HTTP to HTTPS and use certificate, see https://docs.traefik.io/configuration/entrypoints/
[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/etc/ssl/domain.local.crt"
        keyFile = "/etc/ssl/domain.local.key"
# Docker Swarm Mode Provider, see https://docs.traefik.io/configuration/backends/docker/#docker-swarm-mode
[docker]
  endpoint = "tcp://127.0.0.1:2375"
  domain = "docker.localhost"
  watch = true
  swarmMode = true
Initialize Docker Swarm (replace <ip-of-node1> with the IP address of node1, for example 192.168.0.13):
docker swarm init --advertise-addr <ip-of-node1>
Deploy traefik and Docker registry:
docker stack deploy myregistry -c ~/docker-compose.yml
Client
Since we don't have a DNS server, we change /etc/hosts (replace <ip-of-node1> with the IP address of our server node1, for example 192.168.0.13):
echo "<ip-of-node1> registry.domain.local traefik.domain.local" >> /etc/hosts
You should now be able to request the health status from Traefik:
$ curl -ksS https://traefik.domain.local/health | jq .
{
"pid": 1,
"uptime": "1m37.501499911s",
"uptime_sec": 97.501499911,
"time": "2018-07-19 07:30:35.137546789 +0000 UTC m=+97.600568916",
"unixtime": 1531985435,
"status_code_count": {},
"total_status_code_count": {},
"count": 0,
"total_count": 0,
"total_response_time": "0s",
"total_response_time_sec": 0,
"average_response_time": "0s",
"average_response_time_sec": 0
}
and you should be able to request all images (none yet) from our registry:
$ curl -ksS -u user:password https://registry.domain.local/v2/_catalog | jq .
{
"repositories": []
}
Let's configure docker on our client. Create the directory for the registry certificates:
mkdir -p /etc/docker/certs.d/registry.domain.local/
Get the certificate from our server:
scp root@registry.domain.local:/root/certs/domain.local.crt /etc/docker/certs.d/registry.domain.local/ca.crt # Are you sure you want to continue connecting (yes/no)? yes
Now you should be able to log in to our registry and push an image:
docker login -u user -p password https://registry.domain.local
docker pull hello-world:latest
docker tag hello-world:latest registry.domain.local/hello-world:latest
docker push registry.domain.local/hello-world:latest
If you request all images from our registry after that, you should see
$ curl -ksS -u user:password https://registry.domain.local/v2/_catalog | jq .
{
"repositories": [
"hello-world"
]
}
