Why does pushing to a private, secured Docker registry fail? - docker

I would like to run a private, secure, authenticated Docker registry on a self-healing AWS ECS cluster. The cluster setup is done and works properly, but I struggled to get registry:latest running. The problem is that every time I push an image, pushing the blobs fails and goes into a retry cycle until I hit a timeout.
To make sure that my ECS setup isn't the blocker, I tried to set everything up locally using Docker4Mac 1.12.0-a.
First, the very basic setup works. I created my own version of the registry image, in which I put my TLS certificate bundle and key, as well as the necessary htpasswd file, directly into the image. [I know this is insecure; I only do it for testing purposes.] So here is my Dockerfile:
FROM registry:latest
COPY htpasswd /etc/docker
COPY server_bundle.pem /etc/docker
COPY server_key.pem /etc/docker
server_bundle.pem contains a wildcard certificate for my domain mydomain.com (CN=*.mydomain.com) as the first entry, followed by the intermediate CA certificates, so clients should be happy. My htpasswd file was created using the recommended approach:
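One way to sanity-check the bundle order (a quick sketch, assuming a standard openssl install; the file name matches the Dockerfile above) is to print each certificate's subject in file order:

```shell
# Prints the subject/issuer of every certificate in the bundle, in file order.
# The first subject printed should be CN=*.mydomain.com, followed by the
# intermediate CA certificates.
openssl crl2pkcs7 -nocrl -certfile server_bundle.pem \
  | openssl pkcs7 -print_certs -noout
```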
docker run --entrypoint htpasswd registry:2 -Bbn heyitsme mysupersecurepassword > htpasswd
I build my image:
docker build -t heyitsme/registry .
and afterwards I run a very basic version w/o TLS and authentication:
docker run --restart=always -p 5000:5000 heyitsme/registry
and I can actually pull, tag and re-push an image:
docker pull alpine
docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine
This works. Next I make TLS and basic auth work via environment variables:
docker run -h registry.mydomain.com --name registry --restart=always -p 5000:5000 \
-e REGISTRY_HTTP_HOST=http://registry.mydomain.com:5000 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/etc/docker/server_bundle.pem \
-e REGISTRY_HTTP_TLS_KEY=/etc/docker/server_key.pem \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM=Registry-Realm \
-e REGISTRY_AUTH_HTPASSWD_PATH=/etc/docker/htpasswd heyitsme/registry
For the time being I create an entry in /etc/hosts which says:
127.0.0.1 registry.mydomain.com
And then I login:
docker login registry.mydomain.com:5000
Username: heyitsme
Password: ***********
Login Succeeded
So now, let’s tag and push the image here:
docker tag alpine registry.mydomain.com:5000/alpine
docker push registry.mydomain.com:5000/alpine
The push refers to a repository [registry.mydomain.com:5000/alpine]
4fe15f8d0ae6: Retrying in 4 seconds
What happens is that the Docker client tries to push the blobs and fails. Then it retries and fails again until I hit a timeout. So the next check is whether the V2 API works properly:
curl -i -XGET https://registry.mydomain.com:5000/v2/
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Basic realm="Registry-Realm"
X-Content-Type-Options: nosniff
Date: Thu, 15 Sep 2016 10:06:04 GMT
Content-Length: 87
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
Ok, as expected. So let's authenticate this time:
curl -i -XGET https://heyitsme:mysupersecurepassword@registry.mydomain.com:5000/v2/
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Thu, 15 Sep 2016 10:06:16 GMT
{}
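As a side note, the user:password@host form in the curl call above is just shorthand for sending an HTTP Basic Authorization header, whose value is the base64 of user:password. A quick sketch to see what the client actually sends (credentials from the htpasswd example above):

```shell
# Build the Basic auth header value that curl/docker send for these credentials
token=$(printf '%s' 'heyitsme:mysupersecurepassword' | base64)
echo "Authorization: Basic $token"
# equivalent explicit request:
# curl -i -H "Authorization: Basic $token" https://registry.mydomain.com:5000/v2/
```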
Works. But pushing still fails.
The logs say:
time="2016-09-15T10:24:34Z" level=warning msg="error authorizing context: basic authentication challenge for realm \"Registry-Realm\": invalid authorization credential" go.version=go1.6.3 http.request.host="registry.mydomain.com:5000" http.request.id=6d2ec080-6824-4bf7-aac2-5af31db44877 http.request.method=GET http.request.remoteaddr="172.17.0.1:40878" http.request.uri="/v2/" http.request.useragent="docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))" instance.id=be3a8877-de64-4574-b47a-70ab036e7b79 version=v2.5.1
172.17.0.1 - - [15/Sep/2016:10:24:34 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))"
time="2016-09-15T10:24:34Z" level=info msg="response completed" go.version=go1.6.3 http.request.host="registry.mydomain.com:5000" http.request.id=8f81b455-d592-431d-b67d-0bc34155ddbf http.request.method=POST http.request.remoteaddr="172.17.0.1:40882" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))" http.response.duration=30.515131ms http.response.status=202 http.response.written=0 instance.id=be3a8877-de64-4574-b47a-70ab036e7b79 version=v2.5.1
172.17.0.1 - - [15/Sep/2016:10:24:34 +0000] "POST /v2/alpine/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))"
2016/09/15 10:24:34 http: TLS handshake error from 172.17.0.1:40886: tls: first record does not look like a TLS handshake
I also tested different versions of the original registry image, especially several versions above 2. All yield the same error. If someone could help me with this issue, that would be awesome.

Solved:
-e REGISTRY_HTTP_HOST=https://registry.mydomain.com:5000 \
as the environment variable did the trick. It was just the earlier use of http instead of https that made the connection fail (hence the "first record does not look like a TLS handshake" error in the registry log).
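Putting it together, the working run command differs from the failing one only in the scheme of REGISTRY_HTTP_HOST (all other values as in the question):

```shell
docker run -h registry.mydomain.com --name registry --restart=always -p 5000:5000 \
  -e REGISTRY_HTTP_HOST=https://registry.mydomain.com:5000 \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/etc/docker/server_bundle.pem \
  -e REGISTRY_HTTP_TLS_KEY=/etc/docker/server_key.pem \
  -e REGISTRY_AUTH=htpasswd \
  -e REGISTRY_AUTH_HTPASSWD_REALM=Registry-Realm \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/etc/docker/htpasswd \
  heyitsme/registry
```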

Thanks a lot, it's working now!
I added the variable to my Kubernetes deployment manifest:
(...)
- env:
  - name: REGISTRY_STORAGE_DELETE_ENABLED
    value: "true"
  - name: REGISTRY_HTTP_HOST
    value: "https://registry.k8sprod.acme.domain:443"
(...)
For those who use an nginx ingress in front: you have to set extra annotations in your ingress manifest if you run into push size errors:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-registry
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "8192m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
  labels:
    k8s-app: kube-registry
    app: kube-registry
spec:
  rules:
  - host: registry.k8sprod.acme.domain
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-kube-registry
          servicePort: 5000

Related

Load balancing Harbor repo is not working when path_beg has multiple path segments

I have configured my Harbor Docker registry behind HAProxy. Here is my HAProxy configuration:
frontend ashok_registry
    bind 0.0.0.0:8080
    use_backend maven-repo if { path_beg -i /repository }
    use_backend harbor-repo if { path_beg -i /v2/dev-images/ }

backend maven-repo
    server maven-repo mvn01-ashok-dev.net:443 ssl verify none check inter 5s

backend harbor-repo
    server harbor-repo hub.ashok.com:443 ssl verify none check inter 5s
Whenever I use a multi-segment path (/v2/dev-images) in path_beg, I am unable to pull the Docker image and get the following error:
$ docker pull localhost:8080/dev-images/myapp/myapp-image:1.1.0
Error response from daemon: unauthorized: authorize header needed to send HEAD to repository: authorize header needed to send HEAD to repository
Here are my HAProxy server logs:
Dec 28 12:39:42 hydlpt391 haproxy[40706]: 127.0.0.1:53422 [28/Dec/2022:12:39:42.908] ashok_registry ashok_registry/<NOSRV> -1/-1/-1/-1/0 400 0 - - PR-- 1/1/0/0/0 0/0 "<BADREQ>"
Dec 28 12:39:42 hydlpt391 haproxy[40706]: 127.0.0.1:53436 [28/Dec/2022:12:39:42.909] ashok_registry ashok_registry/<NOSRV> -1/-1/-1/-1/0 503 216 - - SC-- 1/1/0/0/0 0/0 "GET /v2/ HTTP/1.1"
Dec 28 12:39:43 hydlpt391 haproxy[40706]: 127.0.0.1:53448 [28/Dec/2022:12:39:42.914] ashok_registry harbor-repo/harbor-repo 0/0/756/309/1065 401 480 - - ---- 1/1/0/0/0 0/0 "HEAD /v2/dev-images/myapp/myapp-image/manifests/1.1.0 HTTP/1.1"
Dec 28 12:39:44 hydlpt391 haproxy[40706]: 127.0.0.1:53452 [28/Dec/2022:12:39:43.981] ashok_registry harbor-repo/harbor-repo 0/0/758/251/1009 401 632 - - ---- 1/1/0/0/0 0/0 "GET /v2/dev-images/myapp/myapp-image/manifests/1.1.0 HTTP/1.1"
If I remove the extra path segment from path_beg, then I am able to pull the image through HAProxy:
frontend ashok_registry
    bind 0.0.0.0:8080
    use_backend maven-repo if { path_beg -i /repository }
    use_backend harbor-repo if { path_beg -i /v2/ }

backend maven-repo
    server maven-repo mvn01-ashok-dev.net:443 ssl verify none check inter 5s

backend harbor-repo
    server harbor-repo hub.ashok.com:443 ssl verify none check inter 5s
And the pull command output:
docker pull localhost:8080/dev-images/myapp/myapp-image:1.1.0
1.1.0: Pulling from dev-images/myapp/myapp-image
213ec9aee27d: Already exists
24b464698217: Pull complete
4f4fb700ef54: Pull complete
b4c5e6d1ca25: Pull complete
4c437a1beb75: Pull complete
357d1bd31d98: Pull complete
72cf3d73d8a4: Pull complete
6476114140cd: Pull complete
f1f11b5c7106: Pull complete
I need /v2/dev-images/ in path_beg because I have multiple /v2/ paths across multiple URLs. How can I solve this issue?
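A possible explanation, based on the logs above: the Docker client starts every pull with a bare GET /v2/ version-check ping, and with path_beg -i /v2/dev-images/ that request matches no backend, hence the 503 `<NOSRV>` line. An untested sketch that keeps the narrow repository match but also routes the ping to Harbor:

```
frontend ashok_registry
    bind 0.0.0.0:8080
    use_backend maven-repo if { path_beg -i /repository }
    # also route the client's bare /v2/ version-check ping to Harbor
    use_backend harbor-repo if { path -i /v2/ } || { path_beg -i /v2/dev-images/ }
```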

Unable to pull docker images from harbor through proxy if default_backend is not mentioned

I have a private Harbor Docker repository containing Docker images and Helm charts. I don't want to expose the Harbor repository directly; I want to pull the images and charts via an HAProxy server. Here is my HAProxy configuration:
frontend ashok_registry
    bind 0.0.0.0:8080
    use_backend content-repo if { path_beg -i /repository }
    use_backend gcp-repo if { path_beg -i /mygcp-registry }
    use_backend harbor-repo if { path_beg -i /v2/myharbour-registry }
Docker pull command
docker pull localhost:8080/myharbour-registry/app-images/myimage:1.1.0
Error response from daemon: unauthorized: authorize header needed to send HEAD to repository: authorize header needed to send HEAD to repository
Here are my HAProxy logs; based on the URL, HAProxy routes to the harbor-repo backend, but I still get the above error:
Dec 15 11:06:07 ubuntu haproxy[24156]: 127.0.0.1:46896 [15/Dec/2022:11:06:07.006] ashok_registry ashok_registry/<NOSRV> -1/-1/-1/-1/0 503 221 - - SC-- 1/1/0/0/0 0/0 "GET /v2/ HTTP/1.1"
Dec 15 11:06:08 ubuntu haproxy[24156]: 127.0.0.1:46904 [15/Dec/2022:11:06:07.007] ashok_registry harbor-repo/harbor-repo 0/0/1021/369/1390 401 485 - - ---- 1/1/0/0/0 0/0 "HEAD /v2/myharbour-registry/app-images/myimage/manifests/1.1.0 HTTP/1.1"
Dec 15 11:06:09 ubuntu haproxy[24156]: 127.0.0.1:46912 [15/Dec/2022:11:06:08.399] ashok_registry harbor-repo/harbor-repo 0/0/621/278/899 401 637 - - ---- 1/1/0/0/0 0/0 "GET /v2/myharbour-registry/app-images/myimage/manifests/1.1.0 HTTP/1.1"
But the same pull command works fine when I add default_backend harbor-repo. What's wrong with my HAProxy configuration?
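For comparison, the working variant mentioned at the end would look like this: the Docker client's initial bare GET /v2/ ping matches none of the path_beg rules, so a default_backend catches it (a sketch of the question's own fix, not verified against this setup):

```
frontend ashok_registry
    bind 0.0.0.0:8080
    use_backend content-repo if { path_beg -i /repository }
    use_backend gcp-repo if { path_beg -i /mygcp-registry }
    use_backend harbor-repo if { path_beg -i /v2/myharbour-registry }
    # the bare GET /v2/ ping matches no rule above; send it to Harbor
    default_backend harbor-repo
```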

kubectl works on laptop but times out from within a docker container

Context:
I'm building a deployment tooling image that contains aws-cli, kubectl and helm. Testing the image locally, I found that kubectl times out in the container despite working fine on the host (my laptop).
I tested with the alpine/k8s:1.19.16 image as well (same docker run options) and ran into the same issue.
What I did:
- I'm on OS X and have kubectl, aws-cli and helm installed via brew.
- I have valid (not yet expired) AWS credentials (~/.aws/credentials) and a ~/.kube/config.
- Running aws s3 ls s3://my-bucket on my laptop works, returning the correct response.
- Running kubectl get pods -A on my laptop works, returning the correct response.
- I then switched to running these in containers with docker run, with no context change. The issue exists both in the image I created and in an official k8s tooling image from Alpine; for simplicity I'll use alpine/k8s:1.19.16 in my commands.

Command for launching the container console:
docker run --rm -it --entrypoint="" -e AWS_PROFILE -v /Users/myself/.aws:/root/.aws -v /Users/myself/.kube/config:/root/.kube/config -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock -e GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" alpine/k8s:1.19.16 /bin/bash
In the launched console:
- Running aws s3 ls s3://my-bucket still works fine, returning the correct response.
- Running kubectl get pods -A times out.
I compared the verbose output of kubectl get pods --v=8 (with the same context and ~/.kube/config):
on the host (my laptop)
I0826 01:24:11.999265 43571 loader.go:372] Config loaded from file: /Users/myself/.kube/config
I0826 01:24:12.014315 43571 round_trippers.go:463] GET https://<current-context-k8s-dns-name>/apis/external.metrics.k8s.io/v1beta1?timeout=32s
I0826 01:24:12.014330 43571 round_trippers.go:469] Request Headers:
I0826 01:24:12.014351 43571 round_trippers.go:473] User-Agent: kubectl/v1.24.0 (darwin/amd64) kubernetes/4ce5a89
I0826 01:24:12.014358 43571 round_trippers.go:473] Accept: application/json, */*
I0826 01:24:12.443152 43571 round_trippers.go:574] Response Status: 200 OK in 428 milliseconds
in the console (docker container):
I0826 08:25:47.066787 19 loader.go:375] Config loaded from file: /root/.kube/config
I0826 08:25:47.067505 19 round_trippers.go:421] GET https://<current-context-k8s-dns-name>/api?timeout=32s
I0826 08:25:47.067532 19 round_trippers.go:428] Request Headers:
I0826 08:25:47.067538 19 round_trippers.go:432] Accept: application/json, */*
I0826 08:25:47.067542 19 round_trippers.go:432] User-Agent: kubectl/v1.19.16 (linux/amd64) kubernetes/e37e4ab
I0826 08:26:17.047076 19 round_trippers.go:447] Response Status: in 30000 milliseconds
The ~/.kube/config was mounted correctly, and the verbose log confirms that it is loaded and points to the right https endpoint. I also tried to ssh (by IP) into one of the master nodes from both the container and the laptop: it worked from the laptop, but the same ssh command from the container timed out too.
nslookup <current-context-k8s-dns-name> from the container and from the laptop gave slightly different output.
From my laptop (host):
nslookup <current-context-k8s-dns-name>
Server: 10.253.0.2
Address: 10.253.0.2#53
Non-authoritative answer:
Name: <current-context-k8s-dns-name>
Address: 172.20.50.40
Name: <current-context-k8s-dns-name>
Address: 172.20.50.41
Name: <current-context-k8s-dns-name>
Address: 172.20.50.42
from the container:
nslookup <current-context-k8s-dns-name>
Server: 192.168.65.5
Address: 192.168.65.5:53
Non-authoritative answer:
Name: <current-context-k8s-dns-name>
Address: 172.20.50.40
Name: <current-context-k8s-dns-name>
Address: 172.20.50.41
Name: <current-context-k8s-dns-name>
Address: 172.20.50.42
I have a feeling this has something to do with Docker networking, but I don't know enough to solve it. I'd be deeply grateful if anyone can explain this to me. Thanks in advance.
This is clearly a Docker networking issue.
I ran tcpdump + telnet on both host and container to compare the output, and it seems the packets are not even routed to the host.
I ended up doing a docker network prune, and the issue was resolved. Nothing was wrong with the setup; it's a known issue for Docker on OS X.
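For reference, the commands involved (note that prune deletes every network not used by at least one container, so review the list first):

```shell
docker network ls      # list existing networks before pruning
docker network prune   # remove all unused networks (asks for confirmation)
```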

Artifactory Docker Login to Repository Localhost

I have a local JFrog Artifactory Pro instance.
I use "http://localhost:8081/artifactory/webapp/#/home" to get to my Artifactory.
I created a local Docker registry and configured a direct reverse proxy; here is the configuration from the REST API:
$ curl -u admin:xxxxxx -i "http://127.0.0.1:8081/artifactory/api/system/configuration/webServer"
HTTP/1.1 200 OK
Server: Artifactory/6.2.0
X-Artifactory-Id: de89ec654198c960:3f9aa2d0:167a7c20d5e:-8000
Access-Control-Allow-Methods: GET, POST, DELETE, PUT
Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia
Cache-Control: no-store
Content-Type: application/json
Transfer-Encoding: chunked
Date: Sun, 16 Dec 2018 13:04:52 GMT
{
"key" : "direct",
"webServerType" : "DIRECT",
"artifactoryAppContext" : "artifactory",
"publicAppContext" : "artifactory",
"serverName" : "127.0.0.1",
"serverNameExpression" : "*.localhost",
"artifactoryServerName" : "localhost",
"artifactoryPort" : 8081,
"dockerReverseProxyMethod" : "SUBDOMAIN",
"useHttps" : false,
"useHttp" : true,
"httpsPort" : 443,
"httpPort" : 8081,
"upStreamName" : "artifactory"
}
I want to log in to my registry "mylocaldocker" via the Docker client, but I get an error:
$ docker login mylocaldocker.localhost -u admin -p xxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://mylocaldocker.localhost/v2/: dial tcp: lookup mylocaldocker.localhost on 192.168.65.1:53: no such host
How can I log in to the Artifactory Docker registry and pull/push images to it?
First: docker login relates to Artifactory -> Configuration -> HTTP Settings. I set the "Docker access method" to "Repository path" and logged in with:
docker login -u admin -p **** x.x.x.x:8081
Second:
Due to a limitation in Docker, we cannot use docker login against localhost.
Replace "localhost" or "127.0.0.1" with the real machine IP (a private IP),
and add private_ip:8081 (x.x.x.x:8081) to the insecure registries in your Docker configuration.
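A sketch of the corresponding daemon configuration (the x.x.x.x placeholder is kept from the answer; on Linux this goes into /etc/docker/daemon.json, on Docker Desktop into Preferences -> Docker Engine, and the daemon must be restarted afterwards):

```json
{
  "insecure-registries": ["x.x.x.x:8081"]
}
```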
See the answer in this link: Pull Artifactory Docker Images

Setting up GitLab with a Docker registry: error 500

I have Docker running with a Docker registry on example.domain.com:
docker run -d -p 5000:5000 --restart=always --name registry \
-v /etc/ssl/certs/:/certs \
-e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry \
-v /git/docker_registry:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/server.key \
registry:2
I can push and pull to this Docker registry, but I want to connect it to GitLab, which runs on the same machine (example.domain.com), using this gitlab.yml config:
registry:
  enabled: true
  host: example.domain.com
  port: 5005
  api_url: http://localhost:5000/
  key: /etc/ssl/certs/server.key
  path: /git/docker_registry
Enabling the Docker registry on a project in the web browser works fine, but when I go to the project page and open the Registry page, I get error 500.
The GitLab logs show:
Started POST "/api/v3/internal/allowed" for 10.10.200.96 at 2016-11-25 10:15:01 +0100
Started POST "/api/v3/internal/allowed" for 10.10.200.96 at 2016-11-25 10:15:01 +0100
Started POST "/api/v3/internal/allowed" for 10.10.200.96 at 2016-11-25 10:15:01 +0100
Started GET "/data-access-servicess/centipede-rest/container_registry" for 10.11.0.232 at 2016-11-25 10:15:01 +0100
Processing by Projects::ContainerRegistryController#index as HTML
Parameters: {"namespace_id"=>"data-access-servicess", "project_id"=>"centipede-rest"}
Completed 500 Internal Server Error in 195ms (ActiveRecord: 25.9ms)
Faraday::ConnectionFailed (wrong status line: "\x15\x03\x01\x00\x02\x02"):
lib/container_registry/client.rb:19:in `repository_tags'
lib/container_registry/repository.rb:22:in `manifest'
lib/container_registry/repository.rb:31:in `tags'
app/controllers/projects/container_registry_controller.rb:8:in `index'
lib/gitlab/request_profiler/middleware.rb:15:in `call'
lib/gitlab/middleware/go.rb:16:in `call'
and the Docker Registry log shows:
2016/11/25 09:15:01 http: TLS handshake error from 172.17.0.1:44608: tls: first record does not look like a TLS handshake
The problem is that GitLab tries to connect to the registry via http, not https. Hence you are getting the TLS handshake error.
Change your gitlab config from

registry:
  api_url: http://localhost:5000/

to

registry:
  api_url: https://localhost:5000/
If you are using a self-signed certificate, don't forget to trust it on the machine where GitLab is installed. See https://docs.docker.com/registry/insecure/#troubleshooting-insecure-registry
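Applied to the full registry block from the question, the corrected gitlab.yml reads (only the api_url scheme changes):

```yaml
registry:
  enabled: true
  host: example.domain.com
  port: 5005
  api_url: https://localhost:5000/  # https, since the registry terminates TLS itself
  key: /etc/ssl/certs/server.key
  path: /git/docker_registry
```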
