setting gitlab with docker registry error 500 - docker

I have Docker running with a Docker registry on example.domain.com:
docker run -d -p 5000:5000 --restart=always --name registry \
-v /etc/ssl/certs/:/certs \
-e REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry \
-v /git/docker_registry:/var/lib/registry \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/server.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/server.key \
registry:2
I can push and pull to this Docker registry, but when I try to connect it with GitLab (running on the same machine, example.domain.com) using this gitlab.yml config:
registry:
  enabled: true
  host: example.domain.com
  port: 5005
  api_url: http://localhost:5000/
  key: /etc/ssl/certs/server.key
  path: /git/docker_registry
In the web browser, enabling the Docker registry for a project works fine, but when I go to the project page and open the Registry page I get an error 500.
The GitLab logs show:
Started POST "/api/v3/internal/allowed" for 10.10.200.96 at 2016-11-25 10:15:01 +0100
Started POST "/api/v3/internal/allowed" for 10.10.200.96 at 2016-11-25 10:15:01 +0100
Started POST "/api/v3/internal/allowed" for 10.10.200.96 at 2016-11-25 10:15:01 +0100
Started GET "/data-access-servicess/centipede-rest/container_registry" for 10.11.0.232 at 2016-11-25 10:15:01 +0100
Processing by Projects::ContainerRegistryController#index as HTML
Parameters: {"namespace_id"=>"data-access-servicess", "project_id"=>"centipede-rest"}
Completed 500 Internal Server Error in 195ms (ActiveRecord: 25.9ms)
Faraday::ConnectionFailed (wrong status line: "\x15\x03\x01\x00\x02\x02"):
lib/container_registry/client.rb:19:in `repository_tags'
lib/container_registry/repository.rb:22:in `manifest'
lib/container_registry/repository.rb:31:in `tags'
app/controllers/projects/container_registry_controller.rb:8:in `index'
lib/gitlab/request_profiler/middleware.rb:15:in `call'
lib/gitlab/middleware/go.rb:16:in `call'
and the Docker Registry log:
2016/11/25 09:15:01 http: TLS handshake error from 172.17.0.1:44608: tls: first record does not look like a TLS handshake

The problem is that GitLab tries to connect to the registry via HTTP and not HTTPS. Hence you are getting the TLS handshake error.
Change your GitLab config from
registry:
  api_url: http://localhost:5000/
to
registry:
  api_url: https://localhost:5000/
If you are using a self-signed certificate, don't forget to trust it on the machine where GitLab is installed. See https://docs.docker.com/registry/insecure/#troubleshooting-insecure-registry
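For reference, here is a minimal sketch of trusting the self-signed certificate, assuming a Debian/Ubuntu host and the certificate path from the question (the registry host/port and target file names are placeholders to adapt):

```shell
# Run as root. Docker looks for a per-registry CA certificate at
# /etc/docker/certs.d/<host:port>/ca.crt
mkdir -p /etc/docker/certs.d/example.domain.com:5000
cp /etc/ssl/certs/server.crt /etc/docker/certs.d/example.domain.com:5000/ca.crt

# GitLab itself goes through the system CA store, so also trust the
# certificate system-wide (Debian/Ubuntu layout)
cp /etc/ssl/certs/server.crt /usr/local/share/ca-certificates/registry.crt
update-ca-certificates
```

Afterwards, restart Docker and GitLab so both pick up the updated trust store.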

Related

kubectl works on laptop but times out from within a docker container

Context:
I'm setting up a deployment tooling image which contains aws-cli, kubectl and helm. Testing the image locally, I found out that kubectl times out in the container despite working fine on the host (my laptop).
Tested with alpine/k8s:1.19.16 image as well (same docker run command options) and ran into the same issue.
What I did:
I'm on OS X and have kubectl, aws-cli and helm installed via brew
I have valid (not expired yet) AWS credential (~/.aws/credentials) and ~/.kube/config
running aws s3 ls s3://my-bucket works on my laptop, returning the correct response
running kubectl get pods -A works on my laptop, returning the correct response
switching to running these in containers with docker run (no context change), the issue exists in both the image I created and the official alpine/k8s tooling image. For simplicity I'll use alpine/k8s:1.19.16 in my commands
command for launching the container console:
docker run --rm -it --entrypoint="" \
  -e AWS_PROFILE \
  -v /Users/myself/.aws:/root/.aws \
  -v /Users/myself/.kube/config:/root/.kube/config \
  -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" \
  -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock \
  -e GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" \
  alpine/k8s:1.19.16 /bin/bash
in the launched console:
running aws s3 ls s3://my-bucket still works fine, returning the correct response
running kubectl get pods -A times out
I compared the verbose output of kubectl get pods --v=8 (with the same context and ~/.kube/config):
on the host (my laptop)
I0826 01:24:11.999265 43571 loader.go:372] Config loaded from file: /Users/myself/.kube/config
I0826 01:24:12.014315 43571 round_trippers.go:463] GET https://<current-context-k8s-dns-name>/apis/external.metrics.k8s.io/v1beta1?timeout=32s
I0826 01:24:12.014330 43571 round_trippers.go:469] Request Headers:
I0826 01:24:12.014351 43571 round_trippers.go:473] User-Agent: kubectl/v1.24.0 (darwin/amd64) kubernetes/4ce5a89
I0826 01:24:12.014358 43571 round_trippers.go:473] Accept: application/json, */*
I0826 01:24:12.443152 43571 round_trippers.go:574] Response Status: 200 OK in 428 milliseconds
in the console (docker container):
I0826 08:25:47.066787 19 loader.go:375] Config loaded from file: /root/.kube/config
I0826 08:25:47.067505 19 round_trippers.go:421] GET https://<current-context-k8s-dns-name>/api?timeout=32s
I0826 08:25:47.067532 19 round_trippers.go:428] Request Headers:
I0826 08:25:47.067538 19 round_trippers.go:432] Accept: application/json, */*
I0826 08:25:47.067542 19 round_trippers.go:432] User-Agent: kubectl/v1.19.16 (linux/amd64) kubernetes/e37e4ab
I0826 08:26:17.047076 19 round_trippers.go:447] Response Status: in 30000 milliseconds
The ~/.kube/config was mounted correctly and the verbose log verified that it's loaded correctly, pointing to the right https endpoint. I also tried to ssh (by IP) to one of the master nodes from both the container and the laptop: it worked from the laptop, but the same ssh command timed out from the container.
nslookup <current-context-k8s-dns-name> from both container and laptop gave slightly different output.
from my laptop(host):
nslookup <current-context-k8s-dns-name>
Server: 10.253.0.2
Address: 10.253.0.2#53
Non-authoritative answer:
Name: <current-context-k8s-dns-name>
Address: 172.20.50.40
Name: <current-context-k8s-dns-name>
Address: 172.20.50.41
Name: <current-context-k8s-dns-name>
Address: 172.20.50.42
from the container:
nslookup <current-context-k8s-dns-name>
Server: 192.168.65.5
Address: 192.168.65.5:53
Non-authoritative answer:
Name: <current-context-k8s-dns-name>
Address: 172.20.50.40
Name: <current-context-k8s-dns-name>
Address: 172.20.50.41
Name: <current-context-k8s-dns-name>
Address: 172.20.50.42
I have a feeling that this has something to do with docker network but I don't know enough to solve this. I'll be deeply grateful if anyone can help explain this to me.
Thanks in advance
This turned out to be a Docker networking issue.
I ran tcpdump + telnet on both the host and the container to compare the output, and it seems the packets were not even being routed to the host.
I ended up doing a docker network prune and the issue was resolved. Nothing was wrong with the setup; it's a known issue for Docker on macOS.
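For anyone who lands here, the cleanup can be sketched like this (note that docker network prune deletes every custom network not currently used by a container, so review the list first):

```shell
# Inspect the networks Docker currently knows about
docker network ls

# Remove all unused custom networks (asks for confirmation)
docker network prune
```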

can't connect to MySQL with GCP VM Instance docker ERROR 2002 (HY000)

I have a VM instance booting Container-Optimized OS with the following startup script:
docker pull gcr.io/cloudsql-docker/gce-proxy:1.16
docker run -d \
-p 0.0.0.0:3306:3306 \
gcr.io/cloudsql-docker/gce-proxy:1.16 /cloud_sql_proxy \
-instances=<cloudsql-connection-name>=tcp:0.0.0.0:3306
When trying to connect to the db by running mysql -ppass -u root from the Cloud Shell, I get the following error:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
What does this mean? What should I do?
For context: I need this VM's MySQL proxy to connect Data Fusion.
Add this command-line option to connect via TCP (without -h, the mysql client tries the local Unix socket, which the proxy does not listen on):
-h 127.0.0.1
Example:
mysql -h 127.0.0.1 -ppass -u root
Note: You are specifying an older version of the Cloud SQL Auth Proxy container.
https://console.cloud.google.com/gcr/images/cloudsql-docker/GLOBAL/gce-proxy
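As a quick sanity check before touching the client, these commands (just a suggested verification, assuming you run them on the VM itself) confirm the proxy container is up and something is listening on TCP 3306:

```shell
# Is the proxy container running?
docker ps --filter "ancestor=gcr.io/cloudsql-docker/gce-proxy:1.16"

# Is anything accepting TCP connections on port 3306?
nc -zv 127.0.0.1 3306
```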

Portainer not loading properly

I have downloaded the Portainer image and created the container in the Docker manager node, by using the below command.
docker run -d -p 61010:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
But after some time the container exits. Also, when I access Portainer on the above port it just says Portainer is loading and nothing happens. Please find below the logs for Portainer:
2019/10/16 16:20:58 server: Reverse tunnelling enabled
2019/10/16 16:20:58 server: Fingerprint 43:68:57:37:e4:3f:f7:98:bd:52:13:39:c6:6d:24:c9
2019/10/16 16:20:58 server: Listening on 0.0.0.0:8000...
2019/10/16 16:20:58 Starting Portainer 1.22.1 on :9000
2019/10/16 16:20:58 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message:
starting tunnel management process]
2019/10/16 16:25:58 No administrator account was created after 5 min. Shutting down the Portainer
instance for security reasons.
2019/10/16 16:30:12 Templates already registered inside the database. Skipping template import.
2019/10/16 16:30:12 server: Reverse tunnelling enabled
2019/10/16 16:30:12 server: Fingerprint 43:68:57:37:e4:3f:f7:98:bd:52:13:39:c6:6d:24:c9
2019/10/16 16:30:12 server: Listening on 0.0.0.0:8000...
2019/10/16 16:30:12 Starting Portainer 1.22.1 on :9000
2019/10/16 16:30:12 [DEBUG] [chisel, monitoring] [check_interval_seconds: 10.000000] [message:
starting tunnel management process]
2019/10/16 16:35:12 No administrator account was created after 5 min. Shutting down the Portainer
instance for security reasons.
I am not sure whether Portainer is running on 61010. Also, do I need to install the Agent for this to work? Please help me resolve this.
Your logs already show why the container exits: Portainer shuts itself down if no administrator account is created within 5 minutes of startup. Restart it and create the admin account right away. Follow the docs and it should work:
Quick start: If you are running Linux, deploying Portainer is as simple as:
$ docker volume create portainer_data
$ docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Voilà, you can now use Portainer by accessing port 9000 on the server where Portainer is running.
Once you access localhost:9000 in the browser, you will be required to create an admin account; afterwards you will see the Portainer UI.
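If you keep missing the 5-minute window, Portainer also supports preseeding the admin password via the --admin-password flag (it takes a bcrypt hash). This is a sketch based on the Portainer docs, with "adminpassword" as a placeholder:

```shell
# Generate a bcrypt hash of the desired password (htpasswd ships with httpd)
docker run --rm httpd:2.4-alpine htpasswd -nbB admin adminpassword | cut -d ":" -f 2

# Start Portainer with the hash; single quotes keep any $ characters intact
docker run -d -p 61010:9000 -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer --admin-password='<hash-from-above>'
```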

Why does pushing to private, secured docker registry fail?

I would like to run a private, secure, authenticated Docker registry on a self-healing AWS ECS cluster. The cluster setup is done and works properly, but I struggled to get registry:latest running. The problem was that every time I pushed an image, pushing the blobs failed and went into a retry cycle until I got a timeout.
To make sure, that my ECS setup isn’t the blocker, I tried to setup everything locally using Docker4Mac 1.12.0-a.
First, the very basic setup works. I created my own version of the registry image, where I put my TLS certificate bundle and key as well as the necessary htpasswd file directly into the image (I know this is insecure; I only do it for testing purposes). So here is my Dockerfile:
FROM registry:latest
COPY htpasswd /etc/docker
COPY server_bundle.pem /etc/docker
COPY server_key.pem /etc/docker
server_bundle.pem has a wildcard certificate for my domain mydomain.com (CN=*.mydomain.com) as the first one, followed by the intermediate CA certificates, so clients should be happy. My htpasswd file was created using the recommended approach:
docker run --entrypoint htpasswd registry:2 -Bbn heyitsme mysupersecurepassword > htpasswd
I build my image:
docker build -t heyitsme/registry .
and afterwards I run a very basic version w/o TLS and authentication:
docker run --restart=always -p 5000:5000 heyitsme/registry
and I can actually pull, tag and re-push an image:
docker pull alpine
docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine
This works. Next I make TLS and basic auth work via environment variables:
docker run -h registry.mydomain.com --name registry --restart=always -p 5000:5000 \
-e REGISTRY_HTTP_HOST=http://registry.mydomain.com:5000 \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/etc/docker/server_bundle.pem \
-e REGISTRY_HTTP_TLS_KEY=/etc/docker/server_key.pem \
-e REGISTRY_AUTH=htpasswd \
-e REGISTRY_AUTH_HTPASSWD_REALM=Registry-Realm \
-e REGISTRY_AUTH_HTPASSWD_PATH=/etc/docker/htpasswd heyitsme/registry
For the time being I create an entry in /etc/hosts which says:
127.0.0.1 registry.mydomain.com
And then I login:
docker login registry.mydomain.com:5000
Username: heyitsme
Password: ***********
Login Succeeded
So now, let’s tag and push the image here:
docker tag alpine registry.mydomain.com:5000/alpine
docker push registry.mydomain.com:5000/alpine
The push refers to a repository [registry.mydomain.com:5000/alpine]
4fe15f8d0ae6: Retrying in 4 seconds
What happens is that the Docker client tries to push the blobs and fails. Then it retries and fails again until I get a timeout. So next I check whether the V2 API works properly:
curl -i -XGET https://registry.mydomain.com:5000/v2/
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Basic realm="Registry-Realm"
X-Content-Type-Options: nosniff
Date: Thu, 15 Sep 2016 10:06:04 GMT
Content-Length: 87
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
OK, as expected. So let's authenticate this time:
curl -i -XGET https://heyitsme:mysupersecretpassword@registry.mydomain.com:5000/v2/
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff
Date: Thu, 15 Sep 2016 10:06:16 GMT
{}%
Works. But pushing still fails.
The logs say:
time="2016-09-15T10:24:34Z" level=warning msg="error authorizing context: basic authentication challenge for realm \"Registry-Realm\": invalid authorization credential" go.version=go1.6.3 http.request.host="registry.mydomain.com:5000" http.request.id=6d2ec080-6824-4bf7-aac2-5af31db44877 http.request.method=GET http.request.remoteaddr="172.17.0.1:40878" http.request.uri="/v2/" http.request.useragent="docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))" instance.id=be3a8877-de64-4574-b47a-70ab036e7b79 version=v2.5.1
172.17.0.1 - - [15/Sep/2016:10:24:34 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))"
time="2016-09-15T10:24:34Z" level=info msg="response completed" go.version=go1.6.3 http.request.host="registry.mydomain.com:5000" http.request.id=8f81b455-d592-431d-b67d-0bc34155ddbf http.request.method=POST http.request.remoteaddr="172.17.0.1:40882" http.request.uri="/v2/alpine/blobs/uploads/" http.request.useragent="docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))" http.response.duration=30.515131ms http.response.status=202 http.response.written=0 instance.id=be3a8877-de64-4574-b47a-70ab036e7b79 version=v2.5.1
172.17.0.1 - - [15/Sep/2016:10:24:34 +0000] "POST /v2/alpine/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/1.12.0 go/go1.6.3 git-commit/8eab29e kernel/4.4.15-moby os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.0 \\(darwin\\))"
2016/09/15 10:24:34 http: TLS handshake error from 172.17.0.1:40886: tls: first record does not look like a TLS handshake
I also tested different versions of the original registry image, especially several versions above 2. All yield the same error. If someone could help me on that issue, that would be awesome.
Solved:
-e REGISTRY_HTTP_HOST=https://registry.mydomain.com:5000 \
as an environment variable did the trick. The prior use of http instead of https made the connection fail.
Thanks a lot, it's working now!
I added the variable in my kubernetes deployment manifest:
(...)
- env:
  - name: REGISTRY_STORAGE_DELETE_ENABLED
    value: "true"
  - name: REGISTRY_HTTP_HOST
    value: "https://registry.k8sprod.acme.domain:443"
(...)
For those who use an nginx ingress in front: you have to set extra attributes in your ingress manifest file if you get push-size errors:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-registry
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "8192m"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "10"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
  labels:
    k8s-app: kube-registry
    app: kube-registry
spec:
  rules:
  - host: registry.k8sprod.acme.domain
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-kube-registry
          servicePort: 5000

Docker neo4j container just hangs

Pretty straightforward:
christian@christian:~/development$ docker -v
Docker version 1.6.2, build 7c8fca2
I ran this to start the container:
docker run --detach --name neo4j --publish 7474:7474 \
--volume $HOME/neo4j/data:/data neo4j
Nothing exciting here; this should all just work.
But, http://localhost:7474 doesn't respond. When I jump into the container, it seems to respond just fine (see debug session). What did I miss?
christian@christian:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2d9e0d5d2f73 neo4j:latest "/docker-entrypoint. 15 minutes ago Up 15 minutes 7473/tcp, 0.0.0.0:7474->7474/tcp neo4j
christian@christian:~$ curl http://localhost:7474
^C
christian@christian:~$ time curl http://localhost:7474
^C
real 0m33.353s
user 0m0.008s
sys 0m0.000s
christian@christian:~$ docker exec -it 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2 /bin/bash
root@2d9e0d5d2f73:/var/lib/neo4j# curl http://localhost:7474
{
"management" : "http://localhost:7474/db/manage/",
"data" : "http://localhost:7474/db/data/"
}root@2d9e0d5d2f73:/var/lib/neo4j# exit
christian@christian:~$ docker logs 2d9e0d5d2f7389ed8b7c91d923af4a664471a93f805deb491b20fe14d389a3d2
Starting Neo4j Server console-mode...
/var/lib/neo4j/data/log was missing, recreating...
2016-03-07 17:37:22.878+0000 INFO No SSL certificate found, generating a self-signed certificate..
2016-03-07 17:37:25.276+0000 INFO Successfully started database
2016-03-07 17:37:25.302+0000 INFO Starting HTTP on port 7474 (4 threads available)
2016-03-07 17:37:25.462+0000 INFO Enabling HTTPS on port 7473
2016-03-07 17:37:25.531+0000 INFO Mounting static content at /webadmin
2016-03-07 17:37:25.579+0000 INFO Mounting static content at /browser
2016-03-07 17:37:26.384+0000 INFO Remote interface ready and available at http://0.0.0.0:7474/
I can't reproduce this. Docker 1.8.2 and 1.10.0 are OK with your case:
docker run --detach --name neo4j --publish 7474:7474 neo4j
curl -i 127.0.0.1:7474
HTTP/1.1 200 OK
Date: Tue, 08 Mar 2016 16:45:46 GMT
Content-Type: application/json; charset=UTF-8
Access-Control-Allow-Origin: *
Content-Length: 100
Server: Jetty(9.2.4.v20141103)
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
Try upgrading Docker and check the netfilter rules for forwarding.
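To inspect those rules (assuming the default iptables-based setup; run as root), something like:

```shell
# NAT rules Docker created for published ports (should show 7474)
iptables -t nat -L DOCKER -n

# The FORWARD chain must accept traffic destined for the container
iptables -L FORWARD -n --line-numbers
```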
Instead of making the request to localhost you'll want to use the docker-machine VM IP address, which you can determine with this command:
docker-machine inspect default | grep IPAddress
or
curl -i http://$(docker-machine ip default):7474/
The default IP address is 192.168.99.100
OK, basically I removed the volume mount from the args to docker and it works. Ultimately, I don't want an out-of-container mount anyway. Thank you @LoadAverage for cluing me in. It's still not 'right', but for my purposes I don't care.
christian@christian:~/development$ docker run --detach --name neo4j --publish 7474:7474 neo4j
6c94527816057f8ca1e325c8f9fa7b441b4a5d26682f72d42ad17614d9251170
christian@christian:~/development$ curl http://127.0.0.1:7474
{
"management" : "http://127.0.0.1:7474/db/manage/",
"data" : "http://127.0.0.1:7474/db/data/"
}
christian@christian:~/development$
