I have set up a private docker registry with self-signed certificates.
docker run -d -p 443:5000 --restart=always --name registry \
  -v `pwd`/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
domain.crt and domain.key are generated using OpenSSL.
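For reference, a pair like this can be generated along these lines (the subject name is a placeholder for your registry's DNS name; -addext needs OpenSSL 1.1.1+):

openssl req -newkey rsa:4096 -nodes -sha256 \
  -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt \
  -subj "/CN=mydockerregistry.com" \
  -addext "subjectAltName = DNS:mydockerregistry.com"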
To connect from a remote host:
cp domain.crt /etc/pki/ca-trust/source/anchors/mydockerregistry.com.crt
update-ca-trust
systemctl daemon-reload
systemctl restart docker
After this, I am able to log in from the remote host:
docker login mydockerregistry.com --username=test
password: test
I am able to push/pull images to and from this registry successfully.
Next, I tried to deploy an image from this registry in the Kubernetes cluster. I created a secret for the registry with the username and password:
kubectl create secret docker-registry my-registry --docker-server=myregistryregistry.com --docker-username=test --docker-password=test --docker-email=abc.com
I also performed the self-signed certificate trust steps from the docker registry setup on the worker nodes:
cp domain.crt /etc/pki/ca-trust/source/anchors/mydockerregistry.com.crt
update-ca-trust
systemctl daemon-reload
systemctl restart docker
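As a sanity check, the certificate trust and the credentials can be verified from a worker node against the registry's v2 endpoint (same host and test/test credentials as above); an empty JSON body with HTTP 200 means both are fine:

curl -u test:test https://mydockerregistry.com/v2/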
I referenced the secret name in the imagePullSecrets section of the deployment.yaml file. I am trying to create a Pod in the Kubernetes cluster (Calico network), but it is unable to pull the image.
deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: test-image
  labels:
    app: test-image
    chart: test-image
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "mydockerregistry.com/test-image:latest"
    imagePullPolicy: Always
  imagePullSecrets:
  - name: my-registry
Warning  Failed  45s (x2 over 59s)  kubelet, kube-worker-02  Failed to pull image "mydockerregistry.com/test-image:latest": rpc error: code = Unknown desc = unauthorized: authentication required
Warning  Failed  45s (x2 over 59s)  kubelet, kube-worker-02  Error: ErrImagePull
I checked the docker registry logs:
time="2020-01-13T14:58:05.269921112Z" level=error msg="error
authenticating user "": authentication failure" go.version=go1.11.2
http.request.host=mydockerregistry.com
http.request.id=02fcccff-9a30-443c-8a00-48bcacb90e99
http.request.method=GET http.request.remoteaddr="10.76.112.148:35454"
http.request.uri="/v2/test-image/manifests/latest"
http.request.useragent="docker/1.13.1 go/go1.10.8
kernel/3.10.0-957.21.3.el7.x86_64 os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)" vars.name=test-image
vars.reference=latest
time="2020-01-13T14:58:05.269987492Z" level=warning msg="error
authorizing context: basic authentication challenge for realm
"Registry Realm": authentication failure" go.version=go1.11.2
http.request.host=mydockerregistry.com
http.request.id=02fcccff-9a30-443c-8a00-48bcacb90e99
http.request.method=GET http.request.remoteaddr="10.76.112.148:35454"
http.request.uri="/v2/ca-config-calc/manifests/latest"
http.request.useragent="docker/1.13.1 go/go1.10.8
kernel/3.10.0-957.21.3.el7.x86_64 os/linux arch/amd64
UpstreamClient(Go-http-client/1.1)" vars.name=test-image
vars.reference=latest
I am able to run docker login mydockerregistry.com and pull the image from the worker node directly.
Is there anything I am missing in the configuration?
You have a typo in the registry name in the create secret command.
kubectl create secret docker-registry my-registry --docker-server=myregistryregistry.com --docker-username=test --docker-password=test --docker-email=abc.com
Change myregistryregistry.com to mydockerregistry.com, which is what you used with docker login.
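The corrected command would be:

kubectl create secret docker-registry my-registry --docker-server=mydockerregistry.com --docker-username=test --docker-password=test --docker-email=abc.com

You can double-check which server the secret points at with:

kubectl get secret my-registry --output=jsonpath='{.data.\.dockerconfigjson}' | base64 --decode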
I've been able to successfully pull an image from a secure, private, docker registry into kubernetes using this link.
Related
I am running into a strange issue: docker pull works, but when using kubectl create or apply -f with a kind cluster, I get the error below:
Warning Failed 20m kubelet, kind-control-plane Failed to pull image "quay.io/airshipit/kubernetes-entrypoint:v1.0.0": rpc error: code = Unknown desc = failed to pull and unpack image "quay.io/airshipit/kubernetes-entrypoint:v1.0.0": failed to copy: httpReaderSeeker: failed open: failed to do request: Get https://d3uo42mtx6z2cr.cloudfront.net/sha256/b5/b554c0d094dd848c822804a164c7eb9cc3d41db5f2f5d2fd47aba54454d95daf?Expires=1587558576&Signature=Tt9R1O4K5zI6hFG9GYt-tLAWkwlQyLoAF0NDNouFnff2ywZnPlMSo2x2aopKcQJ5cAMYYTHvYBKm2Zwk8W80tE9cRet1PfP6CnAmo2lzsYzKnRRWbgQhgsyJK8AmAvKzw7iw6lbYdP91JjEiUcpfjMAj7dMPj97tpnEnnd72kljRew8VfgBhClblnhNFvfR9fs9lRS7wNFKrZ1WUSGpNEEJZjNcc9zBNIbOyKeDPfvIpdJ6OthQMJ3EKaFEFfVN6asiyz3lOgM2IMjJ0uBI2ChhCyDx7YHTdNZCOoYAEmw8zo5Ma0n8EQpX3EwU1qSR0IwoGNawF0qV6tFAZi5lpbQ__&Key-Pair-Id=APKAJ67PQLWGCSP66DGA: x509: certificate signed by unknown authority
Here is the ~/.kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EUXlNakV4TVRNd09Gb1hEVE13TURReU1ERXhNVE13T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDlvCkNiYlFYblBxbXpUV0hwdnl6ZXdPcWo5L0NCSmFLV1lrSEVCZzJHcXhjWnFhWG92aVpOdkt3NVZsQmJvTUlSOTMKVUxiWGFVeFl4MHJyQ3pWanNKU09lWDd5VjVpY3JTOXRZTkF1eHhPZzBMM1F3SElxUEFKWkY5b1JwWG55VnZMcwpIcVBDQ2ZRblhBYWRpM3VsM2J5bjcrbVFhcU5mV0NSQkZhRVJjcXF5cDltbzduRWZ2YktybVM0TUdIUHN3eUV0CkYxeXJjc041Vlo5QkM5TWlXZnhEY1dUL2R5SXIrUjFtL3hWYlU0aGNjdkowYi9CQVJ3aUhVajNHVFpnYUtmbGwKNUE5elJsVFRNMjV6c0t5WHVLOFhWcFJlSTVCNTNqUUo3VGRPM0lkc0NqelNrbnByaFI0YmNFcll5eVNhTWN6cgo4c1l0RHNWYmtWOE9rd0pFTnlNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHdEFyckYrKzdjOGZEN09RZWxHWldxSHpyazkKbFQ2MHhmZjBtTzRFdWI3YUxzZGdSTmZuSjB5UDRVelhIeXBkZEhERFhzUHFzVHZzZ2h6MXBPNFQrVTVCVmRqQQpGWjdxZW9iUWN2NkhnSERZSjhOdy9sTHFObGMyeUtPYVJSNTNsbjRuWERWYkROaTcyeEJTbUlNN0hhOFJQSVNFCmttTndHeHFKQVM3UmFOanN0SDRzbC9LR2xKcUowNFdRZnN0b1lkTUY4MERuc0prYlVuSkQyb29oOGVHTlQ5WGsKOTZPbGdoa05yZ09ybmFOR2hTZlQxYjlxdDJZOFpGUlRrKzhhZGNNczlHWW50RzZZTW1WRzVVZDh0L1phbVlRSwpIWlJ6WDRxM3NoY1p3NWRmR2JZUmRPelVTZkhBcE9scHFOQ1FmZGxyOWMyeDMxdkRpOW4vZE9RMHVNbz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:32768
  name: kind-kind
contexts:
- context:
    cluster: kind-kind
    user: kind-kind
  name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJSWNDdHVsWUhYaVl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBME1qSXhNVEV6TURoYUZ3MHlNVEEwTWpJeE1URXpNVEJhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTArZ0JKWHBpUncxK09WaGEKVjU0bG5IMndHTzRMK1hIZjBnUjFadU01MnUwUFV3THQ5cDNCd2d5WDVHODhncUFIMmh3K1c4U2lYUi9WUUM5MgpJd3J3cnc1bFlmcTRrWDZhWEcxdFZLRjFsU2JMUHd4Nk4vejFMczlrbnlRb2piMHdXZkZ2dUJrOUtCMjJuSVozCmdOUEZZVmNVcWwyM2s3ck5yL0xzdGZncEJoVTRaYWdzbCsyZG53Qll2MVh4Z1M1UGFuTGxUcFVYODIxZ3RzQ0QKbUN1aFFyQlQzdzZ0NXlqUU5MSGNrZ3M4Y1JXUkdxZFNnZGMrdGtYczkzNDdoSzRjazdHYUw0OHFBMTgzZzBXKwpNZEllcDR3TUxGbU9XTCtGS2Q5dC83bXpMbjJ5RWdsRXlvNjFpUWRmV2s1S2Q1c1BqQUtVZXlWVTIrTjVBSlBLCndwaGFyUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGNXp5d1hSaitiakdzSG1OdjgwRXNvcXBjOEpSdVY4YnpNUQoxV0dkeGl3Mzk3TXBKVHFEaUNsTlZoZjZOOVNhVmJ2UXp2dFJqWW5yNmIybi9HREdvZDdyYmxMUWJhL2NLN1hWCm1ubTNHTXlqSzliNmc0VGhFQjZwUGNTa25yckRReFFHL09tbXE3Ulg5dEVCd2RRMHpXRGdVOFU0R0t3a3NyRmgKMFBYNE5xVnAwdHcyaVRDeE9lU0FpRnBCQ0QzS3ZiRTNpYmdZbHNPUko5S0Y3Y00xVkpuU0YzUTNZeDNsR3oxNgptTm9JanVHNWp2a3NDejc3TlFIL3Ztd2dXRXJLTndCZ0NDeEVQY1BjNFRZREU1SzBrUTY1aXc1MzR6bHZuaW5JCjZRTGYvME9QaHRtdC9FUFhRSU5PS0dKWEpkVFo1ZU9JOStsN0lMcGROREtkZjlGU3pNND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBMCtnQkpYcGlSdzErT1ZoYVY1NGxuSDJ3R080TCtYSGYwZ1IxWnVNNTJ1MFBVd0x0CjlwM0J3Z3lYNUc4OGdxQUgyaHcrVzhTaVhSL1ZRQzkySXdyd3J3NWxZZnE0a1g2YVhHMXRWS0YxbFNiTFB3eDYKTi96MUxzOWtueVFvamIwd1dmRnZ1Qms5S0IyMm5JWjNnTlBGWVZjVXFsMjNrN3JOci9Mc3RmZ3BCaFU0WmFncwpsKzJkbndCWXYxWHhnUzVQYW5MbFRwVVg4MjFndHNDRG1DdWhRckJUM3c2dDV5alFOTEhja2dzOGNSV1JHcWRTCmdkYyt0a1hzOTM0N2hLNGNrN0dhTDQ4cUExODNnMFcrTWRJZXA0d01MRm1PV0wrRktkOXQvN216TG4yeUVnbEUKeW82MWlRZGZXazVLZDVzUGpBS1VleVZVMitONUFKUEt3cGhhclFJREFRQUJBb0lCQUZzYWsrT1pDa2VoOVhLUwpHY1V4cU5udTc1YklRVDJ0UjV6emJjWWVTdkZrbWdJR2NHaG15cmF5MDFyU3VDRXd6QzlwbFNXL0ZFOFZNSW0zCjNnS1M0WWRobVJUV3hpTkhXdllCMWM5YzIwQ1V2UzBPSUQyUjg1ZDhjclk0eFhhcXIrNzdiaHlvUFRMU0U0Q1kKRHlqRDQwaEdPQXhHM25ZVkNmbHJaM21VaDQ2bEo4YlROcXB5UzFCcVdNZnZwekt1ZDB6TElmMWtTTW9Cbm1XeQo0RzBrNC9qWVdEOWNwdGtSTGxvZXp5WVlCMTRyOVdNQjRENkQ5eE84anhLL0FlOEQraTl2WCtCaUdGOURSYllJCmVVQmRTQzE2QnQybW5XWGhXMmhSRFFqRmR2dzJIQ0gxT0ppcVZuWUlwbGJEcjFYVXI1NzFYWTZQMFJlQ0JRc3kKOUZpMG44RUNnWUVBMUQ3Nmlobm5YaEZyWFE2WkVEWnp3ZGlwcE5mbHExMGJxV0V5WUVVZmpPd2p3ZnJ4bzVEYgppUmoySm5Fei96bDhpVDFEbmh3bFdWZlBNbWo3bUdFMVYwMkFWSkJoT20vdU1tZnhYVmcvWGwxOVEzODdJT0tpCjBKSmdabGZqVjEyUGdRU3NnbnRrckdJa3dPcisrOUFaL3R0UVVkVlU0bFgxWWRuazZ5T1V6YWNDZ1lFQS81Y1kKcHJxMVhuNGZBTUgxMzZ2dVhDK2pVaDhkUk9xS1Zia2ZtWUw0dkI0dG9jL2N1c1JHaGZPaTZmdEZCSngwcDhpSgpDU1ZCdzIxbmNHeGRobDkwNkVjZml2ZG0vTXJlSmlyQmFlMlRRVWdsMjh1cmU3MWJEdXpjbWMrQVRQa1VXVDgyCmJpaDM5b3A1SEo5N2NlU3JVYU5zRTgxaEdIaXNSSzJEL2pCTjU0c0NnWUVBcUExeHJMVlQ5NnlOT1BKZENYUkQKOWFHS3VTWGxDT2xCQkwwYitSUGlKbCsyOUZtd3lGVGpMc3RmNHhKUkhHMjFDS2xFaDhVN1lXRmdna2FUcDVTWQplcGEzM0wwdzd1Yy9VQlB6RFhqWk8rdUVTbFJNU2Y2SThlSmtoOFJoRW9UWElrM0VGZENENXVZU3VkbVhxV1NkCm9LaWdFUnQ4Q1hZTVE3MFdQNFE5eHhNQ2dZQnBkVTJ0bGNJNkQrMzQ0UTd6VUR5VWV1OTNkZkVjdTIyQ3UxU24KZ1p2aCtzMjNRMDMvSGZjL1UreTNnSDdVelQxdzhWUmhtcWJNM1BwZUw4aFRKbFhWZFdzMWFxbHF5c1hvbDZHZwpkRzlhODByenF0REJ5THFtcU9MSThBNHZOR0xLQkVRUUpkQ0J3RmNDa1dkYzhnNGlMRHp1MnNJaVY4QTB3aWVCCkhTczN5d0tCZ1FDeXl2Tk45enk5S3dNOW1nMW5GMlh3WUVzMzB4bmsrNXJmTGdRMzQvVm1sSVA5Y1cyWS9oTWQKWnVlNWd4dnlYREcrZW9GU3Njc281TmcwLytWUDI0Sjk0cGJIcFJWV3FIWENvK2gxZjBnKzdET2p0dWp2aGVBRwpSb240NmF1clJRSG5HUStxeldWcWtpS2l1dDBybFpHd2ZzUGs4eWptVjcrWVJuamxES1hUWUE9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
I ran into a similar issue (I think) on OpenShift - I could pull images, but I couldn't push or get k8s to pull them. To resolve it, I had to update the docker config at /etc/sysconfig/docker and add the registry as an insecure registry. For OpenShift, the default route was required.
OPTIONS=' <some existing config stuff here> --insecure-registry=<fqdn-of-your-registry>'
Then systemctl restart docker to have the changes take effect.
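On distributions that configure the daemon through /etc/docker/daemon.json rather than /etc/sysconfig/docker, the equivalent setting (same placeholder registry name) should be:

{
  "insecure-registries": ["<fqdn-of-your-registry>"]
}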
You might also need to create a docker pull secret with your credentials in kubernetes to allow it to access the registry. Details here
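A sketch of that pull secret (secret name and credentials are placeholders):

kubectl create secret docker-registry regcred \
  --docker-server=<fqdn-of-your-registry> \
  --docker-username=<user> \
  --docker-password=<password>

then reference it under imagePullSecrets in the pod or deployment spec.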
I've installed Watchtower and followed the docs for setting up private registry auth with auth helpers. In debug mode, I see it logging a message that an auth value was obtained, but then it fails to pull the image with "no basic auth credentials". Inspecting the auth value, it is just the host name from my config, with no credential. I verified that on the host system (Raspbian) I am able to pull the new version using the same docker config, without any custom auth; everything works out of the box, using the same binary.
Here's my docker config:
{
  "auths": {
    "0000000000.dkr.ecr.us-east-1.amazonaws.com": {}
  },
  "credHelpers": {
    "0000000000.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  }
}
Here's my docker compose:
version: "3"
services:
cavo:
image: 0000000000.dkr.ecr.us-east-1.amazonaws.com/test:1
ports:
- "8080:80"
restart: always
watchtower:
image: containrrr/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/.docker/config.json:/config.json
- /usr/bin/docker-credential-ecr-login:/bin/docker-credential-ecr-login
environment:
- AWS_REGION=us-east-1
- AWS_ACCESS_KEY_ID=AAAAAAAAAAAAA
- AWS_SECRET_ACCESS_KEY=aaaaaaaaaaaaaaa
command: --debug --interval 30
restart: always
And when watchtower attempts to check for a new image, here is the log:
watchtower_1 | time="2019-12-25T22:49:34Z" level=debug msg="Pulling 0000000000.dkr.ecr.us-east-1.amazonaws.com/test:1 for /root_test_1"
watchtower_1 | time="2019-12-25T22:49:34Z" level=debug msg="Loaded auth credentials { 0000000000.dkr.ecr.us-east-1.amazonaws.com } from /config.json"
watchtower_1 | time="2019-12-25T22:49:34Z" level=debug msg="Got auth value: eyJzZXJ2ZXJhZGRyZXNzIjoiMDAwMDAwMDAwMC5ka3IuZWNyLnVzLWVhc3QtMS5hbWF6b25hd3MuY29tIn0="
watchtower_1 | time="2019-12-25T22:49:34Z" level=debug msg="Got image name: 0000000000.dkr.ecr.us-east-1.amazonaws.com/test:1"
watchtower_1 | time="2019-12-25T22:49:35Z" level=debug msg="Error pulling image 0000000000.dkr.ecr.us-east-1.amazonaws.com/sump-pump-v2:1, Error response from daemon: Get https://0000000000.dkr.ecr.us-east-1.amazonaws.com/v2/test/manifests/1: no basic auth credentials"
watchtower_1 | time="2019-12-25T22:49:35Z" level=info msg="Unable to update container /root_test_1. Proceeding to next."
watchtower_1 | time="2019-12-25T22:49:35Z" level=debug msg="Error response from daemon: Get https://0000000000.dkr.ecr.us-east-1.amazonaws.com/v2/test/manifests/1: no basic auth credentials"
Unpacking the auth value, it just has the hostname. No repository credential.
I was trying to follow "Credential helpers" documentation, but I'm not sure I understand where the aforementioned Dockerfile belongs.
Any pointers in the right direction would be appreciated. Thanks!
Try the following:
Create a docker volume named helper
docker volume create helper
Build the image from the Dockerfile in the docs
docker build -t aws-ecr-dock-cred-helper .
Run the container
docker run -d --rm --name aws-cred-helper --volume helper:/go/bin aws-ecr-dock-cred-helper
The container will start, populate the helper volume mounted at /go/bin with the docker-credential-ecr-login binary, and then stop.
You can check the content of the helper volume with
docker run --rm -it -v helper:/go/bin alpine
then run
ls /go/bin
and you should see the docker-credential-ecr-login binary.
I didn't use docker compose, but you have to mount the helper volume into the watchtower container at /go/bin and extend your $PATH with /go/bin, like in the docs:
environment:
  - HOME=/
  - PATH=$PATH:/go/bin
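Put together with the compose file from the question, the watchtower service might look roughly like this (a sketch: helper is the volume created above, and the env values are placeholders):

version: "3"
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /root/.docker/config.json:/config.json
      - helper:/go/bin
    environment:
      - HOME=/
      - PATH=$PATH:/go/bin
      - AWS_REGION=us-east-1
    command: --debug --interval 30
    restart: always
volumes:
  helper:
    external: true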
I'm trying to set up a local registry by following https://docs.docker.com/registry/deploying/.
docker run -d -p 5000:5000 --restart=always --name reg ubuntu:16.04
When I try to run the following command:
$ docker push localhost:5000/my-ubuntu
I get the error:
Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Any idea?
Connection refused usually means that the service you are trying to connect to isn't actually up and running as it should be. There could be other reasons, as outlined in this question, but essentially, for your case, it means that the registry is not up: the command above started ubuntu:16.04, which is not a registry image at all.
Wait for the registry container to be created properly before you do anything else: docker run -d -p 5000:5000 --restart=always --name registry registry:2 creates a local registry from the official docker image.
Make sure that the registry container is up by running docker ps | grep registry, and then proceed further.
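Once it is up, the registry answers on its API root; an empty JSON body confirms it is reachable before you retry the push:

$ curl http://localhost:5000/v2/
{}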
More comments about Kubernetes (K8s) / Minikube and docker images, registries, and containers.
If you are using Minikube and want to pull down an image from 127.0.0.1:5000, you may meet the error below:
Failed to pull image "127.0.0.1:5000/nginx_operator:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Full logs:
$ kubectl describe pod/your_pod
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m29s default-scheduler Successfully assigned tj-blue-whale-05-system/tj-blue-whale-05-controller-manager-6c8f564575-kwxdv to minikube
Normal Pulled 2m25s kubelet Container image "gcr.io/kubebuilder/kube-rbac-proxy:v0.5.0" already present on machine
Normal Created 2m24s kubelet Created container kube-rbac-proxy
Normal Started 2m23s kubelet Started container kube-rbac-proxy
Normal BackOff 62s (x5 over 2m22s) kubelet Back-off pulling image "127.0.0.1:5000/nginx_operator:latest"
Warning Failed 62s (x5 over 2m22s) kubelet Error: ImagePullBackOff
Normal Pulling 48s (x4 over 2m23s) kubelet Pulling image "127.0.0.1:5000/nginx_operator:latest"
Warning Failed 48s (x4 over 2m23s) kubelet Failed to pull image "127.0.0.1:5000/nginx_operator:latest": rpc error: code = Unknown desc = Error response from daemon: Get http://127.0.0.1:5000/v2/: dial tcp 127.0.0.1:5000: connect: connection refused
Warning Failed 48s (x4 over 2m23s) kubelet Error: ErrImagePull
Possible root cause:
The registry must be set up inside Minikube, not on the host side.
i.e.
host: registry (127.0.0.1:5000)
minikube: no registry (so K8s could not find your image)
How to check?
Step 1: check your Minikube container
$ docker ps -a
CONTAINER ID IMAGE ... STATUS PORTS NAMES
8c6f49491dd6 gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 ... Up 15 hours 127.0.0.1:49156->22/tcp, 127.0.0.1:49155->2376/tcp, 127.0.0.1:49154->5000/tcp, 127.0.0.1:49153->8443/tcp minikube
# your Minikube is up and running
# host:49154 <--> minikube:5000
# where:
# - port 49154 was allocated randomly by the docker service
# - port 22: for ssh
# - port 2376: for docker service
# - port 5000: for registry (image repository)
# - port 8443: for Kubernetes
Step 2: log in to your Minikube
$ minikube ssh
docker@minikube:~$ curl 127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
# setup
# =====
# You did not setup the registry.
# Let's try to setup it.
docker@minikube:~$ docker run --restart=always -d -p 5000:5000 --name registry registry:2
# test
# ====
# test the registry using the following commands
docker@minikube:~$ curl 127.0.0.1:5000
docker@minikube:~$ curl 127.0.0.1:5000/v2
Moved Permanently.
docker@minikube:~$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":[]}
# it's successful
docker@minikube:~$ exit
logout
Step 3: build your image and push it into the registry of your Minikube
# Let's take nginx as an example. (You can build your own image)
$ docker pull nginx
# modify the repository (the source and the name)
$ docker tag nginx 127.0.0.1:49154/nginx_operator
# check the new repository (source and the name)
$ docker images | grep nginx
REPOSITORY TAG IMAGE ID CREATED SIZE
127.0.0.1:49154/nginx_operator latest ae2feff98a0c 3 weeks ago 133MB
# push the image into the registry of your Minikube
$ docker push 127.0.0.1:49154/nginx_operator
Step 4: log in to your Minikube again
$ minikube ssh
# check the registry
$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":["nginx_operator"]}
# it's successful
# get the image info
$ curl 127.0.0.1:5000/v2/nginx_operator/manifests/latest
docker@minikube:~$ exit
logout
Customize exposed ports of Minikube
if you would like to use port 5000 on the host side instead of 49154 (which was allocated randomly by the docker service),
i.e.
host:5000 <--> minikube:5000
you need to recreate a minikube instance with the flag --ports
# delete the old minikube instance
$ minikube delete
# create a new one (with the docker driver)
$ minikube start --ports=5000:5000 --driver=docker
# or
$ minikube start --ports=127.0.0.1:5000:5000 --driver=docker
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d1e5b61a3bf gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4 "/usr/local/bin/entr…" About a minute ago Up About a minute 0.0.0.0:5000->5000/tcp, 127.0.0.1:49162->22/tcp, 127.0.0.1:49161->2376/tcp, 127.0.0.1:49160->5000/tcp, 127.0.0.1:49159->8443/tcp minikube
$ docker port minikube
22/tcp -> 127.0.0.1:49162
2376/tcp -> 127.0.0.1:49161
5000/tcp -> 127.0.0.1:49160
5000/tcp -> 0.0.0.0:5000
8443/tcp -> 127.0.0.1:49159
you can see: 0.0.0.0:5000->5000/tcp
Re-test your registry in the Minikube
# in the host side
$ docker pull nginx
$ docker tag nginx 127.0.0.1:5000/nginx_operator
$ docker ps -a
$ docker push 127.0.0.1:5000/nginx_operator
$ minikube ssh
docker@minikube:~$ curl 127.0.0.1:5000/v2/_catalog
{"repositories":["nginx_operator"]}
# Great!
Using minikube and docker on my local Ubuntu workstation I get the following error in the Minikube web UI:
Failed to pull image "localhost:5000/samples/myserver:snapshot-180717-213718-0199": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
after I have created the below deployment config with:
kubectl apply -f hello-world-deployment.yaml
hello-world-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/samples/myserver:snapshot-180717-213718-0199
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
And output from docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
samples/myserver latest aa0a1388cd88 About an hour ago 435MB
samples/myserver snapshot-180717-213718-0199 aa0a1388cd88 About an hour ago 435MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 months ago 97MB
Based on this guide:
How to use local docker images with Minikube?
I have also run:
eval $(minikube docker-env)
and based on this:
https://github.com/docker/for-win/issues/624
I have added:
"InsecureRegistry": [
"localhost:5000",
"127.0.0.1:5000"
],
to /etc/docker/daemon.json
Any suggestion on what I am missing to get the image pull to work in minikube?
I have followed the steps in the below answer but when I get to this step:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
it just hangs like this:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
and I get the same error in minikube dashboard after I create my deploymentconfig.
Based on answer from BMitch I have now tried to create a local docker repository and push an image to it with:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu:v1
$ docker push localhost:5000/ubuntu:v1
Next when I do docker images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 74f8760a2a8b 4 days ago 82.4MB
localhost:5000/ubuntu v1 74f8760a2a8b 4 days ago 82.4MB
I have then updated my deploymentconfig hello-world-deployment.yaml to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/ubuntu:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
and
kubectl create -f hello-world-deployment.yaml
But in Minikube I still get a similar error:
Failed to pull image "localhost:5000/ubuntu:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
So it seems Minikube is not able to see the local registry I just created?
It looks like you’re facing a problem with localhost on your computer and localhost used within the context of minikube VM.
To have registry working, you have to set an additional port forwarding.
If your minikube installation is currently broken due to a lot of attempts to fix registry problems,
I would suggest resetting the minikube environment:
minikube stop && minikube delete && rm -fr $HOME/.minikube && minikube start
Next, get the kube registry yaml file:
curl -O https://gist.githubusercontent.com/coco98/b750b3debc6d517308596c248daf3bb1/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml
Then, apply it on minikube:
kubectl create -f kube-registry.yaml
Test if the registry inside the minikube VM works (run curl from within the VM):
minikube ssh
curl localhost:5000
On Ubuntu, forward ports to reach registry at port 5000:
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
If you would like to share your private registry from your machine, you may be interested in sharing local registry for minikube blog entry.
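With that port-forward running, a push from the host goes through the tunnel into the in-cluster registry, e.g. (image name is a placeholder):

docker tag samples/myserver localhost:5000/samples/myserver
docker push localhost:5000/samples/myserver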
If you're specifying the image source as the local registry server, you'll need to run a registry server there, and push your images to it.
You can self-host a registry server with multiple 3rd-party options, or run the one that is packaged inside a docker container: https://hub.docker.com/_/registry/
This only works on a single-node environment unless you set up TLS keys and trust the CA, or tell all other nodes about the additional insecure registry.
You can also specify the imagePullPolicy as Never.
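For the deployment in the question, the Never variant would look like this (only the relevant container fields shown); it makes the kubelet use only images already present in the node's docker daemon, which pairs with eval $(minikube docker-env) and a local docker build:

containers:
- name: hello-world
  image: localhost:5000/ubuntu:v1
  imagePullPolicy: Never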
Both of these solutions were already in your linked question and I'm not seeing any evidence of you trying either in this question. Without showing how you tried those steps and experienced a different problem, this question should probably be closed as a duplicate.
It is unclear from your question how many nodes you have.
If you have more than one, your problem is in your deployment with replicas: 1.
If not, please ignore this answer.
You don't know where that one replica will land. So if you don't have the local docker registry on all of your nodes, and you get unlucky and Kubernetes schedules the pod to a node without it, you will end up with that error.
The same thing happened to me: the same connection refused error, because the deployment went to a node without the local docker registry.
As I am typing this, I think this can be resolved with an ingress: run the registry as a deployment, add a service and a volume for the images, and expose it through the ingress. A little more work, but at least all your nodes (all of your pods, sorry) will be in sync.
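A quick workaround in the meantime, under the assumption above that only some nodes can reach the image, is to pin the pod to the node that actually has it (the node name is a placeholder):

spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: node-with-registry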
I'm trying (for test purposes) to expose to kubernetes a very simple HTTP pong image:
FROM golang:onbuild
EXPOSE 8000
I built the docker image:
docker build -t pong .
I started a private registry (with certificates):
docker run -d --restart=always --name registry -v `pwd`/certs:/certs -e REGISTRY_HTTP_ADDR=0.0.0.0:443 -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key -p 443:443 registry:2.6.2
I created a secret:
kubectl create secret docker-registry regsecret --docker-server=localhost --docker-username=johndoe --docker-password=johndoe --docker-email=johndoe@yopmail.com
I uploaded the image:
docker tag 9c0bb659fea1 localhost/pong
docker push localhost/pong
I had an insecure registry configuration:

{
  "storage-driver": "aufs",
  "insecure-registries": [
    "localhost"
  ],
  "debug": true,
  "experimental": true
}
So I tried to create my kubernetes pods with:
apiVersion: v1
kind: Pod
metadata:
  name: pong
spec:
  containers:
  - name: pong
    image: localhost/pong:latest
    imagePullPolicy: Always
  imagePullSecrets:
  - name: regsecret
I'm on MacOS with docker Version 17.12.0-ce-mac49 (21995).
If I use image: localhost/pong:latest, I get:

waiting:
  message: 'rpc error: code = Unknown desc = Error response from daemon: error
    parsing HTTP 404 response body: invalid character ''d'' looking for beginning
    of value: "default backend - 404"'
  reason: ErrImagePull

I've been stuck on this since the beginning of the week, without success.
It was not a problem with the registry configuration.
I forgot to mention that I used minikube.
For the flags to be taken into account, I had to delete the minikube configuration and recreate it:
minikube delete
minikube start --insecure-registry="10.0.4.0/24"
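To confirm the flag took effect, docker info inside the VM lists the insecure registries the daemon accepts:

minikube ssh
docker info | grep -A 3 "Insecure Registries"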
Hey, try to browse your registry using this nice front-end app: https://hub.docker.com/r/konradkleine/docker-registry-frontend/
Perhaps this will give you some hints; it looks like the registry has a configuration issue...
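A typical way to run it (the ENV_DOCKER_REGISTRY_* variables are the ones documented on that Docker Hub page; host and port values are placeholders):

docker run -d -p 8080:80 \
  -e ENV_DOCKER_REGISTRY_HOST=your-registry-host \
  -e ENV_DOCKER_REGISTRY_PORT=5000 \
  konradkleine/docker-registry-frontend:v2

Then open http://localhost:8080 to browse repositories and tags.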
Instead of deleting the cluster first (minikube delete), the configuration JSON may be edited at ~/.minikube/config/config.json to add this section accordingly:

{
    ...
    "HostOptions": {
        ...
        "InsecureRegistry": [
            "private.docker.registry:5000"
        ],
        ...
    },
    ...
}
This only works on started clusters, as the configuration file won't be populated otherwise. The answer above using minikube start --insecure-registry=... is fine as well.