I have a private registry that is accessed over the HTTPS protocol.
But Kubernetes + Docker always tries to use the HTTP protocol, http://myserver.com:8080, instead of https://myserver.com:8080.
How can I force the HTTPS protocol?
A snippet of my YAML file that declares a Pod:
containers:
- name: apl
  image: myserver.com:8080/myimage
Details of my environment:
CentOS 7.3
Docker 18.06
Kubernetes (Minikube) 1.13.1
Error message in Kubernetes logs:
Normal Pulling 30s (x4 over 2m2s) kubelet, minikube pulling image "docker.mydomain.com:30500/vision-ssh"
Warning Failed 30s (x4 over 2m2s) kubelet, minikube Failed to pull image "docker.mydomain.com:30500/vision-ssh": rpc error: code = Unknown desc = Error response from daemon: Get http://docker.mydomain.com:30500/v2/: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
Warning Failed 30s (x4 over 2m2s) kubelet, minikube Error: ErrImagePull
Warning Failed 19s (x6 over 2m2s) kubelet, minikube Error: ImagePullBackOff
Normal BackOff 4s (x7 over 2m2s) kubelet, minikube Back-off pulling image "docker.mydomain.com:30500/vision-ssh"
If I try to specify the protocol in the name of the image, it complains:
couldn't parse image reference "https://docker.mydomain.com:30500/vision-ssh": invalid reference format
I followed this guide to create the image registry. It is already secured (HTTPS protocol, protected by user/password).
In the /etc/hosts file, the server docker.mydomain.com is mapped to 127.0.0.1. I've read in the Docker docs that local registries are always considered insecure.
If I use a name that is mapped to the external IP, then Docker tries HTTPS.
Your private Docker registry might not be treated as secure. If the registry is considered a secure private registry, Docker always uses HTTPS; otherwise it falls back to HTTP.
For more details, refer to the docs:
Docker uses the https:// protocol to communicate with a registry, unless the registry is allowed to be accessed over an insecure connection. Refer to the insecure registries section for more information.
https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries
So, to force HTTPS, secure your registry. There are many articles available online about how to secure a registry.
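As a quick client-side check (a sketch, assuming Docker is the container runtime on the node): Docker is only allowed to fall back to plain HTTP for registries listed under insecure-registries in /etc/docker/daemon.json, and for registries that resolve to 127.0.0.0/8, so make sure an entry like the following is not present for your registry:
{
  "insecure-registries": ["docker.mydomain.com:30500"]
}
After removing such an entry, restart the daemon (for example with sudo systemctl restart docker) so the change takes effect.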
Run an HTTPS proxy service fronting the container registry service. Look at nginx as an HTTPS proxy.
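A minimal nginx sketch of that approach (the hostname, port, and certificate paths below are placeholders, not taken from your environment): nginx terminates TLS and proxies registry requests to a backend registry listening on plain HTTP.
server {
    listen 443 ssl;
    server_name docker.mydomain.com;

    ssl_certificate     /etc/nginx/certs/registry.crt;
    ssl_certificate_key /etc/nginx/certs/registry.key;

    # registry clients push large image layers
    client_max_body_size 0;

    location /v2/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
Clients then always speak HTTPS to the proxy, and the plain-HTTP registry is never exposed directly.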
Related
I installed vanilla k8s on Rocky Linux 8.6 together with Docker.
I created /etc/docker/daemon.json:
{
  "insecure-registries": ["rocky-master.mfr.org:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2"
}
But my pod on worker1 says:
Warning Failed 5m26s (x4 over 6m48s) kubelet Failed to pull image "rocky-master.mfr.org:5000/sametime-init:20220712-1935": rpc error: code = Unknown desc = failed to pull and unpack image "rocky-master.mfr.org:5000/sametime-init:20220712-1935": failed to resolve reference "rocky-master.mfr.org:5000/sametime-init:20220712-1935": failed to do request: Head "https://rocky-master.mfr.org:5000/v2/sametime-init/manifests/20220712-1935": http: server gave HTTP response to HTTPS client
Any idea?
I have a local Kubernetes cluster (minikube) that is trying to load images from my local Docker repo.
When I do a "docker images", I get:
cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
Given that I know the above images are there, I run some Helm commands that use these images, but I get the error below:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
Does anyone know how I can fix this? The biggest problem seems to be: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Since minikube is being used, you can refer to their documentation.
It is recommended that, if an imagePullPolicy is being used, it be set to Never (or IfNotPresent). If it is set to Always, Kubernetes will try to reach out and pull the image from the network.
From docs: https://minikube.sigs.k8s.io/docs/handbook/pushing/
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."
Add cluster.local to the /etc/hosts file on all your Kubernetes nodes.
192.168.12.34 cluster.local
Check whether you can log in to the registry using docker login cluster.local.
If your registry uses self-signed certificates, copy the cluster.local.crt certificate to all Kubernetes worker nodes as /etc/docker/certs.d/cluster.local/ca.crt.
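Put together, those steps could look like this on each node (the IP address and certificate file name are taken from the example above and may differ in your setup):
# map the registry name to the host that serves it
echo "192.168.12.34 cluster.local" | sudo tee -a /etc/hosts

# trust the registry's self-signed certificate
sudo mkdir -p /etc/docker/certs.d/cluster.local
sudo cp cluster.local.crt /etc/docker/certs.d/cluster.local/ca.crt

# verify connectivity and credentials
docker login cluster.local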
I tried to deploy a Deployment in Kubernetes that pulls a Docker image from a private registry (I don't know who did this setup), but when Kubernetes pulls the image I'm getting the following error:
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 85s default-scheduler Successfully assigned default/trusted-enc-assettag1-deployment-8467b74958-6fbp7 to k8s-node
Normal BackOff 24s (x2 over 61s) kubelet, k8s-node Back-off pulling image "10.105.168.81:5000/simplehttpserverenc:enc_v1"
Warning Failed 24s (x2 over 61s) kubelet, k8s-node Error: ImagePullBackOff
Normal Pulling 12s (x3 over 82s) kubelet, k8s-node Pulling image "10.105.168.81:5000/simplehttpserverenc:enc_v1"
Warning Failed 0s (x3 over 62s) kubelet, k8s-node Failed to pull image "10.105.168.81:5000/simplehttpserverenc:enc_v1": rpc error: code = Unknown desc = Error response from daemon: Get https://10.105.168.81:5000/v2/: net/http: TLS handshake timeout
Warning Failed 0s (x3 over 62s) kubelet, k8s-node Error: ErrImagePull
[root@k8s-master ~]# docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1
ImagePullBackOff and net/http: TLS handshake timeout error.
Initially, this "net/http: TLS handshake timeout" error was observed with docker pull as well. I followed some answers and
configured the certificate (/etc/docker/certs.d//ca.crt) and
the proxy (/etc/systemd/system/docker.service.d/proxy.conf),
and after that I was able to perform docker pull from the private registry.
[root@k8s-master ~]# docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1
enc_v1: Pulling from simplehttpserverenc
54fec2fa59d0: Pull complete
cd3f35d84cab: Pull complete
a0afc8e92ef0: Pull complete
9691f23efdb7: Pull complete
6512e60b314b: Pull complete
a8ac6632d329: Pull complete
68f4c4e0aa8c: Pull complete
Digest: sha256:0358708cd11e96f6cf6f22b29d46a8eec50d7107597b866e1616a73a198fe797
Status: Downloaded newer image for 10.105.168.81:5000/simplehttpserverenc:enc_v1
10.105.168.81:5000/simplehttpserverenc:enc_v1
[root@k8s-master ~]#
But I am still unable to perform this docker pull through Kubernetes. How can I solve this?
If you use Docker as the container engine in your k8s cluster, AFAIK the same configuration applies, because the image pull is performed by the container engine and certificate handling depends on each engine's own configuration. How about pulling the same image directly on the worker node of your k8s cluster? Is it possible to pull it there without errors?
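For example, directly on the failing node (the node and image names are taken from the events above):
ssh k8s-node
docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1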
Since your dockerconfigjson is not working properly, try this method:
kubectl create secret docker-registry regcred --docker-server=10.105.168.81:5000 --docker-username=<your-name> --docker-password=<your-pword>
And in the Kubernetes manifest:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: 10.105.168.81:5000/simplehttpserverenc:enc_v1
  imagePullSecrets:
  - name: regcred
I have encountered this many times when I forgot to configure these secrets. Also, if you use any other namespace, you will have to generate the secret for each of those namespaces separately by passing -n <your-ns> to the kubectl create secret command above.
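For example (the namespace name here is only an illustration):
kubectl create secret docker-registry regcred \
  --docker-server=10.105.168.81:5000 \
  --docker-username=<your-name> \
  --docker-password=<your-pword> \
  -n my-namespace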
Edit: since you cannot pull the image from the worker node:
Make sure you copied the registry's ca.crt to /etc/docker/certs.d/<registry:port>/ca.crt on the worker node (in this case /etc/docker/certs.d/10.105.168.81:5000/ca.crt),
and then try docker pull again.
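A sketch of those steps on the worker node (the registry address is taken from the question):
sudo mkdir -p /etc/docker/certs.d/10.105.168.81:5000
sudo cp ca.crt /etc/docker/certs.d/10.105.168.81:5000/ca.crt
docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1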
I am very new to Kubernetes, and I have done some work with Docker previously. I am trying to accomplish the following:
Spin up Minikube
Use kubectl to spin up a Docker image from Docker Hub.
I started minikube and things look like they are up and running. Then I run the following command:
kubectl run nginx --image=nginx (please note I do not have this image anywhere on my machine and I am expecting k8s to fetch it for me)
Now, when I do that, it spins up the pod, but the status is ImagePullBackOff. So I ran the kubectl describe pod command on it, and the results look like the following:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned default/ngix-67c6755c86-qm5mv to minikube
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:52133->192.168.64.1:53: read: connection refused
Normal Pulling 8m (x2 over 8m) kubelet, minikube Pulling image "nginx"
Warning Failed 8m (x2 over 8m) kubelet, minikube Error: ErrImagePull
Warning Failed 8m kubelet, minikube Failed to pull image "nginx": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:40073->192.168.64.1:53: read: connection refused
Normal BackOff 8m (x3 over 8m) kubelet, minikube Back-off pulling image "nginx"
Warning Failed 8m (x3 over 8m) kubelet, minikube Error: ImagePullBackOff
Then I searched around to see if anyone had faced similar issues; it turned out that some people had, and they resolved it by restarting minikube with some additional flags, like below:
minikube start --vm-driver="xhyve" --insecure-registry="$REG_IP":80
When I do an nslookup inside minikube, it does resolve, with the following information:
Server: 10.12.192.22
Address: 10.12.192.22#53
Non-authoritative answer:
hub.docker.com canonical name = elb-default.us-east-1.aws.dckr.io.
elb-default.us-east-1.aws.dckr.io canonical name = us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com.
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 52.205.36.130
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 3.217.62.246
Name: us-east-1-elbdefau-1nlhaqqbnj2z8-140214243.us-east-1.elb.amazonaws.com
Address: 35.169.212.184
Still no luck. Is there anything that I am doing wrong here?
The error message suggests that the Docker daemon running in the minikube VM can't resolve the registry-1.docker.io hostname because the DNS nameserver it's configured to use for DNS resolution (192.168.64.1:53) is refusing connections. It's strange to me that the Docker daemon is trying to resolve registry-1.docker.io via a nameserver at 192.168.64.1, but when you nslookup on the VM it uses a nameserver at 10.12.192.22. I did an Internet search for "minikube Get registry-1.docker.io/v2: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53" and found an issue where someone made this comment; it seems identical to your problem, and seems specific to xhyve.
In that comment the person says:
This issue does look like an xhyve issue not seen with virtualbox.
and
Switching to virtualbox fixed this issue for me.
I stopped minikube, deleted it, started it without --vm-driver=xhyve (minikube uses the virtualbox driver by default), and then docker build -t hello-node:v1 . worked fine without errors.
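In other words, roughly the following (the explicit VirtualBox driver flag is optional if it is already your default):
minikube stop
minikube delete
minikube start --vm-driver=virtualbox
docker build -t hello-node:v1 .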
In my case it was caused by running dnsmasq, a DNS server, on my Mac via Homebrew, which caused DNS requests to fail inside minikube. After stopping dnsmasq, everything worked.
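If dnsmasq was installed through Homebrew, stopping it might look like this (assuming it runs as a brew service):
brew services stop dnsmasq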
I got this problem with my local minikube setup and I wasn't able to pull any images I added to a simple deployment manifest.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test1 0/1 ImagePullBackOff 0 68s
I tried to execute the test below:
apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    site: blog
spec:
  containers:
  - name: web
    image: nginx:latest
It was fixed only after restarting minikube.
Maybe dnsmasq really was the cause in this case.
You have:
minikube running with default settings,
Docker building your images,
(*) minikube configured to point to your local Docker image repo.
And now minikube can't pull images from public container registries, like Docker Hub.
Stop and start minikube, then point it back to your local Docker image repo. The commands to do this (including the (*) step above):
minikube stop
minikube start
minikube -p minikube docker-env
eval $(minikube -p minikube docker-env)
After running the above I was able to pull nginx, alpine, and friends from hub.docker.com just by setting image: alpine in the YAML spec.
The issue was just a short drop in my network connectivity. So if you have no DNS/VPN/xhyve complications and it just stops working, the fix is easy enough.
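For reference, a minimal pod spec along those lines (the names here are only an illustration):
apiVersion: v1
kind: Pod
metadata:
  name: alpine-test
spec:
  containers:
  - name: alpine-test
    image: alpine
    command: ["sleep", "3600"]   # keep the container running so the pull can be verified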
I am trying to create a pod in Kubernetes with the following simple command:
kubectl run example --image=nginx
It runs and assigns the pod to the minion correctly, but the status is always stuck in ContainerCreating because of the following error. I have not set up GCR or gcloud on my machine, so I am not sure why it is pulling from there at all.
1h 29m 14s {kubelet centos-minion1} Warning FailedSync Error syncing pod, skipping:
failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed
for gcr.io/google_containers/pause:2.0, this may be because there are no
credentials on this request. details: (unable to ping registry endpoint
https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/:
http: error connecting to proxy http://87.254.212.120:8080: dial tcp
87.254.212.120:8080: i/o timeout\n v1 ping attempt failed with error:
Get https://gcr.io/v1/_ping: http: error connecting to proxy
http://87.254.212.120:8080: dial tcp 87.254.212.120:8080: i/o timeout)
Kubernetes is trying to create a pause container for your pod; this container is used to create the pod's network namespace. See this question and its answers for more general information on the pause container.
To your specific error: Kubernetes tries to pull the pause container's image (which would be gcr.io/google_containers/pause:2.0, according to your error message) from the Google Container Registry (gcr.io). Apparently, your Docker engine tries to connect to GCR using an HTTP proxy located at 87.254.212.120:8080, to which it cannot connect (i/o timeout).
To correct this error, either make sure that your HTTP proxy server is online and does not block HTTP requests to GCR, or (if you do have public Internet access) disable the proxy connection for your Docker engine (this is typically done via the http_proxy and https_proxy environment variables, which would have been set in /etc/sysconfig/docker or /etc/default/docker, depending on your Linux distribution).
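As an example of what that might look like (the file location depends on your distribution, and the proxy address below is taken from your error message): in /etc/sysconfig/docker or /etc/default/docker, comment out or remove lines such as
export http_proxy="http://87.254.212.120:8080"
export https_proxy="http://87.254.212.120:8080"
and then restart the Docker daemon (for example, sudo systemctl restart docker) so the engine stops routing registry traffic through the unreachable proxy.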