Docker insecure registries with Rocky Linux 8.6

I installed vanilla k8s on Rocky Linux 8.6 together with docker.
I created the /etc/docker/daemon.json:
{
  "insecure-registries": ["rocky-master.mfr.org:5000"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2"
}
But my pod on worker1 says:
Warning Failed 5m26s (x4 over 6m48s) kubelet Failed to pull image "rocky-master.mfr.org:5000/sametime-init:20220712-1935": rpc error: code = Unknown desc = failed to pull and unpack image "rocky-master.mfr.org:5000/sametime-init:20220712-1935": failed to resolve reference "rocky-master.mfr.org:5000/sametime-init:20220712-1935": failed to do request: Head "https://rocky-master.mfr.org:5000/v2/sametime-init/manifests/20220712-1935": http: server gave HTTP response to HTTPS client
Any idea?
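A detail worth checking: the error text ("failed to pull and unpack image ... failed to resolve reference") comes from containerd, not dockerd, so if kubelet on the workers uses the containerd CRI, /etc/docker/daemon.json has no effect on pod image pulls. A sketch of the equivalent containerd setting (containerd 1.x layout, added to /etc/containerd/config.toml on every node):

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."rocky-master.mfr.org:5000"]
  endpoint = ["http://rocky-master.mfr.org:5000"]

Then restart containerd on the node: systemctl restart containerd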

Related

Failed to resolve reference "docker.io/hashicorp/vault-k8s:0.16.1"

I’m following this guide: Vault Installation to Google Kubernetes Engine via Helm (HashiCorp Learn): https://learn.hashicorp.com/tutorials/vault/kubernetes-google-cloud-gke
However, after running the Helm install command as below, my vault-agent-injector pod isn’t working as expected.
I ran:
helm install vault hashicorp/vault \
  --set='server.ha.enabled=true' \
  --set='server.ha.raft.enabled=true'
I then see the following events when describing the pod:
Normal Scheduled 51s default-scheduler Successfully assigned default/vault-agent-injector-f59c7f985-n6b72 to gke-test-cluster-test-cluster-np-680d0af5-2lw8
Normal Pulling 51s kubelet Pulling image "hashicorp/vault-k8s:0.16.1"
Warning Failed kubelet Failed to pull image "hashicorp/vault-k8s:0.16.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/hashicorp/vault-k8s:0.16.1": failed to resolve reference "docker.io/hashicorp/vault-k8s:0.16.1": failed to do request: Head "https://registry-1.docker.io/v2/hashicorp/vault-k8s/manifests/0.16.1": dial tcp 44.207.51.64:443: i/o timeout
Warning Failed kubelet Error: ErrImagePull
Normal BackOff kubelet Back-off pulling image "hashicorp/vault-k8s:0.16.1"
Warning Failed kubelet Error: ImagePullBackOff
Normally Helm installs work perfectly fine, so I’m not sure what’s going on here. Could someone please advise?
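The dial tcp ... i/o timeout suggests the node cannot reach Docker Hub at all. A quick check from the node itself (plain curl; a reachable registry answers this endpoint with 401 Unauthorized):

curl -v https://registry-1.docker.io/v2/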

Getting an error when trying to find a local image with helm/docker

I have a local kubernetes cluster (minikube) that is trying to load images from my local Docker repo.
When I do a "docker images", I get:
cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
Given I know the above images are there, I run some helm commands that use these images, but I get the below error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
Anyone know how I can fix this? Seems the biggest problem is Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Since minikube is being used, you can refer to their documentation.
It is recommended that, if an imagePullPolicy is used, it be set to Never. If set to Always, Kubernetes will try to reach out and pull the image from the network.
From the docs: https://minikube.sigs.k8s.io/docs/handbook/pushing/
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."
Add cluster.local to the /etc/hosts file on all your kubernetes nodes.
192.168.12.34 cluster.local
Check whether you can login to registry using docker login cluster.local
If your registry has self-signed certificates, copy cluster.local.crt to /etc/docker/certs.d/cluster.local/ca.crt on all kubernetes worker nodes.
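A sketch of those steps as commands, using the IP and hostname from this answer:

echo "192.168.12.34 cluster.local" >> /etc/hosts
docker login cluster.local
mkdir -p /etc/docker/certs.d/cluster.local
cp cluster.local.crt /etc/docker/certs.d/cluster.local/ca.crt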

Kubernetes failed to pull image k8s.gcr.io

I am trying to install Kubernetes on my CentOS machine; when I initialize the cluster, I get the following error.
Note that I am behind a corporate proxy. I have already configured it for Docker in /etc/systemd/system/docker.service.d/http-proxy.conf.
Docker works fine.
No matter how hard I look, I can't find a solution to this problem.
Thank you for your help.
# kubeadm init
W1006 14:29:38.432071 7560 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 14:29:38.432147 7560 version.go:103] falling back to the local client version: v1.19.2
W1006 14:29:38.432367 7560 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING HTTPProxy]: Connection to "https://192.168.XXX.XXX" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". If that is not intended, adjust your proxy settings
[WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://proxyxxxxx.xxxx.xxx:xxxx/". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.19.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.13-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.7.0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
# kubeadm config images pull
W1006 17:33:41.362395 80605 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W1006 17:33:41.362454 80605 version.go:103] falling back to the local client version: v1.19.2
W1006 17:33:41.362685 80605 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
failed to pull image "k8s.gcr.io/kube-apiserver:v1.19.2": output: Error response from daemon: Get https://k8s.gcr.io/v2/: remote error: tls: handshake failure
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
Maybe the root certificates on your machine are outdated, so it does not consider the certificate of k8s.gcr.io valid. The message x509: certificate signed by unknown authority hints at this.
Try to update them: yum update ca-certificates || yum reinstall ca-certificates
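If the failures are instead caused by a corporate proxy that intercepts TLS, a sketch of trusting its CA on CentOS (the .crt file name is a placeholder):

cp corporate-proxy-ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
systemctl restart docker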
I just did a dig on k8s.gcr.io and added the IP it returned to /etc/hosts.
# dig k8s.gcr.io
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.2 <<>> k8s.gcr.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44303
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;k8s.gcr.io. IN A
;; ANSWER SECTION:
k8s.gcr.io. 21599 IN CNAME googlecode.l.googleusercontent.com.
googlecode.l.googleusercontent.com. 299 IN A 64.233.168.82
;; Query time: 72 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Nov 24 11:45:37 CST 2020
;; MSG SIZE rcvd: 103
# cat /etc/hosts
64.233.168.82 k8s.gcr.io
And now it works!
# kubeadm config images pull
W1124 11:46:41.297352 50730 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.19.4
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.19.4
[config/images] Pulled k8s.gcr.io/pause:3.2
[config/images] Pulled k8s.gcr.io/etcd:3.4.13-0
[config/images] Pulled k8s.gcr.io/coredns:1.7.0
I was also working with v1.19.2 and got the same error.
It seems to be related to the issue mentioned here (and I think also here).
I re-installed kubeadm on the node and ran the kubeadm init workflow again; it is now working with v1.19.3 and the errors are gone.
All master nodes images are pulled successfully.
Also verified with:
sudo kubeadm config images pull
(*) You can run kubeadm init with --kubernetes-version=X.Y.Z (1.19.3 in our case).
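For example:

kubeadm init --kubernetes-version=1.19.3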
I had the same error. Maybe, as others say, it's because of an outdated certificate. I believe it's not required to delete anything.
The simple solution was running one of these two commands, which re-authenticate to the container repositories:
podman login
docker login
Source: podman-login
I had this issue on version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2"} when I tried joining a second control plane.
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.9.3: output: E0923 04:47:51.763983 1598 remote_image.go:242] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"k8s.gcr.io/coredns:v1.9.3\": failed to resolve reference \"k8s.gcr.io/coredns:v1.9.3\": k8s.gcr.io/coredns:v1.9.3: not found" image="k8s.gcr.io/coredns:v1.9.3"
time="2022-09-23T04:47:51Z"...
See #99321: it's now k8s.gcr.io/coredns/coredns:v1.9.3 instead of
k8s.gcr.io/coredns:v1.9.3, and I don't know why.
by kluevandrew,
reference: https://github.com/kubernetes/kubernetes/issues/112131
This worked; I am using containerd:
crictl pull k8s.gcr.io/coredns/coredns:v1.9.3
ctr --namespace=k8s.io image tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
docker solution:
docker pull k8s.gcr.io/coredns/coredns:v1.9.3
docker tag k8s.gcr.io/coredns/coredns:v1.9.3 k8s.gcr.io/coredns:v1.9.3
Check imageRepository in the kubeadm-config configmap (or your kubeadm config file, if you run something like kubeadm init --config=/tmp/kubeadm-config.yml).
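A quick way to inspect it (plain kubectl, nothing cluster-specific assumed):

kubectl -n kube-system get configmap kubeadm-config -o yaml | grep imageRepository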

ImagePullBackOff after Kubectl run

I am new to Kubernetes. I am using Minikube for Mac with the hyperkit VM driver. I also have docker-desktop installed (in which I have tried both enabling and disabling Kubernetes).
docker pull executes smoothly with no error,
but on
kubectl run kubernetes-jenkins --image=jenkins:latest --port=8080
(or any image, be it gcr.io/google-samples/kubernetes-bootcamp:v1) it fails with ImagePullBackOff
Trimming a few parts from kubectl cluster-info dump:
I1230 10:20:56.812648 1 serving.go:312] Generated self-signed cert in-memory
W1230 10:20:58.777494 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1230 10:20:58.778005 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W1230 10:20:58.849619 1 authorization.go:47] Authorization is disabled
W1230 10:20:58.850375 1 authentication.go:92] Authentication is disabled
"reason": "Failed",
"message": "Failed to pull image \"jenkins:latest\": rpc error: code = Unknown desc = Error response from daemon: Get
https://registry-1.docker.io/v2/: dial tcp: lookup
registry-1.docker.io on 192.168.64.1:53: read udp
192.168.64.3:38558-\u003e192.168.64.1:53: read: connection refused",
"source": {
"component": "kubelet",
"host": "minikube"
}
Why is kubectl unable to pull the image from the repository?
In minikube, your locally built docker images can't be found, so you have to point your docker env at minikube's docker daemon for the local images you build and pull:
eval $(minikube docker-env)
If that doesn't solve your problem, you have to start minikube and tell it about the registry:
minikube start --vm-driver="virtualbox" --insecure-registry=$(docker-machine ip registry):80
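For the first approach, a minimal sketch of the full workflow (image and pod names are hypothetical):

eval $(minikube docker-env)
docker build -t my-app:1.0 .
kubectl run my-app --image=my-app:1.0 --image-pull-policy=Never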

Use https for accessing Docker private registry

I have a private registry that is accessed through the https protocol.
But Kubernetes + Docker always tries to use the http protocol, http://myserver.com:8080, instead of https://myserver.com:8080.
How can I force the https protocol?
A snippet of my yaml file that declares a Pod:
containers:
  - name: apl
    image: myserver.com:8080/myimage
Details of my environment:
CentOS 7.3
Docker 18.06
Kubernetes (Minikube) 1.13.1
Error message in Kubernetes logs:
Normal Pulling 30s (x4 over 2m2s) kubelet, minikube pulling image "docker.mydomain.com:30500/vision-ssh"
Warning Failed 30s (x4 over 2m2s) kubelet, minikube Failed to pull image "docker.mydomain.com:30500/vision-ssh": rpc error: code = Unknown desc = Error response from daemon: Get http://docker.mydomain.com:30500/v2/: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02"
Warning Failed 30s (x4 over 2m2s) kubelet, minikube Error: ErrImagePull
Warning Failed 19s (x6 over 2m2s) kubelet, minikube Error: ImagePullBackOff
Normal BackOff 4s (x7 over 2m2s) kubelet, minikube Back-off pulling image "docker.fccma.com:30500/vision-ssh"
If I try to specify the protocol in the name of the image, it complains:
couldn't parse image reference "https://docker.mydomain.com:30500/vision-ssh": invalid reference format
I followed this guide to create the image registry. It is already secured (HTTPS and protected by user/password).
In the /etc/hosts file, the server docker.mydomain.com is mapped to 127.0.0.1. I've read in the docker docs that local registries are always considered insecure.
If I use a name that is mapped to the external IP, then Docker tries https.
Your private docker registry might not be secured. If it is a secured private registry, Docker always uses https; otherwise it falls back to http.
For more details, refer to the docs:
Docker uses the https:// protocol to communicate with a registry, unless the registry is allowed to be accessed over an insecure connection. Refer to the insecure registries section for more information.
https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries
So, to force https, secure your registry. There are many articles available online on how to secure a registry.
Run an HTTPS proxy service fronting the container registry service; look at nginx as the HTTPS proxy.
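A minimal sketch of such an nginx front end, assuming the registry listens on 127.0.0.1:5000 and the certificate paths are placeholders:

server {
    listen 30500 ssl;
    server_name docker.mydomain.com;
    ssl_certificate     /etc/nginx/certs/docker.mydomain.com.crt;
    ssl_certificate_key /etc/nginx/certs/docker.mydomain.com.key;
    client_max_body_size 0;   # image layers can be large
    location /v2/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $http_host;
    }
}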
