Unsupported docker v1 repository request error in OpenShift - docker

Background:
I am running an OpenShift build via the Jenkins plugin in OpenShift, and the resulting image is pushed to Artifactory. Pushing image manifest schema version 1 is deprecated (blocked) in Artifactory, and intermittently some build nodes seem to push schema version 1 images.
Container Runtime Version: docker://1.13.1
oc v3.10.0+0c4577e-1
kubernetes v1.10.0+b81c8f8
Ask:
I would like to understand what causes these schema versions in the build. Is there a build agent/controller responsible for choosing the image manifest schema version? Is there an article or doc I could read to understand this, and how do I troubleshoot the error? As far as I am aware, all the agent nodes run the same versions; my guess is that only one node is affected, because when the same build lands on other nodes it completes with schema version 2. Since the agent nodes are supposedly identical, I am not sure where to start.
Error message:
Warning Failed 1m (x3 over 2m) kubelet, Failed to pull image "products-docker-stage.artifactory.": rpc error: code = Unknown desc = Error: Status 400 trying to pull repository licence-module: "{\n "errors" : [ {\n "status" : 400,\n "message" : "Unsupported docker v1 repository request for 'products-docker-stage'"\n } ]\n}"
Warning Failed 1m (x3 over 2m) kubelet, Error: ErrImagePull
Normal BackOff 1m (x3 over 2m) kubelet, Back-off pulling image "products-docker-stage.artifactory.**"
Warning BackOff 43s (x3 over 1m) kubelet, Back-off restarting failed container
Error in OpenShift:
error: build error: Failed to push image: unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked. For more information visit https://www.jfrog.com/confluence/display/RTF/Advanced+Topics#AdvancedTopics-DockerManifestV2Schema1Deprecation
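If it helps to narrow this down, one way to verify which manifest schema a pushed tag actually ended up with is to inspect the raw manifest, for example with skopeo. This is only a sketch: the registry host and tag below are placeholders, since the full repository URL is truncated in the error above.

# Inspect the raw manifest of the pushed tag (registry host and tag are placeholders)
skopeo inspect --raw docker://<registry-host>/licence-module:<tag>
# A schema 2 image reports "schemaVersion": 2 and
# "mediaType": "application/vnd.docker.distribution.manifest.v2+json";
# a schema 1 image reports "schemaVersion": 1.

# Comparing the Docker version on each build node can also help spot the odd one out:
docker version --format '{{.Server.Version}}'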

Related

Image pull back off on keda operator & keda-operator-metrics-apiserver POD

When we tried to install KEDA on our Kubernetes cluster by following the instructions in the KEDA deployment document, we got an ImagePullBackOff error on the keda-operator and keda-operator-metrics-apiserver pods. Please help us with this.
Error:
Type Reason Age From Message
Normal Scheduled 4m23s default-scheduler Successfully assigned keda/keda-metrics-apiserver-5bbcc67cd8-7mm59 to aks-agentpool
Warning Failed 3m3s (x6 over 4m21s) kubelet Error: ImagePullBackOff
Normal Pulling 2m49s (x4 over 4m22s) kubelet Pulling image "ghcr.io/kedacore/keda-metrics-apiserver:2.8.0"
Warning Failed 2m49s (x4 over 4m22s) kubelet Failed to pull image "ghcr.io/kedacore/keda-metrics-apiserver:2.8.0": rpc error: code = Unknown desc = failed to pull and unpack image "ghcr.io/kedacore/keda-metrics-apiserver:2.8.0": failed to resolve reference "ghcr.io/kedacore/keda-metrics-apiserver:2.8.0": failed to do request: Head "https://ghcr.io/v2/kedacore/keda-metrics-apiserver/manifests/2.8.0": EOF
Warning Failed 2m49s (x4 over 4m22s) kubelet Error: ErrImagePull
Normal BackOff 2m38s (x7 over 4m21s) kubelet Back-off pulling image "ghcr.io/kedacore/keda-metrics-apiserver:2.8.0"
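The EOF on the HEAD request to ghcr.io usually points at a network or proxy problem between the node and the registry rather than at KEDA itself. A hedged diagnostic sketch, assuming you can get a shell with network access comparable to the affected node (names and runtime are assumptions on my part):

# A reachable registry typically answers this with 401 Unauthorized (auth required), not EOF:
curl -sSI https://ghcr.io/v2/
# Or try the pull directly on the node with the CRI client, if containerd is the runtime:
crictl pull ghcr.io/kedacore/keda-metrics-apiserver:2.8.0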

Failed to resolve reference "docker.io/hashicorp/vault-k8s:0.16.1"

I’m following this guide: Vault Installation to Google Kubernetes Engine via Helm | Vault - HashiCorp Learn: https://learn.hashicorp.com/tutorials/vault/kubernetes-google-cloud-gke
However, after running the Helm install command as below, my vault-agent-injector pod isn’t working as expected.
I ran:
helm install vault hashicorp/vault \
  --set='server.ha.enabled=true' \
  --set='server.ha.raft.enabled=true'
I then see the following events when describing the pod:
Normal Scheduled 51s default-scheduler Successfully assigned default/vault-agent-injector-f59c7f985-n6b72 to gke-test-cluster-test-cluster-np-680d0af5-2lw8
Normal Pulling 51s kubelet Pulling image "hashicorp/vault-k8s:0.16.1"
Warning Failed kubelet Failed to pull image "hashicorp/vault-k8s:0.16.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/hashicorp/vault-k8s:0.16.1": failed to resolve reference "docker.io/hashicorp/vault-k8s:0.16.1": failed to do request: Head "https://registry-1.docker.io/v2/hashicorp/vault-k8s/manifests/0.16.1": dial tcp 44.207.51.64:443: i/o timeout
Warning Failed kubelet Error: ErrImagePull
Normal BackOff kubelet Back-off pulling image "hashicorp/vault-k8s:0.16.1"
Warning Failed kubelet Error: ImagePullBackOff
Normally Helm installs work perfectly fine, so I’m not sure what’s going on here. Could someone please advise?
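The dial tcp ... i/o timeout suggests the GKE nodes cannot reach registry-1.docker.io at all (egress, firewall or NAT). A minimal check run from inside the cluster; the pod name and curl image are my own choices, not from the guide:

kubectl run netcheck --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -sSI https://registry-1.docker.io/v2/
# When egress works this typically returns 401 Unauthorized; a hang or timeout here
# points at the cluster's network path rather than at the Helm chart.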

Installing Jenkins on minikube shows Failed to pull image "jenkins/jenkins:2.303.3-jdk11"

Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned jenkins/jenkins-0 to minikube
Normal BackOff 31s kubelet, minikube Back-off pulling image "jenkins/jenkins:2.303.3-jdk11"
Warning Failed 31s kubelet, minikube Error: ImagePullBackOff
Normal Pulling 17s (x2 over 47s) kubelet, minikube Pulling image "jenkins/jenkins:2.303.3-jdk11"
Warning Failed 1s (x2 over 32s) kubelet, minikube Failed to pull image "jenkins/jenkins:2.303.3-jdk11": rpc error: code = Unknown desc = Error response from daemon: Get "https://registry-1.docker.io/v2/": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 1s (x2 over 32s) kubelet, minikube Error: ErrImagePull
The above error is what I am seeing when trying to install Jenkins on a minikube cluster. I am following along with this guide: https://www.jenkins.io/doc/book/installing/kubernetes/
I appreciate any ideas.
I tried minikube with VirtualBox and that worked out of the box, but I wanted to get the Docker driver working, which I wasn't able to.
Finally, I deleted everything (even reinstalled Ubuntu) and set the k8s cluster up again with the latest Kubernetes version (before, I had tried with --version=1.19.0 of k8s). I used minikube start --driver=docker and then followed the official Jenkins install with Helm 3, also using the latest LTS.
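For reference, the Helm 3 steps from the linked jenkins.io guide look roughly like this (a sketch from memory of that page; the chart repo and release name come from the official Jenkins chart, while the namespace and values file follow the guide and may differ in your setup):

minikube start --driver=docker
helm repo add jenkinsci https://charts.jenkins.io
helm repo update
kubectl create namespace jenkins
helm install jenkins jenkinsci/jenkins -n jenkins -f jenkins-values.yaml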

Getting an error when trying to find a local image with helm/docker

I have a local Kubernetes cluster (minikube) that is trying to load images from my local Docker repo.
When I do a "docker images", I get:
cluster.local/container-images/app-shiny-app-validation-app-converter 1.6.9
cluster.local/container-images/app-shiny-app-validation 1.6.9
Given I know the above images are there, I run some Helm commands which use these images, but I get the below error:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 66s (x2 over 2m12s) kubelet Back-off pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 66s (x2 over 2m12s) kubelet Error: ImagePullBackOff
Normal Pulling 51s (x3 over 3m24s) kubelet Pulling image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9"
Warning Failed 11s (x3 over 2m13s) kubelet Failed to pull image "cluster.local/container-images/app-shiny-app-validation-app-converter:1.6.9": rpc error: code = Unknown desc = Error response from daemon: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Warning Failed 11s (x3 over 2m13s) kubelet Error: ErrImagePull
Does anyone know how I can fix this? The biggest problem seems to be: Get https://cluster.local/v2/: dial tcp: lookup cluster.local: Temporary failure in name resolution
Since minikube is being used, you can refer to their documentation.
If an imagePullPolicy is being used, it is recommended to set it to Never (or IfNotPresent); if set to Always, Kubernetes will try to reach out and pull the image from the network instead of using the local one.
From docs: https://minikube.sigs.k8s.io/docs/handbook/pushing/
"Tip 1: Remember to turn off the imagePullPolicy:Always (use imagePullPolicy:IfNotPresent or imagePullPolicy:Never) in your yaml file. Otherwise Kubernetes won’t use your locally build image and it will pull from the network."
Add cluster.local to the /etc/hosts file on all your Kubernetes nodes:
192.168.12.34 cluster.local
Check whether you can log in to the registry using docker login cluster.local.
If your registry uses self-signed certificates, copy the cluster.local.crt key to /etc/docker/certs.d/cluster.local/ca.crt on all Kubernetes worker nodes.
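Put together as commands, that suggestion looks roughly like this (to be run on each node; the IP, hostname and certificate file are the ones from this answer, and the certs.d path assumes Docker is the container runtime):

echo "192.168.12.34 cluster.local" | sudo tee -a /etc/hosts
docker login cluster.local
sudo mkdir -p /etc/docker/certs.d/cluster.local
sudo cp cluster.local.crt /etc/docker/certs.d/cluster.local/ca.crt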

kubernetes unable to pull image docker private registry

I tried to deploy a Deployment in Kubernetes that pulls a Docker image from a private registry (I don't know who did this setup), but during the image pull through Kubernetes I get the following error:
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 85s default-scheduler Successfully assigned default/trusted-enc-assettag1-deployment-8467b74958-6fbp7 to k8s-node
Normal BackOff 24s (x2 over 61s) kubelet, k8s-node Back-off pulling image "10.105.168.81:5000/simplehttpserverenc:enc_v1"
Warning Failed 24s (x2 over 61s) kubelet, k8s-node Error: ImagePullBackOff
Normal Pulling 12s (x3 over 82s) kubelet, k8s-node Pulling image "10.105.168.81:5000/simplehttpserverenc:enc_v1"
Warning Failed 0s (x3 over 62s) kubelet, k8s-node Failed to pull image "10.105.168.81:5000/simplehttpserverenc:enc_v1": rpc error: code = Unknown desc = Error response from daemon: Get https://10.105.168.81:5000/v2/: net/http: TLS handshake timeout
Warning Failed 0s (x3 over 62s) kubelet, k8s-node Error: ErrImagePull
At first, the same error showed up when pulling manually as well:
[root@k8s-master ~]# docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1
(ImagePullBackOff in Kubernetes and a net/http: TLS handshake timeout from docker pull.)
I referred to some answers and configured a certificate (/etc/docker/certs.d//ca.crt) and a proxy (/etc/systemd/system/docker.service.d/proxy.conf); after that I am able to docker pull from the private registry:
[root@k8s-master ~]# docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1
enc_v1: Pulling from simplehttpserverenc
54fec2fa59d0: Pull complete
cd3f35d84cab: Pull complete
a0afc8e92ef0: Pull complete
9691f23efdb7: Pull complete
6512e60b314b: Pull complete
a8ac6632d329: Pull complete
68f4c4e0aa8c: Pull complete
Digest: sha256:0358708cd11e96f6cf6f22b29d46a8eec50d7107597b866e1616a73a198fe797
Status: Downloaded newer image for 10.105.168.81:5000/simplehttpserverenc:enc_v1
10.105.168.81:5000/simplehttpserverenc:enc_v1
[root@k8s-master ~]#
But I am still unable to perform this docker pull through Kubernetes. How do I solve this?
If you use Docker as the container engine in your k8s cluster, AFAIK it is the same story: understand the configuration. Image pulling is performed by the container engine, and how registry certificates are handled depends on each engine's own configuration. How about pulling the same image on the worker node in your cluster? Is it possible to pull it there without errors?
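In other words, something like this on the node that ran the pod (node name taken from the events above) tells you whether the problem is the node's Docker/certificate configuration or Kubernetes itself:

ssh k8s-node
docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1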
As your dockerconfigjson is not working properly, try this method:
kubectl create secret docker-registry regcred --docker-server=10.105.168.81:5000 --docker-username=<your-name> --docker-password=<your-pword>
And in the Kubernetes manifest:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: 10.105.168.81:5000/simplehttpserverenc:enc_v1
  imagePullSecrets:
  - name: regcred
I have encountered this many times when I forgot to configure these secrets. Also, if you have any other namespace, you will have to generate the secret for each of those namespaces separately, passing -n <your-ns> to the above kubectl create secret.
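For example, to create the same secret in another namespace (the namespace is a placeholder; credentials as in the command above):

kubectl create secret docker-registry regcred -n <your-ns> \
  --docker-server=10.105.168.81:5000 \
  --docker-username=<your-name> \
  --docker-password=<your-pword>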
Edit: As you cannot pull the image from the worker node either, make sure you have copied the docker-registry ca.crt to /etc/docker/certs.d/ca.crt and then try docker pull again.
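As a sketch, on the worker node that could look like this (the registry address is the one from the question; the per-registry subdirectory follows the usual Docker certs.d layout, which is an assumption on my part):

sudo mkdir -p /etc/docker/certs.d/10.105.168.81:5000
sudo cp ca.crt /etc/docker/certs.d/10.105.168.81:5000/ca.crt
docker pull 10.105.168.81:5000/simplehttpserverenc:enc_v1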
