GKE problem when running cronjob by pulling image from Artifact Registry - docker

I created a cronjob with the following spec in GKE:
# cronjob.yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: collect-data-cj-111
spec:
  schedule: "*/5 * * * *"
  concurrencyPolicy: Allow
  startingDeadlineSeconds: 100
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: collect-data-cj-111
            image: collect_data:1.3
          restartPolicy: OnFailure
I create the cronjob with the following command:
kubectl apply -f collect_data.yaml
When I later watch whether it is running or not (I scheduled it to run every 5th minute for the sake of testing), here is what I see:
$ kubectl get pods --watch
NAME READY STATUS RESTARTS AGE
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 0s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 Pending 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ContainerCreating 0 1s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 3s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 17s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ErrImagePull 0 30s
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 0/1 ImagePullBackOff 0 44s
It does not seem to be able to pull the image from Artifact Registry. I have both GKE and Artifact Registry created under the same project.
What can be the reason? After spending several hours in docs, I still could not make progress and I am quite new in the world of GKE.
If you recommend checking anything, I would really appreciate it if you also describe where in GCP I should check/verify your recommendation.
ADDENDUM:
When I run the following command:
kubectl describe pods
The output is quite large but I guess the following message should indicate the problem.
Failed to pull image "collect_data:1.3": rpc error: code = Unknown
desc = failed to pull and unpack image "docker.io/library/collect_data:1.3":
failed to resolve reference "docker.io/library/collect_data:1.3": pull
access denied, repository does not exist or may require authorization:
server message: insufficient_scope: authorization failed
How do I solve this problem step by step?

From the error shared, I can tell that the image is not being pulled from Artifact Registry. The reason for the failure is that, by default, GKE pulls images directly from Docker Hub unless specified otherwise. Since there is no collect_data image there, the pull fails.
The correct way to specify an image stored in Artifact Registry is as follows:
image: <location>-docker.pkg.dev/<project>/<repo-name>/<image-name:tag>
Be aware that the repository format has to be set to "Docker" if you are using a Docker-containerized image.
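For instance, assuming a Docker-format repository named my-repo in us-central1 under the project my-project (hypothetical names), the container spec from the cronjob above would become:
containers:
- name: collect-data-cj-111
  image: us-central1-docker.pkg.dev/my-project/my-repo/collect_data:1.3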
Take a look at the Quickstart for Docker guide, where it is specified how to pull and push docker images to Artifact Registry along with the permissions required.
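If the image path is correct but the pull still fails, the cluster's node service account may lack read access to Artifact Registry. A sketch of granting it (the project ID and service account are placeholders; by default, GKE nodes use the Compute Engine default service account):
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/artifactregistry.reader"
You can see which service account the node pool uses in the node pool details of your cluster in the GKE console, and the account's roles under IAM & Admin.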

Related

Kubernetes ImagePullBackOff with Private Registry on Docker Hub

I have a private Docker Hub registry with a (rather large) image in it that I control.
I also have a Helm deployment chart that specifies an imagePullSecret, after having followed the instructions here https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
No matter what I do, though, when installing the Helm chart, I always end up with the following (taken from kubectl describe pod <pod-id>):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned default/<release>-69584657b7-vkps6 to <node>
Warning Failed 6m28s (x3 over 20m) kubelet Failed to pull image "<registry-username>/<image>:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/<registry-username>/<image>:latest": failed to copy: httpReadSeeker: failed open: server message: invalid_token: authorization failed
Warning Failed 6m28s (x3 over 20m) kubelet Error: ErrImagePull
Normal BackOff 5m50s (x5 over 20m) kubelet Back-off pulling image "<registry-username>/<image>:latest"
Warning Failed 5m50s (x5 over 20m) kubelet Error: ImagePullBackOff
Normal Pulling 5m39s (x4 over 26m) kubelet Pulling image "<registry-username>/<image>:latest"
I have looked high and low on the internet for answers pertaining to this invalid_token output, but have yet to find anything concrete.
I have verified that I can run docker pull manually with the image in question both on the K8s node as well as other boxes. It works just fine.
I have tried using docker.io as the repository URI, as well as (the recommended) https://index.docker.io/v1/.
I have tried using my own Docker Hub password as well as a generated Personal Access Token (I can actually see in Docker Hub that the PAT was, in fact, used, despite the pull failing).
I've examined the secrets via kubectl to verify they're of the expected format and contain the correct data (username, password/token, etc.). They're all fine and match what I'd get when I run docker login on the command line.
I have used this node to deploy other releases via Helm and they have all worked fine (although at least one has been from a different registry).
I am relatively new to K8s and Helm, but I've used Docker for a long while now and I'm at a loss as to this invalid_token issue.
Any help would be greatly appreciated.
Thank you in advance.
UPDATE
Here's the (sanitized) output of helm template:
---
# Source: <deployment>/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-<deployment>
  labels:
    helm.sh/chart: <deployment>-0.1.0
    app.kubernetes.io/name: <deployment>
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: <deployment>
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: <deployment>
        app.kubernetes.io/instance: release-name
    spec:
      imagePullSecrets:
        - name: regcred-docker-pat
      securityContext:
        {}
      containers:
        - name: <deployment>
          securityContext:
            {}
          image: "<registry-username>/<image>:latest"
          imagePullPolicy: IfNotPresent
          resources:
            {}
I've also confirmed that any secrets I have tried are, in fact, in the same namespace as the pod (in this case, the default namespace).
Is the imagePullSecret created by the Helm chart?
Is the imagePullSecret available when the deployment is created?
Do you apply the deployment before the imagePullSecret is available?
I remember that the order matters when applying the imagePullSecret; the kube-api does not retry the pull after an authentication failure.
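If ordering is the issue, a minimal sketch of creating the secret up front, before installing the chart (the secret name matches the chart output above; the server, username, and token values are placeholders):
kubectl create secret docker-registry regcred-docker-pat \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<registry-username> \
    --docker-password=<personal-access-token> \
    --namespace=default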

Created pod gets ErrImagePull when using :latest tagged Docker image

I'm starting out in Kubernetes and I'm trying to update the image in Docker Hub that is used for the pod creation; then, with the kubectl rollout restart deployment deploymentName command, it should pull the newest image and rebuild the pods.
The problem I'm facing is that it only works when I specify a version in the tag, both in the image and in the deployment.yaml file.
In my repo I have 2 images fixit-server:latest and fixit-server:0.0.2 (the actual latest one).
With deployment.yaml file set as
spec:
  containers:
    - name: fixit-server-container
      image: vinnytwice/fixit-server
      # imagePullPolicy: Never
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
I run kubectl apply -f infrastructure/k8s/server-deployment.yaml and it gets created, but when running kubectl get pods I get
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
fixit-server-5c7bfbc5b7-cgk24 0/1 ErrImagePull 0 7s
fixit-server-5c7bfbc5b7-g7f8x 0/1 ErrImagePull 0 7s
I then instead specify the version number in the deployment.yaml file
spec:
  containers:
    - name: fixit-server-container
      image: vinnytwice/fixit-server:0.0.2
      # imagePullPolicy: Never
      resources:
        limits:
          memory: "128Mi"
          cpu: "500m"
I run kubectl apply -f infrastructure/k8s/server-deployment.yaml again and it gets configured as expected.
Running kubectl rollout restart deployment fixit-server, it gets restarted as expected.
But still running kubectl get pods shows
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
fixit-server-5c7bfbc5b7-cgk24 0/1 ImagePullBackOff 0 12m
fixit-server-5d78f8848c-bbxzx 0/1 ImagePullBackOff 0 2m58s
fixit-server-66cb98855c-mg2jn 0/1 ImagePullBackOff 0 74s
So I deleted the deployment and applied it again and pods are now running correctly.
Why, when omitting a version number for the image (which should imply :latest), doesn't the :latest tagged image get pulled from the repo?
What's the correct way of using the :latest tagged image?
Thank you very much.
Cheers
Repo images:
REPOSITORY TAG IMAGE ID CREATED SIZE
vinnytwice/fixit-server 0.0.2 53cac5b0a876 10 hours ago 1.3GB
vinnytwice/fixit-server latest 53cac5b0a876 10 hours ago 1.3GB
You can use docker_image_find_tag.sh to check whether your image has a latest tag or not.
It will show the tag/version and whether the image appears as image:<none> or image:latest.
That way, you can check whether, as mentioned in "How to fix ErrImagePull and ImagePullBackoff", this is linked to:
Cause: Pod specification provides an invalid tag, or fails to provide a tag
Resolution: Edit pod specification and provide the correct tag.
If the image does not have a latest tag, you must provide a valid tag
And:
What's the correct way of using the :latest tagged image
Ideally, by not using it ;) latest can shift at any time, and by using a fixed tag you ensure better reproducibility of your deployment.
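Along the same lines, an image can be pinned even more strictly by digest instead of tag (a sketch; the digest placeholder would come from docker images --digests or from the registry):
spec:
  containers:
    - name: fixit-server-container
      image: vinnytwice/fixit-server@sha256:<digest>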

What does the Kubernetes pod status `ErrImagePull` mean?

I am at the initial stage of Kubernetes. I've just created a pod using the command:
kubectl apply -f posts.yaml
It returns me the following:
pod/posts created
After that, when I run kubectl get pods, I get the following:
NAME READY STATUS RESTARTS AGE
posts 0/1 ErrImagePull 0 2m4s
Here is my posts.yaml file in below:
apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
    - name: posts
      image: bappa/posts:0.0.1
This means that Kubernetes could not pull the image from the repository. Does the repo maybe need some authorization to allow the image pull?
You can do
kubectl describe pod posts
to get some more info.
After applying the yaml and looking into kubectl describe pod posts, you can clearly see the error below:
Normal BackOff 21s kubelet Back-off pulling image "bappa/posts:0.0.1"
Warning Failed 21s kubelet Error: ImagePullBackOff
Normal Pulling 9s (x2 over 24s) kubelet Pulling image "bappa/posts:0.0.1"
Warning Failed 8s (x2 over 22s) kubelet Failed to pull image "bappa/posts:0.0.1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bappa/posts, repository does not exist or may require 'docker login'
Warning Failed 8s (x2 over 22s) kubelet Error: ErrImagePull
Failed to pull image "bappa/posts:0.0.1": rpc error: code = Unknown desc = Error response from daemon: pull access denied for bappa/posts, repository does not exist or may require 'docker login'
That means either you have the posts image in your PRIVATE bappa repository, or you are using an image that does not exist at all. So if this is your private repo, you need to be authorized.
Maybe you wanted to use cleptes/posts:0.01 ?
apiVersion: v1
kind: Pod
metadata:
  name: posts
spec:
  containers:
    - name: posts
      image: cleptes/posts:0.01
kubectl get pods posts
NAME READY STATUS RESTARTS AGE
posts 1/1 Running 0 26m10s
kubectl describe pod posts
Normal Pulling 20s kubelet Pulling image "cleptes/posts:0.01"
Normal Pulled 13s kubelet Successfully pulled image "cleptes/posts:0.01"
Normal Created 13s kubelet Created container posts
Normal Started 12s kubelet Started container posts
Basically, ErrImagePull means Kubernetes is unable to locate the image bappa/posts:0.0.1. Either the registry settings are not correct on the worker nodes, or your image name or tag is not correct.
Just like @Henry explained, issue a kubectl describe pod posts and inspect (and share) the error messages.
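The same events can also be listed directly, without the rest of the describe output (a sketch, assuming the pod is named posts):
kubectl get events --field-selector involvedObject.name=posts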
If you are using a private repository you need to be authorized. If you are authorized and still can't reach the repository, I think it might be because you are using a free account on Docker Hub and have more private repositories than the single free one. If you try to push your repository again you should get the error 'denied: requested access to the resource is denied'.
If you make your repository public it should solve your issue.

Where is kube-apiserver located

Base question: When I try to use kube-apiserver on my master node, I get a 'command not found' error. How can I install/configure kube-apiserver? Any link to an example will help.
$ kube-apiserver --enable-admission-plugins DefaultStorageClass
-bash: kube-apiserver: command not found
Details: I am new to Kubernetes and Docker and was trying to create a StatefulSet with volumeClaimTemplates. My problem is that the automatic PVs are not created and I get this message in the PVC log: "persistentvolume-controller waiting for a volume to be created". I am not sure whether I need to define a DefaultStorageClass, and hence needed kube-apiserver to define it.
Name: nfs
Namespace: default
StorageClass: example-nfs
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=example.com/nfs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ExternalProvisioning 3m (x2401 over 10h) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/nfs" or manually created by system administrator
Here is get pvc result:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs Pending example-nfs 10h
And get storageclass:
$ kubectl describe storageclass example-nfs
Name: example-nfs
IsDefaultClass: No
Annotations: <none>
Provisioner: example.com/nfs
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
How can I troubleshoot this issue (e.g. logs for why the storage was not created)?
You are asking two different questions here, one about kube-apiserver configuration, one about troubleshooting your StorageClass.
Here's an answer for your first question:
kube-apiserver is running as a Docker container on your master node. Therefore, the binary is within the container, not on your host system. It is started by the master's kubelet from a file located in /etc/kubernetes/manifests. The kubelet watches this directory and will start any Pod defined there as a "static pod".
To configure kube-apiserver command line arguments you need to modify /etc/kubernetes/manifests/kube-apiserver.yaml on your master.
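For example, enabling the admission plugin from the question would mean editing the command list in that manifest (a sketch of the relevant fragment only; a real manifest carries many more flags):
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment)
spec:
  containers:
    - command:
        - kube-apiserver
        - --enable-admission-plugins=NodeRestriction,DefaultStorageClass
The kubelet notices the file change and restarts the static pod automatically.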
I'll refer to the question regarding the location of the api-server.
Basic answer (specific to the question title):
The kube apiserver is located on the master node (known as the control plane).
It can be executed:
1 ) Via the host's init system (like systemd).
2 ) As a pod (I'll explain below).
In both cases it will be located on the control plane:
If it's running under systemd you can run systemctl status kube-apiserver to see the path to the configuration (drop-in) file.
If it is running as a pod you can view it under the kube-system namespace with all the other control plane components (plus kube-proxy and perhaps a network solution like Weave):
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-lpdlc 1/1 Running 1 2d22h
coredns-f9fd979d6-vcs7g 1/1 Running 1 2d22h
etcd-my-master 1/1 Running 1 2d22h
kube-apiserver-my-master 1/1 Running 1 2d22h #<----Here
kube-controller-manager-my-master 1/1 Running 1 2d22h
kube-proxy-kh2lc 1/1 Running 1 2d22h
kube-scheduler-my-master 1/1 Running 1 2d22h
weave-net-59r5b 2/2 Running 3 2d22h
You can run:
kubectl describe pod/kube-apiserver-my-master -n kube-system
in order to get more details regarding the pod.
A bit more advanced answer:
(regarding the location of /etc/kubernetes/manifests)
Let's say we have no idea where to find the relevant path for the kube-apiserver config file.
But we need to remember two important things:
1 ) The kube-apiserver is running on the master node.
2 ) The kubelet isn't running as a pod, and when the control plane components (plus kube-proxy) are executed as static pods, it is done by the kubelet on the master node.
So we can start our journey toward the manifests path by investigating the kubelet logs.
If the kubelet has been running for a long time the log will be a very large file and we'll need to dump it somewhere and go to the beginning; or, if the kubelet was started 5 minutes ago, we can run:
sudo journalctl -u kubelet --since -5m >> kubelet_5_minutes.log
A quick search for "api-server" will bring us to the two lines below, where the path of the manifests is mentioned:
my-master kubelet[71..]: 00:03:21 kubelet.go:261] Adding pod path: /etc/kubernetes/manifests
my-master kubelet[71..]: 00:03:21 kubelet.go:273] Watching apiserver
And we can also see that the kubelet is trying to create the kube-apiserver pod under the my-master node, inside the kube-system namespace:
my-master kubelet[71..]: 00:03:29.05 kubelet.go:1576] ..
Creating a mirror pod for "kube-apiserver-my-master_kube-system
To make the storage class example-nfs the default, you need to run the below command:
kubectl patch storageclass example-nfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

What is the meaning of ImagePullBackOff status on a Kubernetes pod?

I'm trying to run my first kubernetes pod locally.
I've run the following command (from here):
export ARCH=amd64
docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2
Then, I tried to run the following:
kubectl create -f ./run-aii.yaml
run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: true
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: true
      - name: kafka
        image: kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      - name: ws
        image: ws
        ports:
        - containerPort: 3000
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath:
          path: /home/aii/general/core
      - name: genome-aii
        hostPath:
          path: /home/aii/general/genome
      - name: main-aii
        hostPath:
          path: /home/aii/general/main
Now, when I run: kubectl get pods
I'm getting:
NAME READY STATUS RESTARTS AGE
aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m
aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m
aii-pod 0/3 ImagePullBackOff 0 23h
k8s-etcd-127.0.0.1 1/1 Running 0 2d
k8s-master-127.0.0.1 4/4 Running 0 2d
k8s-proxy-127.0.0.1 1/1 Running 0 2d
nginx-198147104-9kajo 1/1 Running 0 2d
BTW: docker images returns:
REPOSITORY TAG IMAGE ID CREATED SIZE
ws latest fa7c5f6ef83a 7 days ago 706.8 MB
kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB
aii latest bd12c4acbbaf 9 days ago 1.421 GB
node 4.4 1a93433cee73 11 days ago 647 MB
gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB
nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB
docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB
spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB
wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB
centos latest 61b442687d68 4 months ago 196.6 MB
centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB
gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB
gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB
sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB
Why do I get ImagePullBackOff?
By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it.
You can run a local Kubernetes registry with the registry cluster addon.
Then tag your images with localhost:5000:
docker tag aii localhost:5000/dev/aii
Push the image to the Kubernetes registry:
docker push localhost:5000/dev/aii
And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image.
Alternatively, you can run a private Docker registry through one of the providers that offers this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get setup with a local Kubernetes Docker registry.
One issue that may cause an ImagePullBackOff, especially if you are pulling from a private registry, is the pod not being configured with the imagePullSecret for that registry.
An authentication error may cause an imagePullBackOff.
I had the same problem. What caused it was that I had already created a pod from the Docker image via the .yml file, but I had mistyped the name: test-app:1.0.1 when I needed test-app:1.0.2 in my .yml file. So I ran kubectl delete pods --all to remove the faulty pod, then reran kubectl create -f name_of_file.yml, which solved my problem.
You can also specify imagePullPolicy: Never in the container's spec:
containers:
  - name: nginx
    imagePullPolicy: Never
    image: custom-nginx
    ports:
      - containerPort: 80
The issue arises when the image is not present on the cluster and Kubernetes has to pull it from the respective registry.
The k8s engine supports the 3 imagePullPolicy values mentioned:
Always: always pull the image for the container, irrespective of changes to the image
Never: never pull a new image for the container
IfNotPresent: pull the image only if it is not already present on the node
Best practice: always tag the new image in both the Dockerfile and the k8s deployment file, so that the intended image is pulled into the container; see the sketch below.
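A minimal sketch of that practice, combining an explicit policy with a pinned tag (the names are placeholders):
containers:
  - name: my-app
    image: myrepo/my-app:1.0.3
    imagePullPolicy: IfNotPresent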
I too had this problem; when I checked, the image that I was pulling from a private registry had been removed.
If we describe the pod, it will show the pulling events and the image it's trying to pull:
kubectl describe pod <POD_NAME>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 18h (x35 over 20h) kubelet, gsk-kub Pulling image "registeryName:tag"
Normal BackOff 11m (x822 over 20h) kubelet, gsk-kub Back-off pulling image "registeryName:tag"
Warning Failed 91s (x858 over 20h) kubelet, gsk-kub Error: ImagePullBackOff
Despite all the other great answers, none helped me until I found a comment that pointed to the Updating images documentation:
The default pull policy is IfNotPresent which causes the kubelet to skip pulling an image if it already exists.
That's exactly what I wanted, but it didn't seem to work.
Reading further, the documentation says the following:
If you would like to always force a pull, you can do one of the following:
omit the imagePullPolicy and use :latest as the tag for the image to use.
When I replaced latest with a version (that I had pushed to minikube's Docker daemon), it worked fine.
$ kubectl create deployment presto-coordinator \
--image=warsaw-data-meetup/presto-coordinator:beta0
deployment.apps/presto-coordinator created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
presto-coordinator 1/1 1 1 3s
Find the pod of the deployment (using kubectl get pods) and use kubectl describe pod to find out more on the pod.
Debugging step:
kubectl get pod [name] -o yaml
Run this command to get the YAML configuration of the pod (see "Get YAML for deployed Kubernetes services?"). In my case, the error was under this section:
state:
  waiting:
    message: 'rpc error: code = Unknown desc = Error response from daemon: Get
      https://repository:9999/v2/abc/location/image/manifests/tag:
      unauthorized: BAD_CREDENTIAL'
    reason: ErrImagePull
My issue got resolved upon adding the appropriate tag to the image I wanted to pull from Docker Hub.
Previously:
containers:
  - name: nginx
    image: alex/my-app-image
Corrected Version:
containers:
  - name: nginx
    image: alex/my-app-image:1.1
The image has only one version, which was 1.1. Since I omitted that initially, it threw an error.
After correctly specifying the version, it worked fine!
I had a similar problem when using minikube over Hyper-V with 2048 MB of memory.
I found that in Hyper-V Manager the Memory Demand was higher than what was allocated.
So I stopped minikube and assigned somewhere between 4096 and 6144 MB. It worked fine after that, with all pods running!
I don't know if this can nail down the issue in every case, but just have a look at the memory and disk allocated to minikube.
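A sketch of the resize (the exact amount is a judgment call; the memory of an existing minikube VM cannot be changed in place, so the cluster is recreated):
minikube stop
minikube config set memory 6144
minikube delete
minikube start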
I faced the same issue.
ImagePullBackOff means it is not able to pull the Docker image from the registry, or there is an issue with your registry.
The solution would be as below:
1. Check your image registry name.
2. Check the image pull secrets.
3. Check that the image is present with the same tag or name.
4. Check that your registry is reachable and working.
ImagePullBackOff means you have not passed a secret in your YAML, the secret is wrong, or the image name might be wrong.
If you are pulling an image from a private registry, you have to provide an image pull secret; then it will be able to pull the image.
You also need to create the secret before you deploy the pod. You can use the command below to create the secret.
kubectl create secret docker-registry regcred --docker-server=artifacts.exmple.int --docker-username=<username> --docker-password=<password> -n <namespace>
You can reference the secret in your YAML like below:
imagePullSecrets:
  - name: regcred
I had this error when I tried to create a ReplicationController. The issue was that I had misspelled the nginx image name in the template definition.
Note: this error occurs when Kubernetes is unable to pull the specified image from the repository.
I had the same issue.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-zcr5d 0/1 ImagePullBackOff 0 6m21s
Later I found that the Docker daemon the pod was created on was using a private registry for images, and nginx was not present in it.
I changed the Docker registry to the default and reloaded the daemon.
After that, the issue got resolved.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-7cbjf 1/1 Running 0 33s
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$ kubectl exec -it nginx-598b589c46-7cbjf -- /bin/bash
root@nginx-598b589c46-7cbjf:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@nginx-598b589c46-7cbjf:/#
In my case, Kubernetes was not able to communicate with my private registry running on localhost:5000 after the update to macOS Monterey. It had been running fine previously. The reason was that Apple AirPlay now listens on port 5000.
To resolve this issue, I disabled the AirPlay Receiver.
Go to System Preferences > Sharing > disable the checkbox for AirPlay Receiver.
Source link: https://developer.apple.com/forums/thread/682332
To handle this error, you just have to create a Kubernetes secret and use it in your manifest.yaml file.
If it is a private repository, then it is mandatory to use an image pull secret.
To generate the secret:
kubectl create secret docker-registry docker-secrets --docker-server=https://index.docker.io/v1/ --docker-username=ExampleName --docker-password=ExamplePassword --docker-email=example@gmail.com
For --docker-server, use https://index.docker.io/v1/.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: ExampleUsername/test:tagname
      ports:
        - containerPort: 3015
  imagePullSecrets:
    - name: docker-secrets
