Kubernetes ImagePullBackOff with Private Registry on Docker Hub - docker

I have a private Docker Hub registry with a (rather large) image in it that I control.
I also have a Helm deployment chart that specifies an imagePullSecret, after having followed the instructions here https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/.
No matter what I do, though, when installing the Helm chart, I always end up with the following (taken from kubectl describe pod <pod-id>):
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned default/<release>-69584657b7-vkps6 to <node>
Warning Failed 6m28s (x3 over 20m) kubelet Failed to pull image "<registry-username>/
</images>
</configuration>
</plugin>
ContainerD (and Windows)
I had the same error, while trying to run a custom windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl image. However they didn't show up in crictl images.
Thanks to a comment on Github, I found out that the actual problem is a different namespace of ContainerD.
As shown by the following two commands, images are not automatically build in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
I was facing similar issue .Image was present in local but k8s was not able to pick it up.
So I went to terminal ,deleted the old image and ran eval $(minikube -p minikube docker-env) command.
Rebuilt the image and the redeployed the deployment yaml ,and it worked

Resources