What is the meaning of ImagePullBackOff status on a Kubernetes pod? - docker

I'm trying to run my first kubernetes pod locally.
I've run the following command (from here):
export ARCH=amd64
docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2
Then, I've tried to run the following:
kubectl create -f ./run-aii.yaml
run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
      - name: aii
        image: aii
        ports:
        - containerPort: 5144
        env:
        - name: KAFKA_IP
          value: kafka
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /home/aii/core
          name: core-aii
          readOnly: true
        - mountPath: /home/aii/genome
          name: genome-aii
          readOnly: true
        - mountPath: /home/aii/main
          name: main-aii
          readOnly: true
      - name: kafka
        image: kafkazoo
        volumeMounts:
        - mountPath: /root/script
          name: scripts-data
          readOnly: true
        - mountPath: /root/config
          name: config-data
          readOnly: true
      - name: ws
        image: ws
        ports:
        - containerPort: 3000
      volumes:
      - name: scripts-data
        hostPath:
          path: /home/aii/general/infra/script
      - name: config-data
        hostPath:
          path: /home/aii/general/infra/config
      - name: core-aii
        hostPath:
          path: /home/aii/general/core
      - name: genome-aii
        hostPath:
          path: /home/aii/general/genome
      - name: main-aii
        hostPath:
          path: /home/aii/general/main
Now, when I run: kubectl get pods
I'm getting:
NAME READY STATUS RESTARTS AGE
aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m
aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m
aii-pod 0/3 ImagePullBackOff 0 23h
k8s-etcd-127.0.0.1 1/1 Running 0 2d
k8s-master-127.0.0.1 4/4 Running 0 2d
k8s-proxy-127.0.0.1 1/1 Running 0 2d
nginx-198147104-9kajo 1/1 Running 0 2d
BTW: docker images return:
REPOSITORY TAG IMAGE ID CREATED SIZE
ws latest fa7c5f6ef83a 7 days ago 706.8 MB
kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB
aii latest bd12c4acbbaf 9 days ago 1.421 GB
node 4.4 1a93433cee73 11 days ago 647 MB
gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB
nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB
docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB
spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB
wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB
centos latest 61b442687d68 4 months ago 196.6 MB
centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB
gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB
gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB
sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB
Why do I get ImagePullBackOff?

By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it.
You can run a local Kubernetes registry with the registry cluster addon.
Then tag your images with localhost:5000:
docker tag aii localhost:5000/dev/aii
Push the image to the Kubernetes registry:
docker push localhost:5000/dev/aii
And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image.
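For example, the container section of run-aii.yaml would then look something like this (a minimal sketch, assuming the image was tagged and pushed as above):
containers:
- name: aii
  image: localhost:5000/dev/aii
  ports:
  - containerPort: 5144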
Alternatively, you can run a private Docker registry through one of the providers that offers this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get set up with a local Kubernetes Docker registry.

One issue that may cause an ImagePullBackOff, especially if you are pulling from a private registry, is the pod not being configured with the imagePullSecret of that registry.
The resulting authentication error then causes the ImagePullBackOff.

I had the same problem. What caused it was that I had already created a pod from the Docker image via the .yml file, but I mistyped the tag, i.e. test-app:1.0.1 when I needed test-app:1.0.2 in my .yml file. So I ran kubectl delete pods --all to remove the faulty pod, then reran kubectl create -f name_of_file.yml, which solved my problem.

You can also specify imagePullPolicy: Never in the container's spec:
containers:
- name: nginx
  imagePullPolicy: Never
  image: custom-nginx
  ports:
  - containerPort: 80

The issue arises when the image is not present on the cluster node and the k8s engine goes to pull it from the respective registry.
Kubernetes supports three types of imagePullPolicy:
Always: always pulls the image for the container, irrespective of whether it is already present on the node.
Never: never pulls the image; it must already be present on the node.
IfNotPresent: pulls the image only if it is not already present on the node.
Best practice: it is always recommended to tag the new image explicitly, both when building it and in the k8s deployment file, so that the intended image is pulled into the container; see the sketch below.
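For instance, a container spec that pins a version and makes the pull policy explicit might look like this (image name and tag are placeholders, not taken from the question):
containers:
- name: my-app
  image: my-registry.example.com/my-app:1.0.0
  imagePullPolicy: IfNotPresent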

I too had this problem; when I checked, the image that I was pulling from a private registry had been removed.
If we describe the pod, it will show the pulling events and the image it's trying to pull:
kubectl describe pod <POD_NAME>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 18h (x35 over 20h) kubelet, gsk-kub Pulling image "registeryName:tag"
Normal BackOff 11m (x822 over 20h) kubelet, gsk-kub Back-off pulling image "registeryName:tag"
Warning Failed 91s (x858 over 20h) kubelet, gsk-kub Error: ImagePullBackOff

Despite all the other great answers, none helped me until I found a comment that pointed out this section of Updating images:
The default pull policy is IfNotPresent which causes the kubelet to skip pulling an image if it already exists.
That's exactly what I wanted, but it didn't seem to work.
Reading further, the docs say the following:
If you would like to always force a pull, you can do one of the following:
omit the imagePullPolicy and use :latest as the tag for the image to use.
When I replaced latest with a version (that I had pushed to minikube's Docker daemon), it worked fine.
$ kubectl create deployment presto-coordinator \
--image=warsaw-data-meetup/presto-coordinator:beta0
deployment.apps/presto-coordinator created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
presto-coordinator 1/1 1 1 3s
Find the pod of the deployment (using kubectl get pods) and use kubectl describe pod to find out more on the pod.
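For example (the pod name suffix is a placeholder; kubectl create deployment labels its pods with app=<deployment-name>):
kubectl get pods -l app=presto-coordinator
kubectl describe pod presto-coordinator-7d4f9c6b8-abcde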

Debugging step:
kubectl get pod [name] -o yaml
Run this command to get the YAML configuration of the pod (Get YAML for deployed Kubernetes services?). In my case, the cause was under this section:
state:
  waiting:
    message: 'rpc error: code = Unknown desc = Error response from daemon: Get
      https://repository:9999/v2/abc/location/image/manifests/tag:
      unauthorized: BAD_CREDENTIAL'
    reason: ErrImagePull

My issue got resolved upon adding the appropriate tag to the image I wanted to pull from Docker Hub.
Previously:
containers:
- name: nginx
  image: alex/my-app-image
Corrected version:
containers:
- name: nginx
  image: alex/my-app-image:1.1
The image has only one version, which was 1.1. Since I omitted the tag initially, Kubernetes defaulted to :latest, which does not exist for this image, so it threw an error.
After specifying the version correctly, it worked fine!

I had a similar problem when using minikube over Hyper-V with 2048 MB of memory.
I found that in Hyper-V Manager the Memory Demand was higher than what was allocated.
So I stopped minikube and assigned somewhere between 4096 and 6144 MB. It worked fine after that, with all pods running!
I don't know if this can nail down the issue in every case, but just have a look at the memory and disk allocated to minikube.

I faced the same issue.
ImagePullBackOff means Kubernetes is not able to pull the Docker image from the registry, or there is some issue with your registry.
The solution would be as below (a command sketch follows this list):
1. Check your image registry name.
2. Check the image pull secrets.
3. Check that the image is present with the same tag or name.
4. Check that your registry is reachable.
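A rough command sketch for those checks (all names are placeholders):
kubectl describe pod <pod-name>                # 1 and 3: see the exact image reference the kubelet tried to pull
kubectl get secret <pull-secret-name> -o yaml  # 2: confirm the pull secret exists in the pod's namespace
docker pull <registry>/<image>:<tag>           # 3 and 4: confirm the image and tag exist and the registry responds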

ImagePullBackOff means you have not passed a secret in your YAML, the secret is wrong, or possibly your image name is wrong.
If you are pulling an image from a private registry, you have to provide an image pull secret; then it will be able to pull the image.
You also need to create the secret before you deploy the pod. You can use the below command to create the secret.
kubectl create secret docker-registry regcred --docker-server=artifacts.exmple.int --docker-username=<username> --docker-password=<password> -n <namespace>
You can pass the secret in YAML like below.
imagePullSecrets:
- name: regcred
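Note that imagePullSecrets is a pod-level field, a sibling of containers rather than something nested inside a container; roughly (the image path is a placeholder):
spec:
  containers:
  - name: my-app
    image: artifacts.exmple.int/my-app:1.0.0
  imagePullSecrets:
  - name: regcred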

I had this error when I tried to create a replication controller. The issue was that I had wrongly spelled the nginx image name in the template definition.
Note: This error occurs when kubernetes is unable to pull the specified image from the repository.

I had the same issue.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-zcr5d 0/1 ImagePullBackOff 0 6m21s
Later I found that the Docker daemon on which the pod is created was using a private registry for images, and nginx was not present in it.
I changed the Docker registry to the default one and reloaded the daemon.
After that, the issue got resolved.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-7cbjf 1/1 Running 0 33s
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$ kubectl exec -it nginx-598b589c46-7cbjf -- /bin/bash
root@nginx-598b589c46-7cbjf:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@nginx-598b589c46-7cbjf:/#

In my case, Kubernetes was not able to communicate with my private registry running on localhost:5000 after updating to macOS Monterey. It had been running fine previously. The reason was that Apple AirPlay now listens on port 5000.
In order to resolve this issue, I disabled the AirPlay Receiver.
Go to System Preferences > Sharing > uncheck the checkbox for AirPlay Receiver.
Source link: https://developer.apple.com/forums/thread/682332
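To confirm what is holding the port before changing any settings, you can check the listener on 5000, for example with lsof (on Monterey this is typically the ControlCenter process when AirPlay Receiver is enabled):
lsof -iTCP:5000 -sTCP:LISTEN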

To handle this error, you just have to create a Kubernetes secret and use it in your manifest.yaml file.
If it is a private repository, then it is mandatory to use an image pull secret.
To generate the secret:
kubectl create secret docker-registry docker-secrets --docker-server=https://index.docker.io/v1/ --docker-username=ExampleName --docker-password=ExamplePassword --docker-email=example@gmail.com
For --docker-server, use https://index.docker.io/v1/.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test
    image: ExampleUsername/test:tagname
    ports:
    - containerPort: 3015
  imagePullSecrets:
  - name: docker-secrets

Related

Created pod get ErrImgPull when using :latest tagged docker image

I'm starting out in Kubernetes and I'm trying to update the image in Docker Hub that is used for the Kubernetes pod creation; then, with the kubectl rollout restart deployment deploymentName command, it should pull the newest image and rebuild the pods.
The problem I'm facing is that it only works when I specify a version in the tag, both in the image and in the deployment.yaml file.
In my repo I have 2 images, fixit-server:latest and fixit-server:0.0.2 (the actual latest one).
With the deployment.yaml file set as
spec:
  containers:
  - name: fixit-server-container
    image: vinnytwice/fixit-server
    # imagePullPolicy: Never
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
I run kubectl apply -f infrastructure/k8s/server-deployment.yaml and it gets created, but when running kubectl get pods I get
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
fixit-server-5c7bfbc5b7-cgk24 0/1 ErrImagePull 0 7s
fixit-server-5c7bfbc5b7-g7f8x 0/1 ErrImagePull 0 7s
I then instead specify the version number in the deployment.yaml file
spec:
  containers:
  - name: fixit-server-container
    image: vinnytwice/fixit-server:0.0.2
    # imagePullPolicy: Never
    resources:
      limits:
        memory: "128Mi"
        cpu: "500m"
I run kubectl apply -f infrastructure/k8s/server-deployment.yaml again and it gets configured as expected.
Running kubectl rollout restart deployment fixit-server, I get the "restarted" confirmation as expected.
But still running kubectl get pods shows
vincenzocalia@vincenzos-MacBook-Air server-node % kubectl get pods
NAME READY STATUS RESTARTS AGE
fixit-server-5c7bfbc5b7-cgk24 0/1 ImagePullBackOff 0 12m
fixit-server-5d78f8848c-bbxzx 0/1 ImagePullBackOff 0 2m58s
fixit-server-66cb98855c-mg2jn 0/1 ImagePullBackOff 0 74s
So I deleted the deployment and applied it again and pods are now running correctly.
Why, when omitting a version number for the image (which should imply :latest), doesn't the :latest tagged image get pulled from the repo?
What's the correct way of using the :latest tagged image?
Thank you very much.
Cheers
repo:
images:
REPOSITORY TAG IMAGE ID CREATED SIZE
vinnytwice/fixit-server 0.0.2 53cac5b0a876 10 hours ago 1.3GB
vinnytwice/fixit-server latest 53cac5b0a876 10 hours ago 1.3GB
You can use docker_image_find_tag.sh to check whether your image has a latest tag or not.
It will show the tag/version for images that appear as image:<none> or image:latest.
That way you can check whether, as mentioned in "How to fix ErrImagePull and ImagePullBackoff", this is linked to:
Cause: Pod specification provides an invalid tag, or fails to provide a tag
Resolution: Edit pod specification and provide the correct tag.
If the image does not have a latest tag, you must provide a valid tag
And:
What's the correct way of using the :latest tagged image
Ideally, by not using it ;) latest can shift at any time, and by using a fixed tag you ensure better reproducibility of your deployment.
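For example, rolling out a new version by bumping a fixed tag instead of relying on :latest could look like this (deployment and container names are taken from the question; the 0.0.3 tag is hypothetical):
kubectl set image deployment/fixit-server fixit-server-container=vinnytwice/fixit-server:0.0.3
kubectl rollout status deployment/fixit-server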

httpd Docker image CrashLoopBackOff on Kubernetes

I have a simple docker image which is working fine locally.
It is basically the same as the example on apache's httpd page.
FROM httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
As per the page example, I can build and run my image as follows:
$ docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
$ docker run -dit --name my-running-app -p 8080:80 <img_id>
I then head over to http://localhost:8080, and everything seems to be working as it should.
However, when I try to create a deployment for my Google Cloud Kubernetes instance, my pod fails and gets to the state of CrashLoopBackOff. (This is after I have pushed the image to Google Cloud Registry, so that the deployment may grab the image from there.)
I think that this CrashLoopBackOff problem is happening due to my not having an ENTRYPOINT in my container; i.e., the pod spawns, no command is issued, and then it completes and crashes.
I have 2 questions then:
What command should I add to my Dockerfile to get the http server up and running on the pod (assuming my assessment of the problem is indeed correct)?
How is this running locally? Locally I simply $ docker run -dit --name my-running-app -p 8080:80 <img_id>. I do not specify that the container should run httpd, yet it does? How is this happening?
Edit - additional information:
I deployed onto K8's by doing the following:
$ kubectl create deployment hello-app --image=gcr.io/${PROJECT_ID}/hello-app:v1
Kubectl logs:
$ kubectl logs <pod_name>
standard_init_linux.go:211: exec user process caused "exec format error"
kubectl describe:
$ kubectl describe pod hello-app-6b89cd98f6-gn65p
Name: <name>
Namespace: default
Priority: 0
Node: <my_node>
Start Time: Mon, 22 Mar 2021 12:32:51 +0200
Labels: app=hello-app
pod-template-hash=6b89cd98f6
Annotations: <none>
Status: Running
IP: 10.12.1.13
IPs:
IP: 10.12.1.13
Controlled By: <replica_set>
Containers:
hello-app:
Container ID: <cid>
Image: <img>
Image ID: <img_id>
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Mon, 22 Mar 2021 15:12:18 +0200
Finished: Mon, 22 Mar 2021 15:12:18 +0200
Ready: False
Restart Count: 36
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-b8p9t (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-b8p9t:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-b8p9t
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 4m9s (x741 over 164m) kubelet Back-off restarting failed container
A CrashLoopBackOff error means the pod keeps crashing and Kubernetes has given up on it. You have to determine what is causing the crash.
Overall, the cause of the problem may be that:
the parameters of the pod or container have been configured incorrectly
the application inside the container keeps crashing
an error occurred while deploying Kubernetes
You can type watch kubectl describe <pod-name> to check events as the pod is being created. But if the pod crashes after it starts up, you need to get the container logs: kubectl logs -f <your-pod-name>.
Read more: kubernetes-crashloopbackoff.
As @Krishna Chaurasia said, check the linked thread, which implies that the default command being run is not an executable; executable formats can differ between platforms. As @Sagar Velankar mentioned, use the --platform flag on the FROM line of the Dockerfile to specify linux/amd64 as the target architecture. See: dockerfile-from.
You can use docker buildx (docs.docker.com/docker-for-mac/multi-arch) to build and push multi-architecture images, and the kubelet will pull the image with the correct architecture. A sketch of both options follows.
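A rough sketch of both approaches, reusing the Dockerfile and image name from the question (assumes a buildx builder is configured; the flags are standard Docker/buildx options):
# Option 1: pin the target platform in the Dockerfile
FROM --platform=linux/amd64 httpd:2.4
COPY ./public-html/ /usr/local/apache2/htdocs/
# Option 2: build and push a multi-architecture image with buildx
docker buildx build --platform linux/amd64,linux/arm64 -t gcr.io/${PROJECT_ID}/hello-app:v1 --push .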

127.0.0.1:5000: getsockopt: connection refused in Minikube

Using minikube and docker on my local Ubuntu workstation I get the following error in the Minikube web UI:
Failed to pull image "localhost:5000/samples/myserver:snapshot-180717-213718-0199": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
after I have created the below deployment config with:
kubectl apply -f hello-world-deployment.yaml
hello-world-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/samples/myserver:snapshot-180717-213718-0199
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
And output from docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
samples/myserver latest aa0a1388cd88 About an hour ago 435MB
samples/myserver snapshot-180717-213718-0199 aa0a1388cd88 About an hour ago 435MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 months ago 97MB
Based on this guide:
How to use local docker images with Minikube?
I have also run:
eval $(minikube docker-env)
and based on this:
https://github.com/docker/for-win/issues/624
I have added:
"InsecureRegistry": [
"localhost:5000",
"127.0.0.1:5000"
],
to /etc/docker/daemon.json
Any suggestion on what I'm missing to get the image pull to work in minikube?
I have followed the steps in the below answer but when I get to this step:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
it just hangs like this:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
and I get the same error in minikube dashboard after I create my deploymentconfig.
Based on the answer from BMitch, I have now tried to create a local Docker registry and push an image to it with:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu:v1
$ docker push localhost:5000/ubuntu:v1
Next when I do docker images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 74f8760a2a8b 4 days ago 82.4MB
localhost:5000/ubuntu v1 74f8760a2a8b 4 days ago 82.4MB
I have then updated my deploymentconfig hello-world-deployment.yaml to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
      - name: hello-world
        image: localhost:5000/ubuntu:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 8080
and
kubectl create -f hello-world-deployment.yaml
But in Minikube I still get similar error:
Failed to pull image "localhost:5000/ubuntu:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
So seems Minikube is not allowed to see the local registry I just created?
It looks like you're facing a problem with localhost on your computer versus localhost used within the context of the minikube VM.
To get the registry working, you have to set up additional port forwarding.
If your minikube installation is currently broken due to a lot of attempts to fix registry problems,
I would suggest restarting minikube environment:
minikube stop && minikube delete && rm -fr $HOME/.minikube && minikube start
Next, get kube registry yaml file:
curl -O https://gist.githubusercontent.com/coco98/b750b3debc6d517308596c248daf3bb1/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml
Then, apply it on minikube:
kubectl create -f kube-registry.yaml
Test whether the registry inside the minikube VM works (run curl from within the ssh session):
minikube ssh
curl localhost:5000
On Ubuntu, forward ports to reach registry at port 5000:
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
If you would like to share your private registry from your machine, you may be interested in sharing local registry for minikube blog entry.
If you're specifying the image source as the local registry server, you'll need to run a registry server there, and push your images to it.
You can self host a registry server with multiple 3rd party options, or run this one that is packaged inside a docker container: https://hub.docker.com/_/registry/
This only works in a single-node environment unless you set up TLS keys, trust the CA, or tell all other nodes about the additional insecure registry.
You can also specify the imagePullPolicy as Never.
Both of these solutions were already in your linked question and I'm not seeing any evidence of you trying either in this question. Without showing how you tried those steps and experienced a different problem, this question should probably be closed as a duplicate.
It is unclear from your question how many nodes you have.
If you have more than one, your problem is in your deployment with replicas: 1.
If not, please ignore this answer.
You don't know where that replica will be scheduled. So if you don't have the local Docker registry on all of your nodes, and you are unlucky and Kubernetes schedules the pod on a node without the registry, you will end up with that error.
The same thing happened to me: the same connection refused error, because the deployment went to a node without the local Docker registry.
As I am typing this, I think this can be resolved with an Ingress.
You run the registry as a Deployment, add a Service, add a volume for the images, and expose it through an Ingress.
A little more work, but at least all your nodes (all of your pods, sorry) will be in sync.

Running kubernetes autoscaler

I have a replication controller running with the following spec:
apiVersion: v1
kind: ReplicationController
metadata:
  name: owncloud-controller
spec:
  replicas: 1
  selector:
    app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
      - name: owncloud
        image: adimania/owncloud9-centos7
        ports:
        - containerPort: 80
        volumeMounts:
        - name: userdata
          mountPath: /var/www/html/owncloud/data
        resources:
          requests:
            cpu: 400m
      volumes:
      - name: userdata
        hostPath:
          path: /opt/data
Now I run a hpa using autoscale command.
$ kubectl autoscale rc owncloud-controller --max=5 --cpu-percent=10
I have also started heapster using kubernetes run command.
$ kubectl run heapster --image=gcr.io/google_containers/heapster:v1.0.2 --command -- /heapster --source=kubernetes:http://192.168.0.103:8080?inClusterConfig=false --sink=log
After all this, the autoscaling never kicks in. From logs, it seems that the actual CPU utilization is not getting reported.
$ kubectl describe hpa owncloud-controller
Name: owncloud-controller
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Thu, 26 May 2016 14:24:51 +0530
Reference: ReplicationController/owncloud-controller/scale
Target CPU utilization: 10%
Current CPU utilization: <unset>
Min replicas: 1
Max replicas: 5
ReplicationController pods: 1 current / 1 desired
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
44m 8s 92 {horizontal-pod-autoscaler } Warning FailedGetMetrics failed to get CPU consumption and request: metrics obtained for 0/1 of pods
44m 8s 92 {horizontal-pod-autoscaler } Warning FailedComputeReplicas failed to get CPU utilization: failed to get CPU consumption and request: metrics obtained for 0/1 of pods
What am I missing here?
Most probably heapster is running in the wrong namespace ("default"). HPA expects heapster to be in the "kube-system" namespace. Please add --namespace=kube-system to the kubectl run heapster command; see the sketch below.
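For example, the heapster run command from the question, with only the namespace added (same image and flags as above):
kubectl run heapster --namespace=kube-system --image=gcr.io/google_containers/heapster:v1.0.2 --command -- /heapster --source=kubernetes:http://192.168.0.103:8080?inClusterConfig=false --sink=log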
I installed heapster under the namespace "kube-system" and it worked. After running heapster, make sure it's up before you use HPA for your application.
How to run Heapster with Kubernetes cluster
I put all files here https://gitlab.com/abushoeb/kubernetes/tree/master/heapster. They are collected from the official Kubernetes repository, with minor changes.
How to run Heapster
Go to the heapster directory where you have grafana.yaml, heapster.yaml and influxdb.yaml and run the following command:
$ kubectl create -f .
How to stop Heapster
Go to the same heapster directory and then run the following command:
$ kubectl delete -f .
How to check Heapster is running
You can access the heapster metric model from the pod where heapster is running to make sure heapster is working. It can be accessed via a web browser at http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/. The same result can be seen by executing the following command.
$ curl -L http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/
If you see the list of metrics, then heapster is running correctly. You can also browse the Grafana dashboard to see it (find the IP of the pod where Grafana is running and then access http://grafana-pod-ip:grafana-service-port).
Full documentation of the Heapster Metric Model is available here.
Also, just run kubectl cluster-info and see if it shows results like this:
Kubernetes master is running at https://cluster-ip:6443
Heapster is running at https://cluster-ip:6443/api/v1/proxy/namespaces/kube-system/services/heapster
kubernetes-dashboard is running at https://cluster-ip:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
monitoring-grafana is running at https://cluster-ip:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
monitoring-influxdb is running at https://cluster-ip:6443/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
Check influxdb
You can also check whether influxdb has data in it. Install the InfluxDB client on your local machine to connect to the influxdb database.
$ influx -host <cluster-ip> -port <influxdb-service-port>
Some Sample influxdb queries
show databases
use db-name
show measurements
select value from "cpu/node_capacity"
Reference and Help
https://github.com/kubernetes/heapster/blob/master/docs/influxdb.md
https://github.com/kubernetes/heapster/blob/master/docs/debugging.md
https://blog.kublr.com/how-to-utilize-the-heapster-influxdb-grafana-stack-in-kubernetes-for-monitoring-pods-4a553f4d36c9
http://www.dasblinkenlichten.com/installing-cadvisor-and-heapster-on-bare-metal-kubernetes/
http://blog.arungupta.me/kubernetes-monitoring-heapster-influxdb-grafana/

kubernetes cannot pull local image

I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: my/nginx:latest
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  imagePullSecrets:
  - name: myregistrykey
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
myfrontend:
Container ID:
Image: my/nginx:latest
Image ID:
QoS Tier:
memory: BestEffort
cpu: BestEffort
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
mypd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: myclaim-1
ReadOnly: false
default-token-64w08:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want in your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or changing to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann only posted this above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image :
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to local registry :
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your yaml file, set imagePullPolicy to IfNotPresent :
...
spec:
containers:
- name: <name>
image: localhost:5000/<local-image-name>
imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster, and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
5 (optional). Allow connections to the insecure registry on worker nodes:
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
  - name: <name>
    image: <local-image-name>
    imagePullPolicy: Never
The easiest way to analyze ErrImagePull problems further is to ssh into the node and try to pull the image manually by doing docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case, your YAML file should have
imagePullPolicy: Never
See below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
  - name: myfrontend
    image: my/nginx:latest
    imagePullPolicy: Never
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: mypd
  imagePullSecrets:
  - name: myregistrykey
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim-1
found this here
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it. Minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps you should see a bunch of minikube containers already running.
kvm utils are probably already on your machine, but they can be installed like this on centos/rhel:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes Context" in Docker Desktop is actually a "docker-desktop" (i.e. not a remote cluster).
(Right click on Docker icon, then select "Kubernetes" in menu)
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, do a suitable docker tag, and create the manifest.
Kubernetes doesn't always pull from the registry: with the default pull policy for a fixed tag, it first searches for the image in local storage and only then goes to the Docker registry.
Pull the latest nginx image:
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment:
kubectl run test --image=test:test8970
It won't go to the Docker registry to pull the image; it will bring up the pod instantly.
If the image is not present on the local machine, it will try to pull it from the Docker registry and fail with an ErrImagePull error.
Also, if you change the imagePullPolicy to Never, it will never look to the registry for the image and will fail with an ErrImageNeverPull error if the image is not found locally.
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
      - image: test:test8970
        name: test
        imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the image showed up when using nerdctl image ls. However, it didn't show up in crictl images.
Thanks to a comment on GitHub, I found out that the actual problem is a different containerd namespace.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and re-import the images into the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
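The matching export from the default namespace would be along these lines (the image reference is a placeholder):
ctr -n default images export exported-app-image.tar <your-image-ref>
ctr -n k8s.io images import exported-app-image.tar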
I was facing a similar issue: the image was present locally, but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
I rebuilt the image, redeployed the deployment YAML, and it worked.
