I am building my Docker image with Jenkins using:
docker build --build-arg VCS_REF=$GIT_COMMIT \
--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
--build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME\
I was using Docker, but I am migrating to Kubernetes.
With Docker I could access those labels via:
docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
How can I access those labels with Kubernetes?
I am aware that I can add those labels in metadata.labels of my YAML files, but I don't like that approach much because:
- it ties that information to the deployment rather than to the container itself
- it can be modified at any time
...
kubectl describe pods
Thank you
Kubernetes doesn't expose that data. If it did, it would be part of the PodStatus API object (and its embedded ContainerStatus), which is one part of the Pod data that would get dumped out by kubectl get pod deployment-name-12345-abcde -o yaml.
You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like Helm as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP GET /_version call or some such).
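For instance, a hedged sketch of the first approach, assuming a Bash shell step in Jenkins and a short-commit tag convention (the tag format here is only illustrative):

docker build \
  --build-arg VCS_REF=$GIT_COMMIT \
  --build-arg BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ") \
  --build-arg BUILD_NUMBER=$BUILD_NUMBER \
  -t $IMAGE_NAME:${GIT_COMMIT:0:12} .
docker push $IMAGE_NAME:${GIT_COMMIT:0:12}

The commit hash then travels with every pod as part of .spec.containers[].image, which you can read back with kubectl get pod <name> -o jsonpath='{.spec.containers[0].image}'.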
I'll add another option.
I would suggest reading about the Recommended Labels by K8S:
- app.kubernetes.io/name – The name of the application
- app.kubernetes.io/instance – A unique name identifying the instance of an application
- app.kubernetes.io/version – The current version of the application (e.g., a semantic version, revision hash, etc.)
- app.kubernetes.io/component – The component within the architecture
- app.kubernetes.io/part-of – The name of a higher level application this one is part of
- app.kubernetes.io/managed-by – The tool being used to manage the operation of an application
So you can use the labels to describe a pod:
apiVersion: v1
kind: Pod   # or set the same labels on a Deployment's pod template
metadata:
  labels:
    app.kubernetes.io/name: wordpress
    app.kubernetes.io/instance: wordpress-abcxzy
    app.kubernetes.io/version: "4.9.4"
    app.kubernetes.io/managed-by: helm
    app.kubernetes.io/component: server
    app.kubernetes.io/part-of: wordpress
And use the Downward API (which works in a similar way to reflection in programming languages).
There are two ways to expose Pod and Container fields to a running container:
1. Environment variables.
2. Volume files.
Below is an example of using volume files:
apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-volume-example
  labels:
    version: 4.5.6
    component: database
    part-of: etl-engine
  annotations:
    build: two
    builder: john-doe
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c"]
      args: # <------ We're using the mounted volumes inside the container
        - while true; do
            if [[ -e /etc/podinfo/labels ]]; then
              echo -en '\n\n'; cat /etc/podinfo/labels; fi;
            if [[ -e /etc/podinfo/annotations ]]; then
              echo -en '\n\n'; cat /etc/podinfo/annotations; fi;
            sleep 5;
          done;
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes: # <------ We're mounting the pod's labels and annotations in our example
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations
Notice that in the example we accessed the labels and annotations that were mounted into the container at the /etc/podinfo path.
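For the first option (environment variables), here is a minimal sketch, assuming the same kind of pod metadata (the pod and variable names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kubernetes-downwardapi-env-example
  labels:
    version: "4.5.6"
spec:
  containers:
    - name: client-container
      image: k8s.gcr.io/busybox
      command: ["sh", "-c", "echo $POD_NAME $APP_VERSION; sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: APP_VERSION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['version']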
Besides labels and annotations, the downward API exposes multiple additional options, such as:
The pod's IP address.
The pod's service account name.
The node's name and IP.
A container's CPU limit, CPU request, memory limit, and memory request.
See the full list here.
(*) A nice blog discussing the downward API.
(**) You can view all your pods' labels with
$ kubectl get pods --show-labels
NAME ... LABELS
my-app-xxx-aaa pod-template-hash=...,run=my-app
my-app-xxx-bbb pod-template-hash=...,run=my-app
my-app-xxx-ccc pod-template-hash=...,run=my-app
fluentd-8ft5r app=fluentd,controller-revision-hash=...,pod-template-generation=2
fluentd-fl459 app=fluentd,controller-revision-hash=...,pod-template-generation=2
kibana-xyz-adty4f app=kibana,pod-template-hash=...
recurrent-tasks-executor-xaybyzr-13456 pod-template-hash=...,run=recurrent-tasks-executor
serviceproxy-1356yh6-2mkrw app=serviceproxy,pod-template-hash=...
Or view only a specific label with $ kubectl get pods -L <label_name>.
Using minikube and docker on my local Ubuntu workstation I get the following error in the Minikube web UI:
Failed to pull image "localhost:5000/samples/myserver:snapshot-180717-213718-0199": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
after I have created the below deployment config with:
kubectl apply -f hello-world-deployment.yaml
hello-world-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
        - name: hello-world
          image: localhost:5000/samples/myserver:snapshot-180717-213718-0199
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - containerPort: 8080
And output from docker images:
REPOSITORY TAG IMAGE ID CREATED SIZE
samples/myserver latest aa0a1388cd88 About an hour ago 435MB
samples/myserver snapshot-180717-213718-0199 aa0a1388cd88 About an hour ago 435MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 3 months ago 97MB
Based on this guide:
How to use local docker images with Minikube?
I have also run:
eval $(minikube docker-env)
and based on this:
https://github.com/docker/for-win/issues/624
I have added:
"InsecureRegistry": [
"localhost:5000",
"127.0.0.1:5000"
],
to /etc/docker/daemon.json
Any suggestion on what I'm missing to get the image pull to work in minikube?
I have followed the steps in the below answer but when I get to this step:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
it just hangs like this:
$ kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
Forwarding from 127.0.0.1:5000 -> 5000
Forwarding from [::1]:5000 -> 5000
and I get the same error in the minikube dashboard after I create my deployment config.
Based on the answer from BMitch, I have now tried to create a local Docker registry and push an image to it with:
$ docker run -d -p 5000:5000 --restart always --name registry registry:2
$ docker pull ubuntu
$ docker tag ubuntu localhost:5000/ubuntu:v1
$ docker push localhost:5000/ubuntu:v1
Next when I do docker images I get:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 74f8760a2a8b 4 days ago 82.4MB
localhost:5000/ubuntu v1 74f8760a2a8b 4 days ago 82.4MB
I have then updated my deployment config hello-world-deployment.yaml to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
        tier: backend
    spec:
      containers:
        - name: hello-world
          image: localhost:5000/ubuntu:v1
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          env:
            - name: GET_HOSTS_FROM
              value: dns
          ports:
            - containerPort: 8080
and
kubectl create -f hello-world-deployment.yaml
But in Minikube I still get similar error:
Failed to pull image "localhost:5000/ubuntu:v1": rpc error: code = Unknown desc = Error response from daemon: Get http://localhost:5000/v2/: dial tcp 127.0.0.1:5000: getsockopt: connection refused
So it seems Minikube is not able to see the local registry I just created?
It looks like you're facing a problem with localhost on your computer versus localhost inside the minikube VM.
To have the registry working, you have to set up additional port forwarding.
If your minikube installation is currently broken due to many attempts to fix the registry problems,
I would suggest restarting the minikube environment:
minikube stop && minikube delete && rm -fr $HOME/.minikube && minikube start
Next, get kube registry yaml file:
curl -O https://gist.githubusercontent.com/coco98/b750b3debc6d517308596c248daf3bb1/raw/6efc11eb8c2dce167ba0a5e557833cc4ff38fa7c/kube-registry.yaml
Then, apply it on minikube:
kubectl create -f kube-registry.yaml
Test if the registry inside the minikube VM works (run curl from inside the VM):
minikube ssh
curl localhost:5000
On Ubuntu, forward ports to reach registry at port 5000:
kubectl port-forward --namespace kube-system $(kubectl get po -n kube-system | grep kube-registry-v0 | awk '{print $1;}') 5000:5000
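With that port-forward running, a hedged sketch of pushing the image from your question and redeploying (the image name is taken from your deployment config):

docker tag samples/myserver:snapshot-180717-213718-0199 \
  localhost:5000/samples/myserver:snapshot-180717-213718-0199
docker push localhost:5000/samples/myserver:snapshot-180717-213718-0199
kubectl apply -f hello-world-deployment.yaml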
If you would like to share a private registry from your machine, you may be interested in the "sharing a local registry with minikube" blog entry.
If you're specifying the image source as the local registry server, you'll need to run a registry server there, and push your images to it.
You can self host a registry server with multiple 3rd party options, or run this one that is packaged inside a docker container: https://hub.docker.com/_/registry/
This only works in a single-node environment unless you set up TLS keys, trust the CA, or tell all other nodes about the additional insecure registry.
You can also specify the imagePullPolicy as Never.
Both of these solutions were already in your linked question, and I'm not seeing any evidence that you tried either of them in this question. Without showing how you tried those steps and ran into a different problem, this question should probably be closed as a duplicate.
It is unclear from your question how many nodes you have.
If you have more than one, your problem is in your deployment with replicas: 1.
If not, please ignore this answer.
You don't know which node that replica will land on. So if you don't have the local Docker registry on all of your nodes, and you are unlucky and Kubernetes schedules the pod onto a node without the registry, you will end up with that error.
The same thing happened to me: the same connection refused error, because the deployment went to a node without a local Docker registry.
As I am typing this, I think this can be resolved with an Ingress: run the registry as a Deployment, add a Service and a volume for the images, and expose it through an Ingress (see the sketch below).
A little more work, but at least all of your nodes (all of your pods, rather) will stay in sync.
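A minimal sketch of that idea, assuming an in-cluster registry called registry (the names, namespace, and storage choice are illustrative; the Ingress/TLS wiring is still up to you):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
          volumeMounts:
            - name: images
              mountPath: /var/lib/registry
      volumes:
        - name: images
          emptyDir: {}   # replace with a PersistentVolumeClaim to keep images across restarts
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
    - port: 5000
      targetPort: 5000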
I'm trying to run my first kubernetes pod locally.
I've run the following command (from here):
export ARCH=amd64
docker run -d \
--volume=/:/rootfs:ro \
--volume=/sys:/sys:ro \
--volume=/var/lib/docker/:/var/lib/docker:rw \
--volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
--volume=/var/run:/var/run:rw \
--net=host \
--pid=host \
--privileged \
gcr.io/google_containers/hyperkube-${ARCH}:${K8S_VERSION} \
/hyperkube kubelet \
--containerized \
--hostname-override=127.0.0.1 \
--api-servers=http://localhost:8080 \
--config=/etc/kubernetes/manifests \
--cluster-dns=10.0.0.10 \
--cluster-domain=cluster.local \
--allow-privileged --v=2
Then, I've tried to run the following:
kubectl create -f ./run-aii.yaml
run-aii.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: aii
spec:
  replicas: 2
  template:
    metadata:
      labels:
        run: aii
    spec:
      containers:
        - name: aii
          image: aii
          ports:
            - containerPort: 5144
          env:
            - name: KAFKA_IP
              value: kafka
          volumeMounts:
            - mountPath: /root/script
              name: scripts-data
              readOnly: true
            - mountPath: /home/aii/core
              name: core-aii
              readOnly: true
            - mountPath: /home/aii/genome
              name: genome-aii
              readOnly: true
            - mountPath: /home/aii/main
              name: main-aii
              readOnly: true
        - name: kafka
          image: kafkazoo
          volumeMounts:
            - mountPath: /root/script
              name: scripts-data
              readOnly: true
            - mountPath: /root/config
              name: config-data
              readOnly: true
        - name: ws
          image: ws
          ports:
            - containerPort: 3000
      volumes:
        - name: scripts-data
          hostPath:
            path: /home/aii/general/infra/script
        - name: config-data
          hostPath:
            path: /home/aii/general/infra/config
        - name: core-aii
          hostPath:
            path: /home/aii/general/core
        - name: genome-aii
          hostPath:
            path: /home/aii/general/genome
        - name: main-aii
          hostPath:
            path: /home/aii/general/main
Now, when I run: kubectl get pods
I'm getting:
NAME READY STATUS RESTARTS AGE
aii-806125049-18ocr 0/3 ImagePullBackOff 0 52m
aii-806125049-6oi8o 0/3 ImagePullBackOff 0 52m
aii-pod 0/3 ImagePullBackOff 0 23h
k8s-etcd-127.0.0.1 1/1 Running 0 2d
k8s-master-127.0.0.1 4/4 Running 0 2d
k8s-proxy-127.0.0.1 1/1 Running 0 2d
nginx-198147104-9kajo 1/1 Running 0 2d
BTW: docker images returns:
REPOSITORY TAG IMAGE ID CREATED SIZE
ws latest fa7c5f6ef83a 7 days ago 706.8 MB
kafkazoo latest 84c687b0bd74 9 days ago 697.7 MB
aii latest bd12c4acbbaf 9 days ago 1.421 GB
node 4.4 1a93433cee73 11 days ago 647 MB
gcr.io/google_containers/hyperkube-amd64 v1.2.4 3c4f38def75b 11 days ago 316.7 MB
nginx latest 3edcc5de5a79 2 weeks ago 182.7 MB
docker_kafka latest e1d954a6a827 5 weeks ago 697.7 MB
spotify/kafka latest 30d3cef1fe8e 12 weeks ago 421.6 MB
wurstmeister/zookeeper latest dc00f1198a44 3 months ago 468.7 MB
centos latest 61b442687d68 4 months ago 196.6 MB
centos centos7.2.1511 38ea04e19303 5 months ago 194.6 MB
gcr.io/google_containers/etcd 2.2.1 a6cd91debed1 6 months ago 28.19 MB
gcr.io/google_containers/pause 2.0 2b58359142b0 7 months ago 350.2 kB
sequenceiq/hadoop-docker latest 5c3cc170c6bc 10 months ago 1.766 GB
Why do I get the ImagePullBackOff?
By default Kubernetes looks in the public Docker registry to find images. If your image doesn't exist there it won't be able to pull it.
You can run a local Kubernetes registry with the registry cluster addon.
Then tag your images with localhost:5000:
docker tag aii localhost:5000/dev/aii
Push the image to the Kubernetes registry:
docker push localhost:5000/dev/aii
And change run-aii.yaml to use the localhost:5000/dev/aii image instead of aii. Now Kubernetes should be able to pull the image.
Alternatively, you can run a private Docker registry through one of the providers that offer this (AWS ECR, GCR, etc.), but if this is for local development it will be quicker and easier to get set up with a local Kubernetes Docker registry.
One issue that may cause an ImagePullBackOff, especially if you are pulling from a private registry, is the pod not being configured with the imagePullSecret for that registry.
An authentication error can then cause an ImagePullBackOff.
I had the same problem. What caused it was that I had already created a pod from the Docker image via the .yml file, but I mistyped the tag, i.e. test-app:1.0.1 when I needed test-app:1.0.2 in my .yml file. So I did kubectl delete pods --all to remove the faulty pod, then redid kubectl create -f name_of_file.yml, which solved my problem.
You can specify also imagePullPolicy: Never in the container's spec:
containers:
  - name: nginx
    imagePullPolicy: Never
    image: custom-nginx
    ports:
      - containerPort: 80
The issue arises when the image is not present on the cluster and the Kubernetes engine has to pull it from the respective registry.
Kubernetes supports three values for imagePullPolicy:
Always: always pulls the image, regardless of whether it is already present on the node.
Never: never pulls the image; the image must already be present on the node.
IfNotPresent: pulls the image only if it is not already present on the node.
Best practice: tag each new image explicitly, both when building it and in the Kubernetes deployment file, so the cluster pulls the intended image (see the example below).
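For example, a hedged sketch with an illustrative registry and tag, pinning the same tag at build time and in the manifest:

docker build -t registry.example.com/my-app:1.4.2 .
docker push registry.example.com/my-app:1.4.2

containers:
  - name: my-app
    image: registry.example.com/my-app:1.4.2   # pinned tag rather than :latest
    imagePullPolicy: IfNotPresent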
I too had this problem; when I checked, the image I was pulling from a private registry had been removed.
If we describe the pod, it will show the pulling event and the image it's trying to pull:
kubectl describe pod <POD_NAME>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Pulling 18h (x35 over 20h) kubelet, gsk-kub Pulling image "registeryName:tag"
Normal BackOff 11m (x822 over 20h) kubelet, gsk-kub Back-off pulling image "registeryName:tag"
Warning Failed 91s (x858 over 20h) kubelet, gsk-kub Error: ImagePullBackOff
Despite all the other great answers, none helped me until I found a comment that pointed out this section on Updating images:
The default pull policy is IfNotPresent which causes the kubelet to skip pulling an image if it already exists.
That's exactly what I wanted, but it didn't seem to work.
Reading further, it said the following:
If you would like to always force a pull, you can do one of the following:
omit the imagePullPolicy and use :latest as the tag for the image to use.
When I replaced latest with a version (that I had pushed to minikube's Docker daemon), it worked fine.
$ kubectl create deployment presto-coordinator \
--image=warsaw-data-meetup/presto-coordinator:beta0
deployment.apps/presto-coordinator created
$ kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
presto-coordinator 1/1 1 1 3s
Find the pod of the deployment (using kubectl get pods) and use kubectl describe pod to find out more on the pod.
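For instance (the label selector below relies on kubectl create deployment adding an app=<name> label; the pod name suffix is illustrative):

kubectl get pods -l app=presto-coordinator
kubectl describe pod presto-coordinator-<pod-suffix>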
Debugging step:
kubectl get pod [name] -o yaml
Run this command to get the YAML configuration of the pod (see "Get YAML for deployed Kubernetes services?"). In my case, it was under this section:
state:
  waiting:
    message: 'rpc error: code = Unknown desc = Error response from daemon: Get
      https://repository:9999/v2/abc/location/image/manifests/tag:
      unauthorized: BAD_CREDENTIAL'
    reason: ErrImagePull
My issue got resolved upon adding the appropriate tag to the image I wanted to pull from Docker Hub.
Previously:
containers:
  - name: nginx
    image: alex/my-app-image
Corrected Version:
containers:
  - name: nginx
    image: alex/my-app-image:1.1
The image has only one version, which was 1.1. Since I omitted that initially, it threw an error.
After specifying the version correctly, it worked fine.
I had a similar problem when using minikube over Hyper-V with 2048 MB of memory.
I found in Hyper-V Manager that the memory demand was higher than what was allocated.
So I stopped minikube and assigned somewhere between 4096 and 6144 MB. It worked fine after that, all pods running!
I don't know if this pins down the issue in every case, but just have a look at the memory and disk allocated to minikube.
I faced the same issue.
ImagePullBackOff means the node is not able to pull the Docker image from the registry, or there is an issue with your registry itself.
The solution would be as below:
1. Check your image registry name.
2. Check the image pull secrets.
3. Check that the image is present with the same tag and name.
4. Check that your registry is working.
ImagePullBackOff means you have not passed a secret in your YAML, the secret is wrong, or the image name is wrong.
If you are pulling an image from a private registry, you have to provide an image pull secret; then it will be able to pull the image.
You also need to create the secret before you deploy the pod. You can use the command below to create the secret:
kubectl create secret docker-registry regcred --docker-server=artifacts.exmple.int --docker-username=<username> --docker-password=<password> -n <namespace>
You can reference the secret in the YAML like below:
imagePullSecrets:
  - name: regcred
I had this error when I tried to create a ReplicationController. The issue was that I misspelled the nginx image name in the template definition.
Note: This error occurs when Kubernetes is unable to pull the specified image from the repository.
I had the same issue.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-zcr5d 0/1 ImagePullBackOff 0 6m21s
Later I found that the Docker daemon on which the pod is created uses a private registry for images, and nginx was not present in it.
I changed the Docker registry to the default one and reloaded the daemon.
After that, the issue got resolved.
[mayur@mayur_cloudtest ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-598b589c46-7cbjf 1/1 Running 0 33s
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$
[mayur@mayur_cloudtest ~]$ kubectl exec -it nginx-598b589c46-7cbjf -- /bin/bash
root@nginx-598b589c46-7cbjf:/# ls
bin dev docker-entrypoint.sh home lib64 mnt proc run srv tmp var
boot docker-entrypoint.d etc lib media opt root sbin sys usr
root@nginx-598b589c46-7cbjf:/#
In my case, Kubernetes was not able to communicate with my private registry running on localhost:5000 after updating to macOS Monterey. It was running fine previously. The reason is that AirPlay now listens on port 5000.
To resolve this issue, I disabled the AirPlay Receiver:
Go to System Preferences > Sharing > uncheck the AirPlay Receiver checkbox.
Source Link: https://developer.apple.com/forums/thread/682332
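To confirm what is holding the port before changing anything, a quick check (on macOS this typically shows ControlCenter when AirPlay Receiver is enabled):

lsof -iTCP:5000 -sTCP:LISTEN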
To handle this error, you just have to create a Kubernetes secret and use it in the manifest.yaml file.
If it is a private repository, then it is mandatory to use an image pull secret.
To generate the secret:
kubectl create secret docker-registry docker-secrets --docker-server=https://index.docker.io/v1/ --docker-username=ExamplaName --docker-password=ExamplePassword --docker-email=example@gmail.com
for --docker-server, use https://index.docker.io/v1/
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test
      image: ExampleUsername/test:tagname
      ports:
        - containerPort: 3015
  imagePullSecrets:
    - name: docker-secrets
I am using Kubernetes on a single machine for testing. I have built a custom image from the nginx Docker image, but when I try to use the image in Kubernetes I get an image pull error.
MY POD YAML
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
MY KUBERNETES COMMAND
kubectl create -f pod-yumserver.yaml
THE ERROR
kubectl describe pod yumserver
Name: yumserver
Namespace: default
Image(s): my/nginx:latest
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 26 Apr 2016 16:31:42 +0100
Labels: name=frontendhttp
Status: Pending
Reason:
Message:
IP: 172.17.0.2
Controllers: <none>
Containers:
  myfrontend:
    Container ID:
    Image:           my/nginx:latest
    Image ID:
    QoS Tier:
      memory:        BestEffort
      cpu:           BestEffort
    State:           Waiting
      Reason:        ErrImagePull
    Ready:           False
    Restart Count:   0
    Environment Variables:
Conditions:
  Type      Status
  Ready     False
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim-1
    ReadOnly:   false
  default-token-64w08:
    Type:       Secret (a secret that should populate this volume)
    SecretName: default-token-64w08
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
13s 13s 1 {default-scheduler } Normal Scheduled Successfully assigned yumserver to 127.0.0.1
13s 13s 1 {kubelet 127.0.0.1} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
12s 12s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Normal Pulling pulling image "my/nginx:latest"
8s 8s 1 {kubelet 127.0.0.1} spec.containers{myfrontend} Warning Failed Failed to pull image "my/nginx:latest": Error: image my/nginx:latest not found
8s 8s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "myfrontend" with ErrImagePull: "Error: image my/nginx:latest not found"
So you have the image on your machine already. It still tries to pull the image from Docker Hub, however, which is likely not what you want in your single-machine setup. This is happening because the latest tag sets the imagePullPolicy to Always implicitly. You can try setting it to IfNotPresent explicitly or change to a tag other than latest. – Timo Reimann Apr 28 at 7:16
For some reason Timo Reimann only posted this above as a comment, but it definitely should be the official answer to this question, so I'm posting it again.
Run eval $(minikube docker-env) before building your image.
Full answer here: https://stackoverflow.com/a/40150867
This should work irrespective of whether you are using minikube or not:
Start a local registry container:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Do docker images to find out the REPOSITORY and TAG of your local image. Then create a new tag for your local image :
docker tag <local-image-repository>:<local-image-tag> localhost:5000/<local-image-name>
If TAG for your local image is <none>, you can simply do:
docker tag <local-image-repository> localhost:5000/<local-image-name>
Push to local registry :
docker push localhost:5000/<local-image-name>
This will automatically add the latest tag to localhost:5000/<local-image-name>.
You can check again by doing docker images.
In your yaml file, set imagePullPolicy to IfNotPresent :
...
spec:
  containers:
    - name: <name>
      image: localhost:5000/<local-image-name>
      imagePullPolicy: IfNotPresent
...
That's it. Now your ImagePullError should be resolved.
Note: If you have multiple hosts in the cluster and you want to use a specific one to host the registry, just replace localhost in all the above steps with the hostname of the host where the registry container is hosted. In that case, you may need to allow HTTP (non-HTTPS) connections to the registry:
5 (optional). Allow connections to the insecure registry on worker nodes (note that a plain sudo echo ... > file would not work, because the redirect runs without sudo):
echo '{"insecure-registries":["<registry-hostname>:5000"]}' | sudo tee /etc/docker/daemon.json
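After editing daemon.json, the Docker daemon on that node needs a restart for the setting to take effect; on systemd-based hosts that is typically:

sudo systemctl restart docker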
Just add imagePullPolicy to your deployment file; it worked for me:
spec:
  containers:
    - name: <name>
      image: <local-image-name>
      imagePullPolicy: Never
The easiest way to further analyse ErrImagePull problems is to ssh into the node and try to pull the image manually by doing docker pull my/nginx:latest. I've never set up Kubernetes on a single machine, but I could imagine that the Docker daemon isn't reachable from the node for some reason. A manual pull attempt should provide more information.
If you are using a vm driver, you will need to tell Kubernetes to use the Docker daemon running inside of the single node cluster instead of the host.
Run the following command:
eval $(minikube docker-env)
Note - This command will need to be repeated anytime you close and restart the terminal session.
Afterward, you can build your image:
docker build -t USERNAME/REPO .
Update your pod manifest as shown above and then run:
kubectl apply -f myfile.yaml
In your case, your YAML file should have
imagePullPolicy: Never
See below:
kind: Pod
apiVersion: v1
metadata:
  name: yumserver
  labels:
    name: frontendhttp
spec:
  containers:
    - name: myfrontend
      image: my/nginx:latest
      imagePullPolicy: Never
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  imagePullSecrets:
    - name: myregistrykey
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim-1
found this here
https://keepforyourself.com/docker/run-a-kubernetes-pod-locally/
Are you using minikube on Linux? You need to install Docker (I think), but you don't need to start it. Minikube will do that. Try using the KVM driver with this command:
minikube start --vm-driver kvm
Then run the eval $(minikube docker-env) command to make sure you use the minikube Docker environment. Build your container with a tag: docker build -t mycontainername:version .
If you then type docker ps you should see a bunch of minikube containers already running.
kvm utils are probably already on your machine, but they can be installed like this on centos/rhel:
yum install qemu-kvm qemu-img virt-manager libvirt libvirt-python
Make sure that your "Kubernetes context" in Docker Desktop is actually "docker-desktop" (i.e. not a remote cluster).
(Right-click on the Docker icon, then select "Kubernetes" in the menu.)
All you need to do is a docker build from your Dockerfile, or get all the images onto the nodes of your cluster, do a suitable docker tag, and create the manifest.
Kubernetes doesn't directly pull from the registry: first it searches for the image in local storage, and only then in the Docker registry.
Pull latest nginx image
docker pull nginx
docker tag nginx:latest test:test8970
Create a deployment
kubectl run test --image=test:test8970
It won't go to the Docker registry to pull the image; it will bring up the pod instantly.
If the image is not present on the local machine, it will try to pull from the Docker registry and fail with an ErrImagePull error.
Also, if you change imagePullPolicy to Never, it will never look at the registry to pull the image, and will fail with an ErrImageNeverPull error if the image is not found.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: test
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test
    spec:
      containers:
        - image: test:test8970
          name: test
          imagePullPolicy: Never
Adding another answer here as the above gave me enough to figure out the cause of my particular instance of this issue. Turns out that my build process was missing the tagging needed to make :latest work. As soon as I added a <tags> section to my docker-maven-plugin configuration in my pom.xml, everything was hunky-dory. Here's some example configuration:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.27.2</version>
  <configuration>
    <images>
      <image>
        <name>akka-cluster-demo:${docker.image.version}</name>
        <build>
          <from>openjdk:8-jre-alpine</from>
Adding this:
          <tags>
            <tag>latest</tag>
            <tag>${git.commit.version}</tag>
          </tags>
The rest continues as before:
          <ports>
            <port>8080</port>
            <port>8558</port>
            <port>2552</port>
          </ports>
          <entryPoint>
            <exec>
              <args>/bin/sh</args>
              <args>-c</args>
              <args>java -jar /maven/cluster-sharding-kubernetes.jar</args>
            </exec>
          </entryPoint>
          <assembly>
            <inline>
              <dependencySets>
                <dependencySet>
                  <useProjectAttachments>true</useProjectAttachments>
                  <includes>
                    <include>akka-java:cluster-sharding-kubernetes:jar:allinone</include>
                  </includes>
                  <outputFileNameMapping>cluster-sharding-kubernetes.jar</outputFileNameMapping>
                </dependencySet>
              </dependencySets>
            </inline>
          </assembly>
        </build>
      </image>
    </images>
  </configuration>
</plugin>
ContainerD (and Windows)
I had the same error while trying to run a custom Windows container on a node. I had imagePullPolicy set to Never and a locally existing image present on the node. The image also wasn't tagged with latest, so the comment from Timo Reimann wasn't relevant.
Also, on the node machine, the images showed up when using nerdctl images. However, they didn't show up in crictl images.
Thanks to a comment on GitHub, I found out that the actual problem is a different namespace of containerd.
As shown by the following two commands, images are not automatically built in the correct namespace:
ctr -n default images ls # shows the application images (wrong namespace)
ctr -n k8s.io images ls # shows the base images
To solve the problem, export and reimport the images to the correct namespace k8s.io by using the following command:
ctr --namespace k8s.io image import exported-app-image.tar
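For reference, the export side would look roughly like this (the image name is illustrative):

ctr -n default images export exported-app-image.tar my-app:1.0
ctr -n k8s.io images import exported-app-image.tar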
I was facing a similar issue. The image was present locally, but k8s was not able to pick it up.
So I went to the terminal, deleted the old image, and ran the eval $(minikube -p minikube docker-env) command.
I rebuilt the image, redeployed the deployment YAML, and it worked.
I created an image centos_etcd_socat from the base image centos. I want to create a pod; the content follows:
apiVersion: v1
kind: Pod
metadata:
  name: testetcd
  labels:
    app: testetcd
spec:
  containers:
    - name: testetcd
      image: centos_etcd_socat
  nodeSelector:
    noderole: nodeslave1
When I create the pod, it is always in a restarting state:
NAME READY STATUS RESTARTS AGE NODE
nginx 1/1 Running 1 35d 127.0.0.1
nodeselectiontest 1/1 Running 0 3h 127.0.0.1
testetcd 0/1 Running 12 41m 192.168.10.10
testubuntu 1/1 Running 1 1d 192.168.10.10
Why does the pod always restart?
What is the content of your image?
If the container is an ephemeral service (i.e. a one-time command) or it does not have any long-running command as its entrypoint, it will terminate right away.
Then the replication controller will try to restart it.
So, first things first, try your container with Docker to see if it runs (see the check below). If it runs as intended, run it on Kubernetes with the replication controller.
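A quick sanity check along those lines, using the image name from the question (the container name and flags are only a sketch):

docker run --name testetcd-check centos_etcd_socat   # if this returns immediately, the entrypoint is not long-running
docker ps -a --filter name=testetcd-check            # an "Exited" status confirms the container terminated on its own
docker rm testetcd-check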
You can also check the logs with
kubectl logs <pod-name>
which should tell you what is going on in your pod (if you log anything, of course).