How to resolve an ImagePullBackOff error in local Kubernetes? - docker

I have a .NET Core application image and I am trying to create a deployment in local Kubernetes.
I created the Docker image as below.
docker tag microservicestest:dev microservicestest
docker build -t microservicestest .
docker run -d -p 8080:80 --name myapp microservicestest
Then I created the deployment as below.
kubectl run microservicestest-deployment --image=microservicestest:latest --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
Then when I run kubectl get pods I see an ImagePullBackOff error on the pods (screenshot omitted). The output of docker images shows the microservicestest image is present locally (screenshot omitted). Below is the deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-09-22T04:29:14Z"
  generation: 1
  labels:
    run: microservicestest-deployment
  name: microservicestest-deployment
  namespace: default
  resourceVersion: "17282"
  selfLink: /apis/apps/v1/namespaces/default/deployments/microservicestest-deployment
  uid: bf75410a-d332-4016-9757-50d534114599
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: microservicestest-deployment
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: microservicestest-deployment
    spec:
      containers:
      - image: microservicestest:latest
        imagePullPolicy: Always
        name: microservicestest-deployment
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-09-22T04:29:14Z"
    lastUpdateTime: "2020-09-22T04:29:14Z"
    message: ReplicaSet "microservicestest-deployment-5c67d587b9" is progressing.
    reason: ReplicaSetUpdated
    status: "True"
    type: Progressing
  observedGeneration: 1
  replicas: 3
  unavailableReplicas: 3
  updatedReplicas: 3
I am not able to understand why my pods are not able to pull the image locally. Can someone help me identify the mistake I am making here? Any help would be appreciated. Thank you.
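To see the exact reason behind the ImagePullBackOff, kubectl describe can be run against one of the failing pods; the pod name below is a placeholder, and the Events section at the bottom of the output shows the underlying pull error:
kubectl get pods
kubectl describe pod microservicestest-deployment-<pod-suffix>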

If you are using minikube, you first need to build the images against the Docker daemon hosted in the minikube machine by running eval $(minikube docker-env) in your bash session (for Windows, check here).
Then you need to set your image pull policy to Never or IfNotPresent so Kubernetes looks for local images:
spec:
  containers:
  - image: my-image:my-tag
    name: my-app
    imagePullPolicy: Never
Check the official documentation here:
By default, the kubelet tries to pull each image from the specified registry. However, if the imagePullPolicy property of the container is set to IfNotPresent or Never, then a local image is used (preferentially or exclusively, respectively).
As you are not using a YAML file, you can create the resources like this:
kubectl run microservicestest-deployment --image=microservicestest:latest --image-pull-policy=Never --port 80 --replicas=3
kubectl expose deployment microservicestest-deployment --type=NodePort
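To verify (assuming minikube), you can check that the image is visible to the cluster's Docker daemon and that the pull policy took effect; these are just sanity checks, not required steps:
eval $(minikube docker-env)
docker images | grep microservicestest
kubectl get deployment microservicestest-deployment -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'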

Related

Unable to deploy pods on GKE cluster created using 'Ubuntu with Docker' image

I am trying to deploy a few pods on a GKE cluster created using the "Ubuntu with Docker" image, and they are giving the error below. I did not find any solution on the internet. Any help would be greatly appreciated.
Error response from daemon: OCI runtime create failed: invalid mount {Destination:[/sys/fs/cgroup Type:bind Source:/var/lib/docker/volumes/d9e3b871f4cc210e3dba6471f326dcbf7b404daad7906ed9fc669e207c093ec2/_data Options:[rbind]}: mount destination [/sys/fs/cgroup not absolute: unknown
The spec file
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    diamanti.com/app: armada
    diamanti.com/control-plane: 'true'
  name: armada
  namespace: diamanti-system
spec:
  selector:
    matchLabels:
      diamanti.com/app: armada
      diamanti.com/control-plane: 'true'
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
      labels:
        diamanti.com/app: armada
        diamanti.com/control-plane: 'true'
    spec:
      containers:
      - envFrom:
        - configMapRef:
            name: armada-config
        image: 'diamanti/armada:v3.3.1-197'
        name: armada
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        spektra.diamanti.io/node: "true"
      hostNetwork: true
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: diamanti-node-runner
      serviceAccountName: diamanti-node-runner
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
The service account diamanti-node-runner is bound to the cluster-admin role.
As Kubernetes is removing support for the Docker runtime, you can use another container runtime. Use the default one; it works fine, and you do not need to change anything on your end related to your Docker images.
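If you want to stay on Ubuntu nodes, a containerd-based node pool can be created on GKE roughly like this; cluster and pool names are placeholders, and you may also need to pass your zone or region:
$ gcloud container node-pools create containerd-pool \
    --cluster=my-cluster \
    --image-type=UBUNTU_CONTAINERD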

Issues with setting up kubernetes for local testing using docker image

I created a Docker image of my app, which runs an internal server exposed on port 8080.
Then I tried to create a local Kubernetes cluster for testing, using the following set of commands:
$ kubectl create deployment --image=test-image test-app
$ kubectl set env deployment/test-app DOMAIN=cluster
$ kubectl expose deployment test-app --port=8080 --name=test-service
I am using Docker Desktop on Windows to run Kubernetes. This exposes my cluster at the external IP localhost, but I cannot access my app. I checked the status of the pods and noticed this issue:
$ kubectl get pods
NAME          READY   STATUS             RESTARTS   AGE
test-66-ps2   0/1     ImagePullBackOff   0          8h
test-6f-6jh   0/1     InvalidImageName   0          7h42m
May I know what could be causing this issue, and how can I make it work locally?
Thanks, I look forward to your suggestions!
My YAML file for reference:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: "2021-10-13T18:00:15Z"
  generation: 4
  labels:
    app: test-app
  name: test-app
  namespace: default
  resourceVersion: "*****"
  uid: ************
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: test-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-app
    spec:
      containers:
      - env:
        - name: DOMAIN
          value: cluster
        image: C:\Users\test-image
        imagePullPolicy: Always
        name: e20f23453f27
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2021-10-13T18:00:15Z"
    lastUpdateTime: "2021-10-13T18:00:15Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2021-10-13T18:39:51Z"
    lastUpdateTime: "2021-10-13T18:39:51Z"
    message: ReplicaSet "test-66" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 4
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 1
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2021-10-13T18:01:49Z"
  labels:
    app: test-app
  name: test-service
  namespace: default
  resourceVersion: "*****"
  uid: *****************
spec:
  clusterIP: 10.161.100.100
  clusterIPs:
  - 10.161.100.100
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - nodePort: 41945
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: test-app
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: localhost
The reason you are facing the ImagePullBackOff and InvalidImageName issues is that your app image does not exist on the Kubernetes cluster you deployed via Docker; rather, it exists only on your local machine. In addition, image: C:\Users\test-image is a Windows file path, not a valid image reference, which is what produces InvalidImageName.
To resolve this for testing purposes, you can either build your image against the Docker daemon that backs your Kubernetes cluster (so the image is present on the node), or upload your image to Docker Hub and set your deployment to pick the image from Docker Hub.
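A minimal sketch of the Docker Hub route; the Docker Hub username is a placeholder, and e20f23453f27 is the container name from the deployment above:
$ docker tag test-image <your-dockerhub-username>/test-image:latest
$ docker push <your-dockerhub-username>/test-image:latest
$ kubectl set image deployment/test-app e20f23453f27=<your-dockerhub-username>/test-image:latest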

Need a working Kubectl binary inside an image

My goal is to have a pod with a working Kubectl binary inside.
Unfortunately, every kubectl image from Docker Hub that I booted using basic YAML resulted in CrashLoopBackOff or similar.
Has anyone got some YAML (Deployment, Pod, etc.) that would get me my kubectl?
I tried a bunch of images with this basic YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl-demo
  labels:
    app: deploy
    role: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deploy
      role: backend
  template:
    metadata:
      labels:
        app: deploy
        role: backend
    spec:
      containers:
      - name: kubectl-demo
        image: <SOME_IMAGE>
        ports:
        - containerPort: 80
Thx
Or, you can do this. It works in my context, with Kubernetes on VMs, where I know where the kubeconfig file is. You would need to make the necessary changes to make it work in your environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubectl
spec:
  replicas: 1
  selector:
    matchLabels:
      role: kubectl
  template:
    metadata:
      labels:
        role: kubectl
    spec:
      containers:
      - image: viejo/kubectl
        name: kubelet
        tty: true
        securityContext:
          privileged: true
        volumeMounts:
        - name: kube-config
          mountPath: /root/.kube/
      volumes:
      - name: kube-config
        hostPath:
          path: /home/$USER/.kube/
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: Exists
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
This is the result:
$ kubectl get po
NAME                      READY   STATUS    RESTARTS   AGE
kubectl-cb8bfc6dd-nv6ht   1/1     Running   0          70s
$ kubectl exec kubectl-cb8bfc6dd-nv6ht -- kubectl get no
NAME                     STATUS   ROLES    AGE   VERSION
kubernetes-1-17-master   Ready    master   16h   v1.17.3
kubernetes-1-17-worker   Ready    <none>   16h   v1.17.3
As Suren already explained in the comments, kubectl is not a daemon, so it will run, exit, and cause the container to restart.
There are a couple of workarounds for this. One of them is to use the sleep command with the infinity argument. This keeps the Pod alive, prevents it from restarting, and allows you to exec into it.
Here's an example of how to do that:
spec:
  containers:
  - image: bitnami/kubectl
    command:
    - sleep
    - "infinity"
    name: kctl
Let me know if this helps.
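Once the Pod is running you can exec into it and run kubectl from there; note that kubectl inside the Pod authenticates with the Pod's ServiceAccount, so you may need to grant that account RBAC permissions for the resources you want to query. The pod name below is a placeholder:
$ kubectl exec -it <kubectl-pod-name> -- kubectl get pods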

Does kubernetes kubectl run with image create a deployment YAML file?

I am trying to use Minikube and Docker to understand the concepts of Kubernetes architecture.
I created a Spring Boot application with a Dockerfile, created a tag, and pushed it to Docker Hub.
In order to deploy the image in the K8s cluster, I issued the below commands:
# deployed the image
$ kubectl run <deployment-name> --image=<username/imagename>:<version> --port=<port the app runs>
# exposed the port as nodeport
$ kubectl expose deployment <deployment-name> --type=NodePort
Everything worked and I am able to see the 1 pod running with kubectl get pods.
The Docker image I pushed to Docker Hub didn't have any deployment YAML file.
The below command produced a YAML output.
Does the kubectl command create a deployment YAML file out of the box?
$ kubectl get deployments --output yaml
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    creationTimestamp: "2019-12-24T14:59:14Z"
    generation: 1
    labels:
      run: hello-service
    name: hello-service
    namespace: default
    resourceVersion: "76195"
    selfLink: /apis/apps/v1/namespaces/default/deployments/hello-service
    uid: 90950172-1c0b-4b9f-a339-b47569366f4e
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        run: hello-service
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        creationTimestamp: null
        labels:
          run: hello-service
      spec:
        containers:
        - image: thirumurthi/hello-service:0.0.1
          imagePullPolicy: IfNotPresent
          name: hello-service
          ports:
          - containerPort: 8800
            protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
  status:
    availableReplicas: 1
    conditions:
    - lastTransitionTime: "2019-12-24T14:59:19Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: Deployment has minimum availability.
      reason: MinimumReplicasAvailable
      status: "True"
      type: Available
    - lastTransitionTime: "2019-12-24T14:59:14Z"
      lastUpdateTime: "2019-12-24T14:59:19Z"
      message: ReplicaSet "hello-service-75d67cc857" has successfully progressed.
      reason: NewReplicaSetAvailable
      status: "True"
      type: Progressing
    observedGeneration: 1
    readyReplicas: 1
    replicas: 1
    updatedReplicas: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
I think the easiest way to understand what's going on under the hood when you create Kubernetes resources using imperative commands (versus the declarative approach of writing and applying YAML definition files) is to run a simple example with 2 additional flags:
--dry-run
and
--output yaml
The names of these flags are rather self-explanatory, so I think there is no further need to explain what they do. You can simply try out the below examples and you'll see the effect:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
As you can see, it produces the appropriate YAML manifest without applying it or creating an actual Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-example
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx-example
    spec:
      containers:
      - image: nginx:latest
        name: nginx-example
        ports:
        - containerPort: 80
        resources: {}
status: {}
Same with expose command:
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml
produces the following output:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    run: nginx-example
  name: nginx-example
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-example
  type: NodePort
status:
  loadBalancer: {}
And now the coolest part. You can use simple output redirection:
kubectl run nginx-example --image=nginx:latest --port=80 --dry-run --output yaml > nginx-example-deployment.yaml
kubectl expose deployment nginx-example --type=NodePort --dry-run --output yaml > nginx-example-nodeport-service.yaml
to save the generated Deployment and NodePort Service definitions so you can further modify them if needed and apply them using either kubectl apply -f filename.yaml or kubectl create -f filename.yaml.
Btw. kubectl run and kubectl expose are generator-based commands, and as you may have noticed when creating your deployment (you probably got the message: kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.), they use the --generator flag. If you don't specify it explicitly, it gets a default value, which for kubectl run is --generator=deployment/apps.v1beta1, so by default it creates a Deployment. But you can modify it by providing --generator=run-pod/v1 nginx-example, and instead of a Deployment it will create a single Pod. Going back to our previous example, it may look like this:
kubectl run --generator=run-pod/v1 nginx-example --image=nginx:latest --port=80 --dry-run --output yaml
I hope this answered your question and clarified a bit the mechanism of creating kubernetes resources using imperative commands.
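For reference, on newer kubectl versions (v1.18+) the generators were removed and kubectl run only creates a single Pod, so a rough equivalent of the example above uses kubectl create deployment together with --dry-run=client:
kubectl create deployment nginx-example --image=nginx:latest --dry-run=client --output yaml
kubectl expose deployment nginx-example --port=80 --type=NodePort --dry-run=client --output yaml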
Yes, kubectl run creates a deployment. If you look at the label field, you can see run: hello-service. This label is used later in the selector.
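If you just want to confirm that relationship without dumping the whole manifest, a jsonpath query along these lines should work (output formatting varies slightly between kubectl versions):
kubectl get deployment hello-service -o jsonpath='{.spec.selector.matchLabels}'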

Why is pulling a private image in a Pod not working with the Kubernetes Registry addon?

I am very new to Kubernetes. I set up the Kubernetes Registry addon by copy-pasting the YAML from Kubernetes Registry Addon, with just a small change in the ReplicationController to use emptyDir:
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-registry-v0
  namespace: kube-system
  labels:
    k8s-app: kube-registry-upstream
    version: v0
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-registry-upstream
    version: v0
  template:
    metadata:
      labels:
        k8s-app: kube-registry-upstream
        version: v0
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: registry
        image: registry:2
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        env:
        - name: REGISTRY_HTTP_ADDR
          value: :5000
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: /var/lib/registry
        volumeMounts:
        - name: image-store
          mountPath: /var/lib/registry
        ports:
        - containerPort: 5000
          name: registry
          protocol: TCP
      volumes:
      - name: image-store
        emptyDir: {}
Then I forward port 5000 as follows:
$ POD=$(kubectl get pods --namespace kube-system -l k8s-app=kube-registry-upstream \
    -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \
    | grep Running | head -1 | cut -f1 -d' ')
$ kubectl port-forward --namespace kube-system $POD 5000:5000 &
I can push my images fine as follows:
$ docker tag alpine localhost:5000/nurrony/alpine
$ docker push localhost:5000/nurrony/alpine
Then I write a Pod to test it, like below:
apiVersion: v1
kind: Pod
metadata:
  name: registry-demo
  labels:
    purpose: registry-demo
spec:
  containers:
  - name: registry-demo-container
    image: localhost:5000/nurrony/alpine
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
It is throwing an error
Failed to pull image "localhost:5000/nurrony/alpine": image pull failed for localhost:5000/nurrony/alpine:latest, this may be because there are no credentials on this request. details: (net/http: request canceled)
Any idea why this is happening? Thanks in advance.
Most likely your proxy is not working.
The Docker Registry K8s addon comes with a DaemonSet which runs a registry proxy on every node that runs your kubelets. What I would suggest is to inspect those proxies, since they map the Docker Registry (K8s) Service to localhost:5000 on every node.
Please note that even if you have a green check mark on your registry proxies, that does not mean they work correctly. Open their logs and make sure that everything is working.
If your proxy is configured and you are still getting this error, then most likely the environment variable REGISTRY_HOST inside kube-registry-proxy is wrong. Are you using DNS here like in the example? Is your DNS configured correctly? Does it work if you set this variable to the ClusterIP of your service?
Also, please be aware that your RC labels need to match the SVC selectors, otherwise the Service cannot discover your pods.
Hope it helps.
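A quick way to inspect the proxies mentioned above is to look at their pods and logs; the label below follows the upstream addon manifests and may differ in your setup:
$ kubectl get pods --namespace kube-system -l k8s-app=kube-registry-proxy
$ kubectl logs --namespace kube-system -l k8s-app=kube-registry-proxy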
