Kubernetes pod status ImagePullBackOff - docker

I am trying to create a pod from a container image on my local machine, not from a public registry. The pod status shows ImagePullBackOff.
Dockerfile:
FROM tensorflow/tensorflow:latest-py3
RUN pip install -q keras==2.3.1
RUN pip install pillow
RUN mkdir -p /app/src
WORKDIR /app/src
COPY . ./
EXPOSE 31700
CMD ["python", "test.py"]
To build the Docker image:
docker build -t tensor-keras .
To create a pod without using a YAML file:
kubectl run server --image=tensor-keras:latest
YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    ports:
    - containerPort: 31700
I am retrieving the status of the pod as:
NAME READY STATUS RESTARTS AGE
server 0/1 ImagePullBackOff 0 27m
Help is highly appreciated, thanks.

By default, Kubernetes will try to pull your image from a remote container registry. In your case, the image name is not prefixed with a registry URL, so the default one is used, which is usually Docker Hub.
What is the value of the imagePullPolicy field? For your use case it should be set to Never in order to use the local image.
Which tool are you using to run your Kubernetes instance?
For example, with minikube, procedure to use a local image is described here: https://stackoverflow.com/a/42564211/2784039
With kind, you should use the command kind load docker-image tensor-keras:latest to load the image inside your cluster.
With k3s, using a local image should work out of the box, as long as imagePullPolicy is set to Never.
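For example, assuming a single-node setup where the image already exists in the node's local Docker daemon, the Pod manifest from the question would only need an extra imagePullPolicy line (everything else unchanged):

apiVersion: v1
kind: Pod
metadata:
  name: server
  labels:
    app: server
spec:
  containers:
  - name: tensor-keras
    image: tensor-keras:latest
    imagePullPolicy: Never   # use the image already present on the node, never contact a registry
    ports:
    - containerPort: 31700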

Related

Containers crashing with "CrashLoopBackOff" status in Minikube

I am new to containerization and am having some difficulties. I have an application that consists of a React frontend, a Python backend using FastAPI, and PostgreSQL databases using SQL Alchemy for object-relational mapping. I decided to put each component inside a Docker container so that I can deploy the application on Azure in the future (I know that some people may have strong opinions on deploying the frontend and database in containers, but I am doing so because it is required by the project's requirements).
After doing this, I started working with Minikube. However, I am having problems where all the containers that should be running inside pods have the status "CrashLoopBackOff". From what I can tell, this means that the images are being pulled from Docker Hub and the containers start, but then fail for some reason.
I tried running "kubectl logs" and nothing is returned. The "kubectl describe" command, in the Events section, returns: "Warning BackOff 30s (x140 over 30m) kubelet Back-off restarting failed container."
I have also tried to minimize the complexity by just trying to run the frontend component. Here are my Dockerfile and manifest file:
Dockerfile:
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev"]
Manifest file (.yml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: xxxtest/my-first-repo:yyy-frontend
        ports:
        - containerPort: 3000
I do not have a service manifest yet, and I don't think it is related to this issue.
Can anyone provide any help or tips on how to troubleshoot this issue? I would really appreciate any guidance you can offer. Thank you in advance for your time and assistance!
Have a great day!
This CrashLoopBackOff is related to a container error. If you want to fix this error, you need to see the container log; these are my tips:
The best practice in K8s is to redirect application logs to /dev/stdout or /dev/stderr (redirecting them to a file is not recommended), so that you can use kubectl logs <POD NAME>.
Try to clear the cache of your local containers, then download and run the same image and tag you configured in your deployment file.
If you need any environment variable to run the container locally, you'll also need those environment variables in your deployment file.
Always use imagePullPolicy: Always, especially if you are reusing the same image tag (see the sketch after the doc links below). EDIT: Because the default image pull policy is IfNotPresent, if you fixed the container image, Kubernetes will not pull the new image version.
Docs:
ImagePullPolicy: https://kubernetes.io/docs/concepts/containers/images/
Standard Output: https://kubernetes.io/docs/concepts/cluster-administration/logging/
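For illustration only: using the image from the question and a hypothetical API_URL environment variable (both the variable name and its value are placeholders, not from the question), the last two tips would land in the Deployment's pod template roughly like this:

    spec:
      containers:
      - name: frontend
        image: xxxtest/my-first-repo:yyy-frontend
        imagePullPolicy: Always          # re-pull even when the tag stays the same
        env:
        - name: API_URL                  # hypothetical variable the app expects at runtime
          value: "http://backend:8000"   # placeholder value
        ports:
        - containerPort: 3000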

kubernetes deploy with tar docker image

I have a problem deploying a Docker image via Kubernetes.
One issue is that we cannot use any Docker image registry service, e.g. Docker Hub or any cloud service. But yes, I do have the Docker images as .tar files.
However, it always fails with the following message:
Warning  Failed  1s  kubelet, dell20
  Failed to pull image "test:latest": rpc error: code = Unknown
  desc = failed to resolve image "docker.io/library/test:latest":
  failed to do request: Head https://registry-1.docker.io/v2/library/test/manifests/latest: dial tcp i/o timeout
I also changed the deployment description to use IfNotPresent or Never. In that case it fails anyway, with ErrImageNeverPull.
My guess is that Kubernetes tries to use Docker Hub anyway, since it contacts https://registry-1.docker.io in order to pull the image. I just want to use the tar Docker image on local disk, rather than pulling from some registry service.
And yes, the image is present in Docker:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
test latest 9f4916a0780c 6 days ago 1.72GB
Can anyone give me any advices on this problem?
I was successful in using a local image with a Kubernetes cluster. The explanation with an example is below:
The only prerequisite is that you need to make sure you have access to upload this image directly to nodes.
Create the image
Pull the default nginx image from the Docker registry with the command below:
$ docker pull nginx:1.17.5
The nginx image is used only for demonstration purposes.
Tag this image with the new name nginx-local using the command:
$ docker tag nginx:1.17.5 nginx-local:1.17.5
Save this image as nginx-local.tar by executing the command:
$ docker save nginx-local:1.17.5 > nginx-local.tar
Link to documentation: docker save
File nginx-local.tar is used as your image.
Copy the image to all of the nodes
The problem with this technique is that you need to ensure all of the nodes have this image.
Lack of image will result in failed pod creation.
To copy it you can use scp. It's a secure way to transfer files between machines.
Example command for scp:
$ scp /path/to/your/file/nginx-local.tar user@ip_address:/where/you/want/it/nginx-local.tar
Once the image is on the node, you will need to load it into the local Docker image repository with the command:
$ docker load -i nginx-local.tar
To ensure that the image is loaded, invoke the command:
$ docker images | grep nginx-local
Link to documentation: docker load
It should show something like this:
docker images | grep nginx
nginx-local 1.17.5 540a289bab6c 3 weeks ago 126MB
Creating deployment with local image
The last part is to create a deployment that uses the nginx-local image.
Please note that:
The image version is explicitly specified inside the YAML file.
imagePullPolicy is set to Never (see the ImagePullPolicy documentation).
Without these options the pod creation will fail.
Below is example deployment which uses exactly that image:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-local
  namespace: default
spec:
  selector:
    matchLabels:
      run: nginx-local
  replicas: 5
  template:
    metadata:
      labels:
        run: nginx-local
    spec:
      containers:
      - image: nginx-local:1.17.5
        imagePullPolicy: Never
        name: nginx-local
        ports:
        - containerPort: 80
Create this deployment with command:
$ kubectl create -f local-test.yaml
The result was that pods were created successfully as shown below:
NAME READY STATUS RESTARTS AGE
nginx-local-84ddb99b55-7vpvd 1/1 Running 0 2m15s
nginx-local-84ddb99b55-fgb2n 1/1 Running 0 2m15s
nginx-local-84ddb99b55-jlpz8 1/1 Running 0 2m15s
nginx-local-84ddb99b55-kzgw5 1/1 Running 0 2m15s
nginx-local-84ddb99b55-mc7rw 1/1 Running 0 2m15s
This operation was successful, but I would recommend that you use a local Docker registry. It will make managing images easier and will stay inside your infrastructure.
Link to documentation about it: Local Docker Registry
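As a rough sketch of that recommendation (the port and image names are just examples, not a definitive setup), a throwaway local registry can be started with the official registry image and the nginx-local image pushed into it:

$ docker run -d -p 5000:5000 --name registry registry:2    # start a registry the nodes can reach
$ docker tag nginx-local:1.17.5 localhost:5000/nginx-local:1.17.5
$ docker push localhost:5000/nginx-local:1.17.5

The deployment would then reference <registry-host>:5000/nginx-local:1.17.5 instead of relying on imagePullPolicy: Never. Note that a plain-HTTP registry usually has to be listed under insecure-registries in the Docker daemon configuration on every node.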

how to deploy the dist folder of npm run build in kubernetes?

I have a Node.js application that I am trying to deploy to Kubernetes. To run this normally on my machine without using Kubernetes, I would run the commands npm install and npm run build and then serve the "dist" folder. Normally I would install npm's serve using "npm install -g serve" and then run "serve -s dist". This works fine. But now, to deploy to Kubernetes for production, how can I create my image? I mean, what should the Dockerfile for this look like?
Note: I don't want to use nginx, Apache or any other kind of web server. I want to do this using node/npm's serve for serving the dist folder. Please help.
Dockerfile (what I have tried):
FROM node:8
WORKDIR /usr/src/app
COPY /dist
RUN npm install -g serve
serve -s dist
I am not sure if this Dockerfile is right, so I need guidance on how to correctly create an image that serves the dist folder from npm run build. Please help.
You can find plenty of tutorials about integrating customer web applications into a Kubernetes cluster and exposing them to visitors.
An application containerized in a Docker environment has to be packaged into an image from a Dockerfile (or built with the Docker Compose tool) so that all of the application's service dependencies are preserved; when the image is ready, it can be stored on public Docker Hub or in an isolated private registry. The Kubernetes container runtime then pulls this image and creates the appropriate workloads (Pods) within the cluster according to the declared resource model.
I would recommend the following scenario:
Build a Docker image from your initial Dockerfile (I've made some corrections):
FROM node:8
WORKDIR /usr/src/app
COPY dist/ ./dist/
RUN npm install -g serve
$ sudo docker image build <PATH>
Create tag related to the source image:
$ sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Export the image to DockerHub or some private registry:
$ sudo docker push [OPTIONS] NAME[:TAG]
Create the relevant Kubernetes workload (Pod) and apply it in the Kubernetes cluster, starting the Node server inside the container and listening on port 5000:
apiVersion: v1
kind: Pod
metadata:
  name: nodetest
  labels:
    node: test
spec:
  containers:
  - name: node-test
    image: TARGET_IMAGE[:TAG]
    ports:
    - containerPort: 5000
    command: [ "/bin/bash", "-ce", "serve -s dist" ]
If you are considering exposing the application to external cluster clients, then look at a NodePort service:
$ kubectl expose po nodetest --port=5000 --target-port=5000 --type=NodePort
Update_1:
The application service might then be reachable on the host machine on a specific port; you can simply retrieve this port value:
kubectl get svc nodetest -o jsonpath='{.spec.ports[0].nodePort}'
Update_2:
In order to expose the NodePort service on a desired port, just apply the following manifest, which requests port 30000:
apiVersion: v1
kind: Service
metadata:
  labels:
    node: test
  name: nodetest
spec:
  ports:
  - nodePort: 30000
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    node: test
  type: NodePort
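Assuming the manifest above is saved as nodetest-svc.yaml and one of the cluster nodes is reachable at <node-ip> (a placeholder), a quick smoke test might look like:

$ kubectl apply -f nodetest-svc.yaml
$ kubectl get svc nodetest              # PORT(S) should show 5000:30000/TCP
$ curl http://<node-ip>:30000/          # serve should answer with the dist index page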

Docker for Windows Kubernetes pod gets ImagePullBackOff after creating a new deployment

I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:
docker build -t myimage -f myDockerFile .
(the above successfully creates an image in the docker local registry)
kubectl run myapp --image=myimage:latest
(as far as I understand, this is the same as using the kubectl create deployment command)
The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:
NAME READY STATUS RESTARTS AGE
myapp-<a random alphanumeric string> 0/1 ImagePullBackoff 0 <age>
I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?
I just had the exact same problem. Boils down to the imagePullPolicy:
PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND:     Deployment
VERSION:  extensions/v1beta1

FIELD:    imagePullPolicy <string>

DESCRIPTION:
     Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
     if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
     More info:
     https://kubernetes.io/docs/concepts/containers/images#updating-images
Specifically, the part that says: Defaults to Always if :latest tag is specified.
That means you created a local image, but because you use the :latest tag, Kubernetes will try to find it in whatever remote repository you configured (by default Docker Hub) rather than using your local one. Simply change your command to:
kubectl run myapp --image=myimage:latest --image-pull-policy Never
or
kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
I had this same ImagePullBackOff error while running a pod deployment with a YAML file, also on Docker Desktop.
For anyone else who finds this via Google (like I did), the imagePullPolicy that Lucas mentions above can also be set in the deployment YAML file. See spec.template.spec.containers.imagePullPolicy in the YAML snippet below (3 lines from the bottom).
I added that and my app deployed successfully into my local kube cluster, using kubectl apply -f .\Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-deployment
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: node-web-app:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
You didn't specify where myimage:latest is hosted, but essentially ImagePullBackOff means that the image cannot be pulled because either:
You don't have networking set up in your Docker VM that can reach your container registry (Docker Hub?)
myimage:latest doesn't exist in your registry or is misspelled.
myimage:latest requires credentials (you are pulling from a private registry). You can take a look at this to configure container credentials in a Pod.
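For the private-registry case, a minimal sketch (all names below are placeholders) is to create a docker-registry secret and reference it from the pod spec:

$ kubectl create secret docker-registry regcred \
    --docker-server=<your-registry> \
    --docker-username=<user> \
    --docker-password=<password>

    spec:
      containers:
      - name: web-app
        image: <your-registry>/myimage:latest
      imagePullSecrets:
      - name: regcred      # tells the kubelet which credentials to use when pulling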

Kubernetes not creating docker container

I am really having trouble debugging this and could use some help. I am successfully starting a Kubernetes service and deployment using a working Docker image.
My service file:
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
  labels:
    app: auth_v1
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
  selector:
    app: auth_v1
Deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deploy
  labels:
    app: auth_v1
spec:
  revisionHistoryLimit: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  selector:
    matchLabels:
      app: auth_v1
  template:
    metadata:
      labels:
        app: auth_v1
    spec:
      containers:
      - name: auth-pod
        image: index.docker.io/XXX/auth
        command: [ "yarn", "start-staging" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
kubectl get pods shows that the pods are up and running. I have tested jumping into the pod/container with a shell and tried running my application, and it works. When I run kubectl describe auth-deploy I am seeing a container listed as auth-pod. However, I am not seeing any containers when I run docker ps or docker ps -a. Also, the logs for my pods show nothing. Is there something I am doing wrong?
For reference, here is my Dockerfile:
FROM node:8.11.2-alpine AS build
LABEL maintainer="info@XXX.com"
# Copy Root Dir & Set Working Dir
COPY . /src
WORKDIR /src
# Build & Start Our App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
RUN yarn build-staging
# Build Production Image Using Node Container
FROM node:8.11.2-alpine AS production
# Copy Build to Image
COPY --from=build /src/.next /src/.next/
COPY --from=build /src/production-server /src/production-server/
COPY --from=build /src/static /src/static/
COPY --from=build /src/package.json /src
WORKDIR /src
# Install Essential Packages & Start App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
# Expose Ports Needed
EXPOSE 3000
VOLUME [ "/src/log" ]
# Start App
CMD [ "yarn", "start-staging" ]
Is it possible that you are running docker ps on the K8s-master instead of where the pods are located?
You can find out where your pods are running by running the command below:
$ kubectl describe pod auth-deploy
It should return something similar to below (in my case it's a percona workload):
$ kubectl describe pod percona
Name: percona-b98f87dbd-svq64
Namespace: default
Node: ip-xxx-xx-x-xxx.us-west-2.compute.internal/xxx.xx.x.xxx
Get the IP, SSH into the node, and run docker ps locally on the node where your container is located.
$ docker ps | grep percona
010f3d529c55 percona "docker-entrypoint.s…" 7 minutes ago Up 7 minutes k8s_percona_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
616d70e010bc k8s.gcr.io/pause-amd64:3.1 "/pause" 8 minutes ago Up 7 minutes k8s_POD_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
Another possibility is that you might be using different container runtime such as rkt, containerd, and lxd instead of docker.
Kubernetes pods are made of grouped containers and run on a dedicated node.
Kubernetes manages where pods are created and handles their lifecycle.
A Kubernetes configuration consists of worker nodes and the master server.
The master server is able to connect to nodes, create containers, and group them into pods. The master node is designed to run only the management components, such as kubectl, the cluster state database etcd, and the other daemons required to keep the cluster up and running.
docker ps
shows nothing in this case.
To get list of running pods:
kubectl get pods
You can then connect to pod already running on node:
kubectl attach -i <podname>
Back to your question.
If you are interested in how Kubernetes works with containers, including your application image and the Kubernetes infrastructure, you have to obtain the node's IP address first:
or by:
kubectl get pods -o wide
Next connect to the node via ssh and then:
docker ps
You will see there are containers including the one you are looking for.
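Put together, and using placeholder names, the lookup could look like this:

$ kubectl get pods -o wide              # the NODE column shows where each pod runs
$ ssh user@<node-ip>                    # connect to that worker node
$ docker ps | grep auth                 # the app container and its pause container appear here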
