Pod with React app failing with status CrashLoopBackOff on Kubernetes - Docker

I am new to Kubernetes. I am running my Kubernetes cluster inside my Docker Desktop VM. Below are the versions:
Docker Desktop Community: 2.3.0.4 (Stable)
Engine: 19.03.12
Kubernetes: 1.16.5
I created a simple React app. Below is the Dockerfile.
FROM node:13.12.0-alpine
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package*.json ./
RUN npm install
# add app files
COPY . ./
# start app
CMD ["npm", "start"]
I built a Docker image and ran it; it works fine. I added the image to the below deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test-react-app
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-react-app
  template:
    metadata:
      labels:
        app: test-react-app
    spec:
      containers:
        - name: test-react
          image: myrepo/test-react:v2
          imagePullPolicy: Never
          ports:
            - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: test-service
  namespace: dev
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 31000
  selector:
    app: test-react-app
The pod never starts. Below are the events from kubectl describe.
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned dev/test-deployment-7766949554-m2fbz to docker-desktop
Normal Pulled 8m38s (x5 over 10m) kubelet, docker-desktop Container image "myrepo/test-react:v2" already present on machine
Normal Created 8m38s (x5 over 10m) kubelet, docker-desktop Created container test-react
Normal Started 8m38s (x5 over 10m) kubelet, docker-desktop Started container test-react
Warning BackOff 26s (x44 over 10m) kubelet, docker-desktop Back-off restarting failed container
Below are the logs from the container. It looks as if the container is running:
> react-cart@0.1.0 start /app
> react-scripts start
ℹ 「wds」: Project is running at http://10.1.0.33/
ℹ 「wds」: webpack output is served from
ℹ 「wds」: Content not from webpack is served from /app/public
ℹ 「wds」: 404s will fallback to /
Starting the development server...

It worked!!!
I built the React app into a production build and changed the Dockerfile accordingly, following the technique given in this link: https://dev.to/rieckpil/deploy-a-react-application-to-kubernetes-in-5-easy-steps-516j. A sketch of that approach is below.
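For reference, a minimal sketch of such a multi-stage Dockerfile (my reading of the linked technique; the nginx tag and build paths are illustrative): the Node stage produces the static production bundle, and nginx then serves it on port 80, so the containerPort: 80 in the Deployment matches a real listener.
# Stage 1: build the production bundle
FROM node:13.12.0-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
# Stage 2: serve the static files with nginx on port 80
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80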

Related

Unable to pull docker image from local registry for Kubernetes deployment

I have a K8s cluster on Linode and another VM for operating.
I have installed Docker and K8s on the operating VM to build images and do deployments on the cluster.
Note: I haven't installed minikube on this VM.
I am able to build my image, but the K8s pod cannot pull it from the local registry.
Below are the things I have already done and tried to solve the problem:
1. Created and pushed the Docker image to the local registry.
2. Ran a Docker container from the image, but it is not getting pulled in K8s.
3. Created a "regcred" secret and used it in the deployment YAML.
4. Created and pushed the image with the VM's IP (10.128.234.123:5000/app-frontend) and used the same in the deployment image reference.
5. Changed the image pull policy to IfNotPresent.
I get the following error in pod description:
Warning ErrImageNeverPull 11s (x4 over 13s) kubelet Container image "localhost:5000/app-frontend" is not present with pull policy of Never
Warning Failed 11s (x4 over 13s) kubelet Error: ErrImageNeverPull
Below is my deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-frontend
  labels:
    app: app-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-frontend
  template:
    metadata:
      labels:
        app: app-frontend
    spec:
      containers:
        - name: app-frontend
          image: localhost:5000/docker-image
          imagePullPolicy: Never
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
Any help or guidance will be greatly appreciated.
In the docs I see this:
"With imagePullPolicy set to Never, the image is never pulled."
Try this instead:
imagePullPolicy: IfNotPresent
Also, your Deployment says
image: localhost:5000/docker-image
but in point 4 you specify an IP (10.128.234.123:5000/app-frontend); the image reference in the manifest has to match the name you actually tagged and pushed. A corrected snippet is sketched below.
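A minimal sketch of the corrected container section, assuming the image was pushed as 10.128.234.123:5000/app-frontend as in point 4 (everything else as in your Deployment):
containers:
  - name: app-frontend
    image: 10.128.234.123:5000/app-frontend
    imagePullPolicy: IfNotPresent
    ports:
      - containerPort: 80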

Skaffold Error: deployment failed because of cleaning up

I have tried so many times to run skaffold from my project directory. It keeps returning the same error: 1/1 deployment(s) failed.
Skaffold.yaml file:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - ./infra/k8s/*
build:
  local:
    push: false
  artifacts:
    - image: ankan00/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
I created the Docker image ankan00/auth with docker build -t ankan00/auth .
It ran successfully when I was working on this project before. But I had to uninstall Docker for some reason, and when I reinstalled Docker and built the image again (after deleting the previous instance of the image in Docker Desktop), Skaffold stopped working. I tried deleting the skaffold folder and reinstalling Skaffold, but the problem remains the same. Every time it ends up cleaning up and throwing 1/1 deployment(s) failed.
My Dockerfile:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
My auth-depl.yaml file, which is in the infra\k8s directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: ankan00/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
Okay! I resolved the issues by re-installing Docker Desktop and not enabling Kubernetes in it. I installed Minikube instead, and this time skaffold dev did not give the error in the "deployments to stabilize..." stage. Is Docker Desktop's Kubernetes the culprit? I am not sure, though, because I had run it successfully with it before.
New update!!! I worked with Docker Desktop's Kubernetes again. I deleted Minikube because Minikube uses the same port that the ingress-nginx server uses to run the project. So I decided to put back Docker Desktop's Kubernetes, and also Google Cloud Kubernetes Engine, and Skaffold works perfectly this time.

azure-aks docker container python-django url not working

I have created a Docker image with Debian + Python/Django that runs on port 8000. But after deploying it to azure-aks, the URL is not reachable on port 8000. Keeping the important details below.
Step 1:
Dockerfile:
EXPOSE 8000
RUN /usr/local/bin/python3 manage.py migrate
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
Step 2:
After building the Docker image, I push it to the Azure registry.
Step 3:
myfile.yaml: this deploys the image from the Azure registry into the AKS cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myops
  template:
    metadata:
      labels:
        app: myops
    spec:
      containers:
        - name: myops
          image: quantumregistry.azurecr.io/myops:v1.0
          ports:
            - containerPort: 8000
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: myops-python
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8888
  selector:
    app: myops
# [END service]
Deploy into AKS: kubectl apply -f myops.yaml
Step 4: Check the service:
kubectl get service myops-python --watch
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myops-python LoadBalancer <cluster-ip> <external-ip> 8000:30778/TCP 37m
Note: I have masked the IPs so as not to expose them publicly.
Step 5: I see the container is running all right:
kubectl get pods
NAME READY STATUS RESTARTS AGE
myops-5bbd459745-cz2vc 1/1 Running 0 19m
Step 6: I see in the container log that Python is listening on host 0.0.0.0, port 8000:
kubectl logs -f myops-5bbd459745-cz2vc
Watching for file changes with StatReloader
Performing system checks...
WARNING:param.main: pandas could not register all extension types imports failed with the following error: cannot import name 'ABCIndexClass' from 'pandas.core.dtypes.generic' (/usr/local/lib/python3.9/site-packages/pandas/core/dtypes/generic.py)
System check identified no issues (0 silenced).
September 19, 2021 - 06:47:57
Django version 3.2.5, using settings 'myops_project.settings'
Starting development server at http://0.0.0.0:8000/
Quit the server with CONTROL-C.
The issue is that when I open http://<external-ip>:8000/myops_app in a browser, it is not working and times out.
The Service myops-python is set up to receive requests on port 8000, but it then sends them to the Pod on targetPort 8888:
ports:
  - port: 8000
    targetPort: 8888
The container myops in the Pod, however, is not listening on port 8888; rather, it is listening on port 8000, as the Dockerfile shows:
EXPOSE 8000
RUN /usr/local/bin/python3 manage.py migrate
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
Please set spec.ports[0].targetPort to 8000 manually, or remove targetPort from spec.ports[0] in the Service myops-python; by default, and for convenience, targetPort is set to the same value as the port field. For more information, please see Defining a Service. A corrected snippet is sketched below.
Tip: You can use kubectl edit service <service-name> -n <namespace> to edit your Service manifest.
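A minimal sketch of the corrected Service, assuming everything else stays as in the question (targetPort now matches the container's port 8000; it could equally be omitted, since it defaults to the value of port):
apiVersion: v1
kind: Service
metadata:
  name: myops-python
spec:
  type: LoadBalancer
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    app: myops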

How to pull image from Docker registry within Kubernetes cluster?

I'm learning Kubernetes and want to set up a Docker registry to run within my cluster, deploy any custom code to this private registry, then have my nodes pull images from this private registry to create pods. I've described my setup in this StackOverflow question.
Originally I was caught up trying to figure out SSL certificates, but for now I've postponed that and I'm trying to work with an insecure registry. To that end I've created the following pod to run my registry (I know it's a pod and not a replica set or deployment -- this is only for experimental purposes and I'll make it cleaner once it's working):
apiVersion: v1
kind: Pod
metadata:
  name: docker-registry
  labels:
    app: docker-registry
spec:
  containers:
    - name: docker-registry
      image: registry:2
      ports:
        - containerPort: 80
          hostPort: 80
      env:
        - name: REGISTRY_HTTP_ADDR
          value: 0.0.0.0:80
I then created the following NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-external
  labels:
    app: docker-registry
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 80
      nodePort: 32000
  selector:
    app: docker-registry
I have a load balancer set up in front of my Kubernetes cluster, which I configured to route traffic on port 80 to port 32000, so I can hit this registry at http://example.com.
I then updated my local /etc/docker/daemon.json as follows:
{
  "insecure-registries": ["example.com"]
}
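Note that the Docker daemon only picks up daemon.json changes after a restart; on a systemd-based host that is typically:
sudo systemctl restart docker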
With this I was able to push an image to my registry successfully:
> docker pull ubuntu
> docker tag ubuntu example.com/my-ubuntu
> docker push example.com/my-ubuntu
The push refers to repository [example.com/my-ubuntu]
cc9d18e90faa: Pushed
0c2689e3f920: Pushed
47dde53750b4: Pushed
latest: digest: sha256:1d7b639619bdca2d008eca2d5293e3c43ff84cbee597ff76de3b7a7de3e84956 size: 943
Now I want to try and pull this image when creating a pod. So I created the following ClusterIP service to make my registry accessible within my cluster:
apiVersion: v1
kind: Service
metadata:
  name: docker-registry-internal
  labels:
    app: docker-registry
spec:
  type: ClusterIP
  ports:
    - targetPort: 80
      port: 80
  selector:
    app: docker-registry
Then I created a secret:
apiVersion: v1
kind: Secret
metadata:
  name: local-docker
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ewoJImluc2VjdXJlLXJlZ2lzdHJpZXMiOiBbImRvY2tlci1yZWdpc3RyeS1pbnRlcm5hbCJdCn0K
The base64 bit decodes to:
{
  "insecure-registries": ["docker-registry-internal"]
}
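(As an aside: a kubernetes.io/dockerconfigjson payload is normally a Docker client config of the shape below, i.e. an "auths" map with credentials, rather than a daemon.json-style "insecure-registries" setting; the placeholder value here is mine.)
{
  "auths": {
    "docker-registry-internal": {
      "auth": "<base64 of username:password>"
    }
  }
}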
Finally, I created the following pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-docker
  labels:
    name: test
spec:
  imagePullSecrets:
    - name: local-docker
  containers:
    - name: test
      image: docker-registry-internal/my-ubuntu
When I tried to create this pod (kubectl create -f test-pod.yml) and looked at my cluster, this is what I saw:
> kubectl get pods
NAME READY STATUS RESTARTS AGE
test-docker 0/1 ErrImagePull 0 4s
docker-registry 1/1 Running 0 34m
> kubectl describe pod test-docker
...
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m33s default-scheduler Successfully assigned default/test-docker to pool-uqa-dev-3sli8
Normal Pulling 3m22s (x2 over 3m32s) kubelet Pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m22s (x2 over 3m32s) kubelet Failed to pull image "docker-registry-internal/my-ubuntu": rpc error: code = Unknown desc = Error response from daemon: pull access denied for docker-registry-internal/my-ubuntu, repository does not exist or may require 'docker login'
Warning Failed 3m22s (x2 over 3m32s) kubelet Error: ErrImagePull
Normal SandboxChanged 3m19s (x7 over 3m32s) kubelet Pod sandbox changed, it will be killed and re-created.
Normal BackOff 3m18s (x6 over 3m30s) kubelet Back-off pulling image "docker-registry-internal/my-ubuntu"
Warning Failed 3m18s (x6 over 3m30s) kubelet Error: ImagePullBackOff
It's clearly failing to find the host "docker-registry-internal", despite the ClusterIP service.
I tried inspecting a pod from the inside using a trick I found online:
> kubectl run -i --tty --rm debug --image=ubuntu --restart=Never -- bash
If you don't see a command prompt, try pressing enter.
root@debug:/# cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
fe00::0 ip6-mcastprefix
fe00::1 ip6-allnodes
fe00::2 ip6-allrouters
10.244.1.67 debug
It doesn't seem like ClusterIP services are added to the /etc/hosts file, so I'm not sure how services are supposed to find one another.
I tried watching several Kubernetes tutorials on general service communication (e.g. an app pod communicating with a Redis pod), and every time all they did was supply the service name as a host and it magically connected. I'm not sure what I'm missing. Bear in mind I'm brand new to Kubernetes, so the internals are still mystical to me.
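As an aside, service names are resolved by the cluster's DNS add-on rather than through /etc/hosts, so one quick sanity check is a throwaway pod doing a DNS lookup of the Service name (busybox:1.28 is a commonly suggested image because its nslookup behaves reliably):
kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- nslookup docker-registry-internal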

How can I use a local docker image in Kubernetes?

I have this basic Dockerfile:
FROM nginx
RUN apt-get -y update && apt-get install -y curl
In the master node of my Kubernetes cluster I build that image:
docker build -t cnginx:v1 .
docker images shows that the image has been correctly generated:
REPOSITORY TAG IMAGE ID CREATED SIZE
cnginx v1 d3b1b19d069e 39 minutes ago 141MB
I use this deployment referencing this custom image:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: cnginx
          image: cnginx:v1
          imagePullPolicy: Never
          ports:
            - containerPort: 80
      nodeSelector:
        nodetype: webserver
However the image is not found:
NAME READY STATUS RESTARTS AGE
nginx-deployment-7dd98bd746-lw6tp 0/1 ErrImageNeverPull 0 4s
nginx-deployment-7dd98bd746-szr9n 0/1 ErrImageNeverPull 0 4s
Describe pod info:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned nginx-deployment-7dd98bd746-szr9n to kubenode2
Normal SuccessfulMountVolume 1m kubelet, kubenode2 MountVolume.SetUp succeeded for volume "default-token-bpbpl"
Warning ErrImageNeverPull 9s (x9 over 1m) kubelet, kubenode2 Container image "cnginx:v1" is not present with pull policy of Never
Warning Failed 9s (x9 over 1m) kubelet, kubenode2 Error: ErrImageNeverPull
I have also tried using the default imagePullPolicy, and some other things such as tagging the image with latest...
So, how can I make Kubernetes use a locally generated docker image?
Your Pods are scheduled on your worker nodes. Since you set imagePullPolicy to Never, you need to make your image available on both nodes; in other words, you need to build it on both nodes as you did on the master, or copy it over, as sketched below.
As a side note, it would probably be easier in the long term to set up a custom Docker registry and push your images there.
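A minimal sketch of making the image available without rebuilding, using docker save/load (node hostnames taken from the events above; repeat for each worker):
# On the master: export the built image to a tarball
docker save cnginx:v1 -o cnginx-v1.tar
# Copy it to a worker and load it into that node's Docker
scp cnginx-v1.tar kubenode2:/tmp/
ssh kubenode2 docker load -i /tmp/cnginx-v1.tar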
