Kubernetes Ingress gets unhealthy backend services on Google Kubernetes Engine

I'm trying to deploy two services on Google Kubernetes Engine, and I have created a cluster with 3 nodes.
My Docker images are in a private Docker Hub repo, which is why I have created a secret and used it in the Deployments. The Ingress creates a load balancer in the Google Cloud console, but it shows that the backend services are not healthy, and in the Kubernetes section under Workloads it says "Does not have minimum availability".
I'm new to Kubernetes; what could the problem be?
Here are my yamls:
Deployment.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp
  labels:
    app: pythonaryapp
spec:
  replicas: 1 # We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp
  template:
    metadata:
      labels:
        app: pythonaryapp
    spec:
      containers:
        - name: pythonaryapp # 1st container
          image: docker.io/arycloud/docker_web_app:pythonaryapp # Docker Hub image
          ports:
            - containerPort: 8080 # Exposes port 8080 of the container
          env:
            - name: PORT # Env variable key passed to the container, read by the app
              value: "8080" # Value of the env port
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 10
      imagePullSecrets:
        - name: docksecret
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: pythonaryapp1
  labels:
    app: pythonaryapp1
spec:
  replicas: 1 # We always want more than 1 replica for HA
  selector:
    matchLabels:
      app: pythonaryapp1
  template:
    metadata:
      labels:
        app: pythonaryapp1
    spec:
      containers:
        - name: pythonaryapp1 # 1st container
          image: docker.io/arycloud/docker_web_app:pythonaryapp1 # Docker Hub image
          ports:
            - containerPort: 8080 # Exposes port 8080 of the container
          env:
            - name: PORT # Env variable key passed to the container, read by the app
              value: "8080" # Value of the env port
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 2
            timeoutSeconds: 2
            successThreshold: 2
            failureThreshold: 10
      imagePullSecrets:
        - name: docksecret
---
And here's services.yaml:
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp
spec:
  type: NodePort
  selector:
    app: pythonaryapp
  ports:
    - protocol: TCP
      port: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: pythonaryapp1
spec:
  type: NodePort
  selector:
    app: pythonaryapp1
  ports:
    - protocol: TCP
      port: 8080
And here's my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mysvcs
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: pythonaryapp
              servicePort: 8080
          - path: /<name>
            backend:
              serviceName: pythonaryapp1
              servicePort: 8080
Update:
Here's the Flask service code:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World, from Python Service.', 200

if __name__ == '__main__':
    app.run()
And when running a container from its Docker image, it returns a 200 status code at the root path /.
Thanks in advance!

Have a look at this post; it might contain helpful tips for your issue.
For example, I do see a readiness probe but no liveness probe in your config files.
That post suggests that "Does not have minimum availability" in k8s can be the result of a CrashLoopBackOff caused by a failing liveness probe.
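As a minimal sketch (assuming the app keeps serving /healthz on port 8080 once it is up, as the readiness probe implies), a liveness probe could sit alongside the readiness probe in the container spec:

livenessProbe:
  httpGet:
    path: /healthz # assumed endpoint; reuse whatever the app actually serves
    port: 8080
  initialDelaySeconds: 15 # give the app time to start before the first check
  periodSeconds: 10
  failureThreshold: 3 # restart the container after 3 consecutive failures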

In GKE, the Ingress is implemented by a GCP load balancer. The GCP LB checks the health of the service by calling the service address at the root path '/'. Make sure that your container can respond with 200 on the root, or alternatively change the LB backend service health check route (you can do that in the GCP console).
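As a sketch of what that means in the manifests above (assuming the app only serves '/', and noting that GKE can also derive the load balancer health check from the container's readiness probe), the probe could simply point at the root instead of /healthz:

readinessProbe:
  httpGet:
    path: / # a path the app actually serves; must return 200
    port: 8080
  periodSeconds: 2
  timeoutSeconds: 2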

Related

Which ports need to be set for a k8s deployment?

I do not understand how to configure ports correctly for a k8s deployment.
Assume there is a Next.js application which listens on port 3003 (the default is 3000). I build the Docker image:
FROM node:16.14.0-alpine  # apk is only available on Alpine-based images
RUN apk add dumb-init
# ...
EXPOSE 3003
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD npx next start -p 3003
So in this Dockerfile there are two places defining the port value 3003. Is this needed?
Then I define this k8s manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  spec:
    containers:
      - name: example
        image: "hub.domain.com/example:1.0.0"
        imagePullPolicy: IfNotPresent
        ports:
          - containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: tls-key
  rules:
    - host: domain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: example
                port:
                  number: 80
The deployment is not working correctly. Calling domain.com shows me a 503 Service Temporarily Unavailable error.
If I do a port forward on the pod, I can see the working app at localhost:3003. I cannot create a port forward on the service.
So obviously I'm doing something wrong with the ports. Can someone explain which value has to be set and why?
You are missing labels from the deployment and the selector from the service. Try this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
  labels:
    app: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: example
          image: "hub.domain.com/example:1.0.0"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3003
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3003
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
    - hosts:
        - domain.com
      secretName: tls-key
  rules:
    - host: domain.com
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: example
                port:
                  number: 80
Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
Service: https://kubernetes.io/docs/concepts/services-networking/service/
Labels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
You can name your label keys and values anything you like; you could even have a label of whatever: something instead of app: example. But these are some recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
https://kubernetes.io/docs/reference/labels-annotations-taints/
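As a quick way to verify the fix: once the Service selector matches the pod labels, the Service gets endpoints, which you can check with kubectl get endpoints example (the object name here follows the manifests above). An empty ENDPOINTS column means the Service still selects nothing, which is exactly what produces a 503 from the ingress.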

Cannot access ingress on local minikube

I am trying to set up a Jenkins server inside a Kubernetes pod with minikube. Following the documentation for this, I have the following Kubernetes configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          ports:
            - name: http-port
              containerPort: 8080
            - name: jnlp-port
              containerPort: 50000
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
spec:
  selector:
    app: jenkins
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
    kubernetes.io/ingress.class: nginx
    ## tells ingress to check for regex in the config file
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - http:
        paths:
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: jenkins-service
                port:
                  number: 8080
I am trying to create a deployment (pod), a service for it, and an ingress to expose it.
I got the IP address of the ingress with kubectl get ingress, and it is the following:
NAME              CLASS    HOSTS   ADDRESS        PORTS   AGE
jenkins-ingress   <none>   *                      80      5s
jenkins-ingress   <none>   *       192.168.49.2   80      5s
When I try to open 192.168.49.2 in the browser, I get a timeout. The same happens when I try to ping this IP address from the terminal.
The ingress addon is enabled in minikube, and when I describe the ingress, I see the following:
Name:             jenkins-ingress
Namespace:        default
Address:          192.168.49.2
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host        Path    Backends
  ----        ----    --------
  *
              /?(.*)  jenkins-service:8080 (172.17.0.4:8080)
Annotations:  kubernetes.io/ingress.class: nginx
              nginx.ingress.kubernetes.io/default-backend: ingress-nginx-controller
              nginx.ingress.kubernetes.io/use-regex: true
Events:
  Type    Reason  Age                    From                      Message
  ----    ------  ----                   ----                      -------
  Normal  Sync    6m15s (x2 over 6m20s)  nginx-ingress-controller  Scheduled for sync
Can someone tell me what I am doing wrong, and how I can access the Jenkins instance through the browser?

GKE Kubernetes Ingress not routing traffic to microservices

I'm pretty new to Kubernetes and am trying to deploy what I think is a pretty common use case onto a GKE cluster we have created with Terraform: microservices all hosted on one cluster. But I cannot for the life of me get the routing to serve traffic to the correct services. The setup I'm trying to create is as follows:
Deployment & Service for each microservice (canvas-service and video-service)
Single Ingress (class GCE) on the cluster, hosted at a static IP, that routes traffic to each service based on path
The current config looks like this:
canvas-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canvas-service-deployment
  labels:
    name: canvas-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: canvas-service
  template:
    metadata:
      labels:
        app: canvas-service
    spec:
      restartPolicy: Always
      containers:
        - name: canvas-service
          image: gcr.io/emile-learning-dev/canvas-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: canvas-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: canvas-service
video-service.yaml
# Deployment (Service Manager)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-service-deployment
  labels:
    name: video-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: video-service
  template:
    metadata:
      labels:
        app: video-service
    spec:
      restartPolicy: Always
      containers:
        - name: video-service
          image: gcr.io/emile-learning-dev/video-service-image:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
          ports:
            - name: root
              containerPort: 8080
          resources:
            requests:
              memory: "4096Mi"
              cpu: "1000m"
            limits:
              memory: "8192Mi"
              cpu: "2000m"
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: video-service
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      targetPort: 8080
  selector:
    app: video-service
services-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: services-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "services-ip"
spec:
  defaultBackend:
    service:
      name: canvas-service
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /canvas/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: canvas-service
                port:
                  number: 80
          - path: /video/*
            pathType: ImplementationSpecific
            backend:
              service:
                name: video-service
                port:
                  number: 80
The output of kubectl describe ingress services-ingress looks like this:
Name:             services-ingress
Namespace:        default
Address:          34.107.136.153
Default backend:  canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
Rules:
  Host        Path       Backends
  ----        ----       --------
  *
              /canvas/*  canvas-service:80 (10.244.2.17:8080,10.244.5.15:8080)
              /video/*   video-service:80 (10.244.1.15:8080,10.244.8.51:8080)
Annotations:  ingress.kubernetes.io/backends: {"k8s-be-30551--8ba41a687ec15071":"HEALTHY","k8s-be-32145--8ba41a687ec15071":"HEALTHY"}
              ingress.kubernetes.io/forwarding-rule: k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/target-proxy: k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1
              ingress.kubernetes.io/url-map: k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1
              kubernetes.io/ingress.global-static-ip-name: services-ip
Events:
  Type    Reason     Age                  From                     Message
  ----    ------     ----                 ----                     -------
  Normal  Sync       9m36s                loadbalancer-controller  UrlMap "k8s2-um-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m34s                loadbalancer-controller  TargetProxy "k8s2-tp-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  Sync       9m26s                loadbalancer-controller  ForwardingRule "k8s2-fr-xlhz0sas-default-services-ingress-hqyvwyy1" created
  Normal  IPChanged  9m26s                loadbalancer-controller  IP is now 34.107.136.153
  Normal  Sync       6m56s (x5 over 11m)  loadbalancer-controller  Scheduled for sync
For testing, I have a healthcheck route at /health for each service. What I'm running into is that when I hit {public_ip}/health (using the default backend) I get the expected response. But when I hit {public_ip}/canvas/health or {public_ip}/video/health, I get a 404 Not Found.
I know it has something to do with the fact that the entire service route structure lives under the /canvas or /video route, but I thought that the /* was supposed to address exactly that. I'd basically like the root route for each service to exist on the corresponding subpaths /canvas and /video. I'd love to hear any thoughts on what I'm doing wrong that's leading to traffic not being routed correctly.
If it's an issue with the GCP default Ingress resource, or this isn't within its functionality, I'm totally open to using an nginx Ingress. But I haven't been able to get an nginx Ingress to expose an IP at all, so I figured the GCP Ingress would probably be a shorter path to getting this cluster working. If I'm wrong about this, please let me know.
It's due to the paths defined in the Ingress. Changing the paths to /video and /canvas will make this work.
To understand the reason behind this, read the nginx ingress controller documentation about path and pathType. You can also use a regex pattern in the path, as described in the documentation here: https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/
Tip: when in doubt, you can always exec into the nginx ingress controller pods and check the nginx.conf file for the location blocks nginx has generated.
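A minimal sketch of that rule change for the GCE ingress class used in the question (note the load balancer does not strip the /canvas prefix before forwarding, so the app must serve its routes under /canvas; /canvas matches the bare path and /canvas/* its subpaths):

- path: /canvas
  pathType: ImplementationSpecific
  backend:
    service:
      name: canvas-service
      port:
        number: 80
- path: /canvas/*
  pathType: ImplementationSpecific
  backend:
    service:
      name: canvas-service
      port:
        number: 80

The /video rules would follow the same pattern.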

Is it possible to put a fixed IP and port in minikube?

Is it possible to fix the IP and port somewhere in my YAML?
My application has 3 parts: a frontend with its respective balancer, a backend with its respective balancer, and a database with a StatefulSet to persist the volume. These 3 applications have their respective HPA rules.
I'm posting the YAML of the backend; I'd like to set the IP and the port, since I am working locally and every so often I have to change the port or the IP.
backend.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: XXXXXXX
          command: ["/bin/sh"]
          args: ["-c", "node index.js"]
          ports:
            - containerPort: 4000
          imagePullPolicy: IfNotPresent
          env:
            - name: HOST_DB
              value: "172.17.0.3"
            - name: PORT_DB
              value: "31109"
          resources:
            requests:
              memory: "128Mi"
              cpu: "200m"
            limits:
              memory: "256Mi"
              cpu: "1000m"
          readinessProbe:
            httpGet:
              path: /
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 15
            periodSeconds: 20
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 4000
      name: https
  type: LoadBalancer
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
In minikube, the IP is fixed anyway to the single node's IP. You can hardcode the NodePort by specifying nodePort in the Service; without nodePort specified, Kubernetes will assign a port from the range 30000-32767:
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 4000
      name: https
      nodePort: 30007
Follow this guide to expose applications via a NodePort-type service in minikube.
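With the nodePort pinned, the service stays reachable at a stable address across restarts: minikube ip prints the (fixed) node IP, so with the manifest above the backend would be at http://<minikube-ip>:30007.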

What is the difference between the template section in replicator.yml and pod.yml in Kubernetes?

I'm trying to understand the difference between the manifest files used for bringing up the Kubernetes cluster.
Suppose I have a file called pod.yml that defines my pod, that is, the containers running in it:
Pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: webserver
      image: httpd
      ports:
        - containerPort: 80
          hostPort: 80
And I have a replicator.yml file to launch 3 of these pods:
Replicator.yml
kind: "ReplicationController"
apiVersion: "v1"
metadata:
name: "webserver-controller"
spec:
replicas: 3
selector:
app: "webserver"
template:
spec:
containers:
- name: webserver
image: httpd
ports:
- containerPort: 80
hostport: 80`
Can I avoid the template section in replicator.yml if I'm already using pod.yml to define the images used to build the containers in the pod?
Do you need all three manifest files (pod.yml, service.yml and replicator.yml), or can you just use service.yml and replicator.yml to create the cluster?
If you are using a ReplicationController, Deployment, DaemonSet or StatefulSet (formerly PetSet), you don't need a separate pod definition; the pod template inside the controller replaces it. However, a Service should be defined if you want to expose the pods, and this can be done in the same file.
Example:
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: default
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    k8s-app: default-http-backend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
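For what it's worth, ReplicationControllers have largely been superseded by Deployments. A minimal sketch of the same backend as a Deployment (same pod template; only the kind, apiVersion and selector syntax change):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: default-http-backend
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      containers:
        - name: default-http-backend
          image: gcr.io/google_containers/defaultbackend:1.0
          ports:
            - containerPort: 8080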
