kubernetes unhealthy ingress backend - docker

I followed the load balancer tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer which works fine when I use the Nginx image, but when I try to use my own application image the backend switches to unhealthy.
My application redirects on / (returns a 302) but I added a livenessProbe in the pod definition:
livenessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
    - name: X-health-check
      value: kubernetes-healthcheck
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: foo.bar.com
My ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  backend:
    serviceName: foo
    servicePort: 80
  rules:
  - host: foo.bar.com
Service configuration is:
kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 80
    targetPort: 4001
The backend health in kubectl describe ing looks like:
backends: {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"}
and the rules on the ingress look like:
Rules:
Host Path Backends
---- ---- --------
* * foo:80 (10.0.0.7:4001,10.0.1.6:4001)
Any pointers gratefully received; I've been trying to work this out for hours with no luck.
Update
I have added the readinessProbe to my deployment but something still appears to hit / and the ingress is still unhealthy. My probe looks like:
readinessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
    - name: X-health-check
      value: kubernetes-healthcheck
    - name: X-Forwarded-Proto
      value: https
    - name: Host
      value: foo.com
I changed my service to:
kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
  - port: 4001
    targetPort: 4001
Update2
After I removed the custom headers from the readinessProbe it started working! Many thanks.

You need to add a readinessProbe (just copy your livenessProbe).
It's explained in the GCE L7 Ingress Docs.
Health checks
Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer: 1. Respond with a 200 on '/'. The content does not matter. 2. Expose an arbitrary url as a readiness probe on the pods backing the Service.
Also make sure that the readinessProbe is pointing to the same port that you expose to the Ingress. In your case that's fine since you have only one port; if you add another one you may run into trouble.
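For example, a minimal sketch of such a probe for the setup in the question (assuming /ping on port 4001 returns a plain 200, with no custom httpHeaders, since those are what broke the check in Update2) could look like:

readinessProbe:
  httpGet:
    path: /ping
    port: 4001
  initialDelaySeconds: 5
  periodSeconds: 10

GKE derives the load balancer health check from this probe, so keeping it to a plain GET on the serving port is the safest option.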

I think it's worth noting this quite important limitation from the documentation:
Changes to a Pod's readinessProbe do not affect the Ingress after it is created.
After adding my readinessProbe I basically deleted my ingress (kubectl delete ingress <name>) and then applied my yaml file again to re-create it, and shortly afterwards everything was working again.

I was having the same issue. I followed Tex's tip but continued to see that message. It turned out I had to wait a few minutes for the ingress to validate the service health. If someone is going through the same thing and has done all the steps like adding the readinessProbe and livenessProbe, just ensure your ingress is pointing to a service of type NodePort, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs in StackDriver to get a better idea of what's going on.

I was also having exactly the same issue after updating my ingress readinessProbe.
I could see the Ingress status labeled "Some backend services are in UNKNOWN state" in yellow.
I waited for more than 30 minutes, yet the changes were not reflected.
After more than 24 hours the changes were reflected and the status turned green.
I couldn't find any official documentation for this, but it seems like a bug in the GCP Ingress resource.

If you don't want to change your pod spec, or rely on the magic of GKE pulling out your readinessProbe, you can also configure a BackendConfig like the one below to explicitly configure the health check.
This is also helpful if you want to use a script for your readinessProbe, which isn't supported by GKE ingress health checks (a sketch of such a probe follows the manifests below).
Note that the BackendConfig needs to be explicitly referenced in your Service definition.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    # This points GKE Ingress to the BackendConfig below
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: ClusterIP
  ports:
  - name: health
    port: 1234
    protocol: TCP
    targetPort: 1234
  - name: http
    ...
  selector:
    ...
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 1234
    type: HTTP
    requestPath: /healthz
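With the load balancer health check handled by the BackendConfig above, the pod's own readinessProbe is then free to run a script. A rough sketch (the check-ready.sh path is a hypothetical placeholder, not something from the original setup):

readinessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - /usr/local/bin/check-ready.sh   # hypothetical script baked into the image
  initialDelaySeconds: 5
  periodSeconds: 10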

Every one of these answers helped me.
In addition, the HTTP probes need to return a 200 status. Stupidly, mine was returning a 301. So I just added a simple "ping" endpoint and all was well/healthy.

Related

Nginx Ingress works only if nodeport is added to the host name. How to make it work without nodeport?

I'm trying a simple microservices app on a cloud Kubernetes cluster. This is the Ingress yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  defaultBackend:
    service:
      name: auth-svc
      port:
        number: 5000
  rules:
  - host: "somehostname.xyz"
    http:
      paths:
      - path: "/"
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 5000
The problem:
When I use this URL, I'm able to access the auth service: http://somehostname.xyz:31840. However, if I use http://somehostname.xyz, I get a "This site can’t be reached somehostname.xyz refused to connect." error.
The auth service sends GET requests to other services too, and I'm able to see the response from those services if I use:
http://somehostname.xyz:31840/go or http://somehostname.xyz:31840/express. But again, these work only if the nodeport 31840 is used.
My questions:
What typically causes such a problem, where I can access the service using the hostname and nodeport, but it won't work without supplying the nodeport?
Is there a method to test this in a different way to figure out where the problem is?
Is it a problem with the Ingress or Auth namespace? Is it a problem with the hostname in Flask? Is it a problem with the Ingress controller? How do I debug this?
These are the results of kubectl get all and other commands.
NAME READY STATUS RESTARTS
pod/auth-flask-58ccd5c94c-g257t 1/1 Running 0
pod/ingress-nginx-nginx-ingress-6677d54459-gtr42 1/1 Running 0
NAME TYPE EXTERNAL-IP PORT(S)
service/auth-svc ClusterIP <none> 5000/TCP
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
NAME READY UP-TO-DATE AVAILABLE
deployment.apps/auth-flask 1/1 1 1
deployment.apps/ingress-nginx-nginx-ingress 1/1 1 1
NAME DESIRED CURRENT READY
replicaset.apps/auth-flask-58ccd5c94c 1 1 1
replicaset.apps/ingress-nginx-nginx-ingress-6677d54459 1 1 1
NAME CLASS HOSTS ADDRESS PORTS
ingress-nginx-nginx-ingress <none> somehostname.xyz 172.xxx.xx.130 80
Describing the ingress also seems normal.
kubectl describe ingress ingress-nginx-nginx-ingress
Name: ingress-nginx-nginx-ingress
Namespace: default
Address: 172.xxx.xx.130
Default backend: auth-svc:5000 (10.x.xx.xxx:5000)
Rules:
Host Path Backends
---- ---- --------
somehostname.xyz
/ auth-svc:5000 (10.x.xx.xxx:5000)
Annotations: kubernetes.io/ingress.class: nginx
This is the code of Auth.
import requests
from flask import Flask

app = Flask(__name__)

@app.route('/')
def indexPage():
    return ' <!DOCTYPE html><html><head><meta charset="UTF-8" />\
    <title>Microservice</title></head> \
    <body><div style="text-align: center;">Welcome to the Auth page</div></body></html>'

@app.route('/go')
def getGoJson():
    return requests.get('http://analytics-svc:8082/info').content

@app.route('/express')
def getNodeResponse():
    return requests.get('http://node-svc:8085/express').content

if __name__ == '__main__':
    app.run(debug=True, host="0.0.0.0")
and Auth's Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/src/app
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
The part of docker-compose yaml for auth:
version: "3.3"
services:
  auth:
    build: ./auth/
    image: nav9/auth-flask:v1
    ports:
      - "5000:5000"
Auth's Kubernetes manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-flask
spec:
  selector:
    matchLabels:
      any-name: auth-flask
  template:
    metadata:
      labels:
        any-name: auth-flask
    spec:
      containers:
      - name: auth-name
        image: nav9/auth-flask:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: auth-svc
spec:
  # type: ClusterIP
  ports:
  - targetPort: 5000
    port: 5000
  selector:
    any-name: auth-flask
What typically causes such a problem, where I can access the service using the hostname and nodeport, but it won't work without supplying the nodeport?
If the URL works when using the nodeport and not without the nodeport, then this means that the ingress is not configured properly for what you want to do.
Is there a method to test this in a different way to figure out where the problem is?
Steps for troubleshooting are:
The first step is to determine whether the error comes from the ingress or from your back-end service.
In your case, the error "This site can't be reached: somehostname.xyz refused to connect" sounds like the Ingress found the service to map to and used port 5000 to connect to it, and either the connection was refused or nothing was listening on port 5000 for that service.
I'd next look at the auth-svc logs to see whether the request came into the system and why it was refused.
My guess is that the auth service is listening on port 31840 but your ingress says to connect to port 5000 based on the configuration.
You might try adding a port mapping from 80 to 31840 as a hack/test to see if you get a different error.
Something like:
spec:
  rules:
  - host: "somehostname.xyz"
    http:
      paths:
      - path: "/"
        backend:
          service:
            port:
              number: 31840
I've only included the part needed to show the indentation properly.
So the other way to test this out is to create additional URLs that map to different ports, so for example:
/try1 => auth-svc:5000
/try2 => auth-svc:31840
/try3 => auth-svc:443
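A sketch of what such a test Ingress could look like, reusing the auth-svc backend from the question (the Ingress name, the /try1 and /try2 paths, and the port choices are only illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-port-test
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: "somehostname.xyz"
    http:
      paths:
      - path: "/try1"
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 5000
      - path: "/try2"
        pathType: Prefix
        backend:
          service:
            name: auth-svc
            port:
              number: 31840

Whichever path responds tells you which port the service is actually serving on.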
The other part that I haven't played with, but that might be an issue, is that you are using http. I don't know of any auth service that would use http, so simply trying to connect over http to an app that wants https will either get the connection refused or produce a strange error; that might be related to the problem/error you are seeing.
Hope this gives you some ideas to try.
The solution has three parts:
Use kubectl get all to find out the running ingress service:
NAME TYPE EXTERNAL-IP PORT(S)
service/ingress-nginx-nginx-ingress LoadBalancer 172.xxx.xx.130 80:31840/TCP,443:30550/TCP
Copy the EXTERNAL-IP of the service (in this case 172.xxx.xx.130).
Add a DNS A record named *.somehostname.xyz for the cloud cluster, and use the IP address 172.xxx.xx.130.
When accessing the hostname via the browser, make sure that http is used instead of https.

Can't access kubernetes nodeport service

I don't think I'm missing anything, but my Angular app doesn't seem to be able to contact the service I exposed through Kubernetes.
Whenever I try to call the exposed nodeport on my localhost, I get a connection refused.
The deployment file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: society-api-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: society-api-gateway-deployment
  template:
    metadata:
      labels:
        app: society-api-gateway-deployment
    spec:
      containers:
      - name: society-api-gateway-deployment
        image: tbusschaert/society-api-gateway:latest
        ports:
        - containerPort: 80
The service file
apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30001
I double checked: the call doesn't reach my pod, and I get a connection refused error when making the call.
I'm using minikube and kubectl on my local machine.
I'm out of options; I've tried everything I thought it could be. Thanks in advance.
EDIT 1:
So after following the suggestions, I used the node IP to call the service: I changed the IP in my Angular project, and now I get a connection timeout.
As for the port forward, I get a permission error.
So, as I thought, the problem was related to minikube not opening up to my localhost.
First of all, I didn't need a NodePort: a LoadBalancer also fit my need, so my API gateway became a LoadBalancer.
Second, when using minikube, to achieve what I wanted (running Kubernetes on my local machine with my Angular client also on my local machine), you have to create a minikube tunnel, exactly as they explain here: https://minikube.sigs.k8s.io/docs/handbook/accessing/#run-tunnel-in-a-separate-terminal
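For reference, a sketch of what the changed Service could look like (assuming the rest of the original definition stays the same):

apiVersion: v1
kind: Service
metadata:
  name: society-api-gateway-service
spec:
  type: LoadBalancer   # changed from NodePort
  selector:
    app: society-api-gateway-deployment
  ports:
  - name: http
    port: 80
    targetPort: 80

With minikube tunnel running in a separate terminal, the Service then gets an external IP that is reachable from the host machine.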
From the docs, you can see that the URL template is <NodeIP>:<NodePort>.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
So first, take the NodeIP from the kubectl get node -o wide command.
Then try <NodeIP>:<NodePort>. For example, if the NodeIP is 172.19.0.2, try 172.19.0.2:30001 with your sub-URL.
Or, another way is port-forwarding. In a terminal, first try port-forwarding with kubectl port-forward svc/society-api-gateway-service 80:80, then use the URL you tried with localhost. (Note that forwarding to local port 80 usually needs elevated privileges, which would explain the permission error mentioned in the edit; forwarding to a high local port such as 8080:80 avoids that.)

nginx ingress rate limiting

I am not able to understand one point about rate limiting with the Nginx ingress.
I was referring to an article on rate limiting with the Nginx ingress: https://medium.com/titansoft-engineering/rate-limiting-for-your-kubernetes-applications-with-nginx-ingress-2e32721f7f57#:~:text=When%20we%20use%20NGINX%20ingress,configure%20rate%20limits%20with%20annotations.&text=As%20an%20example%20above%2C%20the,qps)%20on%20the%20Hello%20service.
In the limitation section at the end it says:
It applies to the whole ingress and is not able to configure
exceptions, eg. when you want to exclude a health check path /healthz
from your service.
If I create two ingresses with different names, one with path /hello1 and the other with /hello2, both pointing to the same backend service, and I then add rate limiting to only one ingress or path (/hello1), will it affect the other one if the same host or domain is used?
ingress 1: example.com/hello1 - rate limit set
ingress 2: example.com/hello2 - no rate limiting
Thanks in advance.
The rate limit will be applied only to the ingress where you specified it. What nginx-ingress basically does in the background is merge all the rules into one huge config; however, they still apply to different objects.
For example, two different ingresses for the same host with different paths:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test1
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/limit-rps: '5'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /path1
        backend:
          serviceName: service1
          servicePort: 8080
and
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test2
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/limit-rps: '10'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /path2
        backend:
          serviceName: service1
          servicePort: 8080
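Following the same pattern, one way to work around the limitation mentioned in the question (no per-path exceptions) might be to put the health check path into its own Ingress without the rate limit annotation, for example (the /healthz path is only illustrative):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test3
  annotations:
    kubernetes.io/ingress.class: 'nginx'
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /healthz
        backend:
          serviceName: service1
          servicePort: 8080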

How to set HTTPS as default on GKE Ingress-gce

I currently have working frontend and backend NodePort services with an Ingress set up using GKE's Google-managed certificates.
However, my issue is that when a user goes to samplesite.com, http is used by default. This means that the user needs to specifically type https://samplesite.com in the browser in order to get the https version of my website.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https? I understand that this can be forcefully done in my backend code as well but I want to separate concerns and handle this in my Kubernetes setup.
Here is my ingress.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: frontend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 5000
    targetPort: 80
    protocol: TCP
    name: http
---
kind: Service
apiVersion: v1
metadata:
  name: backend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: backend
  ports:
  - port: 8081
    targetPort: 9229
    protocol: TCP
    name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-frontend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-static-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-ssl
spec:
  backend:
    serviceName: frontend-node-service
    servicePort: 5000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-backend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-backend-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-api-ssl
spec:
  backend:
    serviceName: backend-node-service
    servicePort: 8081
Currently GKE Ingress does not support out of the box HTTP->HTTPS redirect.
There is an ongoing Feature Request for it here:
Issuetracker.google.com: Issues: Redirect all HTTP traffic to HTTPS when using the HTTP(S) Load Balancer
There are some workarounds for it:
Use different Ingress controller like nginx-ingress.
Create a HTTP->HTTPS redirection in GCP Cloud Console.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https?
To disable HTTP on GKE you can use following annotation:
kubernetes.io/ingress.allow-http: "false"
This annotation will:
Allow traffic only on port: 443 (HTTPS).
Deny traffic on port: 80 (HTTP) resulting in error code: 404.
Focusing on previously mentioned workarounds:
Use different Ingress controller like nginx-ingress
One of the ways to get HTTP->HTTPS redirection is to use nginx-ingress. You can deploy it by following the official documentation:
Kubernetes.github.io: Ingress-nginx: Deploy: GCE-GKE
This Ingress controller will create a Service of type LoadBalancer, which will be the entry point for your traffic. Ingress objects will respond on the LoadBalancer's IP. You can download the manifest from the installation guide and modify it to use the static IP you have reserved in GCP. More reference can be found here:
Stackoverflow.com: How to specify static IP address for Kubernetes load balancer?
You will need to provide your own certificates or use a tool like cert-manager to get HTTPS traffic, as the networking.gke.io/managed-certificates annotation will not work with nginx-ingress.
I used this YAML definition and, without any other annotations, I was always redirected to HTTPS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx" # IMPORTANT
spec:
  tls: # HTTPS PART
  - secretName: ssl-certificate # SELF PROVIDED CERT NAME
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
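The ssl-certificate secret referenced in the tls section above is a standard TLS secret that you create yourself; a sketch with placeholder data could look like:

apiVersion: v1
kind: Secret
metadata:
  name: ssl-certificate
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>   # placeholder
  tls.key: <base64-encoded private key>   # placeholder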
Create a HTTP->HTTPS redirection in GCP Cloud Console.
There is also an option to manually create a redirection rule for your Ingress resource. You will need to follow official documentation:
Cloud.google.com: Load Balancing: Docs: HTTPS: Setting up HTTP -> HTTPS Redirect
Following the above documentation, you will need to create an HTTP load balancer responding on the same IP as your Ingress resource (the reserved static IP) that redirects traffic to HTTPS.
Disclaimer!
Your Ingress resource will need to have the following annotation:
kubernetes.io/ingress.allow-http: "false"
Without it, you will not be able to create the redirection mentioned above.

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP
  - port: 443
    targetPort: 443
    name: https
    protocol: TCP
  selector:
    app: httpd
  externalIPs:
  - 10.128.0.2 # VM's internal IP
I can receive traffic fine on the external IP bound to the VM, but all of the requests are received by httpd with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve -- because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that's backing your Service.
There are some beta annotations that you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal.
More info in the docs, here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
But this does not meet your additional requirement of not requiring a LoadBalancer. Can you expand upon why you don't want to involve a LoadBalancer?
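For reference, a sketch of how that annotation sits on a Service (note it only takes effect on NodePort or LoadBalancer Services, which is why it doesn't satisfy the no-load-balancer requirement; on newer clusters the equivalent is the spec.externalTrafficPolicy: Local field):

apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  type: LoadBalancer
  selector:
    app: httpd
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP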
If you only have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
      - name: caddy
        image: your_image
        env:
        - name: STATIC_BACKEND # example env in my custom image
          value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name. For example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IP, which is injected via environment variables.
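If cluster DNS is still needed alongside hostNetwork, setting dnsPolicy: ClusterFirstWithHostNet on the pod spec restores it; a minimal sketch of the relevant fields:

spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet  # keep resolving cluster services by name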
