How to set HTTPS as default on GKE Ingress-gce - docker

I currently have working frontend and backend NodePort services with an Ingress set up using GKE's Google-managed certificates.
However, my issue is that by default, when a user goes to samplesite.com, the browser uses HTTP. This means the user needs to explicitly type https://samplesite.com into the browser to get the HTTPS version of my website.
How do I properly disable HTTP on GKE Ingress, or how do I redirect all my traffic to HTTPS? I understand this could be forced in my backend code as well, but I want to separate concerns and handle it in my Kubernetes setup.
Here is my ingress.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: frontend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 5000
      targetPort: 80
      protocol: TCP
      name: http
---
kind: Service
apiVersion: v1
metadata:
  name: backend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: backend
  ports:
    - port: 8081
      targetPort: 9229
      protocol: TCP
      name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-frontend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-static-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-ssl
spec:
  backend:
    serviceName: frontend-node-service
    servicePort: 5000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-backend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-backend-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-api-ssl
spec:
  backend:
    serviceName: backend-node-service
    servicePort: 8081

Currently, GKE Ingress does not support HTTP->HTTPS redirects out of the box.
There is an ongoing feature request for it here:
Issuetracker.google.com: Issues: Redirect all HTTP traffic to HTTPS when using the HTTP(S) Load Balancer
There are some workarounds:
Use a different Ingress controller, like nginx-ingress.
Create an HTTP->HTTPS redirect in the GCP Cloud Console.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https?
To disable HTTP on GKE you can use the following annotation (shown in context below):
kubernetes.io/ingress.allow-http: "false"
This annotation will:
Allow traffic only on port 443 (HTTPS).
Deny traffic on port 80 (HTTP), resulting in error code 404.
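For reference, the annotation lives under the Ingress metadata, exactly as in the question's manifests:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-frontend
  annotations:
    # serve only HTTPS; HTTP requests on port 80 get a 404
    kubernetes.io/ingress.allow-http: "false"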
Focusing on the previously mentioned workarounds:
Use a different Ingress controller like nginx-ingress
One way to get the HTTP->HTTPS redirect is to use nginx-ingress. You can deploy it with the official documentation:
Kubernetes.github.io: Ingress-nginx: Deploy: GCE-GKE
This Ingress controller will create a Service of type LoadBalancer that will be the entry point for your traffic, and your Ingress objects will respond on the LoadBalancer's IP. You can download the manifest from the installation section and modify it to use the static IP you have reserved in GCP. More reference can be found here:
Stackoverflow.com: How to specify static IP address for Kubernetes load balancer?
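As a sketch (the Service name and address here are illustrative), the change usually amounts to pinning the controller's Service to the reserved address, which must be a regional static IP in the cluster's region:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # assumes 203.0.113.10 was reserved beforehand with gcloud compute addresses create
  loadBalancerIP: 203.0.113.10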
You will need to provide your own certificates or use a tool like cert-manager to get HTTPS traffic, as the networking.gke.io/managed-certificates annotation will not work with nginx-ingress.
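For a self-provided certificate, the TLS secret referenced in the Ingress below can be created from existing certificate and key files (file names assumed):
kubectl create secret tls ssl-certificate --cert=tls.crt --key=tls.key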
I used this YAML definition and, without any other annotations, I was always redirected to HTTPS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx" # IMPORTANT
spec:
  tls: # HTTPS PART
    - secretName: ssl-certificate # SELF PROVIDED CERT NAME
  rules:
    - host:
      http:
        paths:
          - path: /
            backend:
              serviceName: hello-service
              servicePort: hello-port
Create an HTTP->HTTPS redirect in the GCP Cloud Console
There is also an option to manually create a redirect rule for your Ingress resource. You will need to follow the official documentation:
Cloud.google.com: Load Balancing: Docs: HTTPS: Setting up HTTP -> HTTPS Redirect
Following that documentation, you will need to create an HTTP load balancer responding on the same IP as your Ingress resource (the reserved static IP) that redirects traffic to HTTPS.
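A sketch of that setup, based on the documentation above (all resource names here are placeholders): import a URL map whose only job is to redirect, then attach it to an HTTP proxy and a port-80 forwarding rule on the same static IP:
# http-redirect.yaml: URL map that redirects everything to HTTPS
kind: compute#urlMap
name: http-redirect
defaultUrlRedirect:
  redirectResponseCode: MOVED_PERMANENTLY_DEFAULT
  httpsRedirect: true

# create the redirect map, the proxy, and a forwarding rule on the Ingress' static IP
gcloud compute url-maps import http-redirect --source=http-redirect.yaml --global
gcloud compute target-http-proxies create http-redirect-proxy --url-map=http-redirect --global
gcloud compute forwarding-rules create http-redirect-rule --address=samplesite-static-ip --target-http-proxy=http-redirect-proxy --ports=80 --global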
Disclaimer!
Your Ingress resource will need to have the following annotation:
kubernetes.io/ingress.allow-http: "false"
Without it, you will not be able to create the redirect described above.

Related

HTTPS is not working with TLS enabled in GKE Ingress

I have deployed Jenkins in GKE using Helm, and now I am trying to configure DNS for it. I am using Cloudflare for DNS and have also created a TLS secret from my Cloudflare certificates. The Ingress I created works fine for HTTP, but HTTPS is not working. The following is the Ingress I used:
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
    nginx.ingress.kubernetes.io/use-proxy-protocol: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - jenkins url
      secretName: secret-name
  rules:
    - host: jenkins url
      http:
        paths:
          - path: /jenkins/*
            backend:
              serviceName: jenkins
              servicePort: 80
The Ingress you have provided does not specify any service or service port for 443 to serve HTTPS requests; it only has port 80, which is for HTTP.
To enable HTTPS or gRPC over SSL when connecting to the endpoints of services, you need to add the nginx.org/ssl-services annotation to your Ingress resource definition. [1]
[1]https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/ssl-services
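For illustration, the annotation takes a comma-separated list of service names that the controller should reach over SSL (a hypothetical snippet, assuming the jenkins Service terminates TLS itself):
metadata:
  annotations:
    # hypothetical: tells the NGINX Inc. controller to use SSL for the "jenkins" backend
    nginx.org/ssl-services: "jenkins"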

Nginx Ingress Controller Returns 404 Kubernetes

I am trying to create an ingress controller that points to a service that I have exposed via NodePort.
Here is the yaml file for the ingress controller (taken from https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            backend:
              serviceName: appName
              servicePort: 80
I can connect directly to the NodePort and the frontend is displayed.
Please note that I am doing this because the frontend app is unable to connect to other deployments I have created, and I read that an ingress controller could solve the issue. Will I still have to add an Nginx reverse proxy? If so, how would I do that? I have tried adding this to the nginx config file, but with no success:
location /middleware/ {
    proxy_pass http://middleware/;
}
You must use a proper hostname to reach the route defined in the Ingress object. Either update your /etc/hosts file (see the example after the manifest below) or use a command like curl -H "Host: hello-world.info" http://localhost. Alternatively, you can delete the host mapping and redirect all traffic to one default service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: appName
              servicePort: 80
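If you keep the host rule instead, an /etc/hosts entry like this makes the hostname resolvable locally (the IP is assumed to be your ingress controller's address, e.g. the output of minikube ip):
# /etc/hosts
192.168.49.2  hello-world.info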

How to get oauth2_proxy running in kubernetes under one domain to redirect back to original domain that required authentication?

I've been setting up a Kubernetes cluster and want to protect the dashboard (running at kube.example.com) behind the bitly/oauth2_proxy (running at example.com/oauth2 on the image a5huynh/oauth2_proxy:latest), as I want to re-use the OAuth proxy for other services I will be running. Authentication is working perfectly, but after a user logs in (i.e. the callback returns), they are sent to example.com, when instead they should be sent to the original host, kube.example.com, that initiated the flow. How can I do this? (I am using the nginx-ingress-controller.)
Annotation on OAuth2 Proxy:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Annotation on Dashboard:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-signin: "https://example.com/oauth2/start"
nginx.ingress.kubernetes.io/auth-url: "https://example.com/oauth2/auth"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/secure-backends: "true"
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
I expect to be redirected to the original host kube.example.com after the OAuth flow completes, but I am being sent back to the OAuth2 host example.com.
After searching for a bit I came across a blog post about performing this in a super simple manner. Unfortunately, I found the provided YAML did not quite work correctly, as the oauth2_proxy was never being hit: nginx intercepted all requests. (I am not sure if mine failed because I wanted the oauth-proxy URL to be example.com/oauth2 rather than oauth2.example.com.) To fix this I added the oauth2-proxy path back to the Ingress for the proxy, i.e.:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: oauth2-proxy
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: oauth2-proxy
              servicePort: 80
            path: /
          - backend:
              serviceName: oauth2-proxy
              servicePort: 4180
            path: /oauth2
and made sure that the service was also still exposed, i.e.:
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: default
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
    - name: http-proxy
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
Then, to protect services behind the OAuth proxy, I just need to place the following in the Ingress annotations:
nginx.ingress.kubernetes.io/auth-url: "https://example.com/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://example.com/oauth2/start?rd=/redirect/$http_host$request_uri"

How to preserve source IP from traffic arriving on a ClusterIP service with an external IP?

I currently have a service that looks like this:
apiVersion: v1
kind: Service
metadata:
  name: httpd
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
      protocol: TCP
    - port: 443
      targetPort: 443
      name: https
      protocol: TCP
  selector:
    app: httpd
  externalIPs:
    - 10.128.0.2 # VM's internal IP
I can receive traffic fine from the external IP bound to the VM, but all requests reach httpd with the source IP 10.104.0.1, which is most definitely an internal IP, even when I connect to the VM's external IP from outside the cluster.
How can I get the real source IP for the request without having to set up a load balancer or ingress?
This is not simple to achieve: because of the way kube-proxy works, your traffic can get forwarded between nodes before it reaches the pod that backs your Service.
There are some beta annotations you can use to get around this, specifically service.beta.kubernetes.io/external-traffic: OnlyLocal (a sketch is shown below).
More info in the docs here: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
But this does not meet your additional requirement of avoiding a LoadBalancer. Can you expand on why you don't want to involve a LoadBalancer?
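A minimal sketch of that annotation applied to the Service from the question (on Kubernetes 1.7 and later the annotation graduated to the spec.externalTrafficPolicy: Local field):
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    # beta annotation: only route to endpoints on the node that received
    # the traffic, which preserves the client source IP
    service.beta.kubernetes.io/external-traffic: OnlyLocal
spec:
  # ... rest of the spec unchanged from the question ...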
If you have exactly one pod, you can use hostNetwork: true to achieve this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
        - name: caddy
          image: your_image
          env:
            - name: STATIC_BACKEND # example env in my custom image
              value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver and not Kubernetes'. That means you can no longer resolve cluster services by DNS name; for example, in the example above you cannot access the static service at http://static. You can still access services by their cluster IPs, which are injected via environment variables.
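If you need both hostNetwork and cluster DNS, a commonly used mitigation (assuming Kubernetes 1.6 or later) is to set the pod's DNS policy explicitly:
spec:
  hostNetwork: true
  # lets a hostNetwork pod keep resolving cluster services via the cluster DNS
  dnsPolicy: ClusterFirstWithHostNet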

kubernetes unhealthy ingress backend

I followed the load balancer tutorial: https://cloud.google.com/container-engine/docs/tutorials/http-balancer which works fine when I use the Nginx image; when I try to use my own application image, though, the backend switches to unhealthy.
My application redirects on / (returns a 302), but I added a livenessProbe to the pod definition:
livenessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
      - name: X-health-check
        value: kubernetes-healthcheck
      - name: X-Forwarded-Proto
        value: https
      - name: Host
        value: foo.bar.com
My ingress looks like:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
spec:
  backend:
    serviceName: foo
    servicePort: 80
  rules:
    - host: foo.bar.com
Service configuration is:
kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 80
      targetPort: 4001
The backends' health in kubectl describe ing output looks like:
backends: {"k8s-be-32180--5117658971cfc555":"UNHEALTHY"}
and the rules on the ingress look like:
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     foo:80 (10.0.0.7:4001,10.0.1.6:4001)
Any pointers gratefully received; I've been trying to work this out for hours with no luck.
Update
I have added the readinessProbe to my deployment but something still appears to hit / and the ingress is still unhealthy. My probe looks like:
readinessProbe:
  httpGet:
    path: /ping
    port: 4001
    httpHeaders:
      - name: X-health-check
        value: kubernetes-healthcheck
      - name: X-Forwarded-Proto
        value: https
      - name: Host
        value: foo.com
I changed my service to:
kind: Service
apiVersion: v1
metadata:
  name: foo
spec:
  type: NodePort
  selector:
    app: foo
  ports:
    - port: 4001
      targetPort: 4001
Update2
After I removed the custom headers from the readinessProbe it started working! Many thanks.
You need to add a readinessProbe (just copy your livenessProbe).
It's explained in the GCE L7 Ingress Docs.
Health checks
Currently, all service backends must satisfy either of the following requirements to pass the HTTP health checks sent from the GCE load balancer:
1. Respond with a 200 on '/'. The content does not matter.
2. Expose an arbitrary URL as a readiness probe on the pods backing the Service.
Also make sure that the readinessProbe points to the same port that you expose to the Ingress. In your case that's fine since you have only one port; if you add another one, you may run into trouble.
I thought it's worth noting that this is a quite important limitation in the documentation:
Changes to a Pod's readinessProbe do not affect the Ingress after it is created.
After adding my readinessProbe, I basically deleted my ingress (kubectl delete ingress <name>) and then applied my YAML file again to re-create it; shortly after, everything was working again.
I was having the same issue. I followed Tex's tip but continued to see that message. It turns out I had to wait a few minutes for the ingress to validate the service's health. If someone is going through the same thing and has done all the steps like readinessProbe and livenessProbe, just ensure your ingress is pointing to a Service of type NodePort, and wait a few minutes until the yellow warning icon turns into a green one. Also, check the logs on StackDriver to get a better idea of what's going on.
I was also having exactly the same issue after updating my ingress readinessProbe.
I could see the Ingress status labeled "Some backend services are in UNKNOWN state" in yellow.
I waited for more than 30 minutes, yet the changes were not reflected.
After more than 24 hours the changes were reflected and the status turned green.
I couldn't find any official documentation for this, but it seems like a bug in the GCP Ingress resource.
If you don't want to change your pod spec, or rely on the magic of GKE pulling out your readinessProbe, you can also configure a BackendConfig like this to explicitly configure the health check.
This is also helpful if you want to use a script for your readinessProbe, which isn't supported by GKE ingress health checks.
Note that the BackendConfig needs to be explicitly referenced in your Service definition.
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    # This points GKE Ingress to the BackendConfig below
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  type: ClusterIP
  ports:
    - name: health
      port: 1234
      protocol: TCP
      targetPort: 1234
    - name: http
      ...
  selector:
    ...
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
  namespace: my-namespace
spec:
  healthCheck:
    checkIntervalSec: 15
    port: 1234
    type: HTTP
    requestPath: /healthz
Every one of these answers helped me.
In addition, the HTTP probes need to return a 200 status. Stupidly, mine was returning a 301. So I just added a simple "ping" endpoint and all was well/healthy.
