Configuring end-to-end SSL using Nginx Ingress controller - docker

I am running a local deployment and trying to redirect HTTPS traffic to my backend pods.
I don't want SSL termination at the Ingress level, which is why I didn't use any TLS secrets.
I am creating a self-signed cert within the container, and Tomcat starts up, picks it up, and exposes it on 8443.
Here is my Ingress spec:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    #nginx.ingress.kubernetes.io/service-upstream: "false"
    kubernetes.io/ingress.class: {{ .Values.global.ingressClass }}
    nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
  rules:
  - http:
      paths:
      - path: /myserver
        backend:
          serviceName: myserver
          servicePort: 8443
I used the above annotations in different combinations, but I still can't reach my pod.
My service routes
# service information for myserver
service:
  type: ClusterIP
  port: 8443
  targetPort: 8443
  protocol: TCP
I did see a few answers regarding this that suggested annotations, but those didn't seem to work for me. Thanks in advance!
Edit: The only thing that remotely worked was when I overrode the ingress values as:
nginx-ingress:
  controller:
    publishService:
      enabled: true
    service:
      type: NodePort
      nodePorts:
        https: "40000"
This does enable HTTPS, but it picks up Kubernetes' fake certificate rather than my cert from the container.
Edit 2:
For some reason, ssl-passthrough is not working. I enforced it as:
nginx-ingress:
  controller:
    extraArgs:
      enable-ssl-passthrough: ""
When I describe the deployment I can see the flag in the args, but when I check with kubectl ingress-nginx backends as described in https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends, it says "sslPassThrough:false".

SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.
Since SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using it will invalidate all the other annotations that you set on the Ingress object.
So at the deployment level you have to specify this flag under args:
containers:
- name: controller
  image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
  imagePullPolicy: IfNotPresent
  lifecycle:
    preStop:
      exec:
        command:
        - /wait-shutdown
  args:
  - /nginx-ingress-controller
  - --enable-ssl-passthrough

There are several things you need to set up if you want to use ssl-passthrough.
The first is to set a proper host name:
spec:
  rules:
  - host: example.com # <- HERE
    http:
    ...
It's mentioned in the documentation:
SSL Passthrough leverages SNI [Server Name Indication] and reads the virtual domain from the
TLS negotiation, which requires compatible clients. After a connection
has been accepted by the TLS listener, it is handled by the controller
itself and piped back and forth between the backend and the client.
If there is no hostname matching the requested host name, the request
is handed over to NGINX on the configured passthrough proxy port
(default: 442), which proxies the request to the default backend.
The second thing is setting the --enable-ssl-passthrough flag, as already mentioned in a separate answer by @thomas.
Just edit the nginx ingress deployment and add this line to the args list:
- --enable-ssl-passthrough
The third thing that has to be done is to use the following annotations in your ingress object definition:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
IMPORTANT: In the docs you can read:
Attention
Because SSL Passthrough works on layer 4 of the OSI model (TCP) and
not on the layer 7 (HTTP), using SSL Passthrough invalidates all the
other annotations set on an Ingress object.
This means that all other annotations are useless from now on. This applies to annotations like force-ssl-redirect and affinity, and also to the paths you defined (e.g. path: /myserver). Since the traffic is end-to-end encrypted, all the ingress sees is an opaque TLS stream, and all it can do is pass this data to the application based on the DNS name (SNI).
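Putting the pieces together, a minimal passthrough Ingress could look like the sketch below (the host name is a placeholder for your setup; note there is no meaningful path, since routing happens purely on the SNI host name):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: myserver-passthrough
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
  - host: myserver.example.com   # matched against the client's SNI
    http:
      paths:
      - backend:
          serviceName: myserver   # the HTTPS service on 8443 from the question
          servicePort: 8443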

Looks like this is an open issue:
https://github.com/kubernetes/ingress-nginx/issues/5686
So I had to revert to using my certs as the default certificate and mounting them as TLS secrets.
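For reference, that fallback can be wired through the chart values roughly like this (the secret name and namespace are placeholders; a sketch rather than the exact values used):
nginx-ingress:
  controller:
    extraArgs:
      # serve this TLS secret instead of the controller's fake certificate
      default-ssl-certificate: "default/myserver-tls"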

Related

How to set HTTPS as default on GKE Ingress-gce

I currently have a working Frontend and Backend nodeports with an Ingress service setup with GKE's Google-managed certificates.
However, my issue is that when a user goes to samplesite.com, it defaults to http. This means the user needs to specifically type https://samplesite.com into the browser to get the https version of my website.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https? I understand that this can be forcefully done in my backend code as well but I want to separate concerns and handle this in my Kubernetes setup.
Here is my ingress.yaml file:
kind: Service
apiVersion: v1
metadata:
  name: frontend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - port: 5000
    targetPort: 80
    protocol: TCP
    name: http
---
kind: Service
apiVersion: v1
metadata:
  name: backend-node-service
  namespace: default
spec:
  type: NodePort
  selector:
    app: backend
  ports:
  - port: 8081
    targetPort: 9229
    protocol: TCP
    name: http
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-frontend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-static-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-ssl
spec:
  backend:
    serviceName: frontend-node-service
    servicePort: 5000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: samplesite-ingress-backend
  namespace: default
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "samplesite-backend-ip"
    kubernetes.io/ingress.allow-http: "false"
    networking.gke.io/managed-certificates: samplesite-api-ssl
spec:
  backend:
    serviceName: backend-node-service
    servicePort: 8081
Currently, GKE Ingress does not support HTTP->HTTPS redirects out of the box.
There is an ongoing feature request for it here:
Issuetracker.google.com: Issues: Redirect all HTTP traffic to HTTPS when using the HTTP(S) Load Balancer
There are some workarounds for it:
Use a different Ingress controller like nginx-ingress.
Create an HTTP->HTTPS redirect in the GCP Cloud Console.
How do I properly disable http on GKE ingress, or how do I redirect all my traffic to https?
To disable HTTP on GKE you can use the following annotation:
kubernetes.io/ingress.allow-http: "false"
This annotation will:
Allow traffic only on port: 443 (HTTPS).
Deny traffic on port: 80 (HTTP) resulting in error code: 404.
Focusing on the previously mentioned workarounds:
Use a different Ingress controller like nginx-ingress
One of the ways to get HTTP->HTTPS redirection is to use nginx-ingress. You can deploy it following the official documentation:
Kubernetes.github.io: Ingress-nginx: Deploy: GCE-GKE
This Ingress controller will create a Service of type LoadBalancer which will be the entry point for your traffic. Ingress objects will respond on the LoadBalancer IP. You can download the manifest from the installation part and modify it to support the static IP you have requested in GCP. More reference can be found here:
Stackoverflow.com: How to specify static IP address for Kubernetes load balancer?
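In practice that usually comes down to setting loadBalancerIP on the controller's Service to the reserved address; a minimal sketch, assuming a reserved GCP static IP and the standard controller labels:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder: your reserved static IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443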
You will need to provide your own certificates or use a tool like cert-manager to have HTTPS traffic, as the annotation networking.gke.io/managed-certificates will not work with nginx-ingress.
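The self-provided certificate typically lives in a TLS secret that the Ingress below references by secretName; a minimal sketch (the base64 payloads are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: ssl-certificate   # matches secretName in the Ingress below
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>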
I used this YAML definition, and without any other annotations I was always redirected to HTTPS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx" # IMPORTANT
spec:
  tls: # HTTPS PART
  - secretName: ssl-certificate # SELF PROVIDED CERT NAME
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-service
          servicePort: hello-port
Create an HTTP->HTTPS redirect in the GCP Cloud Console.
There is also an option to manually create a redirection rule for your Ingress resource. You will need to follow the official documentation:
Cloud.google.com: Load Balancing: Docs: HTTPS: Setting up HTTP -> HTTPS Redirect
Following the above documentation, you will need to create an HTTP load balancer responding on the same IP as your Ingress resource (the reserved static IP) that redirects traffic to HTTPS.
Disclaimer!
Your Ingress resource will need to have the following annotation:
kubernetes.io/ingress.allow-http: "false"
Without it, you will not be able to create the redirect mentioned above.

How to do Dynamic Proxying to Kubernetes Pods?

I want to create a service that can do dynamic proxying back to Kubernetes Pods. Basically I'll have hundreds of K8s Pods running the same application that map to a random port on the host (like 10456). However, each Pod is unique, and I want traffic directed at a specific pod based on hostname. So when a request comes in for abc123.app.com, I'll have a proxy layer that does a lookup in a database to find which host and port that domain is running on (like 10.0.0.5:10456), then forwards the request there. Is there a service that supports this? I've worked with Nginx a lot before, but I'm not clear on whether it could support this lookup functionality.
Has anyone built something like this before? What's the best way to build a proxy layer that can do lookups like that? How would I update the database when a pod moves from one host to another?
Thanks in advance!
EDIT:
I should have put this in there the first time, but the traffic going to these pods is RPC traffic and peer-to-peer traffic.
You're describing something very similar to what kubernetes ingress definitions do for http traffic.
An ingress definition configures an ingress controller to point requests for a hostname at a service. The service selects endpoints (pods) via label selectors. When pods move, kubernetes updates the service automatically.
The work on your end just becomes pushing config changes from your database to Kubernetes via one of the API clients, rather than directing a proxy. If your environment is extremely dynamic, requiring reconfiguration all the time, or you need to make dynamic decisions about where traffic should go, you might want to keep looking at a custom proxy, or at Istio or OpenResty.
It sounds like you have unique deployments going to kubernetes already, so in addition to that include a service and ingress definition.
A simple example: a label on the pod, a service that uses the label, and then an ingress definition using the service.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: host-abc123
spec:
  containers:
  - name: host-abc123
    image: me/my-app:1.2.1
    ports:
    - containerPort: 10456
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: host-abc123
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - backend:
          serviceName: host-abc123
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: host-abc123
spec:
  selector:
    app: host-abc123   # selects the pod above via its label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 10456
A single ingress definition could include all the hosts, but I'm not sure how well Kubernetes and the ingress controllers would cope with replacing it regularly.
There are nginx based ingress controllers too. You end up with a nginx server config per ingress/host definition.
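For illustration, a multi-host variant might look like the sketch below (hosts and service names are hypothetical); whether this scales to hundreds of frequently changing hosts is the open question above:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: all-hosts
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - backend:
          serviceName: host-abc123
          servicePort: 80
  - host: def456.bar.com
    http:
      paths:
      - backend:
          serviceName: host-def456
          servicePort: 80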

Expose a Kubernetes pod on a bare metal cluster

I'm trying to expose a Kubernetes pod on a single-node bare metal cluster without a domain.
In my understanding I have these options:
Expose using NodePort
Expose using an Ingress controller
Expose using ClusterIP and manually set an external IP
As I mentioned already, I only have a single-node cluster. This means that the master is master and node at the same time, running directly on a Fedora host system.
The simplest solution is to use a NodePort. But the limitation here (if I'm right) is that the service port will be automatically selected from a given port range.
The next best solution is to use an ingress controller. But for this I need a public domain, which I don't have. So the ingress controller doesn't fit me either.
What other options do I have? I just want to expose my service directly on port 9090.
Why not option 3? You can set externalIPs to your node IP:
apiVersion: v1
kind: Service
...
spec:
  externalIPs:
  - your node ip
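Filled in, that could look roughly like the following (name, label, port, and IP are placeholders for your setup):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app            # assumed pod label
  ports:
  - protocol: TCP
    port: 9090             # the port you want to expose
    targetPort: 9090
  externalIPs:
  - 192.168.1.10           # placeholder: your node's IP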
Also with NodePort, the service port can be specified.
You can set a custom port range for NodePort by adding this option to your apiserver settings (/etc/kubernetes/manifests/kube-apiserver.yaml):
--service-node-port-range portRange
Default: 30000-32767
A port range to reserve for services with NodePort visibility. Example: '30000-32767'. Inclusive at both ends of the range.
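In the static pod manifest this becomes one more entry in the command list, e.g. (the range value is just an example):
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=9000-32767   # example custom range
    # ...other flags unchanged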
This is the part of the Kubernetes documentation related to Services:
If you want a specific port number, you can specify a value in the
nodePort field, and the system will allocate you that port or else the
API transaction will fail (i.e. you need to take care about possible
port collisions yourself). The value you specify must be in the
configured range for node ports.
The example for this answer was taken from the article Hosting Your Own Kubernetes NodePort Load Balancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30080
    name: http
  - port: 443
    nodePort: 30443
    name: https
  selector:
    name: nginx

Nginx Ingress Looking for Service in all Custom Namespaces

I have three namespaces: dev, test and staging. test and staging have no pods in them. In dev I have nginx, an ingress and a frontend service. All requests to nginx are forwarded to the frontend service.
But the issue is that the nginx in dev is also trying to find the frontend service in the test and staging namespaces. It does round robin between the 3 namespaces, so sometimes the page loads and sometimes it's a 503 error.
Here is the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
And here is the log of nginx:
I0327 07:54:50.867120 1 command.go:76] change in configuration detected. Reloading...
W0327 07:54:50.867339 1 controller.go:841] service test/frontend does not have any active endpoints
W0327 07:54:50.867370 1 controller.go:841] service staging/frontend does not have any active endpoints
W0327 07:54:50.868198 1 controller.go:777] upstream test-frontend-80 does not have any active endpoints. Using default backend
W0327 07:54:50.868219 1 controller.go:777] upstream staging-frontend-80 does not have any active endpoints. Using default backend
Specify the --force-namespace-isolation=true argument when deploying the nginx pod, and update the image to quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0.
I'd like to elaborate on @Narayan-Prusty's answer.
I had to add --force-namespace-isolation=true and set the image to quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0, but I also had to add --watch-namespace=$(POD_NAMESPACE).
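In the controller Deployment this amounts to something like the excerpt below; $(POD_NAMESPACE) is resolved from an environment variable, so it has to be defined via the Downward API as well (a sketch of the relevant part only):
containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
  args:
  - /nginx-ingress-controller
  - --force-namespace-isolation=true
  - --watch-namespace=$(POD_NAMESPACE)   # only watch the controller's own namespace
  env:
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace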

Make services accessible only from Private network Kubernetes

I have received a public IP address for my Kubernetes service, which I can configure as a loadBalancerIP in my NGINX ingress. This public IP address can be accessed from the public internet.
Is there a way or some configuration through which I can make these services accessible only from my client network in Kubernetes?
With the Kubernetes nginx ingress it is as simple as setting an annotation on your ingress object, like:
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/whitelist-source-range: '8.8.8.8/32'
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/annotations.md#whitelist-source-range
You can, as suggested, make use of a VPN and create an internal LoadBalancer, or you can check out Network Policies, which I consider the standard Kubernetes way to implement your solution.
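Before getting to Network Policies, here is a sketch of the internal LoadBalancer option (the annotation shown is the GKE one, assuming you run on GCP; other clouds use their own annotation):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-internal
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # GKE-specific assumption
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress   # assumed controller pod label
  ports:
  - name: https
    port: 443
    targetPort: 443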
By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. The following examples let you change the default behavior in that namespace.
You will need to create a NetworkPolicy resource; in the spec you describe the desired behaviour using the available fields. I recommend checking the official documentation for more info about the structure.
PolicyTypes:
...
ingress: Each NetworkPolicy may include a list of whitelist ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
...
Keep in mind that in order to implement them you need to use a networking solution which supports NetworkPolicy; if you just create the resource without a controller to implement it, it will have no effect.
Example of policy:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
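Adapted to the question, a minimal policy could whitelist only the client network for every pod in the namespace (the CIDR is a placeholder for your client network):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-network-only
  namespace: default
spec:
  podSelector: {}            # empty selector: applies to all pods in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.20.0.0/16   # placeholder: your client network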
Using a Network Policy is nice. But a simpler approach would be to set the externalIPs of the nginx ingress controller to an IP address in the client network. This exposes the services only on the client network.
Below is the sample configuration for helm:
helm install --name my-ingress stable/nginx-ingress \
  --set controller.service.externalIPs={<IP address in client network>}
