So here is the deal. I am using Kubernetes and I want to protect the applications inside the cluster. To do that I added an oauth2-proxy and, if the user is not logged in, they are redirected to GitHub. After the login is done, the user is redirected to the app (Login Diagram). For now I have two dummy deployments of an echo-http server (echo1 and echo2) and Jenkins. I am doing everything locally with minikube, so please don't mind the domain names.
In Jenkins, I installed the GitHub OAuth plugin and configured it as described in the multiple posts I found (e.g., Jenkins GitHub OAuth). I also created the GitHub OAuth application and set the callback. Since I want SSO for multiple applications besides Jenkins, I set the callback to https://auth.int.example.com/oauth2/callback instead of https://jenkins.int.example.com/securityRealm/finishLogin. As a result, after logging in on GitHub, I get redirected to the Jenkins webpage, but as a guest. If I try to log in, I end up with an error.
I used Helm to set up the oauth2-proxy (k8s-at-home/oauth2-proxy).
Am I missing something?
These are the Ingress configurations of the oauth2-proxy and the ingress controller that I am using.
Nginx Ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/auth-url: "https://auth.int.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.int.example.com/oauth2/start?rd=https%3A%2F%2F$host$request_uri"
spec:
  tls:
    - hosts:
        - echo1.int.example.com
        - echo2.int.example.com
        - jenkins.int.example.com
      secretName: letsencrypt-prod
  rules:
    - host: echo1.int.example.com
      http:
        paths:
          - backend:
              serviceName: echo1
              servicePort: 80
    - host: echo2.int.example.com
      http:
        paths:
          - backend:
              serviceName: echo2
              servicePort: 80
    - host: jenkins.int.example.com
      http:
        paths:
          - path:
            backend:
              serviceName: jenkins-service
              servicePort: 8080
          - path: /securityRealm/finishLogin
            backend:
              serviceName: jenkins-service
              servicePort: 8080
OAuth2-proxy Configuration
config:
  existingSecret: oauth2-proxy-creds
extraArgs:
  whitelist-domain: .int.example.com
  cookie-domain: .int.example.com
  provider: github
authenticatedEmailsFile:
  enabled: true
  restricted_access: |-
    my_email@my_email.com
ingress:
  enabled: true
  path: /
  hosts:
    - auth.int.example.com
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
  tls:
    - secretName: oauth2-proxy-https-cert
      hosts:
        - auth.int.example.com
Nice auth architecture you are building there!
I would say that you may have overlooked the fact that Jenkins has its own authentication. You also need to configure Jenkins itself to allow OAuth2 access via GitHub.
So what is really going on? Your OAuth proxy solution is great. You can build apps in your k8s cluster without having to worry about user management or authentication directly in your app.
However, this is useful only for apps that don't have their own authentication mechanisms.
The OAuth proxy is simply protecting the access to the backend webserver. Once you are allowed through by the proxy, you interact directly with the app, so if the app requires authentication, you as the end user will still have to log in to it.
My advice would be to use the OAuth proxy for apps that don't have user management mechanisms, and to leave open access to apps that have their own authentication mechanisms, like Jenkins. Otherwise you end up with double authentication (proxy and Jenkins in this case), which is not so great.
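As an illustration of that, you could split the Jenkins host out of the shared echo-ingress above into its own Ingress without the auth-url/auth-signin annotations, so the proxy never intercepts it. A minimal sketch reusing the names from the question (the Ingress name jenkins-ingress is made up):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress   # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
    # intentionally no auth-url / auth-signin annotations, so oauth2-proxy is bypassed
spec:
  tls:
    - hosts:
        - jenkins.int.example.com
      secretName: letsencrypt-prod
  rules:
    - host: jenkins.int.example.com
      http:
        paths:
          - backend:
              serviceName: jenkins-service
              servicePort: 8080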
Then, to keep the high-level concept of accessing your cluster with GitHub accounts, you need to configure those user-based apps to also make use of GitHub OAuth2. This way the access to the cluster is homogeneous (you just need your GitHub account), but the actual integration has two different types: apps that don't require user management (they are protected by the OAuth proxy), and apps with their own authentication, which are then configured with GitHub's OAuth2 independently.
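For Jenkins specifically, if you manage it with the Configuration as Code plugin, the GitHub OAuth plugin's security realm can be declared roughly as in the sketch below. The client ID/secret values are placeholders, and the assumption here is that they belong to a GitHub OAuth app of its own whose callback is https://jenkins.int.example.com/securityRealm/finishLogin rather than the oauth2-proxy callback:
# JCasC sketch, assuming the GitHub OAuth and Configuration as Code plugins are installed
jenkins:
  securityRealm:
    github:
      githubWebUri: "https://github.com"
      githubApiUri: "https://api.github.com"
      clientID: "<jenkins-oauth-app-client-id>"          # placeholder
      clientSecret: "<jenkins-oauth-app-client-secret>"  # placeholder
      oauthScopes: "read:org,user:email"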
Related
I am new to microservices. I have a few apps to deploy as microservices.
I need an API gateway and a load balancer. For the API gateway I came across Ingress Nginx, but I am not sure how to set up load balancing. However, I could configure it as an API gateway as below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: example.dev
      http:
        paths:
          - path: /api/users/?(.*)
            backend:
              serviceName: auth-srv
              servicePort: 3000
          - path: /api/orders/?(.*)
            backend:
              serviceName: order-srv
              servicePort: 3000
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000
I also have one point of confusion:
The load balancer sits before the API gateway, i.e. the nginx controller. How would I configure Ingress-Nginx for load balancing? The load balancer will forward the request to the nginx controller.
So: load balancer -> API gateway -> /api/orders
Then: /api/orders -> order-srv -> pods
Shouldn't the load balancer decide which pod the request should be routed to?
How can I achieve that?
Well, it works something like this:
A request from the outside world is first intercepted at the load balancer layer. The API gateway should be one of your microservices, and it is where you keep all of the URL mappings. Say /api/orders/{orderId} brings the request to your API gateway; in that API gateway you then have logic to redirect it to the order service behind the scenes via the order service's fully qualified domain name (FQDN), i.e. FQDN:portNumber/{uri}.
So it's a good idea to simply route traffic to the frontend and the API gateway via your ingress rules, i.e.:
- path: /?(.*) takes it to the client service, or frontend
- path: /api/?(.*) takes it to the API gateway service, which has a routing map to all of the services sitting behind the scenes
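A sketch of what that could look like, reusing the host and client-srv from the question; api-gateway-srv is a made-up name for the gateway service:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: example.dev
      http:
        paths:
          # everything under /api goes to the API gateway, which fans out to the services
          - path: /api/?(.*)
            backend:
              serviceName: api-gateway-srv   # hypothetical gateway service
              servicePort: 3000
          # everything else goes to the frontend
          - path: /?(.*)
            backend:
              serviceName: client-srv
              servicePort: 3000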
I am running a local deployment and trying to redirect HTTPS traffic to my backend pods.
I don't want SSL termination at the Ingress level, which is why I didn't use any tls secrets.
I am creating a self-signed cert within the container, and Tomcat starts up, picks it up, and exposes it on 8443.
Here is my Ingress Spec
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    #nginx.ingress.kubernetes.io/service-upstream: "false"
    kubernetes.io/ingress.class: {{ .Values.global.ingressClass }}
    nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
  rules:
    - http:
        paths:
          - path: /myserver
            backend:
              serviceName: myserver
              servicePort: 8443
I used the above annotation in different combinations but I still can't reach my pod.
My service routes
# service information for myserver
service:
  type: ClusterIP
  port: 8443
  targetPort: 8443
  protocol: TCP
I did see a few answers regarding this suggesting annotations, but that didn't seem to work for me. Thanks in advance!
Edit: The only thing that remotely worked was when I overrode the ingress values as
nginx-ingress:
  controller:
    publishService:
      enabled: true
    service:
      type: NodePort
      nodePorts:
        https: "40000"
This does enable HTTPS, but it picks up Kubernetes' fake certificate rather than my cert from the container.
Edit 2:
For some reason, the ssl-passthrough is not working. I enforced it as
nginx-ingress:
  controller:
    extraArgs:
      enable-ssl-passthrough: ""
When I describe the deployment, I can see it in the args, but when I check with kubectl ingress-nginx backends as described in https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends, it says "sslPassThrough: false".
SSL Passthrough requires a specific flag to be passed to the nginx controller while starting since it is disabled by default.
SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.
Since ssl-passthrough works on layer 4 of the OSI model and not on the layer 7 (HTTP) using it will invalidate all the other annotations that you set on the ingress object.
So at your deployment level you have to specify this flag under args:
containers:
  - name: controller
    image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
            - /wait-shutdown
    args:
      - /nginx-ingress-controller
      - --enable-ssl-passthrough
There are several things you need to set up if you want to use ssl-passthrough.
The first is to set a proper host name:
spec:
  rules:
    - host: example.com   <- HERE
      http:
      ...
It's mentioned in the documentation:
SSL Passthrough leverages SNI [Server Name Indication] and reads the virtual domain from the
TLS negotiation, which requires compatible clients. After a connection
has been accepted by the TLS listener, it is handled by the controller
itself and piped back and forth between the backend and the client.
If there is no hostname matching the requested host name, the request
is handed over to NGINX on the configured passthrough proxy port
(default: 442), which proxies the request to the default backend.
The second thing is setting the --enable-ssl-passthrough flag, as already mentioned in the separate answer by @thomas.
Just edit the nginx ingress deployment and add this line to args list:
- --enable-ssl-passthrough
The third thing that has to be done is to use the following annotations in your ingress object definition:
nginx.ingress.kubernetes.io/ssl-passthrough: "true"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
IMPORTANT: In the docs you can read:
Attention
Because SSL Passthrough works on layer 4 of the OSI model (TCP) and
not on the layer 7 (HTTP), using SSL Passthrough invalidates all the
other annotations set on an Ingress object.
This means that all other annotations are useless from now on. This applies to annotations like force-ssl-redirect and affinity, and also to the paths you defined (e.g. path: /myserver). Since the traffic is end-to-end encrypted, all the ingress sees is some gibberish, and all it can do is pass this data to the application based on the DNS name (SNI).
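Putting the three points together, a minimal sketch of the asker's ingress with ssl-passthrough could look like this (the hostname myserver.example.com is an assumption):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # only these two annotations matter once passthrough is on; the rest are ignored at layer 4
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  rules:
    - host: myserver.example.com   # assumed hostname; passthrough matches on SNI
      http:
        paths:
          - path: /
            backend:
              serviceName: myserver
              servicePort: 8443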
Looks like this is an open issue
https://github.com/kubernetes/ingress-nginx/issues/5686
So, I had to revert back to using my certs as default certificates and mounting them as TLS secrets.
I want to create a service that can do some kind of dynamic proxying back to Kubernetes pods. Basically I'll have hundreds of K8s pods running the same application that map to a random port on the host (like 10456). However, each pod is unique, and I want traffic directed at a specific pod based on hostname. So when a request comes in for abc123.app.com, I'll have a proxy layer that does a lookup in a database to find which host and port that domain is running on (like 10.0.0.5:10456), then forwards the request there. Is there a service that supports this? I've worked with Nginx a lot before, but I'm not clear on whether it could support this lookup functionality.
Has anyone built something like this before? What's the best way to build a proxy layer that can do lookups like that? How would I update the database when a pod moves from one host to another?
Thanks in advance!
EDIT:
I should have put this in there the first time, but the types of traffic going to these pods are RPC traffic and peer-to-peer traffic.
You're describing something very similar to what kubernetes ingress definitions do for http traffic.
An ingress definition configures an ingress controller to point requests for a hostname at a service. The service selects endpoints (pods) via label selectors. When pods move, kubernetes updates the service automatically.
The work on your end just becomes pushing config changes from your database out to kubernetes via one of the API clients, rather than directing a proxy. If your environment were extremely dynamic, requiring reconfiguration all the time, or you needed to make dynamic decisions about where traffic should go, you might want to keep looking at a custom proxy, Istio, or OpenResty.
It sounds like you have unique deployments going to kubernetes already, so in addition to that include a service and ingress definition.
A simple example: a label on the pod, a service that uses the label, and then an ingress definition using the service.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: host-abc123
spec:
  containers:
    - name: host-abc123
      image: me/my-app:1.2.1
      ports:
        - containerPort: 10456
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: host-abc123
spec:
  rules:
    - host: abc123.bar.com
      http:
        paths:
          - backend:
              serviceName: host-abc123
              servicePort: 80
apiVersion: v1
kind: Service
metadata:
  name: host-abc123
spec:
  selector:
    app: host-abc123   # selects the pod via its label, as described above
  ports:
    - protocol: TCP
      port: 80
      targetPort: 10456
A single ingress definition could include all hosts, but I'm not sure how well kubernetes and the ingress controllers would handle replacing it regularly.
There are nginx-based ingress controllers too. You end up with an nginx server config per ingress/host definition.
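For completeness, a single ingress covering several of those per-host services could look roughly like the sketch below; def456.bar.com and host-def456 are made up to show a second entry:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: all-hosts   # hypothetical name
spec:
  rules:
    - host: abc123.bar.com
      http:
        paths:
          - backend:
              serviceName: host-abc123
              servicePort: 80
    - host: def456.bar.com               # made-up second host
      http:
        paths:
          - backend:
              serviceName: host-def456   # made-up second service
              servicePort: 80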
I have a small java webapp comprising three microservices - api-service, book-service and db-service - all of which are deployed on a kubernetes cluster locally using minikube.
I am planning to keep separate UIs for api-service and book-service, with the common static files served from a separate pod, probably an nginx:alpine image.
I was able to create a front end that serves the static files from nginx:alpine referring to this tutorial.
I would like to use ingress-nginx controller for routing requests to the two services.
The below diagram crudely shows where I am now.
I am confused as to where I should place the pod that serves the static content, and how to connect it to the ingress resource. I guess that keeping a front end pod before the ingress defeats the purpose of the ingress-nginx controller. What is the best practice to serve static files? Appreciate any help. Thanks.
Looks like you are overlooking the fact that users, browsing online, will trigger standard requests both to "download" your static content and to use your 2 APIs (book and api). It's not the NGINX service serving the static content that is accessing your APIs, but the users' browsers/applications, and they do that in exactly the same way for both static content and APIs (the former has more/specific headers and data, like auth...).
On your diagram you'll want to put your new static-service at the exact same level as your book-service and api-service, i.e. behind the ingress. But your static-service won't have a link with the db-service like the other 2. Then just complete your ingress rules, with the static-service at the end, as in this example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: your-global-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /book-service
            backend:
              serviceName: book-service
              servicePort: 80
          - path: /api-service
            backend:
              serviceName: api-service
              servicePort: 80
          - path: /
            backend:
              serviceName: static-service
              servicePort: 80
You'll have to adjust your service names and ports, and pick the paths through which you want your users to access your APIs; in the example above you'd have:
foo.bar.com/book-service for your book-service
foo.bar.com/api-service for the api-service
foo.bar.com/ i.e. everything else, going to the static-service
You should have 3 distinct pods, I guess:
- static
- book-service
- api-service
The static pod will most likely not scale at the same speed as the other two.
Create a service for each of your deployments, then use the ingress to route the traffic to the proper endpoint.
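As a rough sketch (the service name and pod label are assumptions), the service in front of the static pod could be as simple as:
apiVersion: v1
kind: Service
metadata:
  name: static-service      # assumed name, matching the ingress example in the other answer
spec:
  selector:
    app: static              # assumed label on the nginx:alpine pod
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80         # assumed nginx container port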
Is it something like that you are trying to achieve?
I have three namespaces: dev, test, and staging. test and staging have no pods in them. In dev I have nginx, an ingress, and a frontend service. All requests to nginx are forwarded to the frontend service.
But the issue is that nginx in dev is also trying to find the frontend service in the test and staging namespaces. It's doing round robin between the 3 namespaces, so sometimes the page loads and sometimes it's a 503 error.
Here is the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: frontend
              servicePort: 80
And here is the log of nginx:
I0327 07:54:50.867120 1 command.go:76] change in configuration detected. Reloading...
W0327 07:54:50.867339 1 controller.go:841] service test/frontend does not have any active endpoints
W0327 07:54:50.867370 1 controller.go:841] service staging/frontend does not have any active endpoints
W0327 07:54:50.868198 1 controller.go:777] upstream test-frontend-80 does not have any active endpoints. Using default backend
W0327 07:54:50.868219 1 controller.go:777] upstream staging-frontend-80 does not have any active endpoints. Using default backend
Specify the --force-namespace-isolation=true argument when deploying the nginx pod, and update the image to quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0.
I'd like to elaborate on @Narayan-Prusty's answer.
I had to add --force-namespace-isolation=true and set the image to quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0, but I also had to add --watch-namespace=$(POD_NAMESPACE).
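Roughly, that means editing the controller deployment as in the sketch below; the container name and the downward-API env var wiring are the usual pattern and are assumptions here:
containers:
  - name: nginx-ingress-controller
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
    args:
      - /nginx-ingress-controller
      - --force-namespace-isolation=true
      - --watch-namespace=$(POD_NAMESPACE)   # restrict the controller to its own namespace
    env:
      - name: POD_NAMESPACE                  # expose the pod's namespace via the downward API
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace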