I have three namespaces: dev, test and staging. test and staging have no pods in them. In dev I have nginx, an ingress and a frontend service. All requests to nginx are forwarded to the frontend service.
The issue is that the nginx ingress controller in dev is also trying to find the frontend service in the test and staging namespaces, and it round-robins between the three namespaces. So sometimes the page loads and sometimes I get a 503 error.
Here is the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: frontend
          servicePort: 80
And here is the nginx controller log:
I0327 07:54:50.867120 1 command.go:76] change in configuration detected. Reloading...
W0327 07:54:50.867339 1 controller.go:841] service test/frontend does not have any active endpoints
W0327 07:54:50.867370 1 controller.go:841] service staging/frontend does not have any active endpoints
W0327 07:54:50.868198 1 controller.go:777] upstream test-frontend-80 does not have any active endpoints. Using default backend
W0327 07:54:50.868219 1 controller.go:777] upstream staging-frontend-80 does not have any active endpoints. Using default backend
Specify the --force-namespace-isolation=true argument when deploying the nginx controller pod, and update the image to quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0.
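For reference, a minimal sketch of where that flag and image go, assuming the controller runs as a Deployment and the container is named nginx-ingress-controller (names may differ in your manifest):

containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
  args:
  - /nginx-ingress-controller
  # keep your existing args and append the isolation flag
  - --force-namespace-isolation=true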
I'd like to elaborate on #Narayan-Prusty's answer.
I had to add --force-namespace-isolation=true and set the image to quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0, but I also had to add --watch-namespace=$(POD_NAMESPACE).
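For context, a rough sketch of how $(POD_NAMESPACE) gets resolved: the controller container exposes its own namespace via the Downward API, and the args reference that env var (container name is an assumption):

containers:
- name: nginx-ingress-controller
  image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
  env:
  # Downward API: make the pod's own namespace available as POD_NAMESPACE
  - name: POD_NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  args:
  - /nginx-ingress-controller
  - --force-namespace-isolation=true
  - --watch-namespace=$(POD_NAMESPACE)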
I have the following Ingress configuration that uses ingress-nginx, in the infra\k8s-dev\ingress-srv.yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: mysite.local
    http:
      paths:
      - path: /api/users/?(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-srv
            port:
              number: 3000
      - path: /?(.*)
        pathType: Prefix
        backend:
          service:
            name: client-srv
            port:
              number: 3000
I searched through the internet and found the following command to install it:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
But I am not sure where I should run this command. Is there an ingress-nginx package that I also need to install from NPM?
I am using Docker Desktop on a Windows 10 machine.
The command you have mentioned is right:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
It downloads the YAML config from the URL and applies it to the cluster, which deploys the Nginx ingress controller. The Nginx ingress controller will then manage the Ingress you are creating above.
An Ingress is a rule to divert traffic, and the controller manages those rules (ingresses).
So simply running kubectl apply against your K8s cluster context will download the manifest and start the pods of the Nginx controller.
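As a quick sanity check after applying it, you can verify that the controller pods and service came up (assuming the default ingress-nginx namespace and resource names created by that manifest):

kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx ingress-nginx-controller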
Since you have this annotation in your ingress:
kubernetes.io/ingress.class: nginx
that specific ingress rule will be picked up and managed by the nginx controller.
Inside the ingress you have mentioned mysite.local, so in your local setup make sure your hosts file maps that domain to an IP.
Once the controller is up and running, opening the URL (mysite.local) in the browser will show the site.
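Since you are on Docker Desktop for Windows, that mapping is typically a line like the following in C:\Windows\System32\drivers\etc\hosts (assuming the ingress controller is reachable on localhost):

127.0.0.1 mysite.local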
We have a working Azure Kubernetes Service cluster with a dotnet 6.0 web app. The pods are running on port 80, but the public URL is served over HTTPS with a cert, which is handled by an nginx ingress controller with a cert secret. All of this is working well.
We are adding some new functionality (an integration with an external service). When signing into our app with this new functionality, there's a brief redirect to the external service's page. Once the user's requests have completed, the external service redirects back to our site using a preconfigured redirect URL, to which it posts some data (a custom header and query string). At this point, our site errors with 502 Bad Gateway.
When I review the logs on the nginx ingress controller pod, I can see some additional errors:
[error] 664#664: *17279861 upstream prematurely closed connection while reading response header from upstream, client: 10.240.0.5, server: www-dev.application.com, request: "GET /details/c2beac1c-b220-45fa-8fd5-08da12dced76/Success?id=ID-MJCX43A4FJ032551T752200W&token=EC-0G826951TM357702S&SenderID=4FHGRLJDXPXUU HTTP/2.0", upstream: "http://10.244.1.66:80/details/c2beac1c-b220-45fa-8fd5-08da12dced76/Success?id=ID-MJCX43A4FJ032551T752200W&token=EC-0G826951TM357702S&SenderID=4FHGRLJDXPXUU", host: "www-dev.application.com", referrer: "https://www.external.service.com/"
10.244.1.66 is the internal IP of one of the application pods.
At first I thought this was an error related to the annotation:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
because the referrer is an https:// site making the request. However, adding that annotation makes the site unusable (probably because the dotnet app pods are listening on port 80).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-web
  namespace: application
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  tls:
  - hosts:
    - www-dev.application.com
    secretName: application-ingress-tls
  rules:
  - host: www-dev.application.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: applicationwebsvc
            port:
              number: 80
Above is the application ingress YAML.
Anyway, does anyone have any idea what the problem could be here? Thanks.
The ingress class has moved from the annotation to the ingressClassName field, and you do not need the HTTPS backend annotation you tried before. Can you please try this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-web
  namespace: application
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - www-dev.application.com
    secretName: application-ingress-tls
  rules:
  - host: www-dev.application.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: applicationwebsvc
            port:
              number: 80
Please also check the Ingress documentation.
This ended up being a resource limits issue. There was one particular request that was causing memory usage to spike, which caused the container to be OOMKilled. That is what was leading to the 502 Bad Gateway error message (because once it was killed, the container was no longer there to service the request).
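For anyone hitting the same symptom, this is roughly the kind of change that resolved it; the values below are illustrative placeholders, not a recommendation:

# In the web app Deployment's container spec
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "1Gi"   # raise this if the container is being OOMKilled
    cpu: "500m"

You can confirm an OOMKill with kubectl describe pod <pod-name>, which shows the last state as Terminated with reason OOMKilled.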
I am new to microservices. I have a few apps to deploy as microservices.
I need an API gateway and a load balancer. For the API gateway I came to know about ingress-nginx, but I am not sure how to set up load balancing. However, I could configure it as an API gateway as below.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: example.dev
    http:
      paths:
      - path: /api/users/?(.*)
        backend:
          serviceName: auth-srv
          servicePort: 3000
      - path: /api/orders/?(.*)
        backend:
          serviceName: order-srv
          servicePort: 3000
      - path: /?(.*)
        backend:
          serviceName: client-srv
          servicePort: 3000
I also have one point of confusion:
The load balancer sits before the API gateway, i.e. the nginx controller.
How would I configure ingress-nginx for load balancing?
Because the load balancer will redirect the request to the nginx controller.
So: load balancer -> API gateway -> /api/orders
Then: /api/orders -> order-srv -> pods
Surely the load balancer should decide which pod the request is routed to?
How can I achieve that?
Well, this works something like this:
A request arrives from the external world and is first intercepted at the load balancer layer. The API gateway should be one of your microservices, the one that keeps all of the URL mappings; say /api/orders/{orderId} brings the request to your API gateway, and in that API gateway you have logic to redirect it to the order service behind the scenes via the order service's fully qualified domain name (FQDN), :portNumber/{uri}.
So it's a good idea to simply route traffic to the frontend and the API gateway via your ingress rules, i.e. (see the sketch after this list):
- path: /?(.*) takes it to the client service or frontend
- path: /api/?(.*) takes it to the API gateway service, which has the routing map for all of the services behind the scenes
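A rough sketch of what those two rules could look like in your Ingress; api-gateway-srv is a placeholder name for your own gateway service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
  - host: example.dev
    http:
      paths:
      - path: /api/?(.*)    # everything under /api goes to the API gateway
        backend:
          serviceName: api-gateway-srv
          servicePort: 3000
      - path: /?(.*)        # everything else goes to the frontend
        backend:
          serviceName: client-srv
          servicePort: 3000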
I have deployed Jenkins in GKE using helm, and now I am trying to configure DNS for Jenkins. I am using Cloudflare for DNS and have also created a TLS secret using my Cloudflare certificates. The ingress that I have created works fine for HTTP, but HTTPS is not working. Following is the ingress that I used.
apiVersion: networking.k8s.io/v1beta1 # for versions before 1.14 use extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/use-forwarded-headers: "true"
    nginx.ingress.kubernetes.io/use-proxy-protocol: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - jenkins url
    secretName: secret-name
  rules:
  - host: jenkins url
    http:
      paths:
      - path: /jenkins/*
        backend:
          serviceName: jenkins
          servicePort: 80
The ingress that you have provided does not specify any service or service port for 443 to serve HTTPS requests; it only has port 80, which is for HTTP.
To enable HTTPS or gRPC over SSL when connecting to the endpoints of services, you need to add the nginx.org/ssl-services annotation to your Ingress resource definition. [1]
[1]https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/ssl-services
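Per the linked example, a sketch of how that annotation could be added to this Ingress, assuming the NGINX Inc. controller (nginxinc/kubernetes-ingress) is the one in use and the jenkins service itself serves TLS on the referenced port:

metadata:
  name: jenkins-ingress
  annotations:
    # NGINX Inc. controller: connect to the listed backend services over SSL
    nginx.org/ssl-services: "jenkins"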
I want to create a service that can do some kind of dynamic proxying back to Kubernetes pods. Basically I'll have hundreds of K8s pods running the same application, each mapped to a random port on the host (like 10456). However, each pod is unique, and I want traffic directed at a specific pod based on hostname. So when a request comes in for abc123.app.com, I'll have a proxy layer that does a lookup in a database to find which host and port that domain is running on (like 10.0.0.5:10456), then forwards the request there. Is there a service that supports this? I've worked with Nginx a lot before, but I'm not clear whether it could support this lookup functionality.
Has anyone built something like this before? What's the best way to build a proxy layer that can do lookups like that? How would I update the database when a pod moves from one host to another?
Thanks in advance!
EDIT:
I should have put this in there the first time, but the types of traffic going to these pods are RPC traffic and peer-to-peer traffic.
You're describing something very similar to what kubernetes ingress definitions do for http traffic.
An ingress definition configures an ingress controller to point requests for a hostname at a service. The service selects endpoints (pods) via label selectors. When pods move, kubernetes updates the service automatically.
The work on your end just becomes pushing out config changes from your database to Kubernetes via one of the API clients, rather than reconfiguring a proxy directly. If your environment is extremely dynamic, requiring reconfiguration all the time, or you need to make dynamic decisions about where traffic should go, you might want to keep looking at a custom proxy, Istio, or OpenResty.
It sounds like you already have unique deployments going to kubernetes, so in addition to each of those, include a service and ingress definition.
A simple example includes a label on the pod, a service that uses the label, and then an ingress definition using the service.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app: host-abc123
spec:
  containers:
  - name: host-abc123
    image: me/my-app:1.2.1
    ports:
    - containerPort: 10456
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: host-abc123
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - backend:
          serviceName: host-abc123
          servicePort: 80
apiVersion: v1
kind: Service
metadata:
  name: host-abc123
spec:
  selector:
    app: host-abc123   # matches the label on the pod above
  ports:
  - protocol: TCP
    port: 80
    targetPort: 10456
A single ingress definition could include all of the hosts, but I'm not sure how well Kubernetes and the ingress controllers would handle replacing that whole definition regularly (a sketch follows below).
There are nginx-based ingress controllers too; you end up with an nginx server config per ingress/host definition.
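For completeness, a single Ingress covering several hosts would simply repeat the rule per host, roughly like this (host names and service names are placeholders):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: all-hosts
spec:
  rules:
  - host: abc123.bar.com
    http:
      paths:
      - backend:
          serviceName: host-abc123
          servicePort: 80
  - host: def456.bar.com
    http:
      paths:
      - backend:
          serviceName: host-def456
          servicePort: 80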