I have a case.
Case: I created a Kubernetes cluster on bare metal. I have 4 nodes (4 workers) and 2 load balancers in a master/slave setup.
Now I have a question: what is the best flow to expose a service?
1) Using Nginx Ingress
2) Using Load Balancer/Keepalived
I tried the second option (the load balancer) and it works fine: I deployed an npm project with a NodePort Service, and the load balancer listens on that NodePort across all 4 nodes and balances between them. Is this normal?
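For reference, the NodePort Service I used looks roughly like this (just a sketch; the names and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: npm-app              # placeholder name
    spec:
      type: NodePort
      selector:
        app: npm-app
      ports:
      - port: 80                 # Service port inside the cluster
        targetPort: 3000         # container port of the npm app
        nodePort: 30080          # opened on every node; the external LB targets node1..node4:30080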
I want to understand: is Nginx Ingress necessary or not, or can I use both together?
What is the best flow on bare metal? What are the advantages and disadvantages?
This is my flow: [image: flow of my bare-metal Kubernetes cluster]
I hope I explained everything well. Thanks!
Related
I have 5 microservices in 5 pods, and I have exposed each service on a specific port using a NodePort Service.
I have a UI app as one more service in another pod, also exposed with a NodePort Service.
Since pods live and die, I can't use pod IPs for the URLs in the UI app, which is why I went with NodePort Services. Can I access all 5 services from the UI app seamlessly using their respective NodePorts?
Please advise - is this approach going to be reliable?
Yes, you can connect to those NodePort services seamlessly.
But remember, you may need a higher-bandwidth network card and connection (to the master nodes) if these services get too much traffic.
Also, if you have a few master nodes, you can try dedicating a master node IP and NodePort per service (if you have 5 master nodes, each service is accessed from one master node's IP, etc.). This is not mandatory; you can connect to each service using any masterIP:nodePort.
I highly recommend using a LoadBalancer Service for this. Since you have a bare-metal cluster, try MetalLB.
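With MetalLB installed and given an address pool, a plain LoadBalancer Service is enough. A minimal sketch (the names and ports here are made up, not your values):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-microservice      # hypothetical name
    spec:
      type: LoadBalancer         # MetalLB assigns an external IP from its pool
      selector:
        app: my-microservice
      ports:
      - port: 80
        targetPort: 8080         # hypothetical container port

MetalLB then announces the assigned IP on your network (ARP in layer-2 mode, or BGP), so clients get a stable IP instead of nodeIP:nodePort.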
Edit (after Nagappa LM's comment):
If it's for QA, then there is no need to worry, but if they load-test all the services simultaneously, it could be problematic.
A code change means only your Kubernetes Deployment changes, not the Kubernetes Service; the Service is where you define the nodePort.
I'm not used to posting things, but I have a problem. On GKE, I set up a Service of type LoadBalancer for Windows pods on Windows nodes, but the load balancer's health check fails for the Windows nodes. I can use externalTrafficPolicy: Cluster, but I need Local for my applications.
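The Service looks roughly like this (a sketch; names and ports are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: win-app                  # placeholder name
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local   # what I need; Cluster works but doesn't fit my applications
      selector:
        app: win-app
      ports:
      - port: 443
        targetPort: 443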
How can I fix it?
Thanks for your help.
Alexandre Gué,
Team leader Infra
PI ELECTRONIQUE
Have you considered the Health check ranges for different Load Balancers?
For health checks to work, you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends.
A firewall rule should have been created per this doc:
The following example creates an ingress firewall rule for the following load balancers:
Internal TCP/UDP Load Balancing (health checks)
Internal HTTP(S) Load Balancing (health checks)
TCP Proxy Load Balancing (health checks)
SSL Proxy Load Balancing (health checks)
HTTP(S) Load Balancing (health checks and legacy health checks)
For these load balancers, the source IP ranges for health checks (including legacy health checks if used for HTTP(S) Load Balancing) are:
35.191.0.0/16
130.211.0.0/22
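For example, an allow rule for those prober ranges could look like this (the rule name, network, and port are placeholders for your own values):

    gcloud compute firewall-rules create fw-allow-health-checks \
        --network=NETWORK_NAME \
        --action=ALLOW \
        --direction=INGRESS \
        --source-ranges=35.191.0.0/16,130.211.0.0/22 \
        --rules=tcp:PORT

Note that with externalTrafficPolicy: Local the probers target the Service's healthCheckNodePort on each node, so that is the port that must be reachable on your Windows nodes.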
Also be aware of the Windows Server container limitations:
There are some Kubernetes features that are not yet supported for Windows Server containers. In addition, some features are Linux-specific and do not work for Windows. For the complete list of supported and unsupported Kubernetes features, see the Kubernetes documentation.
Not so long ago I decided to deploy my Logstash and Kibana services on Kubernetes, but then I was caught by a little problem.
Problem: I want to use 2 Kibana pods (to provide load balancing) with the security feature enabled, but when I try to log in, it redirects me back to the "Log In" page without any errors.
I'm using the Logstash 6.8.2 and Kibana 6.8.2 images, and the Elastic cluster is distributed across VMs. The whole stack worked perfectly, but when I added the xpack security feature I found out that I can't use 2 Kibana pods in the same Deployment at the same time. I then tried using only 1 pod, and it works as it is supposed to. I also checked for conflicts between the VMs and the containers, and there is no problem. I tried adding session affinity to the ClusterIP Service, but it didn't help. I guess the problem is in my K8S configuration and I'm close to success, but it's not enough.
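For what it's worth, the session-affinity attempt looked roughly like this (a sketch; the names are placeholders):

    apiVersion: v1
    kind: Service
    metadata:
      name: kibana               # placeholder name
    spec:
      selector:
        app: kibana
      sessionAffinity: ClientIP  # pin each client to a single Kibana pod
      ports:
      - port: 5601
        targetPort: 5601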
Thank you for all the support! I hope I'm not at a dead end and I'll be able to solve my problem with your help :heart:
P.S.: If there is no solution, I'd be glad to get feedback about your best practices for working with ELK on K8S.
The logs in my Kibana pods are:
{"type":"response","#timestamp":"2019-08-28T12:01:57Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/login?next=%2Fstatus","method":"get","headers":{"host":"<my pod IP>:5601","user-agent":"kube-probe/1.13","referer":"http://<my pod IP>:5601/status","accept-encoding":"gzip","connection":"close"},"remoteAddress":"<my remote address>","userAgent":"<my remote agent ( same as address )>","referer":"http://<my pod IP>:5601/status"},"res":{"statusCode":200,"responseTime":9,"contentLength":9},"message":"GET /login?next=%2Fstatus 200 9ms - 9.0B"}
I have an existing web application with a frontend and a backend that run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something already built in that I have missed so far?
I would encourage getting familiar with the concepts of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at Ingress as a sort of vhost/path service router/reverse proxy.
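A minimal sketch of the idea for your two subdomains (hostnames and Service names are made up; this uses the current networking.k8s.io/v1 API):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress              # hypothetical name
    spec:
      rules:
      - host: www.example.com        # frontend subdomain (made up)
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend       # hypothetical Service
                port:
                  number: 443
      - host: api.example.com        # backend subdomain (made up)
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend        # hypothetical Service
                port:
                  number: 443

The IngressController (e.g. nginx) watches these rules and routes by Host header and path, much like a vhost configuration.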
Is there a way to configure the Grails RabbitMQ plugin to connect to a clustered RabbitMQ environment for failover, or is there an alternative library/plugin I could use to achieve that?
grails 2.2.0
rabbitmq 1.0.0
I don't think there is an easy way to do this in Grails alone...
I would recommend using a load balancer in front of your RabbitMQ cluster. This allows you to route traffic to other nodes in the cluster if one fails. Once you have the load balancer configured, just point your rabbitmq.connectionfactory.hostname at the load balancer and it will do the rest!
Load balancer configuration varies depending on the type you use. If you don't already have a load balancer, HAProxy is a good option. There are some good examples online, and step-by-step instructions in the "RabbitMQ in Action" book (if you have it).
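A minimal HAProxy sketch for this (hostnames are placeholders; 5672 is the default AMQP port):

    listen rabbitmq
        bind *:5672
        mode tcp                     # AMQP is plain TCP, not HTTP
        balance roundrobin
        server rabbit1 rabbit1.example.com:5672 check
        server rabbit2 rabbit2.example.com:5672 check
        server rabbit3 rabbit3.example.com:5672 check

Then point rabbitmq.connectionfactory.hostname in Config.groovy at the HAProxy host, and failover happens behind the proxy.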