This question has been asked and answered before on Stack Overflow, but because I'm new to Kubernetes, I don't understand the answer.
Assuming I have two containers, each container in a separate POD (because I believe this is the recommended approach), I think I need to create a single service for my two pods to be a part of.
How does my java application code get the IP address of the service?
How does my java application code get the IP addresses of another POD/container (from the service)?
This will be a list of IP addresses because these are stateless and they might be replicated. Is this correct?
How do I select the least busy instance of the POD to communicate with?
Thanks
Siegfried
How does my java application code get the IP address of the service?
You need to create a Service to expose the Pod's port, and then you just use the Service name; kube-dns will resolve it to the Service's cluster IP, which forwards traffic to the Pods.
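As an illustration, a minimal Service might look like the sketch below; the name, label and ports are placeholders, assuming your Java Pods carry the label app: my-java-app and listen on 8080:

apiVersion: v1
kind: Service
metadata:
  name: java-service-1
spec:
  selector:
    app: my-java-app        # must match the labels on your Pods
  ports:
  - port: 80                # port the Service listens on
    targetPort: 8080        # port the container listens on

Your Java code then just calls http://java-service-1 and lets the cluster DNS do the rest.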
How does my java application code get the IP addresses of another POD/container (from the service)?
Yes, again by using the Service's name; you don't normally need the individual Pod IPs.
This will be a list of IP addresses because these are stateless and they might be replicated. Is this correct?
The Service will load balance between all Pods that match the selector, so it could be 0, 1 or any number of Pods.
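For example, a Deployment roughly like this (a sketch reusing the hypothetical app: my-java-app label from the Service above) gives the Service three Pods to balance across:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-app
spec:
  replicas: 3               # scale up or down; the Service follows automatically
  selector:
    matchLabels:
      app: my-java-app
  template:
    metadata:
      labels:
        app: my-java-app    # matches the Service's selector
    spec:
      containers:
      - name: app
        image: my-registry/my-java-app:1.0   # placeholder image
        ports:
        - containerPort: 8080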
How do I select the least busy instance of the POD to communicate with?
The common way is a round-robin policy, but there are other balancing policies; see the IPVS proxy mode documentation:
https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs
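For illustration, if your cluster runs kube-proxy in IPVS mode, a configuration sketch like this selects the backend with the fewest active connections (field names are from KubeProxyConfiguration; check how kube-proxy is actually configured in your cluster first):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "lc"           # "lc" = least connection; the default is "rr" (round robin)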
Cheers ;)
You don't need to get any IP; you use the service name (DNS). So if you called your service "java-service-1" and exposed port 80, you can access it this way from inside the cluster:
http://java-service-1
If the service is in a different namespace, you have to add that as well (see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
You also don't select the least busy instance yourself; a service can be configured as a LoadBalancer, and Kubernetes does all of this for you (see https://kubernetes.io/docs/concepts/services-networking/)
I have a Node.js app which connects to an external DB. The DB will refuse the connection until I whitelist my IP, or my pod will fail. So is it possible that the external IP for my service will be stuck on pending if the pod fails?
is it possible that my external ip for the service will be stuck on pending if the pod fails?
The Service and Pods are created separately. So if you're creating a LoadBalancer-type Service and your cluster is correctly configured, you should be able to get an externalIP: address for it even if the Pods aren't correctly starting up.
But:
I have a nodejs app which connects to an external db, the db will refuse the connection until I whitelist my ip
The Service only accepts inbound connections. In a cloud environment like AWS, the externalIP: frequently is the address of a specific load balancer. Outbound requests to a database won't usually come from this address.
If your cluster is in the same network environment as the database, you probably need to allow every individual worker node in the database configuration. Tools like the cluster autoscaler can cause the node pool to change, so if you can configure the entire CIDR block containing the cluster that's easier. If the cluster is somewhere else and outbound traffic passes through a NAT gateway of some sort, then you need to allow that gateway.
I have 5 microservices in 5 pods and have deployed each service on a specific port using a NodePort Service.
I have a UI app as one service inside another pod, which is also exposed using a NodePort service.
Since I can't use pod IPs to access URLs in the UI app (pods live and die), I deployed them as NodePort services. Can I access all 5 services inside the UI app seamlessly using their respective node ports?
Please advise - is this approach going to be reliable?
Yes, you can connect to those Node port services seamlessly.
But remember, you may need a higher-bandwidth network card and connection (to the master nodes) if these services get too much traffic.
Also, if you have a few master nodes, you can try a dedicated master node IP and nodePort per service. (If you have 5 master nodes, each service is accessed from one master node's IP, etc. This is not mandatory; you can connect to each service using any masterIP:nodeport.)
I highly recommend using a LoadBalancer service for this. If you have a bare-metal cluster, try MetalLB.
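For reference, a minimal MetalLB layer-2 address pool in the legacy ConfigMap format looks roughly like this (newer MetalLB releases configure this through IPAddressPool/L2Advertisement resources instead; the address range is just an example from a private LAN):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250   # IPs MetalLB may hand out to LoadBalancer Services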
Edit (after Nagappa LM's comment):
If it's for QA, then there's no need to worry, but if they load test all the services simultaneously, it could be problematic.
A code change means only your K8s Deployment changes, not the Kubernetes Service. The K8s Service is where you define the nodePort.
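As an illustration, the nodePort is set in the Service spec, roughly like this (names and port numbers are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: ui-app
spec:
  type: NodePort
  selector:
    app: ui-app             # labels on the UI Pods
  ports:
  - port: 80                # cluster-internal Service port
    targetPort: 8080        # container port
    nodePort: 30080         # port opened on every node (30000-32767 by default)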
I have a Kubernetes cluster on AWS EKS. Suddenly, existing pods on my cluster were unable to resolve DNS hostnames when I ran nslookup inside the pods.
Can someone please suggest:
1. How to resolve this DNS resolution issue among the pods
2. What change caused my cluster to behave like this all of a sudden
Check your namespaces.
kube-dns works in the following way:
to reach a Pod in the same namespace: curl http://servicename
to reach a Pod in a different namespace: curl http://servicename.namespace
See the Kubernetes documentation on namespaces and DNS:
Understanding namespaces and DNS
When you create a Service, it creates a corresponding DNS entry. This entry is of the form <service-name>.<namespace>.svc.cluster.local, which means that if a container just uses <service-name>, it will resolve to the service which is local to its namespace. This is useful for using the same configuration across multiple namespaces such as Development, Staging and Production. If you want to reach across namespaces, you need to use the fully qualified domain name (FQDN).
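As a concrete sketch, a Service named api in a namespace called staging (both hypothetical) is reachable under these names:

apiVersion: v1
kind: Service
metadata:
  name: api                 # hypothetical Service name
  namespace: staging        # hypothetical namespace
spec:
  selector:
    app: api
  ports:
  - port: 80
# From a Pod in "staging":           http://api
# From a Pod in another namespace:   http://api.staging
# Fully qualified:                   http://api.staging.svc.cluster.local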
I've installed Neo4j Enterprise from the Google Cloud Marketplace and it is accessible from within the Kubernetes network, but I want to access it from my external application, which is not on the same network.
Following this guide from Neo4j, I'm able to connect the browser using port forwarding:
MY_CLUSTER_LEADER_POD=mygraph-neo4j-core-0
kubectl port-forward $MY_CLUSTER_LEADER_POD 7687:7687 7474:7474
In the user guide, they suggest that I should not use a load balancer on the server side. I should expose each pod in the cluster separately and use bolt+routing from my application to handle request routing. This is described in Limitations section of the guide.
It should be exposed using NodePorts, but I am unable to do it properly. I've tried doing it like this:
kubectl expose pod neo-cluster-neo4j-core-0 --port=7687 --name=neo-leader-pod
But I'm unable to connect using this exposed IP. I'm not good with cloud technologies so I can't figure out what I'm doing wrong.
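From the guide, I think I need a per-pod NodePort Service roughly like the following (the nodePort value is just my guess), but I can't tell if this is right:

apiVersion: v1
kind: Service
metadata:
  name: neo-leader-pod
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mygraph-neo4j-core-0   # label the StatefulSet puts on each Pod
  ports:
  - name: bolt
    port: 7687
    targetPort: 7687
    nodePort: 30687          # arbitrary port in the NodePort range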
I went through the article Neo4j Considerations in Orchestration Environments, which tells me what I should do but not how to do it. It assumes prior knowledge of gcloud/Kubernetes.
Could anyone guide me in the right direction? Thanks
If I'm not wrong, you created a GKE cluster for Neo4j Enterprise.
And it works perfectly inside of the cluster network, but not from outside.
Check if you have opened the firewall for these ports.
To create rules or see the existing rules:
Go to cloud.google.com
Go to my Console
Choose your Project
Choose Networking > VPC network
Choose "Firewalls rules"
Choose "Create Firewall Rule" to create the rule if doesn't exist.
To apply the rule to select VM instances, select Targets > "Specified target tags", and enter into "Target tags" the name of the tag. This tag will be used to apply the new firewall rule onto whichever instance you'd like. Then, make sure the instances have the network tag applied.
To allow incoming TCP connections to port 7687 for example, in "Protocols and Ports" enter tcp:7687
Click Create
Check the GKE documentation for more detail:
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod
https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy
https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
:)
I'm running Jenkins on my K8s cluster, and it's currently accessible externally via node_name:port. Some of my users are bothered by having to access the service with a port number; is there a way I could just assign the service a name? For instance: jenkins.mydomain
Thank you.
Have a look at Kubernetes Ingress.
You can define rules that point internally to the Kubernetes Service in front of Jenkins.
https://kubernetes.io/docs/concepts/services-networking/ingress/
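For example, an Ingress rule along these lines (a sketch; it assumes a Service named jenkins listening on port 8080 and an ingress controller already installed in the cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins
spec:
  rules:
  - host: jenkins.mydomain       # hostname your users will use
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins        # assumed name of the Service in front of Jenkins
            port:
              number: 8080       # assumed Service port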
You could use an Ingress or a Service of type LoadBalancer that listens on port 80 and forwards to the Jenkins Pods on the custom port. Then you could just create a DNS record, for example for jenkins.mydomain.com, pointing to the IP address of the Service.
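A sketch of the LoadBalancer variant, assuming the Jenkins Pods carry the label app: jenkins and listen on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: LoadBalancer
  selector:
    app: jenkins
  ports:
  - port: 80                # what users hit via jenkins.mydomain
    targetPort: 8080        # Jenkins container port

The cloud provider (or MetalLB on bare metal) assigns the external IP that the jenkins.mydomain DNS record would point to.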
Thank you so much for your suggestions. I forgot to mention that my k8s cluster is running on bare metal, so a solution like Ingress on its own won't work.
I ended up using metallb for this.
https://metallb.universe.tf/
Thanks again :)