Communication from Pod to Pod on the same node inside Kubernetes in GCP - Docker

I have dockerized both my frontend (React) and backend (Express/Node.js) projects and created a Deployment and a Service for each in Kubernetes. I have successfully deployed them to a single-node Kubernetes cluster on Google Cloud Platform, with both Pods on the same node (one Pod runs the React app and the second Pod runs the Express/Node.js app).
Question:
1.) How do I communicate from one Pod to another Pod on the same node in a Kubernetes cluster?
2.) I have exposed my React app to the outside world by creating a Service of type LoadBalancer in Kubernetes, and I can access the React app's endpoint from the browser. Now, is it possible for the React app to access the Express app inside the node without exposing the Express app to the outside world? How can I achieve this?
Thanks in Advance.

When the frontend is a browser-based JavaScript app, the JavaScript resources may be hosted from a Pod in the cluster, but the logic doesn't run there: the frontend JavaScript runs in the user's browser. Calling any backend endpoints in the cluster from the user's browser therefore requires an external URL somewhere along the chain, not just an internal one.
A typical way to do this is to set up a Service of type LoadBalancer for the backend and put its external endpoint into the frontend's config. Another is to set up an Ingress controller and deploy a Service and an Ingress along with the backend. With Ingress you can know what the external URL will be before you deploy the Service (and this is easiest if you use DNS). Cluster-internal communication doesn't need Ingress and can be done with Services of type ClusterIP, but I think you need external communication.
You will need to expose an external entry point for users to hit the UI anyway (the place where the JS is hosted). With Ingress you could configure the route to the backend as a different path on the same (external) host, as sketched below.
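As an illustration of that Ingress approach, here is a minimal sketch, assuming the React app sits behind a ClusterIP Service named react-frontend on port 80 and the Express app behind a ClusterIP Service named express-backend on port 3000 (all names and ports are hypothetical; the Express Service stays cluster-internal and is only reachable through the Ingress path):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                  # hypothetical name
spec:
  rules:
    - http:
        paths:
          - path: /api               # backend reached on the same external host
            pathType: Prefix         # path handling/rewriting details depend on your ingress controller
            backend:
              service:
                name: express-backend    # ClusterIP Service, never exposed directly
                port:
                  number: 3000
          - path: /
            pathType: Prefix
            backend:
              service:
                name: react-frontend     # serves the static React bundle
                port:
                  number: 80

The React code can then call the backend with relative URLs such as /api/... on the same external host, so the Express Service never needs its own LoadBalancer.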

Related

Kubernetes NodePort services in an on-premise Rancher cluster

I have 5 microservices in 5 Pods and have exposed each one on a specific port using a NodePort Service.
I have a UI app as another service inside a separate Pod, which is also exposed using a NodePort Service.
Since I can't use Pod IPs to reach the URLs from the UI app (Pods live and die), I deployed each service as a NodePort Service. Can I access all 5 services from the UI app seamlessly using their respective node ports?
Please advise - is this approach going to be reliable?
Yes, you can connect to those NodePort services seamlessly.
But remember, you may need a higher-bandwidth network card and connection (to the master nodes) if these services receive too much traffic.
Also, if you have a few master nodes, you can try a dedicated master node IP and node port per service (if you have 5 master nodes, each service is accessed through one master node's IP, and so on). This is not mandatory; you can connect to each service using any masterIP:nodePort.
I highly recommend using a LoadBalancer Service for this. If you have a bare-metal cluster, try MetalLB.
Edit (after Nagappa LM's comment):
If it's for QA, then there is no need to worry, but if they load-test all the services simultaneously it could be a problem.
A code change only means your Kubernetes Deployment changes, not the Kubernetes Service; the Service is where you define the node port, as in the sketch below.
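As a concrete illustration of that last point, here is a minimal sketch of a NodePort Service, assuming a hypothetical microservice whose Pods carry the label app: orders and listen on container port 8080 (names, labels and port numbers are made up for the example):

apiVersion: v1
kind: Service
metadata:
  name: orders-svc            # hypothetical Service name
spec:
  type: NodePort
  selector:
    app: orders               # must match the Pod labels of the Deployment
  ports:
    - port: 8080              # port the Service exposes inside the cluster
      targetPort: 8080        # container port the Pods listen on
      nodePort: 30080         # fixed port opened on every node (30000-32767)

Redeploying the application (a new image, a rolling update of the Deployment) does not touch this Service, so the node port stays stable and the UI app can keep calling nodeIP:30080 regardless of which Pods are currently running.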

How can I integrate my application with a Kubernetes cluster running Docker containers?

This is more of a research question. If it does not meet the standards of SO, please let me know and I will ask elsewhere.
I am new to Kubernetes and have a few basic questions. I have read a lot of doc on the internet and was hoping someone can help answer few basic questions.
I am trying to create an integration between Kubernetes (user applications running inside Docker containers, to be precise) and my application, which would act as a backup for certain data in the containers.
1.) My application currently runs in AWS. Would the Kube cluster need to run in AWS as well, or can it run in any cloud service or even on-prem as long as the APIs are available?
2.) Does my application need to know the IP of the master node API server to do POST/GET requests, and nothing else?
3.) For authentication, can I use AD (my application uses AD today for a few things)? That would also give me role-based policies for each user. Or do I always have to use the Kube TokenReview API for authentication?
4.) Would the applications running in Kubernetes use the APIs I provide to communicate with my application?
5.) Would my application use POST/GET to communicate with the Kube master API server? Do I need to use kubectl for this and for #4 above?
Thanks for your help.
Your application needn't exist on the same server as k8s. There are several ways to connect to a k8s cluster, depending on your use case: you can expose the built-in k8s API using kubectl proxy, connect directly to the k8s API on the master, or expose services via a load balancer or a node port.
You would only need to know the IP of the master node if you're connecting to the cluster directly through the built-in k8s API, but in most cases you should only be using this API to internally administer your cluster. The preferred way of accessing k8s pods is to expose them via a load balancer, which allows you to access a service on any node from a single IP. k8s also allows you to access a service with a nodePort from any k8s node (except the master) through a preassigned port.
TokenReview is only one of the k8s auth strategies. I don't know anything about Active Directory auth, but at a glance OpenID Connect tokens seem to support it. You should review whether or not you need to allow users direct access to the k8s API at all. Consider exposing services via LoadBalancer instead.
I'm not sure what you mean by this, but if you deploy your APIs as k8s deployments you can expose their endpoints through services to communicate with your external application however you like.
Again, the preferred way to communicate with k8s pods from external applications is through services exposed as load balancers, not through the built-in API on the k8s master. In the case of services, it's up to the underlying API to decide which kinds of requests it wants to accept.
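To make that service-exposure option concrete, here is a minimal sketch of a Service of type LoadBalancer, assuming a hypothetical Deployment whose Pods are labelled app: backup-agent and serve an HTTP API on container port 8080 (all names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: backup-agent-lb       # hypothetical name
spec:
  type: LoadBalancer          # the cloud provider provisions an external IP
  selector:
    app: backup-agent         # must match the Pod labels
  ports:
    - port: 443               # external port on the load balancer
      targetPort: 8080        # container port the Pods listen on

Your external application then talks to the single load-balancer IP (or DNS name) rather than to the k8s master API or to individual node IPs.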

Deploying a Spring Boot REST service with HTTPS enabled in Kubernetes

I have developed a Spring Boot based REST API service and enabled HTTPS on it using a self-signed cert keystore (to test locally), and it works well.
server.ssl.key-store=classpath:certs/keystore.jks
server.ssl.key-store-password=keystore
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
Now, I want to package a Docker image and deploy this service in a Kubernetes cluster. I know I can expose the service as a NodePort and access it externally.
What I want to know is: I doubt that my self-signed cert generated on my local machine will work when deployed in a Kubernetes cluster. I researched and found a couple of solutions using Kubernetes Ingress, Kubernetes Secrets, etc. I am confused as to what the best way to go about this is, so that I can access my service running in Kubernetes over HTTPS. What changes will I need to make to my REST API code?
UPDATED NOTE: Though I have used a self-signed cert for testing purposes, I can obtain a CA-signed cert from my company and use it for production. My question is more along the lines of: for a REST API service which already uses an SSL/TLS based connection, what are some of the better ways to deploy and access the cert in a Kubernetes cluster, e.g. package it in the application itself, use Secrets, or scrap the application's SSL configuration and use Ingress instead, etc.? Hope my question makes sense :)
Thanks for any suggestions.
Well, it depends on the way you want to expose your service. Basically your options are an Ingress, an external load balancer (only available in certain cloud environments), or a Service that's routed to a port (either via NodePort or HostPort).
Attention: our K8S cluster is self-hosted, so I have no reliable information about external load balancers in K8S and will therefore omit that option.
If you want to expose your service directly behind one of your domains on the default HTTPS port (e.g. https://app.myorg.org), you'll want to use Ingress. But if you don't need that and can live with a specific port, the NodePort approach should do the trick (e.g. https://one.ofyourcluster.servers:30000/).
Let's assume you want to try the Ingress approach: then you need to add the certificates to the Ingress definition in K8S instead of the Spring Boot application, or you must additionally specify in the Ingress that the service itself is reachable via HTTPS. The way to do it may differ from ingress controller to ingress controller.
For the NodePort/HostPort approach you just need to enable SSL in your application.
Beyond that, you also need a valid certificate, e.g. one issued by https://letsencrypt.org/
Actually, for K8S there are some projects that can fetch a Let's Encrypt certificate for you automatically if you use Ingresses (e.g. https://github.com/jetstack/cert-manager/).
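As a rough sketch of the Ingress variant: the certificate and key go into a TLS Secret and the Ingress references it, so TLS is terminated at the ingress controller and the Spring Boot app can serve plain HTTP behind it. The host name, Secret name and Service name below are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: api-tls                      # hypothetical Secret name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress                  # hypothetical Ingress name
spec:
  tls:
    - hosts:
        - api.myorg.org              # placeholder host
      secretName: api-tls            # TLS terminated at the ingress controller
  rules:
    - host: api.myorg.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: spring-rest-svc    # ClusterIP Service in front of the app
                port:
                  number: 8080

If you instead keep HTTPS inside the Pod, most ingress controllers need an extra, controller-specific annotation to talk HTTPS to the backend, which is the part that differs from ingress controller to ingress controller.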

How to access a 'private' service of Kubernetes in browser?

I have created a Kubernetes cluster in GKE. The first thing I tried was creating a Deployment, a Service (type: NodePort), and an Ingress on top of my Service.
I was then able to visit my Pod using a public IP. This is all working fine, but now I want a cluster in which I can access the services in my browser using a private IP, without others being able to access them.
I've created a new cluster, but I've disabled the HTTP load-balancing add-on, so it isn't created inside my cluster. Then I made a new Deployment and created a new Service whose type is ClusterIP.
Now I seem to have a private service, but how can I access it in my browser?
Is it possible to create a VPN solution in GKE to connect to the cluster and get an IP from inside the cluster, which would allow me to access the private services in my cluster?
If I'm misunderstanding something, please feel free to correct me.

How can I use vhosts with the same port in kubernetes pod?

I have an existing web application with a frontend and a backend which run on the same port (HTTPS/443) on different subdomains. Do I really need a load balancer in my pod to handle all incoming web traffic, or does Kubernetes have something already built in which I have missed so far?
I would encourage getting familiar with the concept of Ingress and IngressController: http://kubernetes.io/docs/user-guide/ingress/
Simplifying things a bit, you can look at Ingress as a sort of vhost/path service router/reverse proxy, as in the sketch below.
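Here is a minimal sketch of host-based (vhost-style) routing with a single Ingress, assuming hypothetical ClusterIP Services frontend-svc and backend-svc behind the subdomains www.example.org and api.example.org (all names are illustrative; TLS for port 443 would be added with a tls: section in the same object):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vhost-ingress                # hypothetical name
spec:
  rules:
    - host: www.example.org          # frontend subdomain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 80
    - host: api.example.org          # backend subdomain, same external port
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-svc
                port:
                  number: 8080

The ingress controller listens on one external port and routes by the Host header, so no extra load balancer is needed inside the Pod itself.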

Resources