Expose Neo4j cluster to external world

I've installed Neo4j Enterprise from the Google Cloud Marketplace. It is accessible from within the Kubernetes network, but I want to access it from an external application that is not on the same network.
Following this guide from Neo4j, I'm able to connect the browser using port forwarding:
MY_CLUSTER_LEADER_POD=mygraph-neo4j-core-0
kubectl port-forward $MY_CLUSTER_LEADER_POD 7687:7687 7474:7474
In the user guide, they suggest that I should not use a load balancer on the server side. Instead, I should expose each pod in the cluster separately and use bolt+routing from my application to handle request routing. This is described in the Limitations section of the guide.
The pods should be exposed using NodePorts, but I am unable to do it properly. I've tried doing it like this:
kubectl expose pod neo-cluster-neo4j-core-0 --port=7687 --name=neo-leader-pod
But I'm unable to connect using the exposed IP. I'm not good with cloud technologies, so I can't figure out what I'm doing wrong.
I went through the article Neo4j Considerations in Orchestration Environments, which tells me what I should do but not how to do it. It assumes prior knowledge of gcloud/Kubernetes.
Could anyone guide me in the right direction? Thanks.

If I’m not wrong, you created a GKE cluster for Neo4j Enterprise, and it works perfectly inside the cluster network but not from outside.
Check whether you have opened the firewall for these ports.
To create rules or see the existing rules:
Go to cloud.google.com
Go to my Console
Choose your Project
Choose Networking > VPC network
Choose "Firewall rules"
Choose "Create Firewall Rule" to create the rule if it doesn't exist.
To apply the rule to select VM instances, select Targets > "Specified target tags", and enter into "Target tags" the name of the tag. This tag will be used to apply the new firewall rule onto whichever instance you'd like. Then, make sure the instances have the network tag applied.
To allow incoming TCP connections to port 7687, for example, enter tcp:7687 under "Protocols and Ports".
Click Create
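The console steps above can also be scripted with the gcloud CLI. A minimal sketch, assuming a hypothetical network tag neo4j and instance name; adjust the names and zone to your cluster:

```shell
# Create a firewall rule that allows external TCP traffic to Neo4j's
# HTTP (7474) and Bolt (7687) ports, but only for instances carrying
# the (hypothetical) "neo4j" network tag.
gcloud compute firewall-rules create allow-neo4j \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:7474,tcp:7687 \
    --target-tags=neo4j

# Apply the tag to a GKE node instance (instance name and zone are placeholders).
gcloud compute instances add-tags my-gke-node-1 \
    --tags=neo4j \
    --zone=us-central1-a
```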
Check the GKE documentation for a better clue:
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod
https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy
https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
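To put the guide's "expose each pod separately" advice into practice, a per-pod NodePort Service might look like this (a sketch based on the pod name in the question; the nodePort value is an arbitrary choice from the default 30000-32767 range):

```yaml
# NodePort Service targeting a single core member via the pod-name
# label that Kubernetes sets on StatefulSet pods. Repeat with a
# different nodePort for each pod in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: neo4j-core-0-external
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mygraph-neo4j-core-0
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687
```

Note that `kubectl expose pod ... --port=7687` without `--type=NodePort` creates a ClusterIP Service by default, which is still only reachable from inside the cluster.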
:)

Related

Kubernetes NodePort services in an on-premise Rancher cluster

I have 5 microservices in 5 pods and have exposed each service on a specific port using a NodePort service.
I have a UI app as one service inside another pod, which is also exposed using a NodePort service.
Since pods live and die, I can't use pod IPs to reach the URLs from the UI app, which is why I deployed NodePort services. Can I access all 5 services from the UI app seamlessly using their respective NodePorts?
Please advise: is this approach going to be reliable?
Yes, you can connect to those NodePort services seamlessly.
But remember, you may need higher network bandwidth and connectivity (to the nodes) if these services receive too much traffic.
Also, if you have several nodes, you can try a dedicated node IP and nodePort per service (if you have 5 nodes, each service is accessed through one node's IP, etc.). This is not mandatory; a NodePort is opened on every node, so you can reach each service through any nodeIP:nodePort.
I highly recommend using a LoadBalancer service for this. If you have a bare-metal cluster, try MetalLB.
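As a sketch of the recommended approach, a LoadBalancer Service for one of the microservices could look like this (service name, label, and ports are placeholders; with MetalLB on bare metal, the external IP is assigned from MetalLB's address pool):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ui-app
spec:
  type: LoadBalancer    # MetalLB (or a cloud provider) assigns the external IP
  selector:
    app: ui-app
  ports:
    - port: 80          # external port on the load-balancer IP
      targetPort: 8080  # container port inside the pod
```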
Edit (after Nagappa LM's comment):
If it's for QA there is no need to worry, but if they load-test all the services simultaneously it could be problematic.
A code change means only your k8s Deployment changes, not the Kubernetes Service; the Service is where you define the nodePort.

Neo4j setup in OpenShift

I am having difficulty deploying the official Neo4j Docker image https://hub.docker.com/_/neo4j to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created a route for port 7474
Set the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j browser and asks for authentication. At this point I am blocked, because every connection URL I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install Neo4j Desktop, and connect via bolt://localhost:7687, but this is not an ideal solution.
Therefore there are two questions:
1. How can I connect from the Neo4j browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?
I have no experience with OpenShift, but try adding the following config:
dbms.default_listen_address=0.0.0.0
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?
Short answer:
Failing to connect to the DB is most likely a configuration issue; maybe Tomaž Bratanič's answer is the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP / HTTPS traffic and not for any other kind of traffic. Typically, the "Routers" of an OpenShift cluster listen only on Port 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that even with NodePorts, you might need your cluster administrator to add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
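Following that documentation, a NodePort Service for the Bolt port might look like this (a sketch; the app: neo4j selector label is an assumption based on how oc new-app labels its resources, and the nodePort value is arbitrary):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt-nodeport
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
    - name: bolt
      port: 7687
      targetPort: 7687
      nodePort: 30687
```

You would then connect from outside with bolt://<node-ip>:30687.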

K8s load balancing between different Deployments/ReplicaSets

We have a system that has 2 endpoints based on geo-location, e.g. (east_url, west_url).
One of our applications needs to load-balance between those 2 URLs. In the consumer application, I created 2 Deployments with the same image but different environment variables, such as "url=east_url" and "url=west_url".
After the deployment, I have the following running pods; each of them has the label "app=consumer-app" and either "region=east" or "region=west":
consumer-east-pod-1
consumer-east-pod-2
consumer-west-pod-1
consumer-west-pod-2
When I create a ClusterIP Service with selector app=consumer-app, somehow it only picks up one ReplicaSet. I am just curious whether it is actually possible in Kubernetes to have a Service backed by different Deployments?
Another way of doing this I can think of is to create 2 Services and have the ingress controller load-balance between them; is this possible? We are using Kong as the ingress controller. I am looking for something like OpenShift, which can have "alternateBackends" to serve the Route: https://docs.openshift.com/container-platform/4.1/applications/deployments/route-based-deployment-strategies.html
I was missing a label on the east ReplicaSet; after I added app: consumer-app, it works fine now.
Thanks
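For reference, a single Service can indeed select pods from both Deployments as long as they all carry the selector label; a sketch based on the labels in the question (ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: consumer-app
spec:
  selector:
    app: consumer-app   # matches both region=east and region=west pods
  ports:
    - port: 80
      targetPort: 8080
```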
TL;DR: use Istio.
With Istio you can create VirtualServices:
A VirtualService defines a set of traffic routing rules to apply when
a host is addressed. Each routing rule defines matching criteria for
traffic of a specific protocol. If the traffic is matched, then it is
sent to a named destination service (or subset/version of it) defined
in the registry.
The VirtualService lets you send traffic to different backends based on the URI.
Now, if you plan to perform something like an A/B test, you can use Istio's DestinationRule: https://istio.io/docs/reference/config/networking/destination-rule/
DestinationRule defines policies that apply to traffic intended for a
service after routing has occurred.
Version specific policies can be specified by defining a named subset
and overriding the settings specified at the service level
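A minimal sketch of 50/50 routing between the two regions with a DestinationRule and a VirtualService (the host consumer-app and the subset names are assumptions based on the labels in the question):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consumer-app
spec:
  host: consumer-app          # the Kubernetes Service in front of both Deployments
  subsets:
    - name: east
      labels:
        region: east
    - name: west
      labels:
        region: west
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consumer-app
spec:
  hosts:
    - consumer-app
  http:
    - route:
        - destination:
            host: consumer-app
            subset: east
          weight: 50
        - destination:
            host: consumer-app
            subset: west
          weight: 50
```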
1. If you are using GKE, the Istio installation process can be found here.
2. If you are running Kubernetes on a virtual machine, the installation process can be found here.

How to implement Kubernetes POD to POD Communication?

This question has been asked and answered before on Stack Overflow, but because I'm new to Kubernetes, I don't understand the answer.
Assuming I have two containers, each in a separate pod (because I believe this is the recommended approach), I think I need to create a single Service for my two pods to be part of.
How does my java application code get the IP address of the service?
How does my java application code get the IP addresses of another POD/container (from the service)?
This will be a list of IP address because these are stateless and they might be replicated. Is this correct?
How do I select the least busy instance of the POD to communicate with?
Thanks
Siegfried
How does my java application code get the IP address of the service?
You need to create a Service to expose the Pod's port; then you just use the Service name, and kube-dns will resolve it to the Pod's IP address.
How does my java application code get the IP addresses of another
POD/container (from the service)?
Yes, using the service's name
This will be a list of IP address because these are stateless and they
might be replicated. Is this correct?
The Service will load-balance between all Pods that match the selector, so it could be 0, 1, or any number of Pods.
How do I select the least busy instance of the POD to communicate with?
The common way is the round-robin policy, but there are other balancing policies; see:
https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-ipvs
Cheers ;)
You don't need to get any IP; you use the service name (DNS). So if you called your service "java-service-1" and exposed port 80, you can access it this way from inside the cluster:
http://java-service-1
If the service is in a different namespace, you have to add that as well (see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/)
You also don't select the least busy instance yourself; a Service can be configured as a LoadBalancer, and Kubernetes does all of this for you (see https://kubernetes.io/docs/concepts/services-networking/).
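To make the DNS forms concrete, here is what the lookups look like from a pod inside the cluster (the service and namespace names are hypothetical):

```shell
# Same namespace: the bare Service name resolves via kube-dns.
curl http://java-service-1

# Different namespace: append the namespace, and optionally the
# full cluster suffix.
curl http://java-service-1.other-namespace
curl http://java-service-1.other-namespace.svc.cluster.local
```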

Does JIRA work on Google Compute Engine VM

Is JIRA supported on GCE? If so, how do I make it work?
We have installed the 64-bit .bin of JIRA (6.4.1) and opened the necessary custom HTTP ports under Networks.
We started JIRA as a service but are unable to reach it via a browser; there is no error message other than a timeout!
Any help would be highly appreciated.
Note: We are new to Google Cloud Platform.
Did you enable the HTTP and HTTPS services on your instance? By default a GCE instance does not allow HTTP and HTTPS traffic; you have to enable it manually.
The Jira configuration for Google Compute Engine can be tricky. You need to make sure that:
The firewall rules under Networking allow connections to the Jira HTTP port, or HTTP is enabled in the VM properties
The global networking rules allow TCP traffic on this port
The virtual network has routes configured
If you use Apache as a proxy for Jira (recommended), Apache is configured to point to the Tomcat port
Your Tomcat is configured
You have enabled port allocation using the setcap utility
Your local machine's firewall allows the connection (on Red Hat, iptables is enabled by default and blocks connections)
As you can see it may be tricky to install Jira on Google Cloud. It may be a good idea to use a deployment service like Deploy4Me to do this quickly and automatically.
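For the Apache-in-front-of-Tomcat step, a minimal proxy sketch (mod_proxy and mod_proxy_http must be enabled; the hostname is a placeholder, and port 8080 is an assumption, being Jira's usual Tomcat port):

```apache
<VirtualHost *:80>
    ServerName jira.example.com
    ProxyPreserveHost On
    # Forward all requests to the Tomcat instance running Jira.
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>
```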
