Neo4j setup in OpenShift

I am having difficulties deploying the official Neo4j Docker image (https://hub.docker.com/_/neo4j) to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created route for port 7474
Set up the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j browser and asks for authentication. At this point I am blocked, because any combination of URLs I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install the Neo4j Desktop application, and connect via bolt://localhost:7687, but this is not the ideal solution.
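The forwarding itself can be sketched like this (the pod name is a placeholder for whatever oc get pods shows):
# Forward the Bolt port of the Neo4j pod to localhost (pod name is a placeholder)
oc port-forward neo4j-1-abcde 7687:7687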
Therefore there are two questions:
1. How can I connect from the Neo4j browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?

I have no experience with OpenShift, but try adding the following config:
dbms.default_listen_address=0.0.0.0
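In the official Docker image, neo4j.conf settings map to environment variables (prefix NEO4J_, dots become single underscores, existing underscores are doubled), so on OpenShift this could be set with something like the following, assuming the deployment config is named neo4j:
# Set the default listen address via an environment variable (dc name assumed)
oc set env dc/neo4j NEO4J_dbms_default__listen__address=0.0.0.0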
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?

Short answer:
Connecting to the DB is most likely a configuration issue; maybe Tomaž Bratanič's answer is the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP/HTTPS traffic and not for any other kind of traffic. Typically, the "routers" of an OpenShift cluster listen only on ports 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that even with NodePorts, you might need your cluster administrator to add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
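A minimal sketch, assuming the deployment config created by oc new-app neo4j is named neo4j (check with oc get dc):
# Create a NodePort service for the Bolt port (service name assumed)
oc expose dc neo4j --type=NodePort --name=neo4j-bolt --port=7687
# Show the node port that was allocated (30000-32767 range by default)
oc get svc neo4j-bolt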

Related

Neo4j cluster: Expose Neo4j cluster to external world

I've installed Neo4j Enterprise from the Google Cloud Marketplace and it is accessible from within the Kubernetes network, but I want to access it from my external application, which is not on the same network.
Following this guide from Neo4j, I'm able to connect the browser using port forwarding:
MY_CLUSTER_LEADER_POD=mygraph-neo4j-core-0
kubectl port-forward $MY_CLUSTER_LEADER_POD 7687:7687 7474:7474
In the user guide, they suggest that I should not use a load balancer on the server side. I should expose each pod in the cluster separately and use bolt+routing from my application to handle request routing. This is described in the Limitations section of the guide.
It should be exposed using NodePorts, but I am unable to do it properly. I've tried doing it like this:
kubectl expose pod neo-cluster-neo4j-core-0 --port=7687 --name=neo-leader-pod
But I'm unable to connect using this exposed IP. I'm not good with cloud technologies, so I can't figure out what I'm doing wrong.
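One detail that may matter here: without a --type flag, kubectl expose creates a ClusterIP service, which is reachable only inside the cluster. A NodePort variant would be a sketch like:
# Expose the pod on a node port instead of a cluster-internal IP
kubectl expose pod neo-cluster-neo4j-core-0 --type=NodePort --port=7687 --name=neo-leader-pod
# Show the allocated node port
kubectl get svc neo-leader-pod
Note that on GKE the allocated node port must also be opened in the VPC firewall, as the answer below describes.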
I went through the article Neo4j Considerations in Orchestration Environments, which tells me what I should do but not how to do it. It assumes prior knowledge of gcloud/Kubernetes.
Could anyone guide me in the right direction? Thanks
If I'm not wrong, you created a GKE cluster for Neo4j Enterprise, and it works perfectly inside the cluster network but not from outside.
Check if you have opened the firewall for these ports.
To create rules or see the existing rules:
Go to cloud.google.com
Go to my Console
Choose your Project
Choose Networking > VPC network
Choose "Firewalls rules"
Choose "Create Firewall Rule" to create the rule if doesn't exist.
To apply the rule to selected VM instances, select Targets > "Specified target tags" and enter the tag name into "Target tags". This tag will be used to apply the new firewall rule to whichever instances you'd like. Then make sure the instances have the network tag applied.
To allow incoming TCP connections to port 7687, for example, enter tcp:7687 under "Protocols and Ports".
Click Create
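The same rule can be sketched with the gcloud CLI (rule name and target tag are placeholders):
# Allow inbound Bolt traffic to instances tagged "neo4j" (name and tag assumed)
gcloud compute firewall-rules create allow-neo4j-bolt --direction=INGRESS --allow=tcp:7687 --target-tags=neo4j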
Check the GKE documentation for a better clue:
https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod
https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy
https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps
:)

Not able to connect to Hazelcast instance deployed on OpenShift from external client

I deployed a Hazelcast image on OpenShift and created a route, but I'm still not able to connect to it from an external Java client. I came to know that routes only work for HTTP or HTTPS services, so am I missing anything here, or what do I have to do to expose that Hazelcast instance to the outside world?
The Docker image for Hazelcast runs Hazelcast.jar inside the container; could this be related to the problem I'm facing?
I tried exposing the service by running the command
oc expose dc hazelcast --type=LoadBalancer --name=hazelcast-ingress
and an external IP with a different port number was generated. I tried that as well but am still getting "com.hazelcast.core.HazelcastException: java.net.SocketTimeoutException" and am not able to connect.
Thanks in advance, any guidance would be really helpful.
According to this, "...If the client application is outside the OpenShift project, then the cluster needs to be exposed by the service with externalIP and the Hazelcast client needs to have the Smart Routing feature disabled".
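A rough sketch of the server-side part, assuming the service is named hazelcast and that 203.0.113.10 is an address routable from the client (both are placeholders):
# Attach an externalIP to the existing service (service name and IP assumed)
oc patch svc hazelcast -p '{"spec":{"externalIPs":["203.0.113.10"]}}'
On the client side, Smart Routing is disabled through the Hazelcast client network configuration (setSmartRouting(false) on the Java ClientConfig's network config), so the client sends all requests through the one exposed address instead of trying to reach every member directly.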

JIRA Usage on AWS

I just set up JIRA on my EC2 instance after installing it via the .bin installer. But when I hit the EC2 URL:
ec2-xxxxx.xxxxx.amazonaws.com
It hits the test success page for Apache2, which I installed after the JIRA installation.
How do I get to determine the correct URL for JIRA and hit the JIRA app?
Thanks
JIRA's default HTTP port is 8080, so you need to access it via
ec2-xxxxx.xxxxx.amazonaws.com:8080
If you are not using the default setting, check which port is configured; see the document Changing JIRA's TCP Ports.
You may also need to open port 8080 in the firewall, in the same security group in which you opened port 22. Otherwise, you can't directly access that port.
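For example, with the AWS CLI, opening the port in a security group might look like this (the group ID is a placeholder):
# Allow inbound TCP 8080 from anywhere (group ID is a placeholder)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0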
Apart from the previous answer you may wish to ensure the following:
Your AWS EC2 instance security group has the port opened
Your AWS VPC ACL allows TCP traffic on this port
Your VPC has an internet gateway
Your VPC has the routes configured
Your Apache proxy is configured to point to the Tomcat port (a minimal sketch follows this list)
Your Tomcat is configured
You have enabled port allocation using the setcap utility
Your local machine's firewall allows the connection (on Red Hat, iptables is enabled by default and blocks connections)
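For the Apache proxy item, a minimal sketch on Ubuntu, assuming JIRA's Tomcat listens on the default port 8080:
# Enable Apache's proxy modules
sudo a2enmod proxy proxy_http
# In the Apache vhost configuration (path varies by distribution), add:
#   ProxyPass / http://localhost:8080/
#   ProxyPassReverse / http://localhost:8080/
sudo service apache2 restart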
As you can see it may be tricky to install Jira on AWS. It may be a good idea to use a deployment service like Deploy4Me to do this quickly.

Neo4j backup error when backing up from HA cluster

I'm trying to set up backups for a Neo4j cluster with 3 instances. Neo4j is embedded.
If I run:
./neo4j-backup -from ha://10.106.4.80:5001,10.106.4.203:5001,10.106.14.164:5001 -to /tmp/neobak2/
from a host outside the 10.106.4.0 network, I get this error:
Could not find backup server in cluster neo4j.ha at 10.106.4.80:5001,10.106.4.203:5001,10.106.14.164:5001, operation timed out.
If I run it from a cluster member it works just fine. Also, running the backup script with single:// instead of ha:// works fine from anywhere.
Below is the basic cluster config I'm using:
ha.server_id: 1
ha.initial_hosts: 10.106.4.80:5001,10.106.4.203:5001,10.106.14.164:5001
ha.tx_push_factor: 2
I have already checked for firewall issues; there aren't any. The Neo4j version used is 1.9.5.
The webadmin interface shows that the cluster has online backup enabled and listening on the default port.
Any help will be appreciated.
According to RFC 5735, addresses in 10.0.0.0/8 are private, so I assume they are not routable from an external host.

Cassandra Cluster Setup getting JMX error

I'm trying to set up a Cassandra cluster as a test bed but got a JMX remote connection error. I seem to have found the answer to my error on the Cassandra FAQ page:
Nodetool says "Connection refused to host: 127.0.1.1" for any remote host. What gives?
Nodetool relies on JMX, which in turn relies on RMI, which in turn sets up its own listeners and connectors as needed on each end of the exchange. Normally all of this happens transparently behind the scenes, but incorrect name resolution for either the host connecting, or the one being connected to, can result in crossed wires and confusing exceptions.
If you are not using DNS, then make sure that your /etc/hosts files are accurate on both ends. If that fails try passing the -Djava.rmi.server.hostname=$IP option to the JVM at startup (where $IP is the address of the interface you can reach from the remote machine).
But can somebody help me with how to set -Djava.rmi.server.hostname=$IP?
Or what to add in the hosts file. I know that we normally add "IP alias" entries to hosts, but whose IP and alias?
I don't know much Java or Linux.
I'm currently working on Ubuntu 10.04 and Cassandra 0.7.4.
Sudesh
For JMX you need to enable JMX remoting:
java -Dcom.sun.management.jmxremote
Depending on where you want to access the JMX server from, you also need to specify a port:
-Dcom.sun.management.jmxremote.port=12345
and either set up or disable password authentication.
Have a look at http://download.oracle.com/javase/1.5.0/docs/guide/management/agent.html for more details.
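With Cassandra these flags typically go into conf/cassandra-env.sh (shipped with 0.7.x installs; older releases use bin/cassandra.in.sh) rather than on the command line. A sketch, with 192.168.1.10 standing in for the interface reachable from the remote machine:
# conf/cassandra-env.sh -- the IP below is a placeholder for your reachable interface
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=192.168.1.10"
# For a test bed only: disable JMX password authentication
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"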
