How to connect to InfluxDB server (not local) - influxdb

I was given the name of the cluster node to connect to. I was told to target it as my InfluxDB server, but I am not sure how to do that. Do I need to change some settings in the config file? If so, where and how? I do have information such as username, password, and database too.

I figured out that you just run your Influx server as normal and connect to the remote machine/node through the influx CLI, e.g. connect xxxxxxxx:8086
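With the 1.x influx CLI you can also pass everything as flags up front (the host, credentials, and database below are placeholders):
influx -host xxxxxxxx -port 8086 -username <user> -password <pw> -database <db>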

Related

Accessing remote redis server with Grafana

I'm trying to access my Redis database via Grafana Cloud on my laptop. The database is a Redis container working as a cache on a different device (a Pi). Accessing the Redis database via a Python script on my remote device is no problem, but trying to connect to it via Grafana (using the Redis Data Source plugin) doesn't work as intended and throws a connection error. Unfortunately, the documentation leaves me rather clueless as to the specific cause (any missing plugin dependencies?), so I'm thankful for every hint.
To be able to access the Redis server from Grafana Cloud it should be exposed to the Internet, as Jan mentioned.
If you run Grafana in a Docker container, it should be started in host network mode (https://docs.docker.com/network/host/) to be able to access it from other devices.
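For example, a locally run Grafana container started in host network mode would look roughly like this (grafana/grafana is the official image; the rest of the setup is omitted):
docker run -d --name=grafana --network host grafana/grafana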
If something is lacking or not clear in the Redis plugins documentation, please open an issue and we will update it: https://github.com/RedisGrafana/RedisGrafana/issues

Neo4j setup in OpenShift

I am having difficulties deploying the official Neo4j Docker image https://hub.docker.com/_/neo4j to an OpenShift environment and accessing it from outside (from my local machine).
I have performed the following steps:
oc new-app neo4j
Created route for port 7474
Set up the environment variable NEO4J_dbms_connector_bolt_listen__address to 0.0.0.0:7687, which is the equivalent of setting dbms.connector.bolt.listen_address=0.0.0.0:7687 in the neo4j.conf file.
Accessed the route URL from my local machine, which opens the Neo4j browser and requires authentication. At this point I am blocked, because any combination of URLs I try is unsuccessful.
As a workaround I have managed to forward port 7687 to my local machine, install the Neo4j Desktop solution and connect via bolt://localhost:7687, but this is not the ideal solution (rough commands below).
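Roughly, the commands for the environment variable and for the port-forward workaround were (deployment config and pod names are placeholders):
oc set env dc/neo4j NEO4J_dbms_connector_bolt_listen__address=0.0.0.0:7687
oc port-forward <neo4j-pod-name> 7687:7687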
Therefore there are two questions:
1. How can I connect from the Neo4j browser to its own database?
2. How can I connect from an external environment (through the OpenShift route) to the Neo4j DB?
I have no experience with OpenShift, but try adding the following config:
dbms.default_listen_address=0.0.0.0
Is there any other way for you to connect to Neo4j, so that you could further inspect the issue?
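If you are setting this on the official Docker image, the equivalent environment variable (using the same dots-to-underscores convention as your bolt setting, with literal underscores doubled) should be:
NEO4J_dbms_default__listen__address=0.0.0.0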
Short answer:
Connecting from the Neo4j browser to its own database is most likely a configuration issue; maybe Tomaž Bratanič's answer is the solution. As for accessing the DB from outside, you will most likely need a NodePort.
Long answer:
Note that OpenShift Routes are for HTTP / HTTPS traffic and not for any other kind of traffic. Typically, the "Routers" of an OpenShift cluster listen only on Port 80 and 443, so connecting to your database on any other port will most likely not work (although this heavily depends on your cluster configuration).
The solution for non-HTTP(S) traffic is to use NodePorts as described in the OpenShift documentation: https://docs.openshift.com/container-platform/3.11/dev_guide/expose_service/expose_internal_ip_nodeport.html
Note that also for NodePorts you might need your cluster administrator to add additional ports to the load balancer, or you might need to connect to the OpenShift nodes directly. Refer to the documentation on how to use NodePorts.
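A minimal NodePort service for the Bolt port could look like the following sketch (the selector label and the nodePort value are assumptions and depend on how the app was created):
apiVersion: v1
kind: Service
metadata:
  name: neo4j-bolt
spec:
  type: NodePort
  selector:
    app: neo4j
  ports:
  - name: bolt
    port: 7687
    targetPort: 7687
    nodePort: 30687
Apply it with oc apply -f <file>.yaml and then connect to any node's IP on port 30687.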

Attaching a Vagrant-hosted Gremlin-Server as a Data Source in PhpStorm

I am running the Gremlin-Server from this Tinkerpop Docker Image within a Vagrant box. I am trying to link this server as a data source so that I can utilize the "Graph Database Console" plugin in PhpStorm. I am attempting to do this through the driver wizard workflow.
However, in the class dropdown it won't give me any configuration options other than java.sql.Driver. It does give me the option of connecting custom driver files, but I am not sure which file I would need to attach from the Gremlin-Server docker image.
What steps would it take to connect a Gremlin-Server as a data source in PhpStorm?
As it turns out, the TinkerPop3 stack does not offer a JDBC driver, as it is not necessarily a database in and of itself. There is a SQL port of Gremlin which is purportedly working on a JDBC driver for Gremlin-Server, but there is currently no option to reference a locally hosted Gremlin-Server.
However, in this instance there is a plugin for PhpStorm called Graph Database Support which lets you configure a local graph database by pointing it at your environment and port. In my case that was the Vagrant IP address and a forwarded Docker port, which meets that need.
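Concretely, that came down to two pieces of port plumbing, roughly like this (8182 is the default Gremlin Server port, and the IP is just an example):
# inside the Vagrant box: publish the Gremlin Server port from the container
docker run -d -p 8182:8182 tinkerpop/gremlin-server
# in the Vagrantfile: give the box an address that is reachable from the host
config.vm.network "private_network", ip: "192.168.33.10"
The plugin can then point at 192.168.33.10:8182.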

Running an Ant script to prepare a Database in Bluemix

I have an Ant script that I use to populate/prepare a database. All I need is to set the host, port and credentials for the database. It works fine for MySQL and DB2; the DB just needs to be reachable from where the script is executed.
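For illustration, the relevant kind of target looks roughly like this (driver, URL and credentials are placeholders and vary per database):
<target name="prepare-db">
  <sql driver="com.ibm.db2.jcc.DB2Driver"
       url="jdbc:db2://HOST:PORT/DBNAME"
       userid="${db.user}"
       password="${db.password}"
       classpath="lib/db2jcc4.jar"
       src="prepare.sql"/>
</target>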
The DB service in Bluemix gives me a DB with an IP (75.x.x.x) that is only reachable from the internal network of Bluemix; it is not accessible externally.
My understanding is that my ant script needs to be executed from inside the Bluemix network/servers.
How can I do that?
What would be the alternatives?
I'm considering creating a NodeJS script to trigger that Ant build internally, but I'm not sure if it will work properly.
dashDB has always had the ability for local clients (outside of Bluemix) to connect to the cloud database, and SQL Database later added the feature as well. So you should be able to populate a database as long as you have the correct client driver installed on your local machine.
Can you provide more details on how you tested that the IP is not reachable? Is there a firewall in place between your local machine and Bluemix? Note that ping is not a good test, because that port is blocked for security reasons. You may try the JDBC port indicated on the connection page of the console.
See link for instructions on how to make a connection:
https://www.ng.bluemix.net/docs/#services/SQLDB/index.html#connecting-to-sqldb
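A quick way to check whether the JDBC port is reachable from your machine (50000 is the typical DB2/SQL Database port, but use whatever the connection page shows):
nc -zv <hostname-from-connection-page> 50000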
You might be able to use a simple custom buildpack. You can start with a sample like this one:
https://github.com/dmikusa-pivotal/cf-test-buildpack
fork it and modify the bin/compile script to run your Ant task instead. Then put your Ant script (and probably the ant executable, as I expect it is not installed in the Bluemix environment) in a directory and run
cf push <appname> -b <your forked git url>
to push it to Bluemix and run it. If you're just using it once, you can probably get away with hard-coding the address and credentials; otherwise you can bind to the same service instance and get the info from VCAP_SERVICES.
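For example (app and service instance names are placeholders):
cf bind-service <appname> <sqldb-service-instance>
cf env <appname>    # prints VCAP_SERVICES, including the DB host, port and credentials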

neo4j backup error when backing up from ha cluster

I'm trying to setup backup for a Neo4j cluster with 3 instances. Neo4j is embedded.
If I run:
./neo4j-backup -from ha://10.106.4.80:5001,10.106.4.203:5001,10.106.14.164:5001 -to /tmp/neobak2/
from a host outside the 10.106.4.0 network, I get this error:
Could not find backup server in cluster neo4j.ha at 10.106.4.80:5001,10.106.4.203:5001,10.106.14.164:5001, operation timed out.
If I run it from a cluster member it works just fine. Also, if I run the backup script with single:// instead of ha:// it works fine from anywhere.
Below the basic cluster config I'm using:
ha.server_id: 1
ha.initial_hosts: 10.106.4.80:5001,10.106.4.203:5001,10.106.14.164:5001
ha.tx_push_factor: 2
I already checked for firewall issues; there aren't any. The Neo4j version used is 1.9.5.
The webadmin interface shows that the cluster has online backup enabled and listening on the default port.
Any help will be appreciated.
According to RFC 5735, IP addresses in 10.0.0.0/8 are private, so I assume they are not routable from an external host.
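You can verify this quickly from the external host (5001 is the cluster port from your config; 6362 is, to my knowledge, the default online backup port):
nc -zv 10.106.4.80 5001
nc -zv 10.106.4.80 6362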
