I have a huge amount of data in a CSV file that I need to import into InfluxDB. I have two Linux AWS machines: InfluxDB is installed on one and Graphite is installed on the other. The CSV data will be on the machine where Graphite is installed. Can someone please help me import the data from the CSV file into InfluxDB? I have gone through various articles, but they only cover the case where InfluxDB and Graphite are on the same machine; in my case they are on different machines.
I used the HTTP API in combination with the line protocol; it should also work for you, maybe with some reformatting of the CSV data:
curl -i -XPOST "http://localhost:8086/write?db=mydb" --data-binary @txtfilewithlineprotocol.csv
https://docs.influxdata.com/influxdb/v1.8/tools/api/
https://docs.influxdata.com/influxdb/v1.8/write_protocols/line_protocol_tutorial/
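Since InfluxDB and Graphite run on different machines, the only change needed is to point the write endpoint at the InfluxDB machine instead of localhost. Here is a minimal sketch of the conversion plus upload, assuming a hypothetical CSV layout of timestamp,host,value (epoch seconds) and a measurement name cpu_load; adjust the awk mapping to your actual columns:
# Hypothetical CSV layout: timestamp,host,value (header row skipped)
# Map each row to line protocol: measurement,tag=value field=value timestamp
awk -F, 'NR>1 { printf "cpu_load,host=%s value=%s %s\n", $2, $3, $1 }' data.csv > data.txt
# POST the line-protocol file to the remote InfluxDB machine (replace the placeholder IP)
curl -i -XPOST "http://<influxdb-machine-ip>:8086/write?db=mydb&precision=s" --data-binary @data.txt
For a very large file it is worth splitting the line-protocol output into batches of a few thousand lines (for example with split -l 5000) and POSTing each batch separately, since InfluxDB handles moderate batch sizes better than one huge request.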
Related
I want to copy my entire neo4j graph (all nodes and relationships) created so far from my local machine (Windows) to a VM (Windows). Is there any way I can achieve this? Please advise.
You can create a dump of the database and restore that dump in the VM. Alternatively, you can use APOC to export the node and relationship information and import it into your VM. For more information about the APOC procedures to export and import data, take a look at the documentation: https://neo4j.com/labs/apoc/4.3/export/
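For the dump/restore route, a minimal sketch, assuming Neo4j 4.x and the default database name (the paths are placeholders, and the database must be stopped for an offline dump):
On the local machine:
neo4j-admin dump --database=neo4j --to=/backups/neo4j.dump
Copy neo4j.dump to the VM by any means, then on the VM:
neo4j-admin load --from=/backups/neo4j.dump --database=neo4j --force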
The easiest way (a command sketch follows these steps):
Stop the neo4j service on both the local machine and the VM.
Copy the /data/ folder with all its content to the new location.
Start the neo4j service in the new location.
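A rough command sketch of those steps, where <NEO4J_HOME> is a placeholder for your install directory (use whatever copy mechanism works between the two Windows machines):
<NEO4J_HOME>/bin/neo4j stop          (run on both machines)
copy the entire <NEO4J_HOME>/data/ folder to the same path on the VM
<NEO4J_HOME>/bin/neo4j start         (run on the VM)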
I'm new here. I have an issue with one plugin.
I'm using Telegraf to get data from the IPMI sensor, and it's working: I can see the data in a Grafana dashboard via InfluxDB,
so up to this point everything is working correctly.
I added another input plugin to my telegraf.conf: telegraf-speedtest/speedtest.conf at master · Derek-K/telegraf-speedtest · GitHub
When I check with telegraf --test, I can see that the first plugin (ipmi_sensor) is OK and the second plugin (speedtest) is OK as well.
But the speedtest measurements are not stored in InfluxDB. I checked it using:
root@d5c51db15460:/# influx -execute 'show measurements' -database 'telegraph'
name: measurements
name
ipmi_sensor
As you can see, there is only ipmi_sensor :(
Telegraf has already been restarted, and both plugins work with --test.
I'm not sure where the issue is. I appreciate your help, guys.
Thank you
The telegraf --test option does not send data to outputs. I would suggest using the --once option if you want to ensure data is making it to outputs as well.
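For example, a quick check, assuming the default config path and an output database named 'telegraf' (adjust both to match your [[outputs.influxdb]] section):
telegraf --config /etc/telegraf/telegraf.conf --once
influx -execute 'SHOW MEASUREMENTS' -database 'telegraf'
If --once writes the speedtest measurement but the running service does not, a common cause is that the service loads a different config file or config directory (e.g. /etc/telegraf/telegraf.d/) than the one you tested interactively.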
Here is the scenario I want to resolve: I have two environments, a local machine and a virtual machine hosted in Azure.
On the virtual machine I start a Gremlin container which includes the Gremlin client and server and connects to a Cassandra graph database.
This is the information for the running container when I run the docker container ls command:
CONTAINER ID: 029095e26f53
IMAGE: 3f03c6bfb0a2
COMMAND: "/bin/sh -c /gremlin…"
CREATED: 2 weeks ago
STATUS: Up 2 weeks
PORTS: 0.0.0.0:8182->8182/tcp
NAME: gremlin
When I enter the container, I run the following command to start the Gremlin client:
./bin/gremlin.sh
Once inside the Gremlin Console, I run the following command to connect to the TinkerPop server:
:remote connect tinkerpop.server conf/remote.yaml
==>Connected - localhost/127.0.0.1:8182 ---> answer from gremlin console
If I run the following gremlin query:
:> g.V().count()
I get a number different from zero, telling me that there are records in the graph database.
Now on the other side I have the Gephi client on my local machine, which I want to be able to show that graph database, or at least make Gephi show the visual data from a
graph = TinkerFactory.createModern()
running inside the gremlin container.
I want to do this because I need to choose a visualization tool for the Gremlin and Titan ecosystem.
I tried to set up the Gephi client feature to connect to the virtual machine's IP and port 8182, but it shows me the red dot telling me that it is not possible. What am I missing? I am pretty sure there are a few steps missing. Thanks in advance,
Juan Ignacio
If your graph is "remote" and not in-memory in the Gremlin Console, then you have to devise a way to make it available locally that way. This situation is typical for graphs that run in Gremlin Server or are wholly remote, like CosmosDB, DSE Graph or Amazon Neptune.
The typical method to make it available locally is to use the subgraph()-step to pull out just the portion of the graph that you care about and return that to the Gremlin Console. It will be returned as a TinkerGraph for graphs that support subgraph()-step (like Titan, though I assume you would use JanusGraph), so for your test, which is using TinkerFactory and a tiny graph, you could just do this:
gremlin> :remote connect tinkerpop.server conf/remote-objects.yaml
Note the configuration of "remote-objects.yaml" because that configuration will return actual objects - an actual TinkerGraph rather than a string representation of a TinkerGraph.
gremlin> :> TinkerFactory.createModern()
That will create the "modern" graph remotely and return the TinkerGraph to the Gremlin Console. You can access that result:
gremlin> graph = result[0].object
The :> stores the response from the server in a variable named "result" and that contains your TinkerGraph in a List. This is explained in the reference documentation. From there you can use that "graph" object as you would using the standard Gephi instructions.
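For a real (non-toy) graph the same pattern applies, just with subgraph()-step extracting the slice you care about; a sketch, assuming you only want the vertices around the 'knows' edges (adjust the traversal and labels to your own schema):
gremlin> :remote connect tinkerpop.server conf/remote-objects.yaml
gremlin> :> g.E().hasLabel('knows').subgraph('sg').cap('sg')
gremlin> graph = result[0].object
From there, graph is a local TinkerGraph and can be handed to the Gephi instructions just like the TinkerFactory example above.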
I am trying to use the neo4j-import tool to bulk import CSV files into a new Neo4j database. I can do this successfully, but for the dockerized Neo4j to recognize the new database I need to restart the container. Is there any way to do this without having to restart the container? I have tried the Cypher CSV import, but for some reason it doesn't handle the large dataset (~76k relationships).
Is there an Erlang API for BigQuery?
I would like to use BigQuery from Google Compute Engine in a Linux instance.
I would like to run Riak NoSQL there.
As far as I can tell, there is no Erlang client for BigQuery. You can always generate the HTTP REST requests by hand -- it is relatively straightforward for most use cases. Alternatively, you could execute a shell command that runs the bq.py command line client.
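If you go the shell-command route, a minimal sketch (assuming the bq command line client is installed and authenticated on the instance; the dataset and table names are placeholders), which you could wrap in Erlang's os:cmd/1:
bq --format=json query 'SELECT COUNT(*) FROM mydataset.mytable'
The JSON output is then straightforward to parse from Erlang with any JSON library.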