How do you display a graph's replication factor in the gremlin-console?

I know that for DSE Graph, in the Gremlin console you can create a graph with replication as follows:
system.graph('graph_name').replication("{'class' : 'NetworkTopologyStrategy', 'dc1' : 3}")
But how do you find out about an existing graph's replication?

As far as I know, it's not currently possible via the existing interfaces inside the Gremlin console. Before 5.1.3 there were separate options that could be fetched via schema.config().describe(), but they were replaced with replication & systemReplication. It looks like (though I'm not 100% sure) the strings provided via these options are simply passed to the corresponding CREATE KEYSPACE commands, so if you have access to cqlsh you can get the replication factor from DESCRIBE KEYSPACE graph_name. Another possibility is to use Java code to fetch the cluster Metadata, and then extract the replication settings via the getReplication call.
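For the Java route, a minimal sketch with the DataStax Java driver 3.x might look like the following (the contact point is an assumption; getReplication returns the keyspace's replication options as a map):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.KeyspaceMetadata;

public class ShowReplication {
    public static void main(String[] args) {
        // Contact point is an assumption -- point it at one of your DSE nodes.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            KeyspaceMetadata ks = cluster.getMetadata().getKeyspace("graph_name");
            // Prints the replication options map,
            // e.g. {class=org.apache.cassandra.locator.NetworkTopologyStrategy, dc1=3}
            System.out.println(ks.getReplication());
        } finally {
            cluster.close();
        }
    }
}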

Related

Not able to create a dynamic Grafana dashboard on InfluxDB dot-separated measurements

The situation is that I'm using Telegraf to send data to InfluxDB and Grafana (5.1.3) to visualize it. InfluxDB stores the data in the format below:
api.service-v1.request.status.total
api.service-v1.response.size
api.service-v1.upstream_latency
api.service-v1.user.consumer-001.request.count
api.service-v1.user.consumer-001.request.status.200
api.service-v1.user.consumer-001.request.status.429
api.service-v1.user.consumer-001.request.status.499
api.service-v1.user.consumer-001.request.status.total
And I'd like to create a dynamic dashboard based on service, consumer, status, and more metrics. Can you please help me with this?
We have found the solution.
We can create a $service variable with the query (show measurements;) and the regex (/.*api.([^.]*).*/) to extract the service name.
A second variable, $consumer, uses the same query (show measurements;) with the regex option (/.*api.$service.user.([^.]*).*/). These variables can then be used in panel queries (via toggle edit mode in Grafana) to visualize the graphs.
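Spelled out, the two template variables look like this in Grafana's templating editor (exactly the queries and regexes described above):

Variable: $service
  Query: show measurements;
  Regex: /.*api.([^.]*).*/

Variable: $consumer
  Query: show measurements;
  Regex: /.*api.$service.user.([^.]*).*/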
Thanks

Cannot find data in Prometheus with InfluxDB remote write/read API

InfluxDB announced a Prometheus remote write/read API in version 1.4.
https://docs.influxdata.com/influxdb/v1.4/supported_protocols/prometheus/ https://www.influxdata.com/blog/influxdb-now-supports-prometheus-remote-read-write-natively/
I have deployed a new InfluxDB, created a user called "paul" with password 'foo', created a database called "prometheus", and filled it with sample data.
Then I modified the config YAML of Prometheus (I found that the '*' in the InfluxDB doc example should be replaced by '-').
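The relevant part of my prometheus.yml looked roughly like this (a sketch; it assumes InfluxDB is listening on localhost:8086 and uses the endpoint from the InfluxDB 1.4 docs linked above):

remote_write:
  - url: "http://localhost:8086/api/v1/prom/write?db=prometheus&u=paul&p=foo"
remote_read:
  - url: "http://localhost:8086/api/v1/prom/read?db=prometheus&u=paul&p=foo"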
I believe Prometheus and InfluxDB are communicating.
However, I cannot find the sample measurement I inserted in InfluxDB.
I am sure I must be missing something simple. Did I make any silly mistakes? Thanks.
We found that the metrics were all put into a single measurement called '_' within the InfluxDB database that we chose (called "metrics", in our case), with the field being 'f64' (float64, I assume). The Prometheus metric name was attached as a label: '__name__'. So, in my experience, the InfluxDB query for your measurement above might be something like:
select "f64" from "prometheus"."_" where "__name__" = "prometheus_target_interval_length_seconds_count"

How does Erlang access huge shared data structures like the B-tree in CouchDB?

In CouchDB there's a huge B-tree data structure and multiple processes (one for each request).
Erlang processes can't share state, so it seems there should be a dedicated process responsible for accessing the B-tree and communicating with the other processes via messages. But that would be inefficient, because only one process could access the data.
So how are such cases handled in Erlang, and how is it handled in this specific case with CouchDB?
Good question. If you want an authoritative answer, the best place to ask about CouchDB internals is the CouchDB mailing list; they are very quick, and one of the core devs can probably give you a better answer. I will try to answer as best I can; just keep in mind that I may be wrong :)
The first clue is provided by the CouchDB config file. Start CouchDB in interactive shell mode:
couchdb -i
point your browser to
http://localhost:5984/_utils/config.html
You will find that under the daemons section there is a key-value pair:
index_server {couch_index_server, start_link, []}
Aha! So the index is served by a server. What kind of server? We will have to dive into the code.
It is a gen_server. All the operations on the CouchDB view are handled by this gen_server.
A gen_server is Erlang's generic implementation of the client-server model. It is concurrent by default, so your observation is correct: all the requests to the view are distinct processes, managed with the help of the gen_server.
index_server defines three ets tables. You can verify this by typing ets:i() in the Erlang shell we started earlier, and you should see:
couchdb_indexes_by_db couchdb_indexes_by_db bag 1 320 couch_index_server
couchdb_indexes_by_pid couchdb_indexes_by_pid set 1 316 couch_index_server
couchdb_indexes_by_sig couchdb_indexes_by_sig set 1 316 couch_index_server
When the index_server gets a get_index call, it adds the calling process to a list of Waiters in the couchdb_indexes_by_sig ets table; or, if the index is already open, it simply sends a reply with the location of the index.
When the index_server gets an async_open call, it iterates over the list of Waiters and sends each of them a reply with the location of the index.
Similarly there are calls to reset_indexes and other operations on indexes, which again reply with the location of the index.
When the index is created for the first time, CouchDB calls async_open to serve the index to all the waiting processes. Afterwards, every process is given access to the index.
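To make that waiter mechanism concrete, here is a minimal, hypothetical gen_server sketch of the pattern. It is not the actual couch_index_server code, just an illustration of get_index queuing callers and async_open releasing them:

%% Illustrative sketch only -- not the real couch_index_server.
-module(index_server_sketch).
-behaviour(gen_server).
-export([start_link/0, get_index/1, async_open/2]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

get_index(Sig) ->
    %% Blocks until the index identified by Sig is available.
    gen_server:call(?MODULE, {get_index, Sig}, infinity).

async_open(Sig, IndexPid) ->
    %% Called once the index process has been opened.
    gen_server:cast(?MODULE, {async_open, Sig, IndexPid}).

init([]) ->
    %% Sig -> index pid, or {opening, [From]} while it is being built.
    ets:new(by_sig, [set, named_table]),
    {ok, no_state}.

handle_call({get_index, Sig}, From, State) ->
    case ets:lookup(by_sig, Sig) of
        [{Sig, {opening, Waiters}}] ->
            %% Index is still opening: queue the caller; reply comes later.
            ets:insert(by_sig, {Sig, {opening, [From | Waiters]}}),
            {noreply, State};
        [{Sig, IndexPid}] ->
            %% Index already open: reply immediately with its location.
            {reply, {ok, IndexPid}, State};
        [] ->
            %% First request: record a waiter list; the real server would
            %% also spawn a process to open the index here.
            ets:insert(by_sig, {Sig, {opening, [From]}}),
            {noreply, State}
    end.

handle_cast({async_open, Sig, IndexPid}, State) ->
    [{Sig, {opening, Waiters}}] = ets:lookup(by_sig, Sig),
    ets:insert(by_sig, {Sig, IndexPid}),
    %% Reply to every process that was blocked in get_index/1.
    [gen_server:reply(From, {ok, IndexPid}) || From <- Waiters],
    {noreply, State}.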
An important point to note here is that the index server does not do anything special except for making the index available to other processes (for example to couch_mrview_util.erl). In that respect it acts as a gateway. Index write operations are handled by couch_index.erl, couch_index_updater.erl, and couch_index_compactor.erl, which (unsurprisingly) are all gen_servers!
When a view is being created for the first time, only one process can access it: the query_server process (couchjs by default). After the view has been built, it can be read and updated concurrently. The actual querying of views is handled by couch_mrview, which is exposed to us as the familiar HTTP API.

How can I specify the jvm agent id when querying the metrics on the New Relic v1 REST API?

I am trying to get JVM metrics from my application, which runs three instances with three separate JVMs. I can see the data I am interested in on the New Relic dashboard, on the Monitoring -> JVMs tab. I can also get the information I want for one of those JVMs by hitting the REST API like so:
% curl -gH "x-api-key:KEY" 'https://api.newrelic.com/api/v1/applications/APPID/data.xml?metrics%5B%5D=GC%2FPS%20Scavenge&field=time_percentage&begin=T1&end=T2'
(I've replaced the values of some fields, but this is the full form of my request.)
I get a response including a long list of elements like this:
<metric name="GC/PS Scavenge" begin="T1" end="T2" app="MYAPP" agent_id="AGENTID">
<field name="time_percentage">0.018822634485032824</field>
</metric>
All of the metric elements include the same agent_id fields, and I never specified which agent to use. How can I either:
get metrics for all agents
specify which agent I am interested in (so I can send multiple requests, one for each JVM)
The agent_id can refer to a particular JVM instance, and while you can't request multiple agents at once, you can request metrics for a single JVM.
You can get the JVM's agent_id in one of two ways:
1) an API call (see the curl example below) to
https://api.newrelic.com/api/v1/accounts/:account_id/applications/:app_id/instances.xml
2) browse to the JVM in the New Relic user interface (use the 'JVM' drop-down at the top right after you select your app), then grab the ID from the URL.
The ID will look something like [account_id]_i2043442
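Using the same curl form as in the question, option 1) would be something like this (KEY, ACCOUNT_ID, and APP_ID are placeholders, as before):

% curl -H "x-api-key:KEY" 'https://api.newrelic.com/api/v1/accounts/ACCOUNT_ID/applications/APP_ID/instances.xml'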
Some data is not available broken down by JVM; most notably, a call to threshold_values.xml won't work if the agent_id isn't an application.
Full documentation of the v1 API: http://newrelic.github.io/newrelic_api/

webdis server-side join

First of all, excuse me if I've got some concept wrong; this is a bit new to me. I have to retrieve a number of objects from a webdis server. The way it is being done at the moment is:
Get all the objects ids (serverUrl/ZRANGE/objects_index/-X/-1)
For each object, get attributes (serverUrl/GET/attributeY_objectIdX)
So if I have X objects with Y attributes, I have to perform X * Y + 1 REST calls to get all the data, which seems highly inefficient.
From what I understand, MULTI is the command to perform such a join, but it is not supported by the webdis REST API (see the Ideas, TODO section on the webdis page).
Is there a simpler solution that I am missing?
Should I reorganise the way the data is stored?
Can I use websockets to send a MULTI/EXEC command through JSON:
jsonSocket.send(JSON.stringify(["MULTI", "EXEC", "GET", "etc..."]));
First, instead of having one key per attribute, you should consider using hash objects, so you get one key per object, associated with several properties. The benefit is that you can use the HGETALL command to retrieve all the properties of a given object at once. Instead of X*Y+1 calls, you have only X+1.
Instead of:
SET user:1:name Didier
SET user:1:age 41
SET user:1:country FR
you could have:
HMSET user:1 name Didier age 41 country FR
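Retrieval is then one call per object. Through the webdis REST interface that would look something like this (the command-keyed JSON reply shape is webdis's usual format, shown here from memory):

serverUrl/HGETALL/user:1
-> {"HGETALL": {"name": "Didier", "age": "41", "country": "FR"}}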
Then, webdis supports HTTP 1.1 and websocket pipelining, and the Redis server supports pipelining in its own protocol. So it should be possible to send several commands to webdis and wait for the results (which will be returned in the same order), while only paying for a single roundtrip.
For instance, the websocket example provided on the webdis page actually performs a single roundtrip to execute two commands:
var jsonSocket = new WebSocket("ws://127.0.0.1:7379/.json");
jsonSocket.onopen = function() {
    console.log("JSON socket connected!");
    jsonSocket.send(JSON.stringify(["SET", "hello", "world"]));
    jsonSocket.send(JSON.stringify(["GET", "hello"]));
};
jsonSocket.onmessage = function(messageEvent) {
    console.log("JSON received:", messageEvent.data);
};
You could do something similar, and aggregate several HGETALL commands to retrieve the data in batches of n objects.
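A sketch of that batching (the ids array and the user:<id> key scheme stand in for the result of the ZRANGE call and the key layout from the question):

var jsonSocket = new WebSocket("ws://127.0.0.1:7379/.json");
var ids = [1, 2, 3]; // stands in for the object ids returned by ZRANGE
jsonSocket.onopen = function() {
    // Pipeline one HGETALL per object over the same socket;
    // the replies come back in the same order via onmessage.
    ids.forEach(function(id) {
        jsonSocket.send(JSON.stringify(["HGETALL", "user:" + id]));
    });
};
jsonSocket.onmessage = function(messageEvent) {
    console.log("JSON received:", messageEvent.data);
};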
Please note that with Redis itself (i.e. without webdis), I would probably recommend the same strategy (pipelining HGETALL commands).
