I'm attempting to "export" all queries in our Relay codebase from relay-compiler by using the following relay.config.json:
{
  "persistConfig": {
    "file": "queryMap.json"
  }
}
However, relay-compiler only performs this step in combination with rewriting all queries in __generated__/ used by the app from "text" to "id", expecting the app to send the query identifier as a doc_id parameter with each request rather than the full query as a query parameter (see the Relay docs).
I only want to export the query map while continuing to use the query "text" in the app. That's partly for developer ergonomics (it's easier to reason about queries you can see in the network panel), but mainly because our server (Hasura) doesn't support persisted queries. The goal is instead to import the query map into Hasura as an allow-list, for security purposes.
I'm not too fluent in Rust, but looking through the source code, it seems this would have to be a new feature request for relay-compiler?
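In the meantime, the query map relay-compiler writes is a JSON object mapping each persisted id to its query text, so it can be converted offline. A minimal TypeScript sketch of that conversion — the output shape and the toAllowList helper are my own illustration, not Hasura's actual metadata format:

```typescript
// Sketch: convert relay-compiler's queryMap.json shape ({ id: queryText })
// into a list of named queries that could seed an allow-list. The output
// shape here is illustrative only, not Hasura's actual metadata format.
type QueryMap = Record<string, string>;

interface AllowListEntry {
  name: string; // operation name parsed from the query text, or the id
  query: string;
}

function toAllowList(queryMap: QueryMap): AllowListEntry[] {
  return Object.entries(queryMap).map(([id, query]) => {
    // Pull the operation name, e.g. "AppQuery" from "query AppQuery { ... }".
    const match = query.match(/\b(?:query|mutation|subscription)\s+(\w+)/);
    return { name: match ? match[1] : id, query };
  });
}

// Hypothetical map entry for illustration:
const sample: QueryMap = {
  d41d8cd98f00b204: "query AppQuery { viewer { id } }",
};
console.log(toAllowList(sample));
// → [ { name: "AppQuery", query: "query AppQuery { viewer { id } }" } ]
```

In a real script you would read queryMap.json from disk instead of an inline object.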
I have an application that writes to my neo4j database. Immediately after this write, another application performs a query and expects the previously written item as the result.
This doesn't happen: the query returns no result.
Introducing a 100ms artificial delay between the write and the query yields the expected result, but that's not feasible.
I'm writing in TypeScript using neo4j-driver. I'm awaiting every promise the API throws at me. I even promisified the session.close function and await that too (not sure whether that does anything).
Is there a cache on neo4j's side that could be at fault? Can I somehow flush it?
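For what it's worth, the symptom (an artificial delay "fixes" it) is what you would see if the write only becomes visible to the second application some time after its promise resolves. A minimal plain-TypeScript model of that timing — EventuallyVisibleStore is a stand-in of my own, not anything from neo4j-driver:

```typescript
// Model of the timing issue: a store whose writes become visible
// asynchronously (here after 50 ms), standing in for whatever delay sits
// between the write and its visibility to the second application.
class EventuallyVisibleStore {
  private visible = new Set<string>();

  write(item: string): void {
    // The write "succeeds" immediately, but only becomes readable later.
    setTimeout(() => this.visible.add(item), 50);
  }

  read(item: string): boolean {
    return this.visible.has(item);
  }
}

const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function demo() {
  const store = new EventuallyVisibleStore();
  store.write("node-1");

  console.log(store.read("node-1")); // false: the read races the write
  await sleep(100);                  // the artificial delay from the question
  console.log(store.read("node-1")); // true: the write is now visible
}

demo();
```

Awaiting the write's promise does not help in this model, because the promise resolving and the data being visible are two different events — which matches the behavior described above.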
I want to execute multiple Cypher queries at the same time in the Browser. How can I do that? I am using Neo4j version 2.2.5. My sample queries are:
CREATE(n:Taxonomy{UUID:10001, name:"BOSH", classType:"Interface Type", version:"2.2",isDeleted:"0"});
CREATE(n:Taxonomy{UUID:10002, name:"Iaas", classType:"AWS", version:"0.0",isDeleted:"0"});
CREATE(n:Taxonomy{UUID:10003, name:"order lifecycle", classType:"draft order", version:"0.0",isDeleted:"0"});
CREATE(n:IaaSTemplate{UUID:20001, IaasName:"Iaas Template 1",isDeleted:"0"});
CREATE(n:TemplateFunction{UUID:30001, functionName:"bosh target",isDeleted:"0"});
CREATE(n:TemplateFunction{UUID:30002, functionName:"bosh login",isDeleted:"0"});
Batching multiple queries into one is not (yet) supported by the Browser.
However, the specific queries in your question can be easily combined into a single query by:
Removing the n identifier from all the nodes.
Within a single query, an identifier is associated with a specific instance of a node or relationship (ignoring the effect of WITH clauses). But, since you don't actually use the identifier, getting rid of it would allow all the CREATE clauses to co-exist in the same query.
Removing all semicolons (except the last one).
So, this should work:
CREATE(:Taxonomy{UUID:10001, name:"BOSH", classType:"Interface Type", version:"2.2",isDeleted:"0"})
CREATE(:Taxonomy{UUID:10002, name:"Iaas", classType:"AWS", version:"0.0",isDeleted:"0"})
CREATE(:Taxonomy{UUID:10003, name:"order lifecycle", classType:"draft order", version:"0.0",isDeleted:"0"})
CREATE(:IaaSTemplate{UUID:20001, IaasName:"Iaas Template 1",isDeleted:"0"})
CREATE(:TemplateFunction{UUID:30001, functionName:"bosh target",isDeleted:"0"})
CREATE(:TemplateFunction{UUID:30002, functionName:"bosh login",isDeleted:"0"});
Unfortunately, Neo4j Browser doesn't support that yet; it's on the long list of things to do.
You can use bin/neo4j-shell, which connects to a running server.
Or a project like cycli, which is a colorful, auto-completing shell for Neo4j that talks to the HTTP interface and supports authentication, etc.
In CouchDB there's a huge B-tree data structure and multiple processes (one for each request).
Erlang processes can't share state, so it seems there should be a dedicated process responsible for accessing the B-tree and communicating with the other processes via messages. But that would be inefficient, because only one process could access the data.
So how are such cases handled in Erlang, and how is this handled in this specific case with CouchDB?
Good question, this. If you want an authoritative answer, the best place to ask about CouchDB internals is the couchdb mailing list; they are very quick, and one of the core devs can probably give you a better answer. I will try to answer this as best I can — just keep in mind that I may be wrong :)
The first clue is provided by the couchdb config file. Start couchdb in interactive shell mode:
couchdb -i
Point your browser to
http://localhost:5984/_utils/config.html
You will find that under the daemon section there is a key-value pair:
index_server {couch_index_server, start_link, []}
Aha! So the index is served by a server. What kind of server? We will have to dive into the code.
It is a gen_server. All the operations on a couchdb view are handled by this gen_server.
A gen_server is Erlang's generic implementation of the client-server model. It is concurrent by default, so your observation is correct: all the requests to the view are distinct processes, managed with the help of the gen_server.
index_server defines three ets tables. You can verify this by typing ets:i() in the Erlang shell we started earlier, and you should see:
couchdb_indexes_by_db couchdb_indexes_by_db bag 1 320 couch_index_server
couchdb_indexes_by_pid couchdb_indexes_by_pid set 1 316 couch_index_server
couchdb_indexes_by_sig couchdb_indexes_by_sig set 1 316 couch_index_server
When the index_server gets a call to get_index, it adds the caller to a list of Waiters in the couchdb_indexes_by_sig ets table; if the index is already open, it simply replies with the location of the index.
When the index_server gets a call to async_open, it iterates over the list of Waiters and sends each of them a reply with the location of the index.
Similarly, there are calls for reset_indexes and other operations on indexes, which again reply with the location of the index.
When the index is created for the first time, couchdb calls async_open to serve the index to all the waiting processes. Afterwards, every process is given access to the index.
An important point to note here is that the index server does not do anything special except make the index available to other processes (for example to couch_mrview_util.erl). In that respect it acts as a gateway. Index write operations are handled by couch_index.erl, couch_index_updater.erl and couch_index_compactor.erl, which (unsurprisingly) are all gen_servers!
When a view is being created for the first time, only one process can access it: the query_server process (couchjs by default). After the view has been built, it can be read and updated concurrently. The actual querying of views is handled by couch_mrview, which is exposed to us through the familiar HTTP API.
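The waiter bookkeeping described above can be sketched outside Erlang too. Here is a rough TypeScript analogue of the get_index/async_open interplay — the class and method names are my own, not CouchDB's:

```typescript
// Sketch of the index_server's waiter pattern: the first request for an
// index signature kicks off the open and queues the caller; later requests
// for the same signature join the waiter list; once the index is opened,
// every waiter receives the same index back.
type Index = { sig: string; pid: number };

class IndexServer {
  private bySig = new Map<string, Index>(); // opened indexes, keyed by signature
  private waiters = new Map<string, ((idx: Index) => void)[]>(); // pending callers
  private nextPid = 1;

  getIndex(sig: string): Promise<Index> {
    const open = this.bySig.get(sig);
    if (open) return Promise.resolve(open); // already open: reply immediately

    return new Promise((resolve) => {
      const list = this.waiters.get(sig);
      if (list) {
        list.push(resolve); // join the existing waiter list
      } else {
        this.waiters.set(sig, [resolve]);
        this.openAsync(sig); // first caller triggers the open
      }
    });
  }

  // Stands in for async_open: finish building the index, then reply to
  // every waiter with its location.
  private openAsync(sig: string): void {
    setTimeout(() => {
      const idx = { sig, pid: this.nextPid++ };
      this.bySig.set(sig, idx);
      for (const resolve of this.waiters.get(sig) ?? []) resolve(idx);
      this.waiters.delete(sig);
    }, 10);
  }
}
```

Two concurrent getIndex calls for the same signature trigger a single open and get the same index back; once it is recorded, later calls are answered immediately — which mirrors the gateway role described above.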
I am trying to get JVM metrics from my application, which runs three instances, with three separate JVMs. I can see the different data that I am interested in in the New Relic dashboard, on the Monitoring -> JVMs tab. I can also get the information I want for one of those JVMs, by hitting the REST API like so:
% curl -gH "x-api-key:KEY" 'https://api.newrelic.com/api/v1/applications/APPID/data.xml?metrics%5B%5D=GC%2FPS%20Scavenge&field=time_percentage&begin=T1&end=T2'
(I've replaced the values of some fields, but this is the full form of my request.)
I get a response including a long list of elements like this:
<metric name="GC/PS Scavenge" begin="T1" end="T2" app="MYAPP" agent_id="AGENTID">
<field name="time_percentage">0.018822634485032824</field>
</metric>
All of the metric elements include the same agent_id field, and I never specified which agent to use. How can I either:
get metrics for all agents
specify which agent I am interested in (so I can send multiple requests, one for each JVM)
agent_id can refer to a particular JVM instance, and while you can't request multiple agents at once, you can request metrics for a single JVM.
You can get the JVM's agent_id in one of two ways:
1) an API call to
https://api.newrelic.com/api/v1/accounts/:account_id/applications/:app_id/instances.xml
2) browse to the JVM in the New Relic user interface (use the 'JVM' drop-down at the top right after you select your app), then grab the ID from the URL.
The ID will look something like [account_id]_i2043442
Some data is not available broken down by JVM; most notably, a call to threshold_values.xml won't work if the agent_id isn't an application.
Full documentation of the v1 API: http://newrelic.github.io/newrelic_api/