First of all, excuse me if I have some concepts wrong; this is a bit new to me. I have to retrieve a number of objects from a webdis server. The way it is being done at the moment is:
Get all the objects ids (serverUrl/ZRANGE/objects_index/-X/-1)
For each object, get attributes (serverUrl/GET/attributeY_objectIdX)
So if I have X objects with Y attributes, I have to perform X * Y + 1 REST calls to get all the data, which seems highly inefficient.
From what I understand, MULTI is the command to group several commands into one request, but it is not supported by the webdis REST API (see "Ideas, TODO" on the webdis page).
Is there a simpler solution that I am missing?
Should I reorganise the way the data is stored?
Can I use websockets to send MULTI/EXEC commands through JSON, like this:
jsonSocket.send(JSON.stringify(["MULTI", "EXEC", "GET", "etc..."]));
First, instead of having one key per attribute, you should consider using hash objects, so you get one key per object, associated with several properties. The benefit is that you can use the HGETALL command to retrieve all the properties of a given object at once. Instead of X * Y + 1 calls, you only need X + 1.
Instead of:
SET user:1:name Didier
SET user:1:age 41
SET user:1:country FR
you could have:
HMSET user:1 name Didier age 41 country FR
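As a sketch of what this buys you, here is how the two layouts translate into webdis-style JSON command frames, i.e. arrays of command name plus arguments (the helper functions and the `user:1` key are illustrations, not part of any API):

```javascript
// One key per attribute: one SET frame per attribute, so Y frames per object.
function setFrames(key, attrs) {
  return Object.entries(attrs).map(([field, value]) =>
    ["SET", key + ":" + field, String(value)]);
}

// Hash layout: a single HMSET frame covers every attribute of the object.
function hmsetFrame(key, attrs) {
  return ["HMSET", key].concat(
    Object.entries(attrs).flatMap(([field, value]) => [field, String(value)]));
}

const user = { name: "Didier", age: 41, country: "FR" };
console.log(setFrames("user:1", user).length); // 3 frames, one per attribute
console.log(hmsetFrame("user:1", user));
// one frame: ["HMSET", "user:1", "name", "Didier", "age", "41", "country", "FR"]
```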
Then, webdis supports HTTP 1.1 and websocket pipelining, and the Redis server supports pipelining in its own protocol. So it should be possible to send several commands to webdis and wait for the results (which will be returned in the same order) while paying for only a single roundtrip.
For instance, the websocket example provided on the webdis page actually performs a single roundtrip to execute two commands:
var jsonSocket = new WebSocket("ws://127.0.0.1:7379/.json");
jsonSocket.onopen = function() {
    console.log("JSON socket connected!");
    jsonSocket.send(JSON.stringify(["SET", "hello", "world"]));
    jsonSocket.send(JSON.stringify(["GET", "hello"]));
};
jsonSocket.onmessage = function(messageEvent) {
    console.log("JSON received:", messageEvent.data);
};
You could do something similar, aggregating several HGETALL commands to retrieve the data in batches of n objects.
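A minimal sketch of such a batch, assuming the same `.json` websocket endpoint as in the example above and hypothetical `user:<id>` hash keys:

```javascript
// Build one HGETALL frame per object id; sending all frames on the same
// websocket costs a single network roundtrip, and the replies arrive in
// the same order as the frames were sent.
function hgetallFrames(ids) {
  return ids.map(id => ["HGETALL", "user:" + id]);
}

// Usage over the websocket (assumed endpoint, as in the example above):
// var jsonSocket = new WebSocket("ws://127.0.0.1:7379/.json");
// jsonSocket.onopen = function() {
//   hgetallFrames([1, 2, 3]).forEach(function(frame) {
//     jsonSocket.send(JSON.stringify(frame));
//   });
// };
```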
Please note that with Redis itself (i.e. without webdis), I would probably recommend the same strategy (pipelining HGETALL commands).
I'm attempting to "export" all queries in our Relay codebase from relay-compiler by using the following relay.config.json:
{
  "persistConfig": {
    "file": "queryMap.json"
  }
}
The relay-compiler only performs this step in combination with rewriting all queries in __generated__/ used by the app from "text" to "id", expecting the app to send the query identifier as a doc_id parameter with requests rather than the full query as a query parameter (see the Relay docs).
I only want to export the query map, but still continue using the query "text" in the app. That's both for developer ergonomics (easier to reason about queries you can see in the network panel), but most importantly because our server (Hasura) doesn't support persisted queries. The goal is to import the query map into Hasura as an allow-list for security purposes instead.
I'm not too fluent in Rust, but looking through the source code, it sounds like this would be a new feature request for the relay-compiler?
I am new to the Data Movement SDK. I want to know how we can use the Data Movement SDK to remove a collection from documents that match a specific condition, in real time, in MarkLogic.
Yes, the DMSDK can reprocess documents in the database, including modifying the collections on the documents.
The most efficient way to change document collections on the server might be to take an approach similar to the out-of-the-box ApplyTransformListener (as summarized by
https://docs.marklogic.com/guide/java/data-movement#id_51555), but to execute a custom module instead of a transform.
Summarizing the main points:
Write an SJS (Server-Side JavaScript) module that declares a variable (using the JavaScript var statement) to receive the document URIs sent by the client and modifies the collections on those documents using a function such as
https://docs.marklogic.com/xdmp.documentSetCollections
Install the SJS module in the modules database as described here
https://docs.marklogic.com/guide/java/resourceservices#id_13008
Create a QueryBatcher to get the document URIs either from a query on the database or from a client iterator as described here:
https://docs.marklogic.com/guide/java/data-movement#id_46947
Supply a lambda function to the QueryBatcher.onUrisReady() method - see
https://docs.marklogic.com/javadoc/client/com/marklogic/client/datamovement/QueryBatcher.html#onUrisReady-com.marklogic.client.datamovement.QueryBatchListener-
In the lambda function, construct and execute a ServerEvaluationCall to the SJS module, assigning the variable to the URIs passed to the lambda function - see:
https://docs.marklogic.com/guide/java/resourceservices#id_84134
Be sure to register failure listeners using the QueryBatcher.onQueryFailure() and ApplyTransformListener.onFailure() methods to log errors or otherwise respond to the unexpected.
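For the first step, a minimal SJS module sketch (the file name, the comma-separated `uris` payload format, and the "processed" collection name are all placeholders; this only runs inside MarkLogic, so take it as an illustration rather than a drop-in module):

```javascript
// set-collections.sjs (hypothetical name), installed in the modules database.
'use strict';
declareUpdate();

// External variable set by the Java client's ServerEvaluationCall;
// here assumed to carry a comma-separated list of document URIs.
var uris;

for (var uri of uris.split(',')) {
  // Replace the document's collections ("processed" is a placeholder).
  xdmp.documentSetCollections(uri, ['processed']);
}
```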
Hoping that helps,
Sir,
I am trying to create a stateful proxy in OpenSIPS 2.4.
I just want a variable to hold received-message information and process it.
So I checked "core variables" in the OpenSIPS manual. It says script variables are per-process. So I should not use a script variable to hold a header value, like $var(Ruri) = $ru? Will it be overwritten by another call?
$var(userName) = $rU;
$var(removePlus) = '+';
# Search for the string starting at index 0
if ($(var(userName){s.index, $var(removePlus)}) == 0) {
    $rU = $(var(userName){s.substr,1,0});
}
$var variables are process-local, meaning you can't share them with other SIP workers even if you wanted to! In fact, they are so optimized that their starting value will often be whatever the same process left behind during a previous SIP message's processing (tip: you can prove this by running OpenSIPS with children = 1 and making two calls).
On the other hand, variables such as $avp are shared between processes, but not in a "dangerous" way where you have to worry about two INVITE retransmissions processed in parallel, each overwriting the other one's $avp, etc. No! That is taken care of under the hood. The "sharing" only means that, for example, during a 200 OK reply processed by a different process than the one that relayed the initial INVITE, you will still be able to read and write the same $avp that you set during request processing.
Finally, your code seems correct, but it can be greatly simplified:
if ($rU =~ "^\+")
    strip(1);
I use the Jedis (2.9.0) API in my application. I found that the API does not support the TIME command of Redis. How can I get the system time from the Redis server? Or should I use a Lua script to do it? Thanks in advance.
At the moment, Jedis doesn't offer a way to send raw commands to Redis, and the TIME command is currently not part of it. If you really need this, you can fork the project, implement it, and afterwards send a pull request.
Jedis's goal is to be type safe and simple. Adding new commands to it is relatively easy.
Even though it is not supported by Jedis yet, you can easily implement it with a Lua script,
like this:
String script = "local ntime = redis.call('TIME')\n" +
                "return ntime";
// TIME returns its two values as bulk strings, so the elements of the
// reply are Strings, not Longs.
List<String> eval = (List<String>) jedisCluster.eval(script, "1");
System.out.println(eval);
The returned list "eval" is just what the TIME command returned, as the Redis website describes:
Return value
Array reply, specifically a multi-bulk reply containing two elements:
unix time in seconds
microseconds
I use JedisCluster, and all of its eval methods require a "key" parameter, so I just pass an arbitrary key "1", as the key is in fact unused here. You can choose a suitable client and method, but the code will be similar.
I have a question about Pylons's request.params, which returns a MultiDict object.
Does request.params preserve the ordering of the GET parameters in a reliable way?
For example, if I were to visit http://localhost:5000/hello/index?a=1&a=2 and call request.params, could I guarantee that the MultiDict object returned would be in the following order?
>>> request.params
MultiDict([('a', '1'), ('a', '2')])
I'm guessing not, because Python seems to have a separate OrderedMultiDict object used for, well, ordered MultiDicts.
If not, is there any other way I can obtain the GET parameters and preserve their ordering in Pylons?
As I remember, even if you can get Pylons to preserve the ordering, you're not supposed to rely on that kind of behavior, because not all user agents (browsers, bots, etc.) preserve ordering either, and that's outside your control.
Even if ordering were part of the HTTP spec, it wouldn't be reliably followed... and I doubt it is.
For example, suppose the user agent is a Python application which handles query parameters using dicts.
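To illustrate that last point with a sketch in JavaScript (the same failure mode applies to any client that stores parameters in a dict-like structure):

```javascript
// A client that keeps query parameters in a plain object cannot even
// represent duplicate keys: the second assignment to "a" silently
// overwrites the first, so the server never sees a=1 at all.
const params = { a: "1" };
params.a = "2";
const query = new URLSearchParams(params).toString();
console.log(query); // "a=2"
```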