Indirection in dust.js

Is it possible to achieve variable indirection in dust.js - and therefore to be able to use map-like functionality?
Imagine I have the following context to pass to Dust:
{
  "keys": [ "Foo", "Bar", "Baz" ],
  "data": [{
    "date": "20130101",
    "values": {
      "Foo": 1,
      "Bar": 2,
      "Baz": 3
    }
  }, {
    "date": "20130102",
    "values": {
      "Foo": 4,
      "Bar": 5,
      "Baz": 6
    }
  }]
}
And I want to achieve the following output (it would actually be a table, but I've skipped the <tr><td> tags for brevity and replaced them with spaces and newlines):
Date Foo Bar Baz
20130101 1 2 3
20130102 4 5 6
I'm not sure how to loop over the keys property, and use each value x to look up data[i].values[x]. I can get the desired output by hardcoding the keys:
Date{~s}
{#keys}
{.}{~s}
{/keys}
{~n}
{#data}
{date}{~s}
{values.Foo}{~s}
{values.Bar}{~s}
{values.Baz}{~s}
{~n}
{/data}
but the keys will be determined dynamically, so I can't hardcode them into the template. Is there a way to replace the lines that say values.Foo etc., with something like the following:
{#data}
{date}{~s}
{#keys outerMap=values}
{outerMap.{.}}{~s}
{/keys}
{~n}
{/data}
This doesn't work as written; can I capture the output of {.} (the value of the current key) and dynamically use it as (part of) the property name to resolve?

So, the short answer is no, you can't do that in Dust.
Dust is meant to be a logicless language, and this is bordering on too much logic. Generally, the answer to problems like this one is to restructure your JSON so that it works naturally in Dust. This could be done easily in the example you have given, but could be much more difficult in a real-world situation. Part of the power of Dust lies in its limitations.
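For example, one reshaping of the JSON above (the field names rows and cells are illustrative, not from the original answer) precomputes each row's cells in key order:
{
  "keys": [ "Foo", "Bar", "Baz" ],
  "rows": [
    { "date": "20130101", "cells": [ 1, 2, 3 ] },
    { "date": "20130102", "cells": [ 4, 5, 6 ] }
  ]
}
A plain Dust template can then iterate with no indirection at all:
Date{~s}
{#keys}{.}{~s}{/keys}{~n}
{#rows}{date}{~s}{#cells}{.}{~s}{/cells}{~n}{/rows}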
If this answer doesn't work for you, you are welcome to submit a pull request on GitHub: https://github.com/linkedin/dustjs

As smfoote says, this is not supported out-of-the-box in Dust.
However, I've realised that handlers can involve some elements of logic, and so it was relatively straightforward to write a handler to do dereferencing:
// A context handler: looks up params.prop on params.obj
// and writes the resulting value into the output chunk.
deref: function(chunk, context, bodies, params) {
  return chunk.write(params.obj[params.prop]);
}
This handler takes an obj parameter, which is the object to use as an associative array, and a prop parameter naming the key to look up on that object. With this handler function added to the context, I was then able to write the data part of the template as:
{#data}
{date}{~s}
{#keys}
{#deref obj=values prop=./}{~s}
{/keys}
{~n}
{/data}
This produces the correct output, by iterating over every key and passing it in as the property to read from the values object.
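For completeness, here is a minimal sketch of how the pieces fit together (the renderSource call and variable names are mine, so treat the wiring as an assumption rather than part of the answer):
var dust = require('dustjs-linkedin');

// The deref handler lives alongside the data in the context object.
var context = {
  keys: ["Foo", "Bar", "Baz"],
  data: [
    { date: "20130101", values: { Foo: 1, Bar: 2, Baz: 3 } },
    { date: "20130102", values: { Foo: 4, Bar: 5, Baz: 6 } }
  ],
  deref: function(chunk, context, bodies, params) {
    return chunk.write(params.obj[params.prop]);
  }
};

var template = 'Date{~s}{#keys}{.}{~s}{/keys}{~n}' +
  '{#data}{date}{~s}{#keys}{#deref obj=values prop=./}{~s}{/keys}{~n}{/data}';

// renderSource compiles and renders in one step.
dust.renderSource(template, context, function(err, out) {
  console.log(out); // prints the Date/Foo/Bar/Baz table shown above
});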
I appreciate that some might consider this inappropriate given Dust's philosophy. However I don't think it constitutes particularly complex logic; indeed, it's something that could quite conceivably be part of a templating framework. Given that smfoote's suggested alternative of updating my JSON isn't an option (the template does not know what the keys will be in advance, so there is no way to write "static" references here), this seems like a reasonable approach.

Related

What's the `location` key in the OPA Rego ResultSet expression? Can I get the locations in the input JSON that caused the policy violation?

I'm using the Go rego package, and the rego.ResultSet, when marshalled, gives this:
[
  {
    "expressions": [
      {
        "value": {...},
        "text": "data",
        "location": { "row": 1, "col": 1 }
      }
    ]
  }
]
I intend to output the location(s) in the input JSON whose keys were responsible for failures, so that I can use this to build context for the errors.
We previously used JSON Schema for validating JSON, and it returned the keys from the input that we could map to errors: https://www.jsonschemavalidator.net/
I suppose that since Rego supports far more complex decision making, where more than one key can be responsible for the final outcome, that could be why it doesn't point to a location in the input for failure context. Or am I missing something?
To answer the first question:
Every value parsed by OPA retains "location" information identifying where it came from in the source string/file. The location in the ResultSet is the location of the expression in the query that was passed when creating the rego.Rego object.
In your case, the query was "data", i.e., you referred to ALL of the documents in OPA (both base documents, which could have been loaded from outside, as well as virtual documents generated by any rules you loaded into OPA). The location of the expression in this case is not very interesting: row 1, column 1.
To answer your second question:
OPA does not currently have a reliable way of returning the location of JSON values in the input; however, this is something that would be valuable and could be added in the future.

How to filter _source in ReactiveSearch?

I need to exclude certain fields from the _source field in the Elasticsearch response, since those fields are huge and transferring them unnecessarily wastes a lot of time. In general, in Elasticsearch this is done by providing the _source parameter in the query, e.g.:
GET /_search
{
  "_source": { "excludes": [ "content" ] },
  "query": { ... }
}
Searchkit, for example, does this exclusion automatically for highlighted fields (which would be ideal in my case), but also lets the user provide a _source filter irrespective of highlighting. The ReactiveSearch DataSearch component seems to be missing this kind of capability.
I can't figure out how to add _source (or any other search parameter) to the ReactiveSearch DataSearch query. Is that possible?
We currently don't support this behavior in ReactiveSearch, but we should. I have filed an issue for this: https://github.com/appbaseio/reactivesearch/issues/417.
Edit: This is now supported; you can see how to pass it in the documentation of the Result components.
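As an illustration, a minimal sketch of the result-component approach (prop names such as excludeFields are from my reading of the ReactiveSearch docs and may differ across versions; app and url are placeholders):
import React from 'react';
import { ReactiveBase, DataSearch, ReactiveList } from '@appbaseio/reactivesearch';

// excludeFields asks Elasticsearch to drop the heavy "content" field
// from _source for every hit this component renders.
const Results = () => (
  <ReactiveBase app="myindex" url="http://localhost:9200">
    <DataSearch componentId="searchbox" dataField="title" />
    <ReactiveList
      componentId="results"
      dataField="title"
      react={{ and: 'searchbox' }}
      excludeFields={['content']}
      renderItem={(item) => <div key={item._id}>{item.title}</div>}
    />
  </ReactiveBase>
);

export default Results;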

Querying TAFFYDB nested records

I have created a data model using TAFFYDB. Some of the fields have nested records. I am facing difficulties querying and updating the nested records.
For example:
var friends = TAFFY([
  {
    "id": 1,
    "gender": "M",
    "first": "John",
    "last": "Smith",
    "city": "Seattle, WA",
    "comp": [
      { "id": 1, "audience": "cavern" },
      { "id": 2, "audience": "cottage" }
    ]
  },
  {
    "id": 2,
    "gender": "F",
    "first": "Basic",
    "last": "Smith",
    "city": "Seattle, WA",
    "comp": [
      { "id": 1, "audience": "bush" },
      { "id": 2, "audience": "swamp" }
    ]
  }
]);
Supposing I need to update any of the comp field's audience values, how would I go about it?
With regards to queries:
When you have simpler nested arrays, you should be able to select specific records using the has and hasAll methods. However, there is an open issue stating that neither of these methods works correctly. There are commits, but since the issue has been left open, I assume they are not 100% fixed.
For complex nested data, like your example, the only thing I found was an old mailing list conversation about some sort of find method. No such method seems to exist, though, nor is there any mention of it in the docs.
With regards to updates:
You should be able to update the "comp" data by passing the modified JSON that goes with it (assuming you are able to get the data out of the db in the first place) into a normal update. However, there is an open bug showing that update does not work when record values are objects. So even if you were able to query the data and modify it, you wouldn't be able to update the record anyway because of the bug. You can, however, do a remove and an insert.
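A rough sketch of that remove-and-insert workaround (untested; ___id is TAFFYDB's internal record id, visible in the stringify output below):
// Pull the record out and modify the nested data in plain JavaScript...
var rec = friends({first: "John", last: "Smith"}).first();
rec.comp[0].audience = "plains";

// ...then drop the old record and insert the modified copy.
friends({___id: rec.___id}).remove();
friends.insert(rec);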
Despite what I found above, I did some testing and found that you can update fields by passing in objects. So here is a quick example of how to do a simple update:
// To show what TAFFYDB looks like:
console.log(friends().stringify());
"[{"id":1,"gender":"M","first":"John","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"cavern"},{"id":2,"audience":"cottage"}],"___id":"T000003R000002","___s":true},{"id":2,"gender":"F","first":"Basic","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"bush"},{"id":2,"audience":"swamp"}],"___id":"T000003R000003","___s":true}]"
// Get a copy of the comp field from the database for the record you want to modify.
// In this example, let's get the first record matching people with the name "John Smith":
var johnsComp = friends({first:"John",last:"Smith"}).first().comp;
// Remember, if you want to use select("comp") instead, it will return an array of results.
// So to get the first result you would need the following, despite there being only one match:
// friends({first:"John",last:"Smith"}).select("comp")[0];
// There are no nested queries in TAFFYDB, so you need to work with the resulting object as plain JavaScript.
// You should know the structure, and you can modify things directly, iterate through it, or whatever.
// In this example, I'm just going to change one of the audience values directly:
johnsComp[0].audience = "plains";
// Now let's update that record with the newly modified object.
// Note - if there is more than one "John Smith", all of them will be updated.
friends({first:"John",last:"Smith"}).update({comp:johnsComp});
// To show what TAFFYDB looks like after updating:
console.log(friends().stringify());
"[{"id":1,"gender":"M","first":"John","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"plains"},{"id":2,"audience":"cottage"}],"___id":"T000003R000002","___s":true},{"id":2,"gender":"F","first":"Basic","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"bush"},{"id":2,"audience":"swamp"}],"___id":"T000003R000003","___s":true}]"
For a better-targeted query or update (something that perhaps acts like a nested query/update), you can try passing in a function. The docs include a simple example of this for update():
db().update(function () { this.column = "value"; return this; }); // sets column to "value" for all matching records
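A sketch of how that might look for the nested comp records here (untested; it assumes the callback receives each matching record as this, as in the docs example above):
friends({first: "John", last: "Smith"}).update(function () {
  // Rewrite the nested comp array, changing only the entry with id 1.
  this.comp = this.comp.map(function (c) {
    return c.id === 1 ? { id: c.id, audience: "plains" } : c;
  });
  return this;
});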
I have an example; in this case I made an update to a nested field.
To access the data, you can do it like this:
console.log(JSON.stringify(
  data({'id':'489'}).get()[0].review[0][0].comments
));
This is an example of how it works.

Will RestKit's dynamic mapping solve this complex JSON mapping?

I am using RestKit in my app, which needs to use an existing synchronization service that structures the incoming data this way:
{
  "timestamp": 000000000001,
  "status": 0,
  "syncData": [
    {
      "errors": [],
      "rows": [
        { "name": "AAA", ... },
        { "name": "BBB", ... },
        ...
      ],
      "rtype": "FOO"
    },
    {
      "errors": [],
      "rows": [
        { "id": 1, "description": "ZZZ", ... },
        { "id": 2, "description": "YYY", ... },
        ...
      ],
      "rtype": "BAR"
    }, ...
I'm new to RestKit and trying to figure out the best way to solve this problem, and the complementary problem of sending this same structure of data back to the server. I'm using Core Data with RestKit.
I've mapped a SyncResponse entity to hold the top level data, and what I want to get out of this is a collection of FOO objects, "AAA", "BBB", etc., and a collection of BAR objects, "ZZZ", "YYY", etc., and a few dozen other collections of objects whose Class is indicated by the "rtype" field.
I've read the doc section on dynamic mapping and some example code and postings here, but I don't see how dynamic mapping works in this case, as it is not of the {"a":{is A}, "b":{is B}} format. Is this possible using dynamic mapping, and if so, what concepts am I missing?
Assuming it is possible, how do I, starting with collections of FOOs and BARs, send data back, replacing the SyncResponse with something like a SyncUpdateRequest wrapper?
I don't think you'll be able to do this using a set of mappings alone.
Your best option may be to create a mapping for each item type and one for the overall structure. The overall mapping just extracts the array as an NSArray of dictionaries. Once you have the array, you can iterate over it yourself, check the rtype, and then apply an RKMapperOperation to perform the appropriate mapping.
For sending your update request, I'd treat it as quite a separate thing. I'd build an array of dictionaries, where the dictionaries have 'plain' key/value pairs for some of the information and 'complex' key/value pairs for the rows. Your request mapping is then in terms of this array of dictionaries (which covers the custom parts) and the rows (which should be the inverse of your response mapping for each class). RestKit should then be able to handle it in the standard way (compared to the complexity of your response mapping above).

RestKit: Map 2-Dimensional Array (Collection in Collection)

I was just trying to parse a JSON object which includes a two-dimensional array.
Example:
{
  "2dimarray": [
    [ { "key": "val" }, { "key": "val" } ],
    [ { "key": "val" }, { "key": "val" } ]
  ]
}
Assuming the contents of 2dimarray[x][y] are only of one type, I added the mapping:
[objectMapping mapKeyPath:@"2dimarray" toRelationship:@"2dimarray" withMapping:myMappingForIncludedObjects];
In the log RestKit tells me:
W restkit.object_mapping:RKObjectMappingOperation.m:438 WARNING: Detected a relationship mapping for a collection containing another collection. This is probably not what you want. Consider using a KVC collection operator (such as #unionOfArrays) to flatten your mappable collection.
But actually, it is what I want. Basically, I assumed the object mapper would fill my Objective-C property NSArray* 2dimarray with NSArray*s containing objects mapped with myMappingForIncludedObjects. Instead, each inner array is itself mapped (which fails, of course) with myMappingForIncludedObjects.
What am I doing wrong? Or better: what do I need to do to achieve the behavior I expected?
I believe the issue you cite is Blake explaining the problem, not offering a solution. I don't think RestKit is set up to handle the mapping you describe (an array of arrays of objects). If you walk through the example he gives in the issue and look at his commit, you'll see that the introduced logic was aimed at detecting the problem and logging it for debugging purposes.
