How to filter _source in reactivesearch?

I need to exclude certain fields from the _source field in the Elasticsearch response, since those fields are huge and transferring them unnecessarily wastes a lot of time. In Elasticsearch this is normally done by providing the _source parameter in the query, e.g.:
GET /_search
{
  "_source": { "excludes": [ "content" ] },
  "query": { ... }
}
Searchkit, for example, does this exclusion for highlighted fields automatically (which would be ideal in my case), but it also lets the user provide a _source filter irrespective of highlighting. The ReactiveSearch DataSearch component seems to be missing this kind of capability.
I can't figure out how to add _source (or any other search parameter) to the reactivesearch DataSearch query. Is that possible?

We currently don't support this behavior in ReactiveSearch, but we should. I have filed an issue for it: https://github.com/appbaseio/reactivesearch/issues/417.
Edit: This is now supported; see the documentation of the Result components for how to pass it.
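For reference, a minimal sketch of what this can look like on a result component, assuming the includeFields/excludeFields props described in the Result components documentation (verify the exact prop names and current API there):

import React from 'react';
import { ReactiveBase, DataSearch, ReactiveList } from '@appbaseio/reactivesearch';

// Sketch only: excludeFields asks Elasticsearch to drop the listed fields
// from _source in every hit; rendering props are omitted for brevity.
const App = () => (
  <ReactiveBase app="my-index" url="http://localhost:9200">
    <DataSearch componentId="searchbox" dataField="title" />
    <ReactiveList
      componentId="results"
      dataField="title"
      react={{ and: 'searchbox' }}
      excludeFields={['content']}
    />
  </ReactiveBase>
);

export default App;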

Related

Search Outlook Calendar Event categories for multiple hits - Microsoft Graph

I am trying to keep track of Outlook calendar events without needing to store information about them on my own systems. I decided to do this by adding the required ids as categories, each prefixed with its type of id, as shown in the sample below.
{
  "@odata.etag": "",
  "createdDateTime": "",
  "categories": [
    "ID1::abc123",
    "ID2::def456"
  ]
}
I tried using the 'any' lambda operator, and this works fine if I want to filter on a single category using the query below:
https://graph.microsoft.com/v1.0/me/events?$filter=categories/any(x:x%20eq%20'ID1::abc123')
What I need is a query that will check whether an event has both ids, so in this case only the events where ID1=abc123 and ID2=def456. I figured https://graph.microsoft.com/v1.0/me/events?$filter=categories/any(x:x%20eq%20'ID1::abc123')%20AND%20categories/any(x:x%20eq%20'ID2::def456') should do the trick, but this keeps returning empty arrays.
Thanks in advance!
Since categories are visible to the user (and this is going to look really strange in Outlook), I would suggest using the transactionId on the events to store the external id. The service will automatically reject the new event if you try to create a duplicate.
I know this isn't the answer you were looking for, but this solution will be much more future proof.
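As a rough sketch of that approach when creating the event (transactionId is the event property being suggested here; the token handling and event details below are placeholders):

// Sketch: create an event with a client-supplied transactionId so that a
// duplicate create with the same id is rejected by the service.
async function createTrackedEvent(accessToken, externalId) {
  const response = await fetch('https://graph.microsoft.com/v1.0/me/events', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      subject: 'Synced event',
      start: { dateTime: '2021-06-01T10:00:00', timeZone: 'UTC' },
      end: { dateTime: '2021-06-01T11:00:00', timeZone: 'UTC' },
      // Store the external id here instead of in categories.
      transactionId: externalId, // e.g. 'ID1::abc123'
    }),
  });
  if (!response.ok) {
    // A repeated transactionId (i.e. a duplicate) will fail here.
    throw new Error(`Create failed: ${response.status} ${await response.text()}`);
  }
  return response.json();
}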

How to correctly query OData with filter in nested type based on property from parent type?

I am getting a dynamic value (_FilterDate) on the parent type that I want to use as a filter for the nested type /Trips, but I can't get it to work because I still get entries in the nested data that do not meet the filter. In fact, there is no difference whether I use this filter or not.
$filter=Trips/all(d:d/EndDate ge _FilterDate)
I also tried this:
$expand=Trips($filter=EndDate ge $it/_FilterDate)
but got the error: "Could not find a property named '_FilterDate' on type 'Default.Trips'."
So I'm wondering how to get the syntax right and thus kindly ask for help.
Example portion:
"value": [
{
"_FilterCompany": "YES",
"_FilterLocation": "YES",
"_FilterDate": "2020-01-08",
"Trips": [
{
"StartDate": "2019-06-24",
"EndDate": "2019-06-28",
},
{
"StartDate": "2020-02-07",
"EndDate": "2020-02-07",
}
]
}
There are two issues going on here (this response is specifically about the OData v4 specification and the .NET ODataLib implementation).
You have correctly identified that when filtering results based on a nested collection, you must separately apply the filter within that collection if you want the filter to apply to the items inside it as well.
This is because the root-level $filter criteria only affect the selection of the root items. Think of it as if $expand is applied after $filter has identified the top-level rows to return; $expand is simply executed like a LINQ Include statement.
In your second attempt, $it references the instance of Trips, which is a known bug (or by design); according to the spec, it is expected to work the way you have implemented it:
5.1.1.6.4 $it
Example 82: customers along with their orders that shipped to the same city as the customer's address. The nested filter expression is evaluated in the context of Orders; $it allows referring to values in the outer context of Customers.
http://host/service/Customers?
$expand=Orders($filter=$it/Address/City eq ShipTo/City)
So, knowing that $it is broken: the spec does specify a $root identifier that you might also be able to use, but in ODataLib 7.3 $root is still not supported OOTB either. There is an issue logged here: $it references the wrong resource #616
Workaround
If your Trips data type has a navigation property back to the Filter/root record, then you can use that navigation property as part of the $filter:
Assuming the navigation property is called Filter
$filter=Trips/all(d:d/EndDate ge _FilterDate)&$expand=Trips($filter=EndDate ge Filter/_FilterDate)
If your Trips type does not have this navigation link back to the parent record then you are stuck at this stage with these two workarounds:
Create a Function on the controller to return this filtered data specifically, as this would be simple to evaluate as a LINQ query on the server side.
Accept that the server will return extra rows in the Trips collections, and apply the filter over the results on the client side.

Avoid calculating startIndex and endIndex when creating a document using Google Docs API

I have proven to myself that I can insert text into a Google Docs document using this code:
function appendToDocument() {
  let offset = 12;
  let updateObject = {
    documentId: 'xxxxxxx',
    resource: {
      requests: [{
        "insertText": {
          "text": "John Doe",
          "location": {
            "index": offset,
          },
        },
      }],
    },
  };
  gapi.client.docs.documents.batchUpdate(updateObject).then(function(response) {
    appendPre('response = ' + JSON.stringify(response));
  }, function(response) {
    appendPre('Error: ' + response.result.error.message);
  });
}
My next step is to create an entire, complex document using the API. I am stunned by what appears to be the fact that I need to maintain locations (indexes) into the document, like this:
new Location().setIndex(25)
I am basing that impression on this guide: https://developers.google.com/docs/api/how-tos/move-text
The document I am trying to create is very dynamic and very complex, and handing the job of keeping track of index values to the API user, rather than the API designer, seems odd.
Is there an approach, or a higher-level API, that allows me to construct a document without this kind of housekeeping?
Unfortunately, the short answer is no, there's no API that lets you bypass the index-tracking required of the base Google Docs API - at least when it comes to building tables.
I recently had to tackle this issue myself - a combination of template updating and document construction - and I basically ended up writing an intermediate API with helper functions to search for and insert by character indices.
For example, one trick I've been using for table creation is to first create a table of a specified size at a given index, and put some text in the first cell. Then I can search the document object for the tableCells element that contains that text, and work back from there to get the table start index.
Another trick is that if you know how many specific kinds of objects (like tables) you have in your document, you can parse through the full document object and keep track of table counts, and stop when you get to the one you want to update/delete (you can use this approach for creating too but the target text approach is easier, I find).
From there, with some JSON parsing and trial and error, you can figure out the start index of each cell in a table, and write functions to programmatically find and create/replace/delete. If there's an easier way to do all this, I haven't found it. (There is one GitHub repo with a Google Docs API wrapper specifically for tables, and it does appear to be active, although I found it after I wrote everything on my own and I haven't used it.)
Here's a bit of code to get you started:
def get_target_table(doc, target_txt):
    """ Given a target string to be matched in the upper left column of a table
    of a Google Docs JSON object, return JSON representing that table. """
    body = doc["body"]["content"]
    for element in body:
        el_type = list(element.keys())[-1]
        if el_type == "table":
            header_txt = get_header_cell_text(element['table']).lower().strip()
            if target_txt.lower() in header_txt:
                return element
    return None

def get_header_cell_text(table):
    """ Given a table element in Google Docs API JSON, find the text of
    the first cell in the first row, which should be a column header. """
    return table['tableRows'][0]\
               ['tableCells'][0]\
               ['content'][0]\
               ['paragraph']['elements'][0]\
               ['textRun']['content']
Assuming you've already created a table with the target text in it: now, start by pulling the document JSON object from the API, and then use get_target_table() to find the chunk of JSON related to the table.
doc = build("docs", "v1", credentials=creds).documents().get(documentId=doc_id).execute()
table = get_target_table(doc, "my target")
From there you'll see the nested tableRows and tableCells objects, and the content inside each cell has a startIndex. Construct a matrix of table cell start indices, and then, for populating them, work backwards from the bottom right cell to the upper left, to avoid displacing your stored indices (as suggested in the docs and in one of the comments).
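To tie this back to the JavaScript client in the question, a sketch of that bookkeeping could look like the following. It assumes each tableCells entry carries a startIndex in the document JSON and that the first insertable position in a cell is one past that index; check both assumptions against your own document dump:

// Sketch: collect cell start indices from a table element and build
// insertText requests from the bottom-right cell backwards, so earlier
// indices are not shifted by later insertions.
function buildCellInsertRequests(tableElement, values) {
  const requests = [];
  const rows = tableElement.table.tableRows;
  for (let r = rows.length - 1; r >= 0; r--) {
    const cells = rows[r].tableCells;
    for (let c = cells.length - 1; c >= 0; c--) {
      const text = values[r] && values[r][c];
      if (!text) continue;
      requests.push({
        insertText: {
          text: text,
          // Assumption: insert one position past the cell's startIndex.
          location: { index: cells[c].startIndex + 1 },
        },
      });
    }
  }
  return requests;
}

// Usage with the gapi client from the question (documentId is a placeholder):
// gapi.client.docs.documents.batchUpdate({
//   documentId: 'xxxxxxx',
//   resource: { requests: buildCellInsertRequests(table, [['A1', 'B1'], ['A2', 'B2']]) },
// });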
It's definitely a bit of a slog. And styling table cells is a whole 'nother beast, which is a dizzying maze of JSON options. The interactive JSON constructor tool on the Docs API site is useful to get the syntax right.
Hope this helps, good luck!
The answer I arrived at: You can create Docs without using their JSON schema.
https://developers.google.com/drive/api/v3/manage-uploads#node.js_1
So, create the document in your format of choice (HTML, DocX, or Markdown, using pandoc to convert MD to another format), and then upload that.
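A minimal sketch of that upload, assuming the Node.js googleapis client shown in the linked guide (auth setup omitted; names and paths are placeholders):

const fs = require('fs');
const { google } = require('googleapis');

// Sketch: upload an HTML file and ask Drive to convert it into a native
// Google Doc by setting the target mimeType on the file metadata.
async function createDocFromHtml(auth, htmlPath) {
  const drive = google.drive({ version: 'v3', auth });
  const res = await drive.files.create({
    requestBody: {
      name: 'My generated document',
      mimeType: 'application/vnd.google-apps.document',
    },
    media: {
      mimeType: 'text/html',
      body: fs.createReadStream(htmlPath),
    },
    fields: 'id',
  });
  return res.data.id; // id of the newly created Google Doc
}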

Building a search criteria that's tricky?

I am trying to do a search on my database IndividualRecords by first building search criteria, but the syntax is getting a little tricky for some values. It's easy to set a criterion for an exact field: if the firstName field has 'John' in it, I would put this predicate in my criteria:
IndividualRecord.withCriteria {
    if (predicates.firstName != null) {
        eq 'firstName', predicates.firstName
    }
}
But if they also add that they want to search for US citizens, I can't simply do,
if (predicates.UScitizenship) {
    eq 'citizenship', predicates.citizenship
}
because I want to look for records with citizenship 'US', 'Us', 'uS', or 'us' (case insensitivity must be taken into account), so how would I get around this?
And then here is where the real fun starts. Say I want to find only foreign citizens. I do have a low-level MongoDB API method that tells me whether a citizenship is valid, by returning true if it finds it in my database of country codes, so I guess I could build another predicate, something like this pseudocode:
if (predicates.foreign) {
    all such people whose !citizenship.caseIgnoreEquals('US') && matchCountry(it.citizenship)
}
meaning all such people whose citizenship isn't 'US' and matches the list of country codes I have, where matchCountry(String countryCode) is my low-level API method for verifying a country code; it returns true if it's a valid country code.
I am finding it hard to define the syntax for such complicated predicates, and that is where I need some help. Thanks.
There are two issues at hand here.
First, case sensitivity and insensitivity can be addressed by using ilike instead of eq. So for example:
if (predicates.firstName != null) {
    ilike 'firstName', predicates.firstName
}
Secondly, you may want to look at named queries to encapsulate some of your query definitions. This way you can include/exclude them as you see fit. For example:
if (predicates.foreign) {
    foreignPersons(predicates) // call to named query which contains logic
}
Using this, you should be able to construct very complex queries that are built upon smaller definitions, in turn making them more usable and reusable.

Querying TAFFYDB nested records

I have created a data model using TAFFYDB. Some of the fields have nested records. I am facing difficulties querying and updating the nested records.
For example:
var friends = TAFFY([
    {
        "id": 1,
        "gender": "M",
        "first": "John",
        "last": "Smith",
        "city": "Seattle, WA",
        "comp": [
            {
                "id": 1,
                "audience": "cavern"
            },
            {
                "id": 2,
                "audience": "cottage"
            }
        ]
    },
    {
        "id": 2,
        "gender": "F",
        "first": "Basic",
        "last": "Smith",
        "city": "Seattle, WA",
        "comp": [
            {
                "id": 1,
                "audience": "bush"
            },
            {
                "id": 2,
                "audience": "swamp"
            }
        ]
    }
]);
Supposing I need to update any of the comp field's audience values, how would I go about it?
With regards to queries:
When you have simpler nested arrays, you should be able to select specific records using the has and hasAll methods. However, there is an open issue stating that neither of these methods works correctly. There are commits, but since the issue has been left open, I assume they are not 100% fixed.
For complex nested data, like your example, the only thing I found was this old mailing list conversation talking about some sort of find method. No such method seems to exist, though, nor is there any mention of it in the docs.
With regards to updates:
You should be able to update the "comp" data by passing the modified JSON that goes with it (assuming you are able to get the data out of the db in the first place) into a normal update. However, there is an open bug showing that update does not work when record values are objects. So even if you were able to query the data and modify it, you wouldn't be able to update a record anyway because of the bug. You can, however, do a remove and an insert.
Despite what I found above, I did some testing and found that you can update records by passing in objects. So here is a quick example of how to do a simple update:
// To show what TAFFYDB looks like:
console.log(friends().stringify());
"[{"id":1,"gender":"M","first":"John","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"cavern"},{"id":2,"audience":"cottage"}],"___id":"T000003R000002","___s":true},{"id":2,"gender":"F","first":"Basic","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"bush"},{"id":2,"audience":"swamp"}],"___id":"T000003R000003","___s":true}]"
// Get a copy of the comp file from the database for what you want to modify.
// In this example, let's get the **first** record matching people with the name "John Smith":
var johnsComp = friends({first:"John",last:"Smith"}).first().comp;
// Remember, if you want to use select("comp") instead, this will return an array of results.
// So to get the first result, you would need to do this despite there being only one matching result:
// friends({first:"John",last:"Smith"}).select("comp")[0];
// There are no nested queries in TAFFYDB so you need to work with the resulting object as if it were normal javascript.
// You should know the structure and you can either modify things directly, iterate through it, or whatever.
// In this example, I'm just going to change one of the audience values directly:
johnsComp[0].audience = "plains";
// Now let's update that record with the newly modified object.
// Note - if there are more than one "John Smith"s, then all of them will be updated.
friends({first:"John",last:"Smith"}).update({comp:johnsComp});
// To show what TAFFYDB looks like after updating:
console.log(friends().stringify());
"[{"id":1,"gender":"M","first":"John","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"plains"},{"id":2,"audience":"cottage"}],"___id":"T000003R000002","___s":true},{"id":2,"gender":"F","first":"Basic","last":"Smith","city":"Seattle, WA","comp":[{"id":1,"audience":"bush"},{"id":2,"audience":"swamp"}],"___id":"T000003R000003","___s":true}]"
For a better targeted query or update (something that perhaps acts like a nested query/update), you can possibly try passing in a function. If you look at the docs, there is a simple example of this for update():
db().update(function () {this.column = "value";return this;}); // sets column to "value" for all matching records
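Applied to the nested comp data in this question, that might look something like the following (a sketch building on the documented function form above):

// Sketch: use the function form of update() to change a nested value in
// place for every record that matches the query.
friends({ first: "John", last: "Smith" }).update(function () {
  // `this` is the matching record; modify its nested comp array directly.
  this.comp = this.comp.map(function (c) {
    return c.id === 1 ? { id: c.id, audience: "plains" } : c;
  });
  return this;
});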
I have an example; in this case I made an update to a nested field.
To access the data you can do it like this:
console.log(JSON.stringify(
    data({'id':'489'}).get()[0].review[0][0].comments
))
This is an example of how it works.
