How to structure falcor router to get all available IDs? - falcor

I'm experimenting with using Falcor to front the Guild Wars 2 API and want to use it to show game item details. I'm especially interested in building a router that can use multiple datasources to combine the results of different APIs.
The catch is, Item IDs in Guild Wars 2 aren't contiguous. Here's an example:
[
  1,
  2,
  6,
  11,
  24,
  56,
  ...
]
So I can't just write paths on the client like items[100..120].name because there's almost certainly going to be a bunch of holes in that list.
I've tried adding a route to my router so I can just request items, but that sends it into an infinite loop on the client. You can see that attempt on GitHub.
Any pointers on the correct way to structure this? As I think about it more maybe I want item.id instead?

You shouldn't find yourself asking for IDs from a Falcor JSON Graph object.
It seems like you want to build an array of game IDs:
{
  games: [
    { $type: "ref", value: ["gamesById", 352] },
    { $type: "ref", value: ["gamesById", 428] }
    // ...
  ],
  gamesById: {
    352: {
      gameProp1: ...
    },
    428: {
      gameProp2: ...
    }
  }
}
Then the client can request contiguous ranges through the refs:
["games", { from: 5, to: 17 }, "gameProp1"]
Does that work?
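A quick sketch of how such a ref array could be produced from a sparse ID list (plain JavaScript with illustrative names, not Falcor's actual router API):

```javascript
// Sketch: turn a sparse list of item IDs into a dense array of
// Falcor-style refs, so clients can ask for contiguous ranges like
// games[5..17]. The ID list here is illustrative, not real GW2 data.
function buildRefArray(ids) {
  return ids.map(function (id) {
    return { $type: "ref", value: ["gamesById", id] };
  });
}

var sparseIds = [1, 2, 6, 11, 24, 56];
var games = buildRefArray(sparseIds);

// games[2] now points at gamesById[6], even though the IDs have holes.
console.log(games[2]);
```

The client never sees the holes: it indexes into the dense `games` array, and each ref forwards to the right entry in `gamesById`.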

You can use Falcor's get API; it retrieves multiple values. You can pass any number of required paths, as shown below:
var model = new falcor.Model({
  cache: {
    genereList: [
      {
        name: "Recently Watched",
        titles: [
          { id: 123, name: "Ignatius", rating: 4 }
        ]
      },
      {
        name: "New Release",
        titles: [
          { id: 124, name: "Jessy", rating: 3 }
        ]
      }
    ]
  }
});
Getting a single value:
model.getValue('genereList[0].titles[0].name')
  .then(function (value) {
    console.log(value);
  });
Getting multiple values:
model.get('genereList[0..1].titles[0].name', 'genereList[0..1].titles[0].rating')
  .then(function (json) {
    console.log(JSON.stringify(json, null, 4));
  });

Related

Searchkick: how to save records in ElasticSearch in different indices based on record timestamp?

I have a Rails model with Searchkick.
I want my model instances to be saved in Elasticsearch in different indices based on the month in which each instance was created.
Let's say I have the following instances of my model:
A, created on 03/25/2021
B, created on 03/28/2021
C, created on 04/01/2021
Instead of having one ES index (the default behavior for Searchkick), how can I store my instances according to when they were created:
A & B in an ES index labeled model_2021_03
C in an ES index labeled model_2021_04
From what I understand, there are two main steps:
Create multiple indices.
Store each document in one of those indices.
The idea here is to mark the index you want to put the document into as the write index ("Write_Index") and mark the others as read indices ("Read_Index").
So you can start with:
1. Creating an Index Template.
PUT /_index_template/model_template
{
  "index_patterns": ["model*"],
  "priority": 1,
  "template": {
    "aliases": {
      "model": {}
    },
    "mappings": {
      "dynamic": "strict",
      "_source": { "enabled": false },
      "properties": {
        // your field mappings here
      }
    },
    "settings": {
      "index": {
        "number_of_shards": 1,
        "number_of_replicas": 3
      }
    }
  }
}
2. Create an index for a particular month, which will follow the model template (from step 1), with a naming strategy in your code:
PUT model_YYYY_MM
For example, let's say you create two indices, model_2021_03 and model_2021_04, and now you want to store documents in one of them.
The idea is to mark the index you want to store the document in as the write index and all the others as read indices; then, when you store a document using the alias name ("model" here), it will go to the write index by default.
3. Mark the target index as the write index and the others as read indices
POST /_aliases
{
  "actions": [
    {
      "add": {
        "index": "model_2021_04",
        "alias": "model",
        "is_write_index": true
      }
    },
    {
      "add": {
        "index": "model_2021_03",
        "alias": "model",
        "is_write_index": false
      }
    }
  ]
}
4. Finally, put documents into the index using the alias name:
PUT /model/_doc/1
{
//your data here
}
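The month-based naming strategy from step 2 can be sketched in plain JavaScript (the function name is illustrative; the model_ prefix matches the "model*" index pattern in the template above):

```javascript
// Sketch: derive an index name like "model_2021_03" from a record's
// creation timestamp, matching the "model*" pattern in the template above.
function monthlyIndexName(createdAt) {
  var year = createdAt.getUTCFullYear();
  // getUTCMonth() is zero-based, so shift 0..11 to "01".."12" and zero-pad.
  var month = String(createdAt.getUTCMonth() + 1).padStart(2, "0");
  return "model_" + year + "_" + month;
}

console.log(monthlyIndexName(new Date(Date.UTC(2021, 2, 25)))); // model_2021_03
console.log(monthlyIndexName(new Date(Date.UTC(2021, 3, 1))));  // model_2021_04
```

The equivalent in a Rails app would live wherever you compute the target index before rotating the write alias.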

Mongoid winningPlan does not use compound index

I have a compound index as follows.
index({ account_id: 1, is_private: 1, visible_in_list: 1, sent_at: -1, user_id: 1, status: 1, type: 1, 'tracking.last_opened_at' => -1 }, {name: 'email_page_index'})
Then I have a query with these exact fields,
selector:
{"account_id"=>BSON::ObjectId('id'), "is_private"=>false, "visible_in_list"=>{:$in=>[true, false]}, "status"=>{:$in=>["ok", "queued", "processing", "failed"]}, "sent_at"=>{"$lte"=>2021-03-22 15:29:18 UTC}, "tracking.last_opened_at"=>{"$gt"=>1921-03-22 15:29:18 UTC}, "user_id"=>BSON::ObjectId('id')}
options: {:sort=>{"tracking.last_opened_at"=>-1}}
The winningPlan is the following
"inputStage": {
"stage": "SORT_KEY_GENERATOR",
"inputStage": {
"stage": "FETCH",
"filter": {
"$and": [
{
"account_id": {
"$eq": {
"$oid": "objectid"
}
}
},
{
"is_private": {
"$eq": false
}
},
{
"sent_at": {
"$lte": "2021-03-22T14:06:10.000Z"
}
},
{
"tracking.last_opened_at": {
"$gt": "1921-03-22T14:06:10.716Z"
}
},
{
"status": {
"$in": [
"failed",
"ok",
"processing",
"queued"
]
}
},
{
"visible_in_list": {
"$in": [
false,
true
]
}
}
]
},
"inputStage": {
"stage": "IXSCAN",
"keyPattern": {
"user_id": 1
},
"indexName": "user_id_1",
"isMultiKey": false,
"multiKeyPaths": {
"user_id": []
},.....
And the rejected plan, which does use the compound index, looks as follows:
"rejectedPlans": [
{
"stage": "FETCH",
"inputStage": {
"stage": "SORT",
"sortPattern": {
"tracking.last_opened_at": -1
},
"inputStage": {
"stage": "SORT_KEY_GENERATOR",
"inputStage": {
"stage": "IXSCAN",
"keyPattern": {
"account_id": 1,
"is_private": 1,
"visible_in_list": 1,
"sent_at": -1,
"user_id": 1,
"status": 1,
"type": 1,
"tracking.last_opened_at": -1
},
"indexName": "email_page_index",
"isMultiKey": false,
"multiKeyPaths": {
"account_id": [],
"is_private": [],
"visible_in_list": [],
"sent_at": [],
"user_id": [],
"status": [],
"type": [],
"tracking.last_opened_at": []
},
"isUnique": false,
The problem is that the winningPlan is slow. Wouldn't it be better if Mongoid chose the compound index? Is there a way to force it?
Also, how can I see the execution time for each separate STAGE?
I am posting some information that can help resolve the issue of performance and use an appropriate index. Please note this may not be the solution (and the issue is open to discussion).
...Also, how can I see the execution time for each separate STAGE?
For this, generate the query plan using the explain with the executionStats verbosity mode.
The problem is that the winningPlan is slow. Wouldn't it be better if
Mongoid chose the compound index? Is there a way to force it?
As posted, the plans show a "stage": "SORT_KEY_GENERATOR", implying that the sort operation is being performed in memory (that is, not using an index for the sort). That would be one of the main reasons for the slow performance. So, how do you make the query and the sort use the index?
A single compound index can be used for a query with a filter+sort operations. That would be an efficient index and query. But, it requires that the compound index be defined in a certain way - some rules need to be followed. See this topic on Sort and Non-prefix Subset of an Index - as is the case in this post. I quote the example from the documentation for illustration:
Suppose there is a compound index: { a: 1, b: 1, c: 1, d: 1 }
And all the fields are used in a query with filter+sort. The ideal query has a filter+sort as follows:
db.test.find( { a: "val1", b: "val2", c: 1949 } ).sort( { d: 1 } )
Note the query filter has three fields with equality conditions (no $gt, $lt, etc.), and the query's sort uses the last field d of the index. This is the ideal situation, where the index is used for the query's filter as well as its sort.
In your case, this cannot be applied to the query as posted. So, to work towards a solution, you may have to define a new index that takes advantage of the Sort and Non-prefix Subset of an Index rule.
Is it possible? It depends upon your application and the use case. I have an idea that may help. Create a compound index like the following and see how it works:
{
  account_id: 1,
  is_private: 1,
  visible_in_list: 1,
  status: 1,
  user_id: 1,
  'tracking.last_opened_at': -1
}
I think having a condition like "tracking.last_opened_at"=>{"$gt"=>1921-03-22 15:29:18 UTC} in the query's filter may not help the index get used.
Also, include some details like the version of the MongoDB server, the size of the collection, and some platform details. In general, query performance depends upon many factors, including indexes, RAM, the size and type of data, and the kind of operations on the data.
The ESR Rule:
When using compound index for a query with multiple filter conditions and sort, sometimes the Equality Sort Range rule is useful for optimizing the query. See the following post with such a scenario: MongoDB - Index not being used when sorting and limiting on ranged query
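The Equality-Sort-Range ordering can be sketched as a tiny helper that arranges index fields by how the query uses them (the field classification below is illustrative, based on the query discussed above):

```javascript
// Sketch of the ESR (Equality, Sort, Range) rule for compound indexes:
// equality-matched fields come first, then the sort field(s), then
// range-matched fields. The classification below is illustrative.
function esrOrder(fields) {
  var rank = { equality: 0, sort: 1, range: 2 };
  return fields
    .slice() // don't mutate the caller's array
    .sort(function (a, b) { return rank[a.kind] - rank[b.kind]; })
    .map(function (f) { return f.name; });
}

var fields = [
  { name: "sent_at", kind: "range" },                // $lte
  { name: "account_id", kind: "equality" },          // $eq
  { name: "tracking.last_opened_at", kind: "sort" }, // sort key
  { name: "is_private", kind: "equality" }           // $eq
];

console.log(esrOrder(fields));
// [ 'account_id', 'is_private', 'tracking.last_opened_at', 'sent_at' ]
```

Fields queried with $in or $gt/$lte count as range-like here, which is why they belong after the sort key, not before it.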

Boost documents in search results which are matched to array

I have this relatively complex search query that's already being built and working with perfect sorting.
But I think searching is slow here just because of the script, so I want to remove the script and rewrite the query accordingly.
Current code:
"sort": [
{
"_script": {
"type": "number",
"script": {
"lang": "painless",
"source": "double pscore = 0;for(id in params.boost_ids){if(params._source.midoffice_master_id == id){pscore = -999999999;}}return pscore;",
"params": {
"boost_ids": [
3,
4,
5
]
}
}
}
}]
Explanation of the above code:
For example, if a match query would give a result like:
[{m_id: 1, name: A}, {m_id: 2, name: B}, {m_id: 3, name: C}, {m_id: 4, name: D}, ...]
So I want to boost document with m_id array [3, 4, 5] which would then transform the result into:
[{m_id: 3, name: C}, {m_id: 4, name: D}, {m_id: 1, name: A}, {m_id: 2, name: B}, ...]
You can make use of the below query, which uses the Function Score Query (for boosting) and the Terms Query (for querying an array of values).
Note that the logic I've mentioned is in the should clause of the bool query.
POST <your_index_name>/_search
{
  "query": {
    "bool": {
      "must": [
        { "match_all": {} }  // just a sample must clause to retrieve all docs
      ],
      "should": [
        {
          "function_score": {     <---- Function Score Query
            "query": {
              "terms": {          <---- Terms Query
                "m_id": [3, 4, 5]
              }
            },
            "boost": 100          <---- Boosting value
          }
        }
      ]
    }
  }
}
So basically, you can remove the sort logic completely and add the above function query in your should clause, which would give you the results in the order you are looking for.
Note that you'd have to find a way to add the logic correctly if you have a more complex query, and if you are struggling with anything, do let me know. I'd be happy to help!!
Hope this helps!
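For reference, the reordering the function_score clause achieves can be sketched in plain JavaScript (illustrative only; Elasticsearch does this via scoring, not by post-processing the result list):

```javascript
// Plain-JS sketch of the desired ordering: documents whose m_id is in
// boostIds come first, everything else keeps its original order.
function boostFirst(docs, boostIds) {
  var boosted = docs.filter(function (d) { return boostIds.indexOf(d.m_id) !== -1; });
  var rest = docs.filter(function (d) { return boostIds.indexOf(d.m_id) === -1; });
  return boosted.concat(rest);
}

var docs = [
  { m_id: 1, name: "A" }, { m_id: 2, name: "B" },
  { m_id: 3, name: "C" }, { m_id: 4, name: "D" }
];

console.log(boostFirst(docs, [3, 4, 5]).map(function (d) { return d.m_id; }));
// [ 3, 4, 1, 2 ]
```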

How to map object within object in Swagger

Recently a colleague recommended Swagger to me for writing my API...
Now after searching for a while, I haven't found a way to map my JSON easily.
This is what my response looks like:
{
  "1": { "name": "foo", "age": 22 },
  "2": { "name": "bar", "age": 14 },
  "3": { "name": "boo", "age": 26 },
  "4": { "name": "far", "age": 19 }
}
So basically I have an object where the key is an ID and the value is another object with normal key/value pairs.
Now I'm sure someone before me has needed this, but I couldn't find the way to write it.
How would I write this in Swagger?
Thank you for any help / example / reference to another question!
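A map with arbitrary ID keys is usually modeled in Swagger/OpenAPI with additionalProperties. A sketch, assuming Swagger 2.0 and illustrative schema names:

```yaml
# Swagger 2.0 sketch: an object whose keys are arbitrary IDs and whose
# values all share the same schema. The definition names are illustrative.
definitions:
  Person:
    type: object
    properties:
      name:
        type: string
      age:
        type: integer
  PersonMap:
    type: object
    # additionalProperties describes the schema of every value in the map;
    # the keys themselves are free-form (the IDs in the example above).
    additionalProperties:
      $ref: '#/definitions/Person'
```

A response can then reference `#/definitions/PersonMap` as its schema.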

Some more information about "indexes" added to parameters when incoming data contains array of hashes

I am using Ruby on Rails 4.1 and I would like to know more about the "indexes" added to parameters when incoming data contains an array of hashes.
For instance, when I run the following code (a simple AJAX request to my application):
var my_data = [
  { "a1": 1, "b1": 2 },
  { "a2": 1, "b2": 2 },
  { "a3": 1, "b3": 2 }
];
$.ajax({
  type: "POST",
  url: "http://0.0.0.0:3000/path.json",
  data: { "my_data": my_data }
});
Then Rails parses the following parameters, "automagically" adding the "0", "1", "2" indexes:
{"my_data"=>{"0"=>{"a1"=>"1", "b1"=>"2"}, "1"=>{"a2"=>"1", "b2"=>"2"}, "2"=>{"a3"=>"1", "b3"=>"2"}}}
Why does Rails add those indexes?
When is Rails supposed to add indexes, and when not?
How can I prevent Rails from adding them?
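The indexes actually originate on the client side. Here is a minimal sketch of how jQuery-style form serialization handles nested data (not jQuery's real $.param implementation, just an illustration): arrays of plain objects get explicit numeric indexes in the bracket notation, and Rails merely parses those brackets back into the "0", "1", "2" hash keys seen above.

```javascript
// Sketch: serialize nested data into bracket notation the way a
// non-"traditional" jQuery AJAX request would. Objects inside arrays
// need an index so that each item's keys stay grouped together.
function serialize(prefix, value, pairs) {
  if (Array.isArray(value)) {
    value.forEach(function (item, i) {
      serialize(prefix + "[" + i + "]", item, pairs);
    });
  } else if (typeof value === "object" && value !== null) {
    Object.keys(value).forEach(function (key) {
      serialize(prefix ? prefix + "[" + key + "]" : key, value[key], pairs);
    });
  } else {
    pairs.push(prefix + "=" + value);
  }
  return pairs;
}

var my_data = [{ a1: 1, b1: 2 }, { a2: 1, b2: 2 }];
console.log(serialize("my_data", my_data, []).join("&"));
// my_data[0][a1]=1&my_data[0][b1]=2&my_data[1][a2]=1&my_data[1][b2]=2
```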
