I have been using the getEmailActivityCounts (beta) endpoint of the Microsoft Graph API for the past 3 years.
But recently I am facing an issue: the reportDate field is missing from the returned data when the JSON export type is used. Without reportDate, we can't map the metric counts to the date on which they occurred.
Any help would be appreciated.
Request : https://graph.microsoft.com/beta/reports/getEmailActivityCounts(period='D180')?$format=application/json
Response:
[
  {
    "reportRefreshDate": "2022-01-19",
    "send": 3,
    "receive": 14,
    "read": null,
    "meetingCreated": 0,
    "meetingInteracted": null,
    "reportPeriod": "180"
  },
  {
    "reportRefreshDate": "2022-01-19",
    "send": 1,
    "receive": 1,
    "read": null,
    "meetingCreated": 0,
    "meetingInteracted": null,
    "reportPeriod": "180"
  }
]
Thanks in advance,
Maerona Wynn
I have a compound index as follows.
index({ account_id: 1, is_private: 1, visible_in_list: 1, sent_at: -1, user_id: 1, status: 1, type: 1, 'tracking.last_opened_at' => -1 }, {name: 'email_page_index'})
Then I have a query on exactly these fields:
selector:
{"account_id"=>BSON::ObjectId('id'), "is_private"=>false, "visible_in_list"=>{:$in=>[true, false]}, "status"=>{:$in=>["ok", "queued", "processing", "failed"]}, "sent_at"=>{"$lte"=>2021-03-22 15:29:18 UTC}, "tracking.last_opened_at"=>{"$gt"=>1921-03-22 15:29:18 UTC}, "user_id"=>BSON::ObjectId('id')}
options: {:sort=>{"tracking.last_opened_at"=>-1}}
The winningPlan is the following:
"inputStage": {
  "stage": "SORT_KEY_GENERATOR",
  "inputStage": {
    "stage": "FETCH",
    "filter": {
      "$and": [
        { "account_id": { "$eq": { "$oid": "objectid" } } },
        { "is_private": { "$eq": false } },
        { "sent_at": { "$lte": "2021-03-22T14:06:10.000Z" } },
        { "tracking.last_opened_at": { "$gt": "1921-03-22T14:06:10.716Z" } },
        { "status": { "$in": [ "failed", "ok", "processing", "queued" ] } },
        { "visible_in_list": { "$in": [ false, true ] } }
      ]
    },
    "inputStage": {
      "stage": "IXSCAN",
      "keyPattern": { "user_id": 1 },
      "indexName": "user_id_1",
      "isMultiKey": false,
      "multiKeyPaths": { "user_id": [] }
    }, ...
And the rejected plan uses the compound index and looks as follows:
"rejectedPlans": [
  {
    "stage": "FETCH",
    "inputStage": {
      "stage": "SORT",
      "sortPattern": { "tracking.last_opened_at": -1 },
      "inputStage": {
        "stage": "SORT_KEY_GENERATOR",
        "inputStage": {
          "stage": "IXSCAN",
          "keyPattern": {
            "account_id": 1,
            "is_private": 1,
            "visible_in_list": 1,
            "sent_at": -1,
            "user_id": 1,
            "status": 1,
            "type": 1,
            "tracking.last_opened_at": -1
          },
          "indexName": "email_page_index",
          "isMultiKey": false,
          "multiKeyPaths": {
            "account_id": [],
            "is_private": [],
            "visible_in_list": [],
            "sent_at": [],
            "user_id": [],
            "status": [],
            "type": [],
            "tracking.last_opened_at": []
          },
          "isUnique": false, ...
The problem is that the winning plan is slow. Wouldn't it be better if Mongoid chose the compound index? Is there a way to force it?
Also, how can I see the execution time of each separate stage?
I am posting some information that may help resolve the performance issue and get an appropriate index used. Please note this may not be the complete solution (the issue is open to discussion).
...Also, how can I see the execution time for each separate STAGE?
For this, generate the query plan using explain with the executionStats verbosity mode.
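For example, in the mongo shell (the collection name emails is assumed here, since the post doesn't give it; replace the ObjectId placeholders with real values):

```javascript
// Same query as posted, explained with executionStats verbosity.
// In the output, each stage carries an executionTimeMillisEstimate
// field, which gives a per-stage timing breakdown.
db.emails.find({
  account_id: ObjectId("..."),
  is_private: false,
  visible_in_list: { $in: [true, false] },
  status: { $in: ["ok", "queued", "processing", "failed"] },
  sent_at: { $lte: ISODate("2021-03-22T15:29:18Z") },
  "tracking.last_opened_at": { $gt: ISODate("1921-03-22T15:29:18Z") },
  user_id: ObjectId("...")
}).sort({ "tracking.last_opened_at": -1 })
  .explain("executionStats")
```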
The problem is that the winningPlan is slow, wouldn't be better if
mongoid choose the compound index? Is there a way to force it?
As posted, both plans show a "stage": "SORT_KEY_GENERATOR", implying that the sort operation is performed in memory (that is, not using an index for the sort). That would be one of the main reasons for the slow performance. So, how do we make the query and the sort use the index?
A single compound index can be used for a query with filter+sort operations. That would be an efficient index and query, but it requires that the compound index be defined in a certain way; some rules need to be followed. See the topic Sort and Non-prefix Subset of an Index, which applies to the case in this post. I quote the example from the documentation for illustration:
Suppose there is a compound index: { a: 1, b: 1, c: 1, d: 1 }
And all the fields are used in a query with filter+sort. The ideal query has a filter+sort as follows:
db.test.find( { a: "val1", b: "val2", c: 1949 } ).sort( { d: 1 })
Note the query filter has three fields with equality conditions (there are no $gt, $lt, etc.), and the query's sort uses the last field d of the index. This is the ideal situation, where the index is used for both the query's filter and its sort operations.
In your case, this cannot be applied to the posted query as-is. So, to work towards a solution, you may have to define a new index that takes advantage of the Sort and Non-prefix Subset of an Index rule.
Is it possible? It depends upon your application and the use case. One idea that may help: create a compound index like the following and see how it works:
{
  account_id: 1,
  is_private: 1,
  visible_in_list: 1,
  status: 1,
  user_id: 1,
  'tracking.last_opened_at': -1
}
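In Mongoid, that index might be declared like this (a sketch only; the index name is made up, and you should verify the field order against your own workload):

```ruby
index(
  { account_id: 1, is_private: 1, visible_in_list: 1,
    status: 1, user_id: 1, 'tracking.last_opened_at' => -1 },
  { name: 'email_page_sort_index' }
)
```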
I think having a condition like "tracking.last_opened_at" => { "$gt" => 1921-03-22 15:29:18 UTC } in the query's filter may not help with the usage of the index.
Also, include some details like the MongoDB server version, the size of the collection, and some platform details. In general, query performance depends upon many factors, including indexes, RAM, the size and type of data, and the kind of operations on the data.
The ESR Rule:
When using a compound index for a query with multiple filter conditions and a sort, the Equality, Sort, Range (ESR) rule is often useful for optimizing the query. See the following post with such a scenario: MongoDB - Index not being used when sorting and limiting on ranged query
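As for "Is there a way to force it?": you can supply an index hint with the query. A mongo shell sketch (the emails collection name is assumed); I believe Mongoid criteria also accept a hint if you'd rather do it from the application:

```javascript
// hint() forces the planner to use the named index instead of letting
// it choose. Compare explain("executionStats") output with and without
// the hint to confirm it actually helps before relying on it.
db.emails.find({ /* same filter as the query above */ })
  .sort({ "tracking.last_opened_at": -1 })
  .hint("email_page_index")
```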
I need some pointers here.
I'm talking to an API that returns data based on specific parameters. I have been taking that response, flattening/editing it to fit my model, and then saving it into the database. Everything was working great until today, when I started testing the live endpoint (no dummy data), and it turns out the format of the response changes.
For example, if a data set does not have a record, rather than including the key with a nil value, some responses omit the key entirely. This breaks my flatten-and-edit logic, since now I'd need to check that every single field exists before I do anything.
Here are 2 snippets of what it can look like.
Sample 1 (no "shared" key)
{
  "request_info": {
    "city_id": 76211,
    "currency": "usd",
    "req_type": "geom"
  },
  "data": {
    "rental_counts": {
      "counts": {
        "private": {
          "1": 17,
          "2": 3,
          "all": 20
        },
        "entire": {
          "0": 2,
          "1": 8,
          "2": 11,
          "3": 16,
          "4": 14,
          "5": 6,
          "all": 57
        }
      }
    }
  }
}
Sample 2 (includes "shared")
{
  "request_info": {
    "city_id": 76211,
    "currency": "usd",
    "req_type": "geom"
  },
  "data": {
    "rental_counts": {
      "counts": {
        "private": {
          "1": 17,
          "2": 3,
          "all": 20
        },
        "entire": {
          "0": 2,
          "1": 8,
          "2": 11,
          "3": 16,
          "4": 14,
          "5": 6,
          "all": 57
        },
        "shared": {
          "0": 2,
          "1": 8,
          "all": 10
        }
      }
    }
  }
}
These changes, I believe, can happen at any level and for any key (parent or child). I'm sure I'm not the first to run into something like this. What is the best way to manage it? Is there a method or gem that would help parse the JSON into a standardized model whether or not all the data keys are present?
I had been looking at Roar but still don't quite understand how it works. Is this something Roar could handle, or would the JSON object need to be pre-defined rather than dynamic?
I found a simpler solution than Roar or deserializers. Ruby's Hash#slice method lets you select only predefined keys and ignore all others. I call this method after flattening my hash but before using ActiveRecord to import.
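A minimal sketch (the key names here are hypothetical, not the actual API's): slice the flattened hash down to a whitelist, and merge it onto a defaults hash so keys the API omitted come back as nil instead of being absent.

```ruby
# A hypothetical flattened API response; "shared_all" is sometimes
# missing, and "extra_junk" is a key we never want to import.
flat = {
  "city_id"     => 76211,
  "private_all" => 20,
  "entire_all"  => 57,
  "extra_junk"  => "x"
}

# The keys the model knows about (hypothetical names).
PERMITTED = ["city_id", "private_all", "entire_all", "shared_all"].freeze

# Defaults supply nil for any key the API omitted.
DEFAULTS = { "shared_all" => nil }.freeze

# Hash#slice (Ruby 2.5+) keeps only the whitelisted keys; merging onto
# the defaults re-adds omitted keys as nil, so every record ends up
# with the same shape regardless of which keys the API sent.
record = DEFAULTS.merge(flat.slice(*PERMITTED))
```

After this, record always contains all four keys, so the ActiveRecord import sees a uniform shape.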
How does this work in iOS? I'm currently using Objective-C. The documents are created like the following:
LIKE
{
"_id": "",
"userEmail": "",
"broadcastID": "",
"like": "",
"count": 0,
"type": "like"
}
and
REACH
{
"_id": "",
"userEmail": "",
"broadcastID": "",
"reach": "",
"count": 0,
"type": "reach"
}
Thank you in advance :)
Counting the number of likes/reaches
A typical approach would be to create a view with a map function that emits (places into the index) the values you care about. You can then run queries based on the view to further refine the information you retrieve.
Between the map function and query options, you have a lot of control over the information you retrieve. For example, you already have a 'type' field, so it's easy to have your map function only produce output for documents of the correct type.
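A minimal sketch of such a map function, using the document fields shown above (where you put it in a design document is up to you):

```javascript
// emit() is provided by CouchDB when the map function runs inside a
// design document; it is stubbed here only so the function can be
// exercised outside the database.
const rows = [];
function emit(key, value) { rows.push([key, value]); }

// Map function for a view (e.g. in a hypothetical design document
// _design/stats, view "likes_by_broadcast"). It indexes only "like"
// documents, keyed by broadcastID, with a count of 1 each.
function likesMap(doc) {
  if (doc.type === "like" && doc.broadcastID) {
    emit(doc.broadcastID, 1);
  }
}
```

Querying the view with the built-in _sum (or _count) reduce and group=true then returns one row per broadcastID with the total number of likes; an analogous map on doc.type === "reach" counts reaches the same way.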
I have JSON like this:
{
  "response": [
    8236,
    {
      "pid": 1234,
      "lat": 56,
      "long": 30
    },
    {
      "pid": 123,
      "lat": 56,
      "long": 29
    }
  ]
}
So how do I describe this in an RKEntityMapping? How do I describe an object without a key? What attributes should be in AttributeMappingsFromDictionary?
Do I need to create 2 classes with a relationship, like this:
the first describing the root object, with a counter variable and a relationship to the second class, which has pid, lat, and long?
I tried the two-classes-with-a-relationship approach described above, but RestKit crashes.
You would need to use a response descriptor with a dynamic mapping and a key path of response. The dynamic mapping is passed each item in the array in turn, and you can then decide which mapping to return to handle it.
To map the individual dictionaries you would need to use a mapping with a nil key path.