Falcor - Deep nested references not cached

I'm seeing a problem in the Falcor client when I request a route that contains nested references.
Here is an example:
Consider the following JSONGraph response from the Falcor server for a model.get call:
{
  "todos": {
    "0": { "$type": "ref", "value": ["todosById", "id_0"] },
    "1": { "$type": "ref", "value": ["todosById", "id_1"] },
    "length": 2
  },
  "todosById": {
    "id_0": {
      "name": "get milk",
      "label": { "$type": "ref", "value": ["labelsById", "lbl_0"] },
      "completed": false
    },
    "id_1": {
      "name": "do the laundry",
      "label": { "$type": "ref", "value": ["labelsById", "lbl_1"] },
      "completed": false
    }
  },
  "labelsById": {
    "lbl_0": { "name": "groceries" },
    "lbl_1": { "name": "home" }
  }
}
When I call model.get with the following path, the entire JSONGraph above should end up in the cache:
model.get(['todos', {from: 0, to: 1}, ['completed', 'label', 'name']])
However, when I inspect the cache manually (see the snippet below), I can see that todos and todosById are in the cache, but labelsById is not.
I'm not certain, but it looks like labelsById is not cached because it is a second-level reference?
Am I missing something here, or is that the expected behaviour of the Falcor cache?
Is there any way to force labelsById into the cache, so that no additional DataSource request is made?
Any help is appreciated!
The problem can be reproduced in this small project:
https://github.com/ardeois/falcor-nested-references-cache-issue
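For anyone who wants to check this themselves, one minimal way to inspect the client cache (a sketch only, not necessarily how the linked repro project does it) is model.getCache(), which returns the currently cached JSONGraph:

// Hedged sketch: run the query, then dump the model's cache.
// With no arguments, getCache() returns everything currently cached.
model.get(['todos', {from: 0, to: 1}, ['completed', 'label', 'name']])
  .then(() => {
    const cache = model.getCache();
    // Reportedly logs 'todos' and 'todosById', but no 'labelsById'
    console.log(Object.keys(cache));
  });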
UPDATE
Thanks to James Conkling's answer, the JSONGraph can be cached by doing the following model.get:
model.get(
['todos', {from: 0, to: 1}, ['completed', 'name']],
['todos', {from: 0, to: 1}, 'label', 'name']
);
However, on the server side the Falcor Router will call the todos[{integers:indices}] route twice. This could have an impact on the API or database calls to whatever your Falcor server is fronting.

In the pathSet ['todos', {from: 0, to: 1}, ['completed', 'label', 'name']], the paths ending with the completed and name keys terminate at an atom. But the path ending with the label key terminates at a ref. If you want to actually follow that ref, you'll have to include it as a second path:
[
['todos', {from: 0, to: 1}, ['completed', 'name']],
['todos', {from: 0, to: 1}, 'label', 'name']
]
In general, all paths should terminate on an atom, never on a ref. I'm not sure what the expected behavior is for paths that terminate on a ref, or even if it's well defined (as your other question notes, the behavior has changed from v0 to v1).
The model.get(...paths) call can take multiple pathSet arrays, so rewriting the query should be as straightforward as
model.get(
['todos', {from: 0, to: 1}, ['completed', 'name']],
['todos', {from: 0, to: 1}, 'label', 'name']
);
EDIT
As noted in the comments below, because the router handlers can only resolve a single pathSet at a time, GET requests with multiple pathSets can result in multiple requests to your upstream backing service/db. Some possible solutions:
use a single path
Rewrite the request using a single path ['todos', range, ['completed', 'name', 'label'], 'name']. Technically, this request is asking for todos.n.completed.name and todos.n.name.name (which don't exist), in addition to todos.n.label.name (which does exist).
However, if your router handler returns pathValues for paths that are shorter than the matched path, the shorter pathValues should be merged into your jsonGraph cache. E.g. when matching todos.0.completed.name, return { path: ['todos', 0, 'completed'], value: true }, while when matching todos.0.label.name return { path: ['todos', 0, 'label', 'name'], value: 'First TODO' }.
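A rough sketch of such a handler, assuming falcor-router and an in-memory todosData array shaped like the example above (both the data source and the exact route pattern here are illustrative, not taken from the original post):

// Hypothetical route returning pathValues shorter than the matched path.
const Router = require('falcor-router');

// Illustrative in-memory data; a real router would query a service or DB.
const todosData = [
  { name: 'get milk', completed: false, label: { name: 'groceries' } },
  { name: 'do the laundry', completed: false, label: { name: 'home' } }
];

const router = new Router([{
  route: 'todos[{integers:indices}]["completed","name","label"].name',
  get(pathSet) {
    const pathValues = [];
    pathSet.indices.forEach(index => {
      const todo = todosData[index];
      pathSet[2].forEach(field => {
        if (field === 'label') {
          // The full matched path exists: todos[i].label.name
          pathValues.push({ path: ['todos', index, 'label', 'name'], value: todo.label.name });
        } else {
          // Shorter pathValue: todos[i].completed or todos[i].name
          pathValues.push({ path: ['todos', index, field], value: todo[field] });
        }
      });
    });
    return pathValues;
  }
}]);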
This is probably the easiest approach, but means your queries aren't really semantically correct (you're knowingly asking for paths that don't exist).
batch upstream requests made by the router
In your router, batch upstream requests to your backing service/db. This is not always straightforward. One possible approach is to use Facebook's DataLoader, written to solve an equivalent problem with GraphQL routers, but not necessarily tied to GraphQL. Another approach could use a custom reducer function to combine requests issued within the same tick (e.g. here).
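A minimal sketch of the DataLoader approach, assuming a hypothetical loadTodosByIds call to your backing service (stubbed out below):

const DataLoader = require('dataloader');

// Placeholder for the real backing-service call; it must resolve to an array
// whose entries line up one-to-one with the requested ids.
function loadTodosByIds(ids) {
  return Promise.resolve(ids.map(id => ({ id, name: 'todo ' + id })));
}

const todoLoader = new DataLoader(ids => loadTodosByIds(ids));

// Inside the route handlers, call todoLoader.load(id). Loads issued within
// the same tick are coalesced into a single loadTodosByIds(ids) call.
todoLoader.load('id_0');
todoLoader.load('id_1');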
rewrite your schema
Restructure the schema so that all paths that need to be requested at the same time have the same length. This won't always be possible, though.
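As a purely hypothetical illustration (not from the original post), one such reshaping would denormalize the label's name onto the todo itself, so completed, name and labelName all sit at the same depth and can be fetched with a single pathSet, at the cost of losing the normalization that labelsById provided:

// Hypothetical reshaped JSONGraph fragment: labelName is stored directly on
// the todo, so todos[n].completed, todos[n].name and todos[n].labelName are
// all the same length.
{
  "todosById": {
    "id_0": {
      "name": "get milk",
      "completed": false,
      "labelName": "groceries"
    }
  }
}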

Related

Exact change GLAccount for BankEntryLine

Currently we import our bank transactions. Through the REST API I read all these transactions and try to match them to our internal invoices.
If I find a match, I need to change the GLAccountCode from, for example, 1000 to 2000 for that particular BankEntryLine. All I can see on the BankEntryLine is that I can do a GET or POST, but there is no PUT method.
Is there something wrong with my approach? Do I have to create something else that reconciles this transaction, or is there a different way of updating this transaction line?
Example BankEntryLine:
{
  "d": {
    "__metadata": {
      "uri": "https://start.exactonline.nl/api/v1/000000/financialtransaction/BankEntryLines(guid'123000000-0000-0000-0000-000000000000')",
      "type": "Exact.Web.Api.Models.Financial.BankEntryLine"
    },
    "Document": "00000000-0000-0000-0000-000000000000",
    "DocumentNumber": 00000,
    "EntryID": "00000000-0000-0000-0000-000000000000",
    "EntryNumber": 00000000,
    "ExchangeRate": 1,
    "GLAccount": "100000000-0000-0000-0000-000000000000",
    "GLAccountCode": "1000",
    "ID": "123000000-0000-0000-0000-000000000000",
    "LineNumber": 1,
    "OffsetID": "000000000-0000-0000-0000-000000000000",
    "OurRef": null,
    "Project": null,
    "ProjectCode": null,
    "ProjectDescription": null,
    "Quantity": null,
    "VATCode": "4 "
  }
}
API documentation: https://start.exactonline.nl/docs/HlpRestAPIResources.aspx?SourceAction=10
BankEntryLine: https://start.exactonline.nl/docs/HlpRestAPIResourcesDetails.aspx?name=FinancialTransactionBankEntryLines
There is no PUT or DELETE available for this API, and I don't directly see another way to update/delete those lines.
The only possible workaround is to make a general journal entry that balances the amount of that suspense GL account against the one you need/want. But that will give you more entries and more lines to match.

Retrieving labels from multiple JIRA Subtasks via JIRA API

I am creating a reporting dashboard with the intent of getting multiple tickets/issues for a project. As most of you probably know, a JIRA issue can have subtasks. These subtasks can have labels.
I want to retrieve all labels for every subtask.
I already have the project API request implemented which returns the parent ticket ids along with the issue/ticket number of all subtasks. The problem is the data returned from this request does not include the labels for the subtasks themselves.
I can loop over each subtask number and make an additional API request for each one to get the labels, however this would result in a large number of requests.
Looking through JIRA's API I cannot find a better way of doing this. Google seems to return a lot of results about plugins and version differences with Cloud vs. Server but I have not found a better solution.
Their API makes reference to an expand option but I have yet to figure out a way to make that work for subtask labels (I might be missing something obvious).
If anyone has experience with a similar situation I would appreciate hearing any advice you could offer. Thanks!
What I have currently:
Project API Request:
https://ourcompanyhere.atlassian.net/rest/api/2/search
with an additional parameter added for the JQL string of:
project=PROJECTNAME AND fixversion=version
This returns all the tickets in the project with subtasks but not the subtask labels.
I can loop over the returned data from the above request and make an additional request for each:
https://ourcompanyhere.atlassian.net/rest/api/2/issue/ticketNumberHere
JSON Response
Here is a partial JSON response (the full response is huge and I've removed key information); however, this is the complete information for a ticket with a subtask that has labels. As you can see, the labels section of the subtask is completely missing.
ErrorDetail=,
Mimetype=application/json,
Statuscode=200 OK,
Filecontent= {
"expand":"schema,names",
"startAt":0,
"maxResults":50,
"total":3,
"issues":[
{
"expand":"operations,versionedRepresentations,editmeta,changelog,renderedFields",
"id":"24209",
"self":"https://[instance].atlassian.net/rest/api/latest/issue/24209",
"key":"DEV-3089",
"fields":{
"issuetype":{
"self":"https://[instance].atlassian.net/rest/api/2/issuetype/10005",
"id":"10005",
"description":"A new feature of the product, which has yet to be developed.",
"iconUrl":"https://[instance].atlassian.net/secure/viewavatar?size=xsmall&avatarId=10311&avatarType=issuetype",
"name":"New Feature",
"subtask":false,
"avatarId":10311
},
"project":{
"self":"https://[instance].atlassian.net/rest/api/2/project/10000",
"id":"10000",
"key":"DEV",
"name":"Development Queue",
"avatarUrls":{
}
},
"customfield_11000":null,
"fixVersions":[
{
"self":"https://[instance].atlassian.net/rest/api/2/version/14600",
"id":"14600",
"description":"",
"name":"",
"archived":false,
"released":true,
"releaseDate":"2017-09-15"
}
],
"resolution":{
"self":"https://[instance].atlassian.net/rest/api/2/resolution/10000",
"id":"10000",
"description":"Work has been completed on this issue.",
"name":"Done"
},
"customfield_10500":"",
"customfield_10700":null,
"customfield_10900":null,
"resolutiondate":"2017-09-15T09:19:37.000-0400",
"workratio":-1,
"watches":{
"self":"https://[instance].atlassian.net/rest/api/2/issue/DEV-3089/watchers",
"watchCount":2,
"isWatching":true
},
"lastViewed":null,
"created":"2017-05-02T10:15:08.000-0400",
"customfield_10022":null,
"customfield_10100":null,
"priority":{
"self":"https://[instance].atlassian.net/rest/api/2/priority/3",
"iconUrl":"https://[instance].atlassian.net/images/icons/priorities/medium.svg",
"name":"Medium",
"id":"3"
},
"customfield_10300":null,
"labels":[
"[label1]",
"[label2]",
"[label3]",
"[label4]",
"[label5]",
"[label6]"
],
"customfield_10016":null,
"customfield_10017":null,
"versions":[
],
"issuelinks":[
],
"assignee":{
"self":"https://[instance].atlassian.net/rest/api/2/user?username=",
"name":"[name]",
"key":"[name]",
"accountId":"[account]",
"emailAddress":"[email]",
"avatarUrls":{
},
"displayName":"[name]",
"active":true,
"timeZone":"America/New_York"
},
"updated":"2017-09-15T09:19:36.000-0400",
"status":{
"self":"https://[instance].atlassian.net/rest/api/2/status/6",
"description":"The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.",
"iconUrl":"https://[instance].atlassian.net/images/icons/statuses/closed.png",
"name":"Closed",
"id":"6",
"statusCategory":{
"self":"https://[instance].atlassian.net/rest/api/2/statuscategory/3",
"id":3,
"key":"done",
"colorName":"green",
"name":"Done"
}
},
"components":[
],
"description":"[description]",
"customfield_10010":null,
"customfield_10011":null,
"customfield_11100":null,
"customfield_10012":null,
"customfield_10013":null,
"customfield_10015":"",
"customfield_10005":null,
"customfield_10006":null,
"customfield_10600":null,
"customfield_10007":null,
"customfield_10008":null,
"customfield_10800":null,
"customfield_10009":null,
"summary":"[summary]",
"creator":{
"self":"https://[instance].atlassian.net/rest/api/2/user?username=",
"name":"",
"key":"",
"accountId":"",
"emailAddress":"",
"avatarUrls":{
},
"displayName":"",
"active":true,
"timeZone":"America/New_York"
},
"subtasks":[
{
"id":"30213",
"key":"DEV-4118",
"self":"https://[instance].atlassian.net/rest/api/2/issue/30213",
"fields":{
"summary":"",
"status":{
"self":"https://[instance].atlassian.net/rest/api/2/status/6",
"description":"The issue is considered finished, the resolution is correct. Issues which are closed can be reopened.",
"iconUrl":"https://[instance].atlassian.net/images/icons/statuses/closed.png",
"name":"Closed",
"id":"6",
"statusCategory":{
"self":"https://[instance].atlassian.net/rest/api/2/statuscategory/3",
"id":3,
"key":"done",
"colorName":"green",
"name":"Done"
}
},
"priority":{
"self":"https://[instance].atlassian.net/rest/api/2/priority/3",
"iconUrl":"https://[instance].atlassian.net/images/icons/priorities/medium.svg",
"name":"Medium",
"id":"3"
},
"issuetype":{
"self":"https://[instance].atlassian.net/rest/api/2/issuetype/10009",
"id":"10009",
"description":"",
"iconUrl":"https://[instance].atlassian.net/secure/viewavatar?size=xsmall&avatarId=10303&avatarType=issuetype",
"name":"Testing Issue",
"subtask":true,
"avatarId":10303
}
}
}
],
"reporter":{
"self":"https://[instance].atlassian.net/rest/api/2/user?username=",
"name":"",
"key":"",
"accountId":"",
"emailAddress":"",
"avatarUrls":{
},
"displayName":"",
"active":true,
"timeZone":"America/New_York"
},
"customfield_10000":"2017-09-01T12:35:54.706-0400",
"customfield_10001":null,
"customfield_10200":null,
"customfield_10400":null,
"customfield_10004":null,
"environment":null,
"duedate":null,
"votes":{
"self":"https://[instance].atlassian.net/rest/api/2/issue/DEV-3089/votes",
"votes":0,
"hasVoted":false
}
}
}
]
}
Inspect the response for the /search endpoint again. On a completely empty JIRA Cloud instance I created a Project, one Issue and added a Sub-task for it. Calling the /search endpoint returns a list with two Issues (so, the Issue itself and its Sub-task) and for both there's a field called labels with an array of all the Labels attached to it.
The following is an abbreviated response with all unrelated data removed.
{
  "startAt": 0,
  "maxResults": 50,
  "total": 2,
  "issues": [
    {
      "key": "TEST-1",
      "fields": {
        "labels": []
      }
    },
    {
      "key": "TEST-2",
      "fields": {
        "parent": {
          "key": "TEST-1"
        },
        "labels": [
          "VOILA"
        ]
      }
    }
  ]
}
EDIT
After looking at the response: yes, the array in subtasks is really simple and cannot be separately expanded. You need to do the search, then parse out all the subtasks that you're interested in, and either do
a separate /issue/[key] request for each one
a /search for those specific keys
After doing some further research I found a better way to do this. The project query still doesn't return the subtask labels, but instead of looping over each subtask and sending a separate request for each, you can do one API call using JQL like this:
https://[instance].atlassian.net/rest/api/latest/search?jql=project=[project] AND KEY IN ([comma separated list of tickets])&fields=labels
The
&fields=labels
part drastically reduces the amount of information returned. So now I can just do a total of two calls and get everything I need. :)
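For illustration, here is a rough sketch of the two-call flow using fetch; the instance URL, auth handling, and helper names are placeholders rather than anything from the original post:

const base = 'https://your-instance.atlassian.net/rest/api/2';

// Hedged sketch: call 1 finds the project's tickets (with subtask keys),
// call 2 fetches only the labels for those subtask keys.
async function getSubtaskLabels(project, fixVersion, authHeader) {
  const headers = { Authorization: authHeader };

  const search = await fetch(
    base + '/search?jql=' + encodeURIComponent(
      'project=' + project + ' AND fixVersion=' + fixVersion),
    { headers }
  ).then(r => r.json());

  const subtaskKeys = search.issues.flatMap(issue =>
    (issue.fields.subtasks || []).map(st => st.key));

  const labels = await fetch(
    base + '/search?jql=' + encodeURIComponent(
      'key in (' + subtaskKeys.join(',') + ')') + '&fields=labels',
    { headers }
  ).then(r => r.json());

  return labels.issues.map(i => ({ key: i.key, labels: i.fields.labels }));
}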
Wanted to post this in case anyone runs into a similar situation.

Use mongoid to count array size with aggregate

I'm trying to translate an aggregation from the MongoDB shell into Ruby code that uses Mongoid as the ODM.
I have some documents like this (very simplified example):
{
  "name": "Foo",
  "tags": ["tag1", "tag2", "tagN"]
},
{
  "name": "Bar",
  "tags": ["tagA", "tag2"]
},
...
Now I'd like to get all documents with the name field and the total number of tags for each.
In the MongoDB shell I can achieve it using the aggregation framework like this:
db.documents.aggregate(
  {$project: {name: 1, tags_count: {$size: "$tags"}}}
)
And it will return:
[{"name": "Foo", "tags_count": 3},
{"name": "Bar", "tags_count": 2}]
Now for the frustrating part: I'm trying to implement the same query inside a Rails app using Mongoid as the ODM.
The code looks like this (run in the Rails console):
Document.collection.aggregate(
[
{'$project': {name: 1, tags_count: {'$size': '$tags'}}}
]
).to_a
And it returns the following error:
Mongo::Error::OperationFailure: The argument to $size must be an Array, but was of type: EOO (17124)
My question is: how can I make Mongoid understand that $tags refers to the correct field? Or what am I missing in the code?
Thanks
It looks like some of your data does not consistently have an array in that field. For this you can use $ifNull to substitute an empty array where none is found and thus return a $size of 0:
Document.collection.aggregate(
[
{'$project': {name: 1, tags_count: {'$size': { '$ifNull': [ '$tags', [] ] } } }}
]
).to_a
Alternatively, you could simply skip documents where the field is not present at all using $exists:
Document.collection.aggregate(
[
{'$match': { 'tags': { '$exists': true } } },
{'$project': {name: 1, tags_count: {'$size': '$tags'}}}
]
).to_a
But of course that will filter those documents from the selection, which may or may not be the desired effect.

Restricting results of $expand parameter in SensorThings API

I am attempting to truncate the results of an $expand parameter in the SensorThings API, e.g.
http://example.org/v1.0/Things?$expand=Datastreams
However, $top only restricts the trunk of the query (e.g. Things). Is there a way to truncate the results of the 'leaves' of an $expand?
In this case, server-side pagination controls the 'leaves' of an $expand.
For example, if the service limits each response to 100 entities and the expanded entities (or the collection) contain more than 100, the service will return the top 100 entities following a service-defined order. An @iot.nextLink will also be returned, so that the client knows how to fetch the next 100 entities (i.e., the next page). Using the above query as an example, the nextLink to retrieve the next page of expanded Datastreams will be:
Datastreams@iot.nextLink: "http://URL_to_retrieve_the_next_page/"
You can use this OGC SensorThings sandbox to see an example return of $expand: http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams?$expand=Observations
The following JSON shows an example response for that query (a small sketch of following the nextLinks comes after it):
{
  "@iot.count": 1,
  "value": [{
    "@iot.id": 8,
    "@iot.selfLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams(8)",
    "description": "Daily Water level",
    "observationType": "http://www.opengis.net/def/observationType/OGC-OM/2.0/OM_Observation",
    "unitOfMeasurement": {
      "symbol": "m",
      "name": "meter",
      "definition": "https://en.wikipedia.org/wiki/Metre"
    },
    "Observations@iot.nextLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams(8)/Observations?$top=3&$skip=3",
    "Observations@iot.count": 1826,
    "Observations": [{
      "@iot.id": 1835,
      "@iot.selfLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Observations(1835)",
      "phenomenonTime": "2015-12-30T16:00:00.000Z",
      "result": "1375.44",
      "resultTime": null,
      "Datastream@iot.navigationLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Observations(1835)/Datastream",
      "FeatureOfInterest@iot.navigationLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Observations(1835)/FeatureOfInterest"
    }],
    "ObservedProperty@iot.navigationLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams(8)/ObservedProperty",
    "Sensor@iot.navigationLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams(8)/Sensor",
    "Thing@iot.navigationLink": "http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams(8)/Thing"
  },{},{}]
}
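If a client needs more than the first page of expanded Observations, it can keep following these nextLinks. A small illustrative sketch with fetch (assuming the sandbox URL above and standard @iot.nextLink paging; error handling omitted):

// Hedged sketch: collect all Observations of the first expanded Datastream
// by following Observations@iot.nextLink, then @iot.nextLink, page by page.
async function getAllObservations(expandUrl) {
  const first = await fetch(expandUrl).then(r => r.json());
  const datastream = first.value[0];
  let observations = datastream.Observations || [];
  let next = datastream['Observations@iot.nextLink'];

  while (next) {
    const page = await fetch(next).then(r => r.json());
    observations = observations.concat(page.value);
    next = page['@iot.nextLink'];
  }
  return observations;
}

// Example usage against the sandbox service mentioned above:
// getAllObservations('http://scratchpad.sensorup.com/OGCSensorThings/v1.0/Datastreams?$expand=Observations');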

Using the RestKit object manager

I want to use RestKit to consume a web service.
My collections endpoint returns something like this:
{
  "meta": {
    "limit": 20,
    "next": "...",
    "offset": 0,
    "previous": null,
    "total_count": 23
  },
  "objects": [
    "..."
  ],
  "requested_time": 1396875600.810225
}
The key "objects" can return an array of one of many types of elements. But always the same for a given collection.
How can I map this response with the ObjectManager?
To complete your object manager configuration, you create a number of response descriptors. These descriptors match against path patterns of the response URL and include the mapping to be used to process the response content.
In this way you will have a different response descriptor for each path pattern that returns different content, and the linked mapping will instruct RestKit on what type of object to create and how to populate it.
