User delta query confusion - microsoft-graph-api

When using the deltaLink retrieved from a request to the users/delta endpoint on the v1.0 Microsoft Graph API, do subsequent requests (each using the deltaLink returned by the previous request) return changes since the initial request, or since the last request?
I am confused as to what the intended scenario is for the endpoint. Is the token a long-lived thing that you keep reusing in your application to track changes from some initial sync, or is it a temporary way to see changes between two points in time, after which the token is discarded?
Right now it appears to "accumulate" changes since the token was created, which I guess makes sense, but if left alive for a long time, requests could potentially accumulate a lot of changes.

When you call a deltaLink, it returns everything that has changed since that deltaLink token was issued. The results will also include a new deltaLink token that you use for your next poll of the system.
In other words, it is similar to how source control systems like Git work. When you execute git pull, it looks at the last commit in your local repository and then pulls down all of the changes that have occurred since that commit. In this analogy, Git's commit id is the delta token.
For example, you can start syncing Users "from now" using this query:
https://graph.microsoft.com/v1.0/users/delta
The results will look something like this:
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#users",
"#odata.deltaLink": "https://graph.microsoft.com/v1.0/users/delta?$deltatoken=4Pqov5cPSZPKjBZh-vGbekLcJ2oUTc1kzqA0XhF-lJrEmf3B2i-HyS72r9jBIqHuZUzdMopk5VyHnAI6_qh59FjavhOmaTmQW4JPL2bLKu5LpQ3m_zMSsp6M3Os03KOgaeay0zwAa08QefM1ArkZzkB_pUmZyV6MIS1eN7JHnBOgotBPFyPb-dnjAcheWE9W0HkUR87kf3jsuA4Ja4QTRnN3Hj_4eYdzoYxLWi54Aq2bHWTbDlPtl76M8Bhw8jiq37Y4R-G7G0eZNuRt43CBY0l3XliXXR5Ubi4ZUGApDAGVSPNc_VdHV4H3nbaB5qvwZZ7tAfqZB0-06-ZI-a0y2hxAPJLnI-iTB2PXdqxnaZn4f26k0khW52C57czh9KOjfE3nYV6pwEDFRFu-qj4062eMQwo2H6yoCLysv-C-XIOK3nDTaR3BPDOPGqNbnZkmB33-MXe9gcCPojAoU9ql95Z9de5QPnqezWVfBhXj_sRv3RlQQfCkGJHg3ZTVkdA475xJuHOhy2po38KlT3FmN0rzg0jOjvfPyTGrRI24C58ushGouckqHcCQllr4Dby9ECsPjVcNFEklSddjllaRMOkpUilecEdHnxsy0zH60bKkc2-6fcUvYuya8y4-7IySvbpk241ldxvoO9EQpDnUCdh3GgxKnYNuLtOiqiGdYVgGgTJa-iBQ1xVghcpsEMD9NqrByB3mSmy9rRKN3WP_C_HQBnEpV7Z3dvu-8ZxewMEhEObhhv8H_15msP4Bm6KvfuO-0EyNaMy_OXvGKpdkczSVQsdZ4jDmsAw_itqqtZmNoa3URjxjt0KAYNo9IrBXBx9yUGt6K_sY1xybfLxwBYOGOaV7zNd7XFaVXL8OM1hG2JGF0H4dM33uppuX5pWrrU0NBDcf9nbNkJf53fec-R1aX9BmEaAtv2xblxL75Kl0j8JKay2iBM7JaebqbGaZV6es9HUVFCIB6mxcNVDmR55U2Tel_D1TZ44eOwoZtvZvLPdQvQKuDYnwM1Yt9JJWXiGKigKi515UBPI52jhSnL1cY8VFnVDz4b81WgaRESSPipzC7fixQ94lvSaQ1MqJXRoWcm3LnjBMb7Z_aF4H_OFJNrGW-v8O__ZozNHUq04-v7rScKu3Cu64bISp1Z18DUeuRn9_Nc3vhXwhi2YlXxsX0mn-iKYjlGgE8MFSkcChpbtFLN3Wejublt3wlZ4yHQhySLDxFkgiA95BZLEXhdyVbfTzdW9cCUf9beltUT8qgcoLGH4lGdo2qjZqmUOZ8vfbPYCiExmOkN0qazvkGUs-VOcl37sccB2VEgqWodzy_Haq9HJJgAhFt_GeuL9VG58cPZPfKi0Q6DImdBE3p7NC36VvpgTbmz7G8N2V_BV8HC7e1lTiaLBkxXMEgcn_Uzl2gqPqc3CJxd3gTm7Z6MxgQXFZynwTXxo1CXSoSuLhMaND79EBrLOa11vd8aYmDwl5xuAXJZ7hZ0fVAuYe9JEUE2BvJYgBUijNi7ug8-_E73kcRQL5K2KbdTtUZRqFDxEnOBpC1adc1Pn633gC18z6Itzy5j6IXlutTBdlrAM-urzcxHX364VhnXwtWhLSEhB0xnBj-PejvfdzyuC3hTW6cLI-CttpgH-oMNDcrweeCB8NGOJYyxwFsYDZ9X3fwDYEhIwhUwdJOqG_10KJYVaRLUvdhgTkaEQdeRrJc2fyyDXAhJ7aIkr0PNy0ue1yn346cypSZw8BRAx6i3d4BTKgCxwnOqK8x6iMOb1Ad2IomKHoxH1_PkPIgMEz1mDcfipvG1IImuMj333wNe8tbPuPBsmQq-t-4GF4mH3sQkvt3pMdcnb4ITqLaZ5xR5Hxbyig6bENFyMR_5w_Q3LugXIIRknWB21jxWbcMOY1ggaYZno-MBFA2ueFDox8ImN4A9orD-8XpgbSqywqv5dnh4rCDoate27oMkz8NEGW6UdZUOSBrC0k91FKO2yHUa8KDA3tTKMIgDU-ynS0hOzfc_4cUICDpQJJOv2tXsWigkuVshJc-1733CXF3ptA8llB9dFPt6_-oWKAjxvHRq6_X8mX2Siz9D2kTXpfUS-AKmEpKcphNWzpIg5K8iKMy-xB_insLCYbkjFkxU5Q6-VnjS6KRolOXaRSejh2faVxFbgIFrOP6Ns5inBJZUemAPV7TTxY_RIGh3f4nMbzxU2P5doyYP7wTr_aiLDw1uJrDxjXRLPzLseS1uizlLpZMPw-QIAUARFyUzj7k4U9bzXoX_9URA5acvETVZOebdUbR3kCOOoBMmbdELX7uUkICPu_T0fsbGLAKA4wZSIIYdqCUydMQk9NofgvE28v-NB2g4-fFPuFANn0H4b0ktFrBT8wUO61ElrnwsL1tLyx6kP4s1y6OaH_ARTpb9StCkcvbO2bh4HYLj09xnxgbx_4RPUI6cag6mjCRhNTIipj3feZ0pBEVlL7NiTavSV2Ho2gGJujurYSE4cdF-Gjtraeulj0ur1buEwVQX8LbLWbO76X4cQLhE7G2Yf7GV8tjW2DX4TdG9yRciPaBntE0Imxe6IKZnSEEyqXMsZIRWLBfI3WIiVka-QD9lJlPZAdkNMb5VqQxyqruiCZ3nK-R7njc1EoVDejJEDOGyAljhF_kvcxsd_Hu0G8QHi0JtXXm8Tm9hH1O7EtEIDQAHR0tt6ihHixK2IYdfmoe3EIHJ_VmlC37RqTHf2ru2FgkoutuNLII9tYsMhWEEin-tgFwdCvUA0ONHytNY2I0EFKkx56t9JGoupL-lwpnhtnqpnVAPAgAk93D01fBz2NSNlXs_z3E8SOmXud35RNCG62i_nmzHICz_WRwKMHEbqelSst9U2h5FY.uzRUQfIEYUBmAFBUnNWTJn2yfL9toRZ2_VNuoRrA7jg",
"value": []
}
The value of @odata.deltaLink is a URI that points to the state of Users as of the specific moment the URI was generated. If nothing has changed and you follow @odata.deltaLink, you will receive an empty data set and a new URI. If you then add a new User, calling the deltaLink would return only that new user.
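To make the polling pattern concrete, here is a rough sketch in Python using the requests library (token acquisition and the apply_change callback are placeholders, not part of the Graph API):

import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def sync_users(start_url, access_token, apply_change):
    # Follow @odata.nextLink pages until an @odata.deltaLink appears, then return it.
    # start_url is either GRAPH + "/users/delta" for the initial sync,
    # or a previously saved deltaLink for an incremental poll.
    headers = {"Authorization": "Bearer " + access_token}
    url = start_url
    while True:
        page = requests.get(url, headers=headers).json()
        for user in page.get("value", []):
            apply_change(user)                      # your own persistence logic (placeholder)
        if "@odata.nextLink" in page:               # more pages in this round
            url = page["@odata.nextLink"]
        else:
            return page["@odata.deltaLink"]         # store this for the next poll

# delta_link = sync_users(GRAPH + "/users/delta", token, save_user)   # initial sync
# ...later...
# delta_link = sync_users(delta_link, token, save_user)               # only changes since the last poll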

Related

Create an automatic status change for room booking in Spreadsheet

Trying to set up a Google Sheet that can automatically determine a request's status based on previous requests.
So the flow starts with Zapier pushing information to this sheet, into a specific row.
Data Entry Sheet
*The data here will be replaced every time there is a new booking request, so the number of cells will always be the same.
The approval system uses a "first come, first served" rule: as long as there is no previous request that overlaps the time in the new request, the sheet can directly say it's approved; otherwise it is rejected.
The problem is that we need to know whether there is already another approved request that was submitted before the request we want to check. This is the sheet we use to track all previous requests:
Sheet of Previous Request
Is there any way we can compare the data from the "New Request" sheet to all of the previous requests in the "Reservation" sheet?
Also, another question: can we add every request that comes in to the list of previous requests?
Thank You!
Basically I haven't tried any combination of functions yet; I have tried FILTER and array formulas individually, but they don't work.
This is a new project so there is no previous result I can show, but the expected result is that every request has its request status filled in so Zapier can return that status to its requestor.
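For what it's worth, the "first come, first served" check described above boils down to an interval-overlap test. A rough sketch of that rule in Python (not a Sheets formula; the field names room/start/end/status are hypothetical):

def overlaps(start_a, end_a, start_b, end_b):
    # Two bookings clash if each starts before the other ends.
    return start_a < end_b and start_b < end_a

def decide_status(new_request, previous_requests):
    # Reject only if an earlier approved request for the same room overlaps in time.
    for prev in previous_requests:
        if (prev["status"] == "Approved"
                and prev["room"] == new_request["room"]
                and overlaps(new_request["start"], new_request["end"],
                             prev["start"], prev["end"])):
            return "Rejected"
    return "Approved"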

Get all Durable Function instances over a time period

I have been trying to use the Durable Functions HTTP API Get Instances call to get a list of Completed/Failed/Terminated instances to delete over a given time period, batched in groups of 50: /runtime/webhooks/durabletask/instances?code=xxx&createdTimeFrom=2021-11-06T00:00:00.0Z&createdTimeTo=2021-11-07T00:00:00.0Z&top=50
As per the documentation, if the response contains the x-ms-continuation-token header then there are more results, and I should make another call adding the x-ms-continuation-token to the request headers, even if I get no results in the body (the first few calls always seem to return no results, then I start getting results for a while, before dropping back to none). My issue is that this never seems to end: there is always a continuation token, even after running for 20+ minutes and making hundreds of calls for the same date range. This doesn't happen with the Durable Functions Monitor extension for VS Code.
What am I missing from the documentation that will tell me when to stop looking for more records if the x-ms-continuation-token header is always present?
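For reference, a sketch in Python of the paging loop being described (the function app host is a placeholder; the query string mirrors the one above):

import requests

BASE = "https://<myfunctionapp>.azurewebsites.net/runtime/webhooks/durabletask/instances"  # placeholder host

def list_instances(code, created_from, created_to, top=50):
    params = {"code": code, "createdTimeFrom": created_from,
              "createdTimeTo": created_to, "top": top}
    headers = {}
    while True:
        resp = requests.get(BASE, params=params, headers=headers)
        for instance in resp.json():               # may be an empty list on some pages
            yield instance
        token = resp.headers.get("x-ms-continuation-token")
        if not token:                               # the documented stop condition...
            break
        headers["x-ms-continuation-token"] = token  # ...which, per the question, is never reached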

Testing CKErrorChangeTokenExpired handling by generating known expired CKServerChangeToken

The header comment on CKFetchDatabaseChangesOperation fetchDatabaseChangesCompletionBlock states:
"If the server returns a CKErrorChangeTokenExpired error, the previousServerChangeToken value was too old and the client should toss its local cache and
re-fetch the changes in this record zone starting with a nil previousServerChangeToken."
I would like to test this scenario, so I would like to generate an expired CKServerChangeToken that I can set as the previousServerChangeToken on a CKFetchDatabaseChangesOperation.
I added an init method from the private header:
@interface CKServerChangeToken (Private)
- (id)initWithData:(NSData *)data;
@end
And used it as follows:
CKServerChangeToken *knownExpiredToken = [[CKServerChangeToken alloc] initWithData:[[NSData alloc] initWithBase64EncodedString:@"AQAAAVl57tUGHv6sgNT9EeaTcQCM+sDHHA==" options:0]];
That string is a valid change token returned from a request, and I have tried, unsuccessfully, modifying it, e.g. reducing numbers that I see incrementing to lower ones. I have, however, managed to get other strange invalid-argument errors, such as "continuation marker missing". I would be grateful if a CloudKit engineer has any suggestions, thanks.
This is an old post, but I've unintentionally generated .changeTokenExpired errors by doing this:
1. Fetch changes and save your change token.
2. In the dashboard, delete the zone you're syncing with.
3. Create a new zone with the same name.
4. In your code, fetch changes again, using the change token from step 1.
Since the token refers to a different zone, it doesn't make any sense to CloudKit, which returns a .changeTokenExpired error.
It would be really nice if Apple provided a way to do this, because it's an absolutely crucial situation to test.
I couldn't find any recommended way to do this, so I used tokens generated for other zones. If you've got multiple zones like I did, you can effectively cross-wire the token retrieval. It's not an elegant solution, but it does work... :)

Asana tag API query often misses newly created Tags

When we create projects via the API, the newly created project is immediately returned in both the web app and the API.
But a tag created via the API ("https://app.asana.com/api/1.0/tags") is often returned only after two or three GET requests. It also needs a refresh in the web app; the live sync does not pick up new tags the way it does projects.
These late returns really affect the user interaction. I follow the same workflow that's used for creating and adding projects, but tags feel a bit laggy. Am I missing anything?
The answer is that tags which aren't associated with any tasks are - unfortunately - hidden in the app, and consequently also in the API. As you discovered, you can get the ID back from the POST used to create the tag and then associate it with a task from there (since there's little purpose in creating a tag without associating it with something, that shouldn't typically be a problem, but it is clunky). We are looking at changing our data model for tags to be a bit more intuitive in the future, but that's still a ways off, so this is the reality for the foreseeable future.
The newly created tag is sometimes missing from the GET /tags API, but the HTTP response returned after creating the tag with POST /tags will contain the id, name and other properties of the newly created tag, so we can add the new tag from that response.
# Create the tag (authentication flags omitted)
curl -X POST https://app.asana.com/api/1.0/tags \
  -d "name=fluffy" \
  -d "workspace=14916"

# Response
HTTP/1.1 201
{
  "data": {
    "id": 1771,
    "name": "fluffy",
    ...
  }
}
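In other words, rather than polling GET /tags until the new tag shows up, use the body of the POST response directly. A rough sketch in Python with the requests library (the personal access token, workspace and task identifiers are placeholders; older API versions return id where newer ones return gid):

import requests

ASANA = "https://app.asana.com/api/1.0"

def create_and_attach_tag(token, workspace, task, name):
    headers = {"Authorization": "Bearer %s" % token}
    # The POST response already contains the new tag's id and name.
    created = requests.post(ASANA + "/tags", headers=headers,
                            data={"name": name, "workspace": workspace}).json()["data"]
    # Attach it to a task right away so it is no longer hidden.
    requests.post(ASANA + "/tasks/%s/addTag" % task, headers=headers,
                  data={"tag": created["id"]})
    return created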

How to stay RESTful with a complex API

My setup: Rails 2.3.10, Ruby 1.8.7
I need to implement an API that is essentially a GET but, depending on the date, could involve DELETE and POST actions as well. Let me explain: for a particular day, the API needs to add 10 items to one table, randomly selected from another table, but this is only done once a day. If the items already added are from the previous day, then the API needs to delete those items and randomly add 10 new ones. If multiple calls are made to the API on the same day, then it's just a GET after the initial creation. Hope this makes some sense.
How would I implement this as a RESTful API, if that is at all possible?
How about?
GET /Items
If the next day has arrived, then generate 10 new items before returning them. If the next day has not arrived, then return the same 10 items you previously returned. There is no reason the server cannot update the items based on a GET. The client is not requesting an update so the request is still considered safe.
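By way of illustration, a sketch of this regenerate-on-GET idea in Python (rather than the asker's Rails 2.3 stack; SOURCE_TABLE and the item values are made up):

from datetime import date
import random

SOURCE_TABLE = ["item-%d" % i for i in range(100)]   # stand-in for the table being picked from
_items, _generated_on = [], None

def get_items():
    # Safe GET: refreshing the day's picks is a server-side side effect,
    # not something the client asked for, so the request is still "safe".
    global _items, _generated_on
    if _generated_on != date.today():                # the next day has arrived
        _items = random.sample(SOURCE_TABLE, 10)     # replace yesterday's 10 picks
        _generated_on = date.today()
    return _items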
Not sure if I'm understanding you correctly, but just by looking at this, all I can think is the following: what a horrible thing, to perform an add which, depending on what is added, performs a delete. No disrespect, but seriously. Or maybe it is just the way you are describing it.
Whatever the case, if you want to have a RESTful API, then you have to treat GET and PUT distinctly.
I don't think you have a clear use-case picture of how your API (or your system, for that matter) is to be done. My suggestion would be to re-model this as follows:
Define a URI for your resource, say /random-items
a GET /random-items gets you between 0 and 10 items currently in the system.
a PUT /random-items with an empty body does the following:
delete any random items added on or before yesterday
add as many random items as necessary to complete 10
an invocation to DELETE /random-items should return a 405 Method Not Allowed HTTP error code.
an invocation to POST /random-items should add no more than 10 items, deleting as needed.
/random-items/x is a valid URI so long as x is one of the items currently under /random-items.
A GET to it should return a representation for it or a 404 if it does not exist
A DELETE to it deletes it from under /random-items or 404 if it does not exist
A PUT to it should change its value if it makes sense (or return a 405)
A POST to it should return a 405 always
That should give you the skeleton of a sorta-RESTful API, as sketched below.
However, if you insist, or need to overload GET so that it performs the additions and deletions behind the scenes, then you are making it non-RESTful.
That in itself is not a bad thing if you legitimately have a need for it (as no architectural paradigm is universally applicable), but you need to understand what RESTful means and when/why/how to break it.
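A minimal sketch of that verb mapping in Python/Flask (not the asker's Rails 2.3 stack; the in-memory storage and candidate pool are made up, and only the verb behaviour matters):

from datetime import date
from uuid import uuid4
import random
from flask import Flask, abort, jsonify

app = Flask(__name__)
POOL = ["candidate-%d" % i for i in range(100)]      # stand-in for the "other table"
items = {}                                           # id -> {"value": ..., "added_on": ISO date}

def _refresh():
    # Delete items added on or before yesterday, then top back up to 10.
    today = date.today().isoformat()
    for key in [k for k, v in items.items() if v["added_on"] < today]:
        del items[key]
    while len(items) < 10:
        items[str(uuid4())] = {"value": random.choice(POOL), "added_on": today}

@app.route("/random-items", methods=["GET"])
def list_items():
    return jsonify(list(items.values()))             # 0..10 items currently in the system

@app.route("/random-items", methods=["PUT", "POST"])
def regenerate():
    _refresh()                                       # empty body; prune stale picks, complete to 10
    return jsonify(list(items.values()))

@app.route("/random-items", methods=["DELETE"])
def delete_collection():
    abort(405)                                       # deleting the whole collection is not allowed

@app.route("/random-items/<item_id>", methods=["GET"])
def show_item(item_id):
    if item_id not in items:
        abort(404)
    return jsonify(items[item_id])

@app.route("/random-items/<item_id>", methods=["DELETE"])
def delete_item(item_id):
    if item_id not in items:
        abort(404)
    del items[item_id]
    return "", 204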
