Microsoft recommends using the delta function in combination with subscriptions/notifications to synchronize a mailbox. So my plan is:
Create subscription
Receive notification about new mail in inbox
Use delta function to get latest changes in the inbox
My mailbox already has several thousand messages. If I run the query
https://graph.microsoft.com/v1.0/users/{id}/mailFolders/inbox/messages/delta
It will return an @odata.nextLink with a $skiptoken parameter in the response many times, and only after I have paged through all the thousands of emails in my mailbox will I receive a response with a $deltatoken to track new changes.
Is there a way to get a deltatoken after the first request? I don't want to synchronize the old messages. I want to skip all the old messages in the inbox and have a fresh start.
Today the delta query functionality does not support this scenario. To request new features, please post ideas to UserVoice.
This is supported for some endpoints. You can use $deltaToken=latest to get just a deltaToken without any resource data. It's not, as far as I can tell, available for mailboxes… but who knows, maybe it will be soon.
This is not documented anywhere in the documentation for the specific APIs that do support it, but is instead documented in the Overview for change tracking. Why? Because Microsoft wants you to be sad all the time.
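For an endpoint that does support it, the request is just the ordinary delta call with $deltaToken=latest appended, and the response carries no resource data, only a fresh @odata.deltaLink. Here is a rough Swift sketch against the users delta endpoint (one of the documented supported cases); the access token placeholder and the JSON handling are illustrative only:

```swift
import Foundation

// Sketch only: /users/delta is one of the endpoints documented to accept
// $deltaToken=latest; the mail messages delta endpoint may not honor it (see above).
// `accessToken` is a placeholder for a valid OAuth bearer token.
let accessToken = "<access-token>"
var components = URLComponents(string: "https://graph.microsoft.com/v1.0/users/delta")!
components.queryItems = [URLQueryItem(name: "$deltaToken", value: "latest")]

var request = URLRequest(url: components.url!)
request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, _, error in
    guard let data = data, error == nil else { return }
    // No resource data comes back, only an @odata.deltaLink whose $deltatoken marks
    // "now"; persist it and use it the next time a change notification arrives.
    if let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
       let deltaLink = json["@odata.deltaLink"] as? String {
        print("Save for the next sync:", deltaLink)
    }
}.resume()
```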
I've been using Google refresh/access token in my in-app purchase verification on server-side.
When refreshing the token, I get a new access token, valid for 3600 secs.
Why can't I just re-ask for a new access token when needed, rather than storing and re-using it?
Why can't I just re-ask for a new access token when needed, rather than storing and re-using it?
The simple answer is: you can. Technically speaking, you could just ask for a new access token every time you want to make a request. As long as you store your refresh token, you will be able to access the server whenever you need.
I think a better question would be: why wouldn't you want to? If you are using an application and that application is running for 30 minutes, there is really no reason to request a new access token when you can just use the one that was returned to you the first time. You are making an extra round trip to the server with every request you make if you are also requesting an access token first.
However, if you have an application that, say, runs once every five minutes as a cron job or a Windows service, then one could argue that it's a pain trying to keep track of it. Actually it's not: you could just store the date it was generated and check the date before using it.
Now, Google is not going to stop you from requesting a new one every time, so feel free. However, I can't remember whether Google gives you a new access token or just returns the same one it generated last time. If they generate a new one every time, remember that they will all work for an hour, so don't leave them lying around.
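A minimal version of that "store the date it was generated" approach might look like this; the token endpoint and JSON field names follow Google's OAuth 2.0 documentation, while the type and helper names here are invented for the sketch:

```swift
import Foundation

// Rough sketch of caching the access token together with its expiry. The form
// values should be percent-encoded in real code.
struct CachedToken {
    let value: String
    let expiresAt: Date
    // Treat the token as stale a minute early to absorb clock skew.
    var isUsable: Bool { Date() < expiresAt.addingTimeInterval(-60) }
}

final class GoogleTokenProvider {
    private let clientID: String
    private let clientSecret: String
    private let refreshToken: String
    private var cached: CachedToken?

    init(clientID: String, clientSecret: String, refreshToken: String) {
        self.clientID = clientID
        self.clientSecret = clientSecret
        self.refreshToken = refreshToken
    }

    // Returns the cached access token if it is still fresh, otherwise refreshes it.
    func accessToken() async throws -> String {
        if let cached = cached, cached.isUsable { return cached.value }

        var request = URLRequest(url: URL(string: "https://oauth2.googleapis.com/token")!)
        request.httpMethod = "POST"
        request.setValue("application/x-www-form-urlencoded", forHTTPHeaderField: "Content-Type")
        request.httpBody = [
            "client_id=\(clientID)",
            "client_secret=\(clientSecret)",
            "refresh_token=\(refreshToken)",
            "grant_type=refresh_token"
        ].joined(separator: "&").data(using: .utf8)

        let (data, _) = try await URLSession.shared.data(for: request)
        guard let json = try JSONSerialization.jsonObject(with: data) as? [String: Any],
              let token = json["access_token"] as? String else {
            throw URLError(.cannotParseResponse)
        }
        let expiresIn = json["expires_in"] as? Double ?? 3600
        cached = CachedToken(value: token, expiresAt: Date().addingTimeInterval(expiresIn))
        return token
    }
}
```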
As described in https://developer.apple.com/reference/cloudkit/ckserverchangetoken, the CloudKit servers return a change token as part of the CKFetchRecordZoneChangesOperation callback response. For what set of subsequent record fetches should I include the given change token in my fetch calls?
Only fetches to the zone we fetched from?
Or would it apply to any fetches to the db that the zone is in? Or perhaps the whole container that the db is in?
What about app extensions? (App extensions have the same iCloud user as the main app, but have a different "user" as returned by fetchUserRecordIDWithCompletionHandler:, at least in my testing.) Would it be appropriate to supply a change token from the main app in a fetch call from, say, a Messages app extension? I assume not, but would love to have a documented official answer.
I, too, found the scope of CKServerChangeToken a little unclear. However, after reviewing the documentation, it appears that both CKFetchDatabaseChangesOperation and CKFetchRecordZoneChangesOperation provide and manage their own server change tokens.
This is particularly useful if you decide to follow the CloudKit workflow Dave Browning outlines in his 2017 WWDC talk when fetching changes (around the 8 minute mark).
The recommended approach is to:
1) Fetch changes for a database using CKFetchDatabaseChangesOperation. Upon receiving the updated token via changeTokenUpdatedBlock, persist this locally. This token is 'scoped' to either the private or shared CKDatabase the operation was added to. The public database doesn't offer change tokens.
2) If you receive zone IDs via the recordZoneWithIDChangedBlock in the previous operation, this indicates there are zones with changes you can fetch using CKFetchRecordZoneChangesOperation. This operation takes in its own unique server change token via its rather cumbersome initializer parameter: CKFetchRecordZoneChangesOperation.ZoneConfiguration. This token is 'scoped' to that particular CKRecordZone. So, again, when receiving an updated token via recordZoneChangeTokensUpdatedBlock, it needs persisting locally (perhaps with a key which relates to its CKRecordZone.ID).
The benefit here is that it probably minimises the number of network calls. Fetching database changes first prevents making calls for each record zone if the database doesn't report any changed zone ids.
Here's a code sample from the CloudKit team which runs through this workflow. Admittedly, a few of the APIs have since changed, and the comments don't make the 'scope' of the server change tokens explicit.
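For orientation, here is a condensed sketch of those two steps in current Swift. The TokenStore type is a made-up stand-in for whatever persistence you use, and error handling and the "more coming" flags are omitted:

```swift
import CloudKit

// Minimal stand-in for whatever persistence you use; the names are made up.
enum TokenStore {
    static var databaseToken: CKServerChangeToken?
    static var zoneTokens: [CKRecordZone.ID: CKServerChangeToken] = [:]
    static func saveDatabaseToken(_ token: CKServerChangeToken) { databaseToken = token }
    static func saveZoneToken(_ token: CKServerChangeToken, for zoneID: CKRecordZone.ID) { zoneTokens[zoneID] = token }
    static func zoneToken(for zoneID: CKRecordZone.ID) -> CKServerChangeToken? { zoneTokens[zoneID] }
}

// Step 1: database-level changes (private or shared database only), then
// Step 2: per-zone changes, each zone keyed to its own change token.
func fetchChanges(in database: CKDatabase) {
    var changedZoneIDs: [CKRecordZone.ID] = []

    let dbOp = CKFetchDatabaseChangesOperation(previousServerChangeToken: TokenStore.databaseToken)
    dbOp.recordZoneWithIDChangedBlock = { zoneID in changedZoneIDs.append(zoneID) }
    dbOp.changeTokenUpdatedBlock = { token in TokenStore.saveDatabaseToken(token) }
    dbOp.fetchDatabaseChangesCompletionBlock = { token, _, error in
        guard error == nil else { return }
        if let token = token { TokenStore.saveDatabaseToken(token) }
        guard !changedZoneIDs.isEmpty else { return }

        var configurations: [CKRecordZone.ID: CKFetchRecordZoneChangesOperation.ZoneConfiguration] = [:]
        for zoneID in changedZoneIDs {
            let config = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
            config.previousServerChangeToken = TokenStore.zoneToken(for: zoneID)
            configurations[zoneID] = config
        }

        let zoneOp = CKFetchRecordZoneChangesOperation(recordZoneIDs: changedZoneIDs,
                                                       configurationsByRecordZoneID: configurations)
        zoneOp.recordChangedBlock = { record in
            // Apply the changed record to local storage here.
            print("changed:", record.recordID)
        }
        zoneOp.recordZoneChangeTokensUpdatedBlock = { zoneID, token, _ in
            if let token = token { TokenStore.saveZoneToken(token, for: zoneID) }
        }
        zoneOp.recordZoneFetchCompletionBlock = { zoneID, token, _, _, error in
            guard error == nil, let token = token else { return }
            TokenStore.saveZoneToken(token, for: zoneID)
        }
        database.add(zoneOp)
    }
    database.add(dbOp)
}

// Usage: fetchChanges(in: CKContainer.default().privateCloudDatabase)
```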
I am writing a simple IMAP client that will be able to sync with any Google email account. I don't want to have to read the ENTIRE set of message headers on the server every time I sync in order to be assured that I do not miss something. I would prefer never to have to do that, and to rely on some field that ensures a total order. For example, I would prefer to rely on Google's extended Message ID field, or even just on Received-Date, and have my logic be: "keep reading backwards until you hit something you have previously read". But alas, it does not seem to be that simple.
What is the preferred way to do sync such that it is both efficient (in terms of time + bandwidth) and guaranteed (i.e., no missed messages)?
Thanks!
I'm trying to find my way around the OAuth spec, its requirements, and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials.
I have a set of existing APIs--some in Java, some in PHP--that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind...
Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
Given that I have securable APIs in 2 languages, do I need to implement a provider in both or is there a way to create the provider as a "front controller" that I can funnel all requests through?
When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources on each?
Thanks for your help.
Rob, not sure where you landed on this but wanted to add my 2 cents in case anyone else ran across this question.
I more or less had the same question a few months ago, after hearing about "OAuth" for the better part of a year. I was developing a REST API I needed to secure, so I started reading about OAuth... and then my eyes started to roll backwards in my head.
I probably gave it a good solid day or 2 of skimming and reading until I decided, much like you, that OAuth was confusing garbage and just gave up on it.
So then I started researching ways to secure APIs in general and began to get a better grasp of how to do that. The most popular way seemed to be sending requests to the API along with a checksum of the entire message (encoded with a secret that only you and the server know) that the server can use to decide whether the message had been tampered with on its way from the client, like so:
Client sends /user.json/123?showFriends=true&showStats=true&checksum=kjDSiuas98SD987ad
Server gets all that, looks up user "123" in the database, loads his secret key and then (using the same method the client used) re-calculates its OWN checksum given the request arguments.
If the server's generated checksum and the client's sent checksum match up, the request is OK and executed, if not, it is considered tampered with and rejected.
The checksum is called an HMAC and if you want a good example of this, it is what Amazon Web Services uses (they call the argument 'signature' not 'checksum' though).
So given that one of the key components for this to work is that the client and server have to generate the HMAC in the same fashion (otherwise they won't match), there have to be rules on HOW to combine all the arguments... and then I suddenly understood all that "natural byte-ordering of parameters" crap from OAuth: it was just defining the rules for how to generate the signature because it needed to.
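To make that concrete, here's roughly what the signing step looks like (a sketch in Swift, but the same few lines exist in PHP, Java, or anything else). The "sort the keys and join key=value with &" rule is just one convention you could pick; the only hard requirement is that client and server build the identical string:

```swift
import Foundation
import CryptoKit

// Sketch of the client-side signing step described above.
func sign(path: String, params: [String: String], secret: String) -> String {
    let canonical = params
        .sorted { $0.key < $1.key }              // fixed ordering so both sides agree
        .map { "\($0.key)=\($0.value)" }
        .joined(separator: "&")
    let stringToSign = path + "?" + canonical
    let key = SymmetricKey(data: Data(secret.utf8))
    let mac = HMAC<SHA256>.authenticationCode(for: Data(stringToSign.utf8), using: key)
    return Data(mac).base64EncodedString()
}

// Client: compute the checksum over everything except the secret itself.
let checksum = sign(path: "/user.json/123",
                    params: ["showFriends": "true", "showStats": "true"],
                    secret: "shared-secret")
// Request becomes /user.json/123?showFriends=true&showStats=true&checksum=<checksum>

// Server: look up the caller's secret, rebuild the same string from the received
// parameters (minus `checksum`), recompute the HMAC, and compare the two values.
```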
Another point is that every param you include in the HMAC generation is a value that then can't be tampered with when you send the request.
So if you just encode the URI stem as the signature, for example:
/user.json == askJdla9/kjdas+Askj2l8add
then the only thing in your message that cannot be tampered with is the URI, all of the arguments can be tampered with because they aren't part of the "checksum" value that the server will re-calculate.
On the other hand, even if you include EVERY param in the calculation, you still run the risk of "replay attacks", where a malicious middle man or eavesdropper can intercept an API call and just keep re-sending it to the server over and over again.
You can fix that by adding a timestamp (always use UTC) in the HMAC calculation as well.
REMINDER: Since the server needs to calculate the same HMAC, you have to send along any value you use in the calculation EXCEPT YOUR SECRET KEY (OAuth calls it a consumer_secret I think). So if you add timestamp, make sure you send a timestamp param along with your request.
If you want to make the API secure from replay attacks, you can use a nonce value: a one-time-use value that the server generates and gives to the client; the client uses it in the HMAC and sends back the request, and the server confirms it and then marks that nonce value as "used" in the DB, never letting another request use it again.
NOTE: Nonces are a really exact way to solve the "replay attack" problem. Timestamps are great, but because computers don't always have in-sync clocks, you have to allow an acceptable window on the server side for how "old" a request may be (say 10 mins, 30 mins, 1 hr... Amazon uses 15 mins) before you accept or reject it. In this scenario your API is technically vulnerable during that entire window of time.
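A minimal server-side freshness check along those lines might look like this (the 15-minute window just mirrors the Amazon example above; pick whatever suits your API):

```swift
import Foundation

// Reject any request whose (UTC) timestamp falls outside an acceptance window.
let acceptanceWindow: TimeInterval = 15 * 60

func isFresh(requestTimestamp: Date, now: Date = Date()) -> Bool {
    return abs(now.timeIntervalSince(requestTimestamp)) <= acceptanceWindow
}

// The timestamp must also be part of the HMAC input; otherwise an attacker could
// replay an old request with a fresh timestamp attached.
```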
I think nonce values are great, but they should only be needed in APIs where keeping integrity is critical. In my API I didn't need it, but it would be trivial to add later if users demanded it... I would literally just need to add a "nonce" table to my DB, expose a new API to clients like:
/nonce.json
and then when they send that back to me in the HMAC calculation, I would need to check the DB to make sure it had never been used before and once used, mark it as such in the DB so if a request EVER came in again with that same nonce I would reject it.
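A toy version of that flow, with the DB replaced by in-memory sets just to show the shape of it:

```swift
import Foundation

// Toy nonce registry: the server hands out a one-time value, the client includes it
// in the HMAC, and the server refuses to accept the same value twice. A real
// implementation would keep issued/used nonces in the DB, not in memory.
final class NonceRegistry {
    private var issued = Set<String>()
    private var used = Set<String>()

    // Backs something like GET /nonce.json: a fresh one-time value for the client.
    func issue() -> String {
        let nonce = UUID().uuidString
        issued.insert(nonce)
        return nonce
    }

    // Called while verifying a signed request; returns false on replay or unknown nonce.
    func consume(_ nonce: String) -> Bool {
        guard issued.contains(nonce), !used.contains(nonce) else { return false }
        used.insert(nonce)
        return true
    }
}
```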
Summary
Anyway, to make a long story short, everything I just described is basically what is known as "2-legged OAuth". There isn't that added step of flowing to the authority (Twitter, Facebook, Google, whatever) to authorize the client; that step is removed, and instead the server implicitly trusts the client IF the HMACs they are sending match up. That means the client has the right secret_key and is signing its messages with it, so the server trusts it.
If you start looking around online, this seems to be the preferred method for securing API methods nowadays, or something like it. Amazon uses almost exactly this method, except they use a slightly different combination method for their parameters before signing the whole thing to generate the HMAC.
If you are interested I wrote up this entire journey and thought-process as I was learning it. That might help provide a guided thinking tour of this process.
I would take a step back and think about what a properly authenticated client is going to be sending you.
Can you store the keys and credentials in a common database which is accessible from both sets of services, and just implement the OAuth provider in one language? When the user sends in a request to a service (PHP or Java) you then check against the common store. When the user is setting up the OAuth client then you do all of that through either a PHP or Java app (your preference), and store the credentials in the common DB.
There are some OAuth providers written in other languages that you might want to take a look at:
PHP - http://term.ie/oauth/example/ (see bottom of page)
Ruby - http://github.com/mojodna/sample-oauth-provider
.NET http://blog.bittercoder.com/PermaLink,guid,0d080a15-b412-48cf-b0d4-e842b25e3813.aspx