I'm working on a Rails API application. I have models with a lot of data.
From my client application, I would like to send something like a token or a timestamp and get back only the content that has been created, updated, or deleted since the last request, along with a new token or timestamp to use in the next request.
That way, I only have to update my locally cached content on the client side based on the result of the request, rather than refreshing all of my local data on every request.
After a lot of research on Google, I didn't find anything convincing.
I don't know how to manage this on the server side. Is it a good practice?
If so, what's the best way to do it?
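To make the idea concrete, here is a minimal plain-Ruby sketch of the kind of delta endpoint I have in mind (the record structure and method name are illustrative, not from any specific library — in Rails this would typically be a query against an updated_at column):

```ruby
require 'time'

# Hypothetical delta-sync helper: given all records (each carrying an
# :updated_at timestamp) and the timestamp from the client's last
# successful sync, return only the records changed since then, plus a
# new timestamp for the client to send on its next request.
def changes_since(records, last_sync)
  changed = records.select { |r| r[:updated_at] > last_sync }
  { changes: changed, next_sync: Time.now }
end

records = [
  { id: 1, updated_at: Time.parse('2024-01-01 10:00:00') },
  { id: 2, updated_at: Time.parse('2024-01-03 10:00:00') }
]

result = changes_since(records, Time.parse('2024-01-02 00:00:00'))
result[:changes].map { |r| r[:id] } # only record 2 changed after the last sync
```

The client would store `next_sync` and send it back with its next request, so each request only transfers the delta.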
OK, first time making an API!
My assumption is that if data needs to be stored on the back end such that it persists across multiple API calls, it needs to be either (1) in a cache or (2) in a database. Is that right?
I was looking at the code for the gem "google-id-token". It seems to do just what I need for my Google login application. My front-end app will send the Google tokens to the API with requests.
The gem appears to cache the public (PEM) certificates from Google (for an hour by default) and then uses them to validate the Google JWT you provide.
But when I look at the code (https://github.com/google/google-id-token/blob/master/lib/google-id-token.rb), it just seems to fetch the Google certificates and put them into an instance variable.
Am I right in thinking that the next time someone calls the API, it will have no memory of that stored data and will just fetch it again?
I guess it's a two-part question:
If I put something in an @instance_variable in my API, will that data exist when the next API call comes in?
If not, is there any way that "google-id-token" is caching its data correctly? Maybe HTTP requests are somehow cached on the back end, and therefore the network request doesn't actually happen over and over? Can I test this?
My impulse is to rewrite the "google-id-token" functionality in a way that caches the Google certs using MemCachier. But since I don't know what I'm doing, I thought I would ask. Maybe the gem works fine as is; I don't know how to test it.
Not sure about google-id-token, but Rails instance variables do not persist beyond a single request and its views (and definitely not from one user's session to another).
You can low-level cache anything you want with Rails.cache.fetch. It takes a key name and an expiration, and the value to cache comes from a block. So it looks like this:
Rails.cache.fetch("google-id-token", expires_in: 24.hours) do
  something_expensive # the block's return value is what gets cached
end
If the cached value exists and is not past its expiration, Rails grabs it from the cache; otherwise, it runs the block and makes your API request.
It's important to note that low-level caching doesn't work with the in-memory store (the default for development), so you need to set up Redis or Memcached (or something like that) for development, too. Also, make sure the file tmp/caching-dev.txt exists; you can run rails dev:cache, or just touch the file, to create it.
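To make the fetch semantics concrete, here's a plain-Ruby toy version of what a fetch-style cache does (no Rails required; the store and expiry handling are simplified stand-ins for what Rails.cache does for you):

```ruby
# A toy fetch-style cache: return the cached value if it exists and
# hasn't expired; otherwise run the block, store its result, and
# return it. Rails.cache.fetch behaves analogously.
class ToyCache
  def initialize
    @store = {} # key => { value:, expires_at: }
  end

  def fetch(key, expires_in:)
    entry = @store[key]
    return entry[:value] if entry && Time.now < entry[:expires_at]

    value = yield
    @store[key] = { value: value, expires_at: Time.now + expires_in }
    value
  end
end

cache = ToyCache.new
calls = 0
2.times { cache.fetch("certs", expires_in: 3600) { calls += 1; "PEM data" } }
calls # the block ran only once; the second call hit the cache
```

This is why wrapping the certificate download in Rails.cache.fetch avoids hitting Google on every request.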
More on Rails caching
So, super weird use case.
Basically, a client creates objects and syncs them to the server. The server persists them and returns the same object with a UID. When the client gets that object with its UID, it deletes the client version and saves the server version.
I'm worried that the client will send the object and then disconnect while the server is validating. Then, when the client sends the object again, we have duplicates.
I could generate a client ID to avoid this situation and persist it with the server object, but I was looking into a way to only persist objects once the client has successfully received the response, so we know it won't resend the request.
I googled around, but I couldn’t find anything. Is there a way to do this?
So, as I suspected, my question really demonstrates a lack of understanding of how HTTP works. It isn't possible with this technology - but there's really an underlying problem that I should have addressed.
The correct answer is to have an id generated on the client that is also stored in the database. The reason is because this makes the request idempotent - that is, the client can resend the same request as many times as it likes without messing up the server.
Whenever the server gets a request to create a new object, it simply checks the client-generated id that was sent. If an object with that id already exists, it doesn't create it again; it just returns the server-generated object. Simple!
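A minimal sketch of that check, using an in-memory store keyed by the client-generated id (in a real app this would be a database column with a unique index; the class and method names are illustrative):

```ruby
require 'securerandom'

# Hypothetical idempotent "create": if we've already seen this
# client-generated id, return the existing server object instead of
# creating a duplicate.
class ObjectStore
  def initialize
    @by_client_id = {}
  end

  def create_or_fetch(client_id, attributes)
    # ||= only creates (and assigns a server UID) on the first call
    @by_client_id[client_id] ||= attributes.merge(uid: SecureRandom.uuid)
  end
end

store = ObjectStore.new
client_id = "client-generated-123"
first  = store.create_or_fetch(client_id, name: "note")
resent = store.create_or_fetch(client_id, name: "note") # simulated retry after a dropped response
first[:uid] == resent[:uid] # same server object both times, no duplicate
```

Because the retry returns the identical server object, the client can safely resend after a disconnect.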
I've been using Google refresh/access tokens in my in-app purchase verification on the server side.
When refreshing the token, I get a new access token, valid for 3600 seconds.
Why can't I just re-ask for a new access token when needed, rather than storing and re-using it?
Why can't I just re-ask for a new access token when needed, rather than storing and re-using it?
The simple answer is: you can. Technically speaking, you could ask for a new access token every time you want to make a request. As long as you store your refresh token, you will be able to access the server whenever you need to.
I think a better question would be: why wouldn't you want to? If your application is running for 30 minutes, there is really no reason to request a new access token when you can just use the one that was returned to you the first time. You are making an extra round trip to the server with every request if you also request an access token first.
However, if you have an application that runs, say, once every five minutes as a cron job or a Windows service, one could argue that it's a pain to keep track of the token. Actually, it's not: you can just record the time it was generated and check that timestamp before using it.
Google is not going to stop you from requesting a new one every time, so feel free. However, I can't remember whether Google gives you a new access token each time or just returns the same one it generated last time. If they do generate a new one every time, remember that they will all remain valid for an hour, so don't leave them lying around.
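That "record the time it was generated and check it" idea can be sketched like this (the token-fetching block is a hypothetical stand-in for your actual OAuth refresh request; the class name is illustrative):

```ruby
# Caches an access token and only refreshes it once it has expired.
# The block passed to the constructor is any callable that returns a
# fresh token (e.g. a wrapper around the Google OAuth token endpoint).
class TokenCache
  def initialize(lifetime_seconds, &fetcher)
    @lifetime   = lifetime_seconds
    @fetcher    = fetcher
    @token      = nil
    @fetched_at = nil
  end

  def token
    if @token.nil? || Time.now - @fetched_at >= @lifetime
      @token      = @fetcher.call # only hit the network when expired
      @fetched_at = Time.now
    end
    @token
  end
end

refreshes = 0
cache = TokenCache.new(3600) { refreshes += 1; "access-token-#{refreshes}" }
3.times { cache.token } # one network round trip covers all three uses
```

In production you would also subtract a small safety margin from the lifetime so a token never expires mid-request.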
As described in https://developer.apple.com/reference/cloudkit/ckserverchangetoken, the CloudKit servers return a change token as part of the CKFetchRecordZoneChangesOperation callback response. For what set of subsequent record fetches should I include the given change token in my fetch calls?
Only fetches to the zone we fetched from?
Or would it apply to any fetches to the database that zone is in? Or perhaps the whole container that the database is in?
what about app extensions? (App extensions have the same iCloud user as the main app, but have a different "user" as returned by fetchUserRecordIDWithCompletionHandler:, at least in my testing) Would it be appropriate to supply a change token from the main app in a fetch call from, say, a Messages app extension? I assume not, but would love to have a documented official answer.
I, too, found the scope of CKServerChangeToken a little unclear. However, after reviewing the documentation, both CKFetchDatabaseChangesOperation and CKFetchRecordZoneChangesOperation provide and manage their own server change tokens.
This is particularly useful if you decide to follow the CloudKit workflow Dave Browning outlines in his 2017 WWDC talk when fetching changes (around the 8 minute mark).
The recommended approach is to:
1) Fetch changes for a database using CKFetchDatabaseChangesOperation. Upon receiving the updated token via changeTokenUpdatedBlock, persist this locally. This token is 'scoped' to either the private or shared CKDatabase the operation was added to. The public database doesn't offer change tokens.
2) If you receive zone IDs via the recordZoneWithIDChangedBlock in the previous operation, this indicates there are zones with changes you can fetch using CKFetchRecordZoneChangesOperation. This operation takes its own unique server change token via its rather cumbersome initializer parameter, CKFetchRecordZoneChangesOperation.ZoneConfiguration. This token is 'scoped' to that particular CKRecordZone. So, again, when receiving an updated token via recordZoneChangeTokensUpdatedBlock, it needs persisting locally (perhaps with a key which relates to its CKRecordZone.ID).
The benefit here is that it probably minimises the number of network calls. Fetching database changes first prevents making calls for each record zone if the database doesn't report any changed zone ids.
Here's a code sample from the CloudKit team which runs through this workflow. Admittedly, a few of the APIs have since changed, and the comments don't make the 'scope' of the server change tokens explicitly clear.
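Whatever the platform, the bookkeeping boils down to persisting one token per database scope plus one token per record zone. A language-agnostic sketch of that store (illustrative only, not CloudKit API code — in an iOS app you'd serialize the CKServerChangeToken values into something like UserDefaults or a file):

```ruby
# Illustrative change-token bookkeeping: tokens are keyed by scope,
# where a scope is either a database (private/shared) or a zone ID.
class ChangeTokenStore
  def initialize
    @tokens = {}
  end

  def save(scope, token)
    @tokens[scope] = token
  end

  def load(scope)
    @tokens[scope] # nil on first launch => full fetch
  end
end

store = ChangeTokenStore.new
store.save(:private_database, "db-token-1") # from CKFetchDatabaseChangesOperation
store.save("zone-A", "zone-token-7")        # from CKFetchRecordZoneChangesOperation
store.load("zone-A")
```

Keeping the two kinds of token in separate scopes is what prevents accidentally feeding a database-level token into a zone-level fetch.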
We have an existing API like
/api/activation_code
Each time, the activation_code will be different; the server then creates a token for that call and returns it. Usually each call has a different activation_code, which returns a different token.
Since this API needs the server to create something, it is designed as a POST.
Could we design this API as an HTTP GET instead?
What are the pros and cons?
You could design the API to support GET requests, but I would not recommend it. If your API is accessible via a website, a user could accidentally activate an account multiple times, since the URL will be stored in the browser's history. Additionally, web crawlers could supply values to your API through the URL if you support GET requests.
POST requests are much better because the information is included in the body of the request, not the URL. Thus, it is much less likely that something will go wrong accidentally.