We are creating buckets in planner plans via the Microsoft Graph API with this POST request:
POST https://graph.microsoft.com/v1.0/planner/buckets
and this request body:
{ "planId": "<planId>", "name": "<new bucket name>", "orderHint": " !" }
Previously, with the beta API, such requests finished very fast and the created bucket was immediately visible in the browser and via the API through this request:
GET https://graph.microsoft.com/v1.0/planner/plans/<planId>/buckets
Now, with the released API, the request finishes similarly fast, but it takes up to 10 seconds until the newly created bucket is visible on the web and in the API.
The only workaround we see so far is to poll the above GET request every other second until the newly created bucket is visible, but that's tedious!
Is there any other option to synchronize such requests, so we can be sure that a newly created bucket exists and is visible to the Graph API before we continue with our script?
We see similar delays when creating groups/plans; any options there?
There is no option for synchronous processing today. However, POST requests return the full data of the created resource, including the etag value, so you shouldn't need to do a read after creation. In other words, all the data you'd receive from the GET request is already returned as a response to the POST request. The returned information can be used for further updates on the resource or can be used with related resources (e.g. you can create tasks and put them in this bucket, even before you are able to read the bucket back).
This also applies to PATCH requests, if the Prefer header is set to "return=representation".
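A minimal sketch of this "use the POST response instead of reading back" pattern. The helper names are my own, the token is a placeholder, and the HTTP session is injected (e.g. a `requests.Session()`) so the flow is easy to test; this is not any official Graph SDK.

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def create_bucket(session, token, plan_id, name):
    """POST the bucket and return the full resource from the response body.

    No follow-up GET is needed: the response already carries "id" and
    "@odata.etag" for the new bucket.  `session` can be a requests.Session().
    """
    resp = session.post(
        f"{GRAPH}/planner/buckets",
        headers={"Authorization": f"Bearer {token}"},
        json={"planId": plan_id, "name": name, "orderHint": " !"},
    )
    resp.raise_for_status()
    return resp.json()

def add_task(session, token, plan_id, bucket, title):
    """Create a task in the new bucket using only the POST response above,
    i.e. before the bucket is readable via GET."""
    resp = session.post(
        f"{GRAPH}/planner/tasks",
        headers={"Authorization": f"Bearer {token}"},
        json={"planId": plan_id, "bucketId": bucket["id"], "title": title},
    )
    resp.raise_for_status()
    return resp.json()
```

For a later PATCH on the bucket, the `@odata.etag` from the same response goes into the If-Match header.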
Related
So I have implemented the Asana webhooks API as described in their documentation. I can pass it a project ID and request that a new webhook be created. The API successfully sends an authentication request to my application, which returns the security header as described in the docs. Asana then returns the expected success response containing the newly created webhook's unique ID.
Now, if I take this ID and query the Asana API for all configured webhooks on either the parent workspace or the project resource directly, it returns an empty data JSON object or reports that the resource doesn't exist, suggesting the webhook I've just created wasn't actually created, despite the expected success response.
Also, if I then make a change to a project, it doesn't fire the webhook and I don't receive any events in my application.
Strangely, everything was working on Friday, but today (Monday) I'm experiencing these issues.
Any pointers would be good. I've structured my requests as the docs suggest and am authenticating using a PAT; I've even tried a newly created token.
Thanks,
Our webhooks use the handshake mechanism to make sure that it's possible to call you back, but there's always the possibility that subsequent requests will fail. Additionally (although we don't document this very well; there's an opportunity for us there), we immediately try to deliver a (probably empty) event after the handshake; it looks like {"events":[]}. This is a kind of "second callback" that contains anything that has changed since you created the webhook.
If this fails, or if any subsequent request fails often enough, the webhook will get trashed. "Failure" in this context means returning an HTTP response code other than 200 or 204.
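The handshake-then-delivery flow above can be sketched framework-agnostically. `webhook_handler` is a hypothetical receiver function (not part of any Asana SDK); in a real app it would sit behind your web framework's route handler.

```python
import json

def webhook_handler(headers, body):
    """Minimal webhook receiver for the flow described above.

    Returns (status_code, response_headers, response_body).
    """
    secret = headers.get("X-Hook-Secret")
    if secret is not None:
        # Handshake: echo the secret back with a 200 so the webhook
        # is confirmed.
        return 200, {"X-Hook-Secret": secret}, ""
    # Normal delivery -- the first one may be the empty {"events": []}
    # payload mentioned above.
    events = json.loads(body or "{}").get("events", [])
    for event in events:
        pass  # handle each event here (placeholder)
    # Anything other than 200/204 counts as a failure and can eventually
    # get the webhook trashed, so always acknowledge promptly.
    return 204, {}, ""
```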
As for why you're having trouble querying the webhook itself, I wasn't able to repro the issue, so we'd have to dive deeper. It should be fine if you:
Specify the workspace
Optionally specify the resource
I tested this out, and it seemed fine. You might also want to query the webhook directly by id with the /webhooks/:id endpoint; note that you should use the id of the webhook returned by create, not the id in the resource field.
If you created the webhook (specifically, your PAT or OAuth app was the one making the create request) you should see the information just fine. If you can get the webhook by id, you should see last_failure_at and last_failure_content fields which would tell you why the webhook was unable to make the delivery.
Finally, if you contact us at api-support#asana.com with more details (for instance, the ID of the webhook you're trying to look at), we can look at those fields from our side and see if we can identify what's going on.
I've created an application and a paginated API which are hooked up to each other. However, I'm a bit confused about the best practice for showing only updated data. For instance, if I retrieve data one day and save it into my mobile database, how will the app know the next day that it should make a request and show only the data that has just been fetched? Do I need some kind of flag, or should I look at createdAt?
When making the request, include either the If-None-Match header with the local resource's ETag or the If-Modified-Since header with the date the local resource was requested.
Configure your server to look for the header and return a 304 Not Modified if the data hasn't changed. That will at least save you some traffic on the responses.
In addition, if the resource data is relatively static, or if the client can tolerate having stale client data, then you can add caching headers to your response. As long as the cached request is valid, the request will never leave your client.
Ideally, you want to design your API to support this where possible. For example, have the request "give me all things within 50 meters" return a list of URIs. Then the API only has to hit the server for those URIs which are stale.
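A sketch of the conditional-GET client described above, assuming a requests-style session object; the helper name and the cache shape (url mapped to an ETag/body pair) are illustrative.

```python
def fetch_if_changed(session, url, cache):
    """Conditional GET: send If-None-Match with the cached ETag.

    `cache` maps url -> (etag, body).  On 304 Not Modified the cached body
    is reused without transferring it again; on 200 the cache is refreshed.
    `session` can be e.g. a requests.Session().
    """
    headers = {}
    if url in cache:
        headers["If-None-Match"] = cache[url][0]
    resp = session.get(url, headers=headers)
    if resp.status_code == 304:
        return cache[url][1]          # unchanged -- reuse the local copy
    cache[url] = (resp.headers.get("ETag"), resp.text)
    return resp.text
```

The same shape works with If-Modified-Since, storing the last request date instead of the ETag.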
I have implemented a RESTful API with a few resources, for example:
/products/
/products/1
/products/2
/categories/
/categories/1
/categories/2
etc.
Now, I have been told that the app should mainly work offline; therefore, I need to get all the data from the APIs and store it locally.
Since I am not providing a single chunk of data, but rather several resource URIs that need to be called to get all the data, I was wondering whether this could be a problem.
How does this work? Will there be many HTTP calls, or will one call do everything?
What is the best approach in this case?
Do you have these as endpoints in themselves?
/products
/categories
It's a pretty well established convention for those to return the entire collection. You could even add some request parameters for filtering etc.
Each URI represents a single piece of data. The main idea of REST is that, instead of having randomly named setter and getter URLs and using GET for all the getters and POST for all the setters, we have the URLs identify resources and then use the HTTP verbs GET, POST, PUT, and DELETE to act on them.
So, using AFNetworking, for example, you get all the benefits of this architecture.
The download model could look like:
Request the specified resource with a GET request
Save the response on a background thread
Request the next piece of data
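The steps above can be sketched roughly as follows (in Python rather than Objective-C/AFNetworking; `fetch` and `save` are placeholders for your networking and storage layers):

```python
import threading

def download_all(fetch, save, uris):
    """Fetch each URI in turn, saving each response on a background thread
    so the next request can start immediately.

    `fetch(uri)` returns the response body; `save(uri, body)` persists it
    locally.  Both are stand-ins for your real HTTP and database code.
    """
    savers = []
    for uri in uris:
        body = fetch(uri)
        t = threading.Thread(target=save, args=(uri, body))
        t.start()
        savers.append(t)
    for t in savers:
        t.join()  # wait until every response has been persisted
```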
Of course, if you are not able to add a new endpoint that downloads everything at once, you must download each resource separately:
/products/
/products/1
/products/2
/categories/
/categories/1
/categories/2
Setting up your endpoints in this way will allow for a user of your app to retrieve a single product/category or a list of products/categories.
Here's what each of these API endpoints should do when they are called.
/products - returns a list of products
/categories - returns a list of categories
/products/:id - returns the product with the specified id
/categories/:id - returns the category with the specified id
As far as allowing the app to work mostly offline: the best approach is to do some caching on the client (app) side. The first time a call is made to one of these endpoints, the result should be stored somewhere on the client. The next time the same request is made, the data has already been retrieved, so no network call needs to be made and the app works offline. The first call, however, does need a network connection.
This can be implemented with a dictionary, where the key is the request (/products, /categories/1, etc.) and the value is the result returned from the API. Every time a request is made, your app should first check whether the data already exists on the client side. If it does, no network call is needed and the app can just return the data already present on the client.
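A minimal sketch of that dictionary-based cache (the class name and the `fetch` hook are illustrative, not from any particular framework):

```python
class ApiCache:
    """Client-side cache keyed by request path, as described above.

    `fetch(path)` is the real network call (a placeholder here); results
    are stored so that repeated requests work offline.
    """
    def __init__(self, fetch):
        self._fetch = fetch
        self._store = {}

    def get(self, path):
        if path not in self._store:       # first call needs the network
            self._store[path] = self._fetch(path)
        return self._store[path]          # later calls are served offline
```

A real app would also persist `_store` to the local database and add some invalidation policy, but the lookup logic is the same.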
One place in my rails app requires loading a number of responses from an external server, which currently looks like this:
User makes an AJAX request to the server. "Loading data..." is displayed.
5-30 seconds later, the Rails app sends the response (assuming the data has not been cached).
It would be much better if I could keep the user informed during that long waiting period with messages informing them of the progress of the request. Such as:
User makes request (as before).
Message "Retrieving ABC" displayed
Message "Retrieving XYZ" displayed
Message "Processing data" displayed
Full response as normal.
How can I go about doing this? I don't think that sending back multiple JavaScript responses to one request is possible, but I have no idea what the correct way of doing this is!
This is tricky, but Rails supports the notion of streaming a response.
However, you will probably have to do a lot of work in your project to make this work.
Tenderlove (Aaron Patterson) posted an intro to how streaming works in Rails, and I believe there is a Railscast on this topic.
Probably a simpler solution would be to split this into multiple requests.
So the main request (assuming it's an AJAX request) takes forever to complete.
Meanwhile, you poll the status via a different AJAX request, and the main action updates the database with its progress, so the other request can retrieve that status and send back the appropriate response (i.e., where in the process the main request currently is).
So I'd assign each request an id and have a database table for those requests and their statuses (it could be as simple as having only id:integer and status:string).
You assign the request id on the client (use some random data to create a hash or something) and start the long request with that id.
The client then polls another endpoint with that same id to get the status back.
The long-running request, in the meantime, updates the status table with the id it was given and where it currently is in processing that request.
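The client side of this polling scheme can be sketched as follows (in Python for brevity; in the app it would be JavaScript, and `start`, `poll`, and `on_status` stand in for your AJAX calls and UI updates):

```python
import time
import uuid

def run_with_progress(start, poll, on_status, interval=1.0):
    """Client-side sketch of the polling pattern described above.

    `start(request_id)` kicks off the long request (the main AJAX call);
    `poll(request_id)` returns the current status string from the status
    endpoint ("done" when finished); `on_status(status)` shows it to the
    user.  All three are placeholders for your transport/UI layers.
    """
    request_id = uuid.uuid4().hex        # id assigned on the client
    start(request_id)
    while True:
        status = poll(request_id)
        on_status(status)                # e.g. "Retrieving ABC"
        if status == "done":
            return request_id
        time.sleep(interval)
```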
I'm working on a custom SpringSecurityFilter for my Grails application, and I'm trying to use the Commons FileUpload library to process the request. I'm able to process the request in the filter, but once it gets to my controller, none of the values are available.
Can the HttpRequest only be processed once by the upload library? I'm guessing it's cleaning up the temp files. Is there a way to keep them around so they can be processed again at the controller level?
I need to interrogate a form parameter for security (due to the client, I can't add it to the HTTP headers), but once I get the value, it seems to wipe the request for further processing.
Yes. A Request can only be parsed once.
I saw this answer on Apache's FAQ page for FileUpload.
Question: Why is parseRequest() returning no items?
Answer: "This most commonly happens when the request has already been parsed, or processed in some other way. Since the input stream has already been consumed by that earlier process, it is no longer available for parsing by Commons FileUpload."
Reference: http://commons.apache.org/fileupload/faq.html
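The "parsed once" behaviour is really just input-stream consumption, which can be illustrated in any language. A small Python analogy (the usual servlet-world fix is a request wrapper that buffers the body so both the filter and the controller can read it):

```python
import io

# Stands in for the servlet request's input stream.
stream = io.BytesIO(b"name=value")

first = stream.read()    # the filter consumes the stream
second = stream.read()   # a later reader (the controller) gets nothing

# To let two components read the body, buffer it once and hand each
# consumer its own copy of the buffered bytes.
buffered = first
copy_for_controller = io.BytesIO(buffered)
```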