I am working on a new mobile app which will be talking to a Rails server. Originally the idea was to remain RESTful and follow all the conventions, but this conflicts with the client-side best practice of minimizing HTTP requests for performance. I was wondering when you should stay RESTful and make only one API call per resource type, and when you should make a single call that updates, adds, removes, and returns a list of several different resources.
For example, the app I am working on is a scorekeeping app. Upon login, I return the user information, a list of games that the scorekeeper can then edit, and all of the stats associated with each game. Since this list is returned in the first call, the view switches immediately to the game list, which is already pre-populated. This is quite fast.
Now, from my understanding, to remain RESTful I would have to first make the login call (POST) for the user information, then make another call (GET) for the games list.
Another example would be uploading stats. Each stat has an action associated with it, whether it's delete, update, or create. Currently all the stats are collected into one JSON payload and sent to the server in a single POST call. The server then loops through the list and deletes, updates, or creates the stats as needed. Now, to be RESTful, should I be making separate POST, DELETE, or PUT calls for each stat?
I have a good understanding of what REST is, but I'm failing to understand when/why to use it, and when to just combine everything into one API call to improve performance for the end user.
Do you have
(a) an actual, measured performance problem,
(b) a good, well-thought-out argument for why you will have one, or
(c) a vague concern that REST is chatty?
It sounds like (c). Yes, REST can be chatty. Usually that's addressed with caching and good endpoint design.
Now, from my understanding, to remain RESTful I would have to first make the login call (POST) for the user information, then make another call (GET) for the games list.
That would be traditional. It would not be unreasonable for the initial POST to do a redirect to get the games list. You can perform the GET conditionally (If-Modified-Since, If-None-Match), which will save bandwidth and server time. You can also set an explicit expiration time for the result of the GET to save some calls to the server.
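For instance, here is a rough Rails-flavoured sketch of the conditional GET plus explicit-expiry idea (the controller, model, and association names are invented for illustration, not taken from your app):

```ruby
# Hypothetical controller; model and association names are illustrative only.
class GamesController < ApplicationController
  def index
    games       = current_user.games
    last_change = games.maximum(:updated_at)

    expires_in 2.minutes   # explicit expiry: the client may skip the call entirely for a while

    # Conditional GET: answers If-Modified-Since / If-None-Match with 304 Not Modified
    # when nothing changed, saving bandwidth and rendering time.
    if stale?(last_modified: last_change, etag: last_change)
      render json: games, include: :stats
    end
  end
end
```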
Another example would be uploading stats. Each stat has an action associated with it, whether it's delete, update, or create. Currently all the stats are collected into one JSON payload and sent to the server in a single POST call. The server then loops through the list and deletes, updates, or creates the stats as needed. Now, to be RESTful, should I be making separate POST, DELETE, or PUT calls for each stat?
In this case, it sounds like the verb you want is PATCH. You can invoke PATCH on a collection endpoint, such as /stats, and include all the updates in one call. I suggest using the structure defined in RFC 6902 for PATCH requests.
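As a rough illustration only (the /stats URL, the field names, and the way the server maps patch paths onto stat IDs are all assumptions about your API, not a prescription), a batched PATCH from a Ruby client could look something like this:

```ruby
require "net/http"
require "uri"
require "json"

# RFC 6902 (JSON Patch) expresses each change as an operation against the collection.
# How the server interprets the paths (here, pretending segments are stat ids) is up to you.
operations = [
  { op: "add",     path: "/-",         value: { game_id: 7, points: 3 } },
  { op: "replace", path: "/42/points", value: 5 },
  { op: "remove",  path: "/13" }
]

uri = URI("https://example.com/api/stats")   # placeholder endpoint
request = Net::HTTP::Patch.new(uri)
request["Content-Type"] = "application/json-patch+json"
request.body = JSON.generate(operations)

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code
```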
I am trying to build an ABAP OData service that receives a request, does some calculation, then returns a message to the end user and makes a decision based on the user's input. So basically, the OData service should be put "on hold" until it receives a response.
Does anyone have a good idea?
Appreciate your response.
Regards!
OData is a special kind of REST. REST is stateless. What you want is stateful.
A good way to turn this stateful flow into a stateless one is:
Send a first request (REST: POST, OData: CREATE) that creates and saves(!) a document that represents the calculation and its result. That first request may return the calculation's result to be presented to the user.
The user's choice then sends a second request that addresses the previously created document (e.g. via a GUID) and includes the user's choice. This means the second request neither has to send the computation input again, nor does it actually perform any calculations; it only changes the existing object's state.
If the calculation is not needed anymore afterwards, that second request may delete it. To prevent data leakage, removing older calculations after a time limit (e.g. 24h) may be a wise move.
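Sketched in Rails terms purely as an analogy (the original question is ABAP/OData; the Calculation model, its columns, and perform_calculation are invented for illustration):

```ruby
# Hypothetical Rails analogue of the two-step, stateless flow.
class CalculationsController < ApplicationController
  # Step 1 (REST: POST / OData: CREATE): compute, persist, return id + result.
  def create
    input = params.require(:input)
    calc  = Calculation.create!(input: input, result: perform_calculation(input))
    render json: { id: calc.id, result: calc.result }, status: :created
  end

  # Step 2: the user's choice addresses the stored document by id; no recomputation.
  def update
    calc = Calculation.find(params[:id])
    calc.update!(user_choice: params.require(:choice))
    # calc.destroy if the calculation is no longer needed afterwards.
    render json: { id: calc.id, status: "decided" }
  end

  private

  def perform_calculation(input)
    input.to_s.length  # placeholder for the real computation
  end
end
```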
I have an application that makes API requests to Salesforce using restforce.
Specifically, the application finds a contact object, returns IDs for all related objects, and then pulls the full record for every related object based on its ID.
This takes a long time for two reasons:
There are a lot of requests to an external API; each usually takes a fraction of a second to reply, and for some contacts there can be 500+ individual requests.
There is often a large amount of data being pulled back via each request.
All requests currently fall within the Salesforce REST API limits, but I'm getting timeout errors from my development server, as it can take 5+ minutes for some of these requests to process.
Rails 4.2 - How best to handle this?
My question is how do I best get rails to handle this?
I can fire the API requests either from the controller (which definitely violates the skinny-controller principle) or from the view (via helper methods, which seems like a dodgy hack).
Ideally I'd like to get it running in a background job, but I'm unsure whether I can just include all the authentication and other methods in a job in the same way I can include helper methods.
Even if I could get it to work in a background job, I'm unsure what best practice might be for the user experience. Ideally I'd like to route them to a page telling them to "hang tight, go get a coffee" with a progress bar, and then auto route them to the final page once the request is complete...
But I'm unsure how to generate a temporary display until a job has been completed?
Could anyone recommend any gems or strategies that might help me digest this problem?
You should definitely use a background job for this.
Give the job a database object, which it will update to signal that it has finished, and maybe from time to time to indicate progress.
On the user side, simply tell them that the background job is working, possibly with a progress indicator, and display the result once the database object given to the job tells you it's ready.
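A minimal sketch of that shape using ActiveJob, which ships with Rails 4.2 (the SalesforceSync model, its columns, and its associations are assumptions; restforce works inside a job the same way it does anywhere else):

```ruby
# app/jobs/salesforce_sync_job.rb -- job, model, and column names are hypothetical.
class SalesforceSyncJob < ActiveJob::Base
  queue_as :default

  def perform(sync_id)
    sync   = SalesforceSync.find(sync_id)  # DB row doubling as the progress flag
    client = Restforce.new                 # credentials picked up from ENV by default

    ids = sync.related_ids                 # assume the related IDs were gathered beforehand
    ids.each_with_index do |id, index|
      record = client.find(sync.related_type, id)     # one Salesforce call per ID
      sync.records.create!(payload: record.to_hash)   # stash the result
      sync.update!(progress: ((index + 1) * 100) / ids.size)
    end

    sync.update!(status: "finished")
  end
end

# Enqueue from the controller, then render the "hang tight" page, which polls
# (or periodically refreshes) sync.status and sync.progress:
#   sync = SalesforceSync.create!(contact_id: params[:id], status: "running")
#   SalesforceSyncJob.perform_later(sync.id)
```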
I'm looking at the Delicious API and see the following is the operation to create a new bookmark:
https://api.del.icio.us/v1/posts/add?&url={URL}&description={description}
It looks like they're using a GET request to create server-side database entries, which I've read elsewhere shouldn't be done with GET requests, only with POST requests.
I'm writing my own API right now and I think that it's fabulous to let users interact with the API directly from the URL. But you can't do this unless you allow CRUD operations over GET.
So, is Delicious really doing CRUD operations over GET? Is there an important reason I shouldn't do the same thing in my API, or is POST just mandated for CRUD to prevent accidental invocation?
Accidental invocation is part of it; that's what the HTTP spec means when it talks about "idempotent" methods. But you could argue that what Delicious is doing is actually idempotent, as long as the URL only gets added once no matter how many times you GET it. More important, though, is that GET is meant to be safe:
The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
From an interface design standpoint, you want user-agents to make POST and PUT and DELETE more difficult than GET, or at least distinctly different, so that users can rely on that difference to hint when their actions might cause a change in the resource state, because they are responsible for those changes. Using GET to make changes, even if idempotent, blurs that line of accountability, especially when prefetchers are widely deployed.
That depends. If you follow the REST principles, using GET to change things is forbidden, which is why most people say: with REST, use POST for changes.
However, there is a difference between GET and POST. According to the RFC, GET requests always have a follow-up response, whereas if you use POST you need to follow the Redirect-After-Post pattern.
Another limitation is that URLs have a limited size, so GET will only work as long as your input data is short enough. The Delicious API therefore has a bug there: you will not be able to add every possible URL via a GET parameter.
I'm building a RESTful API using Zend Framework via the Zend_Rest_Route. For uploading of files, should I use PUT or POST to handle the process? I'm trying to be as consistent as possible with the definition of the REST verbs. Please refer to: PUT or POST: The REST of the Story.
The way I understand this is that I should use PUT if and only if I'm updating the full content of the specified resource. I'll have to know the exact URL to use PUT. On the other hand, I should use POST if I'm sending a command to the server to create a subordinate of the specified resource, using some server-side algorithm.
Let's assume this is a REST API for uploading images. Does that mean I should use POST if the server is to manipulate the image file (i.e. create thumbnail, resize, etc); and use PUT if I just want to save the raw image file to the server?
If I use PUT to handle a file upload, should the process be as follows:
The user sends a GET request to retrieve the specific URL to upload the file by PUT.
Then the user sends a PUT request to that URL.
The file being uploaded is raw - exactly the one the user uploaded.
I'm quite new to this stuff; so hopefully I'm making sense here...
If you know the "best" way to do this, feel free to comment as well.
There seems to be quite a bit of misunderstanding here. PUT versus POST is not really about replace versus create, but rather about idempotency and resource naming.
PUT is an idempotent operation. With it, you give the name of a resource and an entity to place as that resource's content (possibly with server-generated additions). Crucially, doing the operation twice in a row should result in the same thing as if it was done just once or done 20 times, for some fairly loose definition of “the same thing” (it doesn't have to be byte-for-byte identical, but the information that the user supplied should be intact). You wouldn't ever want a PUT to cause a financial transaction to be triggered.
POST is a non-idempotent operation. You don't need to give the name of the resource which you're looking to have created (nor does a POST have to create; it could de-duplicate resources if it wished). POST is often used to implement “create a resource with a newly-minted name and tell me what the name is” — the lack of idempotency implied by “newly-minted name” fits with that. Where a new resource is created, sending back the locator for the resource in a Location header is entirely the right thing to do.
Now, if you are taking the policy position that clients should never create resource names, you then get POST being the perfect fit for creation (though theoretically it could do anything based on the supplied entity) and PUT being how to do update. For many RESTful applications that makes a lot of sense, but not all; if the model being presented to the user was of a file system, having the user supply the resource name makes a huge amount of sense and PUT becomes the main creation operation (and POST becomes delegated to less common things like making an empty directory and so on; WebDAV reduces the need for POST even further).
The summary: don't think in terms of create/update, but rather in terms of who makes the resource names and which operations are idempotent. PUT is really create-or-update, and POST is really do-anything-that-shouldn't-be-repeated-willy-nilly.
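A small Rails-flavoured sketch of that split, purely for illustration (routes, controller, and model names are invented, not part of any particular API):

```ruby
# config/routes.rb (hypothetical):
#   post "images",       to: "images#create"   # server mints the name
#   put  "images/:name", to: "images#replace"  # client supplies the name

class ImagesController < ApplicationController
  # POST: not idempotent -- every call can yield a newly named resource,
  # and the Location header tells the client what that name is.
  def create
    image = Image.create!(data: request.body.read)
    head :created, location: "/images/#{image.id}"
  end

  # PUT: idempotent -- the client names the resource; repeating the call
  # converges on the same single resource with the same content.
  def replace
    image  = Image.find_or_initialize_by(name: params[:name])
    status = image.new_record? ? :created : :no_content
    image.update!(data: request.body.read)
    head status
  end
end
```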
For file upload, unless it is replacing an existing resource, definitely use POST.
In REST, POST is to create new resources, PUT to replace existing resources, GET to retrieve resources, and DELETE to delete resources.
Source: http://en.wikipedia.org/wiki/Representational_state_transfer#RESTful_web_services
REST isn't a standard, so this can easily turn into a religious battle. The AtomPub and OData standards, which are considered to be "RESTful", do agree on this, though: POST = creation, while PUT = updates.
The simple answer is you should use PUT instead of POST in your case since you will be replacing the entire content of the file. Take a look at PUT vs POST
I'll have to know the exact URL to PUT to
No. You don't have to know the URL beforehand to PUT; i.e., the URI needn't exist before the PUT operation. If the resource doesn't exist, it is created. If the resource is already present, it is replaced with the new representation.
To quote the linked article:
PUT puts a page at a specific URL. If there's already a page there, it's replaced in toto. If there's no page there, a new one is created. This means it's like a DELETE followed by an insert of a new record with the same primary key.
In ASP.NET MVC it seems to be common practice not to use GET requests for calls to a controller that modify the model. For example, deleting a customer should not be possible by clicking a simple HTML link.
The only reason for this rule that I am aware of is to safeguard against web crawlers, which might inadvertently alter the database. GET requests are commonly regarded as safe, whereas POST requests are not.
Does this mean that this rule does not apply to non-public portions of a website (Example: Your password-protected user administration area)? Or is there any other reason not to use destructive GET requests?
This is generally part of HTTP. From the HTTP 1.1 spec, RFC 2616:
Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others. In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them.
In other words, it's not enforced, but it's really bad form for a GET request to have side-effects. Imagine if a user bookmarks a URL that updates something, for example - they probably wouldn't expect that to happen.
Another good reason is accelerator plug-ins for browsers. These attempt to speed up page loads by pre-fetching links on the current page. Imagine if you had a bunch of GET links to delete all the objects in a list - the plug-in would delete them!
The short of it is that you can't predict what a browser will do with GET requests; if it looks like a plain old hyperlink, then it's fair game for a browser to go and fetch it.
Yes.
It's not just about web crawlers; it's about CSRF - Cross-Site Request Forgery.
So imagine that someone is logged into your web site and then browses to www.hax0rs.com.
In the source for hax0rs.com is the following tag:
<img src="http://mysite.com/members/statusChange?status=I%20am%20looking%20for%20a%20gimp%20mask" height="0" width="0">
Because your user is logged in, and because the request is going to your site, the authentication cookie goes with it. And bang, suddenly your user's status has changed.
What fun :)
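For completeness, here is the usual countermeasure sketched in Rails terms (the question is ASP.NET MVC, where [ValidateAntiForgeryToken] plays the same role; controller and model names here are invented): state-changing actions only respond to non-GET verbs, and the framework's CSRF token check rejects forged requests.

```ruby
# Hypothetical Rails controller illustrating the countermeasure.
class MembersController < ApplicationController
  protect_from_forgery with: :exception  # rejects non-GET requests that lack a valid CSRF token

  # GET /members/:id -- safe, read-only.
  def show
    @member = Member.find(params[:id])
  end

  # PATCH /members/:id -- an <img src=...> can only trigger a GET, so it cannot
  # reach this action, and a forged form would still lack the session's token.
  def update
    member = Member.find(params[:id])
    member.update!(status: params.require(:status))
    redirect_to member
  end
end
```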
But I suppose you can still do some sorts of "non-retrieval" actions on GET requests. For example, updating a "LastVisit" record, which can be considered non-destructive and relatively safe.