I see that Grails 2.3 is using REST for the CRUD actions in scaffolding. While it's a great way to learn how REST works, I am wondering if using REST to communicate inside of a single application stack is very efficient. Doesn't it send the request all the way up to the network layer and back down again instead of going directly from the app server to the database? I am visualizing a "pop fly" as opposed to a "line drive." Am I misunderstanding how this works?
I assume that when you say "using REST for the CRUD actions in scaffolding" you are referring to the basic scaffolding (i.e. generate-all example.Book). The scaffolded controllers are not calling the REST API (https://yourapp/book.json) to retrieve the data; they are still using GORM to access the database, and then using the respond method to render the data in the appropriate format (HTML, JSON, XML) based on the requested content type. So:
Typical request-response cycle
Client, typically an HTML page, makes a request (GET http://yourapp/books/1)
Grails 'parses' the request params (id: 1)
Grails' GORM retrieves the data from the database and creates an object instance
Grails resolves the response content type to HTML
Grails "responds" to the request with an HTML response using a view from the views directory
SPA/API call
Client, typically JavaScript, makes a request (GET http://yourapp/books/1.json **)
Grails 'parses' the request params (id: 1)
Grails' GORM retrieves the data from the database and creates an object instance
Grails resolves the response content type to JSON
Grails "responds" to the request with a JSON response
Client consumes the JSON response and acts accordingly
** The content type can be specified in a number of ways; I just used the .json suffix since it is the most transparent. See http://grails.org/doc/2.3.x/guide/single.html#contentNegotiation.
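For comparison only - this is Rails syntax, not Grails - the same "one controller action, several representations" idea that respond implements looks roughly like this (BooksController and Book are placeholder names):

    # Hypothetical Rails controller shown purely to illustrate content
    # negotiation; it is not what the Grails scaffolding generates.
    class BooksController < ApplicationController
      def show
        @book = Book.find(params[:id])      # data access (GORM's role in Grails)

        respond_to do |format|              # content negotiation (respond's role)
          format.html                       # render the HTML view
          format.json { render json: @book }
        end
      end
    end

In both frameworks the database is reached directly through the ORM; the requested format only changes how the result is rendered.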
And to answer your question about whether it is "very efficient": I would argue that the answer is almost always yes, because your payload tends to be much smaller, since you are only transferring data, not data plus formatting (HTML, JavaScript, CSS, etc.). It also provides a way of separating concerns, allowing the client to focus on state and presentation and the backend to focus on data. Furthermore, it means that you can create multiple clients (mobile, desktop-based, web-based) using the same backend API.
I have a Web API-based server. All controller methods accept objects and return objects (parsed from/to JSON by the framework). This has been working fine.
However, due to a new requirement, I need to create an adapter and place it between clients and the server. The adapter will be responsible for translating a small subset of client messages before passing them to the server.
I'd like to avoid unnecessary deserialization/serialization from/to JSON of messages that do not need to be translated. That is, in the majority of cases I'd like to pass raw JSON client requests to the server and raw JSON server replies to clients.
The controller methods that will do the translation will have 'standard' signatures - no problem here. However, I'm not sure how to forward messages (JSON strings) from clients to the server (and vice versa) without deserialization and serialization.
Is this possible in ApiController?
Thanks.
Suppose I have an existing Java service implementing a JSON HTTP API, and I want to add a Swagger schema and automatically validate requests and responses against it without retooling the service to use the Swagger framework / code generation. Is there anything providing a Java API that I can tie into and pass info about the requests / responses to validate?
(Just using a JSON schema validator would mean manually implementing a lot of the additional features in Swagger.)
I don't think there's anything ready-made that does this on its own, but you can do it fairly easily with the following:
Grab the SchemaValidator from the Swagger Inflector project. You can use this to validate inbound and outbound payloads.
Assign a schema portion to your request/response definitions. That means you'll need to assign a specific section of the JSON schema to your operations.
Create a filter for your API to grab the payloads and validate them against the schema.
That will let you easily see if the payloads match the expected structure.
Of course, this is all done for you automatically with Inflector, but there should be enough of the raw components here to help you do this inside your own implementation.
I want to test a set of Ruby on Rails applications. Specifically, I want to trigger all possible GET/POST requests available. I am considering using some web-crawler-like tool, which could (recursively) send requests to my web server, get the responses, and parse the response HTML to find all the possible href attributes, form submission buttons, etc.
Essentially I want to see the performance of these web applications and get some logs of things like what are the request routes, parameters, database accesses, queries, transactions, etc.
Sending GET requests is relatively easy to handle: I would simply need to parse the HTML response and extract the href attributes of all the anchors. However, I don't know how to handle the POST requests; they would require me to fill in all the parameter fields included in the forms. I am wondering whether there are tools that do this kind of work, or tools whose code I could modify (without too much effort) to achieve this.
Thanks a lot.
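For the extraction step described in the question, a minimal Ruby sketch might look like the following (it assumes the Nokogiri gem and uses a placeholder URL and a dummy default field value):

    # Collect candidate GET and POST requests from a single page.
    require 'net/http'
    require 'nokogiri'

    html = Net::HTTP.get(URI('http://localhost:3000/'))   # placeholder URL
    doc  = Nokogiri::HTML(html)

    # Candidate GET requests: every href on the page.
    get_paths = doc.css('a[href]').map { |a| a['href'] }

    # Candidate POST requests: every form, with its action, method, and fields.
    post_requests = doc.css('form').map do |form|
      fields = form.css('input[name], textarea[name], select[name]')
                   .map { |f| [f['name'], f['value'] || 'dummy'] }   # 'dummy' stands in for real data
                   .to_h
      { action: form['action'], method: (form['method'] || 'get').downcase, fields: fields }
    end

Recursing over get_paths and submitting post_requests (with sensible values in place of the dummy fields) would produce the kind of request log the question is after; tools like mechanize or Capybara cover much of this ground as well.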
The problem is building an architecture with Backbone and Rails that handles syncing multiple actions to the server.
Assume the model is defined on both Rails and Backbone.
I have update and destroy operations on a model and I need them synced with the server on a user action (button click). In another part of the webapp, these actions on the same model are synced the moment they are made (easy: just send a RESTful AJAX HTTP request).
But in the first case, I can't really figure out an easy, stateless, and atomic/transactional way to save the several actions (requests) the user took.
Sending multiple requests to the server makes the save non-atomic and not exactly stateless.
Sending one big request with the actions encoded in it makes parsing on the server necessary.
So, is there a better solution?
If you want multiple updates, on different resources, as one atomic transaction, that is not REST.
So, of course, you will have to orchestrate the parameters and the requests in Rails (it's not so much about parsing, since you'll be sending JSON, as about creating a format for the aggregated parameters and figuring out what to do with them on the Rails side).
A nice way to handle multiple requests at once is shown at https://github.com/railscasts/414-batch-api-requests
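As a rough sketch of what the aggregated-parameters format and its Rails-side handling might look like (the operations payload shape, the BatchesController, and the Book model are assumptions for illustration, not something prescribed by the Railscast):

    # app/controllers/batches_controller.rb (hypothetical)
    # Expects a payload such as:
    #   { "operations": [
    #       { "action": "update",  "id": 1, "attributes": { "title": "New title" } },
    #       { "action": "destroy", "id": 2 } ] }
    class BatchesController < ApplicationController
      def create
        ActiveRecord::Base.transaction do            # all of the actions succeed, or none do
          params[:operations].each do |op|
            book = Book.find(op[:id])
            case op[:action]
            when 'update'  then book.update!(op[:attributes].permit!)
            when 'destroy' then book.destroy!
            end
          end
        end
        head :ok
      rescue ActiveRecord::RecordInvalid, ActiveRecord::RecordNotFound
        head :unprocessable_entity
      end
    end

On the Backbone side the client would queue the user's changes locally and POST them as a single operations array when the button is clicked; the transaction block is what gives the save its all-or-nothing behaviour.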
Within my Rails application, I'd like to generate requests that behave identically to "genuine" HTTP requests.
For a somewhat contrived example, suppose I were creating a system that could batch incoming HTTP requests for later processing. The interface for it would be something like:
Create a new batch resource via the usual CRUD methodology (POST, and receive the location of the newly created resource).
Update the batch resource by sending it URLs, HTTP methods, and data to be added to the collection of requests it's supposed to later perform in bulk.
"Process" the batch resource, wherein it would iterate over its collection of requests (each of which might be represented by a URL, HTTP method, and a set of data), and somehow tell Rails to process those requests in the same way as it would were they coming in as normal, "non-batched" requests.
It seems to me that there are two important pieces of work that need to happen to make this functional:
First, the incoming requests need to be somehow saved for later. This could be simply a case of saving various aspects of the incoming request, such as the path, method, data, headers, etc. that are already exposed as part of the incoming request object within a controller. It would be nice if there was a more "automatic" way of handling this--perhaps something more like object marshaling or serialization--but the brute force approach of recording individual parameters should work as well.
Second, the saved requests need to be able to be re-injected into the rails application at a later time, and go through the same process that a normal HTTP request goes through: routing, controllers, views, etc. I'd like to be able to capture the response in a string, much as the HTTP client would have seen it, and I'd also like to do this using Rails' internal machinery rather than simply using an HTTP library to have the application literally make a new request to itself.
Thoughts?
A straightforward way of storing the arguments would be to serialize the request object in your controller - it should contain all the important data.
To replay the requests later on, I would consider using the Dispatcher.dispatch class method, which takes three arguments: the CGI request, the session options (CgiRequest::DEFAULT_SESSION_OPTIONS should be OK), and the stream the output is written to.
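Dispatcher.dispatch is the pre-Rack API; on Rack-based versions of Rails the same re-injection can be done by rebuilding an env hash from the saved attributes and calling the application directly, roughly like this (the saved_* names are placeholders for however the request was persisted):

    # Rebuild a Rack env from previously saved request attributes and push it
    # back through the full Rails stack (routing, controllers, views).
    require 'rack/mock'

    env = Rack::MockRequest.env_for(
      saved_path,                        # e.g. "/books"
      method: saved_http_method,         # e.g. "POST"
      input:  saved_body,                # raw request body, if any
      'CONTENT_TYPE' => saved_content_type
    )

    status, headers, body = Rails.application.call(env)

    response_string = +''
    body.each { |chunk| response_string << chunk }   # the response as the client would have seen it

This is essentially the same app.call(env) technique the middleware answer below relies on.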
Rack Middleware
After a lot of investigation following my initial question, I eventually experimented with, and successfully implemented, a solution using Rack middleware.
A Basic Methodology
In the `call` method of the middleware (a minimal sketch follows this list):
Check to see if we're making a request as a nested resource of a transaction object, or if it's an otherwise ordinary request. If it's ordinary, proceed as normal through the middleware by making a call to app.call(env), and return the status, headers, and response.
Unless this is a transaction commit, record the "interesting" parts of the request's env hash and save them to the database as an "operation" associated with this transaction object.
If this is a transaction commit, retrieve all of the relevant operations for this transaction. Either create a new request environment, or clone the existing one and populate it with the values saved for the operation. Also make a copy of the original request environment for later restoration, if control is meant to pass through the application normally post-commit.
Feed the constructed environment into a call to app.call(env). Repeat for each operation.
If the original request environment was preserved, restore it and make one final call to app.call(env), returning from the middleware's `call` method the status, headers, and response from this final call.
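A stripped-down sketch of a middleware following these steps might look like the code below. Everything specific in it is an assumption made for illustration - the /transactions/:id URL convention, the Operation model with saved_env and body columns, and the 202 placeholder response; the real implementation is in the example application linked below.

    require 'stringio'

    class TransactionMiddleware
      # The parts of the env hash considered "interesting" enough to persist.
      INTERESTING_KEYS = %w[REQUEST_METHOD PATH_INFO QUERY_STRING CONTENT_TYPE].freeze

      def initialize(app)
        @app = app
      end

      def call(env)
        txn_id = env['PATH_INFO'][%r{\A/transactions/(\d+)}, 1]
        return @app.call(env) unless txn_id                     # ordinary request: pass straight through

        if env['PATH_INFO'] =~ %r{/commit\z}                    # transaction commit: replay saved operations
          Operation.where(transaction_id: txn_id).find_each do |op|
            replay_env = env.merge(op.saved_env)                # clone the env and populate it
            replay_env['rack.input'] = StringIO.new(op.body.to_s)
            @app.call(replay_env)                               # feed each operation through the app
          end
          @app.call(env)                                        # final call with the original env
        else                                                    # otherwise: record this request as an operation
          Operation.create!(transaction_id: txn_id,
                            saved_env: env.slice(*INTERESTING_KEYS),
                            body: env['rack.input'].read)
          [202, { 'Content-Type' => 'text/plain' }, ['queued']]
        end
      end
    end

In a real application the middleware would be registered with config.middleware.use, and error handling, response aggregation, and restoring the original environment would need considerably more care than this sketch shows.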
A Sample Application
I've put together an example implementation of the methodology I describe here, which I've made available on GitHub. It also contains an in-depth example describing how the implementation might look from an API perspective. Be warned: it's quite rough, totally undocumented (with the exception of the README), and quite possibly in violation of Rails good coding practices. It can be obtained here:
http://github.com/mcwehner/transact-example
A Plugin/Gem
I'm also beginning work on a plugin or gem that will provide this sort of interface to any Rails application. It's in its formative stages (in fact it's completely devoid of code at the moment), and work on it will likely proceed slowly. Explore it as it develops here:
http://github.com/mcwehner/transact
See also
Railscasts - Rack Middleware
Rails Guides - Rails on Rack