Using Rails, I have an HTTP endpoint that returns the records (media files) linked to a playlist as a JSON array.
Client-side caching is done by checking the playlist's date and generating the appropriate ETag with Rails' stale?(@playlist) in the /playlist/id/media_files.json response.
Now I would like to implement pagination while keeping client-side caching, at least to some degree.
Is there a smart approach I can take, or am I forced to respond to every request with a 200 and always pump data down the wire to the clients?
How do you handle caching (server and client side) of paginated API responses?
When I use page caching in Rails and have to deal with pagination, I just put the page in the URL. So if, for example, I have a resource that looks like this:
resources :users
I change it to this (adding a new line):
get 'users/page/:page', to: 'users#index'
resources :users
I'm not sure what client side you are using, but the idea here is that you want to cache each page separately. Since the page is now part of the URL, each page gets its own cache entry and your problem should be solved :)
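To tie this back to the ETag approach from the question: fold both the playlist's timestamp and the page number into the cache key, so each page validates independently. A minimal pure-Ruby sketch (page_etag and its arguments are illustrative names, not Rails API):

```ruby
require "digest"
require "time"

# Build an ETag for one page of a playlist's media files.
# `updated_at` and `page` stand in for the playlist's timestamp
# and the requested page number.
def page_etag(playlist_id, updated_at, page)
  Digest::MD5.hexdigest("playlist/#{playlist_id}-#{updated_at.iso8601}-page-#{page}")
end

updated = Time.utc(2013, 5, 1, 12, 0, 0)

# Different pages of the same playlist get different ETags, so a
# client revalidating page 2 never gets page 1's cached copy; when
# the playlist changes, every page's ETag changes at once.
puts page_etag(42, updated, 1)
puts page_etag(42, updated, 2)
```

In a controller you would feed an equivalent key to stale?/fresh_when so unchanged pages still get a 304.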
I have an app where I try to adhere to REST.
The app receives requests for external links that don't belong to the app, so the sole purpose of the action is to redirect the request to the external URL.
My suggestion is to have the following controller/action: redirects_controller#create.
Is my thinking correct or should it be the show action instead?
REST (apart from Rails) is about using the correct HTTP method for the correct action. The Rails part is just using the conventional controller action for a given HTTP method.
So, if you're doing a 301 or 302 redirect to another page, which browsers handle by issuing a GET request to the URL in the redirect response's Location header, do it in a show action. This will allow the user's browser to cache the other page when appropriate, and to not notify the user before redirecting.
(There is a way to redirect POSTs, but you didn't mention it so I expect you're talking about regular 301/302 redirects.)
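A framework-agnostic sketch of what such a show action boils down to: a 302 response whose Location header carries the external URL (the lookup table and helper names here are invented for illustration):

```ruby
# Hypothetical store mapping a slug to its external URL; in the real
# app this would be a database lookup.
EXTERNAL_LINKS = { "docs" => "http://example.com/docs" }

def lookup_external_url(slug)
  EXTERNAL_LINKS.fetch(slug)
end

# A redirect "show" is just status 302 plus a Location header;
# the body can stay empty because browsers follow the header.
def redirect_response(slug)
  [302, { "Location" => lookup_external_url(slug) }, []]
end

status, headers, _body = redirect_response("docs")
puts status               # 302
puts headers["Location"]  # http://example.com/docs
```

In Rails the same thing is a one-liner in the show action: redirect_to the stored URL.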
Coming from a Java background: REST actions map to CRUD operations. Requests that do not change the resource, as in your case where the intent is to redirect to another page, should be tied to the GET verb, i.e. show in your example.
If you were to create a new resource you would use POST.
A more detailed explanation can be found at Level 2 of Richardson's REST Maturity Model.
I am developing a Rails app which relies on a lot of jQuery AJAX requests to the server, all returning JSON. The app has no authentication (it is open to the public). The data in these requests is not sensitive in small chunks, but I want to prevent external agents from accessing the data or automating requests (because of the server load and because of the data itself).
I would ideally like some kind of authentication whereby requests can only be made from JavaScript on the same domain (i.e. by clients on my website), but I don't know how, or whether, this can be done. I am also considering encrypting the query strings and/or the responses.
Thank you.
What do you mean, only your app should request these JSONs? A client will eventually have to trigger an event; otherwise no request will be sent to the server.
Look at the source code of any of your app's pages. You will notice an authenticity token, generated by the protect_from_forgery method in your application controller. From the API docs:
Turn on request forgery protection. Bear in mind that only non-GET, HTML/JavaScript requests are checked.
By default, this is enabled and included in your application controller.
If you really need to check whether a request comes from your own IP, have a look at this great question.
I want to avoid external agents from having access to the data... because of the server load and because of the data itself.
If you're really concerned about security, this other question details how to implement an API key: What's the point of a javascript API key when it can be seen to anyone viewing the js code
You shouldn't solve problems you don't have yet; server load shouldn't be a concern until it actually is a problem. Why not monitor server traffic and implement this feature only if you notice too much load from other agents?
I ended up passing token=$('meta[name=csrf-token]').attr("content") in the request URL and comparing it with session[:_csrf_token] in the controller.
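If you compare tokens yourself like this, a constant-time comparison avoids leaking timing information about where the first mismatched byte is. A minimal pure-Ruby sketch of the technique (similar in spirit to the helpers frameworks use internally):

```ruby
# Constant-time string comparison. Bails out only on length mismatch;
# otherwise XORs every byte, so the run time does not depend on where
# the first difference occurs.
def secure_compare(a, b)
  return false unless a.bytesize == b.bytesize
  diff = 0
  a.bytes.zip(b.bytes) { |x, y| diff |= x ^ y }
  diff.zero?
end

puts secure_compare("s3cr3t-token", "s3cr3t-token")  # true
puts secure_compare("s3cr3t-token", "s3cr3t-tokeN")  # false
```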
def check_api
  redirect_to root_url, :alert => 'effoff' unless request.host =~ /\Ayourdomain\.com\z/
end
That should work to check your domain. I'm not sure you need the JS part, but it's something.
I have a website with a list of information on the front page. The left and right sidebars contain information that rarely changes, so I think I should store them in a cache.
How can I cache them in ASP.NET MVC 3? Any suggestions for doing this?
David Hayden has a blog post on partial page OutputCache
http://davidhayden.com/blog/dave/archive/2011/01/25/partialpageoutputcachingaspnetmvc3.aspx
And Phil Haack has an article on donut hole caching (based on an older version of MVC)
http://haacked.com/archive/2009/05/12/donut-hole-caching.aspx
I don't think you want to store them in the browser cache, but rather in the server cache, so the server doesn't need to regenerate the content each time. Partial page caching on the client would be hard to do unless you were making AJAX calls; in that case you could cache the result of an AJAX call and reuse it in subsequent calls.
You can use OutputCacheAttribute to store the controller action's return value in the web server cache. The next time the action is invoked, the cached data is returned instead of executing the method. Since you also mention caching in the client browser, you might want to look at Google Gears or other solutions for that.
I have seen a few examples of how to create RSS feeds using ASP.NET MVC, either by creating an Action or through an HttpHandler.
I need to authenticate these feeds and am wondering how this is done (in a way supported by RSS readers, rather than just browsing to the page/XML in a browser), and how authentication would differ between an MVC action and an HttpHandler.
The simplest way is to give each client a unique URL, so you always know who is querying the feed:
http://site.com/rss/<some_secret_hash_here>
On the other hand, you can use URLs with a standard user:password pair, like:
http://user:password#site.com/rss/blabla.xml
and just parse the user:password.
I prefer the first one.
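One way to mint those per-client secret hashes without having to store them: derive the token with an HMAC keyed by a server-side secret, so it can be recomputed for validation but not forged. A sketch (SERVER_SECRET, feed_token, and the URL scheme are illustrative assumptions):

```ruby
require "openssl"

# Server-side secret; in real code this would come from configuration,
# never from source control.
SERVER_SECRET = "change-me"

# Derive a stable, unguessable token for one user.
def feed_token(user_id)
  OpenSSL::HMAC.hexdigest(OpenSSL::Digest.new("SHA256"), SERVER_SECRET, "feed-#{user_id}")
end

def feed_url(user_id)
  "http://site.com/rss/#{feed_token(user_id)}"
end

# Validating an incoming request: recompute and compare.
def valid_token?(user_id, token)
  feed_token(user_id) == token
end

puts feed_url(7)
puts valid_token?(7, feed_token(7))  # true
```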
There are multiple ways to do it.
The best approach, in my opinion, is a REST-style flow with credentials in either the path or as POST data (the first approach is preferred).
1st Approach:
Step 1: GET http://www.myserver.com/myfeed.rss/username/query => this returns a random value
Step 2: GET http://www.myserver.com/myfeed.rss/username/hashed-password => the hashed password expected from the client is hash(<random-value> + <password>).
This will serve two purposes:
Original password is never transmitted on the wire
Random value ensures that the hash is unique, and hence, cannot be reused.
You may want to set an expiry date/time for the username + random-value combination with other IP related security actions to ensure that session hijack cannot happen.
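A pure-Ruby sketch of the two-step exchange above, with in-memory hashes standing in for real credential and session storage (all names are illustrative):

```ruby
require "digest"
require "securerandom"

# Hypothetical stores: passwords would live in a database,
# challenges in a session store with an expiry.
PASSWORDS = { "alice" => "opensesame" }
CHALLENGES = {}

# Step 1: GET .../username/query -> hand out a one-time random value.
def issue_challenge(username)
  CHALLENGES[username] = SecureRandom.hex(16)
end

# Step 2: GET .../username/hashed-password -> verify hash(random + password)
# and discard the nonce so it cannot be replayed.
def verify(username, hashed)
  nonce = CHALLENGES.delete(username) or return false
  hashed == Digest::SHA256.hexdigest(nonce + PASSWORDS.fetch(username))
end

nonce = issue_challenge("alice")
client_hash = Digest::SHA256.hexdigest(nonce + "opensesame")
puts verify("alice", client_hash)  # true
puts verify("alice", client_hash)  # false: the nonce is single-use
```

The password never crosses the wire, and deleting the nonce on first use is what prevents replaying a captured hash.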
EDIT:
Use an HTTP handler for path="myfeed.rss" with verbs="GET" in web.config.
and supported by RSS readers rather than just browsing to the page/xml through a browser
I would expect most readers to support typical (Basic and Digest) authentication; e.g., Twitter's feeds require authentication.
I have a controller action which could benefit from caching. However, when I turn on action caching via the usual:
caches_action :myaction, :expires_in=>15.minutes
...the response never gets cached. It looks like this is because the action is invoked with an HTTP POST; for similar actions invoked with GET, caching works fine.
I realise using a POST for this action is probably not great style and breaks resource routing conventions - presumably this is also why the response isn't being cached, even though it could be. However, for now I'm stuck with it, as this is what the client currently does and I can't change it easily.
So, is there a way to force caching for this method even though it is accessed via POST?
edit: I should clarify that the POST has no side effects, so it is safe to cache the action. It really should have been a GET in the first place; it just isn't, and can't easily be changed for now. Also, it does not matter here that browsers or proxies won't cache the response.
Are the contents of the POST data the same on every post? I suspect they aren't, and this is why the action won't cache.
A couple of ways to deal with this:
1) Forget about caches_action and use Rails.cache directly inside your controller to cache the expensive parts of your controller code
2) Use Rack middleware / a Metal endpoint to receive the POST data from the other application and shoehorn the data into the shape you want.
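To illustrate option 1, here is a tiny in-memory stand-in for the Rails.cache.fetch pattern: compute the expensive value once, then serve it from the cache (TinyCache is an illustration, not the Rails API; real code would use Rails.cache with an :expires_in option):

```ruby
# Minimal fetch-style cache: look up a key, and on a miss compute
# and store the value from the given block.
class TinyCache
  def initialize
    @store = {}
  end

  def fetch(key)
    @store.fetch(key) { @store[key] = yield }
  end
end

cache = TinyCache.new
calls = 0
expensive = -> { calls += 1; "result" }

cache.fetch("listings") { expensive.call }
cache.fetch("listings") { expensive.call }
puts calls  # 1: the second fetch hit the cache
```

The point is that this works regardless of the request's HTTP verb, since you cache inside the action rather than around it.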
edit:
I'm running Rails 2.3.3 and I can confirm that it does cache POST requests.
To check while you're developing, make sure you have set perform_caching to true in development.rb:
config.action_controller.perform_caching = true
Also make sure it's the same in production.rb.
I tested this scenario with the following in my controller :
caches_action :index
def index
@listings = Listing.find(:all)
end
Using both GET and POST requests, this cached as expected.
I also tried setting the HTTP header Cache-Control: no-cache on my POST client, and the action still cached.
If you're running OS X, this tool is handy for creating GET and POST requests: http://ditchnet.org/httpclient/