Rails page caching with intra-page administration

I'd love to use page caching on a Rails site I run. The information on each page is mostly constant, but the queries that need to be run to collect it are complicated and can be slow in some cases. The only obstacle to using page caching is that the administrative interface is built into the main site, so that admin operations can be performed without leaving the page of interest.
I'm using Apache + mod_rails (Passenger). Is there a way to indicate to Apache that .html files should be ignored when the current user has either a session variable or a cookie named 'admin'? The session variable need not be evaluated by Apache for validity (since it will be evaluated by Rails in this case).

Is there a way to indicate to Apache that .html files should be ignored when the current user has either a session variable or a cookie named 'admin'?
I believe it is not really possible, and even if it is, I'd guess it would be very tricky.
Instead you can use Action Caching. A quote from the docs:
One of the issues with page caching is that you cannot use it for pages that require checking code to determine whether the user should be permitted access.
This sounds like exactly your case.
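As a rough sketch of how that looks in a controller (PagesController and the check_admin filter are made up for illustration), action caching still runs the request through Rails, so your filters can authenticate before the cached body is served:

# Filters run on every request, even on a cache hit, because
# action caching serves the cached body only after them.
class PagesController < ApplicationController
  before_filter :check_admin
  caches_action :show

  def show
    @page = Page.find(params[:id])
  end

  private

  # Hypothetical admin check; adapt to your authentication setup.
  def check_admin
    @admin = session[:admin].present?
  end
end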
But if you still really need page caching via the web server, I think you'd be better off implementing separate pages for admin and non-admin users.
The reason is that with page caching enabled, Rails doesn't process the request at all, so there is no way to know whether the user is authenticated.
You could probably overcome this using Dynamic Page Caching. The idea is basically to add the "admin" parts from JavaScript. I don't personally like it much, though.
One more update: a quick search brought me to this article. The idea is to cache the page conditionally and plug in mod_rewrite to serve the admin pages. It will work for you, but it's a pretty dirty solution.
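For what it's worth, the Rails half of that approach is small; caches_page forwards options such as :if to its underlying after_filter, so you can skip writing the cache file when an admin is signed in (the session key below is an assumption about how your admin flag is stored). The mod_rewrite half then bypasses the cached .html files whenever the admin cookie is present.

# Only write the cached .html file when no admin is signed in,
# so admin requests always fall through to Rails.
class PagesController < ApplicationController
  caches_page :show, :if => Proc.new { |c| c.session[:admin].blank? }

  def show
    @page = Page.find(params[:id])
  end
end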

Site Map and other site metadata updates won't show up due to caching issue

Because I am using OWIN authentication, I had to put the CMS on a different website. The CMS is at cms.domain.com, and in my VS project for domain.com I simply pointed the "piranha" connection string to the right database. So far everything was working perfectly; I only had to change the MediaProvider to deal with the domain name issues for the two retrieve methods of IMediaProvider.
Now I'm trying to integrate the menu using the @UI.Menu helper, but it looks like the helpers are not using the database from the Web.config file (I have no problem retrieving posts from C#). I only see the Start page listed. To confirm this, I also tried changing the site description from cms.domain.com/manager and displaying it with the @Site.Description helper, but it still shows the default site description, so it really looks like there is another database around.
Where does the data used by the HTML helpers come from, and how can I fix this?
UPDATE: It seems that it is actually a caching issue; it has nothing to do with the connection string.
Piranha CMS caches a lot of meta-data for performance and to minimize round-trips to the database. The site information and the sitemap are two of these things, as they will most likely be used in every page rendering.
The default cache implementation uses the IIS cache, which is per application pool. The cache is invalidated when data is modified in the manager interface, but if, for example, you run the manager interface in another application pool (site/application), the caching will fail, causing the kind of errors you describe.
Not being sure how your application is set up, this is my primary guess. If you are in fact running the client web and the manager in different application pools, and you need to keep doing so, you should try one of the following approaches:
Implement a distributed cache provider
Set the system param CACHE_SERVER_EXPIRES to 1
Setting the param to 1 invalidates all server cache after one minute. This is of course not recommended if you are expecting a lot of traffic to your site, as it will more or less disable the caching mechanism.
I hope this helps you.
Best regards
Håkan
All of a sudden it's now working. The only thing I remember doing is deleting duplicate entries in the dbo.page table. It's all working now, though that doesn't explain why the site description wasn't retrieved properly either. But never mind, I hope this will help someone else. I hope custom authentication will be built in in the next release of Piranha CMS!

Multisite application in Rails (like shopify.com)

I would like to create a web app like shopify.com.
Users can pick a subdomain (or domain) and a theme, and have their own store.
How can I do this?
Should I create a main application, deploy it automatically as a new standalone instance for each user, and update it via git?
I'm using Rails 3.
Thanks for your advice.
Based on the replies:
If I choose to use only one application (without multiple instances) and give each user his subdomain, it will look like their own website. But everything will be in one database (is that a good idea?). And how can I have multiple themes in a Rails app?
Take a look at LocomotiveCMS, specifically the routing system. Locomotive actually hosts multiple sites inside a single Rails application. It does this by inspecting the request URL when it comes in and setting the current_site variable to the site set up to handle that domain. The current_site is then just an object containing all the pages, content, settings, etc. for the specific site being served up.
So to answer your question, I think a good solution is to give your rails app the ability to serve up multiple sites based on the domain. It's not that hard, and it seems less fragile to me than trying to automatically deploy new instances of an app.
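A minimal sketch of that lookup, assuming a Site model with a domain column (the names here are mine, not LocomotiveCMS's):

class ApplicationController < ActionController::Base
  before_filter :set_current_site

  private

  # Pick the site whose domain matches the incoming request;
  # everything rendered afterwards is scoped to it.
  def set_current_site
    @current_site = Site.find_by_domain!(request.host)
  end

  def current_site
    @current_site
  end
  helper_method :current_site
end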
As far as I understand, you want to let your users have their own subdomain and a different theme, but with the same functionality, right? Users just need the feel of something of their own.
Well, you definitely need a single application that supports multiple subdomains.
A quick googling gave me [ http://37signals.com/svn/posts/1512-how-to-do-basecamp-style-subdomains-in-rails ]. Maybe you can get some insights from there.
For example, if your service is http://www.myfi.com, a brief outline could be:
When a customer registers, you should let him choose his subdomain. The newly created account will then be associated with that subdomain via a URL, say http://customer1.myfi.com.
You should register the wildcard domain *.myfi.com so that a request to anysubdomain.myfi.com from anywhere reaches your application.
Then you should identify the subdomain (customer1) from the URL and store it in the session.
Now when someone tries to log in, you must verify the account in the context of that subdomain's account.
In fact, all subsequent actions need to be handled in the context of the subdomain's account, along the lines of the sketch below.
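Sketched in Rails 3 terms (the Account model and its subdomain column are assumptions):

class ApplicationController < ActionController::Base
  before_filter :set_current_account

  private

  # request.subdomain is "customer1" for customer1.myfi.com
  def set_current_account
    @current_account = Account.find_by_subdomain!(request.subdomain)
    session[:account_id] = @current_account.id
  end
end

# Logins are then verified against that account only, e.g.:
#   @current_account.users.find_by_email(params[:email])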
I've just tried to give a glimpse of the implementation here. If you're confused about something specific, share that too.
Edit:
Whenever you are thinking about multiple themes, you must have a simple design that is completely driven by CSS and JS. The app/views files should contain only content and HTML nodes with class names or IDs.
Generally a UI designer can offer more helpful ideas about how to build such a theming mechanism, but the gist is that, based on the theme the customer chose, you load different CSS and JS, as in the sketch below.
Actually the strategies can become indefinitely sophisticated and scalable, but it's always wise to start with something easy. Better ideas will then evolve on their own.
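For example, if each account stores the name of its chosen theme (a hypothetical theme string column), the layout can point the asset tags at theme-specific files:

<%# app/views/layouts/application.html.erb -- a sketch %>
<%= stylesheet_link_tag current_account.theme %>
<%= javascript_include_tag current_account.theme %>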

Rails gem or code to flag malicious user behavior in real-time?

Is there an existing gem or code that will flag malicious user behavior in real time? I.e., not something where I manually comb the log files for 404s or suspicious accesses (e.g. SQL injection attempts, JS inserted into text fields, etc.)?
For instance, today I noticed requests like this in the log.
ActionController::RoutingError (No route matches "///scripts/setup.php" with {:method=>:get}):
I'd love to know in real time, via alerts or emails, if someone is scanning the site for vulnerabilities, i.e. to differentiate innocuous 404s from malicious 404s, to flag SQL injections or JS injections, etc.
Are there existing gems or code to do this or must I roll my own?
Thanks for your thoughts.
If there were such a system and your site was even moderately popular, you would get a lot of real-time updates.
The reality is that people will try to put JS in their profiles and submit weird stuff to your site, out of both curiosity and malice. You could scan your log files for this regularly to make sure your actions and views are protected against such attacks, but that really isn't manageable in the long run.
The problem is, how do you know if your input is malicious? Depending on your site the input you just mentioned could be perfectly valid.
If you're worried about sql injection you should document and thoroughly test your find_by_sql statements.
If you're worried about un-escaped user input you could consider moving to erubis and enabling escaping by default.
A more pressing problem for most sites is misuse of the functionality you provide, for example sending too many messages, posting too many questions, voting too often, scraping your content, etc. For this you can configure your web server to stop a user from making too many requests (you can set this up with nginx really easily) and impose rate limiting in your application code.
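A naive in-app version might look like the sketch below (the limit, window, and key scheme are all made up, and the read/write pair is not atomic, so treat it as best-effort); nginx's limit_req module does the same job at the web server level:

class ApplicationController < ActionController::Base
  before_filter :throttle_requests

  private

  # Roughly count requests per IP over a one-minute window.
  # Rendering inside a before_filter halts the request.
  def throttle_requests
    key = "req-count:#{request.remote_ip}"
    count = Rails.cache.read(key).to_i + 1
    Rails.cache.write(key, count, :expires_in => 1.minute)
    render :text => "Rate limit exceeded", :status => 503 if count > 100
  end
end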

Hybrid Rails Caching Options, am I reinventing something?

Well, not really reinventing. We have a large content-based website which handles load (after we fixed the SQL pooling issue) up to a certain point, and then we just run out of steam. A lot of this is due to bad code we are trying to fix up, but a lot is simply due to the level of requests, etc.
We were considering page caching because, well, it's damn quick (yep... :D), but that doesn't work for us since certain fragments within the page are specific to the logged-in user. But not all hope is lost...
I was wondering if it would be ideal to do the following:
Page level caching, with sweepers to clean out the pages when content is updated.
Replace the user-specific fragments with placeholders (and perhaps generic content like 'View your Account' or 'Sign Up Here').
When the user's page loads, fire off an async request (AJAX, or as some would call it, AJAH) for the 'dynamic' fragment, then replace the placeholders with its content.
The main issue I can see with this is that users with JS turned off wouldn't see the content, but I honestly don't think we would be too affected by this, and IMHO people who disable javascript, for the most part, are idiots (yes, I said it!).
I'd also be interested to know if I'm (no doubt) reinventing something, and if anyone can point me to a site which is already doing something like this it would be appreciated.
Thanks awesome SO community!
Ryan Bates covered this technique in Railscast 169: Dynamic Page Caching. It's worth taking a look at.
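The server side of the technique is small; here's a sketch (logged_in? and current_user come from whatever authentication you use, and the partial names are made up):

class UsersController < ApplicationController
  # Returns just the per-user fragment; the page-cached HTML
  # contains a placeholder that client-side JS swaps for this.
  def account_fragment
    if logged_in?
      render :partial => "shared/account_box"
    else
      render :partial => "shared/sign_up_box"
    end
  end
end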
Have you thought about server-side fragment caching? I've used it extensively, and it couldn't be more awesome. You could simply cache the content 'fragments' and render whatever depends on the logged-in user normally.
There are many good resources for fragment caching, I'd start at the documentation:
http://api.rubyonrails.org/classes/ActionController/Caching/Fragments.html
Also very good from the Scaling Rails series:
http://railslab.newrelic.com/2009/02/09/episode-7-fragment-caching
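As a quick sketch of fragment caching in a view (the article model and current_user helper are assumptions):

<%# Cache the expensive, shared part of the page... %>
<% cache "article-#{@article.id}" do %>
  <%= render @article %>
<% end %>

<%# ...and render the user-specific bits normally. %>
<p>Signed in as <%= current_user.name %></p>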
When serving static or cached content starts to slow down the real working processes, put a reverse proxy in front of your application. It will free up processes to do real work and reduce slowdowns caused by file-system caches becoming inefficient, and it will make "client-side caching" shareable among multiple visitors. Have a look at the fantastic screencast series about caching and scaling by New Relic: http://railslab.newrelic.com/scaling-rails

How do I cache subdomain-specific views in Rails?

Our Rails app has some views that get heavy traffic but don't change too often (weekly at most). I want to cache these views, but we use subdomains to specify user accounts.
I've seen a few different blog posts on how to cache views based on subdomains. Just wondering what the preferred method is.
Also, one of the pages we need to cache is XML output. I don't know if that matters at all.
You just need to inject the things that change into your cache key when you cache. This may require using action caching instead of page caching. It sounds like you would need to inject the user id or the subdomain when generating the cache key for your content.
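With action caching that injection goes through :cache_path, which accepts a Proc that is given the controller (ReportsController and the key scheme here are illustrative); including the request format keeps the cached XML and HTML variants apart:

class ReportsController < ApplicationController
  caches_action :show, :cache_path => Proc.new { |c|
    # e.g. "customer1/reports/42.xml"
    "#{c.request.subdomain}/reports/#{c.params[:id]}.#{c.request.format.to_sym}"
  }
end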
