Your free cache has been inactive recently - memcache error - ruby-on-rails

I have suddenly started getting this email for many of my apps.
Dear MemCachier user,
Regarding your cache with ID my-id: Your cache has been inactive for
the last 20 days. It will be deactivated in 10 days.
Note that deactivated caches can always be re-activated on your
analytics dashboard. For more information see
https://www.memcachier.com/documentation#disabled-caches
Cheers, The MemCachier Team
I have received this email for quite a few of my apps hosted on Heroku so far. The one thing they have in common is that they all use the free version of MemCachier in a Rails 4 app. A few of these apps are not used frequently, but most of them are used very frequently. I haven't found this issue documented anywhere, and I want to stop the service from being deactivated.
Any help would be highly appreciated.
Thanks in advance!

I am the author of the code that sends you these mails. As you can imagine, many users create a dev cache to test it but do not deprovision it once they no longer use the cache. Since dev caches are free, there is not much incentive to remove them, and we have thousands of unused caches in our system. For this reason we deactivate dev caches after 30 days of inactivity.
The only way you can prevent this from happening is to use your cache at least once every 30 days. If your apps are active less often, there are a couple of options:
As #FieryCat suggested, you could run a cron job that regularly accesses your caches. Obviously we do not encourage this, because it leads to unused caches being kept alive artificially.
Buy a prod cache, as those never get deactivated. Now, I understand that it is unreasonable to buy caches for many apps that are only used once in a while. What you could do, however, is purchase one 100MB cache (the smallest paid plan) and share it across all your apps. In this case you only need to be careful that each app uses its own namespace (you can set a namespace in Rails with the :namespace option; see the sketch below). If you need help setting up a cache that is shared across all your apps, please contact MemCachier support.
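As a rough illustration of the shared-cache approach, the Rails cache store could be configured along these lines. This is only a minimal sketch assuming the dalli gem and the standard MemCachier environment variables on Heroku; the namespace value is a placeholder that should be unique per app:

    # config/environments/production.rb -- sketch only, adjust to your app
    config.cache_store = :mem_cache_store,
      ENV["MEMCACHIER_SERVERS"].to_s.split(","),
      {
        username:  ENV["MEMCACHIER_USERNAME"],
        password:  ENV["MEMCACHIER_PASSWORD"],
        namespace: "my_first_app"   # placeholder: give each app its own value
      }

With something like that in place, each app's keys are prefixed with its namespace, so several apps can share one 100MB cache without colliding.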
Finally, note that deactivated caches can always be reactivated on the analytics dashboard.

Related

Azure Websites and ASP.NET, how much inactivity before the app pool is recycled causing a recompilation?

I have an MVC3, .NET 4.5 ASP.NET web application hosted on Azure Websites.
I am experimenting with "Free", "Shared" and "Standard" scaling configurations.
I have noticed that after a period of inactivity the compiled code gets dropped from memory, or the app pool gets recycled, forcing a JIT recompile.
My main question is: what is the time period before the compiled code gets dropped, forcing a recompile? I assume this is a result of the application pool recycling? I have come across this on standard shared hosts such as DiscountASP.
My second question is: what is the best approach to minimise this issue, as I would not like my users running into this recompilation lag? My initial thought is precompilation.
Many thanks in advance.
EDIT:
I have a found a related SO post on this here: App pool timeout for azure web sites
However it seems that, as with standard shared hosting, one cannot change App Pool recycling. One has more flexibility with the "Standard" scale option, since it is dedicated. So the likely options at present are:
1) Precompilation
2) Use of "Keep alive" ping sites.
EDIT2:
1) "Keep Alive" approach seems to be working. I have a 10 minute monitor running.
I believe the inactivity period is 20 minutes by default. I haven't used Azure Websites yet, so I'm not familiar with the restrictions on changing settings, but one quick way to keep your site active is to use an uptime monitoring service like Pingdom (you can check one site for free at the time of writing); this will ping your site regularly and prevent it from becoming idle.

Ruby on Rails hosting - Engineyard vs Enterprise-Rails

I've been running a Rails app on 1 big dedicated server. Now for scaling I want to switch to a cloud service hoster and serve the app on 3 instances - App, DB and Redis.
I have had a really bad experience with Heroku performance-wise, and therefore cost-efficiency-wise. So for me two alternatives remain: Engineyard and Enterprise-Rails.
One important point for me is that Engineyard doesn't offer an autoscaling option to handle peaks. On the other hand, Enterprise-Rails doesn't have much documentation; most of the setup is handled by a support crew.
What are the other differences, and which should I use for my website? I don't need much administration work and I am not experienced with it. Basically I just want my site to run as safely, stably, and cost-efficiently as possible without much personal work involved.
I am running a massive Rails app off AWS at this time and I'm really happy with it. Previously I had a number of dedicated boxes that were always causing problems - sooner or later one of them would crash for some reason: RAID failures, database problems, whatnot.
At AWS I use RDS for the database and ElastiCache for caching. I keep all my code on a fat instance that acts as a staging server, and a variable number of reserved instances load the code from it via NFS.
I also use autoscaling - we've prepaid for a number of reserved instances, and autoscaling starts up nodes when CPU usage goes above 60%, then removes them when it goes below 25%. The autoscaling rules are based on CloudWatch alarms that can be set to monitor a particular group of instances, memcache servers, and so on. You even get e-mail and SMS notifications via SNS when certain scaling activities take place, say when more than 100 instances are spawned in less than an hour (a massive traffic spike). The instances also get added straight to the load balancers, by the way, and you don't need to mess with the session store because you can use the sticky-session feature, which is quite nice.
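For reference, a scale-out rule of the kind described above can be wired together with the aws-sdk gem roughly as follows. This is a sketch under assumed names and thresholds (the group name "web-asg", the region, and the 60% trigger are placeholders, not the poster's actual settings):

    # Sketch: attach a CloudWatch CPU alarm to an Auto Scaling scale-out policy.
    require "aws-sdk-autoscaling"
    require "aws-sdk-cloudwatch"

    autoscaling = Aws::AutoScaling::Client.new(region: "us-east-1")
    cloudwatch  = Aws::CloudWatch::Client.new(region: "us-east-1")

    # Add one instance per trigger, with a 5-minute cooldown between actions.
    policy = autoscaling.put_scaling_policy(
      auto_scaling_group_name: "web-asg",          # placeholder group name
      policy_name: "scale-out-on-high-cpu",
      adjustment_type: "ChangeInCapacity",
      scaling_adjustment: 1,
      cooldown: 300
    )

    # Fire the policy when average CPU is at or above 60% for two 5-minute periods.
    cloudwatch.put_metric_alarm(
      alarm_name: "web-asg-high-cpu",
      namespace: "AWS/EC2",
      metric_name: "CPUUtilization",
      dimensions: [{ name: "AutoScalingGroupName", value: "web-asg" }],
      statistic: "Average",
      period: 300,
      evaluation_periods: 2,
      threshold: 60,
      comparison_operator: "GreaterThanOrEqualToThreshold",
      alarm_actions: [policy.policy_arn]
    )

A matching scale-in alarm (for example, below 25% CPU) would be defined the same way with a negative scaling_adjustment.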
Recently I also started using a second launch group with spot instances. This complicated things a bit in terms of CloudWatch rules, but I'm able to save a lot every month, as spot prices are much lower. When the price I bid for spot capacity is no longer enough, my setup falls back to reserved instances.
Even more recently I've also started using CloudFront, which made my app's page assets (about 2 MB of CSS, JS, and some icon sprites) load really fast. Previously I was serving them directly from the instances via the load balancers.
This took about 20 hours to deploy, test and tune for maximum performance and availability.
One of the problems I have with AWS is that there's no support unless you're prepared to foot a bill. They claim some support is offered without a subscription but the only option in the support area is Billing. Ha. Fortunately it's all stable enough not to put me in a position where I'd have to pay for it.
Overall, Rails fits in quite nicely with AWS. I spend less than 2 hours per month doing maintenance, where I was spending over 30 previously. Most important for me is knowing that I can GTFO on a vacation for X months and nothing will cause any trouble - I haven't had a monitoring alert in more than a year.
Later edit: the app is a sports site with a white-labeling feature, lots of users, and lots of administrators working on content in the back-end, and it is database-intensive, as we show market pricing data that should update every few seconds. I had an average load time of about 3 seconds per page with the dedicated servers, which were doing about the same thing - database, memcache, storage, load balancing, web app. Now my average is under 1 second. The monthly bill is about 8 times lower.
While Engine Yard doesn't offer auto-scaling (it is in the pipeline), we do have a fairly easy to use scaling feature that allows you to spin up multiple instances at once in times of need.
The advantages over something like Enterprise-Rails are the full documentation, the choice to deploy from the CLI or the dashboard, and our amazing support team. It's also easier to use Engine Yard, and to move to it from a personal machine or from another cloud setup, than it is to use a service such as AWS directly.

Architecture ideas for Rails 3 application

I was hoping to get some opinions regarding the architecture of our Rails 3 application.
Currently we have a Rails 3.0.7 application that allows users to manage content that is displayed on their TV (promotions, videos, menus, sports stats, etc.) through one of our connected media devices. We have over 1000 (and growing) of these connected devices that poll our system every minute to check for changes to their content, and every 15 minutes to report their stats (e.g. CPU, Memory, etc.).
One of the major advantages of our system is that we, as admins, can change how an individual content item looks/works and the change is distributed to all devices that use it. The disadvantage of this feature is that when we make a change, our system becomes temporarily unusable because all the connected devices ask for their update at about the same time.
Therefore, we plan on re-architecting our application so the content management system is not impacted while the devices are communicating with the application. There are probably dozens of ways to solve this issue. One way would be to have a separate Rails application that exists only for the devices to fetch the content they should display, for admins to monitor them, etc. It could share the models, database, and so on with the current content management system, but that might make the models, migrations, etc. difficult to manage, and I obviously don't want to duplicate the models. It would also be ideal if the content management system could still display device status per account, so account admins can see whether their devices are online.
I'm thinking some type of queue mechanism, such as Resque/Redis, would be a good fit: when changes are made in the content management system we could just queue a job which the device-facing instance could pick up and process.
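To make the queue idea concrete, here is a rough Resque sketch. The job, queue name, and the Device model with a pending_content_id column are hypothetical, just to show the shape of the approach: an admin change enqueues a job, and devices pick up the result on their normal one-minute poll instead of all refetching at once.

    # app/jobs/content_update_job.rb -- illustrative only
    class ContentUpdateJob
      @queue = :content_updates

      # Resque workers call this with whatever arguments were enqueued.
      def self.perform(content_id, device_ids)
        device_ids.each do |device_id|
          # Hypothetical column: flag the device so its next poll returns new content.
          Device.where(id: device_id).update_all(pending_content_id: content_id)
        end
      end
    end

    # In the content management code, after an admin saves a change:
    # Resque.enqueue(ContentUpdateJob, content.id, content.device_ids)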
I wanted to toss this out to the community to get opinions and ideas from other folks that might have worked or are still working with systems that leverage connected devices. Thanks in advance for your contributions. I appreciate it!
Louis
1000+ clients with ~1 request per minute each does not sound like a load that requires architectural changes for normal operation. Generally, a simple one-app architecture will be easier to maintain in the long run, so you should try sticking with it until there are issues that cannot be solved within it.
If performance/responsiveness is the main issue, why not add a caching proxy server to the stack?
Another simple option is to install the app on two servers and use one for admin and the other for client devices. Note that this will only help if the database isn't the bottleneck.
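Building on the caching-proxy suggestion, the polling endpoint itself would need to send cache-friendly headers so the proxy (nginx, Varnish, etc.) can absorb most of the device traffic. A rough sketch, with a hypothetical controller and model not taken from the question:

    # Illustrative polling endpoint; names are hypothetical.
    class DeviceContentController < ApplicationController
      def show
        content = DeviceContent.find(params[:id])

        # Let a reverse proxy serve identical polls for up to 60 seconds and
        # answer unchanged content with 304s, so an admin change does not
        # stampede the Rails app.
        expires_in 60.seconds, public: true
        if stale?(etag: content, last_modified: content.updated_at)
          render json: content
        end
      end
    end

How much this helps depends on how device-specific the responses are; content shared by many devices caches far better than per-device payloads.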

Is paid heroku faster than free version?

I am currently developing a Rails 3.2 app and finding Heroku load times exceptionally slow. Can someone please tell me whether this is also to be expected with a paid server on Heroku?
There's no actual speed difference between paid Heroku and free. As others have mentioned, your app will "spin down" after a period of inactivity on the free service, and this does not happen on any level of paid service. The only other performance difference is that your app can only handle as many concurrent connections as there are dynos - so if two users connect to your free app at the same time, one has to wait for the other's request to finish (this is usually minimal and shouldn't bother anything until you start to get some traffic).
That having been said, you should also consider when your app is slow. If it's slow for the first request, and spry for requests after that, it's the spin-down issue and nothing to worry about. If all requests are slow, that's probably something that needs to be troubleshot in the app (though a paid Heroku account is probably still not the answer).
The free version idles after a period of inactivity. This is probably the slowness you are experiencing. The paid version does not idle.
I went through the same problem a couple of days ago, and it seems like the best fix is to install the New Relic add-on for your Heroku app. The New Relic add-on keeps monitoring your web application (making periodic requests to it), which ensures that the dyno stays active. This effectively cancels out the idling issue.
One thing to note, though: it's probably best to install the add-on only after you have finished a large part of your development and are actively testing the app with beta users.
Also, note that on the paid version of Heroku the dyno never idles (as per their documentation). Hope this helps.
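If you'd rather not pull in an add-on just to generate traffic, a scheduled ping accomplishes the same thing. A minimal sketch using a rake task, run every 10-20 minutes via the Heroku Scheduler add-on or an external cron; the URL is a placeholder:

    # lib/tasks/keepalive.rake -- sketch; the URL below is a placeholder
    require "net/http"

    namespace :keepalive do
      desc "Ping the app so the free dyno does not idle"
      task :ping do
        response = Net::HTTP.get_response(URI("https://your-app.herokuapp.com/"))
        puts "Keepalive ping returned #{response.code}"
      end
    end

Keep in mind this works against the spirit of the free tier, much like the MemCachier dev caches above, so it's best used only while you are actively developing.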

Is Heroku dependable?

I have been hosting a site on Heroku for a few months that is very soon to go into production.
Since I began with them, there have been at least three significant outages, one of which was the disastrous Amazon outage last month and another of which is a multi-hour outage happening today.
I believe in Heroku's vision and I think they are a great company, but I am faced with the ultimate problem: if they can't keep sites up and running, everything I like about them doesn't really matter.
Is Heroku a reliable provider to run a production site on Rails?
Are there any other providers I might look into that have a better reputation for reliability than Heroku?
In my opinion, downtime can happen with almost any provider. What you need to look at is how well or how badly the host handles the downtime, and the effort they make to keep the customer updated about a possible resolution.
In my opinion Heroku is a great place to host your app. The advantages and the ease of deploying there make up for the recent (and rare) downtime, FOR ME.
I have been a user of Heroku with the Amazon RDS plugin for the past 7-8 months, and my conclusion is that there is nothing to appreciate about Heroku except their architecture. Here is why I think so:
Even though it was sold for $250 million+, they were still NOT using Amazon's multiple availability zones feature. Below is a link describing how SmugMug survived the Amazon crash by using that feature.
http://don.blogs.smugmug.com/2011/04/24/how-smugmug-survived-the-amazonpocalypse/
No phone support in the event of issues (not application issues, but Heroku's own) - they have a lot to learn from Rackspace.
With the application I am hosting, people will starve if it goes down for a few hours on a Friday, let alone 60 hours of downtime.
I see intermittent deployment and connectivity issues. Please visit this link for confirmation:
http://status.heroku.com/
I know developers love it because they throw in a cheap web process called a 'dyno' for free.
So far Heroku does not offer multiple availability zone redundancy. If you want something more reliable than Heroku you can create your own EC2 instances in multiple availability zones. Of course this will require significantly more server upkeep, admin, and deployment time.
I have found Heroku to be reliable. I highly recommend it for starting out and validating your idea. I believe that when you start your project you want to get it out quickly (to customers or to the public).
As mentioned in other comments, at some point you might need to switch over to EC2, as you might need zone redundancy, and it might actually become cheaper to run on EC2, especially if you already have an SA in the company.
No. It is not. As a customer I've experienced multiple critical outages. These things happen, and I get that. But what makes Heroku unreliable is their nearly non-existent support when things do go wrong. I would use caution when evaluating Heroku, or any provider for that matter, and really understand what you're paying for. Paying as much as I did for Heroku, I expected more.
As an example, one of their databases went offline early on a Sunday. I was immediately made aware of it, not by Heroku but by our customers and New Relic alerts. I contacted Heroku support just to get the ball rolling as I began to troubleshoot. 24 hours later I had literally no response from Heroku. I could not fork, follow, or take a snapshot of the database as they suggest (because they were experiencing issues), so I basically sat on my hands and waited, hoping that somebody would respond as I frantically attempted to recover somehow, someway.
Was this their fault? No, not at all. I should/could have done something to mitigate this failure. But for as much as I pay for their services each month, I expected something resembling a response to my critical issue.
Our app is hosted by Heroku and went down multiple times over the last 12 months.
Two times it was caused by one of the third-party apps that Heroku offers:
We used Zerigo (recommended by Heroku) for our DNS. This caused our site to go down twice - one time it took over 12 hours to recover. This is absolutely crazy for something like DNS, so we have switched to a more reliable provider.
The Redistogo app went down once.
Heroku does bring some benefits, but be careful about the apps you select.
In my org I build simple SPA productivity apps, and I have been using Heroku to host them for the last year, after migrating away from a physical box server to cloud VMs.
I've lost multiple days to Heroku outages that hinder development. Usually, while running apps stay online and keep working, when Heroku goes down you can't push updates or restart apps.
Let's also not forget the ridiculous times for scheduled maintenance (usually 2 PM EST, midweek... REALLY?).
As of writing this, the logging system for Heroku has been acting up (more or less down) for over 24 hours.
Thankfully my apps aren't mission critical. While I like Heroku's ease of use, it's just not worth this much headache for what is nothing other than an AWS middle-man.
That said, I'm moving over to just pure AWS EC2 instances.
