Is paid Heroku faster than the free version?

I am currently developing a Rails 3.2 app and finding Heroku load times exceptionally slow. Can someone please tell me if this is also what should be expected with a paid server on Heroku?

There's no actual speed difference between paid Heroku and free. As others have mentioned, your app will "spin down" after a period of inactivity on the free service, and this does not happen on any level of paid service. The only other performance difference is that your app can only handle as many concurrent connections as there are dynos - so if two users connect to your free app at the same time, one has to wait for the other's request to finish (this is usually minimal and shouldn't bother anything until you start to get some traffic).
That said, you should also consider when your app is slow. If it's slow for the first request and spry for requests after that, it's the spin-down issue and nothing to worry about. If all requests are slow, that's probably something that needs troubleshooting in the app itself (though a paid Heroku account is probably still not the answer).
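For what it's worth, one common way around the one-request-at-a-time limit in that era was to run several Unicorn workers inside each dyno. A minimal sketch of such a config, assuming you switch the app to Unicorn; the worker count and the WEB_CONCURRENCY variable are illustrative and depend on your dyno's memory:

    # config/unicorn.rb
    worker_processes Integer(ENV['WEB_CONCURRENCY'] || 3)  # several workers per dyno
    timeout 30                                              # roughly match Heroku's 30s router timeout
    preload_app true

    before_fork do |server, worker|
      # Drop the master's DB connection so each worker opens its own.
      defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
    end

    after_fork do |server, worker|
      defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
    end

With something like this in place a single dyno can serve a handful of requests concurrently, though the free-tier spin-down behaviour described above is unchanged.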

The free version idles after a period of inactivity. This is probably the slowness you are experiencing. The paid version does not idle.

I went through the same problem a couple of days ago, and it seems the best fix for this is to install the New Relic add-on on your Heroku app. The New Relic add-on keeps monitoring your web application (making periodic requests to it), which keeps the dyno active and effectively cancels out the idling issue.
One thing to note, though: it's probably best to install the add-on only after you have finished a large part of your development and are actively testing the app with beta users.
Also, note that on the paid version of Heroku the dyno never idles (as per their documentation). Hope this helps.
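If you'd rather not rely on New Relic's monitoring requests, the same effect can be had by pinging the app yourself. A rough sketch using a rake task run every 10-20 minutes by the Heroku Scheduler add-on; the task name and the MY_APP_URL variable are made up for illustration:

    # lib/tasks/keep_alive.rake
    require 'net/http'

    desc 'Ping the app so the free dyno does not idle out'
    task :keep_alive do
      uri = URI.parse(ENV.fetch('MY_APP_URL', 'http://your-app.herokuapp.com/'))
      Net::HTTP.get_response(uri)  # any request through the router counts as activity
    end

As with the New Relic approach, this only masks the free-tier idling; it does not make individual requests any faster.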

Related

How to isolate worker dynos from web dynos on heroku?

We have developed a Rails app on Heroku; we have around 3 web dynos and 2-3 worker dynos. We have some exporting and importing functionality that uses a lot of our worker dynos, and when that happens everything crashes and we get an App Error on the website.
Sentry tells us that it is due to a timeout. We are trying to find out which part of our software is taking so much worker time. The problem is that it affects all of our users, including some who only use web-layer functionality.
But I was wondering: is there a way to isolate our worker dyno problems from the web dynos' work? I mean, is there a way to keep our site from crashing when one user exports a large amount of data and saturates the workers?
Thanks in advance!
Regards,
Gonzalo
Thank you for the answers; let me give you some related info:
- We use delayed_job for the workers, which is async.
- Our database supports 120 connections, and we never saw it completely busy. The current AWS RDS instance only reached 24% utilization, and we saw only 28 concurrent connections on the day of the crash.
- New Relic did not indicate any delay in the DB.
- The web dynos start to generate timeouts in many functionalities; if the crash is not related to the workers, it may be caused by some functionality that is not in jobs.
Update:
- We have set some limits on our exports; even though those run in jobs, they were affecting our web dynos and producing App Errors. When we set the limits, the App Errors were dramatically reduced.
- We are still searching for any other unoptimized functionality.
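For anyone landing here with the same question: since the app already uses delayed_job, one way to keep heavy exports from starving everything else is to pin them to their own queue and run a dedicated worker process for it. A rough sketch; ExportJob, the queue names, and the worker split are hypothetical:

    # Enqueue heavy exports on a dedicated queue instead of the default one.
    Delayed::Job.enqueue(ExportJob.new(user.id), queue: 'exports')
    # or, via the delay helper on an existing object:
    report.delay(queue: 'exports').generate!

    # Cap how long any single job may run so one runaway export cannot hold a
    # worker indefinitely (e.g. in config/initializers/delayed_job.rb).
    Delayed::Worker.max_run_time = 15.minutes

    # Then run separate worker dynos per queue, e.g. in the Procfile:
    #   worker:        bundle exec rake jobs:work QUEUES=default
    #   export_worker: bundle exec rake jobs:work QUEUES=exports

This only protects the worker pool; if web requests themselves are timing out, that points at slow code in the web layer rather than at the workers, which matches the update above.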

Your free cache has been inactive recently - memcache error

I am suddenly getting this error constantly, for many of my apps.
Dear MemCachier user,
Regarding your cache with ID my-id: Your cache has been inactive for
the last 20 days. It will be deactivated in 10 days.
Note that deactivated caches can always be re-activated on your
analytics dashboard. For more information see
https://www.memcachier.com/documentation#disabled-caches
Cheers, The MemCachier Team
I have received this email for quite a few of my apps hosted on Heroku so far. One thing common among them is that they are all using the free version of MemCachier in a Rails 4 app. A few of these apps are not used frequently, but most of them are used very frequently. I haven't found this issue documented anywhere, and I want to stop the service from being deactivated.
Any help would be highly appreciated.
Thanks in advance!
I am the author of the code that sends you these emails. As you can imagine, many users create a dev cache to test it but do not deprovision it once they no longer use it. Since dev caches are free, there is not really an incentive to remove them, and we have thousands of unused caches in our system. For this reason we deactivate dev caches after 30 days of inactivity.
The only way you can prevent this from happening is to use your cache at least once every 30 days. If your apps are active less often there are a couple of options:
As #FieryCat suggested, you could run a cron job that regularly accesses your caches. Obviously we do not encourage this, because it leads to unused caches being artificially kept alive.
Buy a prod cache, as those never get deactivated. Now, I understand that it is unreasonable to buy caches for many apps that only use them once in a while. What you could do, however, is purchase one 100MB cache (the smallest paid plan) and share it across all your apps. In this case you only need to be careful that each app uses its own namespace (you can set a namespace in Rails with the :namespace option; a rough configuration sketch follows below). If you need help setting up a cache that is shared across all your apps, please contact MemCachier support.
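A minimal sketch of what that shared-cache setup might look like in each Rails 4 app, assuming the usual dalli gem and the MEMCACHIER_* config vars; only the namespace value differs per app:

    # config/environments/production.rb
    config.cache_store = :dalli_store,
                         (ENV['MEMCACHIER_SERVERS'] || '').split(','),
                         { namespace: 'app1',   # use a different namespace in each app
                           username:  ENV['MEMCACHIER_USERNAME'],
                           password:  ENV['MEMCACHIER_PASSWORD'],
                           failover:  true }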
Finally, note that deactivated caches can always be reactivated on the analytics dashboard.

How does adding a 3rd-party logging service affect how much I have to pay Heroku?

This might sound very newbie, but I've just added a centralized logging service (Splunkstorm, free version) to my Rails app on Heroku and it completely changed my life. I don't know why I never thought of this before.
I can just read all the logs from a web interface without running heroku logs --tail, which spawns a new dyno every time I do it.
Which makes me curious: does adding this type of logging service affect how much I have to pay Heroku? I mean, it's sending out packets every time something happens.
Nope!
Bandwidth is included in the dyno pricing (including the one you get for free).
There is a soft limit at 2TB of bandwidth, but you're unlikely to come anywhere near that from logging.

Zero downtime on Heroku

Is it possible to do something like the GitHub zero-downtime deploy on Heroku using Unicorn on the Cedar stack?
I'm not entirely sure how the restart works on Heroku or what control we have over restarting processes, but I like the idea of zero-downtime deploys, and up until now, from what I've read, it hasn't seemed possible.
There are a few things that would be required for this to work.
First off, we'd need backwards compatible migrations. I leave that up to our team to figure out.
Secondly, we'd want to migrate the db right after a push, but before the restart (assuming our migrations are fully backwards compatible, this should not affect anything)
Thirdly, we'd want to instruct Unicorn to launch a new master process and fork some workers, then swap the PIDs and gracefully shut down the old process/workers
I've scoured the docs but I can't find anything that would indicate this is possible on Heroku. Any thoughts?
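For context, the "swap the PIDs" step above is the standard Unicorn USR2 reload pattern used outside Heroku: you send USR2 to the running master, it re-executes itself from the new code, and a hook retires the old master once the new workers are up. A rough sketch (the pid path is illustrative); the answers below explain why Heroku's dyno manager doesn't expose this kind of signal control:

    # config/unicorn.rb
    pid 'tmp/pids/unicorn.pid'
    preload_app true

    before_fork do |server, worker|
      # After a USR2 re-exec the old master's pidfile gets the .oldbin suffix;
      # if it is still around, ask the old master to quit gracefully.
      old_pid = "#{server.config[:pid]}.oldbin"
      if File.exist?(old_pid) && server.pid != old_pid
        begin
          Process.kill(:QUIT, File.read(old_pid).to_i)
        rescue Errno::ENOENT, Errno::ESRCH
          # the old master has already exited
        end
      end
    end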
I can't address migrations, but as for restarting processes and avoiding wait time:
There is a beta feature for Heroku called preboot. After a deploy, it boots your new dynos first and waits a while before switching traffic over and killing the old ones:
https://devcenter.heroku.com/articles/labs-preboot/
I also wrote a blog post that has some measurements on my app's performance improvements using this feature:
http://ylan.segal-family.com/blog/2012/08/27/deploy-to-heroku-with-near-zero-downtime/
You might be interested in their feature called preboot.
Taken from their documentation:
This feature provides seamless deploys by booting web dynos with new code before killing existing web dynos.
Some apps take a long time to boot up, and this can cause unacceptable delays in serving HTTP requests during deployment.
There are a few caveats:
You must have at least two web dynos to use this feature. If you have your web process type scaled to 1 or 0, preboot will be disabled.
Whoever is doing the deployment will have to wait a few minutes before the new code starts serving user requests; this happens later than it would without preboot (but in the meantime, user requests are still served promptly by the old dynos).
There will be a short period (a minute or two) where heroku ps shows the status of the new code, but user requests are still being served by old code.
There is much more information about it, so refer to their documentation.
It is possible, but it requires a fair amount of forward planning. As of Rails 3.1 there are three tasks that need carrying out:
Upload the new code
Run any database migrations
Sync the assets
Uploading code and restarting is fairly straightforward; the main problem lies with the other two, but the way around them is pretty much the same.
Essentially you need to:
Make the code compatible with the migration you need to run
Run the migration, and remove any code written specifically for it
For instance, if you want to remove a column, you'll need to deploy a patch telling ActiveRecord to ignore it first. Only then can you deploy the migration and clean up that patch.
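A rough sketch of that two-step dance for a hypothetical legacy_field column on User (recent Rails versions have self.ignored_columns for this; in the Rails 3.x era the usual workaround was to filter the column out of the model's column list by hand):

    # Step 1: deploy this patch first, so running code stops touching legacy_field.
    class User < ActiveRecord::Base
      def self.columns
        super.reject { |c| c.name == 'legacy_field' }
      end
    end

    # Step 2: in a later deploy, run the migration and delete the patch above.
    class RemoveLegacyFieldFromUsers < ActiveRecord::Migration
      def up
        remove_column :users, :legacy_field
      end
    end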
In short, you need to consider your database and code compatibility and work around them so that the two can overlap in terms of versioning.
An alternative to this method might be to run two versions of the application on Heroku at the same time. When you deploy, switch the domain to the other version, do the deploy, and switch it back again. This will help in most instances, but again, database compatibility is an issue.
Personally, I would say that if your deployments are significant enough to require this sort of consideration, taking parts of the application offline is probably the safest answer. Breaking an application up into several smaller applications can help mitigate this and is a mechanism I use regularly.
No, this is currently not possible using Unicorn on Heroku Cedar. I've been bugging Heroku about this for weeks.
Here was Heroku Support's reply to my email on March 8, 2012:
Hi, you could enable maintenance mode when doing a deploy, at least your users would see a maintenance page instead of an error, and also request queue wouldn't build up.
We're definitely aware this is a pain and we're working to offer rolling / zero-downtime deploys in the future. We have no ETA to announce, though.

Is Heroku dependable?

I have been hosting a site on Heroku for a few months that is very soon to go into production.
Since I began with them, there have been at least three significant outages, one of which was the disastrous Amazon outage last month and another of which is a multi-hour outage happening today.
I believe in Heroku's vision and I think they are a great company, but I am faced with the ultimate problem: if they can't keep sites up and running, everything I like about them doesn't really matter.
Is Heroku a reliable provider to run a production site on Rails?
Are there any other providers I might look into that have a better reputation for reliability than Heroku?
In my opinion, downtime can happen with almost any provider. What you need to look at is how well or how badly the host handles the downtime, and the effort they make to keep customers updated on a resolution.
In my opinion, Heroku is a great place to host your app. The advantages and ease of deploying there make up for the recent (and rare) downtime, FOR ME.
I have been a user of Heroku with the Amazon RDS add-on for the past 7-8 months, and my conclusion is that there is nothing to appreciate about Heroku except their architecture. Here is why I think so:
Even though it was sold for $250 million+, they were still NOT using Amazon's multiple availability zone feature. Below is a link describing how SmugMug survived the Amazon crash by using multiple availability zones:
http://don.blogs.smugmug.com/2011/04/24/how-smugmug-survived-the-amazonpocalypse/
No phone support in the event of issues (Heroku's issues, not the application's); there is a lot to learn from Rackspace here.
With the application I am hosting, people will starve if it goes down for a few hours on a Friday, let alone 60 hours of downtime.
I see intermittent deployment and connectivity issues. Please visit this link for a confirmation:
http://status.heroku.com/
I know developers love it because they throw in a cheap web process called a 'dyno' for free.
So far Heroku does not offer multiple availability zone redundancy. If you want something more reliable than Heroku you can create your own EC2 instances in multiple availability zones. Of course this will require significantly more server upkeep, admin, and deployment time.
I have found Heroku to be reliable. I highly recommend it for starting out and validating your idea. I believe that when you start your project, you want to get it out quickly (to customers or to the public).
As mentioned in other comments, at some point you might need to switch over to EC2, as you might need zone redundancy, and it might actually become cheaper to run on EC2, especially if you already have an SA in the company.
No, it is not. As a customer I've experienced multiple critical outages. These things happen, and I get that. But what makes Heroku unreliable is their nearly non-existent support when things do go wrong. I would use caution when evaluating Heroku, or any provider for that matter, and really understand what you're paying for. Paying as much as I did for Heroku, I expected more.
As an example, one of their databases went offline early on a Sunday. I was immediately made aware of it, not by Heroku but by our customers and New Relic alerts. I contacted Heroku support just to get the ball rolling as I began to troubleshoot. 24 hours later I had literally no response from Heroku. I could not fork, follow, or take a snapshot of the database as they suggest (because they were experiencing issues), so I basically sat on my hands and waited, hoping that somebody would respond as I frantically tried to recover somehow, someway.
Was this their fault? No, not at all. I should/could have done something to mitigate this failure. But given how much I pay for their services each month, I expected something resembling a response to my critical issue.
Our app is hosted on Heroku and went down multiple times over the last 12 months.
Two of those times were caused by third-party add-ons that Heroku offers:
We used Zerigo (recommended by Heroku) for our DNS. This has caused our site to go down twice; one time it took over 12 hours to recover. This is absolutely crazy for something like DNS, so we have switched to a more reliable provider.
The Redis To Go add-on went down once.
Heroku does bring some benefits, but be careful about the add-ons you select.
In my org I build simple SPA productivity apps, and I have been using Heroku to host them for the last year, after migrating away from a physical box server to cloud VMs.
I've lost multiple days to Heroku outages that hinder development. Usually the running apps stay online and keep working, but when Heroku goes down you can't push updates or restart apps.
Let's also not forget the ridiculous times for scheduled maintenance (usually 2 PM EST, midweek... really?).
As of writing this, the logging system for Heroku has been acting up (more or less down) for over 24 hours.
Thankfully my apps aren't mission critical. While I like Heroku's ease of use, it's just not worth this much headache for what is nothing other than an AWS middle-man.
That said, I'm moving over to just pure AWS EC2 instances.
