I'm new to Heroku, and would like to have some idea of how I can go about guesstimating the number of dynos that might be needed for a RoR app (need to give some number to customer).
The app is already running on the free 1 dyno. Here's some basic info about the app:
App is mainly database reads, with very little writes
Possibly heavy DB load, with queries involving distance calculations (from lat-long, using gmaps4rails)
From some basic testing with WAPT (eval version), it looks like a typical search request takes a min. ~1.3s, avg. ~2s, max. 4-5s
Again from WAPT testing with up to 20 concurrent users, and from watching the Heroku logs, I don't seem to be seeing any requests being queued
Other requests are largely static assets
How would I get some rough idea of the number of dynos needed, to handle X concurrent users, or how many concurrent users the single dyno can likely handle?
Update: Heroku has changed its dyno pricing (https://www.heroku.com/pricing), so this information may no longer be accurate.
According to this article http://neilmiddleton-eu.herokuapp.com/getting-more-from-your-heroku-dynos/, if you use Unicorn, 1 dyno can handle 1 million requests per day (at 100ms per request). So if you host all media on S3 and 1 page view needs 3 requests (1 HTML, 1 pipelined CSS, 1 pipelined JavaScript), 1 dyno can handle roughly 300,000 page views a day, or 80 page views per second, with Unicorn.
Let's say 1 user views 1 page every 5 seconds, and your application manages to respond in 300ms; technically, you could have roughly 400 concurrent users on 1 dyno.
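As a back-of-envelope check, that arithmetic looks like this (the 80 page views/second and 5-second figures come from the estimate above, not from measurement):

  requests_per_day   = 1_000_000                              # claimed Unicorn capacity per dyno
  requests_per_page  = 3                                      # html + css + js, media on S3
  page_views_per_day = requests_per_day / requests_per_page   # => 333_333, roughly 300,000

  dyno_page_views_per_sec = 80                                # figure quoted above
  user_page_views_per_sec = 1.0 / 5                           # one page every 5 seconds per user

  concurrent_users = (dyno_page_views_per_sec / user_page_views_per_sec).round
  # => 400 theoretical concurrent users on 1 dyno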
In practice, though, with our application (which is quite heavy), 1 dyno could only handle about 1/10 of that, around 50 concurrent users.
Hope this helps!
Related
I have a Tinder-style app that allows users to rate events. After a user rates an event, a background Resque job runs that re-ranks the other events based on the user's feedback.
This background job takes about 10 seconds and it runs about 20 times a minute per user.
Using a simple example: if I have 10 users using the app at any given time, and I never want a job to be waiting, what's the optimal way to do this?
I'm confused about Dynos, resque pools, and redis connections. Can someone help me understand the difference? Is there a way to calculate this?
Not sure you're asking the right question. Your real question is "how can I get better performance?" Not "how many dynos?" Just adding dynos won't necessarily give you better performance. More dynos give you more memory...so if your app is running slowly because you're running out of available memory (i.e. you're running on swap), then more dynos could be the answer. If those jobs take 10 seconds each to run, though...memory probably isn't your actual problem. If you want to monitor your memory usage, check out a visualization tool like New Relic.
There are a lot of approaches to solving your problem, but I would start with the code that you wrote. Posting some code on SO might help us understand why that job takes 10 seconds (post some code!). 10 seconds is a long time, so optimizing the queries inside that job would almost surely help.
Another piece of low-hanging fruit: switch from Resque to Sidekiq for your background jobs. It's really easy to use, you'll use less memory, and you should see an instant bump in performance.
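For instance, a Resque job like your ranking job could be ported to Sidekiq with very little change; the class, queue, and file names here are made up for illustration, and this assumes the sidekiq gem is in your Gemfile:

  # app/workers/rerank_events_worker.rb (hypothetical)
  class RerankEventsWorker
    include Sidekiq::Worker
    sidekiq_options queue: :ranking

    def perform(user_id)
      # ... the same re-ranking logic your Resque job runs today ...
    end
  end

  # Enqueue it after a user rates an event:
  RerankEventsWorker.perform_async(current_user.id)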
Dynos: These are individual virtual/physical servers. Think of them as being the same as EC2 instances.
Redis Connections: Individual connections to the Redis Instance.
Resque Pool: A gem that allows you to run workers concurrently on the same dyno/instance.
First of all, it’s worth looking for ways in which you can improve the performance of the job itself. You might be able to get it below ten seconds by using low level model caching or optimizing your algorithm.
In terms of working out how many workers you would need, take the number of runs per minute per user (20), times the number of seconds each run takes (10), times the number of users (10). That gives you the number of seconds of background work generated per minute: 20 * 10 * 10 = 2,000. Divide that by 60 and you have the number of worker-minutes needed per minute: 33.3. So if you had 34 workers, and these numbers stayed consistent, they should be able to keep on top of things.
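The same calculation in Ruby, using the numbers from the question:

  runs_per_minute_per_user = 20
  seconds_per_run          = 10
  concurrent_users         = 10

  # Seconds of background work generated every minute
  work_seconds_per_minute = runs_per_minute_per_user * seconds_per_run * concurrent_users  # => 2000

  # Each worker can do 60 seconds of work per minute
  workers_needed = (work_seconds_per_minute / 60.0).ceil   # => 34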
That said, you shouldn't be in a position where you need to run 34 or more workers just to rank events for 10 concurrent users. That's going to get expensive very quickly.
Optimise your algorithm, try to add more caching, and give Sidekiq a try too. In my experience, Sidekiq can process a queue up to 10 times faster than Resque. It depends on what your job is doing and how you use each tool, but it's worth checking out. See Sidekiq vs Resque.
Re-ranking other events is a bad idea.
You should consider adding total_points and average_points columns to the events table and letting the ranks be decided by ORDER BY queries, like this:
class Event < ActiveRecord::Base
  has_many :feedbacks

  # Highest-scoring events come first
  scope :rank_by_total,   -> { order(total_points: :desc) }
  scope :rank_by_average, -> { order(average_points: :desc) }
end

class Feedback < ActiveRecord::Base
  belongs_to :event

  after_create :update_points

  private

  # Keep the event's cached totals in sync with its feedbacks
  def update_points
    event.update(
      total_points:   event.feedbacks.sum(:points),
      average_points: event.feedbacks.average(:points)
    )
  end
end
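The ranked lists can then be read straight from the database whenever they're needed, with no background job involved:

  # Top 10 events by total points
  Event.rank_by_total.limit(10)

  # Top 10 events by average rating
  Event.rank_by_average.limit(10)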
So, how many workers/dynos do you need?
You don't need to worry about dynos or workers for this problem. No matter how many dynos with higher processing power you use, your current solution will take a good amount of time once your events table becomes huge. So try changing your solution the way I have described.
I'm about to release an iOS app, and deploy its backend (a Rails backend that serves the iOS app) to Heroku.
I have very little knowledge when it comes to the practical price you will pay based on traffic, etc. This link (http://notes.ericjiang.com/posts/881) states... Nowadays, especially with faster code and faster computers, a standard 512MB dyno can power websites with tens of thousands of hits per hour.
I'm trying to get a rough estimate of how much running my backend on Heroku could cost me. What's the best way to figure this out? The pricing is all very straightforward; it basically just comes down to how many dynos I'm going to need.
If I get 5 beta testers to run my iOS app for a 10 minute window, can I extrapolate some statistics as to how much my backend is being used? Is it the 'hits' that matter, or the 'data' transferred, or the 'time' the backend is actively doing something, like queueing up some resulting data?
Is there a formula to figure it out? Let's say a user averages 10 hits per minute, and I constantly have an average of 5,000 users. That would be 3 million hits per hour. What exactly should I be looking for in trying to determine accurate pricing for my first backend?
While Heroku does have some limits surrounding bandwidth (not requests), for the most part your cost is close to fixed.
Monthly pricing is typically made up from a combination of:
Dynos
Databases
Addons
Premium support
Heroku provides a price calculator on its website. Further, standard (non-hobby) dynos and up include metrics around CPU and memory usage.
My suggestion if you're just starting out? Start with one web dyno and a Postgres database. Beta test your app and check your metrics. A Rails app on a single Standard 1X dyno can handle a reasonable amount of traffic (depending on what else it might be doing) and if you need to add more dynos it's only a command line interface away.
Hope that helps.
I've got a Heroku deployment of my Rails 4 app and it's proving to be extremely slow. I'm not sure if my location is a factor, as I'm based in Australia.
I've got the New Relic addon, and below is the problem that I'm seeing.
Category            Segment    % Time   Avg calls   Avg Time (ms)
View layouts/users  Template   98.4     1.0         16,800
Based on this breakdown, I see that the layouts/users template is the problem for performance (which is 16.8 seconds!).
Is there a good way to profile this to find out exactly what functions are causing this problem and what are the best way to fix those?
Also, another important thing to note is that when I go to the map report, it shows an End User time of 19.5 seconds, which accounts for a lot of the total.
When an app on Heroku has only one web dyno and that dyno doesn't receive any traffic for 1 hour, the dyno goes to sleep.
When someone accesses the app, the dyno manager will automatically wake up the web dyno to run the web process type. This causes a delay for that first request.
Are you noticing similar behaviour?
I'm looking for some advice here. My school's student section registration process is online and involves around 6,000 students.
Seating is assigned on a first-come, first-served basis. Every year they open the site at noon, and floods of people get on and try to register as fast as possible to get good seats. Every year, without fail, the server crashes and everyone is mad.
After several years of being frustrated myself, I've offered to redo their registration system.
My plan is to rewrite it in ruby on rails, and use heroku for hosting.
Does a heroku dyno only handle one request at a time?
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
Does a heroku dyno only handle one request at a time?
Yes. Heroku dynos are single threaded.
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
This depends on how fast your page loads. For argument's sake, let's pretend it takes 2s per page request (as per Google Analytics' recommendation) and you need to handle 6,000 users x 5 page views / 30 minutes = 1,000 page views per minute.
At 2s per page load, a single dyno would serve 30 page views per minute. At 50 dynos, this would be 1,500 page views per minute. That would let you exceed your overall goal and leave some room for error, but if all 6,000 users hit the page at once, then a single Heroku app may not be able to keep up, depending on your timeout. You would need to implement a user queue system - explained below.
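A quick sanity check of those numbers, ignoring the initial stampede:

  users            = 6_000
  views_per_user   = 5
  window_minutes   = 30
  seconds_per_view = 2.0   # assumed page load time from above

  target_views_per_minute = users * views_per_user / window_minutes   # => 1000
  views_per_dyno_per_min  = 60 / seconds_per_view                     # => 30.0
  dynos_needed            = (target_views_per_minute / views_per_dyno_per_min).ceil
  # => 34 dynos to keep up with the average rate, so 50 leaves some headroom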
Any helpful strategies or tips you can give me before I dive into this project?
All that said, the 2s load time may vary depending on the assets your page needs to load, how much it interacts with the database, its queries, caching, etc. Your pages could also potentially serve much faster.
You also need to worry about the initial hit of all the users. This could be taken care of via a first-come, first-served queue system - similar to that used by Ticketmaster, if you've ever used their site. This could be accomplished via AWS SQS or your preferred queue system.
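As a rough illustration of that waiting-room idea, here is a minimal sketch using a Redis sorted set rather than SQS; the class and key names are made up for the example:

  require "redis"

  class WaitingRoom
    KEY = "registration:waiting_room"

    def initialize(redis = Redis.new)
      @redis = redis
    end

    # Add a user when they arrive; the score is arrival time, so order is FIFO.
    def join(user_id)
      @redis.zadd(KEY, Time.now.to_f, user_id)
    end

    # 0-based position in line (nil if the user isn't queued).
    def position(user_id)
      @redis.zrank(KEY, user_id)
    end

    # Let the next batch of users through, e.g. from a scheduled job.
    def admit!(count)
      ids = @redis.zrange(KEY, 0, count - 1)
      @redis.zrem(KEY, ids) unless ids.empty?
      ids
    end
  end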
With a user queue and caching of your assets and common database queries, you should be able to accomplish this with 50 or fewer dynos.
EDIT: I'm taking your word for it that Heroku will run 50 web dynos. They show 24 as max on their pricing page, but I cannot find any info one way or another.
Does a heroku dyno only handle one request at a time?
It depends on the web server you use (https://devcenter.heroku.com/articles/dynos#dynos-and-requests). If you want more concurrency within a dyno, I'd suggest taking a look at something like Puma.
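For example, a typical config/puma.rb on a Heroku dyno might look roughly like this; the worker and thread counts are illustrative and should be tuned against the dyno's memory:

  # config/puma.rb
  workers Integer(ENV.fetch("WEB_CONCURRENCY", "2"))            # worker processes per dyno
  threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", "5"))
  threads threads_count, threads_count                          # min/max threads per worker

  preload_app!                                                  # load the app before forking workers

  port        ENV.fetch("PORT", "3000")
  environment ENV.fetch("RACK_ENV", "development")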
Heroku scales up to 50 dynos. Will that be enough to handle around 6,000 users with about 5 pageviews per transaction in a short amount of time, say a half hour?
Any helpful strategies or tips you can give me before I dive into this project?
You can have more than 50 dynos. A specific answer for your app is going to be way better than a guess or a generalization. Run a load test against your site (e.g., using Blitz) and find out the real numbers. Costs for add-ons are pro-rated per second, so you only pay for the period you have them installed - just make sure you uninstall or downgrade Blitz once you've finished your test.
I have a Rails 3 application hosted on Heroku. It has a pretty common configuration: a client-facing part of the application, say www.myapplication.com, and an admin part, admin.myapplication.com.
I want the client-facing part of my application to be fast, and I don't really care how fast my admin module is. What I do care about is that usage of the admin site does not slow down the client-facing part of the application.
Ideally, the client side of the app would have 3 dedicated dynos, and the admin side would have 1 dedicated dyno.
Does anyone have any idea on the best way to accomplish this?
Thanks!
If you split the applications, you're going to have to share the database connections between the two apps. To be honest, I'd just keep it as one single app and give it 4 dynos :)
Also, dynos don't increase performance; they increase throughput, so you're capable of dealing with more requests a second.
For example,
Roughly: if a typical page response is 100ms, 1 dyno could process 10 requests a second. If you only have a single dyno and your app suddenly receives more than 10 requests per second, then the excess requests will be queued until the dyno is freed up to process them. Also, requests taking longer than 30s will be timed out.
If you add a second dyno, requests will be shared between the 2 dynos, so you'd now be able to process 20 requests a second (in an ideal world), and so on as you add more dynos.
And remember, a dyno is single threaded, so if it's doing anything (rendering a page, building a PDF, even receiving an uploaded image) then it's busy and unable to process further requests until it's finished, and if you don't have any more dynos, requests will be queued.
My advice is to split your application into its logical parts. Having a separate application for the admin interface is a good thing.
It does not have to be on the same domain as the main application. It could have a global client IP restriction or just a simple global Basic Auth.
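For instance, a blanket HTTP Basic Auth on the admin app could be as small as the following; the environment variable names are just placeholders:

  # app/controllers/application_controller.rb (admin app)
  class ApplicationController < ActionController::Base
    # Require a username/password for every request to the admin app
    http_basic_authenticate_with name:     ENV.fetch("ADMIN_USER", "admin"),
                                 password: ENV.fetch("ADMIN_PASSWORD", "change-me")
  end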
Why complicate things by stuffing two things into one application? This also lets you experiment more with the admin part and redeploy it without affecting your users.