I'm currently considering moving to JRuby, and I'm still unsure how everything will work, but let's consider this hypothetical situation.
User 1 loads a page in my app which takes about 2.5 seconds to render, and about 500 ms into that request, user 2 tries to open a different page which takes 1 second to load.
If my estimate is correct, this is what would happen if you ran it in MRI with a single process:
User 1 waits 2.5 seconds for his page to load
User 2 waits 3 seconds for his page to load (2 seconds waiting for user 1's page to finish, plus 1 second for his own page to render)
Is my estimate correct?
And let's say that if I ran the same app under JRuby, this would happen:
User 1 waits 2.5 seconds for his page to load
User 2 waits 1 or more seconds but less than 3, depending on how much memory/CPU the request from user 1 takes
Is my other estimate correct, assuming of course that the code is thread-safe? If my estimate is incorrect, please correct me; if it is correct, do I need to make sure some config is set at the Rails app level, or should I be careful about anything else besides thread-safe code?
Update
I've just done a small JRuby POC app, used the warbler gem to build a war file, and deployed the war to a Tomcat web server. I don't think my estimate for JRuby was correct; this is what I observed:
User 1 waits 2.5 seconds for his page to load
User 2 waits 3 seconds
Which is identical to MRI. In terms of request processing, shouldn't JRuby handle these in parallel?
We're talking about hypothetical things (and assumptions) here.
If "loads a page in my app which takes about 2.5 seconds", then all users will keep loading this thing (concurrently), unless of course you do some caching or store the result after the first load for other users.
The difference is that in MRI, whenever Ruby code is executing (i.e. not waiting on IO such as a database query or an http:// fetch), 2 threads won't run concurrently, while in JRuby they will.
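To see this difference, here's a minimal sketch in plain Ruby (no framework assumed) that runs two CPU-bound threads; on MRI the GVL serializes them, so the wall-clock time is roughly the sum of both threads, while on JRuby it's roughly the time of one:

```ruby
# Two CPU-bound threads: MRI's GVL serializes the Ruby bytecode,
# so elapsed time is ~2x one thread; JRuby runs them truly in parallel.
def cpu_work(n)
  i = 0
  i += 1 while i < n
  i
end

start   = Time.now
threads = 2.times.map { Thread.new { cpu_work(5_000_000) } }
results = threads.map(&:value)
elapsed = Time.now - start

puts "both threads finished in #{elapsed.round(2)}s (results: #{results.inspect})"
```

Running this on MRI and then on JRuby should show the JRuby elapsed time close to half of MRI's, assuming at least two CPU cores are available.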
If you're seeing user 2 wait for 3 seconds on JRuby, it means that something is blocking multiple requests, e.g. there's a Mutex somewhere along the way (e.g. Rack::Lock).
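If Rack::Lock turns out to be the culprit, you can inspect the middleware stack with `rake middleware`; on Rails 3.x, enabling threadsafe mode removes it. A sketch, assuming Rails 3.x (the application module name is a placeholder and the exact option depends on your Rails version):

```ruby
# config/environments/production.rb (Rails 3.x sketch; "MyApp" is a placeholder)
MyApp::Application.configure do
  # Removes Rack::Lock from the middleware stack and lets the app
  # serve concurrent requests on a threaded server such as JRuby/Tomcat.
  config.threadsafe!
end
```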
Related
I've got a Heroku deployment of my Rails 4 app and it's proving to be extremely slow. I'm not sure if my location is a factor, as I'm based in Australia.
I've got the New Relic add-on, and below is the problem that I'm seeing.
Category   Segment                  % Time   Avg calls   Avg Time (ms)
View       layouts/users Template   98.4     1.0         16,800
Based on this breakdown, I see that the users layout is the performance problem (nearly 16.8 seconds!).
Is there a good way to profile this to find out exactly which functions are causing the problem, and what is the best way to fix them?
Also, another important thing to note: when I go to the map report, it shows an End User time of 19.5 seconds, which takes up a lot of the total.
When an app on Heroku has only one web dyno and that dyno doesn't receive any traffic in 1 hour, the dyno goes to sleep.
When someone accesses the app, the dyno manager will automatically wake up the web dyno to run the web process type. This causes a delay for the first request.
Are you noticing similar behaviour?
I've got a Rails app in development mode (3.1.12, Ruby 1.9.2) running on a Windows server via FastCGI (with Helicon Zoo).
If I wait a certain amount of time between page loads, the next page takes 4-6 seconds to load. After that, pages load at normal speed, regardless of which computer or browser is used.
Does the app/service shut down if nobody visits the page for a while?
What can I do to prevent this behaviour?
Edit: For the first request of the day, the development.log says it took 500 ms to load the page, which is 10 times faster... but it actually took 5 s to load...
I'm new to Heroku, and would like to have some idea of how to go about guesstimating the number of dynos that might be needed for a RoR app (I need to give some number to the customer).
The app is already running on the free 1 dyno. Here's some basic info about the app:
App is mainly database reads, with very little writes
Possibly heavy DB load, with queries involving distance calculations (from lat-long, using gmaps4rails)
From some basic testing with WAPT (eval version), it looks like a typical search request takes a min. ~1.3s, avg. ~2s, max. 4-5s
Again from WAPT testing, up to 20 concurrent users and observing the Heroku logs, I don't seem to be seeing any requests being queued
Other requests are largely static assets
How would I get some rough idea of the number of dynos needed, to handle X concurrent users, or how many concurrent users the single dyno can likely handle?
Update: Heroku changed their dyno pricing policy (https://www.heroku.com/pricing), so this information might not be correct anymore.
According to this article http://neilmiddleton-eu.herokuapp.com/getting-more-from-your-heroku-dynos/, if you use Unicorn, 1 dyno can handle 1 million requests per day (at 100 ms per request). So if you host all media on S3, 1 page view needs 3 requests (1 HTML, 1 pipelined CSS, 1 pipelined JavaScript), and 1 dyno can handle roughly 300,000 page views a day, or 80 page views per second with Unicorn.
Let's say 1 user views 1 page every 5 seconds, and your application manages to respond in 300 ms; technically, you will have roughly 400 concurrent users with 1 dyno.
But for our application (quite heavy), 1 dyno can only handle about 1/10 of that, around 50 concurrent users.
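The back-of-envelope math above can be sketched like this (the numbers are the assumptions quoted above, not measurements):

```ruby
# Capacity estimate per dyno, using the figures quoted above.
requests_per_day   = 1_000_000   # claimed Unicorn throughput at ~100 ms/request
requests_per_page  = 3           # 1 HTML + 1 pipelined CSS + 1 pipelined JS (media on S3)
page_views_per_day = requests_per_day / requests_per_page   # ~333,000

seconds_per_view   = 5           # 1 user views 1 page every 5 seconds
page_views_per_sec = 80          # peak page views/second claimed above
concurrent_users   = page_views_per_sec * seconds_per_view  # ~400

puts "#{page_views_per_day} page views/day, ~#{concurrent_users} concurrent users"
```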
Hope this helps!
My app is basic (1 dyno, 20 MB slug size) and some of the pages take too long to load at times. Using Firebug, I've observed that most of the time the pages load within 3-4 seconds, but sometimes it takes more than a minute for a page to load (both data points are with the browser cache cleared). The basic HTML response arrived within 500 ms; the main component of the time was downloading a 17 kB PNG image, for which the wait time (after sending the request) was more than a minute. I cannot understand why this would be the case.
I am using YSlow to analyze the entire page (it gave a B grade), and I think this has something to do with Heroku taking a long time to send images at times.
I have referred to the question - Why are my basic Heroku apps taking two seconds to load?
As suggested in the answers, I have put a simple cron task on Heroku that accesses the homepage every hour through a URI GET request.
What could I do to improve the speed?
I am considering the following things:
1. Move images to a CDN
2. Set an Expires header as described in http://upstre.am/blog/tag/heroku/
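For point 2, on Rails 3.1+ a far-future Expires/Cache-Control header for static assets can be set in the production config. A sketch, assuming Rails 3.1.x (exact option names vary by Rails version):

```ruby
# config/environments/production.rb (Rails 3.1.x sketch)
# Serve static assets with a one-year cache header so browsers
# and any CDN in front of Heroku can cache them.
config.serve_static_assets  = true
config.static_cache_control = "public, max-age=31536000"
```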
I have put a simple cron task in heroku that accesses the homepage every hour through a URI GET request.
From what you are describing, you are using a Heroku cron job to ping your app. Unfortunately this will not work; you would have to use an external ping service such as Pingdom.
Update: it seems like external ping services like Pingdom no longer work either...
Heroku idles dynos if they aren't used for more than 30 minutes, I believe. This means you'll need more than 1 web dyno if you want to keep the app active and ready to load at any time. Worker dynos are never idled.
Basically just set your app to 2 web dynos.
I'm trying delayed_job now, and have some questions.
From the http://github.com/collectiveidea/delayed_job page, I can see some information:
Workers can be running on any computer, as long as they have access to the database and their clock is in sync. Keep in mind that each worker will check the database at least every 5 seconds.
When I invoke rake jobs:work once, it will create ONE worker, right?
When a worker checks the database, will it read ALL new and failed tasks EACH TIME, and run them?
It says a worker will check the database every 5 seconds; can I make it 2 seconds?
When I create a worker (rake jobs:work) and there are already 10 tasks in the database, each taking 3 seconds, how many processes will DelayedJob create, and how many seconds are needed in total?
1. Yes.
2. Yes.
3. Delayed::Worker.sleep_delay = 2
4. 1 worker will work on each task in turn, passing or failing it before going on to the next: 30 seconds of processing, plus however long 9 sleep delays take (45 seconds by default), for the total time. As for processes: 1 worker is created, which is a process; zero or more other processes may be created, depending on what the job to run is.
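If you do change the poll interval, a common place to put the setting is an initializer. A sketch, where the file name is just a convention (not required by the gem):

```ruby
# config/initializers/delayed_job_config.rb (hypothetical file name)
# Poll the delayed_jobs table every 2 seconds instead of the default 5.
Delayed::Worker.sleep_delay = 2
```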