I'm contemplating writing a web application with Rails. Each request made by the user will depend on an external API being called. This external API can randomly be very slow (2-3 seconds), and so obviously this would impact an individual request.
During this time when the code is waiting for the external API to return, will further user requests be blocked?
Just for further clarification as there seems to be some confusion, this is the model I'm anticipating:
Alice makes request to my web app. To fulfill this, a call to API server A is made. API server A is slow and takes 3 seconds to complete.
During this wait time when the Rails app is calling API server A, Bob makes a request which has to make a request to API server B.
Is the Ruby (1.9.3) interpreter (or something in the Rails 3.x framework) going to block Bob's request, requiring him to wait until Alice's request is done?
If you use only a single single-threaded, non-evented server (or use an evented server without evented I/O), yes. Among other solutions, using Thin with EM-Synchrony will avoid this.
Elaborating, based on your update:
No, neither Ruby nor Rails is going to cause your app to block. You left out the part that will, though: the web server. You either need multiple processes, multiple threads, or an evented server coupled with doing your web service requests with an evented I/O library.
#alexd described using multiple processes. I, personally, favor an evented server because I don't need to know/guess ahead of time how many concurrent requests I might have (or use something that spins up processes based on load). A single nginx process fronting a single thin process can serve tons of parallel requests.
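For illustration, a minimal EM-Synchrony sketch (the URL is a placeholder): the HTTP call reads synchronously in your code, but only the current fiber waits, leaving the reactor free to service other connections in the meantime.

require 'em-synchrony'
require 'em-synchrony/em-http'

EM.synchrony do
  # looks blocking, but only this fiber pauses; the reactor keeps running
  page = EventMachine::HttpRequest.new('http://example.com/').get
  puts page.response
  EM.stop
end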
The answer to your question depends on the server your Rails application is running on. What are you using right now? Thin? Unicorn? Apache+Passenger?
I wholeheartedly recommend Unicorn for your situation -- it makes it very easy to run multiple server processes in parallel, and you can configure the number of parallel processes simply by changing a number in a configuration file. While one Unicorn worker is handling Alice's high-latency request, another Unicorn worker can be using your free CPU cycles to handle Bob's request.
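A minimal unicorn.rb sketch of that configuration; the numbers are illustrative and should be tuned to your hardware:

# config/unicorn.rb
worker_processes 4   # the one number to change; each worker handles one request at a time
timeout 30           # kill a worker stuck on a request longer than this
preload_app true     # load the app once in the master, then fork workers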
Most likely, yes. There are ways around this, obviously, but none of them are easy.
The better question is, why do you need to hit the external API on every request? Why not implement a cache layer between your Rails app and the external API and use that for the majority of requests?
This way, with some custom logic for expiring the cache, you'll have a snappy Rails app and still be able to leverage the external API service.
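A hedged sketch of such a cache layer, assuming a Rails cache store is configured; ExternalApi and the key format are made-up names:

def fetch_remote_data(id)
  # serve from cache while fresh; call the external API only on a miss
  Rails.cache.fetch("external_api/#{id}", expires_in: 10.minutes) do
    ExternalApi.get(id)  # hypothetical client for the slow external service
  end
end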
Related
I'm trying to scale up an app server to process over 20,000 requests per minute.
When I stress-test the requests, most requests are easily handling 20,000 RPM or more.
But, requests that need to make an external HTTP request (eg, Facebook Login) bring the server down to a crawl (3,000 RPM).
I conceptually understand the limitations of my current environment -- 3 load-balanced servers with 4 unicorn workers per server can only handle 12 requests at a time, even if all of them are waiting on HTTP requests.
What are my options for scaling this better? I'd like to handle many more connections at once.
Possible solutions as I understand it:
Brute force: use more unicorn workers (ie, more RAM) and more servers.
Push all the blocking operations into background/worker processes to free up the web processes. Clients will need to poll periodically to find when their request has completed.
Move to Puma instead of Unicorn (and probably to Rubinius from MRI), so that I can use threads instead of processes -- which may(??) improve memory usage per connection, and therefore allow the number of workers to be increased.
Fundamentally, what I'm looking for is: Is there a better way to increase the number of blocked/queued requests a single worker can handle so that I can increase the number of connections per server?
For example, I've heard discussion of using Thin with EventMachine. Does this open up the possibility of a Rails worker that can put down the web request it's currently working on (because that one is waiting on an external server) and pick up another request in the meantime? If so, is this a worthwhile avenue to pursue for performance compared with Unicorn and Puma? (Does it strongly depend on the runtime activities of the app?)
Unicorn is a single-threaded, multi-process synchronous app server. It's not a good match for this kind of processing.
It sounds like your application is I/O bound. This argues for an event-oriented daemon to process your requests.
I'd recommend trying EventMachine together with the em-http-request and em-http-server libraries.
This will allow you to service both incoming requests to the http server and outgoing HTTP service calls asynchronously.
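For example, a minimal em-http-request sketch (the URL is a placeholder); the reactor stays free to accept other work until the callback fires:

require 'eventmachine'
require 'em-http-request'

EM.run do
  http = EventMachine::HttpRequest.new('http://example.com/').get
  http.callback { puts http.response; EM.stop }  # runs when the response arrives
  http.errback  { puts 'request failed'; EM.stop }
end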
Update:
Read "Indicate to an ajax process that the delayed job has completed" before if you have the same problem. Thanks Gene.
I have a problem with concurrency. I have a controller scraping a few web sites, but each call to my controller needs about 4-5 seconds to respond.
So if I call it 2 (or more) times in a row, the second call needs to wait for the first one to finish before starting.
So how can I fix this problem in my controller? Maybe with something like EventMachine?
Update & Example:
application_controller.rb
def func1
  i = 0
  while i <= 2
    puts "func1 at: #{Time.now}"
    sleep(2)
    i = i + 1
  end
end

def func2
  j = 0
  while j <= 2
    puts "func2 at: #{Time.now}"
    sleep(1)
    j = j + 1
  end
end
whatever_controller.rb
puts ">>>>>>>> Started At #{Time.now}"
func1()
func2()
puts "End at #{Time.now}"
So now I need to request http://myawesome.app/whatever several times at the same time, from the same user/browser/etc.
I tried Heroku (and locally) with Unicorn, but without success; this is my setup:
unicorn.rb http://pastebin.com/QL0wdGx0
Procfile http://pastebin.com/RrTtNWJZ
Heroku setup https://www.dropbox.com/s/wxwr5v4p61524tv/Screenshot%202014-02-20%2010.33.16.png
Requirements:
I need a RESTful solution. This is an API, so I need to respond with JSON.
More info:
Right now I have 2 cloud servers running:
Heroku with Unicorn
Engine Yard Cloud with Nginx + Passenger
You're probably using WEBrick in development mode. WEBrick only handles one request at a time.
You have several options; many Ruby web servers exist that can handle concurrent requests.
Here are a few of them.
Thin
Thin was originally based on Mongrel and uses EventMachine to handle multiple concurrent connections.
Unicorn
Unicorn uses a master process that dispatches requests to web workers; 4 workers means 4 possible concurrent requests.
Puma
Puma is a relatively new Ruby server; its headline feature is that it handles concurrent requests in threads, so make sure your code is thread-safe! (A minimal config sketch follows after this list.)
Passenger
Passenger is a Ruby server that runs inside Nginx or Apache; it's great for both production and development.
Others
These are just a few options; many others exist, but I think the ones above are the most used today.
To use any of these servers, check their instructions; they are generally available in their GitHub READMEs.
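As mentioned under Puma above, here is a minimal config/puma.rb sketch (values illustrative) showing its process and thread knobs:

# config/puma.rb
workers 2       # forked processes, as with Unicorn
threads 1, 16   # min/max threads per worker; this is why thread safety matters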
For any controller action with a long response time, the delayed_job gem is a fine way to go. While it is often used for bulk mailing, it works just as well for any long-running task.
Your controller starts the delayed job and responds immediately with a page that has a placeholder - usually a graphic with a progress indicator - and Ajax or a timed reload that updates the page with the full information when it's available. Some information on how to approach this is in this SO article.
Not mentioned in the article is that you can use redis or some other memory cache to store the results rather than the main database.
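A hedged sketch of that pattern, using delayed_job's convention of enqueuing any object that responds to #perform; ScrapeJob and the :pending view are made-up names:

ScrapeJob = Struct.new(:url) do
  def perform
    # the long-running work happens here, in a worker, outside the request cycle
  end
end

class ScrapesController < ApplicationController
  def create
    Delayed::Job.enqueue ScrapeJob.new(params[:url])  # returns immediately
    render :pending  # placeholder page; Ajax polls until the result is ready
  end
end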
Answers above are part of the solution: you need a server environment that can properly dispatch concurrent requests to separate workers; unicorn or passenger can both work by creating workers in separate processes or threads. This allows many workers to sit around waiting while not blocking other incoming requests.
If you are building a typical bot whose main job is to get content from other sources, these solutions may be OK. But if what you need is a simple controller that can accept hundreds of concurrent requests, all of which send independent requests to other servers, you will need to manage threads or processes yourself. Your goal is to have many workers waiting to do a simple job, and one or more masters whose job it is to send requests and then be there to receive the responses. Ruby's Thread class is simple, and works well for cases like this with Ruby 2.x or 1.9.3.
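A minimal sketch of that fan-out with plain Ruby threads (the URLs are placeholders):

require 'net/http'

urls = %w[http://example.com/a http://example.com/b]
threads = urls.map do |url|
  Thread.new do
    Net::HTTP.get_response(URI(url))  # each thread blocks on I/O independently
  end
end
responses = threads.map(&:value)  # join every thread and collect its response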
You would need to provide more detail about what you need to do for help getting to any more specific solution.
Try something like Unicorn, as it handles concurrency via workers. Something else to consider, if there's a lot of work to be done per request, is spinning up a delayed_job per request.
The one issue with delayed job is that the response won't be synchronous, meaning it won't return to the user's browser.
However, you could have the delayed job save its responses to a table in the DB. Then you can query that table for all requests and their related responses.
What Ruby version are you using?
Ruby & Webserver
Ruby
If it's a simple application, I would recommend the following: try Rubinius (rbx) or JRuby, as they are better at concurrency. They have the drawback of not being mainline Ruby, so some extensions won't work, but for a simple app you should be fine.
Webserver
Use Puma, or Unicorn if you have the patience to set it up.
If your app is hitting the API service
You indicate that the Global Interpreter Lock is killing you when you are scraping other sites (presumably ones that allow scraping). If this is the case, something like Sidekiq or delayed_job should be used, but with caution: make the jobs idempotent, i.e. safe to run multiple times, because they may be retried. Also, if you start hitting a website repeatedly, you will run into its rate limit pretty quickly, e.g. Twitter limits you to 150 requests per hour. So use background jobs with caution.
If you're the one serving the data
However, reading your question, it sounds like your controller is the API and the lock is caused by users hitting it.
If this is the case, you should use dalli + memcached to serve your data. This way you won't be I/O bound by the SQL lookup, as memcached is memory-based. MEMORY SPEED > I/O SPEED
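A minimal dalli sketch, assuming memcached is running on localhost; the key, TTL, and load_results_from_sql helper are illustrative:

require 'dalli'

cache = Dalli::Client.new('localhost:11211')
data = cache.fetch('api:latest_results', 300) do
  load_results_from_sql  # hypothetical slow lookup; runs only on a cache miss
end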
I have a backend Rails server with Sidekiq, which serves as an API server. The app works as follows:
My Rails server receives many requests from incoming API clients at the same time.
For each of these requests, the Rails server will allocate jobs to a Sidekiq server. The Sidekiq server makes requests to external APIs (such as Facebook) to get data, analyzes it, and returns a result to the Rails server.
For example, if I receive 10 incoming requests from my API clients, I need to make 10 requests to external API servers, get the data, and process it.
My challenge is to make my app respond to incoming requests concurrently. That is, for each incoming request, my app should work in parallel: make calls to external APIs, get data, and return a result.
Now, I know that Puma can add concurrency to Rails app, while Sidekiq is multi-threaded.
My question is: Do I really need Sidekiq if I already have Puma? What would be the benefit of using both Puma and Sidekiq?
In particular, with Puma, I just invoke my external API calls, data processing etc. from my Rails app, and they will automatically be concurrent.
Yes, you probably do want to use Puma and Sidekiq. There are really two issues at play here.
Concurrency (as it seems you already know) is the number of web requests that can be handled simultaneously. Using an app server like Puma or Unicorn will definitely get you better concurrency than the default WEBrick server.
The other issue at play is the length of time that it takes your server to process a web request.
The reason these two things are related is that the number of requests per second your app can process is a function of both the average processing time per request and the number of worker processes accepting requests. Say your average response time is 100ms. Then a single web worker can process 10 requests per second. If you have 5 workers, you can handle 50 requests per second. If your average response time is 500ms, then you can handle 2 reqs/sec with a single worker, and 10 reqs/sec with 5 workers.
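That relationship written out as a quick calculation:

workers           = 5
avg_response_time = 0.5                          # seconds per request
throughput        = workers / avg_response_time  # => 10.0 requests per second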
Interacting with external APIs can be slow at times, and in the worst cases it can be very unreliable, with unresponsive servers on the remote end or network outages and slowdowns. Sidekiq is a great way to insulate your application (and your end users) from the possibility that the remote API is responding slowly. Imagine that the remote API is running slowly for some reason and that the average response time from it has slowed down to 2 seconds per request. In that case you'd only be able to handle 2.5 reqs/sec with 5 workers. With any more traffic than that, your end users might start to see long waits before any page on your app could respond, even pages that don't make remote API calls, because all of your web workers might be stuck waiting on the slow remote API. As traffic continues to increase, your users would start getting connection timeouts.
The idea with using Sidekiq is that you separate the time spent waiting on the external API from your web workers. You'd basically take the request for data from your user, pass it to Sidekiq, and then immediately return a response to the user that basically says "we're processing your request". Sidekiq can then pick up the job and make the external request. After it has the data it can save that data back into your application. Then you can use web sockets to push a notification to the user that the data is ready. Or even push the data directly to them and update the page accordingly. (You could also use polling to have the page continually asking "is it ready yet?", but that gets very inefficient very quickly.)
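A hedged sketch of that flow; FetchRemoteDataJob, RemoteApi, and ApiResult are made-up names standing in for your job, client, and storage:

class FetchRemoteDataJob
  include Sidekiq::Worker

  def perform(user_id, query)
    data = RemoteApi.fetch(query)                       # the slow external call
    ApiResult.create!(user_id: user_id, payload: data)  # saved for later pickup
    # then notify the user (web sockets) or let the page poll for it
  end
end

# In the controller: enqueue and respond right away
FetchRemoteDataJob.perform_async(current_user.id, params[:q])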
I hope this makes sense. Let me know if you have any questions.
Sidekiq, like Resque and Delayed Job, is designed to provide asynchronous job processing from a queue.
If you don't need jobs to be queued up and run asynchronously, there's no substantial benefit (or harm) to using Sidekiq.
If the tasks need to run synchronously (which it sounds like you might—it's not clear if clients are waiting for data or just requesting that jobs run), Sidekiq and its relatives are likely the wrong tool for the job. There is no guaranteed processing time when using Sidekiq or other queue-based solutions; jobs are pushed onto the end of the queue, however long that may be, and won't be processed until their turn comes up. If clients are waiting for data, they may time out long before your worker pool ever processes their jobs.
We have an existing API where a client asks our server for information that we have to get from another external server. When the external server takes a long time, say 10 seconds, it holds up a Rails passenger instance for that whole 10 seconds.
Is there some way to pass the rendering of our reply to delayed_job so that I can free up the Rails instance?
NOTE: Ideally, we would just update our API and reply to our API client that we are busy and to try back again in a few seconds to see if we are ready. However, there are already thousands of clients out there and changing them is not practical at this time.
The usual way to handle this is to queue up the job and return immediately, then poll or use some async notification framework like Pusher or Faye to update the remote client. You definitely cannot pass the connection to DJ as you describe. Another avenue you might investigate is using EventMachine to handle it, à la http://railstips.org/blog/archives/2011/05/04/eventmachine-and-passenger/. A third alternative would be to precache the data from the remote web service, but that avenue depends heavily on what you're doing (authorization, for example, is not something you could do there).
The basic bottom-line is that you're dealing with a bit of an architecture issue. If you absolutely have to talk to the remote service AND output the results in the request cycle, there's not a lot you can do about it short of changing to a more evented backend like EventMachine or Node.js.
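For illustration, a hedged sketch of the queue-then-poll pattern from the first paragraph; Lookup and RemoteLookupJob are made-up names:

class LookupsController < ApplicationController
  def create
    lookup = Lookup.create!(query: params[:query], status: 'pending')
    Delayed::Job.enqueue RemoteLookupJob.new(lookup.id)  # frees the Rails instance
    render json: { id: lookup.id, status: lookup.status }, status: :accepted
  end

  def show
    render json: Lookup.find(params[:id])  # clients poll until status changes
  end
end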
Hey guys, I have a program that uses Ajax to send a post to multiple social networks via their APIs, based on user form input. I was wondering: is this process (which doesn't take more than 2-3 seconds when I test it myself) worth daemonizing with something like BackgroundRB? In other words, were this program to be used by 100+ people, would the simple call to an action via Ajax slow the entire application down?
Yeah, I'd recommend using DelayedJob to accomplish this task. You want to avoid unnecessary HTTP requests to your app. With DelayedJob, the worker connects to your database and makes the third-party calls without initiating any HTTP requests to your app.
I wouldn't recommend BackgroundRB.
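A hedged one-line sketch of handing the posting off to delayed_job; PostPublisher and publish_to_networks stand in for your posting logic:

# delayed_job serializes the call and runs it in a worker process,
# so the Ajax request returns without waiting on the social-network APIs
PostPublisher.new(post_params).delay.publish_to_networks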
Short answer: you have to move this into the background; use delayed_job.
Longer answer:
The problem is that although it takes only 2-3 seconds, it completely locks the application server while it runs. So if you have, let's say, 5 Mongrel or Passenger app servers running, and 5 people trigger this action within a 2-3 second interval, no other requests will be able to be processed.
So while it's OK during development, moving it to the background is a must in production.
I wouldn't recommend BackgroundRB. For what you need, it seems delayed_job is the better fit.
You have a lot of solutions for that:
bj
delayed_job
resque