Let's say I've sent a GET request to some action in some controller in Rails,
and in that action I'm sending requests to fetch web pages from another server,
for example:
require 'open-uri'
open("http://example.com/myexample.xml")
When I call this with localhost as the host, the site requests itself, so the server ends up in a deadlock state and stops responding.
Any ideas how to fetch a page from localhost without the requests queueing up on the main thread?
The same problem happens when the main thread sleeps or gets busy processing a request and another request comes in: it waits until the first request is finished.
Any solutions for that?
You can run another server instance:
rails s # http://localhost:3000
rails s -p 3001 # http://localhost:3001
Then you can send requests from localhost:3001 to localhost:3000, or vice versa.
I prefer to use Unicorn as the second server:
rails s # http://localhost:3000
unicorn # http://localhost:8080
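For instance, with the second instance on port 3001, the action on port 3000 can fetch the page from it instead of from itself (a minimal sketch; the action name and path are illustrative):
require 'open-uri'

def show
  # Requesting the *other* instance means the process serving this
  # request never waits on itself, so there is no deadlock.
  @xml = open('http://localhost:3001/myexample.xml').read
end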
I am trying to call an API (from a controller), store its JSON response in a variable, and then access parts of that variable's value at will (in my view). The following code hangs:
in Controller:
parsed_response = JSON.parse(HTTP.get('http://localhost:3000/api/v2/storefront/products'))
@products = parsed_response['data']
in View:
<%= @products %>
The above renders nothing, hangs a process, and I have to kill -9 to shut down the server. What am I doing wrong here? Is this somehow making the HTTP.get request more than once in rapid succession, thus causing the process to hang? Is it to do with #parse and #get both being called on one line?
(I'm using gem 'http' here)
My guess is that your server (Puma, Unicorn, ...) accepts a limited number of parallel HTTP connections.
Suppose your app accepts only one connection at a time:
You request localhost:3000/page_1, so you occupy that connection. No one else can connect to your server until this request is over.
Then localhost:3000/page_1 wants to connect to localhost:3000/page_2, and it has to wait until the active connection ends. Since it cannot connect, it waits... until it times out or until you kill your server.
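One way out is to let the server accept more than one connection at a time. A minimal sketch for Puma (the worker and thread counts are illustrative):
# config/puma.rb
# With more than one thread (or worker), the inner request can be
# served while the outer request is still waiting on it.
workers 2
threads 1, 5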
I deployed a Rails app with nginx and Passenger. There is a long-running Rails action, something like this:
def handler
  sleep 100
  respond_to do |format|
    format.json { render :json => { :success => true } }
  end
end
However, the nginx error log prints this message:
Couldn't forward the HTTP response back to the HTTP client: It seems the user clicked on the 'Stop' button in his browser.
Obviously, this error isn't caused by clicking the stop button. Maybe it's an nginx timeout, a Rails timeout, or a Passenger timeout. How can I solve this problem? Is it possible to solve it via a configuration file? Any help is appreciated.
Nginx isn't getting a response from Passenger in time.
You can increase the timeout in the nginx config (for example nginx's proxy_read_timeout when proxying to the app, or passenger_max_request_time if you're on Passenger Enterprise).
The standard way to handle this would be to kick the sleep 100 off to a background job (DelayedJob, Resque, etc. — I personally prefer Resque as my go-to, but DelayedJob is easy to set up, and there is an almost infinite number of pre-built options available).
Then you would return a 202, which means "Accepted" and is used to indicate that processing of the request is not complete but the server did receive it.
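For example, a minimal sketch with Resque (the job class and queue name are illustrative):
class SlowHandlerJob
  @queue = :slow_handlers

  # Runs in a Resque worker process, so web processes are never blocked.
  def self.perform
    sleep 100
  end
end

# The controller action now returns immediately:
def handler
  Resque.enqueue(SlowHandlerJob)
  head :accepted # 202: request received, work continues in the background
end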
If I run rails server in the production environment with rails server -e production, Rails runs as a single-threaded application.
So will it respond to one HTTP request at a time, or to several?
It depends on which server you are using to run the Rails application.
For instance, if you are using WEBrick (which is single-threaded by default), you will handle one request at a time; but if you use the Passenger server, it will create instances of your application (the number is configurable), and then you can handle multiple requests at a time.
Since it is single-threaded, one request can only enter the action after the previous request finishes. The server can still accept multiple connections at a time, but it processes them one by one.
We have a Rails app that we run on Unicorn (2 workers) and nginx. We want to integrate a 3rd-party API where processing a single request takes between 1 and 20 seconds. If we simply create a new controller that proxies to that service, the entire app suffers: it takes only 2 people making requests to that service via our API, and for 20 seconds the rest of the users can't access the rest of our app.
We're thinking about two solutions:
1. Create a separate node.js server that will do all of the requests to the 3rd-party API. We would only use Rails for authentication/authorization in this case, and we would redirect the requests to node via nginx using the X-Accel-Redirect header (as described here: http://blog.bitbucket.org/2012/08/24/segregating-services/).
2. Replace Unicorn with Thin or Rainbows! and keep the proxying in our Rails app, which could then, presumably, handle many more concurrent connections.
Which solution would we be better off with? Or is there something else we could do?
I personally feel that node's event loop is better suited for the job here, because in option 2 we would still be blocking many threads waiting for HTTP requests to finish, while in option 1 we could be serving more requests while waiting for the slow ones to finish.
Thanks!
We've been using the X-Accel-Redirect solution in production for a while now and it's working great.
In nginx config under server, we have entries for external services (written in node.js in our case), e.g.
server {
  ...
  location ^~ /some-service {
    internal;
    rewrite ^/some-service/(.*)$ /$1 break;
    proxy_pass http://location-of-some-service:5000;
  }
}
In Rails we authenticate and authorize the requests, and when we want to pass one on to the other service, in the controller we do something like
headers['X-Accel-Redirect'] = '/some-service'
render :nothing => true
Now Rails is done with processing the request and hands it back to nginx. Nginx sees the X-Accel-Redirect header and replays the request to the new URL, /some-service, which we configured to proxy to our node.js service. Unicorn and Rails can now process new requests even while node.js + nginx is still processing that original request.
This way we're using Rails as our main entry point and gatekeeper of our application - that's where authentication and authorization happens. But we were able to move a lot of functionality into these smaller, standalone node.js services when that's more appropriate.
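A minimal sketch of such a gatekeeper controller (the controller name and auth filter are illustrative, not from our actual setup):
class GatekeeperController < ApplicationController
  before_filter :authenticate_user! # e.g. Devise; any auth check works here

  def slow_endpoint
    # Rails is finished after this; nginx replays the request to
    # /some-service, which proxies to the node.js service (see config above).
    headers['X-Accel-Redirect'] = '/some-service'
    render :nothing => true
  end
end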
You can use EventMachine in your existing Rails app, which would mean much less rewriting. Instead of making a net/http request to the API, you would make an EM::HttpRequest to the API and add a callback. This is similar to the node.js option but does not require a separate server, IMO.
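A minimal sketch using the em-http-request gem (the URL is illustrative, and it assumes an EventMachine reactor is already running, e.g. under Thin):
require 'em-http-request'

http = EventMachine::HttpRequest.new('http://third-party-api.example.com/slow').get
http.callback do
  # Fires when the response arrives; other requests were served in the meantime.
  Rails.logger.info "API responded with #{http.response_header.status}"
end
http.errback do
  Rails.logger.warn "API request failed"
end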
I have a single-threaded Rails app running on thin in single-threaded mode on Heroku Cedar.
While I do a huge POST request (a file upload) that takes more than a minute, I can do other GET requests at the same time.
Heroku support assures me that their routing layer is not storing the request and then sending it all at once (which is the behavior of many proxies, such as nginx). They insist that my app is handling concurrent requests.
What's going on here?
Thin is built on top of EventMachine, which provides event-based IO.
This means that Thin receives your POST request asynchronously, serving GET requests in the meantime. Once the POST data is fully uploaded, Thin passes it on to Rails (where it is processed synchronously and blocks other requests until finished).