I have a fairly large application running as a Spree extension. The issue is that all requests are very slow, even locally. I see messages like "Waiting for localhost" or "Waiting for server" in my browser status bar for 3-4 seconds on every request before execution starts. The execution time logged in the log file looks fine, but the overall response time is poor because of this initial delay. Where should I start looking to improve this?
One possible root cause for this kind of problem is that the initial DNS name lookup is failing or timing out before eventually resolving. You can check whether this is the case using tcpdump (if that's available for your platform) or Wireshark. Look for traffic to and from your client host on port 53 and see if the name responses are coming back in a timely fashion.
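If tcpdump or Wireshark aren't handy, you can also time the lookup from Ruby itself. A minimal sketch using only the standard library (the hostname is a placeholder; use whatever name your browser or app is actually resolving):

require 'resolv'
require 'benchmark'

host = 'localhost'  # placeholder: the name your client is trying to resolve
seconds = Benchmark.realtime { Resolv.getaddress(host) }
puts "Resolving #{host} took #{(seconds * 1000).round} ms"

If that consistently takes several seconds, the delay is in name resolution rather than in your Rails code.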
If it turns out that this is the problem, then you need to make sure the client is configured so that the first resolver it tries knows about your server addresses (I'm guessing these are local LAN addresses that are failing). Different platforms have different ways of configuring this. A quick hack would be to put the address of your server in the client's hosts file to see if that fixes it.
Once you send in your request, you will see "waiting for host" right up until the Ruby work is done and the server starts sending a response. So if pretty much any processing work is slowing you down, you'd see this behaviour. What you want to do is look at the actions where you're seeing the delay and break them down into pieces to see which pieces are slow. If EVERYTHING is slow, then you need to look at the things that are common to every action: before filters, ApplicationController code, or something similar. What I do, when I'm just poking around to see what I need to fix, is put 'puts' statements in my code at different stages to print the current time; then I can see which stage is taking a long time.
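For example, something along these lines in a suspect action (the action, model, and method names here are just placeholders for whatever your code actually does):

def show
  puts "#{Time.now} - entering show"
  @product = Product.find(params[:id])      # stage 1: database lookup
  puts "#{Time.now} - after DB lookup"
  @related = @product.related_products      # stage 2: some other work you suspect is slow
  puts "#{Time.now} - after related products"
  # compare the printed timestamps in the server console to see which stage eats the time
end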
Environment:
Ruby: 2.1.2
Rails: 4.1.4
Heroku
In our Rails app hosted on Heroku, requests sometimes take a very long time to execute. It happens on 1% of requests or less, but we cannot figure out why.
We have the New Relic agent installed, and it says it is not request queuing; it is the transaction itself that takes all that time to execute.
However, transaction trace shows this:
(this same request usually takes only 100 ms to execute)
As far as I can tell, the time is being consumed before our controller gets invoked. It is spent in
Rack::MethodOverride#call
and that is what we cannot understand.
Also, most of the time (or perhaps always, we are not sure) this happens on POST requests sent by mobile devices. Could this have something to do with a slow connection? (Although the POST payload is very tiny.)
Has anyone experienced this? Any advice on how to keep exploring this issue is appreciated.
Thanks in advance!
Since the Ruby agent began to instrument middleware in version 3.9.0.229, we've seen this question arise for some users. One possible cause of the longer timings is that Rack::MethodOverride needs to examine the request body on POST in order to determine whether the POST parameters contain a method override. It calls Rack::Request#POST, which ends up reading in the entire request body.
This may be why you see that more time than expected is being spent in this middleware. Looking more deeply into how the POST body relates to the time spent in the middleware might be a fruitful avenue for investigation.
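If you want to confirm that reading the request body is where the time goes, here is a rough sketch of a tiny Rack middleware you could insert ahead of Rack::MethodOverride to time the body read on its own. The class name, file locations, and log message are made up for illustration; this is not part of Rack or the New Relic agent:

# app/middleware/body_read_timer.rb (hypothetical; any autoloaded path works)
class BodyReadTimer
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Time.now
    body = env["rack.input"].read      # force a full read, as MethodOverride's POST parsing would
    env["rack.input"].rewind           # rewind so downstream middleware can still read the body
    elapsed_ms = ((Time.now - started) * 1000).round
    Rails.logger.info "Request body read took #{elapsed_ms}ms (#{body.bytesize} bytes)"
    @app.call(env)
  end
end

# config/application.rb
config.middleware.insert_before Rack::MethodOverride, "BodyReadTimer"

If the slow requests show large body-read times here, that points at slow clients trickling in the POST body rather than at your application code.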
In case anyone is experiencing this:
We finally made the switch from Unicorn to Passenger, and this issue has been resolved:
https://github.com/phusion/passenger-ruby-heroku-demo
I am not sure, but the problem may have something to do with POST requests from slow clients. The Passenger/Nginx documentation says:
Request/response buffering - The included Nginx buffers requests and
responses, thus protecting your app against slow clients (e.g. mobile
devices on mobile networks) and improving performance.
So this may be the reason.
I'm new to RoR and am building a few beginner projects, but I'm unfamiliar with how errors surface in RoR. For instance, I am working on this project right here, even though the site I found it through warned it had a few errors (if you know of a program that shortens URLs based on a domain you own, that has NO errors, let me know and I'll use it). The problem I'm having is on the step:
rails server
It produces the result that the writer shows, then brings up a SECURITY WARNING. Underneath the security warning there are three timestamps with INFO WEBrick and INFO ruby. However, an hour later, it's still sitting there, and it hasn't returned me to the command prompt in the original code directory (i.e. C:\Location).
Is this an error, or is it supposed to sit there like this? I ask because from the article it seems like I could just move on to the next step (after thirty minutes I hit ENTER just to see what happens, but got no response), but unless I open up a new command prompt, I don't see how that's possible.
Have you made any requests?
After the Rails server starts, it will sit "forever" waiting to service client requests (e.g., from a browser). Under Windows, that command prompt won't be useful until the server is shut down, e.g., with a Ctrl-C.
You can either open a new command window, as you've done, or shut the server down and use the same window. It's worth noting that sometimes you'll need to restart the server, though much of the time you won't. Figuring out when, and under what circumstances, is deterministic, but occasionally confusing.
I've been helplessly observing this problem for a couple months now, and have decided this is my best shot.
I'm not sure what the cause of the problem is, but I can list some of the things I'm doing. I have an iOS app that uses AFNetworking to connect to a remote server hosted by Google App Engine using HTTP POST requests.
Now, everything works great, but sometimes, very sporadically and at random, I get failed requests. The activity indicator spins and spins for about a minute, and I get no feedback at the end, just a failed request. I check my server logs and I don't see any errors. After the failed request, I try again and it works fine. It works fine for the whole day. And then at some other random time the issue repeats itself, sometimes spinning for 10 seconds before failing, sometimes for a minute.
Generally, what could be the cause of this? Is it normal to have some connections fail randomly? Is it something on my end?
But the weird thing is that while the app is running on my iPhone, with the indicator spinning and the request trying to connect, I can try connecting from the iOS simulator and it works just fine. I try again on the iPhone, and it doesn't work.
If I close the app completely and start it again, then it works again. So it sounds like it may be a software issue rather than a connection issue, but then again I have no evidence or data whatsoever.
I know it's vague, but I'm hoping someone may have had a similar problem. Anything helps.
There is a known issue with instance start on GAE for Java. You can star the issue at http://code.google.com/p/googleappengine/issues/detail?id=7706.
The same problem was reported for Python, but it is not as severe there.
I think you should check the logging level you use on App Engine and monitor all your calls. Instance start usually takes more time, so you will be able to see how much time is spent on startup and whether it really is a timeout problem.
For the Java version you could try changing the log level to debug (java.util.logging, which App Engine Java uses, calls this level FINE):
.level = FINE
in your logging.properties file. It will give you more information about the instance start process.
I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
I basically see two options: either run JRuby and use multithreading, or run several regular Ruby instances and hope that not many people try to use the service at once. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or have out-of-the-box support at Heroku and Engine Yard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeouts, or you cannot control the deployment environment, you may want to process the request in the background (a rough sketch follows the list below):
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
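Here is a rough sketch of that ticket-and-poll flow. All of the names (HardwareQuery, HardwareLookupJob, HardwareClient) are made up for illustration, and the queue shown is delayed_job, but any background-job library would do:

class HardwareQueriesController < ApplicationController
  # POST /hardware_queries -- user requests data; we enqueue the work and hand back a ticket
  def create
    query = HardwareQuery.create!(:device_id => params[:device_id], :status => "pending")
    Delayed::Job.enqueue HardwareLookupJob.new(query.id)
    render :json => { :ticket => query.id }, :status => :accepted
  end

  # GET /hardware_queries/:id -- the client polls back with the ticket id
  def show
    query = HardwareQuery.find(params[:id])
    render :json => { :status => query.status, :result => query.result }
  end
end

# The background worker runs this; it makes the slow (up to ~10 s) call to the hardware.
HardwareLookupJob = Struct.new(:query_id) do
  def perform
    query = HardwareQuery.find(query_id)
    result = HardwareClient.fetch(query.device_id)
    query.update_attributes!(:status => "done", :result => result)
  end
end

The web process returns immediately, the worker does the slow hardware call, and the client keeps polling the show action until the status flips to "done".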
As far as hosting in JRuby, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you run config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
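For reference, that flag is just a one-liner in the environment config (Rails 2.2 through 3.x; later versions are threadsafe by default):

# config/environments/production.rb
config.threadsafe!   # allow concurrent request handling under a threaded server such as JRuby/Trinidad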
You can also run the web service call in a delayed background job so that it's not hogging a web server process, and it can even run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX, you can then ping the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.
I have a rails application that takes a while to process certain requests (as it processes image uploads). I currently have three mongrel processes in a cluster, and I expect one of the other two to handle a second request if the first one is busy.
However, this doesn't seem to be happening. If I watch the output from top on the server, I can tell when it is resizing an image, so if I try to hit the application with an easy request (which would normally finish quickly), it seems to wait until the long-running request is finished.
When I look in the log/mongrel.xxxx.log files, all I see is the output from the application starting up.
How can I verify which process is actually handling each request?
Can I get the mongrels to log each request (even if it is just a timestamp and something like 'GET /path/to/resource')?
Any ideas why the cluster might not be sharing the load in the first place?
The front-end web server is nginx, so maybe that is where I should be looking? The nginx access_log doesn't seem to have anything in it about where the request was proxied to.
We encountered similar issues on reevoo.com some time ago. Have a look at the resultant blog post.
Of specific interest is the use of mongrel_proctitle to diagnose the issue. It lets a ps aux show what each specific mongrel is handling.
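As for logging which process handles each request, one low-tech option is a before filter in ApplicationController that writes the worker's PID to the log. A minimal sketch (the log format is made up; adjust to taste):

class ApplicationController < ActionController::Base
  before_filter :log_handling_process

  private

  # Logs the PID of the mongrel serving this request plus the method and path,
  # so the log shows whether the cluster is actually sharing the load.
  def log_handling_process
    logger.info "[pid #{Process.pid}] #{request.method.to_s.upcase} #{request.path}"
  end
end

Each mongrel has its own PID, so if every logged line shows the same PID while another request is grinding through an image resize, the load is not being balanced.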