How can I simulate load/traffic on my local machine - ruby-on-rails

I have an application that checks the status of text messages and resends any that failed. Each time I restart it on Heroku, it works well for about 15 minutes before it crashes with the following error:
ActiveRecord::ConnectionTimeoutError (could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)):
I suspect this is caused by the high number of hits to the application endpoint.
I am trying to reproduce this error locally on my computer, so I won't have to push changes each time just to find out whether a fix works.
Could anybody point me in the right direction?
I am thinking of searching for something like stress testing or load testing.
FYI: I am using RSpec for my tests.

I don't know if you've already looked through it, but the Rails docs have a pretty good guide on performance testing:
http://guides.rubyonrails.org/v3.2.13/performance_testing.html
The benchmarker section might be particularly useful if you want to hit an ActiveRecord query a number of times:
http://guides.rubyonrails.org/v3.2.13/performance_testing.html#benchmarker
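If you just want to time a piece of Ruby repeatedly without booting a full Rails app, the standard library's Benchmark module is a lighter-weight stand-in for the guide's benchmarker. This is a minimal sketch; the two workloads below are placeholders for whatever query or method you actually want to measure:

```ruby
require "benchmark"

# Time each workload n times; labels are padded to 12 characters.
n = 100
results = Benchmark.bm(12) do |x|
  x.report("string build") { n.times { "a" * 1_000 } }
  x.report("array sort")   { n.times { (1..500).to_a.shuffle.sort } }
end
puts "measured #{results.size} workloads"
```

Benchmark.bm prints a small table of user/system/real times and returns one Benchmark::Tms result per report, so you can also inspect the numbers programmatically.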

You can use a load-testing tool that works from the console to test your app on localhost:
https://github.com/tsenart/vegeta
https://github.com/lubia/sniper
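Separately from hitting the endpoint with an external tool, the specific failure mode behind ActiveRecord::ConnectionTimeoutError can be reproduced in plain Ruby: more concurrent requests than pooled connections, with each request holding its connection longer than the checkout timeout. This is a minimal sketch; TinyPool is a hypothetical stand-in for ActiveRecord's pool, not real Rails code:

```ruby
require "timeout"

class TinyPool
  def initialize(size:, checkout_timeout:)
    @pool = Queue.new
    size.times { |i| @pool << "conn-#{i}" }
    @checkout_timeout = checkout_timeout
  end

  def with_connection
    # Block until a connection is free, or raise after the timeout --
    # the same behaviour AR surfaces as ConnectionTimeoutError.
    conn = Timeout.timeout(@checkout_timeout) { @pool.pop }
    yield conn
  ensure
    @pool << conn if conn
  end
end

pool = TinyPool.new(size: 2, checkout_timeout: 0.5)
timeouts = Queue.new

# Five "requests" each hold a connection for 1s; with only 2 connections
# and a 0.5s checkout timeout, the other three requests time out.
threads = 5.times.map do
  Thread.new do
    begin
      pool.with_connection { sleep 1 }
    rescue Timeout::Error
      timeouts << :timeout
    end
  end
end
threads.each(&:join)
puts "timed-out requests: #{timeouts.size}"  # => 3
```

Raising the pool size, shortening the work each request does, or lowering concurrency all make the timeouts disappear, which is the same knob-turning you would do on the real app.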

Related

end of file reached EOFError (Databasedotcom + Rails + Heroku)

After much frustration trying to figure it out myself, I am reaching for SO guys (you!) to help me trace this formidable error:
Message: end of file reached (EOFError)
Backtrace:
["/app/vendor/ruby-1.9.3/lib/ruby/1.9.1/openssl/buffering.rb:174:in `sysread_nonblock
Background: My app is a Rails 3 app hosted on Heroku, 100% back-end. It uses Redis/Resque workers to process its payload, received from Salesforce via the Chatter REST API.
Trouble: Unlike other, similar EOF errors in HTTPS/OpenSSL in Ruby, mine happens very randomly (I can't yet predict when it will come up).
Usual suspects: The error shows up quite frequently when I create 45 Resque workers and try to sync data from 45 different Salesforce Chatter REST API connections all at once! It's so frequent that 20% or more of my processing fails, all due to this error.
Remedy Steps:
I am using the Databasedotcom gem, which uses HTTPS and follows all the required steps to create a sane HTTPS connection.
So...
Use SSL set in HTTPS - checked
URI Encode - checked
Ruby 1.9.3 - checked
The HTTP read timeout is set to 900 seconds (15 minutes)
I retry on this EOF error a maximum of 30 times, sleeping 30 seconds before each retry!
Still, it fails some of the data.
Any help here please?
Have you considered that Salesforce might not like so many connections at once from a single source, and that you are being blocked by a DDoS preventer or rate limiter?
Also, setting such long timeouts is fairly useless. If the connection fails, drop it and reschedule a new job yourself; that is something Resque does well. Otherwise those waiting times add up whenever the connection keeps having problems...
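A sketch of that reschedule-instead-of-wait idea: rather than one very long timeout plus fixed 30-second sleeps, retry the call with short exponential backoff and give up quickly so the job can be requeued. The method name and delay values here are illustrative assumptions, not Databasedotcom or Resque API:

```ruby
# Retry a block on dropped-connection errors with exponential backoff.
def with_retries(max_attempts: 5, base_delay: 0.01)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue EOFError, Errno::ECONNRESET
    raise if attempts >= max_attempts  # let the caller requeue the job
    sleep(base_delay * (2**(attempts - 1)))  # 0.01s, 0.02s, 0.04s, ...
    retry
  end
end

# Simulate a connection that is dropped twice before succeeding.
calls = 0
result = with_retries do
  calls += 1
  raise EOFError, "simulated dropped TLS connection" if calls < 3
  :ok
end
puts "succeeded after #{calls} attempts"  # => succeeded after 3 attempts
```

In a Resque worker you would let the final raise bubble up (or re-enqueue explicitly) instead of sleeping for minutes inside the job.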

My app fails to connect to the server some times

I've been helplessly observing this problem for a couple months now, and have decided this is my best shot.
I'm not sure what the cause of the problem is, but I can list some of the things I'm doing. I have an iOS app that uses AFNetworking to connect to a remote server hosted by Google App Engine using HTTP POST requests.
Now, everything works great, but sometimes, very sporadically and at random, I get failed requests. The activity indicator spins for about a minute, and I get no feedback at the end, just a failed request. I check my server logs and don't see any errors. After the failed request I try again, and it works fine; it works fine for the whole day. Then at some other random time the issue repeats itself, sometimes spinning for 10 seconds before failing, sometimes for a minute.
Generally, what can possibly be the cause of this? Is it normal to have some failed connections randomly? Is that something on my part?
But here is the weird thing: while the app is running on my iPhone with the indicator spinning, trying to connect, I try connecting from the iOS simulator, and the connection works just fine. I try again on the iPhone, and it doesn't work.
If I close the app completely and start it again, it works again. So it sounds like it may be a software issue rather than a connection issue, but then again I have no evidence or data whatsoever.
I know it's vague, but I'm hoping someone may have had a similar problem. Anything helps.
There is a known issue with instance start-up on GAE for Java; you can star issue http://code.google.com/p/googleappengine/issues/detail?id=7706.
The same problem was reported for Python, but there it is not as big a problem.
I think you should check the logging level you use on App Engine and monitor all your calls. Instance start-up usually takes more time, so you will be able to see how much time start-up uses and whether it really is a timeout problem.
For the Java version you could try lowering the log level in your logging.properties file (note that java.util.logging has no DEBUG level; FINE is the closest):
.level = FINE
It will give you more information about the instance start-up process.

Heroku. Request taking 100ms, intermittently Times out

After performing load testing against an app hosted on Heroku, I am finding that the most DB-intensive request takes 50-200ms depending on load. It never gets slower, no matter the load. However, seemingly at random, the request will outright time out (30 seconds or more).
On Heroku, why might a relatively high performing query/request work perfectly 8 times out of 10 and outright timeout 2 times out of 10 as load increases?
If this is starting to seem like a question for Heroku itself, I'm looking to first answer the question of whether "bad code" could somehow cause this issue -- or if it is clearly a problem on their end.
A bit more info:
Multiple Dynos
Cedar Stack
Dedicated Heroku DB (16 connections, 1.7 GB RAM, 1 comp. unit)
Rails 3.0.7
Thanks in advance.
Since you have multiple dynos and a dedicated DB instance, and are paying hundreds of dollars a month for their service, you should ask Heroku.
Edit: I should have added that when you check your logs, you can look for lines that say "routing". That is the Heroku routing layer that takes HTTP requests and sends them to your app. You can add those up to see how much time is being spent outside your app. Unfortunately, I don't know how easy it is to collect large volumes of those logs for a load test.
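Adding up those numbers can be done with a few lines of Ruby. Heroku router log lines carry connect= and service= durations; the exact fields vary by stack, and this sample line is illustrative rather than a real log:

```ruby
# Pull the millisecond durations out of a router log line.
line = 'heroku[router]: at=info method=GET path="/orders" connect=2ms service=180ms status=200'
times = line.scan(/(\w+)=(\d+)ms/).to_h { |key, ms| [key, ms.to_i] }

in_app   = times["service"]  # time spent inside the dyno
overhead = times["connect"]  # time spent in the routing layer
puts "app: #{in_app}ms, routing overhead: #{overhead}ms"
```

Run over a whole log file from a load test, summing the two columns separately shows whether the 30-second outliers are spent in your app or in front of it.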

Slow request processing in rails, 'waiting for server' message

I have a quite big application running inside a Spree extension. The issue is that all requests are very slow, even locally. I see messages like "Waiting for localhost" or "Waiting for server" in my browser status bar for 3-4 seconds on each request before execution starts. The execution time logged in the log file is quite good, but the overall response time is poor because of the initial delay. Where should I start looking to improve this?
One possible root cause for this kind of problem is that the initial DNS name resolution fails before eventually resolving. You can check whether this is the case using tcpdump (if it's available for your platform) or Wireshark. Look for traffic to and from your client host on port 53 and see whether the name responses arrive in a timely fashion.
If it turns out that this is the problem, you need to make sure the client is configured so that the first resolver it tries knows about your server addresses (I'm guessing these are local LAN addresses that are failing to resolve). Different platforms have different ways of configuring this. A quick hack is to put the address of your server in the client's hosts file and see if that fixes it.
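If tcpdump isn't handy, the same check can be sketched in Ruby: time how long name resolution takes for the host you are requesting. "localhost" below stands in for whatever hostname the browser is actually waiting on; a result of several seconds would line up with the 3-4 second delay described above:

```ruby
require "resolv"
require "benchmark"

# Resolve the name the same way a client would (hosts file first, then DNS)
# and report how long the lookup takes.
host = "localhost"
elapsed = Benchmark.realtime { Resolv.getaddress(host) }
puts format("resolving %s took %.4fs", host, elapsed)
```

A lookup that takes seconds (rather than microseconds) points straight at resolver misconfiguration rather than at the Rails app.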
Once you send a request, you will see "waiting for host" right up until the Ruby work is done and the server starts sending a response. So pretty much any processing work that slows you down will produce this message. Start with the actions where you see the behaviour and break them down into pieces to find out which pieces are slow. If EVERYTHING is slow, look at the things that are common to every action: before filters, ApplicationController code, or something similar. When I'm just poking around to see what needs fixing, I put 'puts' statements at different stages of the code to print the current time; then I can see which stage is taking a long time.
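That print-the-time-at-each-stage idea can be made a little tidier with Benchmark.realtime, so each stage reports its own duration instead of you subtracting timestamps by hand. This is a sketch; the sleeps are placeholders for your real filters, queries, and rendering:

```ruby
require "benchmark"

# Record how long each stage of request handling takes.
timings = {}
timings[:before_filters] = Benchmark.realtime { sleep 0.01 }  # stand-in for filter work
timings[:db_queries]     = Benchmark.realtime { sleep 0.05 }  # stand-in for DB calls
timings[:rendering]      = Benchmark.realtime { sleep 0.01 }  # stand-in for view work

stage, seconds = timings.max_by { |_, t| t }
puts format("slowest stage: %s (%.2fs)", stage, seconds)
```

Whichever stage dominates tells you where to dig next.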

Response time increasing (worsening) over time with consistent load

Ok. I know I don't have a lot of information. That is, essentially, the reason for my question. I am building a game using Flash/Flex and Rails on the back-end. Communication between the two is via WebORB.
Here is what is happening. When I start the client, an operation calls the server every 60 seconds (not much, right?), which results in two database SELECTs, an UPDATE, and a response to the client.
This repeats every 60 seconds. I deployed a test version on Heroku, and New Relic RPM told me that response time degraded over time: one client, with one task every 60 seconds, and over several hours the response time drifted from 150ms to over 900ms.
I have been able to reproduce this in my development environment (my Macbook Pro) so it isn't a problem on Heroku's side.
I am not doing anything sophisticated (by design) in the server app. An action gets called, gets some data from the database, performs an AR update and then returns a response. No caching, etc.
Any thoughts? Anyone? I'd really appreciate it.
What does the development log say is slow for those requests, the view or the db? If it's the db, check how many records there are in the database and see how to optimize the queries. Maybe you need to index some fields.
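A sketch of why a missing index makes the same query degrade as rows accumulate: an unindexed lookup scans every row, so it gets slower as the table grows, while an indexed lookup goes straight to the matching rows. The column names below are made up for illustration, and the hash is only a rough analogue of a database index:

```ruby
require "benchmark"

# 200k rows, 1000 distinct players => 200 rows per player.
rows = Array.new(200_000) { |i| { id: i, player_id: i % 1_000 } }
by_player = rows.group_by { |r| r[:player_id] }  # roughly what add_index buys you

scan    = Benchmark.realtime { rows.select { |r| r[:player_id] == 42 } }
indexed = Benchmark.realtime { by_player[42] }
puts format("full scan: %.4fs, indexed lookup: %.6fs", scan, indexed)
```

In Rails terms, the fix is an add_index migration on whichever column the 60-second poll filters by, so the SELECTs stop scaling with table size.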
Are you running locally in development or production mode? I've seen Rails apps' performance degrade faster over time (memory usage) in development mode. I'm not sure whether one can run an app on Heroku in development mode, but if I were you I would check into that.
