Why does my website time out while running a JMeter load test?

I'm new to JMeter and followed this tutorial to learn it.
I tried to run a load test under the following conditions:
Number of Threads (Users) - 1000
Ramp-Up Period (in seconds) - 10
Loop Count - 5
While the test was running, I tried to load my website in a browser (after clearing the cache), but it took much longer than usual to load the page. This issue doesn't occur when the browser has cached data.
Can someone please tell me why this is happening? Is it because the site may crash when 1000 users load it, or something like that?
Any kind of explanation will be appreciated.

If you try to load your website (after clearing the cache) while your JMeter test is running, it will always take longer than usual. Because you cleared the cache, the browser has to fetch and render all of the page resources again. Once the page has loaded, loading it again without clearing the cache is much faster: the browser does not fetch every resource each time, it saves them in its cache and reuses them, so the first load of a page always takes longer than later loads of the same page.
Another point: since your JMeter test was running while you tried to load the website, the page will take longer to load anyway. Your application was already handling the requests sent by JMeter, so the extra load has an impact on your page response time.
Ramp-up time of 10 seconds for 1000 users!
That is not good practice. You have to give those 1000 users enough time to ramp up, and 10 seconds is far too short: it means JMeter starts roughly 100 new threads every second. So during the test it is to be expected that your browser takes an unusually long time to load the page, or ends up with a "Connection Timeout". That doesn't necessarily mean your application has crashed; it is simply the result of an unrealistic test design in JMeter.
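As a rough illustration (the exact numbers depend on the load you actually want to simulate, so treat this only as a sketch), a gentler Thread Group configuration for the same 1000 users might look like this:
Number of Threads (Users) - 1000
Ramp-Up Period (in seconds) - 100
Loop Count - 5
With a 100-second ramp-up, JMeter adds about 10 users per second instead of 100, which gives the application time to warm up and makes the results much easier to interpret.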

Could you elaborate on the type of web server software you are using, e.g.:
- Apache HTTPD 2.4 / Nginx / Apache Tomcat / IIS
And the underlying operating system?
- Windows (Server?) / Mac OS X / Linux
If your web server machine is not limited by the maximum performance of its CPU, disk, etc. (check the Task Manager), performance might be limited by the Apache configuration.
Could you please check the Apache HTTPD log files for relevant warnings?
Depending on your configuration (httpd.conf plus any files "Include"d from there) you may be using the mpm_winnt MPM, which has a configurable number of worker threads, 64 by default, according to:
https://httpd.apache.org/docs/2.4/mod/mpm_common.html#threadsperchild
Once these are all busy, new requests from any client (your browser, your load test, etc.) will have to wait their turn.
Try and see what happens if you increase the number of threads!
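A minimal sketch of what that could look like in httpd.conf, assuming the mpm_winnt MPM is in use (150 is only an example value; tune it to your hardware and restart Apache afterwards):
<IfModule mpm_winnt_module>
    # Default is 64; raise it so more requests can be served concurrently
    ThreadsPerChild 150
</IfModule>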

Related

How to improve prerender speed for Twitterbot requests?

For the project I am working on, I've set up a prerender service on the same server as the project and use Nginx to pass social media requests to the prerender service.
I have observed that if an authorized user shares a page to Twitter, it usually works, i.e. the meta tag image and text are rendered as a Twitter card. However, if the user has shared other pages of the same project, the images are usually not rendered when the user views his Twitter posts.
From the Nginx access log, it seems the Twitterbots made requests at the same time and the prerender service was too slow to render the pages: 499 statuses show up for the Twitterbot requests and 504s show up in the prerender log.
The server is hosted on UpCloud on a 1 CPU / 2 GB memory plan. The prerender service runs in a Docker container limited to 300 MB and caches rendered pages for 60 seconds. Because of the memory quota, I hesitate to increase the cache duration.
I have been studying the server logs and possible solutions, but haven't been able to come up with anything other than refactoring the UI. Has anyone else struggled with this issue, and how did you overcome it?
That seems like a pretty underpowered server for running a prerender service. You might want to at least give it more RAM, and possibly another CPU, to get better performance. 504s shouldn't be happening often at all.
Depending on how long your pages take to prerender, caching for much longer than 60 seconds is highly recommended. You probably won't see many cache hits within 60 seconds (from users sharing URLs on Twitter) for a single URL unless you have a very high-traffic site.
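If the 300 MB container is the reason the TTL is kept low, one option (a sketch of an alternative approach, not something built into the prerender service itself) is to let Nginx cache the prerendered responses on disk, so the cache lifetime can be raised without giving the container more memory. The cache path, zone size and upstream port 3000 below are assumptions:
# in the http {} block
proxy_cache_path /var/cache/nginx/prerender levels=1:2 keys_zone=prerender:10m max_size=500m inactive=24h;

# inside the server {} block, wherever bot traffic is already routed to the prerender service
location @prerender {
    proxy_cache       prerender;
    proxy_cache_valid 200 24h;
    proxy_pass        http://127.0.0.1:3000;
}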

Very slow POST request with Ruby on Rails

There are two of us working on a website with Ruby on Rails that receives GPS coordinates sent by a tracking system we developed. This tracking system sends 10 coordinates every 10 seconds.
We have 2 servers for testing our website, and we noticed that one server processes the 10 coordinates very quickly (less than 0.5 s) whereas the other takes at least 5 seconds (and up to 20 seconds). We are supposed to use the "slow" server to put our website into production, which is why we would like to solve this issue.
Here is a screenshot showing the response time of the slow server (at the bottom it shows 8593 ms):
[screenshot: Slow Server]
The second screenshot shows the response time of the "quick" server:
[screenshot: Fast Server]
The version of the website is the same on both servers; we deploy it via GitHub.
We can easily reproduce the problem by sending fake coordinates with Postman, and the difference in time between the two servers remains the same, so in my opinion the problem does not come from our tracking system.
I came here to find out what could cause such a difference. I guess it could be a problem with the server itself, or with some settings that are not deployed through GitHub.
We use SQLite3 for our database.
However, I do not even know where to look to find the possible differences...
If you need further information (such as the lscpu output; I am limited to two links) in order to help me, please do not hesitate to ask. I will reply very quickly, as I work on this all day long.
Thank you in advance.
EDIT: here is the output of the lscpu command on each server.
Fast Server: [screenshot of lscpu output]
Slow Server: [screenshot of lscpu output]
Maybe one big difference is the L2 cache...
My guess is that the answer is here, but how can I find out the current value of PRAGMA synchronous, and how can I change it?
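For reference, a hedged way to inspect and change that setting from a Rails console (the exact return format depends on the sqlite3 gem version, and note that PRAGMA synchronous applies per connection rather than being stored in the database file):
# 0 = OFF, 1 = NORMAL, 2 = FULL (the default)
ActiveRecord::Base.connection.execute("PRAGMA synchronous")
# relax it for the current connection
ActiveRecord::Base.connection.execute("PRAGMA synchronous = NORMAL")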
The size of the .sqlite3 file I use is under 1 MB for these tests. Both databases should be identical according to my schema.rb file.
The provider of the "slow" server solved the problem, although I do not know the details. Something was consuming memory and slowing everything down.
By "virtual server" they mean that several servers run on the same machine, each being allocated a share of the machine's resources.
Thanks a lot for your help.

Apache/Passenger/RoR Slow - But Why

I am running Ubuntu (64Bit) with Apache 2.2.17, Passenger 3.0.11, Ruby 1.9.3 and Rails 3.2.6
When accessing a page (index.html) on my website, the request takes ages to complete, somewhere around 30 seconds in extreme cases.
The server has plenty of memory available (top shows more than 4 GB free), the Apache processes (there are 10 of them) each show 0% CPU in top, the load average is almost 0, and there are hardly any DB accesses because I cache most things with memcached.
The Apache and Rails log files do not show any errors; on the contrary, the render times in the Ruby on Rails log are excellent (<100 ms).
So where to go from here?
Is the first request slow, or are all requests slow? Passenger shuts application processes down after a given idle interval, so intermittent requests (requests with a sufficient time span between them) allow Passenger to shut processes down, only for them to be restarted at the next request.
Passenger does this auto-shutdown BY DESIGN. In a shared environment there might be other users' apps; if your app is idle for a while, its resources can be handed over to other people's apps.
If you are on a tight budget and you host multiple apps on the same server, Passenger is a great solution.
If you have only ONE app on your server and you control it, then reconfigure Passenger NOT to shut down (if that is indeed your problem).
You can run "passenger-status" to see how many application processes are currently running and available to take requests.
The configuration directives that keep Passenger processes up are PassengerMinInstances and PassengerPoolIdleTime.
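A minimal sketch of those directives in the Apache configuration (the instance count is just an example; PassengerPoolIdleTime 0 tells Passenger never to shut processes down because they are idle):
# keep at least 3 application processes alive at all times
PassengerMinInstances 3
# never shut processes down due to idleness
PassengerPoolIdleTime 0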
Are you accessing it through a 'fake domain name' (added to your /etc/hosts file)?
If so, do
service avahi-daemon stop
At least that's what worked for me on ubuntu 10.10 :)
For some reason a DNS lookup is made on each and every request you send to the server, and when the domain doesn't exist, it times out...
The performance issue kept me busy for days. I believe I have nailed it down to an Apache configuration setting: KeepAliveTimeout was set to a very high value (90). I can't think why it was set that high; it must have been a typo.
My understanding of KeepAliveTimeout is that an Apache process stays locked to the client for up to 90 seconds even if the client isn't issuing any further requests. So when traffic picks up (which it did on the day performance dropped, with page visits more than tripling), all Apache processes end up busy waiting out the KeepAliveTimeout while new incoming requests are blocked. This would also explain why the system showed hardly any load: it was just sitting there waiting. I reduced the value to 10; if traffic picks up I'll probably drop it to 5.
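For reference, the relevant httpd.conf lines might look like the sketch below (5 to 10 seconds is a common range; the right value depends on your traffic):
KeepAlive On
# free the worker quickly if the client sends no further requests on the connection
KeepAliveTimeout 5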

Heroku app takes too long to load at times

My app is basic (1 dyno, 20 MB slug size) and some of the pages take too long to load at times. Using Firebug, I've observed that most of the time the pages load within 3-4 seconds, but sometimes a page takes more than a minute to load (both data points are with the browser cache cleared). The basic HTML response came back within 500 ms; the main component of the time was downloading a 17 kB PNG image, for which the wait time (after sending the request) was more than a minute. I cannot understand why this would be the case.
I am using YSlow to analyze the entire page (it gave a B grade), and I think this has something to do with Heroku taking a long time to serve images at times.
I have referred to the question - Why are my basic Heroku apps taking two seconds to load?
As suggested in the answers, I have put a simple cron task on Heroku that accesses the homepage every hour through a GET request.
What could I do to improve the speed?
I am considering the following things:
1. Move images to a CDN
2. Put a far-future Expires header as described at http://upstre.am/blog/tag/heroku/ (see the sketch after this list)
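For option 2, a hedged sketch of what that can look like in a Rails 3/4-era app on Heroku (on Heroku there is typically no separate web server in front of the dyno, so the Rails process serves public/ itself and the headers have to be set there; the one-year max-age is only an example):
# config/environments/production.rb
config.serve_static_assets = true
config.static_cache_control = "public, max-age=31536000"  # far-future cache header (one year)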
I have put a simple cron task on Heroku that accesses the homepage every hour through a GET request.
From what you are describing, you are using a Heroku cron job to ping your app. Unfortunately this will not work; you have to use an external ping service such as Pingdom.
Update: it seems external ping services like Pingdom no longer work either.
Heroku idles dynos that haven't been used for more than 30 minutes, I believe. This means you'll need more than 1 web dyno if you want to keep the app active and ready to respond at any time. Worker dynos are never idled.
Basically, just scale your app to 2 web dynos.
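With the Heroku command-line tool that is something like the following (note that running a second dyno typically isn't free, so check the pricing implications first):
heroku ps:scale web=2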

For Ruby and Rails, how to print out the true page rendering time on the webpage?

Suppose the controller records a start time:
def index
  @time_start_in_controller = Time.now
  ...
end
and at the end of the view you print it:
<%= "took #{Time.now - @time_start_in_controller} seconds" %>
But the time measured in the view isn't the true end of rendering, because the view still has to be merged into the layout and so on. What is a more accurate way (as accurate as possible) to print the page generation time right on the web page?
(Update: also, the console log shows the request taking 61 ms, but the page definitely takes 2 to 3 seconds to load, and the network I am using is fast, at home and at work, 18 Mbps or higher with a ping of maybe 30 ms.)
Update: it is a bit strange that if I use the HTTP benchmarking tool ab
ab -n 10 http://www.my-web-site.com:8080/
it takes 3 seconds in total for 10 requests, but if I use Firefox or Chrome to load the page, each page load takes about 3 seconds. The traffic is tunneled through my work computer back to my notebook running Rails 3, but that shouldn't make a difference, because I run the ab command locally in Bash and use Firefox locally too.
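One hedged way to get a number on the page that covers layout rendering and the rest of the Rack stack (a sketch only; RenderTimer and the __RENDER_TIME__ placeholder are made-up names, and this still measures server time only, so it will not explain a gap between 61 ms in the log and 2-3 s in the browser if that gap comes from the network or asset downloads): print a placeholder at the bottom of the layout and let a small Rack middleware swap it for the elapsed time measured around the whole request.
# app/middleware/render_timer.rb (make sure this directory is on the load/autoload path)
class RenderTimer
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Time.now
    status, headers, response = @app.call(env)
    elapsed_ms = ((Time.now - started) * 1000).round

    # Collect the body, replace the placeholder, and fix the Content-Length
    body = ""
    response.each { |part| body << part }
    response.close if response.respond_to?(:close)
    body.sub!("__RENDER_TIME__", "#{elapsed_ms} ms")
    headers["Content-Length"] = body.bytesize.to_s

    [status, headers, [body]]
  end
end
Register it with config.middleware.use "RenderTimer" in config/application.rb and put the literal text __RENDER_TIME__ in the layout footer.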
In a typical production environment, static content (images, CSS, JS) is handled by the web server (e.g. Apache, Nginx, etc.), not your Rails server, so you should check those logs as well. If you are serving static content from your Rails server, that could be your problem right there.
If your browser time is slow but the time taken in Rails (according to the logs) is fast, that can mean many things, including but not limited to:
network speed is slow
your DNS server is slow and the browser can't resolve your domain quickly (this happens, for instance, if you use GoDaddy as your DNS provider; they throttle DNS lookups)
the request concurrency exceeds the number of threads you have in Rails
One way to debug these types of performance issues is to put something in front of the Rails server (for example HAProxy) and turn its logging up to full. The logs will show how many requests are waiting and how long the actual request/response transfer took, along with how long your Rails thread took to process each request.
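A minimal HAProxy sketch of that setup (the ports and the single backend server are assumptions; with option httplog each log line carries per-request timers, such as time spent queued and time waiting for the backend, which is what shows where the delay comes from):
global
    log /dev/log local0

defaults
    mode    http
    log     global
    option  httplog
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend rails_front
    bind *:8080
    default_backend rails_back

backend rails_back
    # maxconn makes request queueing explicit and visible in the logs
    server rails1 127.0.0.1:3000 maxconn 32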

Resources