How to reduce TTFB in a Rails application

In the development environment, my app serves a GET request that completes in 10 seconds (Completed 200 OK in 10000 ms .... is written to the console output). But the TTFB is about 40 seconds. The GET request returns about 3 MB of data.
I changed the action so the processing stays the same but the response is an empty array, and the timings did not change at all.
So what consumes the time between 'controller finished its job' and 'TTFB, browser receives the first byte'?
Any explanation?
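One way to narrow this down is to measure TTFB yourself from a plain HTTP client, separating the time until the headers arrive from the time spent transferring the 3 MB body. A rough Ruby sketch (the URL is a placeholder for the slow action):

```ruby
require "net/http"
require "uri"

# Rough TTFB measurement: returns [ttfb, total] in seconds for a GET request.
# Net::HTTP yields the response object once the status line and headers have
# arrived, which approximates what the browser reports as time-to-first-byte.
def measure_ttfb(uri)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  ttfb = nil
  Net::HTTP.start(uri.host, uri.port) do |http|
    http.request(Net::HTTP::Get.new(uri)) do |response|
      ttfb = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
      response.read_body { |_chunk| } # stream the body without buffering it all
    end
  end
  [ttfb, Process.clock_gettime(Process::CLOCK_MONOTONIC) - started]
end

# Example (hypothetical URL for the slow action):
# ttfb, total = measure_ttfb(URI("http://localhost:3000/slow_action"))
# puts format("TTFB: %.3fs, total: %.3fs", ttfb, total)
```

If TTFB is low here but high in the browser, the extra time is likely being spent after Rails hands off the response (e.g. in middleware, buffering, or transfer), not in the controller.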

Related

TimeoutInterval in Alamofire

If I set a 5-minute timeout interval for an Alamofire request like below, does it mean an individual/overall API sync would take 5 minutes?
sessionConfiguration.timeoutIntervalForRequest = 300
self.defaultManager = Alamofire.SessionManager(configuration: sessionConfiguration, serverTrustPolicyManager: policyManager)
No, it means that the maximum time to wait for a response is 300 seconds. If the API finishes earlier, you won't have to wait that long.
Regarding the second question: it depends on your backend. For most cases 60 seconds is more than enough; however, if you have any call that exceeds it, you need to increase the interval. Note that it doesn't take parsing time into consideration, just getting the response from the server. If you have a large object that needs x minutes to parse but arrives in 50 seconds, it will be fine.
The main reason for a timeout is that your server might never respond; instead of letting your app hang, you can handle the timeout error somehow.
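The same "upper bound, not a fixed wait" semantics apply to read timeouts in other HTTP clients too. For example, a Ruby Net::HTTP sketch (the URL and limit here are illustrative): a fast response returns immediately, and only a server that stays silent past the limit raises a timeout.

```ruby
require "net/http"

# read_timeout is an upper bound: if the server responds sooner, the call
# returns immediately; only when no data arrives within the interval does
# Net::HTTP raise Net::ReadTimeout.
def fetch_with_timeout(uri, seconds)
  Net::HTTP.start(uri.host, uri.port, read_timeout: seconds) do |http|
    http.get(uri.request_uri)
  end
rescue Net::ReadTimeout
  nil # handle the timeout instead of letting the caller hang
end
```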

tank.log grows until it consumes all available space

I use yandex-tank to generate a load of 1 POST every 10 seconds for 24 hours. But yandex-tank failed after about 16 hours of running because tank.log consumed all the available free space. In my case it grew to 37 GB.
My load.ini:
[phantom]
address=192.168.254.201 ;Target's address
port=12224 ;target's port
rps_schedule=const(0.1, 24h) ;load scheme
connection_test=0
ssl=0
My ammo.txt consists of 10 similar post requests:
300
POST /api/< maybe confidential data>dimension1,dimension2,channel HTTP/1.1
Host: <confidential data>:12224
Content-Type: application/json
Content-Length: 103
Connection: keep-alive
{ "dimension1":"dimension1_1", "dimension2":"dimension2_not_used", "channel":"channel_1", "value": 91}
The command line:
yandex-tank ammo.txt
It seems that space consumed by repeating records "Stats cache timestamps:", like
2016-12-28 04:52:21,033 [DEBUG] yandextank.plugins.Aggregator.plugin plugin.py:101 Stats cache timestamps:
[1482836903, 1482836904, ....]
At the beginning of the file this record contains 1 timestamp, but the last available "Stats cache timestamps:" record contains 54212 timestamps!
There are more than 3 billion timestamps in the file in total!
Is there a way to suppress/switch off this logging?
It's a bug. I've removed these messages.

Different response time for post on RoR

I have a web service to which data is posted. What I do is parse the JSON and store the data.
And my response is
render json: { success: true, message: "ok" }, status: 200
Active Record takes about 10 ms to complete the inserts. There are no views, so Rails reports something like 0.4 ms to render them. What I find strange are the response times, which vary constantly (20 ms up or down would be fine, but these are the actual response times): 130 ms, 62 ms, 149 ms, 71 ms, 77 ms, 150 ms, 48 ms, 72 ms.
My app is on Heroku (I only have New Relic enabled, no other requests). Does anyone know why the difference, or a good way to test it? Maybe it's because of New Relic?
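One way to test it independently of New Relic is to time repeated requests from outside the app and look at the spread; if the variance is as large from a bare HTTP client, it is likely network/dyno jitter rather than instrumentation. A rough Ruby sketch (the endpoint and payload are placeholders):

```ruby
require "net/http"
require "uri"

# Collect response times for repeated POSTs and summarise them, to see
# whether the variance comes from the app itself or from the network/dyno.
def time_posts(uri, payload, count)
  times = Array.new(count) do
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    Net::HTTP.post(uri, payload, "Content-Type" => "application/json")
    Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  end
  mean = times.sum / times.size
  variance = times.sum { |t| (t - mean)**2 } / times.size
  { mean: mean, stddev: Math.sqrt(variance), min: times.min, max: times.max }
end

# Example (hypothetical endpoint):
# p time_posts(URI("https://myapp.herokuapp.com/events"), '{"a":1}', 20)
```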

JMeter: How do I fix the "No. of Times (N)" a network request may get repeated?

I am using JMeter to get the number of bytes, response timings, and request status for a series of requests.
A few requests get called/executed a repeated number of times. How can I make sure that my JMeter test plan repeats those network requests the correct N number of times?
In your Thread Group, set the loop count to the 'No. of Times (N)' you want the request to be repeated.

How does Parse respond when there are more requests than the limit?

I am working on an implementation against the Parse service.
Suppose we have a free Parse account, so it allows us to send 30 requests/second.
What happens if 40 requests are sent in one second?
Will Parse respond to the extra 10 requests after the first 30 complete, or will it reject them?
If it responds to the 10 requests after the first 30, and the same thing happens continuously for 2 minutes, then we have 10 * 120 = 1200 requests pending. What happens in this scenario?
If the Parse service is going to reject requests for this reason (more requests than the limit), how do we find out? Is there an error code for this rejection?
From this post:
https://parse.com/questions/getting-this-application-has-exceeded-its-burst-limit-code-155-any-idea
it seems that any additional call that exceeds the burst limit will return an error like this:
{"code":155,"error":"This application has exceeded its burst limit."}
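Given that error format, a client can detect code 155 in the response body and retry with backoff instead of dropping the request. A minimal Ruby sketch (the endpoint, headers, and retry policy here are illustrative, not anything Parse prescribes):

```ruby
require "net/http"
require "json"
require "uri"

# Error code Parse reportedly returns when the burst limit is exceeded.
BURST_LIMIT_CODE = 155

# Retry a POST with exponential backoff while the response carries code 155.
def post_with_backoff(uri, payload, headers, max_attempts: 5, delay: 1.0)
  max_attempts.times do
    response = Net::HTTP.post(uri, payload.to_json, headers)
    body = JSON.parse(response.body) rescue {}
    return body unless body["code"] == BURST_LIMIT_CODE
    sleep delay
    delay *= 2 # back off: 1s, 2s, 4s, ...
  end
  raise "still over the burst limit after #{max_attempts} attempts"
end

# Example (hypothetical endpoint and headers):
# post_with_backoff(URI("https://api.parse.com/1/classes/Score"),
#                   { "score" => 91 },
#                   { "Content-Type" => "application/json" })
```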
