In ruby/rails, can you differentiate between no network response vs long-running response?

We have a Rails app with an integration with box.com. Fairly frequently, a request to our app for a box.com action ties up a Passenger process for right around 15 minutes, and then we get the following exception:
Errno::ETIMEDOUT: Connection timed out - SSL_connect
Often it's on something that should be fairly quick, such as listing the contents of a small folder, or deleting a single document.
I'm under the impression that these requests never actually got an open channel: either at the TCP or SSL level we got no initial response, or the full handshake/session setup never completed.
I'd like either condition to time out quickly, say within 15 seconds, but allow a large file that is successfully transferring to continue.
Is there any way to get TCP or SSL to raise a timeout much sooner when the connection at either of those levels fails to complete setup, but not raise an exception if the session is successfully established and it's just taking a long time to actually transfer the data?
Here is what our current code looks like - we are not tied to doing it this way (and I didn't write this code):
require 'net/http'

def box_delete(uri)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  # NOTE: VERIFY_NONE disables certificate checking; insecure in production
  http.verify_mode = OpenSSL::SSL::VERIFY_NONE
  request = Net::HTTP::Delete.new(uri.request_uri)
  http.request(request)
end
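One way to separate the two failure modes (a sketch, not from the thread): Net::HTTP's open_timeout applies to connection setup (and, on reasonably recent Rubies, should cover the SSL handshake as well), raising Net::OpenTimeout, while read_timeout applies per read, so a large download that keeps streaming is not interrupted as long as each individual read completes in time. The 15-second figure below mirrors the question.

require 'net/http'

def box_delete(uri)
  http = Net::HTTP.new(uri.host, uri.port)
  http.use_ssl = true
  http.open_timeout = 15  # raises Net::OpenTimeout if TCP/SSL setup stalls
  http.read_timeout = 60  # per read, so a steadily streaming body can run long
  http.request(Net::HTTP::Delete.new(uri.request_uri))
rescue Net::OpenTimeout
  # The connection was never established -- the "no network response" case
  raise
end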

Related

How can I force close excon connection when using chunked request

I am trying to read the first chunk of each image I am requesting to get its mime type and size which I'm able to do.
However, when I use Connection#reset it doesn't kill the connection, and it keeps downloading the next chunks.
I am just wondering: is it possible to close the connection after getting the first chunk?
This is my code right now
streamer = lambda do |chunk, _remaining_bytes, _total_bytes|
  image_format = MimeMagic.by_magic(chunk)
  # other code
  @connection.reset
end

Excon.defaults[:chunk_size] = 25
@connection = Excon.new(image_url)
@connection.get(response_block: streamer)
I don't believe there is currently a way to stop before the chunked response concludes. That said, you might be able to get the data you want from a HEAD request and avoid the GET entirely.
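For the mime type and size specifically, a HEAD request returns only the status line and headers, so Content-Type and Content-Length may be enough (a sketch; image_url is assumed from the question):

require 'excon'

# HEAD fetches headers only, so no body chunks are downloaded at all
response  = Excon.head(image_url)
mime_type = response.headers['Content-Type']
size      = response.headers['Content-Length'].to_i

Unlike MimeMagic, this trusts the server's reported Content-Type, which may be missing or wrong, so it's a trade-off rather than a drop-in replacement.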

Rails HTTP Ping to unreliable Target

I would like to send an HTTP GET request to an ip address and port to determine if there is a device online that can respond at that address.
I want a reasonably short timeout so that my application does not hang while connecting if there's no device there. I have been using Net::HTTP, but there does not seem to be a way to set a timeout when using an IP address.
res = Net::HTTP.get_response(ip_address, '/index.html', port)
Is there a best practice or better method to perform this request or a way to set a timeout in Net::HTTP when using an ip address rather than domain name?
I'm using Ruby 2.1.5 and Rails 4.1.0 with hosting on Heroku.
You can look at the HTTParty gem. It provides many options and is easy to use.
You can set a timeout for the request to return a response:
response = HTTParty.get('https://www.google.co.in/', timeout: 60)
timeout is in seconds.
Or with Net::HTTP you can set it as follows:
require 'net/http'

# Note: URI.parse needs a scheme, or uri.host will be nil
uri = URI.parse("http://#{ip_address}:#{port}/index.html")
request = Net::HTTP::Get.new(uri.path)
begin
  response = Net::HTTP.start(uri.host, uri.port) do |http|
    http.read_timeout = 100 # default is 60 seconds
    http.request(request)
  end
rescue Net::ReadTimeout => e
  puts e.message
end
There's no major difference between requesting via an IP address or a DNS name; in the latter case a DNS query is made and usually a Host header is set, after which the request goes over the IP anyway.
Net::HTTP has an open_timeout setting that raises Net::OpenTimeout if the connection cannot be established within that period. By default it's nil, which means 'wait forever'.
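Putting that together, a minimal reachability check might look like this (a sketch; device_online?, the 5-second values, and the /index.html path are illustrative, not from the thread):

require 'net/http'

def device_online?(ip_address, port)
  http = Net::HTTP.new(ip_address, port)
  http.open_timeout = 5 # raises Net::OpenTimeout if the connect stalls
  http.read_timeout = 5 # raises Net::ReadTimeout if the device goes quiet
  http.get('/index.html')
  true
rescue Net::OpenTimeout, Net::ReadTimeout, Errno::ECONNREFUSED, SocketError
  false
end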
Not sure what you are looking for. The Net::HTTP class has a read_timeout setter. See: http://docs.ruby-lang.org/en/2.1.0/Net/HTTP.html#method-i-read_timeout-3D

Optimizing HTTP Status Request on Heroku (Net:HTTP)

I'm running an app on Heroku where users can get the HTTP status (200, 301, 404, etc.) of several URLs that they paste into a form.
Although it runs fine on my local Rails server, on Heroku I cannot check more than 30 URLs (I want to check 200), as Heroku times out after 30 seconds, giving me an H12 error.
def gethttpresponse(url)
  httpstatusarray = Hash.new
  uri = URI.parse(url)
  http = Net::HTTP.new(uri.host, uri.port)
  request = Net::HTTP::Get.new(uri.request_uri)
  response = http.request(request)
  httpstatusarray['url'] = url
  httpstatusarray['status'] = response.code
  return httpstatusarray
end
At the moment I'm using Net::HTTP, and it seems very heavy. Is there anything I can change in my code, or another gem I could use, to get the HTTP status/headers in a more efficient (faster) way?
I noticed that response.body holds the entire HTML source of the page, which I do not need. Is this loaded on the response object by default?
If this is already the most efficient way to check an HTTP status, would you agree that this needs to become a background job?
Any references to faster gems, reading material, and thoughts are more than welcome!
If a request takes longer than 30 seconds then it is timed out, as you're seeing here. You're entirely dependent on how fast the server at the other end responds. For example, if that server were itself a Heroku application on one dyno, it could take around 10 seconds just to unidle, leaving only 20 seconds for the remaining URLs.
I suggest you move each check to its own background job, then poll the jobs to know when they complete and update the UI accordingly.
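On the response.body question: yes, Net::HTTP reads the full body by default. A HEAD request avoids that, since it returns the status and headers with no body (a sketch; http_status and the 5-second timeout are illustrative):

require 'net/http'

def http_status(url)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    http.open_timeout = 5 # fail fast on dead hosts
    http.request(Net::HTTP::Head.new(uri.request_uri)).code
  end
end

Be aware that some servers answer HEAD differently from GET (or not at all), so spot-check it against the URLs you care about.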

Ruby - extending timeout for slow API calls

I'm calling an external API over HTTP which takes more than 30 seconds to provide a response. When I run it, although the API call completes successfully (the remote service does what it's supposed to do), my Ruby code hits a timeout before it receives the 'OK' response. I get this error:
/Users/chris/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/net/protocol.rb:158:in `rescue in rbuf_fill': Net::ReadTimeout (Net::ReadTimeout)
Is there a way I can give it more time so it can cleanly handle the response?
http = Net::HTTP.new(@host, @port)
http.read_timeout = 500 # seconds; the default is 60
Source: http://ruby-doc.org/stdlib-2.1.1/libdoc/net/http/rdoc/Net/HTTP.html#method-i-read_timeout-3D
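If you open the connection with Net::HTTP.start, the same setting can also be passed as an option (a sketch; the host, port, and path are placeholders):

require 'net/http'

# read_timeout covers each read of the response, not the total request time
Net::HTTP.start(@host, @port, read_timeout: 500) do |http|
  http.request(Net::HTTP::Get.new('/slow/endpoint'))
end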

ActiveResource EOFError on "slow" API

I'm seriously struggling to solve this one; any help would be appreciated!
I have two Rails apps, let's call them Client and Service, all very simple, normal REST interface - here's the basic scenario:
Client makes a POST /resources.json request to the Service
The Service runs a process which creates the resource and returns an ID to the Client
Again, all very simple, except that Service processing is very time-intensive and can take several minutes. When that happens, an EOFError is raised on the Client exactly 60s after the request was made (no matter what ActiveResource::Base.timeout is set to), while the Service correctly processes the request and responds with 200/201. This is what we see in the logs (chronologically):
C 00:00:00: POST /resources.json
S 00:00:00: Received POST /resources.json => resources#create
C 00:01:00: EOFError: end of file reached
/usr/ruby1.8.7/lib/ruby/1.8/net/protocol.rb:135:in `sysread'
/usr/ruby1.8.7/lib/ruby/1.8/net/protocol.rb:135:in `rbuf_fill'
/usr/ruby1.8.7/lib/ruby/1.8/timeout.rb:62:in `timeout'
...
S 00:02:23: Response POST /resources.json, 201, after 143s
Obviously the service response never reached the client. I traced the error down to the socket level and recreated the scenario in a script, where I open a TCPSocket and try to retrieve data. Since I don't request anything, I shouldn't get anything back and my request should time out after 70 seconds (see full script at the bottom):
Timeout::timeout(70) { TCPSocket.open(domain, 80).sysread(16384) }
These were the results for a few domains:
www.amazon.com => Timeout after 70s
github.com => EOFError after 60s
www.nytimes.com => Timeout after 70s
www.mozilla.org => EOFError after 13s
www.googlelabs.com => Timeout after 70s
maps.google.com => Timeout after 70s
As you can see, some servers allowed us to "wait" for the full 70 seconds, while others terminated our connection, raising EOFErrors. When we ran this test against our service, we (expectedly) got an EOFError after 60 seconds.
Does anyone know why this happens? Is there any way to prevent these errors or extend the server-side timeout? Since our service keeps "working" even after the socket is closed, I assume the connection must be terminated at the proxy level?
Every hint would be greatly appreciated!
PS: The full script:
require 'socket'
require 'benchmark'
require 'timeout'

def test_socket(domain)
  puts "Connecting to #{domain}"
  message = nil
  time = Benchmark.realtime do
    begin
      Timeout::timeout(70) { TCPSocket.open(domain, 80).sysread(16384) }
      message = "Successfully received data" # Should never happen
    rescue Timeout::Error
      # Must come before the bare rescue: on modern Rubies Timeout::Error
      # is a StandardError, so a bare rescue listed first would swallow it
      message = "Controlled client-side timeout"
    rescue => e
      message = "Server terminated connection: #{e.class} #{e.message}"
    end
  end
  puts "  #{message} after #{time.round}s"
end

test_socket 'www.amazon.com'
test_socket 'github.com'
test_socket 'www.nytimes.com'
test_socket 'www.mozilla.org'
test_socket 'www.googlelabs.com'
test_socket 'maps.google.com'
I know this is nearly a year old, but in case anyone else finds this, I wanted to add a possible culprit.
Amazon's ELB will terminate idle connections at 60 seconds, so if you are using EC2 behind ELB, then ELB could be the server-side problem.
The only "documentation" I could find is https://forums.aws.amazon.com/thread.jspa?threadID=33427&start=50&tstart=50, but it's better than nothing.
Each server decides when to close the connection. It depends on the server-side software and its settings; you can't control that from the client.
