I am using the httparty (0.13.1) gem to make a series of API calls. Some of my initial calls succeed, but the later ones fail consecutively. I have added a timeout of 180 seconds. I have searched Google but still can't find a solution, and I have been struggling with this for a long time.
My code:
response = HTTParty.get("http://pubapi.cryptsy.com/api.php?method=marketdatav2", timeout: 180)
Error:
A Net::ReadTimeout occurred in background at 2014-10-05 11:42:06 UTC :
I don't know whether this timeout is even taking effect. I feel 180 seconds is more than enough to retrieve the response, given that the default timeout is 60 seconds. If I want to handle the Net::ReadTimeout error, is there a way to do it? I want to return nil if this error occurs.
Or is there a better way to avoid this error altogether?
I had a similar problem with a different module; in my case the call usually succeeds if retried, so I catch the timeout and retry:
max_retries = 3
times_retried = 0

begin
  # thing that errors sometimes
rescue Net::ReadTimeout => error
  if times_retried < max_retries
    times_retried += 1
    puts "Failed to <do the thing>, retry #{times_retried}/#{max_retries}"
    retry
  else
    puts "Exiting script. <explanation of why this is unlikely to recover>"
    exit(1)
  end
end
Leaving this here in case someone else finds it helpful.
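Applied to the HTTParty call from the question above, the same pattern might look like this (a sketch; the URL and timeout are taken from the question, the retry count is arbitrary):

require 'httparty'

max_retries = 3
times_retried = 0

begin
  response = HTTParty.get("http://pubapi.cryptsy.com/api.php?method=marketdatav2",
                          timeout: 180)
rescue Net::ReadTimeout
  if times_retried < max_retries
    times_retried += 1
    puts "Request timed out, retry #{times_retried}/#{max_retries}"
    retry
  else
    response = nil # give up; the question wanted nil on failure
  end
end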
You could use rescue to handle the timeout exception:
begin
  HTTParty.get("http://pubapi.cryptsy.com/api.php?method=marketdatav2", timeout: 180)
rescue Net::ReadTimeout
  nil
end
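If you want the question's "return nil on timeout" behavior in a reusable form, wrapping the call in a small helper is one option (a sketch; the method name fetch_market_data is mine):

def fetch_market_data
  HTTParty.get("http://pubapi.cryptsy.com/api.php?method=marketdatav2",
               timeout: 180)
rescue Net::ReadTimeout
  nil # signal the timeout to the caller
end

data = fetch_market_data
puts "request timed out" if data.nil?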
I have a working system that produces errors and surfaces them in Active Admin.
For example, for a specific page of our CMS, Active Admin might execute:
url_must_be_accessible("http://www.exmaple.com", field_url_partner, "URL for the partner")
And this uses the code below to send different types of error notifications to the Active Admin editor:
module UrlHttpResponseHelper
  def url_must_be_accessible(url, target_field, field_name)
    if url
      url_response_code = get_url_http_response(url).code.to_i
      case url_response_code
      when -1
        # DNS issue; website does not exist
        errors.add(target_field,
                   "#{field_name}: DNS Problem -> #{url} website does not exist")
      when 200, 304
        return
      else
        errors.add(target_field,
                   "#{field_name}: #{url} sends #{url_response_code} response code")
      end
    end
  end

  def get_url_http_response(url)
    uri = URI.parse(URI.encode(url))
    Net::HTTP.get_response(uri)
  rescue Errno::ECONNREFUSED, SocketError => e
    OpenStruct.new(code: -1)
  end
end
In local mode, this worked great! But in production we're on Heroku, and when a request on Heroku goes beyond 30 seconds (as happens if you try a link like "http://www.exmaple.com"), the app crashes with an H12 error.
I'd like to add two things to the code above:
- Timeouts: I think I need both read_timeout and open_timeout, and that read_timeout + open_timeout should be less than 30 seconds; with some safety margin, better less than 25 seconds.
- If the request is still running after 25 seconds, give up and drop it before it reaches 30 seconds, and catch this "we dropped the request intentionally because of timeout risk" case by sending a notification to the user. I'd like to use my current system, with something along the lines of:
rescue Errno::ECONNREFUSED, SocketError => e
  OpenStruct.new(code: -7) # = some random number
end
case url_response_code
when -7
  errors.add(target_field,
             "#{field_name}: We tried to reach #{url} but it takes too long and risks crashing the app. Please check the URL and try again.")
How can I create another code like -1, but one that rescues this "timeout"/"dropped request" case that I enforce myself? I've tried, but nothing works; I can't manage to write the code that catches and drops the request once it reaches 25 seconds.
That's not a very beautiful solution (see https://medium.com/@adamhooper/in-ruby-dont-use-timeout-77d9d4e5a001), but I believe you can still use it here, because you only have one thing happening inside the block, as opposed to the example in the link, where multiple actions could cause non-obvious behavior:
require 'timeout'

def get_url_http_response(url)
  uri = URI.parse(URI.encode(url)) # note: URI.encode is deprecated on newer Rubies
  Timeout.timeout(25) { Net::HTTP.get_response(uri) }
rescue Errno::ECONNREFUSED, SocketError => e
  OpenStruct.new(code: -1)
rescue Timeout::Error
  OpenStruct.new(code: -7) # return anything you want here, e.g. the -7 code from the question
end
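Alternatively, if you'd rather avoid Timeout.timeout altogether, Net::HTTP supports native open_timeout and read_timeout options, which matches what the question asks for; a sketch under the 25-second budget (the 5/20 split between the two timeouts is my assumption):

require 'net/http'
require 'ostruct'

def get_url_http_response(url)
  uri = URI.parse(url)
  Net::HTTP.start(uri.host, uri.port,
                  use_ssl: uri.scheme == "https",
                  open_timeout: 5,            # assumed: 5s to establish the connection
                  read_timeout: 20) do |http| # assumed: 20s to read, ~25s total budget
    http.request(Net::HTTP::Get.new(uri))
  end
rescue Errno::ECONNREFUSED, SocketError
  OpenStruct.new(code: -1)
rescue Net::OpenTimeout, Net::ReadTimeout
  OpenStruct.new(code: -7) # the question's "request dropped on purpose" code
end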
I have done this tons of times before when fetching things from a database, etc. For my specific case I am using a third-party gem to connect to a piece of hardware. Anyway, in the case of an error, such as an invalid ID, we obviously want to raise an exception or rescue it, but unfortunately I don't know how to rescue it, because by the time the error is hit it seems to be too late (I think).
Here...
#
# getting params and saving item above...
#
if item.save
  device = RubySpark::Device.new("FAKEUNITID800")
  device.function("req", "ITEM")
  redirect_to controller: 'items', action: 'edit_items'
end
If this were a valid ID, everything would work and it would take you to the /edit page! But with an invalid ID, it just does:
Completed 500 Internal Server Error in 897ms
RubySpark::Device::ApiError - Permission Denied: Invalid Device ID:
I checked out the following tutorials
Rescue StandardError, Not Exception
How to catch 404 and 500 error in Rails?
Dynamic Rails Error Pages
But honestly, they just make me more confused. Maybe I have the wrong approach. I always thought that first you make the request, and then you have a fallback case depending on what status you get (i.e. 200, 500, 404), and you go from there.
Rails returns a 500 Internal Server Error response because an exception was raised that it does not know how to handle. You can't rescue "500 Internal Server Error" in Rails because it is not an exception; it's the framework bailing out of an uncaught exception to avoid data loss or unpredictable behavior.
Fortunately, you don't have to. You can just rescue the RubySpark exception:
begin
  device = RubySpark::Device.new("FAKEUNITID800")
  device.function("req", "ITEM")
rescue RubySpark::Device::ApiError => e
  logger.error(e.message)
end
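In the context of the original action, you'd probably also want to surface the failure to the user instead of only logging it; a sketch (the flash message and the render target are my assumptions):

if item.save
  begin
    device = RubySpark::Device.new("FAKEUNITID800")
    device.function("req", "ITEM")
    redirect_to controller: 'items', action: 'edit_items'
  rescue RubySpark::Device::ApiError => e
    flash.now[:alert] = "Device error: #{e.message}"
    render :edit_items # assumed: re-render the page with the error visible
  end
end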
You can also use rescue_from in a Rails controller, which registers a handler for exceptions raised anywhere in the action:
class FooController < ApplicationController
  rescue_from RubySpark::Device::ApiError, with: :do_something
  # ...
end
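The :do_something handler is just a controller method that receives the exception; a minimal sketch of what it might look like (the body is my assumption):

private

# rescue_from passes the raised exception to the handler
def do_something(exception)
  logger.error(exception.message)
  redirect_to root_path, alert: "Could not reach the device. Please try again."
end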
I am consuming an API with the httparty gem.
I have read this question:
How can I handle errors with HTTParty?
There are two highly upvoted answers on how to handle errors.
The first uses response codes (which does not address connection failures):
response = HTTParty.get('http://twitter.com/statuses/public_timeline.json')
case response.code
when 200
  puts "All good!"
when 404
  puts "O noes not found!"
when 500...600
  puts "ZOMG ERROR #{response.code}"
end
And the second rescues exceptions:
begin
  HTTParty.get('http://google.com')
rescue HTTParty::Error
  # don't do anything / whatever
rescue StandardError
  # rescue instances of StandardError,
  # i.e. Timeout::Error, SocketError etc.
end
So what is the best practice for handling errors? Do I need to handle connection failures?
Right now I am thinking of combining the two approaches like this:
begin
  response = HTTParty.get(url)
  case response.code
  when 200
    # do something
  when 404
    # show error
  end
rescue HTTParty::Error => error
  puts error.inspect
rescue => error
  puts error.inspect
end
Is it a good approach to handle both connection errors and response codes? Or am I being overcautious?
You definitely want to handle connection errors, as they are exceptions outside the normal flow of your code, hence the name. These exceptions happen when connections time out, when hosts are unreachable, and so on, and handling them ensures your code is resilient to these types of failures.
As for response codes, I would highly suggest you handle the edge cases, i.e. error response codes such as pages not found or bad requests, which don't raise exceptions but which your code should be aware of.
In any case it really depends on the code you're writing, and at this point it's a matter of opinion, but in my personal coding preference, handling both exceptions and error codes is not being overcautious but rather preventing future failures.
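One way to package the combined approach into something reusable (a sketch; the method name and the nil-on-failure convention are mine):

require 'httparty'

# Returns the response on 200, nil on any error code or exception.
def safe_get(url)
  response = HTTParty.get(url)
  case response.code
  when 200
    response
  when 404
    puts "Not found: #{url}"
    nil
  else
    puts "Unexpected response #{response.code} for #{url}"
    nil
  end
rescue HTTParty::Error, Timeout::Error, SocketError => error
  puts error.inspect
  nil
end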
I'm calling an external API over HTTP which takes more than 30 seconds to provide a response. When I run it, although the API call completes successfully (the remote service does what it's supposed to do), my Ruby script gets a timeout error before it receives the 'OK' response. I get this error:
/Users/chris/.rvm/rubies/ruby-2.1.0/lib/ruby/2.1.0/net/protocol.rb:158:in `rescue in rbuf_fill': Net::ReadTimeout (Net::ReadTimeout)
Is there a way I can give it more time so it can cleanly handle the response?
http = Net::HTTP.new(#host, #port)
http.read_timeout = 500
Source: http://ruby-doc.org/stdlib-2.1.1/libdoc/net/http/rdoc/Net/HTTP.html#method-i-read_timeout-3D
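Putting it together with an actual request (a sketch; the host, port, and path are placeholders):

require 'net/http'

http = Net::HTTP.new('api.example.com', 80) # placeholder host/port
http.read_timeout = 500                     # seconds to wait for a read
response = http.request(Net::HTTP::Get.new('/slow/endpoint'))
puts response.code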
I'm seriously struggling to solve this one, any help would be appreciated!
I have two Rails apps, let's call them Client and Service; all very simple, with a normal REST interface. Here's the basic scenario:
Client makes a POST /resources.json request to the Service
The Service runs a process which creates the resource and returns an ID to the Client
Again, all very simple, except that the Service's processing is very time-intensive and can take several minutes. When that happens, an EOFError is raised on the Client exactly 60s after the request was made (no matter what ActiveResource::Base.timeout is set to), while the Service correctly processes the request and responds with 200/201. This is what we see in the logs (chronologically):
C 00:00:00: POST /resources.json
S 00:00:00: Received POST /resources.json => resources#create
C 00:01:00: EOFError: end of file reached
/usr/ruby1.8.7/lib/ruby/1.8/net/protocol.rb:135:in `sysread'
/usr/ruby1.8.7/lib/ruby/1.8/net/protocol.rb:135:in `rbuf_fill'
/usr/ruby1.8.7/lib/ruby/1.8/timeout.rb:62:in `timeout'
...
S 00:02:23: Response POST /resources.json, 201, after 143s
Obviously the Service's response never reached the Client. I traced the error down to the socket level and recreated the scenario in a script where I open a TCPSocket and try to retrieve data. Since I don't request anything, I shouldn't get anything back, and the request should time out after 70 seconds (see the full script at the bottom):
Timeout::timeout(70) { TCPSocket.open(domain, 80).sysread(16384) }
These were the results for a few domains:
www.amazon.com => Timeout after 70s
github.com => EOFError after 60s
www.nytimes.com => Timeout after 70s
www.mozilla.org => EOFError after 13s
www.googlelabs.com => Timeout after 70s
maps.google.com => Timeout after 70s
As you can see, some servers allowed us to "wait" for the full 70 seconds, while others terminated our connection, raising EOFErrors. When we did this test against our service, we (expectedly) got an EOFError after 60 seconds.
Does anyone know why this happens? Is there any way to prevent these errors or extend the server-side timeout? Since our Service keeps working even after the socket is closed, I assume the connection must be terminated at the proxy level?
Every hint would be greatly appreciated!
PS: The full script:
require 'socket'
require 'benchmark'
require 'timeout'
def test_socket(domain)
  puts "Connecting to #{domain}"
  message = nil
  time = Benchmark.realtime do
    begin
      Timeout::timeout(70) { TCPSocket.open(domain, 80).sysread(16384) }
      message = "Successfully received data" # Should never happen
    rescue Timeout::Error
      # Rescue this first: on modern Rubies Timeout::Error is a StandardError,
      # so the bare rescue below would otherwise swallow it
      message = "Controlled client-side timeout"
    rescue => e
      message = "Server terminated connection: #{e.class} #{e.message}"
    end
  end
  puts "  #{message} after #{time.round}s"
end
test_socket 'www.amazon.com'
test_socket 'github.com'
test_socket 'www.nytimes.com'
test_socket 'www.mozilla.org'
test_socket 'www.googlelabs.com'
test_socket 'maps.google.com'
I know this is nearly a year old, but in case anyone else finds this, I wanted to add a possible culprit.
Amazon's ELB will terminate idle connections at 60 seconds, so if you are using EC2 behind ELB, then ELB could be the server side problem.
The only "documentation" I could find is https://forums.aws.amazon.com/thread.jspa?threadID=33427&start=50&tstart=50, but it's better than nothing.
Each server decides when to close the connection; it depends on the server-side software and its settings. You can't control that.
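What you can control is the Client's reaction. Since the thread shows the EOFError surfacing from the ActiveResource call, one option is to rescue it alongside timeouts; a sketch (Resource is a placeholder for your ActiveResource class, and the poll-instead-of-retry advice is my reading of the situation, since the Service did complete the request):

begin
  resource = Resource.create(attributes)
rescue EOFError, Timeout::Error
  # The Service may still finish successfully (it logged a 201 above),
  # so a blind retry could create a duplicate; poll for the resource
  # instead of resubmitting the POST.
  resource = nil
end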