How to set a default page load timeout in Watir?

I have a page that I'd like to raise an error on if it loads too slowly.
Is there a method for Watir analogous to Watir-Webdriver's:
client = Selenium::WebDriver::Remote::Http::Default.new
client.timeout = 10
@browser = Watir::Browser.new :firefox, http_client: client

Watir-Classic does not have an API for controlling how long to wait for a page to load.
When clicking a link or using the goto method, the Browser#wait method is called. This blocks execution until the page is loaded, and it is hard-coded to time out if the page does not load within 5 minutes:
def wait(no_sleep=false)
  @xml_parser_doc = nil
  @down_load_time = 0.0
  interval = 0.05
  start_load_time = ::Time.now
  Timeout::timeout(5*60) do
    ...
  end
Solution 1 - Use Timeout
If you only need to change timeout for a small number of scenarios, the simplest option may be to use the Timeout library.
For example, www.cnn.com takes 9 seconds to load on my computer. However, to only wait up to 5 seconds, you can wrap the goto (or click) method in an extra timeout:
Timeout::timeout(5) do
  browser.goto 'www.cnn.com'
end
#=> execution expired (Timeout::Error)
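If you use this pattern in several places, you can wrap it in a small helper that converts the timeout into a more descriptive error. A minimal sketch; the goto_with_timeout name and the 5 second default are my own choices, not part of Watir:
require 'timeout'

# Hypothetical helper - navigates and raises if the page takes too long.
# Watir-Classic does not provide this method itself.
def goto_with_timeout(browser, url, seconds = 5)
  Timeout::timeout(seconds) do
    browser.goto url
  end
rescue Timeout::Error
  raise "#{url} did not load within #{seconds} seconds"
end

goto_with_timeout(browser, 'www.cnn.com')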
Solution 2 - Monkey patch Browser#wait
If you want the change to apply to all pages, you can override the Browser#wait method to use a different timeout. For example, reducing it to 5 seconds:
require 'watir-classic'

module Watir
  class Browser
    def wait(no_sleep=false)
      @xml_parser_doc = nil
      @down_load_time = 0.0
      interval = 0.05
      start_load_time = ::Time.now

      # The timeout can be changed here (it is in seconds)
      Timeout::timeout(5) do
        begin
          while @ie.busy
            sleep interval
          end
          until READYSTATES.has_value?(@ie.readyState)
            sleep interval
          end
          until @ie.document
            sleep interval
          end

          documents_to_wait_for = [@ie.document]
        rescue WIN32OLERuntimeError # IE window must have been closed
          @down_load_time = ::Time.now - start_load_time
          return @down_load_time
        end

        while doc = documents_to_wait_for.shift
          begin
            until READYSTATES.has_key?(doc.readyState.to_sym)
              sleep interval
            end
            @url_list << doc.location.href unless @url_list.include?(doc.location.href)
            doc.frames.length.times do |n|
              begin
                documents_to_wait_for << doc.frames[n.to_s].document
              rescue WIN32OLERuntimeError, NoMethodError
              end
            end
          rescue WIN32OLERuntimeError
          end
        end
      end

      @down_load_time = ::Time.now - start_load_time
      run_error_checks
      sleep @pause_after_wait unless no_sleep
      @down_load_time
    end
  end
end
browser.goto 'www.cnn.com'
#=> execution expired (Timeout::Error)
You could put the timeout value into a variable so that it can be dynamically changed.
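For example, a minimal sketch of that idea, assuming you are free to add a class-level accessor (the page_load_timeout name is my own and not part of watir-classic):
module Watir
  class Browser
    # Hypothetical class-level setting; not part of watir-classic.
    class << self
      attr_accessor :page_load_timeout
    end
    self.page_load_timeout = 5 * 60 # keep the original 5 minute default

    def wait(no_sleep=false)
      # ... same body as above, with the hard-coded value replaced by:
      Timeout::timeout(self.class.page_load_timeout) do
        # ...
      end
      # ...
    end
  end
end

Watir::Browser.page_load_timeout = 5 # now applies to every page load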

Related

Rake task errors with: JSON::ParserError: 765: unexpected token at '' but works fine in rails console

I have a rake task which loops over pages of a card game database and checks for the cards in each deck. Until recently this was working fine (it has checked 34,000 pages of 25 decks each with no problem), but recently it has stopped working when I run the rake task, and I get the error:
JSON::ParserError: 765: unexpected token at ''
In order to debug this, I have tried running each line of the GET request and JSON parse manually in the Rails console, and it works fine every time. Weirder still, I have installed pry, and it works every time I go through the JSON parse manually with pry (it takes ages, though).
Here is the rake task:
desc "Create Cards"
require 'net/http'
require 'json'
task :create_cards => :environment do
# Get the total number of pages of decks
uri = URI("https://www.keyforgegame.com/api/decks/")
response = Net::HTTP.get(URI(uri))
json = JSON.parse(response)
deck_count = json["count"]
# Set variables
page_number = 1
page_size = 25 # 25 is the max page size
page_limit = deck_count / 25
card_list = Card.where(is_maverick: false)
# Updates Card List (non-mavericks) - there are 740 cards so we stop when we have that many
# example uri: https://www.keyforgegame.com/api/decks/?page=1&page_size=30&search=&links=cards
puts "Updating Card List..."
until page_number > page_limit || Card.where(is_maverick: false).length == 740
uri = URI("https://www.keyforgegame.com/api/decks/?page=#{page_number}&page_size=#{page_size}&search=&links=cards")
response = Net::HTTP.get(URI(uri))
json = JSON.parse(response) # task errors here!
cards = json["_linked"]["cards"]
cards.each do |card|
unless Card.exists?(:card_id => card["id"])
Card.create({
card_id: card["id"],
amber: card["amber"],
card_number: card["card_number"],
card_text: card["card_text"],
card_title: card["card_title"],
card_type: card["card_type"],
expansion: card["expansion"],
flavor_text: card["flavor_text"],
front_image: card["front_image"],
house: card["house"],
is_maverick: card["is_maverick"],
power: card["power"],
rarity: card["rarity"],
traits: card["traits"],
})
end
end
puts "#{page_number}/#{page_limit} - Cards: #{Card.where(is_maverick: false).length}"
page_number = (page_number + 1)
end
end
The first json parse where it gets the total number of pages of decks works okay. It's the json parse in the until block that is failing (I've marked the line with a comment to that effect).
As I say, if I try this in the console it works fine and I can parse the json without error, literally copying and pasting the lines from the file into the rails console.
Since you're looping over an API, it's possible there are rate limits. Public APIs normally have per-second rate limits. You could try adding a sleep to slow down your requests; I'm not sure how many you're making per second. I tested with a simple loop, and it looks like the response returns an empty string if you hit the API too fast.
url = 'https://www.keyforgegame.com/api/decks/?page=1&page_size=30&search=&links=cards'
uri = URI(url)
i = 1
1000.times do
  puts i.to_s
  i += 1
  response = Net::HTTP.get(URI(uri))
  begin
    j = JSON.parse(response)
  rescue
    puts response
    #=> ""
  end
end
When I played with this, the loop started returning an empty string after the 3rd request. I got it to work with sleep 5 inside each iteration, so you can probably add that as the first line inside your loop. You should also add error handling to your rake task in case you encounter any other API errors.
So for now you can probably just do this:
until page_number > page_limit || Card.where(is_maverick: false).length == 740
  sleep 5
  # rest of your loop code, maybe add a rescue like I've shown
end
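If you also want the task to survive the occasional empty response rather than abort, you could combine the sleep with a rescue and retry. A sketch; the retry count and back-off delay are arbitrary choices:
until page_number > page_limit || Card.where(is_maverick: false).length == 740
  sleep 5
  uri = URI("https://www.keyforgegame.com/api/decks/?page=#{page_number}&page_size=#{page_size}&search=&links=cards")
  attempts = 0
  begin
    response = Net::HTTP.get(uri)
    json = JSON.parse(response)
  rescue JSON::ParserError
    attempts += 1
    raise if attempts >= 3 # give up after 3 failed parses
    sleep 10               # back off a little longer, then retry
    retry
  end
  # ... create the cards as before ...
  page_number += 1
end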

Single spec duration

Is there a way in RSpec to show every single test duration and not just the total suite duration?
Now we have
Finished in 7 minutes 31 seconds (files took 4.71 seconds to load)
but I'd like to have something like
User accesses home and
he can sign up (finished in 1.30 seconds)
he can visit profile (finished in 3 seconds)
.
.
.
Finished in 7 minutes 31 seconds (files took 4.71 seconds to load)
You can use rspec --profile N, which would show you the top N slowest examples.
For a quick solution, see @maximf's answer. For an alternative, you could write your own RSpec formatter, which would give you greater control over what you are measuring.
For example, extending RSpec's base text formatter:
RSpec::Support.require_rspec_core "formatters/base_text_formatter"

module RSpec::Core::Formatters
  class TimeFormatter < BaseTextFormatter
    Formatters.register self, :example_started, :example_passed

    attr_accessor :example_start, :longest_example, :longest_time

    def initialize(output)
      @longest_time = 0
      super(output)
    end

    def example_started(example)
      @example_start = Time.now
      super(example)
    end

    def example_passed(example)
      time_taken = Time.now - @example_start
      output.puts "Finished #{example.example.full_description} and took #{Helpers.format_duration(time_taken)}"
      if time_taken > @longest_time
        @longest_example = example
        @longest_time = time_taken
      end
      super(example)
    end

    def dump_summary(summary)
      super(summary)
      output.puts
      output.puts "The longest example was #{@longest_example.example.full_description} at #{Helpers.format_duration(@longest_time)}"
    end
  end
end
Note that this only logs times for passed examples, but you could add an example_failed method to do something similar, as sketched below. It also only works with RSpec 3. This is based on my work on my own formatter: https://github.com/yule/dots-formatter
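For completeness, a minimal sketch of such an example_failed hook; it would also need to be added to the Formatters.register call above:
def example_failed(example)
  time_taken = Time.now - @example_start
  output.puts "Failed #{example.example.full_description} after #{Helpers.format_duration(time_taken)}"
  super(example)
end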
Instead of passing rspec --profile N every time we run the specs (as @maximf said), we can add it to our RSpec configuration:
RSpec.configure do |config|
  config.profile_examples = 10
end
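If you want a duration printed after every example, rather than just the N slowest, another option is a standard around hook in the same configuration block. A sketch, not a built-in RSpec setting:
RSpec.configure do |config|
  # Print each example's description and wall-clock duration as it finishes.
  config.around(:each) do |example|
    started = Time.now
    example.run
    puts "#{example.metadata[:full_description]} (finished in #{(Time.now - started).round(2)} seconds)"
  end
end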

rescue from connection reset by peer error and retry

I am hitting an external service which does some password encryption and returns a couple of things.
Now, if I want to generate 50 passwords, we run this function in a loop 50 times:
def self.encrypt_password(password)
  retries = 2
  uri = URI
  params = Base64.encode64(password)
  uri.query = URI.encode("Source=#{params}")
  begin
    retries.times.each do
      res = Net::HTTP.get_response(uri)
      if res.is_a?(Net::HTTPSuccess)
        obj = JSON.parse(res.body)
        pw = Base64.decode64(obj["Data"])
        ps = Base64.decode64(obj["Key"])
        pws = Iconv.iconv('ascii', 'utf-16', ps)
        return pw, pws[0]
      end
    end
  rescue
    raise "Error generating pws: #{$!}"
  end
end
But the problem I am encountering is that occasionally the service just returns the following error in the middle of the loop and exits:
"Connection reset by peer"
My question is: how do I rescue from that error and retry a few times without breaking the flow of the program?
Or can someone recommend an alternative solution to my problem?
NOTE: I am using Ruby on Rails 2 and Ruby 1.8.x.
Ruby has the retry keyword, which can be used in a rescue clause.
It re-runs the begin block the rescue is attached to, so you can use a counter to limit the number of retries. (The counter must be initialized outside that begin block, otherwise each retry would reset it.)
def self.encrypt_password(password)
  retries = 2
  uri = URI
  params = Base64.encode64(password)
  uri.query = URI.encode("Source=#{params}")
  begin
    res = Net::HTTP.get_response(uri)
    if res.is_a?(Net::HTTPSuccess)
      obj = JSON.parse(res.body)
      pw = Base64.decode64(obj["Data"])
      ps = Base64.decode64(obj["Key"])
      pws = Iconv.iconv('ascii', 'utf-16', ps)
      return pw, pws[0]
    end
  rescue SomeExceptionType # e.g. Errno::ECONNRESET
    if retries > 0
      retries -= 1
      retry # re-runs only the begin block, so the counter keeps its value
    else
      raise "Error generating pws: #{$!}"
    end
  end
end
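"Connection reset by peer" typically surfaces in Ruby as Errno::ECONNRESET, so a concrete rescue clause for this case might look like the sketch below; adjust the exception list to whatever your service actually raises:
begin
  res = Net::HTTP.get_response(uri)
rescue Errno::ECONNRESET, Errno::EPIPE, Timeout::Error => e
  retries -= 1
  retry if retries >= 0
  raise "Error generating pws: #{e.message}"
end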

Can I automatically re-run a method if a timeout error occurs?

We have an application that makes hundreds of API calls to external services. Sometimes a call takes too long to respond.
I am using the rack-timeout gem to find time-consuming requests, so a Timeout::Error will be thrown whenever a request takes too long to respond. I am rescuing this error and doing a retry in the method:
def new
  @make_one_external_service_call = exteral_api_fetch1(params[:id])
  @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)

  # Below code will be repeated in every method
  tries = 0
rescue Timeout::Error => e
  tries += 1
  retry if tries <= 3
  logger.error e.message
end
This lets the method fully re-run. It is very verbose, and I am repeating it in every method.
Is there any way to make a method retry itself automatically three times when a Timeout::Error occurs?
I have a little module for that:
# in lib/retryable.rb
module Retryable
  # Options:
  # * :tries - Number of tries to perform. Defaults to 1. If you want to retry once you must set tries to 2.
  # * :on - The Exception on which a retry will be performed. Defaults to Exception, which retries on any Exception.
  # * :log - The log level to log the exception. Defaults to nil.
  #
  # If you work with something like ActiveRecord#find_or_create_by_foo, remember to put that call in an uncached { } block. That
  # forces subsequent finds to hit the database again.
  #
  # Example
  # =======
  #   retryable(:tries => 2, :on => OpenURI::HTTPError) do
  #     # your code here
  #   end
  #
  def retryable(options = {}, &block)
    opts = { :tries => 1, :on => Exception }.merge(options)
    retry_exception, retries = opts[:on], opts[:tries]
    begin
      return yield
    rescue retry_exception => e
      logger.send(opts[:log], e.message) if opts[:log]
      retry if (retries -= 1) > 0
    end
    yield
  end
end
and then in your model:
extend Retryable

def new
  retryable(:tries => 3, :on => Timeout::Error, :log => :error) do
    @make_one_external_service_call = exteral_api_fetch1(params[:id])
    @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)
  end
  ...
end
You could do something like this:
module Foo
  def self.retryable(options = {})
    retry_times = options[:times] || 10
    try_exception = options[:on] || Exception
    yield if block_given?
  rescue *try_exception => e
    retry if (retry_times -= 1) > 0
    raise e
  end
end

Foo.retryable(on: Timeout::Error, times: 5) do
  # your code here
end
You can even pass multiple exceptions to "catch":
Foo.retryable(on: [Timeout::Error, StandardError]) do
  # your code here
end
I think what you need is the retryable gem.
With the gem, you can write your method like below
def new
  retryable :on => Timeout::Error, :times => 3 do
    @make_one_external_service_call = exteral_api_fetch1(params[:id])
    @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)
  end
end
Please read the documentation for more information on how to use the gem and the other options it provides.
You could just write a helper method for that:
class TimeoutHelper
  # Defined on the class so it can be called as TimeoutHelper.call_and_retry
  def self.call_and_retry(tries = 3)
    yield
  rescue Timeout::Error => e
    tries -= 1
    retry if tries > 0
    Rails.logger.error e.message
  end
end
(completely untested) and call it via
TimeoutHelper.call_and_retry { [your code] }

Customize IO stream timeout value in Ruby / Rails

In my Rails app, I use open-uri to open an external file which may take up to 10 minutes to load.
Example:
dl_stream = open('http://wetten.overheid.nl/xml.php?regelingID=bwbr0020368')
Now, after 1 minute, Ruby will throw a timeout error. I gleaned this from the source code, in net/protocol.rb:
@read_timeout = 60

def rbuf_fill
  begin
    @rbuf << @io.read_nonblock(BUFSIZE)
  rescue IO::WaitReadable
    if IO.select([@io], nil, nil, @read_timeout)
      retry
    else
      raise Timeout::Error
    end
  rescue IO::WaitWritable
    # OpenSSL::Buffering#read_nonblock may fail with IO::WaitWritable.
    # http://www.openssl.org/support/faq.html#PROG10
    if IO.select(nil, [@io], nil, @read_timeout)
      retry
    else
      raise Timeout::Error
    end
  end
end
I'm guessing I can set this timeout value to something more amenable to my situation, like 15 minutes, in my app settings, but how and where?
You can add the timeout in seconds to the call to open with the :read_timeout option:
# timeout after 10 minutes
open('http://example.com', :read_timeout => 600).read
All the options are documented in the OpenURI documentation.
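Putting it together for the original example, a sketch that allows 15 minutes and logs a failure; note that on modern Rubies the read timeout surfaces as Net::ReadTimeout rather than Timeout::Error:
require 'open-uri'

begin
  # Allow up to 15 minutes for the download
  dl_stream = open('http://wetten.overheid.nl/xml.php?regelingID=bwbr0020368',
                   :read_timeout => 900)
  data = dl_stream.read
rescue Net::ReadTimeout, Timeout::Error => e
  Rails.logger.error "Download timed out: #{e.message}"
  raise
end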
