Rerun Cucumber step only in case of specific failure - ruby-on-rails

When running Cucumber with Selenium on CircleCI, tests sometimes fail because of CircleCI's performance. A common failure is a Net::ReadTimeout error, which never seems to happen locally. I want to rescue the affected steps from that error and retry them, but I do not want to rerun every failed test.
I could build a rescue into the specific step(s) that seem to trigger this error, but ideally I would be able to give Cucumber a list of errors to rescue once or twice, rerunning the step, before finally letting the error propagate.
Something like:
# support/env.rb
Cucumber.retry_errors = {
  # error => number of retries
  "Net::ReadTimeout" => 2
}
Does anything like that exist?
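For context, the rescue-in-the-step fallback I mention would look roughly like this (just a sketch; the helper name, step name and retry count are mine, and it assumes net/http is already loaded so Net::ReadTimeout is defined):
# features/support/retry_helper.rb (illustrative path)
module ReadTimeoutRetry
  # Retry the block when Net::ReadTimeout is raised, up to `attempts`
  # extra times, then let the error propagate as a normal failure.
  def with_read_timeout_retry(attempts = 2)
    tries = 0
    begin
      yield
    rescue Net::ReadTimeout
      tries += 1
      retry if tries <= attempts
      raise
    end
  end
end
World(ReadTimeoutRetry)

# in a step definition:
When(/^I open the dashboard$/) do
  with_read_timeout_retry { visit '/dashboard' }
end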

I would be surprised if you found something like you are looking for in Cucumber.
Re-running a failing step just to make really sure that it is an actual failure and not just a random network glitch is, from my perspective, solving the wrong issue.
My approach would be to see whether the verification you are looking for can be done without the network. I might also consider tooling other than Cucumber if I really had to re-run a test a few times to make sure an error really is an error. That, however, leads down another rabbit hole: how many times should you run, and what is the threshold? Should three passes out of five executions count as a passing test? It gets ugly very fast in my eyes.

It looks like the issue is that Selenium takes too long to compile the assets on the first test. Subsequent tests use the compiled assets and do not have this problem. After viewing this GitHub issue I upped the timeout limit for Selenium.
Capybara.register_driver :chrome do |app|
  http_client = Selenium::WebDriver::Remote::Http::Default.new
  http_client.timeout = 120 # default is 60 seconds
  Capybara::Selenium::Driver.new(app, browser: :chrome, http_client: http_client)
end
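Registering the driver isn't enough on its own; something still has to select it. For example (assuming Chrome should be your JavaScript driver):
Capybara.javascript_driver = :chrome
# or, if every scenario should run through Selenium:
# Capybara.default_driver = :chrome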

I know this doesn't retry only for a specific exception class, but there is now a clean way to do retries in Cucumber, and it's worth mentioning here since this result comes up in Google when searching for "rerun cucumber".
In your cucumber.yml file, you can do something like this now:
<%
std_opts = "--format #{ENV['CUCUMBER_FORMAT'] || 'pretty'} --strict --tags 'not #wip'"
std_opts_with_retry = "#{std_opts} --no-strict-flaky --retry 3"
%>
default: <%= std_opts_with_retry %> features
There was a big philosophical debate about whether a "flaky" test should be considered a failure. It was agreed that if --strict is passed, the default should be that "flaky" tests (i.e. tests that fail on the first run and pass on a following run) fail the run. So to keep flaky tests from "failing" your test run, pass the additional --no-strict-flaky along with the --retry 3 option; tests that sometimes take a variable amount of time on your CI platform will then no longer require a rebuild of the entire commit.
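The profile above is just a bundle of command-line options, so for a one-off run you can pass the same flags directly:
cucumber --retry 3 --no-strict-flaky features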
One thing to note when doing this: in general, I'd advise bumping your timeout back down to a reasonable limit where the majority of tests can pass without needing a long wait, although I understand in this case it's to accommodate longer compile times.

Related

Debugging Rspec Postgres lockups

I am trying to test an app that uses gem devise_token_auth, which basically includes a couple extra DB read/writes on almost every request (to verify and update user access tokens).
Everything works fine, except when testing a controller action that involves several additional DB reads/writes. In those cases, the terminal locks up and I'm forced to kill the Ruby process via Activity Monitor.
Sometimes I get error messages like this:
ruby /Users/evan/.rvm/gems/ruby-2.1.1/bin/rspec spec/controllers/api/v1/messages_controller_spec.rb(1245,0x7fff792bf310) malloc: *** error for object 0x7ff15fb73c00: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Abort trap: 6
I have no idea how to interpret that. I'm 90% sure the problem is due to this gem and the extra DB activity it causes on each request, because when I revert to my previous, less intensive auth, all the issues go away. I've also gotten things under control by giving Postgres some extra time on the offending tests:
after :each do
  sleep 2
end
This works fine for all cases except one, which also needs a pause before the expect; otherwise it throws this error:
Failure/Error: expect(@user1.received_messages.first.read?).to eq true
ActiveRecord::StatementInvalid:
  PG::UnableToSend: another command is already in progress
  : SELECT "messages".* FROM "messages" WHERE "messages"."receiver_id" = $1 ORDER BY "messages"."id" ASC LIMIT 1
which, to me, points to the DB issue again.
Is there anything else I could be doing to track down/control these errors? Any rspec settings I should look into?
If you are running parallel RSpec tasks, that could be triggering this. When we've run into issues like this, we have forced those tests to run in a single, non-parallel instance of RSpec in our CI using tags.
Try something like this:
context 'when both records get updated in one job', :non_parallel do
  it { is_expected.to eq 2 }
end
And then invoke rspec singularly on the non_parallel tag:
rspec --tag non_parallel
The bulk of your tests (not tagged with non_parallel) can still be run in parallel in your CI solution (e.g. Jenkins) for performance.
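To keep those examples out of the parallel run itself, you can exclude the tag there with RSpec's tag negation (a sketch; adapt to however your CI invokes RSpec):
# parallel CI job: everything except the serialized examples
rspec --tag ~non_parallel

# separate, single-threaded CI job
rspec --tag non_parallel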
Of course, be careful applying this band-aid. It is always better to identify what is not race-safe in your code, since that race could happen in the real world.

Cucumber After hook is being executed before scenario ends

I have an After hook in hooks.rb that deletes users created in the last scenario.
I started to notice that when the tests run at a specific time of day, this hook gets executed in the middle of a scenario.
A method executes up to a certain line, then the hook runs just before an assert in that method, and the assert fails because of it.
The tests are run from a batch file ("ruby file_name.rb").
Does anyone have an idea why this might happen or how to solve it?
Thanks!
Don't you run your tests from the command line, like the following?
$ cucumber
I would suggest using the debugger gem. You could add a debugger statement just before the point where you think it is failing and then use the debugger commands:
https://github.com/cldwalker/debugger
Perhaps related to:
https://github.com/cucumber/cucumber/issues/52
Issue 52 is mostly fixed on master, but I think a few remaining broken tests need to be fixed before a release.
Regardless of that, you might instead try using the database_cleaner gem for this purpose in general. We use a clean database before every scenario to ensure discrete tests that cannot produce false positive/negative results due to the leftovers of other tests. We use the following:
begin
  # start off the entire run with a full truncation
  DatabaseCleaner.clean_with :truncation
  # continue with the :transaction strategy to be faster while running tests
  DatabaseCleaner.strategy = :transaction
rescue NameError
  raise "You need to add database_cleaner to your Gemfile (in the :test group) if you wish to use it."
end
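If your support files don't already wire the cleaner into the run, the strategy above still needs per-scenario start/clean hooks; a minimal sketch (the file name is just a convention):
# features/support/database_cleaner.rb (illustrative path)
Before do
  DatabaseCleaner.start
end

After do
  DatabaseCleaner.clean
end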
And we load our test seeds before each run:
Before do |scenario|
  load Rails.root.join('db/seeds.rb')
end
Note that our seeds.rb checks which environment it is running in to keep it short. A big seeds file run in this manner would significantly increase test run times, so be careful.
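For illustration, an environment-guarded seeds.rb might look roughly like this; the model and attributes are placeholders, not from the original answer:
# db/seeds.rb (sketch)
if Rails.env.test?
  # seed only the handful of records the scenarios depend on
  AdminUser.create!(email: 'admin@example.com') unless AdminUser.exists?(email: 'admin@example.com')
else
  # the full development/production seed data lives here
end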

Cucumber step to pause and hand control over to the user

I'm having trouble debugging cucumber steps due to unique conditions of the testing environment. I wish there was a step that could pause a selenium test and let me take over.
E.g.
Scenario: I want to take over here
  Given a bunch of steps have already run
  When I'm stuck on an error
  Then I want to take control of the mouse
At that point I could interact with the application exactly as if I had done all the previous steps myself after running rails server -e test
Does such a step exist, or is there a way to make it happen?
You can integrate ruby-debug into your Cucumber tests. Nathaniel Ritmeyer has directions here and here which worked for me. You essentially require ruby-debug, start the debugger in your environment file, and then put "breakpoint" wherever you want to see what's going on. You can both interact with the browser/application and see the values of your Ruby variables in the test. (I'm not sure whether it'll let you see the variables in your Rails application itself - I'm not testing against a Rails app to check that.)
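In practice that setup boils down to something like the sketch below (assuming the classic ruby-debug gem; on newer Rubies byebug or pry play the same role, and the step name is just an example):
# features/support/env.rb
require 'ruby-debug'

# features/step_definitions/debug_steps.rb
Then(/^I debug$/) do
  debugger # execution stops here; the Selenium browser stays open while you poke around
end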
I came up with the idea of dumping the database. It doesn't let you continue from the same page, but if you have the app running during the test you can immediately act on the current state of things in another browser (not the one controlled by Selenium).
Here is the step:
When /I want to take control/i do
  exec "mysqldump -u root --password=* test > #{Rails.root}/support/snapshot.sql"
end
Because the command is run with exec (which replaces the test process), DatabaseCleaner never gets a chance to truncate the tables, so it's actually irrelevant that the command is a database dump. You don't have to import the SQL to use the app in its current state, but it's there if you need it.
My teammate has done this using Selenium, Firebug and a hook (@selenium_with_firebug).
Everything he learned came from this blog post:
http://www.allenwei.cn/tips-add-firebug-extension-to-capybara/
Add the step
And show me the page
where you want to interact with it:
Scenario: I want to take over here
  Given a bunch of steps have already run
  When I'm stuck on an error
  Then show me the page
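If you don't need Firebug specifically, a plain Capybara definition of that step is a one-liner (a sketch; it relies on Capybara's save_and_open_page, which needs the launchy gem to open the browser):
# features/step_definitions/debug_steps.rb
Then(/^show me the page$/) do
  save_and_open_page # dumps the current HTML to a temp file and opens it in your browser
end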
use http://www.natontesting.com/2009/11/09/debugging-cucumber-tests-with-ruby-debug/
Big thank you to Reed G. Law for the idea of dumping the database. Loading the dump into development then allowed me to determine exactly why my Cucumber feature was not affecting database state as I had expected. Here's my minor tweak to his suggestion:
When /Dump the database/i do
  `MYSQL_PWD=password mysqldump -u root my_test > #{Rails.root}/snapshot.sql`
  # To replicate state in development run:
  #   MYSQL_PWD=password mysql -u root my_development < snapshot.sql
end
You can also use the following in features/support/debugging.rb to let you step through a feature one step at a time:
# `STEP=1 cucumber` to pause after each step
AfterStep do |scenario|
  next unless ENV['STEP']
  unless defined?(@counter)
    puts "Stepping through #{scenario.title}"
    @counter = 0
  end
  @counter += 1
  print "At step ##{@counter} of #{scenario.steps.count}. Press Return to"\
        ' execute...'
  STDIN.getc
end

How to skip certain tests with Test::Unit

In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, which is partly why I have test code that interacts with their test servers just to see that everything works as expected. However, accessing these servers is quite slow, so I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, 'BACKEND_TEST', and a conditional statement that checks whether the variable is set for each test I would like to skip. But sometimes I would like to skip all the tests in a test file without having to add an extra line to the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the other answer (about test-unit 2.x) include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end

And:

def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
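Since the question also asks how to skip every test in a file without touching each one, note that omit can be called from setup; when setup omits, the test body never runs. A sketch using the BACKEND_TEST variable from the question (class and test names are illustrative):
require 'test/unit'

class BackendApiTest < Test::Unit::TestCase
  def setup
    # omit every test in this case unless the environment variable is set
    omit('Set BACKEND_TEST=1 to run the slow backend tests') unless ENV['BACKEND_TEST']
  end

  def test_ping
    # talks to the real (slow) test server
  end
end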
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.

Why do Test::Unit testcases start up so slowly?

>rails -v
Rails 1.2.6
>ruby -v
ruby 1.8.6 (2007-03-13 patchlevel 0) [i386-mswin32]
When I run a test fixture (one that tests a Rails model class) like this, it takes 20-30 seconds before it starts executing the tests (i.e. before it shows "Loaded suite..."). What gives?
>ruby test\unit\category_test.rb
require File.dirname(__FILE__) + '/../test_helper'

class CategoryTest < Test::Unit::TestCase
  def setup
    Category.delete_all
  end

  def test_create
    obCategoryEntry = Category.new({:name=>'Apparel'})
    assert obCategoryEntry.save, obCategoryEntry.errors.full_messages.join(', ')
    assert_equal 1, Category.count
    assert_not_nil Category.find(:all, :conditions=>"name='Apparel'")
  end

  #.. 1 more test here
end
This is a Rails app using a MySQL DB with no fixtures. This time it clocked 30+ seconds to start up.
Take a look at this Rails Test Server.
A quote from the author:
"Every time you run a test in a Rails
application, the whole environment is
loaded, including libraries that don’t
change between two consecutive runs.
That can take a considerable amount of
time. What if we could load the
environment once, and only reload the
changing parts before each run?
Introducing RailsTestServing.
With RailsTestServing, the run time of
a single test file has gone from 8
seconds down to .2 of a second on my
computer. That’s a x40 speed
improvement. Now, I don’t think twice
before hitting ⌘R in TextMate. It
feels liberating!"
(This was featured on the Rails Envy Podcast this past week, which is where I found it.)
When starting any tests, Rails first loads any fixtures you have (in test/fixtures) and recreates the database with them.
20-30 seconds sounds very slow though. Do you have a lot of fixtures that need to be loaded before your tests run, or is your database running slow?
Ruby's gem tool follows a path discovery algorithm which, apparently, is not friendly to Windows (which your ruby -v shows you are on).
You can get a clear picture if you trace, for example, a Rails application loading with ProcMon. Every (and I do mean every) require starts a scan over all directories in Ruby's load path plus all gem directories. A typical require takes 20 ms on an average machine, and since Rails performs hundreds of requires, those 20 ms easily add up to seconds every time you launch the Rails environment. Add the time needed to initialize the fixtures in the database and you get a better idea of why it takes so long just to begin running the test cases.
Perhaps because of differences in file-system architecture and implementation (path caching, etc.), this is less of a problem on Linux than on Windows. I don't know who should be blamed, though. The NTFS file system could be improved with a better path-caching implementation, but the gem tool could just as well implement the caching itself so that its performance would not depend so heavily on the platform.
It seems like Test::Unit is the simplest, but also one of the slowest, ways to do unit testing with Ruby. One of the alternatives is ZenTest.
Test::Unit startup isn't particularly slow, and nowhere near 20 seconds:
(11:39) ~/tmp $ cat test_unit.rb
require 'test/unit'
class MyTest < Test::Unit::TestCase
  def test_test
    assert_equal("this", "that")
  end
end
(11:39) ~/tmp $ time ruby test_unit.rb
Loaded suite test_unit
Started
F
Finished in 0.007338 seconds.

  1) Failure:
test_test(MyTest) [test_unit.rb:4]:
<"this"> expected but was
<"that">.

1 tests, 1 assertions, 1 failures, 0 errors

real    0m0.041s
user    0m0.027s
sys     0m0.012s
It's probably something you're doing in your tests. Are you doing anything complicated? Setting up a database? Retrieving something from the internet?
Complete shot in the dark, but most of the time when I see long startup times, it is due to some sort of reverse DNS lookup happening during TCP socket communication somewhere along the way.
Try adding:
require 'socket'
Socket.do_not_reverse_lookup = true
at the top of your test file after your other require line.
What does your test_helper.rb look like? Are you using instantiated fixtures?
self.use_instantiated_fixtures = true
[edit] If this is set to true, try setting it to false.
