Get extra info within after(:each) in RSpec - ruby-on-rails

I'm using RSpec and I want to get the result (passed or not), class name, test name (or description), and error message (if present) of each test once it has finished.
So here is some pretty simple code:
describe MyClass do
  after :each do
    # how do I know a test passed, its name, class name and error message?
  end

  it "should be true" do
    5.should be 5
  end

  it "should be false" do
    5.should be 6
  end
end
Your suggestions?

There are a few formatters for test output that can show you which tests passed/failed/pending, and it sounds like you are interested in the one called documentation.
In order to make this formatter available for all your RSpec runs, create a file called .rspec with this content:
--color
--format documentation
(--color isn't needed for the formatter itself; I just like it for more easily read output.)
This means that when you run a test suite like you have above you will see:
MyClass
should be true
should be false (Failed - 1)
# Individual output for each error would show up down here
Reference: https://www.relishapp.com/rspec/rspec-core/v/2-11/docs/formatters/custom-formatters
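(If you only want this for a single run rather than project-wide, you can pass the same option on the command line, e.g. rspec --format documentation, or the short form rspec -f d.)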

You can get extra information from formatters, but since after hooks are potential points of failure, they don't expose failure information themselves.
See https://www.relishapp.com/rspec/rspec-core/v/2-11/docs/formatters/custom-formatters and https://github.com/rspec/rspec-core/blob/master/lib/rspec/core/formatters/base_formatter.rb
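To illustrate, here is a minimal custom formatter sketch against the RSpec 2.x API those links describe (the exact hook names are in BaseFormatter; the class name and output strings below are just placeholders):
class MyFormatter < RSpec::Core::Formatters::BaseFormatter
  # called for every passing example
  def example_passed(example)
    output.puts "PASSED: #{example.full_description}"
  end

  # called for every failing example; the exception is kept in the execution result
  def example_failed(example)
    exception = example.execution_result[:exception]
    output.puts "FAILED: #{example.full_description} -- #{exception.message}"
  end
end
You would load it with something like rspec --require ./my_formatter.rb --format MyFormatter. As a side note, this applies to RSpec 2.x; in RSpec 3 the example object is yielded to after hooks, so after(:each) { |example| ... } can read example.metadata[:full_description] and example.exception (nil when the example passed) directly inside the hook.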

Related

Rails + RSpec + Set the Order of Test Cases Files while running RSpec Test Suite

For my RSpec/Capybara test suite (Selenium), I have about 7 to 8 spec files. A few of the test cases are dependent on each other. For example, before deleting a product, I have to create the product.
But when test execution starts, the delete-product spec runs before the create-product spec.
File names:
product_delete.rspec
product_listing.rspec
product_newly_added.rspec
Command: rspec
.rspec file in the root folder:
--require spec_helper
--format html
--out ./log/rspec_results.html
--color
The delete-product test case fails during execution.
Is there any way to define the sequence of file execution while running RSpec?
Test cases should be independent. For your delete test case you can use a factory to create a record and then delete it within that single test case, as shown in the example below.
Just define the factory once and use it to create records; that way DRY won't be violated.
describe 'POST destroy' do
  before(:each) do
    @obj = build(:factory_name)
    @obj.save
  end

  it 'has status 200' do
    post :destroy, { "id" => @obj.id }
    expect(ClassOfObj.count).to eq(0)
  end
end
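For completeness, the factory referenced above would be defined once (e.g. under spec/factories) along these lines; the factory name, class, and attributes here are placeholders for your own model:
# spec/factories/factory_name.rb -- hypothetical factory definition
FactoryGirl.define do
  factory :factory_name, class: ClassOfObj do
    name  { "Sample product" }
    price { 9.99 }
  end
end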
One possible approach is to not separate these actions into their own test cases. With feature specs you test whole features, not single buttons. So, your test might look like this:
1. Navigate to the new item page. Make sure the form is displayed.
2. Fill out the form. Click submit. Verify that a success message is displayed on screen.
3. Verify that you have been redirected to the item index page. Verify that the newly created item is indeed present on the page.
4. Click the "delete" button.
5. Confirm that you're on the index page and that the item is no longer displayed.
As mentioned by most/all of the other answers, your tests should be independent, and RSpec supports running tests in random order to help guarantee that. One of the easiest ways to implement testing under these conditions is to use factories for the creation of your test data (FactoryGirl, etc.). In this case you would end up with a test along the lines of
feature "deleting of products" do
scenario "removes last product" do
create(:product) # Use factory to create one product
visit products_path
expect(page).to have_css('div.product', count: 1) # verify there is only one product shown on the page
click_link('delete') # click the delete button
expect(page).to have_text("Product deleted!") # check for a visible change that indicates deletion has completed
visit products_path
expect(page).not_to have_css('div.product') # No products shown any more - you may need to expect for something else first if the products are dynamically loaded to the page to ensure that has completed
end
end
You could check the DB contents rather than revisiting the products_path, but direct DB querying in feature tests is generally a bad smell since it's coupling user experience with implementation details.
If using this in Rails < 5.1 with a JS capable driver, you'll probably need to install database_cleaner and turn off transaction mode for JS tests - https://github.com/teamcapybara/capybara#transactions-and-database-setup and https://github.com/DatabaseCleaner/database_cleaner#rspec-with-capybara-example. In Rails 5.1+ the DB connection is shared between the app and tests so you can generally leave transactional testing enabled and database_cleaner is unneeded.
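If you do need database_cleaner, the setup is roughly the following, a sketch based on the database_cleaner README linked above (adjust the strategies to the drivers you actually use):
# spec/support/database_cleaner.rb
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, type: :feature) do
    # JS-capable drivers run the app in another thread, so the spec's transaction
    # isn't visible there; fall back to truncation for those specs
    DatabaseCleaner.strategy = :truncation unless Capybara.current_driver == :rack_test
  end

  config.before(:each) { DatabaseCleaner.start }
  config.after(:each)  { DatabaseCleaner.clean }
end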

Error when using RSpec's `all` matcher with Capybara's `have_css` matcher

I'm just getting started with feature specs using RSpec (and Capybara). I'm testing my ActiveAdmin dashboard and I want to check that all panels have an orders table as shown in this snippet:
feature 'admin dashboard', type: :feature do
  def panels
    page.all('.column .panel')
  end

  describe 'all panels' do
    it 'have an orders table' do
      expect(panels).to all(have_css('table.orders tbody'))
    end
  end
end
I've used the all matcher a lot in my unit tests but it doesn't appear to work when wrapping Capybara's have_css matcher because I'm getting the following error:
Failure/Error: expect(panels).to all(have_css('table.orders tbody'))
TypeError:
no implicit conversion of Capybara::RackTest::CSSHandlers into String
Am I correct in my assumption that RSpec's built-in all matcher should work with other matchers as well?
Note: I'm using describe and it instead of feature and scenario in this instance because I'm testing output rather than user interaction scenarios (see my other question).
Unfortunately there is a conflict between RSpec's all and Capybara's all (see Capybara issue 1396). The all that you are calling is actually Capybara's all.
Solution 1 - Call BuiltIn::All Directly
The quickest solution would be to call RSpec's all matcher directly (or at least the code that it executes).
The expectation will work if you use RSpec::Matchers::BuiltIn::All.new instead of all:
expect(panels).to RSpec::Matchers::BuiltIn::All.new(have_css('table.orders tbody'))
Solution 2 - Redefine all
Calling BuiltIn::All directly does not read nicely, so it might get annoying if used often. An alternative would be to redefine the all method to be RSpec's all method. To do this, add the following module and configuration:
module FixAll
  def all(expected)
    RSpec::Matchers::BuiltIn::All.new(expected)
  end
end

RSpec.configure do |c|
  c.include FixAll
end
With the change, the all in the following line will behave like RSpec's all method.
expect(panels).to all(have_css('table.orders tbody'))
Note that if you want to use Capybara's all method, you would now always need to call it on the session (i.e. page):
# This will work because "page.all" is used
expect(page.all('table').length).to eq(2)
# This will throw an exception since "all" is used
expect(all('table').length).to eq(2)
I used a very similar approach to the accepted answer, but in a Cucumber environment I was getting errors about RSpec.configure not existing. Also, I wanted to call the matcher something besides all so that I could use both without conflicts. This is what I ended up with:
# features/support/rspec_each.rb
module RSpecEach
  def each(expected)
    RSpec::Matchers::BuiltIn::All.new(expected)
  end
end

World(RSpecEach) # extends the Cucumber World environment
Now I can do things like:
expect(page.all('#employees_by_dept td.counts')).to each(have_text('1'))

Get Errors From FactoryGirl.lint

I inherited a number of FactoryGirl factories that don't really work, and I'm trying to bring them up to snuff. Part of that effort has been the use of FactoryGirl.lint. So far, however, I have only been able to find which factories fail and then, for any individual one, run
x = FactoryGirl.build :invalid_factory
x.valid? # returns false as expected
x.errors # prints out the validation errors for that object
What I'd like to do is avoid having to do that for each factory. Is there a way to quickly get FactoryGirl.lint to write out the errors for each invalid factory? A flag to pass, a parameter to set? The documentation is extremely sparse on .lint.
Loop through FactoryGirl.factories to perform your check on each factory.
FactoryGirl.factories.map(&:name).each do |factory_name|
  describe "#{factory_name} factory" do
    # Test each factory
    it "is valid" do
      factory = FactoryGirl.build(factory_name)

      if factory.respond_to?(:valid?)
        # the lambda syntax only works with rspec 2.14 or newer; for earlier versions,
        # you have to call #valid? before calling the matcher, otherwise the errors will be empty
        expect(factory).to be_valid, lambda { factory.errors.full_messages.join("\n") }
      end
    end
  end
end
This script from the FactoryGirl wiki shows how to automate the check with RSpec and use Guard to always verify factories are valid.

How do you test your Config/Initializer Scripts with Rspec in Rails?

So I was searching online for a solution, but there seems to be scarce information about testing initializers created in Rails.
At the moment, I've written a pretty large API call-and-store script in my config/initializers/retrieve_users.rb file. It makes an API request, parses the JSON, and then stores the data as users. Since it's pretty substantial, I've yet to figure out the cleanest way to test it. And since I need to retrieve the users before any functions are run, I don't believe I can move this script anywhere else (although other suggestions would be welcome). I have a couple of questions about this:
Do I have to wrap this in a function/class to test my code?
Where would I put my spec file?
How would I format it RSpec-style?
Thanks!
I just gave this a shot and have passing RSpec tests. I am going to refactor my code further, and in the end it will be shorter, but here is a snapshot of what I did, using this link as a guide:
The gist of it is this: define a class in your initializer file with the methods you want, then write tests against those methods.
config/initializers/stripe_event.rb
StripeEvent.configure do |events|
  events.subscribe 'charge.dispute.created' do |event|
    StripeEventsResponder.charge_dispute_created(event)
  end
end

class StripeEventsResponder
  def self.charge_dispute_created(event)
    StripeMailer.admin_dispute_created(event.data.object).deliver
  end
end
spec/config/initializers/stripe_events_spec.rb
require 'spec_helper'

describe StripeEventsResponder do
  before { StripeMock.start }
  after  { StripeMock.stop }
  after  { ActionMailer::Base.deliveries.clear }

  describe '#charge_dispute_created' do
    it "sends one email" do
      event = StripeMock.mock_webhook_event('charge.dispute.created')
      StripeEventsResponder.charge_dispute_created(event)
      expect(ActionMailer::Base.deliveries.count).to eq(1)
    end

    it "sends the email to the admin" do
      event = StripeMock.mock_webhook_event('charge.dispute.created')
      StripeEventsResponder.charge_dispute_created(event)
      expect(ActionMailer::Base.deliveries.last.to).to eq(["admin@example.com"])
    end
  end
end
Seems like this would be tricky, since the initializers are invoked when loading the environment, which happens before the examples are run.
I think you've already figured out the most important step: move the code into another unit where it can be tested outside of the Rails initialization process. This could be a class or a module. Mock out the network dependencies using webmock.
Where should this code reside? See this (now quite ancient) answer by no less than Yehuda Katz himself. Put the spec file in the corresponding part of the tree; i.e. if the unit ends up in lib/my_class.rb, the spec goes in spec/lib/my_class_spec.rb. You may need to modify spec/rails_helper to load the spec file.
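As a rough sketch of that approach: suppose the API code from the initializer is extracted into a class in lib/ (UserRetriever, its fetch_and_store method, and the URL below are hypothetical placeholders for whatever you extract); the spec can then stub the HTTP call with webmock:
# spec/lib/user_retriever_spec.rb -- sketch; class name, method, and URL are made up
require 'rails_helper'
require 'webmock/rspec'

describe UserRetriever do
  it "parses the API response and stores users" do
    # stub the external API so the spec never hits the network
    stub_request(:get, "https://api.example.com/users")
      .to_return(
        status: 200,
        body: [{ "name" => "Alice" }].to_json,
        headers: { "Content-Type" => "application/json" }
      )

    expect { UserRetriever.fetch_and_store }.to change(User, :count).by(1)
  end
end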

RSpec test if a custom Rails log is written to

I'm trying to test a method that logs multiple messages based on conditionals to a log that is not the default Rails logger. I formatted the logger in config/environment.rb:
# Format the logger
class Logger
  def format_message(level, time, progname, msg)
    "#{time.to_s(:db)} #{level} -- #{msg}\n"
  end
end
and created a new logger in the ImportRecording class in the lib/ directory. A method of that class includes the following:
# some code omitted...
days.each do |day|
  if not hash[day].include? "copied"
    @log.error "#{day} needs to be copied!"
  end
  if not hash[day].include? "compressed"
    @log.error "#{day} needs to be compressed!"
  end
  if not hash[day].include? "imported"
    @log.debug "#{day} needs to be imported"
    `rake RAILS_ENV=#{Rails.env} recordings:import[#{day}]` unless Rails.env == "test"
  end
end
# finishing up logging omitted...
I wrote a tiny macro to help with testing this method:
def stub_todo
  { "20130220" => ["copied"],
    "20130219" => ["copied", "compressed"],
    "20130218" => ["copied", "compressed", "imported"] }
end
and here's my test:
describe ".execute_todo" do
it "carries out the appropriate commands, based on the todo hash" do
ImportRecording.execute_todo stub_todo
ImportRecording.log.should_receive(:debug).with("20130219 needs to be imported")
ImportRecording.log.should_receive(:error).with("20130220 needs to be compressed!")
ImportRecording.log.should_receive(:debug).with("20130220 needs to be imported")
end
end
I stare at the import log as I run these tests and watch the lines get added (there's a delay, because the log is large by now), but the tests still fail. I wonder if the formatting of the log is messing this up, but I am passing the aforementioned strings to the log's :debug and :error methods. Any help?
EDIT 3/14/13:
In the hopes that someone may be able to help me out here, I changed my test to look as follows:
it "carries out the appropriate commands, based on the todo hash" do
ImportRecording.stub!(:execute_todo).with(stub_todo).and_return(false)
ImportRecording.log.should_receive(:debug).with("20130219 needs to be imported")
ImportRecording.log.should_receive(:error).with("20130220 needs to be compressed!")
ImportRecording.log.should_receive(:debug).with("20130220 needs to be imported")
end
and this is the error I'm getting from RSpec:
Failure/Error: ImportRecording.log.should_receive(:debug).with("20130219 needs to be imported")
(#<Logger:0x007fb04fa83ed0>).debug("20130219 needs to be imported")
expected: 1 time
received: 0 times
I found the problem: message expectations have to be declared before the call to the method that is expected to trigger them. The code should read:
it "carries out the appropriate commands, based on the todo hash" do
ImportRecording.log.should_receive(:debug).with("20130219 needs to be imported")
ImportRecording.log.should_receive(:error).with("20130220 needs to be compressed!")
ImportRecording.log.should_receive(:debug).with("20130220 needs to be imported")
ImportRecording.execute_todo stub_todo
end
The tests now pass. I also had to add more expectations to the test to account for every line that gets written to the log by the method call. So for future researchers: state your expectations first, then call the method.
I stare at the import log as I run these tests and watch the lines get added
If you set message expectations on the logger calls, you should not see the lines added to the log. Like stubs, message expectations replace the implementation of the original method. This suggests that your logger setup is mis-configured somehow.
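If you do want the lines to still be written to the log while the expectation is verified, newer versions of rspec-mocks (2.12+, if I recall correctly) let you chain and_call_original so the real method runs as well, e.g.:
it "logs and verifies the call" do
  # the expectation is checked AND Logger#debug still executes, so the line shows up in the log
  ImportRecording.log.should_receive(:debug)
    .with("20130219 needs to be imported")
    .and_call_original
  ImportRecording.execute_todo stub_todo
end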
