In my Rails app, I have a couple of controllers where I have an after_action hook that sends an email through an ActionMailer. Like so:
after_action :send_notifications, only: [:create, :update]

def send_notifications
  case @record.status.downcase
  when 'review', 'corrected'
    RecordNotifier.received(@record).deliver_later
  when 'rejected'
    @rejection = Rejection.where(record_id: @record.id).last
    RecordNotifier.rejected(@record, @rejection).deliver_later
  when 'submitted'
    @form = Form.find(@record.form_id)
    RecordNotifier.submitted(@record).deliver_later
  end
end
My problem is that my test suite fails randomly because sometimes a mailer still needs to finish executing after the spec has run, and since the database gets cleaned after each test, the objects the mailer needs don't exist anymore. I'm pretty sure that's the problem: I had one test that was failing in this way, and when I added sleep 1 before the last expect(page).to have_something statement, it passed. I could also see it in the logs, with the database getting zapped before the app was ready.
Incidentally, I don't have any ActiveJob setup for the mailers; I just used deliver_later because the guides said it works the same, and if I add ActiveJob later, everything will already be in place.
So how do I avoid this? How do I make sure the app is all done before the spec exits and DatabaseCleaner runs?
Related
I am new to Minitest / Capybara / Selenium, but I want to test my destroy controller action. I am trying the following and it is failing:
test "destroy" do
companies_count = Company.count
visit company_path(#company)
click_on "Delete"
page.driver.browser.switch_to.alert.accept
assert_equal (companies_count - 1), Company.count
end
OUTPUT:
test_destroy FAIL (2.17s)
Expected: 6
Actual: 7
I also tried this way:
test "destroy" do
assert_difference('Company.count', -1) do
delete company_url(#company)
end
end
OUTPUT:
Minitest::UnexpectedError: NoMethodError: undefined method `delete' for #<CompaniesControllerTest:0x000056171e550038>
Can someone help me with testing my destroy action?
Assuming you're using a modern version of Rails (5.2/6) and a standard system test configuration (not running parallel tests in threads), the concerns in Gregório Kusowski's answer are irrelevant, because the DB connection is shared between your tests and your application, which prevents the issue of the tests not being able to see your app's changes.
Also assuming you're using Selenium in these system tests, the main problem you're dealing with is that actions in the browser occur asynchronously from your tests, so just because you've told your test to accept the dialog box doesn't mean the action to delete the company has completed when that call returns. One way to verify that is to sleep for a little while before checking for the change in count. While that will work, it's not a good final solution because it wastes time. Instead, you should check for a visual change that indicates the action has completed before verifying the new count:
test "destroy" do
companies_count = Company.count
visit company_path(#company)
accept_confirm do
click_on "Delete"
end
assert_text "Company Deleted!" # Check for whatever text is shown to indicate the action has successfully completed
assert_equal (companies_count - 1), Company.count
end
This works because Capybara-provided assertions have a waiting/retrying behavior that gives the application up to a specific amount of time to catch up with what the test is expecting.
Note: I've replaced the page.driver... call with the correct usage of Capybara's system modal API. If you're using page.driver..., it generally indicates you're doing something wrong.
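If your app legitimately needs longer than Capybara's default wait time (two seconds), you can raise the limit globally or per assertion. A minimal sketch; the values here are just examples:

# In your test helper: raise the global cap on waiting/retrying
Capybara.default_max_wait_time = 5

# Or override it for a single assertion
assert_text "Company Deleted!", wait: 10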
This is very likely to happen because what you execute directly in your test happens in one transaction, while your web driver triggers actions that happen in another. You can read more about how this happens here: https://edgeguides.rubyonrails.org/testing.html#testing-parallel-transactions
Here is a similar issue: Rails integration test with selenium as webdriver - can't sign_in
And as stated in the Rails Guides and the similar question, you will probably have to use a solution like http://rubygems.org/gems/database_cleaner
If you don't want to do this, the other option you have is to validate that your action was successful via the web driver, for example by asserting that there are 6 rows in the table where you list all companies.
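A minimal sketch of that approach, assuming the companies index renders one table row per company (the selector is hypothetical):

# Re-visit the index and let Capybara wait for the row count to match
visit companies_path
assert_selector "tbody tr", count: companies_count - 1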
While upgrading a mid-size app from Rails 5.1 to 6.0.2.2 (plus Ruby 2.6.1 -> 2.6.3), I started getting flaky tests of all kinds: 90% coverage, around 500 tests, and between 0 and 8 failures, totally at random. After a long bug hunt, I noticed that I got everything working with 100% confidence if I skipped all the WebSocket-related tests.
This is typical of what I have to test and how I'm doing it (Minitest/Spec syntax):
class Api::PlayersControllerTest < ActionController::TestCase
  before do
    @user = users(:admin)
    sign_in @user
  end

  it "broadcasts stop to user's player" do
    put :update, format: :json, params: { id: @user.id }
    assert_broadcast_on("PlayersChannel_#{@user.id}", action: "stop")
  end
end
Notice that it's not an "integration test"; we're making a raw API call. What I have to check is that if a certain request comes in to a certain controller, ActionCable broadcasts a WebSocket message. That's why I have to use the Devise sign_in helper in a controller test.
ActionCable is backed by Redis, in all environments.
I do not use Parallel testing.
Dataset is using Fixtures, not factories.
use_transactional_tests is set to true
I have 23 tests like this one, and they all used to pass without any problem on Rails 5.1. Run one by one using a focus, they also all pass 100% of the time on both Rails 5 and 6. The problem is when executing the whole test suite: I start to get flakiness in all sections (unit/model tests included), mostly related to dataset consistency. It actually looks like fixtures are not reloaded, or are reloaded poorly.
Any ideas? Is it something wrong with what I'm doing, or do you think it's a Rails 6 issue?
OK, solved by adding the Database Cleaner gem. Pretty curious that it works while using the same ":transaction" strategy that Rails' barebones fixture management uses... but anyway, it works!
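For reference, a minimal Database Cleaner setup for Minitest looks something like this (a sketch; placement in test_helper.rb and the modern database_cleaner-active_record gem are assumed):

# test_helper.rb
require 'database_cleaner/active_record'

DatabaseCleaner.strategy = :transaction

class ActiveSupport::TestCase
  setup { DatabaseCleaner.start }    # open the cleaning transaction
  teardown { DatabaseCleaner.clean } # roll it back after each test
end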
I am using Rails 5.2 and doing some testing. I am trying unit testing with ActionController::TestCase and found that @controller.class.skip_before_action :verify_user does not reset the controller automatically before moving on to the next test method.
Now, because Rails tests are randomized on every run, certain tests fail sometimes and not other times. I figured out that one was expecting HTTP 401 but got 200, because the controller had skipped the before_action :verify_user.
I could set @controller.class.before_action :verify_user at the end of every test (and it works!), but shouldn't it be the responsibility of the testing system to reset the context before every run?
Here is a snippet of my code:
class ApiSiteMetricsTest < ActionController::TestCase
  tests Api::SiteMetricsController

  def test_1_index
    @controller.class.skip_before_action :verify_user, raise: false
    get "index", params: {
      "format" => "json",
      "site_metric_value" => {
        "site_metric_id" => 2403,
        "date_acquired" => "2018-03-14T01:44:00+05:30",
        "site_id" => 3840,
        "lab_device_details" => "",
        "comment" => "",
        "sender_affiliation" => "",
        "float_value" => ""
      }
    }
    assert_response :success
    File.open("#{Rails.root}/del.html", "wb") { |f| f.write(@response.body) }
    # Do I have to do this on every test?
    # @controller.class.before_action :verify_user
  end
  ...
You are absolutely right that each test should reset the state of the system after running. Tests should be entirely independent, which is precisely why they are run in a random order (by default).
For most things, such as database transactions, the test framework can handle this for you. But there are infinitely many other ways you could alter the environment; the test framework cannot always cover your back.
For example, what if your test changes an ENV variable? Or calls Timecop.freeze? Or adds a database record via a second database connection? Or sets a global variable? ...
Sometimes, you need to reset the state manually!
In this case, I would do:
class ApiSiteMetricsTest < ActionController::TestCase
  tests Api::SiteMetricsController

  def test_1_index
    @controller.class.skip_before_action :verify_user, raise: false
    # ...
  ensure
    @controller.class.before_action :verify_user
  end
end
The ensure is there so that even if the test fails, the state is reset regardless, and therefore doesn't affect whether or not other tests fail.
In some cases, you may find it convenient to use Minitest's setup and teardown methods to provide this functionality (equivalent to the before and after hooks in RSpec).
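A minimal sketch of that alternative, assuming the same :verify_user filter:

class ApiSiteMetricsTest < ActionController::TestCase
  tests Api::SiteMetricsController

  def setup
    @controller.class.skip_before_action :verify_user, raise: false
  end

  def teardown
    # Restore the filter so later tests see the original behavior
    @controller.class.before_action :verify_user
  end
end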
When I send mail through deliver_later, it is managed by Sidekiq, and then my registered mail observer is triggered.
I have a Capybara test that checks state changed inside the observer code, but it fails randomly when the observer is not executed right after the clicks, so the expectation doesn't work correctly.
Example:
# spec
scenario 'Test that fails randomly' do
  click_link "Go!"
  # MyModel#done is a boolean attribute, so we have the #done? method available
  expect(MyModel.first.done?).to be true
end

# The controller that manages the Go! link triggers a mailer.
# After the mailer, this is executed.
# Registered observer
def delivered_mail(mail)
  email = Email.find_by_message_id mail.message_id
  email.user.update_attributes done: true
end
Fun fact: if I execute this scenario in isolation, the test always passes. If I execute the test suite in full, the test fails roughly 9 times out of 10. ¯\_(ツ)_/¯
I tried putting this in rails_helper:
require 'sidekiq/testing'

RSpec.configure do |config|
  Sidekiq::Testing.inline!
end
And I also tried putting Sidekiq::Testing.inline! on the very first line of the scenario block... nothing. The same fun fact.
Update:
I added the database_cleaner gem, and now it fails every time.
Actions in Capybara (click_link, etc.) know nothing about any behaviors they trigger. Because of this, there is no guarantee as to what the app will have done by the time your click_link line returns, other than that the link will have been clicked and the browser will have started to perform whatever that action triggers. Your test then immediately checks MyModel.first.done? while the browser could still be submitting a request. (This is one reason why directly checking database records in feature tests is generally frowned upon.)
The solution to that (and the way to end up with tests that work reliably across multiple drivers) is to check for a visual change on the page that indicates the action has completed. You also need to set up ActiveJob properly for testing so you can make sure the jobs are executed. To do this, you will need to include ActiveJob::TestHelper, which can be done in your RSpec config or in individual scenarios, and you will need to make sure ActiveJob::Base.queue_adapter = :test is set (this can be done in the config/environments/test.rb file if wanted). Then, assuming your app shows a message "Mail sent!" on screen when the action has completed, you would do:
include ActiveJob::TestHelper
ActiveJob::Base.queue_adapter = :test

...

perform_enqueued_jobs do
  click_link "Go!"
  expect(page).to have_text('Mail sent!') # This will wait for the message to appear, which guarantees the action has completed and enqueued the job
end # When this returns, any jobs enqueued during the block will have been executed

expect(MyModel.first.done?).to be true
I have a Rails model with an after_create callback that contains code which interacts with an external API. That code is getting executed, and content on another app is being created, when I run my RSpec tests.
I want to do something such as:
after_create :external_api_code, unless: :testing?

def testing?
  # What goes here to recognize that the object is being created in a test?
end
To check whether or not code is running in the test environment:
Rails.env.test?
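So the callback guard from the question could be written, for example, like this (a sketch using the question's hypothetical testing? predicate):

after_create :external_api_code, unless: :testing?

def testing?
  Rails.env.test?
end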
To avoid running external API code in RSpec, put this in your configuration block:
RSpec.configure do |config|
  config.before(:each) do
    allow_any_instance_of(Model).to receive(:external_api_code).and_return(return_value)
  end
end
To actually run the code in a test that needs it to run:
allow_any_instance_of(Model).to receive(:external_api_code).and_call_original
One possible solution is to stub external_api_code in all tests and unstub it where its call is really needed. Of course, this solution will work, but it requires some monkey business because you have to place the stubbing code in all the test files of your project. This is possible RSpec code to do it; put something like this in your model test case file:
before(:all) do
  User.any_instance.stub(:external_api_code) # old RSpec 2 any_instance syntax; newer RSpec uses allow_any_instance_of
end
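And then, in the tests that really need the real call, a sketch of the opt-out using the same old-style API:

before(:all) do
  # Remove the stub so the original method runs again
  User.any_instance.unstub(:external_api_code)
end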