While upgrading a mid-sized app from Rails 5.1 to 6.0.2.2 (plus Ruby 2.6.1 -> 2.6.3) I started getting flaky tests of all kinds. 90% coverage, about 500 tests, and anywhere between 0 and 8 tests failing, completely at random. After a long bug hunt, I noticed that I get everything passing with 100% confidence if I skip all WebSocket-related tests.
This is typically what I have to test and how I'm doing it (Minitest/Spec syntax):
class Api::PlayersControllerTest < ActionController::TestCase
  before do
    @user = users(:admin)
    sign_in @user
  end

  it "broadcasts stop to user's player" do
    put :update, format: :json, params: { id: @user.id }
    assert_broadcast_on("PlayersChannel_#{@user.id}", action: "stop")
  end
end
Notice that it's not an "integration test": it's a plain controller test hitting a raw API call. What I have to check is: if a certain request reaches a certain controller, ActionCable broadcasts a WebSocket message. That's why I have to use Devise's sign_in helper in a controller test.
ActionCable is backed by Redis, in all environments.
I do not use Parallel testing.
Dataset is using Fixtures, not factories.
use_transactional_tests is set to true
I have 23 tests like this one, and they all used to pass without any problem on Rails 5.1. Run one by one using a focus, they also all pass 100% of the time on both Rails 5 and 6. The problem appears when executing the whole test suite: I start to get flakiness in all sections (unit/model tests included), mostly related to dataset consistency. It looks like fixtures are not reloaded, or are reloaded incorrectly.
Any ideas? Is it something wrong with what I'm doing, or do you think it's a Rails 6 issue?
OK, solved by adding the Database Cleaner gem. It's pretty curious that it works while using the same :transaction strategy that Rails' barebones fixture management already uses... but anyway, it works!
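For anyone hitting the same thing, the Database Cleaner wiring for the :transaction strategy typically looks something like this in test/test_helper.rb (a minimal sketch, not my exact setup):

require 'database_cleaner'

DatabaseCleaner.strategy = :transaction

class ActiveSupport::TestCase
  # start a clean transaction before each test and roll it back afterwards
  setup    { DatabaseCleaner.start }
  teardown { DatabaseCleaner.clean }
end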
I am new to Minitest / Capybara / Selenium, but I want to test my destroy controller action. I am trying the following and it is failing:
test "destroy" do
companies_count = Company.count
visit company_path(#company)
click_on "Delete"
page.driver.browser.switch_to.alert.accept
assert_equal (companies_count - 1), Company.count
end
OUTPUT:
test_destroy FAIL (2.17s)
Expected: 6
Actual: 7
I also tried this way:
test "destroy" do
assert_difference('Company.count', -1) do
delete company_url(#company)
end
end
OUTPUT:
Minitest::UnexpectedError: NoMethodError: undefined method `delete' for #<CompaniesControllerTest:0x000056171e550038>
Can someone help me in testing my destroy action?
Assuming you're using a modern version of Rails (5.2/6) and a standard system test configuration (not running parallel tests in threads), the concerns in Gregório Kusowski's answer are irrelevant, because the DB connection is shared between your tests and your application, which prevents the issue of the tests not being able to see your app's changes.
Also assuming you're using Selenium in these system tests, the main problem you're dealing with is that actions in the browser occur asynchronously from your tests, so just because you've told your test to accept the dialog box doesn't mean the action to delete the company has completed when that call returns. One way to verify this is to sleep for a little while before checking for the change in count. While that will work, it's not a good final solution because it wastes time. Instead, you should check for a visual change that indicates the action has completed before verifying the new count:
test "destroy" do
companies_count = Company.count
visit company_path(#company)
accept_confirm do
click_on "Delete"
end
assert_text "Company Deleted!" # Check for whatever text is shown to indicate the action has successfully completed
assert_equal (companies_count - 1), Company.count
end
This works because Capybara-provided assertions have waiting/retrying behavior that gives the application up to a specified amount of time to catch up with what the test expects.
Note: I've replaced page.driver... with the correct usage of Capybara's system modal API. If you're using page.driver..., it generally indicates you're doing something wrong.
This is very likely happening because what you execute directly in your test happens in one transaction, and your web driver triggers actions that happen in another one. You can read more about how this happens here: https://edgeguides.rubyonrails.org/testing.html#testing-parallel-transactions
Here is a similar issue: Rails integration test with selenium as webdriver - can't sign_in
And as stated in the Rails Guides and the similar question, you will probably have to use a solution like http://rubygems.org/gems/database_cleaner
If you don't want to do this, the other option is to verify that your action was successful via the web driver, for example by asserting that there are 6 rows in the table where you list all companies.
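For example, with Capybara, something along these lines waits for the page to reflect the deletion before you count (the table selector and the path helper are illustrative, not taken from your app):

visit companies_path
# assumes the index page renders one <tr> per company inside <table id="companies">
assert_selector "table#companies tbody tr", count: 6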
Background: I am unit testing a game server built on Rails 4.1.1 with a separate socket.io/node.js process for socket messaging. Messages from node.js to Rails go through RESTful HTTP requests.
A single test case runs as follows:
(1) rake unit test --> (2) rails controller --> (3) node.js/socket.io --> (4) rails controller
Problem description: some DB entries are created with ActiveRecord at step (2); then, upon receiving a socket message at step (3), node.js sends an HTTP request back to the Rails controller, and finally(!!) at step (4) the Rails controller tries to access the DB entries from step (2), but the test DB is empty at this point.
Question: it seems like the desired behavior of rake is to clean up the test DB, but how can I persist the test DB across test cases and prevent this problem?
Thanks in advance
You should prepare and send the request to the node app inside the test and assert the response there.
But that's not good practice. A better solution would be HTTP mocks (like the webmock gem). This approach will save lots of time in the future.
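For illustration, a minimal WebMock + Minitest sketch could look like this (the node URL, the GameEventNotifier class and its notify method are made-up placeholders for your own code under test):

require 'webmock/minitest'

class GameEventsTest < ActiveSupport::TestCase
  def test_notifies_node_server
    # pretend the node.js endpoint exists and answers with 200
    stub = stub_request(:post, "http://localhost:8080/notify")
             .to_return(status: 200, body: "", headers: {})

    GameEventNotifier.notify(player_id: 1) # placeholder for the code under test

    assert_requested stub
  end
end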
Luckily, I figured out the solution.
By default, rake wraps every test in a separate DB transaction and rolls it back on cleanup. Moreover, whatever requests/queries come from outside the TestCase are not included in that transaction and are not visible inside the test case.
To avoid this behavior, we have to disable transactional fixtures in test/test_helper.rb:
class ActiveSupport::TestCase
  self.use_transactional_fixtures = false
end
As a downside, we have to clean up the test DB manually. So, as @Alexander Shlenchack points out, it's best to avoid such a setup in the first place and use HTTP/socket mocks in the future.
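If you do go the manual-cleanup route, the typical pattern looks roughly like this (a sketch based on the database_cleaner gem with the :truncation strategy; adapt it to your own setup):

# test/test_helper.rb
require 'database_cleaner'

DatabaseCleaner.strategy = :truncation

class ActiveSupport::TestCase
  self.use_transactional_fixtures = false

  teardown do
    # wipe whatever the external requests committed during the test
    DatabaseCleaner.clean
  end
end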
Here is a brief summary: http://devblog.avdi.org/2012/08/31/configuring-database_cleaner-with-rails-rspec-capybara-and-selenium/
And a related question: Rails minitest, database cleaner how to turn use_transactional_fixtures = false
I've been struggling with this for quite a while now: I'm trying to upgrade an app from Rails 3.2 to Rails 4. While on Rails 3.2 all specs pass, on Rails 4 they fail under certain conditions.
Some specs pass in isolation but fail when run together with other specs.
Example Video
https://www.wingolf.org/ak-internet-files/Spec_Behaviour.mp4 (4 mins)
This example video shows:
Running 3 specs using :focus: green.
Running them together with another spec: two specs that passed before now fail.
Running the 3 specs again, but inserting two empty lines: one spec fails.
Undo does not help when using guard.
focus/unfocus does not help.
Restarting guard does not help.
Running all specs and then running the 3 specs again does help and makes them green again. But adding the other task makes two specs fail again.
As one can see, some specs are red when run together with other specs. Even entering blank lines can make a difference.
More Observations
For some specs, passing or failing occurs randomly when run several times.
The behavior is not specific to one development machine but can be reproduced on travis.
Deleting the database completely between specs using database_cleaner does not help.
Calling Rails.cache.clear between specs does not help.
Wrapping each spec in an ActiveRecord::Base.transaction does not help.
This does occur in Rails 4.0.0 as well as in Rails 4.1.1.
Using this minimal spec_helper.rb without spring or anything does not help.
Using guard vs. using bundle exec rspec some_spec.rb:123 directly doesn't make a difference.
This behavior occurs in model specs, so it cannot have anything to do with parallel database connections for feature specs.
I've already tried to keep as many gems as possible at the same version as in the (green) Rails 3.2 branch, including guard, rspec, factory_girl, etc.: does not help.
Update: Observations Based on Comments & Answers
Thanks to engineerDave, I've inserted require 'pry'; binding.pry into one of the specs in question. Using pry's cd and show-source, it was surprisingly easy and fun to narrow down the problem: apparently, this has_many :through relation does not return objects when run together with other specs, even when called with (true) to force a reload.
has_many(:groups,
  -> { where('dag_links.descendant_type' => 'User').uniq },
  through: :memberships,
  source: :ancestor, source_type: 'Group'
)
If I call groups directly, I get an empty result. But if I go through the memberships, the correct groups are returned:
@user.groups # => []
@user.groups(true) # => []
@user.memberships.collect { |m| m.group } # returns the correct groups
Has Rails changed the has_many :through behavior in Rails 4 in a way that could be responsible? (Remember: the spec works in isolation.)
Any help, insights and experiences are appreciated. Thanks very much in advance!
Code
Current master branch on Rails 3.2: all green.
Rails-4 branch: strange behavior.
The file/commit seen in the video: strange behavior.
All specs passing on travis for Rails 3.2.
Diff of the Gemfile.lock (or use git diff master..sf/rails4-minimal-update Gemfile.lock |grep rspec)
How to Reproduce
This is how one can check if the issue still exists:
Preparation
git clone git@github.com:fiedl/wingolfsplattform.git
cd wingolfsplattform
git checkout sf/rails4-minimal-update
bundle install
# please create `config/database.yml` according to your needs.
bundle exec rake db:create db:migrate db:test:prepare
Run the specs
bundle exec rspec ./vendor/engines/your_platform/spec/models/user_group_membership_spec.rb
bundle exec rspec ./vendor/engines/your_platform/spec/models/user_group_membership_spec.rb:213
The problem still exists if the spec at :213 is green in the second call but red when run together with the other specs in the first call.
Based on the following:
you're using the should syntax,
you indicate you've upgraded recently (perhaps a bundle update?),
your failure messages indicate a nil-object error.
Is something like this perhaps what is causing it?
https://stackoverflow.com/a/16427072/793330
Are you calling an object in your test which hasn't been instantiated yet?
I think this might be an rspec 3 upgrade issue where should is deprecated.
Have you ruled out an rspec gem upgrade to the new rspec 3 syntax (2.99 or 3.0.0+) as the culprit?
"Remove support for deprecated expect(...).should. (Myron Marston)"
IMO this behavior would not be caused by a Rails 4 update, as it's centered around your test suite.
Update (with pry debug):
Also you could use the pry gem to get a window into what is going on in your specs.
Essentially you can put a big "stop" sign (similar to a debug break) right before the spec executes to get a handle on the environment at that point.
it {
  require 'pry'; binding.pry
  should == something
}
Although be aware that these pry calls sometimes wreak havoc on Guard's threading, and you have to kill it with CTRL+Z and then kill -9 the PID that shows.
Update #2: Looking at the updated question.
You might be running up against FactoryGirl issues, based on your has_many issue.
You may need to trigger a before action in your factory to pre-populate the associated record. Although this could get messy (i.e. here be monsters), you can trigger after and before callbacks in your factory that will bring these objects into being, as sketched below.
after(:create) do |instance|
  # do stuff here
end
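A fuller sketch of what that might look like in a factory (the :user, :group and :membership factories here are illustrative names, not taken from your project):

FactoryGirl.define do
  factory :user do
    first_name "John"
    last_name  "Doe"

    after(:create) do |user|
      # pre-populate the association the spec reads later
      group = FactoryGirl.create(:group)
      FactoryGirl.create(:membership, user: user, group: group)
    end
  end
end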
I'm new to Ruby on Rails. I love that it has testing capabilities built in. But I can't wrap my head around testing. Here is my first basic question about it.
What happens during testing really?
I understand development: we want some result, and we use the data we have, or get it from users, to achieve the end result we want. But the notion of testing sometimes seems confusing to me. I have been testing applications in the browser for some time; are we replicating the same thing with code? Is that what testing is about: replicating browser testing with automated code? Enlighten me here.
Reading A Guide to Testing Rails Applications will be a good starting point.
Basically, you have three kinds of tests: unit, functional and integration.
Unit tests test your models. In these tests you check whether a single method of your model works as expected; for example, you assign a login with spaces and then test whether the spaces were removed:
class UserTest < ActiveSupport::TestCase
  def test_login_cleaning
    u = User.new
    u.login = " login_with_spaces "
    assert_equal "login_with_spaces", u.login
  end

  # ... and other tests
end
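For completeness, one possible model implementation that such a test could be exercising (a sketch; the point is simply that the login writer strips the surrounding spaces):

class User < ActiveRecord::Base
  # strip surrounding whitespace whenever a login is assigned
  def login=(value)
    write_attribute(:login, value.to_s.strip)
  end
end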
Functional tests test your controllers (and views). In each test you simulate one request sent to one controller with a given set of parameters, and then you ensure that the controller returned the proper response.
Note however that in this kind of test you cannot test the rendering of the page, so it's not strictly simulating a browser. To test whether your page looks nice, you need to do that manually (I am almost sure some techniques for this exist, but I do not know of them).
An example of functional test:
class UserControllerTest < ActionController::TestCase
  def test_show_renders_admin
    get :show, :id => 1
    assert_response :success
    assert_select "div.user" do
      assert_select "span.name", "Joe Admin"
    end
  end

  def test_show_handles_unknown_id
    get :show, :id => 9999
    assert_response 404
    assert_select "p.warning", "No such user"
  end
end
Integration tests test a sequence of requests, something like a scenario in which a user logs in, gets the 'create user' page, creates a user, and so on. These tests check whether the single requests (tested in functional tests) are able to work together.
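A small sketch of such a scenario (assuming a Rails 3-style ActionDispatch::IntegrationTest; the routes, parameters and credentials are illustrative only):

class SignupFlowTest < ActionDispatch::IntegrationTest
  def test_admin_logs_in_and_creates_a_user
    post "/session", :login => "admin", :password => "secret" # log in
    follow_redirect!
    assert_response :success

    get "/users/new" # the 'create user' page
    assert_response :success

    post "/users", :user => { :login => "new_user" } # create the user
    follow_redirect!
    assert_response :success
  end
end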
I see that Simone already pointed out the importance of automation in tests, so the link to the Guide is the only value in my answer ;-)
You may find it very helpful to apply some rules of Test Driven Development, especially when your project matures a little.
I know that it's not easy to start a project by writing tests, because often you do not yet know how everything will work. But later, when you find a bug, I strongly suggest starting the fix for every bug by writing a failing test case. It really, really helps, both in the bug-fixing phase and later, by ensuring that the bug does not reappear.
Well, I noticed that I did not directly answer your question ;-)
When you start the test procedure, Rails:
deletes the test database (so make sure you do not have any valuable data here),
recreates it using the structure of the development database (so, make sure you have run all your migrations),
loads all the fixtures (from test/fixtures/*)
loads all the test classes from test/units/* and other directories,
calls every method whose name starts with 'test_' or that was created by the test "should do something..." macro (alphabetically, but you may consider the order random),
before every call it executes a special setup procedure, and after every call it executes a teardown procedure,
before every call it may (depending on the configuration) recreate your database data, loading the fixtures again.
You will find more information in the Guide.
What happens during testing is that you run a set of specialized programs or routines (test code) that call routines in your application (the code under test) and verify that they produce the expected results. The testing framework usually has some mechanism to make sure that each test routine is independent of the others; in other words, the result of one test does not affect the results of the others.
In Rails specifically, you run the tests using the rake test command-line tool. This will load and execute each test routine in random order, and tell you whether each test was successful or not.
This answer doesn't necessarily apply to Rails itself. When you talk about testing in Rails, you usually mean automated testing.
The word automated is the essence here. This is in fact the biggest difference between unit testing and "browser" testing.
With unit testing you essentially write code, a routine, that stresses a specific portion of your code to make sure it works as expected. The main advantages of unit testing compared to "browser" testing are:
It's automatic and can be run programmatically.
Your test suite increases during the development lifecycle.
You reduce the risk of regression bugs, because when you modify a piece of code and you run the test suite, you are actually running all the tests, not just a random check.
Here's a basic, very simple example. Take a model, let's say the User model, with the following attributes: first_name and last_name. You want a method called name that returns the first and last name, if they exist.
Here's the method
class User
  def name
    [first_name, last_name].reject(&:blank?).join(" ")
  end
end
and here's the corresponding unit test.
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  def test_name
    assert_equal "John Doe", User.new(:first_name => "John", :last_name => "Doe").name
    assert_equal "John", User.new(:first_name => "John").name
    assert_equal "Doe", User.new(:last_name => "Doe").name
    assert_equal "", User.new().name
  end
end
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly for that reason I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set for each test I would like to skip. But sometimes I would like to skip all the tests in a test file without having to add an extra line at the beginning of each test.
There aren't many tests that have to interact with the test servers, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the previous answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
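To skip every test in a file behind the BACKEND_TEST environment variable from the question, one option (a sketch; it relies on omissions raised in setup applying to each test, which is my understanding of test-unit 2.x) is:

class BackendServerTest < Test::Unit::TestCase
  def setup
    # skip every test in this file unless BACKEND_TEST is set
    omit_if(ENV['BACKEND_TEST'].nil?)
  end

  def test_talks_to_backend
    # slow test against the real backend server goes here
  end
end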
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still raises an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected, and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.