I want to test my Ruby on Rails website, which uses a PostgreSQL database, with Cucumber.
I also have the FactoryGirl gem installed, so I can create factories.
I understand that the typical way to create data for the test database is to write Gherkin tables and put them in the Background block of a Cucumber feature file. But I already have a Ruby script that creates sample data suitable for the test database.
Yet I am currently lost in Cucumber's settings. Could you please advise how to make Cucumber run my Ruby script to populate the test database before each test, and how to clean the test database after each test? Apparently, my google-fu is inadequate for this task.
Just:
add a Cucumber step like I have 4 clocks in the pocket to features/clock.feature (for example).
implement the step in features/step_definitions/clock_steps.rb (for example) to create the given number of Clock models and a single Pocket model, and associate them:
When /^I have (\d+) clocks? in the pocket$/ do |amount|
  pocket = FactoryGirl.create :pocket
  # Capture groups arrive as strings, so convert before calling #times.
  amount.to_i.times { FactoryGirl.create :clock, pocket: pocket }
end
That way, you will get some data populated.
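For reference, the step above assumes that :pocket and :clock factories already exist. A minimal sketch of what they might look like (the attribute names here are assumptions, not something from the question):

# features/support/factories.rb -- a sketch; adjust attributes to your schema.
FactoryGirl.define do
  factory :pocket do
    fabric "tweed"  # hypothetical attribute
  end

  factory :clock do
    brand "Acme"    # hypothetical attribute
    pocket          # sets up the belongs_to association to a pocket
  end
end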
If you want to predefine the step for most scenarios, you can either:
Create a larger step called, for example, I have basic clock setup, then define it like the one above, but add the other required setup steps into it.
or
Use step recursion (this is not the recommended way) as follows:
When /I have basic clock setup/ do
  step 'I have 4 clocks in the pocket'
end
I ended up using a Before block in the .rb file with my Cucumber step definitions:
Before do
  # code to populate the test database here
end

Given /first condition/ do
  # do some stuff
end

# ... and so on
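Fleshed out, such a support file can answer both halves of the question: populate before each scenario and clean afterwards. A minimal sketch, assuming the populating script lives at lib/tasks/populate_test_data.rb (a hypothetical path) and that the database_cleaner gem is available:

# features/support/hooks.rb -- a sketch, not the only way to wire this up.
require 'database_cleaner'

DatabaseCleaner.strategy = :truncation

Before do
  # Run the existing sample-data script before every scenario.
  load Rails.root.join('lib', 'tasks', 'populate_test_data.rb').to_s
end

After do
  # Wipe whatever the scenario and the script left behind.
  DatabaseCleaner.clean
end

Note that if cucumber-rails already wraps each scenario in a transaction in your setup, you may need to disable that wrapping for truncation to be meaningful.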
For context, this question arose because we are migrating from Rails 5 to Rails 6 and introducing reader/writer database connections via the new replication features.
Our specific problem is with request specs, with an eye towards using transactional fixtures. When we run our request spec files in isolation, they pass. When they run as part of a multiple-file pass (such as the full bundle exec parallel_rspec pass used on CircleCI), they fail. If we turn off transactional fixtures, the tests take far too long to run, but pass.
Using byebug, we've poked in and determined that the problem is that our test data has been written to, and is accessible by, the writer DB connection, but the route is attempting to use the reader DB connection to read it. That is, ActiveRecord::Base.connected_to(role: :reading) { puts Foo.count } prints 0, while the same code connected to the writing role prints a non-zero count.
The problem from there seems fairly obvious: because we're using transactional tests/fixtures, the data is never committed to the DB, so it's only visible on the connection it was created on. The request spec is reading from the 'right' DB for the call (a GET request should use the reader DB), but in the context of tests that produces errors.
It seems like this is a fairly obvious use case that either Rails or RSpec should have a tool for handling; we just don't seem to be able to find the relevant documentation.
You need to tell the test environment to use a single connection for both roles. There are multiple ways of doing this:
You can configure your test environment not to use replicas at all. See "Setting up your application" in the Rails multiple-databases guide for examples of configurations with and without a replica, then reproduce the non-replica version in your database.yml for the test environment only.
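As a sketch (the database name is an assumption), the test entry in config/database.yml simply stays a single two-tier configuration with no replica, so both roles resolve to the same connection:

test:
  adapter: postgresql
  database: my_app_test   # hypothetical name
  # no *_replica entry here, so reading and writing use this one connection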
You can use connected_to within your specs themselves so that those tests are forced to use the specific connection you want them to use. One way to do this is with around hooks:
describe "around filter" do
around(:each) do |example|
puts "around each before"
ActiveRecord::Base.connected_to(role: :writing) { example.run }
puts "around each after"
end
it "gets run in order" do
puts "in the example"
end
end
You can monkey-patch your ActiveRecord configuration in rails_helper so that it doesn't use replicas (but I'd really recommend #1 over this option).
I've got some model code that calls the database with a simple find():
@thing = Thing.find(id)
I have data seeded in the database for the test environment. If I open the console in test (rails c -e test), I can run Thing.find(1) and get a result fine; however, when I run a test that calls the method shown above, it reports that it cannot find a record with the id of 1.
I assume I am misunderstanding the relationship between test seed data and the tests being run against that database. Why do I see seeds in the test DB, but the test doesn't?
A more conventional way of doing this would be to create fixtures (or factories, if using FactoryBot) and call these factories in your test setup. As previous answers have said, hardcoding test IDs will likely raise RecordNotFound due to auto-incrementing.
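A minimal sketch of that pattern, shown here with FactoryBot and RSpec (the Thing attribute below is an assumption):

# spec/factories/things.rb -- a sketch; the name attribute is hypothetical.
FactoryBot.define do
  factory :thing do
    name { "widget" }
  end
end

# spec/models/thing_spec.rb
require 'rails_helper'

RSpec.describe Thing, type: :model do
  it "finds the record it just created" do
    thing = FactoryBot.create(:thing)
    # Use the generated id instead of hardcoding 1.
    expect(Thing.find(thing.id)).to eq(thing)
  end
end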
Background: I am unit-testing a game server built on Rails 4.1.1, with a separate socket.io/node.js process for socket messaging. Messages from node.js to Rails go through RESTful HTTP requests.
A single test case runs as follows:
(1) rake unit test --> (2) rails controller --> (3) node.js/socket.io --> (4) rails controller
Problem description: Some DB entries are created with ActiveRecord at step (2); then, upon receiving a socket message at step (3), node.js sends an HTTP request back to the Rails controller, and finally(!!) at step (4) the Rails controller tries to access the DB entries from step (2), but the test DB contents are empty at this point.
Question: Cleaning up the test DB seems to be rake's intended behavior, but how can I persist the test DB across test cases and prevent this problem?
Thanks in advance
You should prepare and send the request to the node app inside the test and assert the response there.
But that's not good practice. The better solution would be HTTP mocks (like the webmock gem). This approach will save lots of time in the future.
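A minimal sketch of that approach with WebMock in a Test::Unit/minitest suite (the endpoint and payload are assumptions, not the real node.js API):

require 'test_helper'
require 'webmock/minitest'
require 'net/http'

class NodeCallbackTest < ActiveSupport::TestCase
  def test_node_notification
    # Intercept the outgoing HTTP call instead of hitting a live node process.
    stub_request(:post, "http://localhost:3001/notify")
      .to_return(status: 200, body: '{"ok":true}')

    # In the real app this call would live in the game-server code.
    response = Net::HTTP.post_form(URI("http://localhost:3001/notify"), "event" => "clock")

    assert_equal "200", response.code
    assert_requested :post, "http://localhost:3001/notify", times: 1
  end
end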
Luckily, I figured out the solution.
By default, rake wraps each test in a separate DB transaction and rolls it back during cleanup. Moreover, requests/queries coming from outside the TestCase run on other connections, are not part of that transaction, and therefore cannot see the uncommitted data from inside the test case.
To avoid this behavior, we have to disable transactional fixtures in test/test_helper.rb:
class ActiveSupport::TestCase
  self.use_transactional_fixtures = false
end
As a downside, we have to clean up the test DB manually. So, as @Alexander Shlenchack points out, it is better to avoid this practice in the first place and use HTTP/socket mocks in the future.
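A minimal sketch of that manual cleanup, using the database_cleaner gem in test/test_helper.rb (the teardown hook is one option among several):

require 'database_cleaner'

class ActiveSupport::TestCase
  self.use_transactional_fixtures = false

  # Without transactions, each test must clean up after itself; truncation
  # also removes rows committed by the node.js round-trip.
  teardown do
    DatabaseCleaner.clean_with(:truncation)
  end
end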
Here is a brief summary: http://devblog.avdi.org/2012/08/31/configuring-database_cleaner-with-rails-rspec-capybara-and-selenium/
And a related question: Rails minitest, database cleaner how to turn use_transactional_fixtures = false
I'm new to Ruby on Rails. I love that it has testing capabilities built in, but I can't wrap my head around testing. Here is my first basic question about it.
What happens during testing, really?
I understand development: we want some result, and we use the data we have, or get it from users, to achieve the end result we want. But the notion of testing sometimes seems confusing to me. I have been testing applications in the browser for some time; are we replicating the same thing with code? Is that what testing is about, replicating browser testing with automated code? Enlighten me here.
Reading A Guide to Testing Rails Applications will be a good starting point.
Basically, you have three kinds of tests: unit, functional, and integration.
Unit tests test your models. In these tests you check whether a single method of your model works as expected; for example, you assign a login with spaces, and then you test whether the spaces were removed:
class UserTest < ActiveSupport::TestCase
  def test_login_cleaning
    u = User.new
    u.login = " login_with_spaces "
    assert_equal "login_with_spaces", u.login
  end

  # ... and other tests
end
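For that test to pass, the model has to strip the login somewhere. A minimal sketch of one possible implementation (a custom writer; validations or callbacks would also work) might be:

class User < ActiveRecord::Base
  # Strip surrounding whitespace as soon as the login is assigned, so the
  # cleaned value is visible immediately, as the test above expects.
  def login=(value)
    super(value.to_s.strip)
  end
end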
Functional tests test your controllers (and views). In each test you simulate one request sent to one controller with a given set of parameters, and then you ensure that the controller returned the proper response.
Note, however, that in these tests you cannot test the rendering of the page, so it's not strictly simulating a browser. To test whether your page looks nice, you need to do it manually (I am almost sure some techniques exist, but I do not know of them).
An example of a functional test:
class UserControllerTest < ActionController::TestCase
  def test_show_renders_admin
    get :show, :id => 1
    assert_response :success
    assert_select "div.user" do
      assert_select "span.name", "Joe Admin"
    end
  end

  def test_show_handles_unknown_id
    get :show, :id => 9999
    assert_response 404
    assert_select "p.warning", "No such user"
  end
end
Integration tests test a sequence of requests: something like a scenario in which a user logs in, gets the 'create user' page, creates a user, and so on. These tests check whether the single requests (tested in functional tests) are able to work together.
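A minimal sketch of such a scenario, using the classic integration-test API (the paths, parameters, and expected texts are assumptions):

class UserFlowsTest < ActionDispatch::IntegrationTest
  def test_login_and_create_user
    # Log in; the /login route and credentials are hypothetical.
    post "/login", :login => "admin", :password => "secret"
    assert_response :redirect
    follow_redirect!

    # Fetch the 'create user' form, then submit it.
    get "/users/new"
    assert_response :success

    post "/users", :user => { :login => "joe" }
    follow_redirect!
    assert_select "p.notice", "User created"
  end
end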
I see that Simone already pointed out the importance of automation in tests, so the link to the Guide is the only value in my answer ;-)
You may find it very helpful to apply some rules of Test Driven Development, especially when your project matures a little.
I know that it's not easy to start a project by writing tests, because often you do not yet know how everything will work. But later, when you find a bug, I strongly suggest starting every bug fix by writing a failing test case. It really, really helps, both in the bug-fixing phase and later, by ensuring that the bug does not reappear.
Well, I noticed that I did not directly answer your question ;-)
When you start the test procedure, Rails:
deletes the test database (so make sure you do not have any valuable data there),
recreates it using the structure of the development database (so make sure you have run all your migrations),
loads all the fixtures (from test/fixtures/*; an example follows below),
loads all the test classes from test/units/* and other directories,
calls every method whose name starts with test_ or was created by the test "should something..." macro (alphabetically, but you should treat the order as random),
before every call it executes a special setup procedure, and after every call it executes a teardown procedure,
before every call it may (depending on the configuration) recreate your database data, loading the fixtures again.
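For reference, fixtures are YAML files; a minimal sketch of a test/fixtures/users.yml that could back the functional test above (the attributes are assumptions):

admin:
  id: 1
  login: joe_admin
  name: Joe Admin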
You will find more information in the Guide.
What happens during testing is that you really run a set of specialized programs or routines (test code) that call routines in your application (the code under test) and verify that they produce the expected results. The testing framework usually has some mechanism to make sure that each test routine is independent of the others; in other words, the result of one test does not affect the results of the others.
In Rails specifically, you run the tests using the rake test command-line tool. This will load and execute each test routine in a random order and tell you whether each test was successful or not.
This answer doesn't necessarily apply to Rails itself. When you talk about testing in Rails, you usually mean automated testing.
The word automated is the essence of the meaning. This is in fact the biggest difference between unit testing and "browser" testing.
With unit testing you essentially write code, a routine, that stresses a specific portion of your code to make sure it works as expected. The main advantages of unit testing compared to "browser" testing are:
It's automatic and can be run programmatically.
Your test suite increases during the development lifecycle.
You reduce the risk of regression bugs, because when you modify a piece of code and you run the test suite, you are actually running all the tests, not just a random check.
Here's a basic, very simple example. Take a model, let's say the User model, with the attributes first_name and last_name. You want a method called name to return the first and last name, if they exist.
Here's the method:
class User
  def name
    [first_name, last_name].reject(&:blank?).join(" ")
  end
end
and here's the corresponding unit test.
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  def test_name
    assert_equal "John Doe", User.new(:first_name => "John", :last_name => "Doe").name
    assert_equal "John", User.new(:first_name => "John").name
    assert_equal "Doe", User.new(:last_name => "Doe").name
    assert_equal "", User.new.name
  end
end
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly because of that I have some test code that interacts with their test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run those tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable BACKEND_TEST and a conditional statement that checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra line to the beginning of each test.
The tests that have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
The features referred to in the previous answer include the omit() method and omit_if():
def test_omission
  omit('Reason')
  # Not reached here
end
And
def test_omission
  omit_if("".empty?)
  # Not reached here
end
From: http://test-unit.rubyforge.org/test-unit/en/Test/Unit/TestCaseOmissionSupport.html#omit-instance_method
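To skip every test in a file without adding a line to each test (as the question asks), omit_if can also go in the class's setup method; the omission it raises applies to each test in the class. A sketch, reusing the BACKEND_TEST environment variable from the question:

class BackendTest < Test::Unit::TestCase
  def setup
    # Skip the whole class unless the backend servers are opted in.
    omit_if(ENV['BACKEND_TEST'].nil?, 'BACKEND_TEST not set')
  end

  def test_slow_backend_roundtrip
    # ... talks to the real backend test server ...
  end
end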
New Features Of Test Unit 2.x suggests that test-unit 2.x (the gem version, not the ruby 1.8 standard library) allows you to omit tests.
I was confused by the following, which still reports an error to the console:
def test_omission
  omit('Reason')
  # Not reached here
end
You can avoid that by wrapping the code to skip in a block passed to omit:
def test_omission
  omit 'Reason' do
    # Not reached here
  end
end
That actually skips the test as expected and outputs "Omission: Test Reason" to the console. It's unfortunate that you have to indent the existing code to make this work, and I'd be happy to learn of a better way to do it, but this works.