I have a method in application_helper called admin_rights? that checks whether a user should be able to add content to the site. I haven't implemented a user system yet, so it simply returns true for now. I am trying to test it, but I can't figure out how to stub it so that it returns false in the test. The spec checks for a link that should only be visible when admin_rights? returns true. When I test it manually by changing admin_rights? to return false, it works as intended, so I am apparently not stubbing it correctly.
The Spec is:
context "no admin rights" do
before do
page.stub(:admin_rights?).and_return(false)
visit fencers_path
end
it "should not have add fencer link" do
expect(page).not_to have_link('+ Fekter', href: new_fencer_path)
end
end
I'm looking for the correct way to stub it out or an alternative way to test it.
The test case you posted is an acceptance test. It boots up a server instance and goes through the full stack. You should really not rely on stubbing and mocking in these kinds of tests. They should ensure that the application as a whole works, and they should treat your application as a black box. Replacing tiny bits of code is a recipe for very brittle acceptance tests. Also, if you run your tests with a driver that executes JavaScript, the stubbing has no chance of working, because the server runs in a different process from your tests.
You should implement the logic for admin_rights? and then arrange your acceptance test setup so that the logic actually returns false. For example, sign in with a normal user who does not have admin rights. In the end you want your acceptance tests to match the real-world scenario as closely as possible; a rough sketch follows.
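As a sketch of that setup (assuming a User factory and a conventional sign-in form; the factory, field labels and sign_in_path helper here are illustrative, not from the question):

context "no admin rights" do
  before do
    # Sign in as a non-admin user instead of stubbing the helper.
    user = create(:user, admin: false)   # hypothetical factory
    visit sign_in_path                   # hypothetical route helper
    fill_in 'Email', with: user.email
    fill_in 'Password', with: 'password'
    click_button 'Sign in'
    visit fencers_path
  end

  it "should not have add fencer link" do
    expect(page).not_to have_link('+ Fekter', href: new_fencer_path)
  end
end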
I was wondering if there is a script that can take an existing codebase and generate unit tests for each method in the controllers. By default they would all pass, since they would be empty, and I could remove the tests for methods I don't consider important.
This would save a huge amount of time and increase test coverage, since I would only have to define what each method should output instead of writing all the boilerplate.
You really shouldn't be doing this. Creating pointless tests is technical debt that you don't want. Take some time, go through each controller and write a test (or preferably a few) for each method. You'll thank yourself in the long run.
You can then also use test coverage tools to see which bits still need testing.
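For example, SimpleCov is a common choice; a minimal setup (assuming the simplecov gem is in your Gemfile's test group) is just two lines at the very top of spec_helper.rb or test_helper.rb, before the application is loaded:

require 'simplecov'
SimpleCov.start 'rails'   # generates an HTML report in coverage/ after the test run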
You can use shared tests to avoid repetition. For example, with RSpec you could add the following to your spec_helper/rails_helper:
def should_be_ok(action)
  it "should respond with ok" do
    get action.to_sym
    expect(response).to be_success
  end
end
Then in your controller_spec
describe UserController do
  should_be_ok(:index)
  should_be_ok(:new)
end
I'm trying to set up some feature specs before I start refactoring some of my company's old code. It's kind of an unconventional setup, but I was able to figure out enough about test doubles to bypass the authentication and get started. One problem I'm still having is that some of the instance variables set in the methods I'm bypassing are expected by the view, so I get "undefined method ... for nil:NilClass" errors. I would like to get the specs running before I make any changes to the application code. In this case I could easily move the particular instance variable to another method, but I'm sure more situations like this will come up. Here's the example I'm currently working on:
def security_level
  @right_now = Time.now
  #
  # other code that won't work without
  # connecting to a remote authentication
  # server
  #
end
Then in my spec:
feature 'Navigation' do
  before(:each) do
    allow_any_instance_of(ApplicationController).to receive(:security_level).and_return(nil)
  end

  scenario 'is possible' do
    visit root_path
    expect(page.has_content?('Quick Stats'))
  end
end
Here's the error, coming from @right_now.year in the view:
Failure/Error: visit root_path
NoMethodError:
undefined method `year' for nil:NilClass
# ./common/views/layouts/bootstrap/layout.haml:63
EDIT: Is there a way to set instance variables on the controller from within a feature spec?
There's no easy way to accomplish what you want.
The feature spec is handled mostly by Capybara, not RSpec. Capybara runs the majority of the browser / Rails server behavior in an external process, which makes it inaccessible from RSpec's point of view. Thus you cannot use stubs / doubles in this manner.
Feature specs are largely meant to be end-to-end acceptance tests. The idea is to exercise your system the way its users would. Generally, in these types of specs you perform various "workflows": the spec logs a user in, navigates to particular pages, fills in forms, and clicks buttons and links. You then make your expectations about what you see in the view.
This means your spec would look more like:
feature 'Navigation' do
  let(:regular_user) { User.create!(name: 'A Regular User') }

  def sign_in(a_user)
    visit sign_in_url
    # fill out form
    click_button 'Sign In'
  end

  before(:each) do
    sign_in(regular_user)
  end

  scenario 'is possible' do
    visit root_path
    expect(page).to have_content('Quick Stats')
  end
end
https://github.com/per-garden/fakeldap may provide enough ldap functionality for your feature tests.
I'm new to Ruby, Rails and TDD. I'm using RSpec with Capybara and capybara-webkit.
I'm trying to test whether a div element exists on a page.
Test Code:
require 'spec_helper'

describe "Login module" do
  before do
    visit root_path
  end

  it "should have a module container with id mLogin" do
    page.should have_css('div#mLogin')
  end

  it "should have a module container with id mLogin", :js => true do
    page.evaluate_script('$("div#mLogin").attr("id")').should eq "mLogin"
  end
end
The first test passes but the second test fails with:
Login module should have a module container with id mLogin
Failure/Error: page.evaluate_script('$("div#mLogin").attr("id")').should eq "mLogin"
expected: "mLogin"
got: nil
I ran the JS in the browser dev tools and got "mLogin" rather than nil.
Any ideas? Thanks.
find('div#mLogin')[:id].should eq 'mLogin'
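If you prefer the newer expect syntax, the same assertion would read (a sketch, assuming rspec-expectations with the expect syntax enabled):

expect(find('div#mLogin')[:id]).to eq 'mLogin'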
See this from doc:
#evaluate_script
Evaluate the given JavaScript and return the result. Be careful when using this with scripts that return complex objects, such as jQuery statements. execute_script might be a better alternative.
evaluate_script always returns nil, as far as I remember.
Anyway, your second test seems to be testing whether Capybara itself works; your first test is enough.
One likely problem is that the have_css matcher supports Capybara's synchronization feature. If the selector isn't found right away, it will wait and retry until it is found or a timeout elapses.
There's more documentation about this at http://rubydoc.info/github/jnicklas/capybara#Asynchronous_JavaScript__Ajax_and_friends_
On the other hand, evaluate_script runs immediately. Since this is the first thing you do after visiting the page, there's a race condition: it's possible that it executes this script before the page has finished loading.
You can fix this by first finding an element on the page that won't appear until the page has loaded, and only then calling evaluate_script (see the sketch below).
Alternatively, you can wrap your call in a call to synchronize to explicitly retry, but this is generally not recommended. For situations like this, you're much better off using Capybara's built-in matchers. The evaluate_script method should only be used as a last resort, when there is no built-in way to accomplish what you need, and you need to take great care to avoid race conditions.
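A minimal sketch of that approach, reusing the spec from the question:

it "should have a module container with id mLogin", :js => true do
  # have_css waits and retries until the element appears (or the timeout elapses),
  # so by the time it passes the page has finished loading.
  page.should have_css('div#mLogin')
  # Only then run the raw JavaScript, if you still need it.
  page.evaluate_script('$("div#mLogin").attr("id")').should eq "mLogin"
end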
I'm new to Ruby on Rails. I love that it has testing capabilities built in, but I can't quite wrap my head around testing. Here is my first basic question about it.
What really happens during testing?
I understand development: we want some result, and we use the data we have, or get it from users, to achieve the end result we want. But the notion of testing sometimes confuses me. I have been testing applications in the browser for some time; are we replicating the same thing with code? Is that what testing is about, replicating browser testing with automated code? Enlighten me here.
Reading A Guide to Testing Rails Applications will be a good starting point.
Basically, you have three kinds of tests: unit, functional and integration.
Unit tests test your models. In these tests you check whether a single method of your model works as expected; for example, you assign a login with spaces and then test whether the spaces were removed:
class UserTest < ActiveSupport::TestCase
  def test_login_cleaning
    u = User.new
    u.login = " login_with_spaces "
    assert_equal "login_with_spaces", u.login
  end

  # ... and other tests
end
Functional tests test your controllers (and views). In each test you simulate one request sent to one controller with a given set of parameters, and then you ensure that the controller returned the proper response.
Note, however, that in this kind of test you cannot test the rendering of the page, so it is not strictly simulating a browser. To check whether your page actually looks right, you need to do it manually (I am almost sure some techniques exist, but I do not know of them).
An example of functional test:
class UserControllerTest < ActionController::TestCase
  def test_show_renders_admin
    get :show, :id => 1
    assert_response :success
    assert_select "div.user" do
      assert_select "span.name", "Joe Admin"
    end
  end

  def test_show_handles_unknown_id
    get :show, :id => 9999
    assert_response 404
    assert_select "p.warning", "No such user"
  end
end
Integration tests test a sequence of requests: something like a scenario in which a user logs in, gets the 'create user' page, creates a user, and so on. These tests check whether the single requests (already tested in functional tests) are able to work together; a short sketch follows.
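A rough sketch of such a scenario (the paths, parameters and expected markup are invented for illustration; depending on your Rails version the base class is ActionDispatch::IntegrationTest or ActionController::IntegrationTest):

class UserFlowsTest < ActionDispatch::IntegrationTest
  def test_create_user_flow
    # Request the form...
    get "/users/new"
    assert_response :success

    # ...submit it...
    post "/users", :user => { :login => "new_user" }
    follow_redirect!

    # ...and check that the new user's page is rendered.
    assert_response :success
    assert_select "span.name", "new_user"
  end
end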
I see that Simone already pointed the importance of automation in tests, so the link to the Guide is the only value in my answer ;-)
You may find it very helpful to apply some rules of Test Driven Development, especially when your project matures a little.
I know that it's not easy to start a project by writing tests, because often you do not yet know how everything will work. But later, when you find a bug, I strongly suggest starting every bug fix by writing a failing test case. It really, really helps, both in the bug-fixing phase and later, by ensuring that the bug does not reappear.
Well, I noticed that I did not directly answer your question ;-)
When you start the test procedure, Rails:
deletes the test database (so make sure you do not have any valuable data there),
recreates it using the structure of the development database (so make sure you have run all your migrations),
loads all the fixtures (from test/fixtures/*),
loads all the test classes from test/unit/* and the other test directories,
calls every method whose name starts with 'test_' or was created by the test "should do something..." macro (alphabetically, but you should treat the order as random),
executes a special setup procedure before every call and a teardown procedure after it,
and, depending on the configuration, may recreate your database data before every call by loading the fixtures again.
You will find more information in the Guide.
What happens during testing is that you really run a set of specialized programs or routines (test code) that calls routines in your application (code under test) and verifies that they produce the expected results. The testing framework usually has some mechanism to make sure that each test routine is independent of the other tests. In other words the result from one test does not affect the result of the others.
In Rails specifically you run the tests using the rake test command line tool. This will load and execute each test routine in a random order, and tell you if each test was successful or not.
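For reference, on the older Rails versions discussed here the usual invocations are:

rake test                            # unit, functional and integration tests
rake test:units                      # only test/unit
rake test:functionals                # only test/functional
rake test:integration                # only test/integration
ruby -Itest test/unit/user_test.rb   # a single test file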
This answer doesn't necessarily apply to Rails itself. When you talk about testing in Rails, you usually mean automatic testing.
The word automatic is the essence of the meaning. This is in fact the biggest difference between unit testing and "browser" testing.
With unit testing you essentially write code, a routine, that stresses a specific portion of your code to make sure it works as expected. The main advantages of unit testing compared to "browser" testing are:
It's automatic and can be run programmatically.
Your test suite increases during the development lifecycle.
You reduce the risk of regression bugs, because when you modify a piece of code and you run the test suite, you are actually running all the tests, not just a random check.
Here's a basic, very simple example. Take a model, let's say the User model. You have the following attributes: first_name, last_name. You want a method called name to return the first and last name, if they exist.
Here's the method
class User
  def name
    [first_name, last_name].reject(&:blank?).join(" ")
  end
end
and here's the corresponding unit test.
require 'test_helper'

class UserTest < ActiveSupport::TestCase
  def test_name
    assert_equal "John Doe", User.new(:first_name => "John", :last_name => "Doe").name
    assert_equal "John", User.new(:first_name => "John").name
    assert_equal "Doe", User.new(:last_name => "Doe").name
    assert_equal "", User.new.name
  end
end
My tests fail when doing "rake test:functionals" but they pass consistently using autotest.
The failing tests in question seems to be related to Authlogic not logging in the user properly when using rake.
To facilitate signing in a user in tests, I have a test helper method as follows:
class ActionController::TestCase
  def signin(user, role = nil)
    activate_authlogic
    UserSession.create(user)
    user.has_role!(role) if role
  end
end
The above method is used to sign in a user.
My stack is shoulda/authlogic/acl9/factory_girl/mocha
The reason I suspect Authlogic is the issue is that the failing tests look like this:
54) Failure:
test: A logged in user PUT :update with valid data should redirect to user profile. (UsersControllerTest)
[/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/action_controller/macros.rb:202:in `__bind_1251895098_871629'
/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: A logged in user PUT :update with valid data should redirect to user profile. ']:
Expected response to be a redirect to <http://test.host/users/92> but was a redirect to <http://test.host/signin>.
55) Failure:
test: A logged in user PUT :update with valid data should set the flash to /updated successfully/i. (UsersControllerTest)
[/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/assertions.rb:55:in `assert_accepts'
/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/action_controller/macros.rb:41:in `__bind_1251895098_935749'
/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
/var/lib/gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: A logged in user PUT :update with valid data should set the flash to /updated successfully/i. ']:
Expected the flash to be set to /updated successfully/i, but was {:error=>"You must be signed in to access this page"}
Autotest reads all test files upfront AFAIR (it does so with RSpec, I haven't been using plain tests for a long time now so I may be wrong).
To properly test controllers you need to call activate_authlogic in your setup method. This is probably done automatically (globally) for integration tests.
Since autotest reads all tests, it runs this global setup and the functional tests pass. When you run only the functional tests, Authlogic is not activated and your tests fail (see the sketch below).
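A minimal sketch of activating Authlogic explicitly in a functional test, based on the pattern from Authlogic's test documentation (the controller, factory and assertions are just illustrative, matching the asker's shoulda/factory_girl stack):

require 'authlogic/test_case'

class UsersControllerTest < ActionController::TestCase
  include Authlogic::TestCase

  # Run activate_authlogic before every test in this class.
  setup :activate_authlogic

  def test_update_redirects_to_profile
    user = Factory.create(:user)   # factory_girl, as in the asker's stack
    UserSession.create(user)       # log the user in
    put :update, :id => user.id, :user => { :name => "New Name" }
    assert_redirected_to user_path(user)
  end
end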
I'm not sure about where your problem lies, but I suggest you use Cucumber for testing controllers and user interaction instead of unit tests/rspec. The reason for that is that you exercise your entire app, including the authentication and authorization code you have.
Clearly the user is not getting logged in. Seems like Bragi Ragnarson might be on to something.
Here are some other things to isolate the problem:
Understand if the test is incomplete or relying on some side-effect of autotest. Run the test by itself:
ruby test/functional/users_controller_test.rb
Presumably that won't work. If it doesn't, there's some global code that autotest invokes for the non-functional tests. It's probably code sitting in the test/integration or test/unit directories, or in one of the files they require.