I've just started using cucumber to test my Rails apps. I've been very successful blindly following the many good examples.
Given /^I visit (.*) web page$/ do |page|
  visit page
  page.should have_text("some text")
end
Obviously, the call to visit populates the page object. And I have surmised that multiple calls to visit, or click_link, will re-fill the page object. But I'd like a better idea of where and when the page object is instantiated, and what its scope is. Is it global, or do I have to set @page = page after I call visit?
I've looked through the capybara source too and really don't have a good feel for the page object. Where can I find good documentation?
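For context, my current (possibly wrong) mental model, pieced together from reading the source, is roughly this:
# My rough understanding: `page` is just a helper from Capybara::DSL that
# returns the current session, so there is nothing for me to assign after
# calling `visit`.
include Capybara::DSL

visit "/some/path"             # drives Capybara.current_session
page.has_content?("some text") # `page` returns that same session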
Edit: Even more confusion
It appears that I should be using have_content instead of have_text. My confusion today is that:
page.should have_content("this text does not exist on the page")
always passes. I don't understand why this does not fail?
My issue has been solved. Capybara fails silently if you are using Ruby 1.8.7 (which is what I am using on my Mac):
https://groups.google.com/forum/?fromgroups#!topic/cukes/B3UbbyG5k6s
Related
I did a total rearrangement of my routes so I'm having to go back and change some path names. I ran a cucumber test for my navigation and it came back a success. Knowing that I have some old routes to change I thought that was an odd result so I did some manual checking to confirm my suspicions.
As an example, I clicked on a page that was supposed to have an old and obsolete path, <li><%= link_to p.title, forum_post_path(p) %></li>, which breaks the page: when I am manually clicking around I get a NoMethodError, as I should.
But when I run the cucumber test and use Launchy to save and open the page, I don't get that error. The page Launchy gives me renders the link_to helper with the bad _path as if there were no problem at all...
The only thing I can think of is that the words it is expecting really are there, and perhaps the page loads correctly for a brief moment before Rails spits out the method error, so Cucumber picks up on the positive result before the error is returned.
Any possible things I can look at? I would hate to get false positives.
Edit: I just added a 1-second sleep (which should be more than enough) and Cucumber still gives me a pass.
Here are the tests so you can view them:
# navigation.feature
Scenario: As a user I want to be able to view a specific forum and its posts within.
  Given There is a User
  Given There is a Forum
  Given I am on the index
  When I click the "Test forum name" link
  Then I should see "Name: Test forum name"
  Then I should see "Description: test description with a minimum of 20 characters.."
  And...
#navigation_steps.rb
Given(/^There is a User$/) do
  User.create!(email: "user@test.com", password: "password#1")
  expect(User.first.email).to eq("user@test.com")
end

Given(/^I am on the index$/) do
  visit root_path
end

Given(/^There is a Forum$/) do
  Forum.create!(name: "Test forum name", description: "test description with a minimum of 20 characters..", user_id: User.first.id)
  expect(Forum.last.name).to eq("Test forum name")
end

When(/^I click the "([^"]*)" link$/) do |link|
  click_link link
end

Then(/^I should see "([^"]*)"$/) do |message|
  expect(page).to have_content(message)
end
You don't say what exactly your tests are checking for, but a couple of possible reasons for this are:
You're using something like spring to keep your test environment loaded. If you are, it may not be seeing changes to routes.rb and may need restarting before it knows the routes have changed. Solution: restart spring (or whatever you are using to keep the test environment loaded).
You incorrectly have the web-console gem in the test environment (it should only be in the development environment). It catches any errors and produces the nice error page; if that error page happens to contain the text your test checks for (possible, since it includes the surrounding code), then your test can pass. Solution: remove web-console from the test environment in your Gemfile.
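For illustration, a minimal Gemfile sketch (the other gems shown are just placeholders, not your actual Gemfile) with web-console confined to the development group:
# Gemfile -- illustrative sketch
group :development do
  gem 'web-console'   # only loaded in development, never in test
end

group :test do
  gem 'rspec-rails'
  gem 'capybara'
end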
I am trying to port a Selenium test suite to capybara-webkit. The Rails app has an Angular app embedded in the Rails views, and it is not behaving as expected with webkit.
A test like this:
require 'spec_helper'

feature 'Editing company profiles' do
  before do
    @user = create(:employee)
    @company = Company.find(@user.employer.id)
    sign_in_as! @user
  end

  scenario 'successfully', js: true do
    click_link 'Dashboard'
    click_link @company.name
    click_button 'Edit'
    fill_in 'company_name', with: 'new name'
    click_button 'Save'
    expect(page).to have_content "Your company profile has been updated!"
  end
end
will pass without issue in Selenium, but with webkit I get the error:
Failure/Error: Unable to find matching line from backtrace
ActionController::ParameterMissing:
param is missing or the value is empty: company
# ./app/controllers/api/v1/companies_controller.rb:23:in `company_params'
# ./app/controllers/api/v1/companies_controller.rb:10:in `update'
The trace is missing, maybe because it's from Angular land, but the error is reporting that no params are coming from the client. I've tried the capybara-angular gem, but it has not helped. I've also tried saving the page with Capybara, and nothing looks out of place there. Are there any ways to access the PATCH request being generated inside of webkit in this test? I've also gotten similar errors with Poltergeist.
Has anyone setup headless rspec testing with angular + rails? Any tips on how to debug why data isn't being sent over from the client?
Without seeing all of your code, this feels like it could be a problem associated with a known issue where the capybara-webkit gem is unable to pass entity bodies to the server.
I suspect that the update request is being sent as a PATCH request (which is appropriate), but the issue with the gem results in failure for your tests.
A workaround to your problem is to change the method of the request to PUT or POST; the issue linked above shows some options. You will be able to get your test to pass, but it's up to you to decide whether changing the request type is worth it.
Note: In practice it may not matter if you don't actually use PATCH, as you could technically use (some of) the other HTTP methods interchangeably -- but use caution, as there are reasons to prefer a specific HTTP method for a given situation. See this rubyonrails.org post from a few years ago for some details.
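For what it's worth, here's a hedged sketch of the Rails side (the namespace and actions are my guesses from your backtrace). Standard resourceful routes already map both PATCH and PUT to #update, so switching the client to PUT shouldn't require any route changes:
# config/routes.rb -- illustrative sketch based on the backtrace above
namespace :api do
  namespace :v1 do
    # `resources` maps both "PATCH /companies/:id" and "PUT /companies/:id"
    # to companies#update, so a client switched from PATCH to PUT still
    # hits the same action.
    resources :companies, only: [:show, :update]
  end
end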
In Michael Hartl's Rails Tutorial, many examples use an expect() method. Here's one such example in a cucumber step definition:
Then /^she should see her profile page$/ do
  expect(page).to have_title(@user.name)
end
This same example can be written as such to the same effect:
Then /^she should see her profile page$/ do
  page.should have_title(@user.name)
end
Why use expect()? What value does it add?
There's documentation in the rspec-expectations gem about this:
Why switch over from should to expect? Basically:
should and should_not work by being added to every object. However, RSpec does not own every object and cannot ensure they work consistently on every object. In particular, they can lead to surprising failures when used with BasicObject-subclassed proxy objects.
expect avoids these problems altogether by not needing to be available on all objects.
A more detailed explanation is given in RSpec's New Expectation Syntax.
expect is a bit more universal. You're passing an object or a block to RSpec's expect method, rather than attempting to call a should method on objects that are outside of your control. As a result, expect has come more and more into favor among developers.
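For illustration, here are the two forms expect accepts, a value or a block (the examples reuse names from earlier on this page and are only a sketch):
# Value form: the object under test is passed to expect, so nothing has
# to be monkey-patched onto it.
expect(page).to have_title(@user.name)

# Block form: the block is passed instead, which is handy for asserting
# on side effects such as record creation.
expect { User.create!(email: "user@test.com", password: "password#1") }
  .to change(User, :count).by(1)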
New to Ruby, Rails and TDD. I'm using RSpec with Capybara and Capybara webkit.
Trying to test if a div element exists on a page.
Test Code:
require 'spec_helper'

describe "Login module" do
  before do
    visit root_path
  end

  it "should have a module container with id mLogin" do
    page.should have_css('div#mLogin')
  end

  it "should have a module container with id mLogin", :js => true do
    page.evaluate_script('$("div#mLogin").attr("id")').should eq "mLogin"
  end
end
The first test passes but the second test fails with:
Login module should have a module container with id mLogin
Failure/Error: page.evaluate_script('$("div#mLogin").attr("id")').should eq "mLogin"
expected: "mLogin"
got: nil
Ran the JS in the browser dev tools and got "mLogin" rather than nil.
Any ideas? Thanks.
find('div#mLogin')[:id].should eq 'mLogin'
See this from the docs:
#evaluate_script
Evaluate the given JavaScript and return the result. Be careful when using this with scripts that return complex objects, such as jQuery statements. execute_script might be a better alternative.
evaluate_script always returns nil, as far as I remember.
Anyway, your second test seems like it is testing whether Capybara itself works; your first test is enough.
One likely problem is that the have_css matcher supports Capybara's synchronization feature. If the selector isn't found right away, it will wait and retry until it is found or a timeout elapses.
There's more documentation about this at http://rubydoc.info/github/jnicklas/capybara#Asynchronous_JavaScript__Ajax_and_friends_
On the other hand, evaluate_script runs immediately. Since this is the first thing you do after visiting the page, there's a race condition: it's possible that it executes this script before the page has finished loading.
You can fix this by waiting for an element that won't appear until the page is loaded (using a built-in matcher) before you call evaluate_script.
Alternately, you can wrap your call in a call to synchronize to explicitly retry, but this is not generally recommended. For situations like this, you're much better off using Capybara's built-in matchers. The evaluate_script method should only be used as a last resort when there is no built-in way to accomplish what you need to do, and you need to take a lot of care to avoid race conditions.
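As a rough sketch (the selectors come from your test), that could look like:
it "should have a module container with id mLogin", :js => true do
  # have_css waits and retries, so by the time it passes the page has
  # finished loading and the jQuery call below is safe to run.
  page.should have_css('div#mLogin')
  page.evaluate_script('$("div#mLogin").attr("id")').should eq "mLogin"
end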
When working with Capybara and Rspec in my features spec, after calling "visit", page.body returns:
"<html><head></head><body></body></html>"
This, of course, causes all my "find"s to fail, as there is nothing there. save_and_open_page, courtesy of launchy, shows me the complete, accurate page, chock full of HTML tags.
Any thoughts on why Capybara is not setting the page element correctly?
Turns out this was due to a conflict between webrat and capybara. Diving into the source for where "visit" and "page" are referenced, I discovered that visit is declared in both Webrat and Capybara; however, the effect of "visit" in each differs: Capybara sets the page variable, while Webrat sets a response variable. I don't yet know enough about how to use both of them, as they seem to be useful for different purposes - if anyone wants to leave some comments with some resources I would certainly appreciate it!
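In case it helps anyone else, here's a rough sketch of what I mean by keeping only one of the two DSLs loaded in the test group (the exact gem list is illustrative only):
# Gemfile -- illustrative sketch: load only one of the two DSLs in tests
group :test do
  gem 'capybara'
  # gem 'webrat'   # removed; its `visit` sets a response instead of Capybara's page
end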
I was getting this too.
When I printed out the markup from the visit call, I found that the page was actually returning a 404, but I wasn't getting a Capybara 404 error.
If you run something like the following, it will print out the markup so you can debug more easily:
When /^I view the front page$/ do
  @visit = get "#{host}/frontpage"
  puts @visit
end
Hope that helps someone.
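A hedged Capybara-only variant of the same idea (the path is a placeholder, and status_code is only available on drivers such as rack-test):
When /^I view the front page$/ do
  visit "/frontpage"
  puts page.status_code   # a 404 shows up here with the rack-test driver
  puts page.html          # dump the rendered markup for debugging
end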