I'm trying to set up my ember-cli / rails app with integration testing. After fussing with Ember's built-in testing library, I've switched over to using RSpec (which I was already using for the backend) and Capybara. While I can finally fill in forms correctly, my POST request to sign_in always fails. I think the issue is that Capybara is posting the request against a different database environment or something! If I check at the rails console, the user is certainly there, and I create a user as part of the RSpec test anyway.
Has anyone managed to set up Ember/Rails/Capybara/RSpec?
This is my spec:
describe "the signin process", :type => :feature, :js => true do
it "signs me in" do
visit '/'
FactoryGirl.create :user, email: "user#example.com", password: 'password'
within("#tufts-nav") do
fill_in 'email', :with => 'test#test.com'
fill_in 'password', :with => 'password'
end
click_button 'Sign In'
# here authentication fails mysteriously
expect(page).to have_content 'Jobs'
end
end
Simple/dumb solution
Have RSpec build ember into rails' public/ before your feature specs.
# Build ember and hijack rails' public/ directory to host the ember app.
# This way there is no need to change settings or run any extra servers.
# Assumes the rails API root path is not used (since ember is now hosted from it).
RSpec.configure do |config|
  public_path = Rails.root.join('public')

  config.before(:context, type: :feature) do
    Dir.chdir 'frontend' do
      builder = spawn("ember build --environment=ci --output-path=#{public_path}")
      _pid, status = Process.wait2(builder)
      fail "non-zero exit status #{status}" unless status.success?
    end
  end

  config.after(:context, type: :feature) do
    `git clean -fd #{public_path}`
    `git checkout #{public_path}`
  end
end
Configuring
Our ember-cli app is in rails-root/frontend; you may need to change the name or path to point at yours.
You might want to experiment with the environment part, e.g. using production instead. I use ci because my production env is hard-coded to target an API we host on Heroku, but I want the tests to be self-contained, i.e. to run against the rails app Capybara hosts.
Git is needed for cleanup. If you don't have that, you could build to another path and use mv to swap out the rails public/ dir, then put it back afterwards (see the sketch below).
You may prefer not to have a global install of ember-cli used to build your project (for versioning reasons). If you want to use the project-local one, point the spawn command at node_modules/ember-cli/bin/ember instead of just ember.
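A sketch combining those last two points, i.e. building with the project-local ember binary and using mv instead of git for cleanup (untested; the tmp/public_backup path is my own assumption):
require 'fileutils'

RSpec.configure do |config|
  public_path = Rails.root.join('public')
  backup_path = Rails.root.join('tmp', 'public_backup')

  config.before(:context, type: :feature) do
    # Stash the real public/ so we can restore it without git.
    FileUtils.mv public_path, backup_path
    Dir.chdir 'frontend' do
      ember = File.join('node_modules', 'ember-cli', 'bin', 'ember')
      builder = spawn("#{ember} build --environment=ci --output-path=#{public_path}")
      _pid, status = Process.wait2(builder)
      fail "non-zero exit status #{status}" unless status.success?
    end
  end

  config.after(:context, type: :feature) do
    # Throw away the built ember app and put the original public/ back.
    FileUtils.rm_rf public_path
    FileUtils.mv backup_path, public_path
  end
end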
Otherwise, ember-cli-rails
If you're treating the ember app as a component of your rails app, and want to write tests at the rails level (rspec, capybara, etc) then ember-cli-rails is likely a good choice.
This gem handles building the ember app and serving it from the urls you mount it at in your rails routes.
This is transparent to capybara: it sends a request to a ruby webserver and is given back html that calls out to css and js, just like rails normally does.
Be aware there are some issues with assets from ember-cli getting served by rails with the right paths at the moment, which made me switch away to get something deployed quickly. If you're using the rails asset pipeline for css, images and so on, then you shouldn't have a problem. For me, it affected images and webfonts in the ember-cli app.
Apart from that, there will need to be a server for the API and a server for the ember front-end (proxying to the rails API), and Capybara will need to be told to connect to the ember front-end.
This Rakefile and this post seem like a start.
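For the Capybara side of that, a minimal sketch (the port 4200 and running the rails API separately are my assumptions, not something ember-cli-rails prescribes):
# spec/spec_helper.rb (sketch): point Capybara at an externally running
# ember front-end instead of letting it boot the rails app itself.
Capybara.run_server = false
Capybara.app_host = 'http://localhost:4200' # wherever `ember serve` is listening
Capybara.default_wait_time = 10             # ember apps render asynchronously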
Related
Background:
I have a rails application with cucumber installed. I would like to use the associated cucumber tests to test the application deployed on a separate system.
Problem:
Basically, I have the URL for the deployed app and the cucumber tests. When I start cucumber with that URL as an argument, I want it to run the tests against the external link without invoking the rails app it lives alongside.
Why the need:
Cucumber always tries to connect to the postgres database, which is a problem because I am trying to dockerise it and I do not want to include postgres in the image (for reasons that are out of scope here).
So, is it possible to run cucumber without invoking the rest of the stack (the app itself, calls to the db, and so on)?
This can be achieved by defining a Rack app inside your rails app that acts as a proxy (routing to the endpoint you want).
Example:
require 'sinatra/base'
require 'net/http'

class TestAppRoutes < Sinatra::Application
  uri = URI.parse("http://10.0.0.0")

  get '/*' do
    request_url = "#{uri}/#{params['splat'][0]}"
    response = Net::HTTP.get(URI.parse(request_url))
    response
  end
end
Then define a ruby file in features/support to instantiate the rack app:
if ENV['BASE_URL']
  Lookout::Rack::Test.app = APP::TestAppRoutes
end
Finally, when invoking cucumber, run: cucumber BASE_URL=http://10.10.10
Check out: https://github.com/lookout/lookout-rack-test
I have recently started using the websocket-rails gem and all is fine in development and even in production, but when I run rspec tests I get the following error:
Failure/Error: Unable to find matching line from backtrace
RuntimeError:
eventmachine not initialized: evma_install_oneshot_timer
The message seems to appear on the second test I run. When I run a previously failing test on its own by adding :focus => true to the test and then running rspec --tag focus, it passes OK. If I add the focus tag to more than one test, usually the first one passes and the second test gives the error.
I am using RSpec and Capybara with the Selenium web driver. I think this may be to do with the test web server not being an "eventmachine based web server", but I am not 100% sure. I have tried setting up the websockets server as standalone by adding config.standalone = true in my websocket initializer and then starting the server with rake websocket_rails:start_server RAILS_ENV=test.
I solved this by configuring Capybara to use the Thin server, adding the following within my spec_helper.rb file:
RSpec.configure do |config|
  Capybara.server do |app, port|
    require 'rack/handler/thin'
    Rack::Handler::Thin.run(app, :Port => port)
  end

  # other configuration here....
end
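For this to work, thin has to be available in the test environment; if it isn't already a dependency, something like the following in the Gemfile should do (an assumption on my part, not part of the original answer):
# Gemfile
group :test do
  gem 'thin' # eventmachine-based server, so websocket-rails can install its timers
end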
I've configured my CI server with Jenkins in order to run my tests after each push to Git. I have some integration tests, and there is always one that fails. The output says that Capybara can't find the element, but if I run the test locally it works.
I've researched a bit and found the headless gem, but it didn't help. I also tried setting Capybara.default_wait_time = 5, but nothing changed.
Does anyone know what kind of configuration I should set in order to get my jenkins green?
Thank you in advance
The failure may be that the element isn't actually on the page in the CI environment. Using the save_and_open_page method will show the exact state of the page as it is rendered on the CI server. You can add it to a test as simply as:
it 'should register successfully' do
  visit registration_page
  save_and_open_page
  fill_in 'username', :with => 'mrUser'
end
This assumes you can view the page on the CI server; alternatively, when running headless, the file is saved so that you can review it later.
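If opening a browser on the CI box isn't possible, you can control where the snapshot is written and pull it off the server afterwards (the tmp/capybara path is just an example):
# In your test setup. Depending on your Capybara version the setting is
# Capybara.save_path (newer) or Capybara.save_and_open_page_path (older).
Capybara.save_and_open_page_path = Rails.root.join('tmp', 'capybara')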
I created a Refinery CMS extension following the extension guide and the extension testing guide.
Some rspec unit tests pass and http://localhost/the_extension loads in a web browser. But now tests are failing because the dummy app has no pages (or any other db tables).
I tried just copying the dummy_dev file to dummy_test (and a whole bunch of other things), and that works (sort of): when I run the dummy app (cd spec/dummy; rvmsudo rails s -p 80 -e test) the pages load in my browser, and when I query the pages table (sqlite3 dummy_test 'select * from refinery_pages;') there are pages.
But every time I run cd vendor/extensions/the_extension; bundle exec rake spec it deletes all the entries in the page table.
I tried adding this to spec/spec_helper.rb:
# Copy the prototype test database to the test database.
FileUtils.cp File.expand_path('../dummy_test_db_prototype', __FILE__), File.expand_path('../dummy/dummy_test', __FILE__)
But by the time the specs are run, the test db has been wiped. The db is wiped after the spec files are loaded, because if I put the following at the bottom of spec/features/refinery/the_extension/the_extension_spec.rb, the db still has pages:
# DEBUG
puts 'Querying test db:'
system "sqlite3 /some_app/vendor/extensions/the_extension/spec/dummy/dummy_test 'select * from refinery_pages;'"
puts 'Done querying.'
How do I make sure there are pages in the test database when the tests run?
describe "stuff" do
it "should do stuff" do
before { Refinery::Page.create([{ title: 'home', link_url: '/' }]) }
...
end
end
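If every feature spec needs the same seed pages, a variant (my assumption, not part of the original answer) is to create them globally in the spec helper:
# spec/spec_helper.rb -- sketch: seed the dummy app's pages before each feature spec
RSpec.configure do |config|
  config.before(:each, type: :feature) do
    Refinery::Page.create(title: 'home', link_url: '/') if Refinery::Page.count.zero?
  end
end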
I've recently moved my CI server (TeamCity) to another, more powerful machine with the same configuration and a pretty similar OS.
Since then some of my integration specs have started to fail. My setup is pretty standard, Rails 3 + capybara + poltergeist + phantomjs.
Failures are deterministic: they always happen and they are always related to nodes missing from the DOM. Also, failures happen across different projects with a similar setup, so it's not something related to project configuration. This is happening with both capybara 1.x and capybara 2.
This is the simplest failing spec. Note that this spec runs with no need for JavaScript, so the issue is also present in rack-only specs.
scenario 'require an unsubscription' do
  visit unsubscribe_index_path

  within main_content do
    choose list.description
    fill_in 'Email', :with => subscriber.email
    click_button 'Unsubscribe'
  end

  save_page # <--- Added to debug output

  # !!! HERE is the first failing assertion
  page.should have_content('You should have received a confirmation message')

  # Analytics event recorded
  # !!! this also is failing
  page.should have_event('Unsubscription', 'Sent', list_name)

  # If I comment out the previous two lines the spec passes on the CI machine;
  # this means that the form is submitted with success, since the email is
  # triggered from controller code
  last_email_sent.should have_subject 'Unsubscribe request received'
  last_email_sent.should deliver_to subscriber.email
end
What I've tried:
ran the specs on different machines; they work on every dev machine and also on a staging server. I can only reproduce the failure on the CI machine, even outside of the CI environment (i.e. by running the specs via the command line)
Increased Capybara.default_wait_time to a ridiculous 20
Tried with a brutal sleep before the page.should have_content line
upgraded RVM, ruby, capybara and poltergeist to their latest versions on the CI machine.
upgraded teamcity to its latest version
The strangest thing I found came from adding a save_page call just before the failing line. If I run the spec on my machine and then on the CI machine where it is failing, and compare the two saved files, the result is this:
$ diff capybara-201309071*.html
26a27,29
> <script type='text/javascript'>
> _gaq.push(["_trackEvent","Unsubscription","Sent","listname"]);
> </script>
90a94,96
> <div class="alert-message message notice">
> <p>You should have received a confirmation message</p>
> </div>
Those are exactly the two missing pieces that make the spec fail. So the form is submitted and the controller action runs successfully, but two pieces of the DOM are missing. How is that possible? And why is this happening only on one machine?
For the record, those two pieces of DOM are added with standard Rails tools, one with
redirect_to unsubscribe_index_path, notice: ...
and the other with the analytical gem.
I've found the issue: in the two failing projects I'm using dalli_store as the session store, and I had put the config.cache_store = :dalli_store line in config/application.rb instead of config/environments/production.rb.
On the old CI server there was a memcached daemon running, hence all specs were passing.
On the new server, since it's just a CI server and doesn't run any production or staging code, memcached is not present, so any session write (such as flash messages) was silently discarded. That is why all specs of that kind were failing.
I solved it by moving the config.cache_store line into the appropriate environment file, but I'm still wondering why the dalli gem doesn't raise any warning when no memcached is available. While the choice not to fail on a missing cache daemon is reasonable, since the application should work without cached data, it could be a performance killer in production and might go unnoticed if no warning is given.
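For reference, the fix amounts to scoping the cache store to the environments that actually have memcached (a sketch; the server address is a placeholder):
# config/environments/production.rb (moved out of config/application.rb)
config.cache_store = :dalli_store, 'memcached.example.com:11211'

# config/environments/test.rb -- the CI machine has no memcached,
# so use a local in-memory store there instead.
config.cache_store = :memory_store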