Poltergeist losing db info - capybara

I have an ActiveRecord class, Post.
In my feature spec which runs with Capybara, RSpec and Poltergeist, I created two instances of it with FactoryGirl
FactoryGirl.create(:post, topic: "Blade running")
FactoryGirl.create(:post, topic: "Dreaming of electric sheep")
Which I can immediately verify in the spec:
scenario "Linking to post by tag", js: true do
FactoryGirl.create(:post, topic: "Blade running")
FactoryGirl.create(:post, topic: "Dreaming of electric sheep")
Post.count # => 2
Post.all.all?(&:persisted?) # => true
visit root_path
# more stuff
end
But when, on the next line, I visit the root path for my app (which points to the index action of my posts controller), the posts have vanished (along with their associations):
class PostsController < ApplicationController
  def index
    @posts = Post.all # => []
    # stuff
  end

  # more methods
end
and when I come back out of the controller action, to the test level, they're back:
Post.count # => 2
Post.all.all?(&:persisted?) # => true
visit root_path
Post.count # => 2
Post.all.all?(&:persisted?) # => true
In the specs that don't use JS I don't have this problem - and just removing "js: true" from the scenario fixes it. But since I'm using a part of the site that requires JS, that's not an option.
I would post this on Poltergeist issues, but since I'm doing something pretty fundamental, it feels a lot more likely that I'm doing something wrong than that this part of Poltergeist is broken. What's my mistake?
Versions:
Rails is 5.0.0
Poltergeist is 1.10.0
Capybara is 2.8.0
Rspec-Rails is 3.5.1

It sounds like you're using transactional testing, which doesn't work with the JS-capable drivers because the test and the app code run in different threads. Each thread maintains its own database connection, so one thread cannot see records created by the other until the transaction is committed. With transactional testing the transactions are never committed, so neither thread can see anything created by the other. See https://github.com/jnicklas/capybara#transactions-and-database-setup, then configure database_cleaner to use the truncation (or deletion) strategy for your JS-capable tests - https://github.com/DatabaseCleaner/database_cleaner#rspec-with-capybara-example. A typical setup looks something like the sketch below.
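A minimal sketch of that configuration, adapted from the DatabaseCleaner README; the file location and the js: true metadata hook are assumptions, so adjust them to your suite:

# spec/support/database_cleaner.rb (assumed location)
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    # Non-JS specs can keep using fast transactions.
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, js: true) do
    # JS-capable drivers (Poltergeist, etc.) run the app in another thread,
    # so records must really be committed for the app to see them.
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.start
  end

  config.append_after(:each) do
    DatabaseCleaner.clean
  end
end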

Related

Running a JS test suite from a Rails environment with a fixture factory, or an API for frontend to request certain fixtures to be loaded on backend

I'm a frontend developer working with EmberJS. That's a fantastic frontend framework that adopts a lot of virtues from Rails: it's very sophisticated and opinionated and at the same time it's extremely agile and handy to use.
Ember has its own test suite, perfectly capable of acceptance testing. But it either tests against a backend mock (which is bad for two reasons: creating a functional backend mock is tedious, and it does not test frontend-backend integration), or it tests against a normally running backend, which makes it impossible to use a fixture factory like FactoryGirl/Fabrication and forces you to reset the test database manually after every test.
A traditional solution to this is to use Capybara. The problem with Capybara is that Ember is very asynchronous by its nature. Capybara is unable to track whether Ember has finished requesting data/calculating reactive properties/rendering GUI/etc.
There are some people using Capybara to test Ember, and all of them rely on ugly hacks. Here are just a couple, to give you a taste:
patiently do
  return unless page.has_css? '.ember-application'
end

2000.times do # this means up to 20 seconds
  return if page.evaluate_script "(typeof Ember === 'object') && !Ember.run.hasScheduledTimers() && !Ember.run.currentRunLoop"
  sleep 0.01
end
source
def wait_for_ajax
  counter = 0
  while true
    active = page.execute_script("return $.active").to_i
    # puts "AJAX $.active result: " + active.to_s
    break if active < 1
    counter += 1
    sleep(0.1)
    raise "AJAX request took longer than 5 seconds OR there was a JS error. Check your console." if counter >= 50
  end
end
source
So the idea is to use Ember's wonderful JS test suite, but execute the tests from Rails. It would allow using FactoryGirl to set up the database specifically for every test... and run a NodeJS/Ember test instead of Capybara.
I can imagine something like this:
describe "the signin process", :type => :feature do
before :each do
FactoryGirl.create(:user, :email => 'user#example.com', :password => 'password')
end
it "signs me in" do
test_in_ember(:module => 'Acceptance: User', :filter => 'signs me in')
end
it "logs me out" do
test_in_ember(:module => 'Acceptance: User', :filter => 'logs me out')
end
end
Ember has a command-line interface to testing that can run specific tests, based on Testem.
Alternatively, the Ember test suite could be used normally, but every test would start with a call to a special API, requesting certain fixture factory definitions to be loaded.
Formal question: how do I use the best from both worlds together: native test suite from EmberJS and native fixture factory from Ruby on Rails?
There are a number of issues:
FactoryGirl is a Ruby DSL for creating factories - each factory definition is just a block of Ruby code, not some static definition. If you wanted to port the dynamic aspects of a factory, such as sequences, you would have to write a parser which translates the DSL into a JavaScript DSL.
JavaScript factories would not be able to write to the database without going through your public APIs anyway.
Some rough ideas of how this could be bridged:
Set up fixtures before your specs and pass them to your scripts. Most JavaScript-capable drivers let you use page.execute_script, which can be used to pass a serialized fixture, as sketched below. Cons: must be done before test execution begins.
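A rough sketch of that first idea; the factory name, the /tests path and the window.testFixtures convention are made up for illustration:

# Somewhere in a spec, before the JS test suite starts:
user = FactoryGirl.create(:user)
visit '/tests' # wherever the JS test runner is mounted (assumed path)
page.execute_script("window.testFixtures = #{ { user: user.as_json }.to_json };")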
Set up a factories API which allows you to invoke factories via ajax. Preferably in a mountable Rails Engine. Cons: slow, asynchronous
class FactoriesController < ApplicationController
  # GET /spec/factories/:factory/new
  def new
    @factory = FactoryGirl.build(params[:factory_uid])
    respond_to do |f|
      f.json { render json: @factory }
    end
  end

  # POST /spec/factories/:factory
  def create
    FactoryGirl.create(params[:factory_uid], factory_params)
  end

  # ...
end
Create a FactoryGirl parser which still does not solve issue 2.
added:
A crappy example factory client:
var FactoryBoy = {
  build: function(factory, cb) {
    $.ajax('/factories/' + factory + '/new', { dataType: 'json' }).done(function(data) { cb(data); });
  },
  create: function(factory, cb) {
    $.ajax('/factories/' + factory, { dataType: 'json', method: 'POST' }).done(function(data) { cb(data); });
  }
};
Test example:
module('Unit: SomeThing');

test('Nuking a user', function(assert) {
  var done = assert.async();
  FactoryBoy.create('user', function(user) {
    this.store.find('user', user.id).then(function(u) {
      $('#nuke_user').click();
      ok(u.get('isDeleted'));
      done();
    });
  });
});

Upgrade to Rails 4.2 breaks rspec feature specs - why?

My model and controller specs are running fine, but after the upgrade to Rails 4.2 my feature specs, which use Capybara, no longer work. Example:
# in spec_helper.rb:
def login_admin
  create(:user,
         first_name: 'John',
         last_name: 'Doe',
         email: 'doej@test.com',
         admin: true)
  visit root_path
  fill_in 'email', with: 'doej@test.com'
  fill_in 'password', with: 'password1'
  click_button 'Log in'
  puts 'created'
end
# in spec/features/albums_spec.rb
feature "Albums", :type => :feature do
  before(:each) do
    login_admin
  end

  scenario 'do something' do
    save_and_open_page
  end
end
When I run this spec, it never finishes; it neither passes nor fails. No error is thrown; it just sits there, showing "Albums" with the cursor beneath. 'created' is never written to stdout, and the page is never launched by the save_and_open_page call. The test log shows the ERB file is rendered by my login action. This was all working prior to the Rails 4.2 upgrade.
This only fails during the spec run - using the app in the browser works fine.
What am I missing here?
UPDATE: possible problems related to capybara/rspec:
Avoid creating models directly in feature specs, because the app under test runs in a different thread with its own database connection and won't see them. Instead, create the user through Capybara steps (i.e. "sign up" the user every time), as sketched after this list.
If you really need the database prepared for a scenario in a way that's impossible to reach by clicking around the site, implement a simple "admin" area in your app (you'll probably need one anyway), or an admin API, or something like a CSV upload option, etc.
Otherwise, you can search for "capybara rspec database_cleaner append_after" for a setup that supports creating models in feature specs, but I've found that none of the solutions are really bullet-proof. You could try: https://github.com/RailsApps/rails_apps_testing
I'm guessing your example is/was stuck on a database operation (waiting for a db connection held by another thread to be released).
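A sketch of the "sign up through the UI" approach from the first suggestion above; the route helper, field labels and button text are assumptions about your app, so adjust them to your actual sign-up form:

def sign_up_user(email:, password:)
  visit new_user_registration_path # assumed Devise-style route
  fill_in 'Email', with: email
  fill_in 'Password', with: password
  fill_in 'Password confirmation', with: password
  click_button 'Sign up'
end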
PREVIOUS ANSWER:
A few ideas:
catch exceptions in the login_admin method and print them out using STDERR.puts
remove the click_button and see if it fails as expected (shows you the login page)
add a sleep 4 after the password is typed in
separate the click_button into 2 calls (to see which one hangs):
btn = find(:button, locator, options)
STDERR.puts "found: #{btn.inspect}"
btn.click
use bundle show capybara to find out where it is installed, edit the find (or click) method, and put STDERR.puts calls in there to see what's going wrong

How to test ActionMailer deliver_later with rspec

I'm trying to upgrade to Rails 4.2, using delayed_job_active_record. I've not set the delayed_job backend for the test environment, thinking that way jobs would execute straight away.
I'm trying to test the new 'deliver_later' method with RSpec, but I'm not sure how.
Old controller code:
ServiceMailer.delay.new_user(@user)
New controller code:
ServiceMailer.new_user(@user).deliver_later
I USED to test it like so:
expect(ServiceMailer).to receive(:new_user).with(@user).and_return(double("mailer", :deliver => true))
Now I get errors using that. (Double "mailer" received unexpected message :deliver_later with (no args))
Just
expect(ServiceMailer).to receive(:new_user)
fails too with 'undefined method `deliver_later' for nil:NilClass'
I've tried some examples that allow you to see if jobs are enqueued using test_helper in ActiveJob but I haven't managed to test that the correct job is queued.
expect(enqueued_jobs.size).to eq(1)
This passes if the test_helper is included, but it doesn't allow me to check it is the correct email that is being sent.
What I want to do is:
test that the correct email is queued (or executed straight away in test env)
with the correct parameters (@user)
Any ideas??
thanks
If I understand you correctly, you could do:
message_delivery = instance_double(ActionMailer::MessageDelivery)
expect(ServiceMailer).to receive(:new_user).with(@user).and_return(message_delivery)
allow(message_delivery).to receive(:deliver_later)
The key thing is that you need to somehow provide a double for deliver_later.
Using ActiveJob and rspec-rails 3.4+, you could use have_enqueued_job like this:
expect {
  YourMailer.your_method.deliver_later
  # or any other method that eventually would trigger mail enqueuing
}.to(
  have_enqueued_job.on_queue('mailers').with(
    # `with` isn't mandatory, but it helps if you want to make sure it's
    # the correct enqueued mail.
    'YourMailer', 'your_method', 'deliver_now', any_param_you_want_to_check
  )
)
Also, double-check that in config/environments/test.rb you have:
config.action_mailer.delivery_method = :test
config.active_job.queue_adapter = :test
Another option would be to run inline jobs:
config.active_job.queue_adapter = :inline
But keep in mind this would affect the overall performance of your test suite, as all your jobs will run as soon as they're enqueued.
If you find this question but are using ActiveJob rather than simply DelayedJob on its own, and are using Rails 5, I recommend configuring ActiveJob in config/environments/test.rb:
config.active_job.queue_adapter = :inline
(this was the default behavior prior to Rails 5)
I will add my answer because none of the others was good enough for me:
1) There is no need to mock the Mailer: Rails basically does that already for you.
2) There is no need to really trigger the creation of the email: this will consume time and slow down your test!
That's why in environments/test.rb you should have the following options set:
config.action_mailer.delivery_method = :test
config.active_job.queue_adapter = :test
Again: don't deliver your emails using deliver_now; always use deliver_later. That prevents your users from waiting for the actual delivery of the email. If you don't have sidekiq, sucker_punch, or any other queue backend in production, simply use config.active_job.queue_adapter = :async. Use either :async or :inline for the development environment.
With this configuration for the test environment, your emails will always be enqueued and never actually delivered: this saves you from mocking them, and you can check that they are enqueued correctly.
In your tests, always split the test in two:
1) One unit test to check that the email is enqueued correctly and with the correct parameters
2) One unit test for the mail to check that the subject, sender, receiver and content are correct.
Given the following scenario:
class User
  after_update :send_email

  def send_email
    ReportMailer.update_mail(id).deliver_later
  end
end
Write a test to check the email is enqueued correctly:
include ActiveJob::TestHelper
expect { user.update(name: 'Hello') }.to have_enqueued_job(ActionMailer::DeliveryJob).with('ReportMailer', 'update_mail', 'deliver_now', user.id)
and write a separate test for your email
RSpec.describe ReportMailer do
  describe '#update_mail' do
    subject(:mailer) { described_class.update_mail(user.id) }

    it { expect(mailer.subject).to eq 'whatever' }
    ...
  end
end
You have tested exactly that your email has been enqueued and not a generic job.
Your test is fast
You needed no mocking
When you write a system test, feel free to decide if you want to really deliver emails there, since speed doesn't matter that much anymore. I personally like to configure the following:
RSpec.configure do |config|
  config.around(:each, :mailer) do |example|
    perform_enqueued_jobs do
      example.run
    end
  end
end
and assign the :mailer attribute to the tests where I want to actually send emails.
For more about how to correctly configure your email in Rails, read this article: https://medium.com/@coorasse/the-correct-emails-configuration-in-rails-c1d8418c0bfd
Add this:
# spec/support/message_delivery.rb
class ActionMailer::MessageDelivery
  def deliver_later
    deliver_now
  end
end
Reference: http://mrlab.sk/testing-email-delivery-with-deliver-later.html
A nicer solution (than monkeypatching deliver_later) is:
require 'spec_helper'

include ActiveJob::TestHelper

describe YourObject do
  around { |example| perform_enqueued_jobs(&example) }

  it "sends an email" do
    expect { something_that.sends_an_email }.to change(ActionMailer::Base.deliveries, :length)
  end
end
The around { |example| perform_enqueued_jobs(&example) } ensures that background tasks are run before checking the test values.
I came here with the same doubt and resolved it in a less verbose (single-line) way, inspired by this answer:
expect(ServiceMailer).to receive_message_chain(:new_user, :deliver_later).with(@user).with(no_args)
Note that the last with(no_args) is essential.
But if you don't care whether deliver_later is being called, just do:
expect(ServiceMailer).to receive(:new_user).with(@user).and_call_original
A simple way is:
expect(ServiceMailer).to(
  receive(:new_user).with(@user).and_call_original
)
# subject
This answer is for the default Rails test framework (Minitest), not for RSpec...
If you are using deliver_later like this:
# app/controllers/users_controller.rb
class UsersController < ApplicationController
  …
  def create
    …
    # Yes, Ruby 2.0+ keyword arguments are preferred
    UserMailer.welcome_email(user: @user).deliver_later
  end
end
You can check in your test if the email has been added to the queue:
# test/controllers/users_controller_test.rb
require 'test_helper'

class UsersControllerTest < ActionController::TestCase
  …
  test 'email is enqueued to be delivered later' do
    assert_enqueued_jobs 1 do
      post :create, {…}
    end
  end
end
If you do this, though, you'll be surprised by a failing test telling you that assert_enqueued_jobs is not defined.
This is because our test inherits from ActionController::TestCase which, at the time of writing, does not include ActiveJob::TestHelper.
But we can quickly fix this:
# test/test_helper.rb
class ActionController::TestCase
  include ActiveJob::TestHelper
  …
end
Reference:
https://www.engineyard.com/blog/testing-async-emails-rails-42
For recent Googlers:
allow(YourMailer).to receive(:mailer_method).and_call_original
expect(YourMailer).to have_received(:mailer_method)
I think one of the better ways to test this is to check the status of the job alongside the basic response JSON checks, like:
expect(ActionMailer::MailDeliveryJob).to have_been_enqueued.on_queue('mailers').with('mailer_name', 'mailer_method', 'deliver_now', { :params => {}, :args => [] })
I came here looking for a more complete test: not just checking whether there is one mail waiting to be sent, but also verifying its recipient, subject, etc.
I have a solution that comes from here, but with a little change:
As it says, the crucial part is
mail = perform_enqueued_jobs { ActionMailer::DeliveryJob.perform_now(*enqueued_jobs.first[:args]) }
The problem is that the parameters the mailer receives here are different from the ones it receives in production: in production the first parameter can be a model, but in testing the mailer will receive a hash (the serialized GlobalID) instead, so it will crash:
enqueued_jobs.first[:args]
["UserMailer", "welcome_email", "deliver_now", {"_aj_globalid"=>"gid://forjartistica/User/1"}]
So, if we call the mailer as UserMailer.welcome_email(@user).deliver_later, the mailer receives a User in production, but in testing it will receive {"_aj_globalid"=>"gid://forjartistica/User/1"}.
All comments will be appreciated.
The less painful solution I have found is changing the way I call the mailers, passing the model's id rather than the model:
UserMailer.welcome_email(@user.id).deliver_later
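The mailer then looks the record up itself. A sketch of the corresponding mailer change; the field and subject line are assumptions for illustration:

class UserMailer < ActionMailer::Base
  def welcome_email(user_id)
    @user = User.find(user_id)
    mail(to: @user.email, subject: 'Welcome')
  end
end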
This answer is a little bit different, but may help in cases like a change in the Rails API, or a change in the way you want to deliver (like using deliver_now instead of deliver_later).
What I do most of the time is pass a mailer as a dependency to the method I am testing, but I don't pass a mailer from Rails; I instead pass an object that will do things in the "way that I want"...
For example if I want to check that I am sending the right mail after the registration of a user... I could do...
class DummyMailer
def self.send_welcome_message(user)
end
end
it "sends a welcome email" do
allow(store).to receive(:create).and_return(user)
expect(mailer).to receive(:send_welcome_message).with(user)
register_user(params, store, mailer)
end
And then in the controller where I will be calling that method, I would write the "real" implementation of that mailer...
class RegistrationsController < ApplicationController
def create
Registrations.register_user(params[:user], User, Mailer)
# ...
end
class Mailer
def self.send_welcome_message(user)
ServiceMailer.new_user(user).deliver_later
end
end
end
In this way I feel that I am testing that I am sending the right message, to the right object, with the right data (arguments). And I only need to create a very simple object that has no logic, just the responsibility of knowing how ActionMailer wants to be called.
I prefer to do this because I like to have more control over the dependencies I have. This is, for me, an example of the Dependency Inversion Principle.
I am not sure if it is your taste, but is another way to solve the problem =).

Testing/Mocking Workers in Controller Specs

I have a worker that is created inside of a controller action. The worker instantiates another service object. I've tested the worker and service object but I would like to test that the controller action initializes the worker correctly. I'm having troubles conceptually understanding what I should be mocking and the syntax for doing so.
My worker looks like this:
class RepoWorker
  def perform(user)
    # business logic
    RepositorySyncer.new(user)
  end
end
My controller looks like this:
class SessionsController < ApplicationController
  def create
    # business logic
    RepoWorker.new.async.perform(user)
    redirect_to root_path
  end
end
I think my test should look something like this but I can't quite get it to work.
it 'create a job to sync repos with github' do
  expect(RepoWorker).to receive(:perform)
  post :create, provider: :github
end
I'm using rspec-mocks 2.14.1.
The problem is that when you write
expect(RepoWorker).to receive(:perform)
you actually expect this to happen:
RepoWorker.perform
There are two ways you can achieve what you want. Using stub chains (https://www.relishapp.com/rspec/rspec-mocks/v/3-0/docs/message-expectations/message-chains-in-the-expect-syntax):
expect(RepoWorker).to receive_message_chain(:new, :async, :perform)
Or using any_instance (https://www.relishapp.com/rspec/rspec-mocks/v/3-0/docs/message-expectations/expect-a-message-on-any-instance-of-a-class):
expect_any_instance_of(RepoWorker).to receive(:perform)
Edit:
For RSpec 2.x the two methods are these:
worker = double
RepoWorker.stub_chain(:new, :async).and_return(worker)
expect(worker).to receive(:perform)
RepoWorker.any_instance.should_receive(:perform)
You can mock the RepoWorker and make sure that a certain method is called. This way you ensure that nobody removes the perform call without a spec failing - at least you have a spec telling them that this was not a smart move.
So you could do something like this (not tested, just from a quick search; the syntax depends on the exact mocking lib you use):
it 'create a job to sync repos with github' do
  RepoWorker.should_receive(:perform).at_least(:once)
  post :create, provider: :github
end
This is how I ended up solving my problem.
it 'create a job to sync repos with github' do
  job = double('job')
  RepoWorker.stub_chain(:new, :async).and_return(job)
  expect(job).to receive(:perform)
  post :create, provider: :github
end

Elasticsearch out of sync when overwhelmed on HTTP at test suite

I have a Rails app with an Rspec test suite which has some feature/controller tests depending on ElasticSearch.
When we test the "search" feature around the system (and other features depending on ES) we use a real ES, it works perfectly at development environment when we're running single spec files.
When the suite runs at our CI server it gets weird, because sometimes ES won't keep in sync fast enough for the tests to run successfully.
I have searched for some way to run ES in "synchronous mode", or to wait until ES is ready, but haven't found anything so far. I've seen some workarounds using Ruby sleep, but that feels unacceptable to me.
How can I guarantee ES synchronicity to run my tests?
How do you deal with ES on your test suite?
Here's one of my tests:
context "given params page or per_page is set", :elasticsearch do
let(:params) { {query: "Resultados", page: 1, per_page: 2} }
before(:each) do
3.times do |n|
Factory(:company, account: user.account, name: "Resultados Digitais #{n}")
end
sync_companies_index # this is a helper method available to all specs
end
it "paginates the results properly" do
get :index, params
expect(assigns[:companies].length).to eq 2
end
end
Here's my RSpec configure block and ES helper methods:
RSpec.configure do |config|
  config.around :each do |example|
    if example.metadata[:elasticsearch]
      Lead.tire.index.delete    # delete the index for a clean environment
      Company.tire.index.delete # delete the index for a clean environment
      example.run
    else
      FakeWeb.register_uri :any, %r(#{Tire::Configuration.url}), body: '{}'
      example.run
      FakeWeb.clean_registry
    end
  end
end

def sync_companies_index
  sync_index_of Company
end

def sync_leads_index
  sync_index_of Lead
end

def sync_index_of(klass)
  mapping = MultiJson.encode(klass.tire.mapping_to_hash, :pretty => Tire::Configuration.pretty)
  klass.tire.index.create(:mappings => klass.tire.mapping_to_hash, :settings => klass.tire.settings)
  "#{klass}::#{klass}Index".constantize.rebuild_index
  klass.index.refresh
end
Thanks for any help!
Your test is confused - it's testing assignment, pagination, and (implicitly) parameter passing. Break it out:
Parameters
let(:tire) { double('tire', :search => :sentinel) }

it 'passes the correct parameters to Companies.tire.search' do
  expected_params = ... # Some transformation, if any, of params
  Companies.stub(:tire).and_return(tire)
  get :index, params
  expect(tire).to have_received(:search).with(expected_params)
end
Assignment
We are only concerned that the code takes one value and assigns it to something else; the value itself is irrelevant.
it 'assigns the search results to companies' do
  Companies.stub(:tire).and_return(tire)
  get :index, params
  expect(assigns[:companies]).to eq :sentinel
end
Pagination
This is the tricky bit. You don't own the ES API, so you shouldn't stub it, but you also can't use a live instance of ES because you can't trust it to be reliable in all testing scenarios, it's just an HTTP API after all (this is the fundamental issue you're having). Gary Bernhardt tackled this issue in one of his excellent screencasts - you simply have to fake out the HTTP calls. Using VCR:
VCR.use_cassette :tire_companies_search do
  get :index, params
  search_result_length = assigns[:companies].length
  expect(search_result_length).to eq 2
end
Run this once successfully, then forever more use the cassette (which is simply a YAML file of the response). Your tests are no longer dependent on APIs you don't control. If ES or your pagination gem updates its code, simply re-record the cassette when you know the API is up and working. There really isn't any other option without making your tests extremely brittle or stubbing things you shouldn't stub.
Note that although we have stubbed tire above - and we don't own it - it's ok in these cases because the return values are entirely irrelevant to the test.
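For completeness, the cassette example above assumes VCR is configured somewhere in your spec support files, roughly like this; the cassette directory and the :webmock hook are assumptions, and VCR supports other hooks as well:

# spec/support/vcr.rb (assumed location)
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'
  c.hook_into :webmock
end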

Resources