Elasticsearch out of sync when overwhelmed by HTTP requests in the test suite

I have a Rails app with an RSpec test suite in which some feature/controller tests depend on Elasticsearch.
When we test the "search" feature around the system (and other features depending on ES) we use a real ES instance. It works perfectly in the development environment when we're running single spec files.
When the suite runs on our CI server things get weird, because sometimes ES doesn't keep its index in sync fast enough for the tests to pass.
I have searched for some way to run ES in a "synchronous mode", or to wait until ES is ready, but haven't found anything so far. I've seen some workarounds using Ruby's sleep, but that feels unacceptable to me.
How can I guarantee ES synchronicity so my tests pass?
How do you deal with ES in your test suite?
Here's one of my tests:
context "given params page or per_page is set", :elasticsearch do
let(:params) { {query: "Resultados", page: 1, per_page: 2} }
before(:each) do
3.times do |n|
Factory(:company, account: user.account, name: "Resultados Digitais #{n}")
end
sync_companies_index # this is a helper method available to all specs
end
it "paginates the results properly" do
get :index, params
expect(assigns[:companies].length).to eq 2
end
end
Here's my RSpec configure block and ES helper methods:
RSpec.configure do |config|
  config.around :each do |example|
    if example.metadata[:elasticsearch]
      Lead.tire.index.delete    # delete the index for a clean environment
      Company.tire.index.delete # delete the index for a clean environment
      example.run
    else
      FakeWeb.register_uri :any, %r(#{Tire::Configuration.url}), body: '{}'
      example.run
      FakeWeb.clean_registry
    end
  end
end
def sync_companies_index
  sync_index_of Company
end

def sync_leads_index
  sync_index_of Lead
end

def sync_index_of(klass)
  klass.tire.index.create(:mappings => klass.tire.mapping_to_hash, :settings => klass.tire.settings)
  "#{klass}::#{klass}Index".constantize.rebuild_index
  klass.index.refresh # ask ES to make freshly indexed documents searchable
end
Thanks for any help!

Your test is confused: it's testing assignment, pagination, and (implicitly) parameter passing. Break it out:
Parameters
let(:tire) { double('tire', search: :sentinel) }

it 'passes the correct parameters to Companies.tire.search' do
  expected_params = ... # Some transformation, if any, of params
  Companies.stub(:tire).and_return(tire)
  get :index, params
  expect(tire).to have_received(:search).with(expected_params)
end
Assignment
We're only concerned that the code takes one value and assigns it to something else; the value itself is irrelevant.
it 'assigns the search results to companies' do
  Companies.stub(:tire).and_return(tire)
  get :index, params
  expect(assigns[:companies]).to eq :sentinel
end
Pagination
This is the tricky bit. You don't own the ES API, so you shouldn't stub it, but you also can't use a live instance of ES, because you can't trust it to be reliable in all testing scenarios; it's just an HTTP API, after all (this is the fundamental issue you're having). Gary Bernhardt tackled this issue in one of his excellent screencasts: you simply have to fake out the HTTP calls. Using VCR:
VCR.use_cassette :tire_companies_search do
  get :index, params
  search_result_length = assigns[:companies].length
  expect(search_result_length).to eq 2
end
Run this once successfully, then forevermore use the cassette (which is simply a YAML file of the response). Your tests are no longer dependent on APIs you don't control. If ES or your pagination gem updates its code, simply re-record the cassette when you know the API is up and working. There really isn't any other option that doesn't make your tests extremely brittle or stub things you shouldn't stub.
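If VCR isn't in the project yet, the setup is small. A minimal sketch (the cassette directory and record mode are choices, not requirements, and you'd want to skip the FakeWeb stubbing for these examples so the two libraries don't fight over the same requests):
# spec/support/vcr.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'spec/cassettes'      # where recorded responses live
  c.hook_into :webmock                           # intercept HTTP at the library level
  c.default_cassette_options = { record: :once } # record on first run, replay afterwards
end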
Note that although we have stubbed tire above, which we don't own, it's OK in these cases because the return values are entirely irrelevant to the test.

Related

Running a JS test suite from a Rails environment with a fixture factory, or an API for the frontend to request certain fixtures to be loaded on the backend

I'm a frontend developer working with EmberJS, a fantastic frontend framework that adopts a lot of virtues from Rails: it's very sophisticated and opinionated, and at the same time extremely agile and handy to use.
Ember has its own test suite, perfectly capable of acceptance testing. But it either tests against a backend mock, which is bad for two reasons (it's tedious to create a functional backend mock, and it doesn't test frontend-backend integration), or it tests against a backend running normally, which makes it impossible to use a fixture factory like FactoryGirl/Fabrication and forces you to manually reset the testing database after every test.
A traditional solution to this is to use Capybara. The problem with Capybara is that Ember is very asynchronous by nature, and Capybara is unable to track whether Ember has finished requesting data, computing reactive properties, rendering the GUI, etc.
There are some people using Capybara to test Ember, and all of them resort to the ugliest hacks. Here are just a couple, to give you the foul taste of it:
patiently do
  return unless page.has_css? '.ember-application'
end

2000.times do # this means up to 20 seconds
  return if page.evaluate_script "(typeof Ember === 'object') && !Ember.run.hasScheduledTimers() && !Ember.run.currentRunLoop"
  sleep 0.01
end
source
def wait_for_ajax
  counter = 0
  while true
    active = page.execute_script("return $.active").to_i
    # puts "AJAX $.active result: " + active.to_s
    break if active < 1
    counter += 1
    sleep(0.1)
    raise "AJAX request took longer than 5 seconds OR there was a JS error. Check your console." if counter >= 50
  end
end
source
So the idea is to use Ember's wonderful JS test suite to execute tests from Rails. It would allow using FactoryGirl to set up the database specifically for every test... and run a NodeJS/Ember test instead of Capybara.
I can imagine something like this:
describe "the signin process", :type => :feature do
before :each do
FactoryGirl.create(:user, :email => 'user#example.com', :password => 'password')
end
it "signs me in" do
test_in_ember(:module => 'Acceptance: User', :filter => 'signs me in')
end
it "logs me out" do
test_in_ember(:module => 'Acceptance: User', :filter => 'logs me out')
end
end
Ember has a command-line interface for testing that can run specific tests, based on Testem.
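For illustration, the test_in_ember helper imagined above could be a thin shell-out to that CLI. A sketch, not a tested integration: the Ember app path is a guess, and --module/--filter are passed straight through to the QUnit runner:
def test_in_ember(options = {})
  cmd = ['ember', 'test']
  cmd << "--module=#{options[:module]}" if options[:module]
  cmd << "--filter=#{options[:filter]}" if options[:filter]
  # Run inside the Ember app directory; fail the Rails example on a non-zero exit.
  system(*cmd, chdir: Rails.root.join('frontend').to_s) or
    raise "Ember tests failed: #{cmd.join(' ')}"
end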
Alternatively, the Ember test suite could be used normally, but it would start every test with a call to a special API, requesting certain fixture factory definitions to be loaded.
Formal question: how do I use the best of both worlds together: the native test suite from EmberJS and the native fixture factory from Ruby on Rails?
There are a number of issues:
FactoryGirl is a Ruby DSL for creating factories; each factory definition is just a block of Ruby code, not some static definition. If you wanted to port the dynamic aspects of a factory, such as sequences, you would have to write a parser which translates the DSL into a JavaScript DSL.
JavaScript factories would not be able to write to the database without going through your public APIs anyway.
Some rough ideas of how this could bridged:
Set up fixtures before your specs and pass them to your scripts. Most JavaScript drivers let you use page.execute_script, which can be used to pass a serialized fixture (see the sketch after this list). Cons: must be done before test execution begins.
Set up a factories API which allows you to invoke factories via Ajax, preferably in a mountable Rails engine. Cons: slow, asynchronous.
class FactoriesController < ApplicationController
  # GET /spec/factories/:factory/new
  def new
    @factory = FactoryGirl.build(params[:factory_uid])
    respond_to do |f|
      f.json { render json: @factory }
    end
  end

  # POST /spec/factories/:factory
  def create
    FactoryGirl.create(params[:factory_uid], factory_params)
  end

  # ...
end
Create a FactoryGirl parser, which still does not solve issue 2.
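To make the first idea concrete, here's a sketch (the window.testFixtures name is made up; anything the Ember app knows to look for works): serialize the factory's attributes in Ruby and hand them to the page before the test starts:
attributes = FactoryGirl.attributes_for(:user).to_json
page.execute_script(<<-JS)
  window.testFixtures = window.testFixtures || {};
  window.testFixtures.user = #{attributes};
JS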
added:
A crappy example factory client:
var FactoryBoy = {
  build: function(factory, cb) {
    $.ajax('/factories/' + factory + '/new', { dataType: 'json' }).done(function(data) { cb(data); });
  },
  create: function(factory, cb) {
    $.ajax('/factories/' + factory, { dataType: 'json', method: 'POST' }).done(function(data) { cb(data); });
  }
};
Test example:
module('Unit: SomeThing');

test('Nuking a user', function(assert) {
  var done = assert.async();
  FactoryBoy.create('user', function(user) {
    this.store.find('user', user.id).then(function(u) {
      $('#nuke_user').click();
      ok(u.get('isDeleted'));
      done();
    });
  });
});

Test for HTTP status code in some RSpec rails request examples, but for raised exceptions in others

In a Rails 4.2.0 application tested with rspec-rails, I provide a JSON web API with a REST-like resource with a mandatory attribute mand_attr.
I'd like to test that this API answers with HTTP code 400 (BAD REQUEST) when that attribute is missing from a POST request (see the second example below). My controller tries to cause this HTTP code by raising an ActionController::ParameterMissing, as illustrated by the first RSpec example below.
In other RSpec examples, I want raised exceptions to be rescued by the examples (if they're expected) or to hit the test runner, so they're displayed to the developer (if the error is unexpected), thus I do not want to remove
# Raise exceptions instead of rendering exception templates.
config.action_dispatch.show_exceptions = false
from config/environments/test.rb.
My plan was to have something like the following in a request spec:
describe 'POST' do
  let(:perform_request) { post '/my/api/my_ressource', request_body, request_header }
  let(:request_header) { { 'CONTENT_TYPE' => 'application/json' } }

  context 'without mandatory attribute' do
    let(:request_body) do
      {}.to_json
    end

    it 'raises a ParameterMissing error' do
      expect { perform_request }.to raise_error ActionController::ParameterMissing,
        'param is missing or the value is empty: mand_attr'
    end

    context 'in production' do
      ###############################################################
      # How do I make this work without breaking the example above? #
      ###############################################################
      it 'reports BAD REQUEST (HTTP status 400)' do
        perform_request
        expect(response).to be_a_bad_request
        # Above matcher provided by api-matchers. Expectation equivalent to
        # expect(response.status).to eq 400
      end
    end
  end

  # Below are the examples for the happy path.
  # They're not relevant to this question, but I thought
  # I'd let you see them for context and illustration.
  context 'with mandatory attribute' do
    let(:request_body) do
      { mand_attr: 'something' }.to_json
    end

    it 'creates a ressource entry' do
      expect { perform_request }.to change(MyRessource, :count).by 1
    end

    it 'reports that a ressource entry was created (HTTP status 201)' do
      perform_request
      expect(response).to create_resource
      # Above matcher provided by api-matchers. Expectation equivalent to
      # expect(response.status).to eq 201
    end
  end
end
I have found two working solutions and one partially working one, which I'll post as answers. But I'm not particularly happy with any of them, so if you can come up with something better (or just different), I'd like to see your approach! Also, if a request spec is the wrong type of spec for testing this, I'd like to know.
I foresee the question
Why are you testing the Rails framework instead of just your Rails application? The Rails framework has tests of its own!
so let me answer that pre-emptively: I feel I'm not testing the framework itself here, but whether I'm using the framework correctly. My controller doesn't inherit from ActionController::Base but from ActionController::API and I didn't know whether ActionController::API uses ActionDispatch::ExceptionWrapper by default or whether I first would have had to tell my controller to do so somehow.
You'd want to use RSpec filters for that. Done this way, the modification to Rails.application.config.action_dispatch.show_exceptions stays local to the example and won't interfere with your other tests:
# This configure block can be moved into a spec helper
RSpec.configure do |config|
  config.before(:example, exceptions: :catch) do
    allow(Rails.application.config.action_dispatch).to receive(:show_exceptions) { true }
  end
end

RSpec.describe 'POST' do
  let(:perform_request) { post '/my/api/my_ressource', request_body }

  context 'without mandatory attribute' do
    let(:request_body) do
      {}.to_json
    end

    it 'raises a ParameterMissing error' do
      expect { perform_request }.to raise_error ActionController::ParameterMissing
    end

    context 'in production', exceptions: :catch do
      it 'reports BAD REQUEST (HTTP status 400)' do
        perform_request
        expect(response).to be_a_bad_request
      end
    end
  end
end
The exceptions: :catch is "arbitrary metadata" in RSpec-speak; I chose the name here for readability.
Returning nil from a partially mocked application config with
context 'in production' do
  before do
    allow(Rails.application.config.action_dispatch).to receive(:show_exceptions)
  end

  it 'reports BAD REQUEST (HTTP status 400)' do
    perform_request
    expect(response).to be_a_bad_request
  end
end
or more explicitly with
context 'in production' do
  before do
    allow(Rails.application.config.action_dispatch).to receive(:show_exceptions).and_return nil
  end

  it 'reports BAD REQUEST (HTTP status 400)' do
    perform_request
    expect(response).to be_a_bad_request
  end
end
would work if that were the only example being run. But if it were, we could just as well drop the setting from config/environments/test.rb, so this is a bit moot. When there are several examples, this will not work, as Rails.application.env_config(), which queries this setting, caches its result.
Mocking Rails.application.env_config() to return a modified result
context 'in production' do
  before do
    # We don't really want to test in a production environment,
    # just in a slightly deviating test environment,
    # so use the current test environment as a starting point ...
    pseudo_production_config = Rails.application.env_config.clone
    # ... and just remove the one test-specific setting we don't want here:
    pseudo_production_config.delete 'action_dispatch.show_exceptions'
    # Then let `Rails.application.env_config()` return that modified Hash
    # for subsequent calls within this RSpec context.
    allow(Rails.application).to receive(:env_config).
      and_return pseudo_production_config
  end

  it 'reports BAD REQUEST (HTTP status 400)' do
    perform_request
    expect(response).to be_a_bad_request
  end
end
will do the trick. Note that we clone the result from env_config(), lest we modify the original Hash which would affect all examples.
context 'in production' do
  around do |example|
    # Run examples without the setting:
    show_exceptions = Rails.application.env_config.delete 'action_dispatch.show_exceptions'
    example.run
    # Restore the setting:
    Rails.application.env_config['action_dispatch.show_exceptions'] = show_exceptions
  end

  it 'reports BAD REQUEST (HTTP status 400)' do
    perform_request
    expect(response).to be_a_bad_request
  end
end
will do the trick, but feels kinda dirty. It works because Rails.application.env_config() gives access to the underlying Hash it uses for caching its result, so we can directly modify that.
In my opinion, the exception test does not belong in a request spec; request specs generally test your API from the client's perspective to make sure your whole application stack works as expected. They are also similar in nature to feature tests when testing a user interface. Since your clients won't ever see this exception, it probably does not belong there.
I can also sympathize with your concern about using the framework correctly and wanting to make sure of that, but it does seem like you are getting too involved with the inner workings of the framework.
What I would do is first figure out whether I am using the feature in the framework correctly, (this can be done with a TDD approach or as a spike); once I understand how to accomplish what I want to accomplish, I'd write a request spec where I take the role of a client, and not worry about the framework details; just test the output given specific inputs.
I'd be interested to see the code that you have written in your controller, because this can also be used to determine the testing strategy. If you wrote the code that raises the exception, then that might justify a test for it, but ideally this would be a unit test for the controller, which in an rspec-rails environment is a controller spec.

Rails functional tests work separately, but not together. Using CanCan, Devise, RoR 4

I am using RoR 4, CanCan (1.5.0) and Devise (3.2.2).
I am using Test::Unit to test my application.
The problem is that if I separate the requests into different test functions, it works; but if I perform two checks inside one function, it seems to evaluate the response of the first request even after subsequent requests:
This works:
test 'admin cannot delete product that has line_items associated' do
  sign_in @admin_user
  assert_no_difference('Product.count') do
    delete :destroy, id: @prod_to_all
  end
  assert_redirected_to product_path(@prod_to_all)
end

test 'admin can delete product that has no line_items associated' do
  sign_in @admin_user
  assert_difference('Product.count', -1) do
    delete :destroy, id: products(:prod_to_all_not_ordered)
  end
  assert_redirected_to products_path
end
If I put the requests together, it fails:
test 'admin cannot delete product that has line_items associated, but can delete one that has no line_items associated' do
  sign_in @admin_user
  assert_no_difference('Product.count') do
    delete :destroy, id: @prod_to_all
  end
  assert_redirected_to product_path(@prod_to_all)
  assert_difference('Product.count', -1) do
    delete :destroy, id: products(:prod_to_all_not_ordered)
  end
  assert_redirected_to products_path
end
Error:
"Product.count" didn't change by -1.
Expected: 5
Actual: 6
My issue is that I have 3 roles: public_user, client, and admin, and testing every function for each role in separate test functions is a pain. Even for simple controllers it gets bigger than it should, and I hate that solution.
What do you guys suggest? Do I really need to embrace RSpec and its contexts, or can I get away with Test::Unit and keep the code DRY?
Besides, it seems to me that something is not right with Test::Unit, given that it doesn't evaluate the second request correctly. Is it a bug or something more structural that I am not understanding?
Thank you
I actually like the separate version better. It is cleaner, and I personally like my tests not to be too DRY. It makes them more verbose, which I prefer: Duplication in Tests Is Sometimes Good
Rails functional tests are meant to run one request per test. If you want to test multiple requests, use an integration test.
It is hard to tell the cause of the error without seeing the controller code. It might be that your application is losing the logged-in user between requests, or that the tests share state (e.g. instance variables). Debug with pry, some alternative debugger, or plain old puts to see the state of your objects.

Rails fragment cache testing with RSpec

I feel like this is a not-so-well-documented topic; at least I've had a lot of trouble finding out about the best practices here.
I'm fragment caching in the view using a cache_key:
%tbody
  - @employees.each do |employee|
    - cache employee do
      %tr[employee]
        %td= employee.name
        %td= employee.current_positions
        %td= employee.home_base
        %td= employee.job_classes
Now I can add :touch => true on the :belongs_to side of my has_many associations, and this will do everything I need to keep the fragment cache up to date, but for the life of me I'm having a hard time figuring out how to test it.
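For illustration, that association declaration might look like this (a sketch; the Position model name is a guess based on the view above):
class Position < ActiveRecord::Base
  belongs_to :employee, touch: true # saving or destroying a position bumps employee.updated_at, changing its cache_key
end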
Dropping in :touch => true is easy and convenient, but it spreads the expiry logic around a couple of places. I'd love to have an RSpec request spec that walks through and checks this behavior, something that isn't liable to change much but brings all the caching requirements into one specific file that describes what is supposed to occur.
I tried along these lines:
require 'spec_helper'

include AuthenticationMacros

describe "Employee index caching" do
  before do
    Rails.cache.clear
    ActionController::Base.perform_caching = true
    login_confirmed_employee
  end

  after do
    ActionController::Base.perform_caching = false
  end

  specify "the employee cache is cleared when position assignments are modified"
  specify "the employee cache is cleared when home base assignments are modified"
end
The specs were fleshed out with the Capybara steps of going through and making the updates, of course, and I thought I was on the right track. But the tests flickered in weird ways. I modified the specs to output the employee object's cache_key, and sometimes the cache_keys would change and sometimes not; sometimes the specs would pass and sometimes not.
Is this even a good approach?
I know SO wants questions that are answerable, so to start: how can I set up and tear down this test to use caching, when my test environment does not have caching on by default? In general, however, I'd really like to hear how you are successfully testing fragment caching in your apps.
EDIT
I'm accepting cailinanne's answer as it addresses the problem I specifically asked about. However, I've since decided that I don't recommend integration-testing caching at all if you can get away from it.
Instead of specifying touch in my association declarations, I've created an observer specific to my caching needs that touches the models directly, and I'm testing it in isolation.
If you test a multi-model observer in isolation, I'd recommend also including a test that checks the observer's observed_models; otherwise you can stub out too much of reality.
The particular answer that led me to this is here: https://stackoverflow.com/a/33869/717365
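For illustration, such an observer might look like the sketch below. The class and model names are guesses based on the view above, and under Rails 4 this approach needs the rails-observers gem:
class CacheTouchObserver < ActiveRecord::Observer
  observe :position, :home_base # the models whose changes must expire the employee fragment

  def after_save(record)
    record.employee.touch
  end
  alias_method :after_destroy, :after_save
end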
Let me first say that in this answer you may get more sympathy than fact. I've been struggling with these same issues. While I was able to get reproducible results for a particular test, I found that the results varied according to whether I ran one spec versus several, and with or without spork. Sigh.
In the end, I found that 99.9% of my issues disappeared if I simply enabled caching in my test.rb file. That might sound odd, but after some thought it was "correct" for my application. The great majority of my tests are not at the view/request layer, and for the few that are, doesn't it make sense to test under the same configuration the user sees?
While I was wrestling with this, I wrote a blog post that contains some useful test helpers for testing caching. You might find it useful.
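For reference, the test-environment change described above is tiny. A sketch (the cache store is your choice):
# config/environments/test.rb
config.action_controller.perform_caching = true
config.cache_store = :memory_store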
Here is what I've used in my specs with caching enabled in my config/environments/test.rb
require 'spec_helper'

include ActionController::Caching::Fragments

describe 'something/index.html.erb' do
  before(:each) do
    Rails.cache.clear
    render
  end

  it 'should cache my fragment example' do
    cached_fragment = Rails.cache.read(fragment_cache_key(['x', 'y', 'z']))
    cached_fragment.should have_selector("h1")
  end
end
I use view specs to test cache expiration in views:
describe Expire::Employee, type: :view, caching: true do
  def hour_ago
    Timecop.travel(1.hour.ago) { yield }
  end

  def text_of(css, _in:)
    Nokogiri::HTML(_in).css(css).text
  end

  let(:employee) { hour_ago { create :employee } }

  def render_employees
    assign(:employees, [employee.reload])
    render(template: 'employees/index.html.erb')
  end
  alias_method :employees_list, :render_employees

  context "when an employee's position gets changed" do
    let(:position) { create :position, employee: employee }
    before { hour_ago { position.update!(name: 'Old name') } }
    let(:update_position) { position.update!(name: 'New name') }

    it "should expire the employee's cache" do
      expect { update_position }
        .to change { text_of('.positions', _in: employees_list) }
        .from(/Old name/).to(/New name/)
    end
  end

  # similar spec case for home base assignment
end
where
the Timecop gem is used to travel in time, to make sure the cache-key timestamps differ between cache versions of the employee
the Nokogiri gem is used to extract the employee position's text from the rendered view
Note that I tagged this spec with caching: true. It enables caching before each test case and disables it afterwards:
config.before(:each, caching: true) do
  controller.perform_caching = true
end

config.after(:each, caching: true) do
  controller.perform_caching = false
end
And you might want to add an example that checks that an employee is actually being cached:
describe Expire::Employee, type: :view, caching: true do
  context 'with an uncached employee' do
    it 'should cache the employee' do
      expect_any_instance_of(Employee)
        .to receive(:current_positions).once
      2.times { render_employees }
    end
  end

  # other spec cases
end

How can I set up RSpec for performance testing 'on the side'

We are using RSpec in a Rails project for unit testing. I would like to set up some performance tests in RSpec, but do it in a way that doesn't disrupt the 'regular' specs and fixtures.
Ideally I'd be able to tag my performance specs in a certain way such that they are not run by default.
Then, when I specify to run these specs explicitly, it will load a different set of fixtures (it makes sense to do performance testing with a much larger, more 'production-like' dataset).
Is this possible? It seems like it should be.
Has anyone set up something like this? How did you go about it?
I managed to get what I was looking for via the following:
# Exclude :performance tagged specs by default
config.filter_run_excluding :performance => true

# When we're running a performance test, load the performance fixtures:
config.before(:all, :performance => true) do
  require 'active_record/fixtures'
  ActiveRecord::Fixtures.reset_cache
  ActiveRecord::Fixtures.create_fixtures('spec/perf_fixtures', 'products')
  ActiveRecord::Fixtures.create_fixtures('spec/perf_fixtures', 'ingredients')
end
# Define an RSpec matcher for take_less_than
require 'benchmark'

RSpec::Matchers.define :take_less_than do |n|
  chain :seconds do; end
  match do |block|
    @elapsed = Benchmark.realtime do
      block.call
    end
    @elapsed <= n
  end
end

# Example of a performance test
describe Api::ProductsController, "API Products controller", :performance do
  it "should fetch all the products reasonably quickly" do
    expect do
      get :index, :format => :json
    end.to take_less_than(60).seconds
  end
end
But I tend to agree with Marnen's point that this isn't really the best approach to performance testing.
I created the rspec-benchmark Ruby gem for writing performance tests in RSpec. It has many expectations for testing speed, resource usage, and scalability.
For example, to test how fast your code is:
expect { ... }.to perform_under(60).ms
Or to compare with another implementation:
expect { ... }.to perform_faster_than { ... }.at_least(5).times
Or to test computational complexity:
expect { ... }.to perform_logarithmic.in_range(8, 100_000)
Or to see how many objects get allocated:
expect {
  _a = [Object.new]
  _b = { Object.new => 'foo' }
}.to perform_allocation({ Array => 1, Object => 2 }).objects
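Wired into a spec, one of these matchers looks like this (a sketch; the sorting workload is just a stand-in for your code under test):
require 'rspec-benchmark'

RSpec.describe 'sorting performance' do
  include RSpec::Benchmark::Matchers

  it 'sorts 10k floats in under 60 ms' do
    expect { Array.new(10_000) { rand }.sort }.to perform_under(60).ms
  end
end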
If you want to do performance testing, why not run New Relic or something with a snapshot of production data? You don't really need different specs for that, I think.
