I have a method which handles failures on some API calls. I wrote tests for it:
it 'logs the error' do
  expect(Rails.logger).to receive(:error).with(/Failed API call/i)
  expect(Rails.logger).to receive(:error).with(/#{error_type}/)
  expect(Rails.logger).to receive(:error).with(/#{server_error}/)
  subject
end
but to make this work I would need to make 3 API calls or split it into 3 test cases. I don't like either solution. I think the best option would be to combine the 3 regexps into a single expectation.
Is it possible to match multiple regexps against a single parameter in one test case?
You could combine all these regexps into one, using lookaheads as a regexp AND operator.
let(:expected_log_message) do
  /(?=.*Failed API call)(?=.*#{error_type})(?=.*#{server_error})/i
end
This regexp matches only if the string matches all of the above.
Then, inside the test case:
it 'logs the error' do
  expect(Rails.logger).to receive(:error).with(expected_log_message)
  subject
end
I am writing Thinking Sphinx test cases. I have the following test case:
test 'z' do
  app = applications(:one)
  message = messages(:two)
  message.update_column(:messagable_id, app.id)
  message.update_column(:comment, 'This is second message')
  ThinkingSphinx::Test.start
  sign_in @user
  ThinkingSphinx::Test.index
  get :index, company_id: @company.id, qc: 'Messages', q: 'Body | second', format: 'json'
  assert_response :success
  assert_equal decode_json_response(@response)['apps'].count, 2
end
In my case message.update_column is not taking effect; however, if I make the same changes in the messages fixture, my test case passes.
Is there any specific reason why update_column is not taking effect with Thinking Sphinx? Everywhere else update_column works just fine.
If you're using SQL-backed indices (and it seems you are, given that you're calling ThinkingSphinx::Test.index), then you can't use transactional fixtures for tests which involve Sphinx/Thinking Sphinx, because Sphinx's indexing happens via separate connections to your database, and the transaction operating within the scope of your test isn't visible to those connections.
The documentation covers using DatabaseCleaner with the deletion strategy for tests that involve Sphinx - though the example is for RSpec, and it looks like you're using a different testing library, so I'm not quite sure of the specifics in your case.
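If it helps, here is a rough sketch of what that could look like in an ActiveSupport::TestCase-style controller test. The class name, the hooks, and the use_transactional_fixtures flag are assumptions (newer Rails renames it use_transactional_tests), so treat this as a starting point rather than the documented setup:

require 'database_cleaner'

class MessagesControllerTest < ActionController::TestCase
  # Transactional fixtures must be off so Sphinx's separate database
  # connection can actually see the records being indexed.
  self.use_transactional_fixtures = false

  setup do
    DatabaseCleaner.strategy = :deletion
    DatabaseCleaner.start
    ThinkingSphinx::Test.start
  end

  teardown do
    ThinkingSphinx::Test.stop
    DatabaseCleaner.clean
  end
end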
In my RSpec tests I have quite a few deeply nested contexts like these:
describe "#mymethod"
it_behaves_like "specific object"
end
shared_examples_for "specific object" do
context "when it receives proper parameters" do
context "when file is in a queue" do
it "behaves as i want it to" do
#...
end
end
end
As a result, when I run my RSpec tests I get results like this (or more complex) in the "Pending"/"Failures"/"Failed examples" sections:
mymethod behaves like specific object when it receives proper parameters when file is in a queue behaves as i want it to
Those context strings sometimes tend to be unreadable for lack of separators.
How can I monkey patch RSpec to add them (e.g. "|") automatically (or is there another preferred solution)? Where do the descriptor strings get concatenated in RSpec?
RSpec has the separator hard-coded. Unfortunately, it is not configurable right now, but I would guess the maintainers would consider a patch for review.
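For reference, the joining happens inside rspec-core's metadata code (RSpec::Core::Metadata is where an example's full description is built from its parent descriptions). A very rough monkey-patch sketch follows; it assumes an RSpec 3-era HashPopulator with a private description_separator helper, which is an internal, non-public API whose name may differ in your version, so verify against your installed rspec-core before using it:

module RSpec
  module Core
    class Metadata
      class HashPopulator
        private

        # Separate nested descriptions with " | " instead of a plain space,
        # keeping the no-separator behaviour for method-style descriptions
        # such as "#mymethod".
        def description_separator(code_or_description)
          if code_or_description.to_s =~ /^(#|::|\.)/
            ''
          else
            ' | '
          end
        end
      end
    end
  end
end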
I'm using RSpec with Ruby on Rails for testing.
Question: in a spec, if I want to cross-check that a pre-condition is established properly before starting the test, what approach is recommended?
For example, using an RSpec ".should"-style assertion doesn't seem like the right thing, as I'm only checking a precondition...
I would have two different tests: one asserting that things like your precondition can be set correctly, the other assuming it works and then testing whatever depends on it.
A should is perfectly valid in this situation, as it is a condition that needs to hold at the start of the test. As a very trivial example:
it "should increment by one" do
value = 10
value.should eql(10)
value += 1
value.should eql(11)
end
I don't think current RSpec has that kind of feature; it would be nice to have a separate method for these pre-condition assertions, such as:
it "should increment by one" do
value = 10
value.must eql(10) # This is a necessary pre-condition, not the actual test
value += 1
value.should eql(11)
end
But there is no such method to make that distinction; the rspec-given gem might be of interest, though.
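For what it's worth, rspec-given makes that separation explicit in the spec itself. A minimal sketch, assuming the rspec-given gem is installed:

require 'rspec/given'

describe "incrementing" do
  Given(:value) { 10 }          # the pre-condition / setup
  When(:result) { value + 1 }   # the action being specified
  Then { result == 11 }         # the actual assertion
end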
Background: I have roughly this (Ruby on Rails app):
class A
  def calculate_input_datetimes
    # do stuff to calculate datetimes - then, for each one identified:
    process_datetimes(my_datetime_start, my_datetime_end)
  end

  def process_datetimes(input_datetime_start, input_datetime_end)
    # do stuff
  end
end
So: I want to test that the calculate_input_datetimes algorithm is working and calculating the correct datetimes to pass to process_datetimes. I know I can stub out process_datetimes so that its code won't be involved in the test.
QUESTION: How can I set up the RSpec test so I can specifically verify that the correct datetimes were passed to process_datetimes? For example, for a given spec, test that process_datetimes was called three (3) times with the following parameters:
2012-03-03T00:00:00+00:00, 2012-03-09T23:59:59+00:00
2012-03-10T00:00:00+00:00, 2012-03-16T23:59:59+00:00
2012-03-17T00:00:00+00:00, 2012-03-23T23:59:59+00:00
thanks
Sounds like you want should_receive, specifying which arguments are expected using with. For example:
a.should_receive(:process_datetimes).with(date1,date2)
a.should_receive(:process_datetimes).with(date3,date4)
a.calculate_input_datetimes
There are more examples in the docs; for example, you can use .ordered if the order of these calls is important.
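For completeness, here is a fuller sketch in the newer expect syntax (RSpec 3+), using the three datetime pairs from the question. It assumes process_datetimes receives Time objects, so adjust Time.parse to whatever types your code actually passes; setting the message expectation also stubs the method out, which matches the intent of not running its real code:

require 'time'

it "passes each calculated window to process_datetimes" do
  a = A.new
  [
    ["2012-03-03T00:00:00+00:00", "2012-03-09T23:59:59+00:00"],
    ["2012-03-10T00:00:00+00:00", "2012-03-16T23:59:59+00:00"],
    ["2012-03-17T00:00:00+00:00", "2012-03-23T23:59:59+00:00"]
  ].each do |start_s, end_s|
    # .ordered additionally checks that the windows arrive in this sequence
    expect(a).to receive(:process_datetimes)
      .with(Time.parse(start_s), Time.parse(end_s)).ordered
  end

  a.calculate_input_datetimes
end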
What's the best-practice way to test that a model is valid in Rails?
For example, if I have a User model that validates the uniqueness of an email_address property, how do I check that posting the form returned an error (or better yet, that it specifically returned an error for that field)?
I feel like this should be something obvious, but as I'm quickly finding out, I still don't quite have the vocabulary required to effectively google ruby questions.
The easiest way would probably be:
class UserEmailAddressDuplicateTest < ActiveSupport::TestCase
  def setup
    @email = "test@example.org"
    @user1, @user2 = User.create(:email => @email), User.new(:email => @email)
  end

  def test_user_should_not_be_valid_given_duplicate_email_addresses
    assert !@user2.valid?
  end

  def test_user_should_produce_error_for_duplicate_email_address
    # Test for the default error message.
    assert_equal "has already been taken", @user2.errors.on(:email)
  end
end
Of course it's possible that you don't want to create a separate test case for this behaviour, in which case you could duplicate the logic in the setup method and include it in both tests (or put it in a private method).
Alternatively you could store the first (reference) user in a fixture such as fixtures/users.yml, and simply instantiate a new user with a duplicate address in each test.
Refactor as you see fit!
http://thoughtbot.com/projects/shoulda/
Shoulda includes macros for testing things like validators along with many other things. Worth checking out for TDD.
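For example, with the modern shoulda-matchers gem (the successor to those macros) a uniqueness validation becomes a one-liner. A sketch assuming shoulda-matchers is installed and configured for RSpec:

RSpec.describe User, :type => :model do
  # The matcher checks that a second record with the same email
  # fails the uniqueness validation.
  subject { User.new(:email => "test@example.org") }

  it { is_expected.to validate_uniqueness_of(:email) }
end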
errors.on is what you want
http://api.rubyonrails.org/classes/ActiveRecord/Errors.html#M002496
@obj.errors.on(:email) will return nil if the field is valid, and the error message(s) as either a String or an Array of Strings if there are one or more errors.
Testing the model via unit tests is, of course, step one. However, that doesn't necessarily guarantee that the user will get the feedback they need.
Section 4 of the Rails Guide on Testing has a lot of good information on functional testing (i.e. testing controllers and views). You have a couple of basic options here: check that the flash has a message in it about the error, or use assert_select to find the actual HTML elements that should have been generated in case of an error. The latter is really the only way to test that the user will actually get the message.
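As a rough sketch of the assert_select approach: the controller action, params style, and #error_explanation markup below are assumptions based on conventional Rails scaffolding, not something taken from the question:

class UsersControllerTest < ActionController::TestCase
  test "duplicate email re-renders the form with a validation error" do
    User.create!(:email => "test@example.org")

    post :create, :user => { :email => "test@example.org" }

    assert_template :new
    # The default error block lists "Email has already been taken".
    assert_select "#error_explanation li", /has already been taken/
  end
end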