I am using the standard tools included in Rails 6 for testing. It's a very simple test, but it seems freeze_time is not working, and the error output is quite difficult to discern a cause from.
Here is the test I am executing:
Here is the error after running the test:
When you create a new Person, the value for created_at should be set (assuming it has timestamps applied). Since you're getting nil instead, it is almost certain your Person creation fails, likely due to validation errors when it tries to save. You could look at the model's error entries to be sure.
To get the error to show up, use create!, which raises if the save fails:
class PersonTest < ActiveSupport::TestCase
  test 'created at matches current time' do
    freeze_time
    assert_equal(Time.current, Person.create!.created_at)
  end
end
If it is a validation error, you can bypass those:
class PersonTest < ActiveSupport::TestCase
  test 'created at matches current time' do
    freeze_time
    # save returns a boolean, so keep a reference to the record itself
    person = Person.new
    person.save(validate: false)
    assert_equal(Time.current, person.created_at)
  end
end
There are two things wrong with this though.
You want to avoid saving to the DB if at all possible during tests to keep them performant.
You are actually testing Rails's built-in functionality here, which is a test you should not need to write. There may be a better way to check that the Rails timestamps have been applied to your Person model, but it's not a test I've ever written before (and I write tests for everything). There is no way to fat-finger the timestamps away in a commit, so testing for their existence feels like overkill.
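If you did still want a cheap sanity check that the timestamp columns exist, one option is to assert on the schema instead of creating a record, which avoids the DB write entirely. A minimal sketch, with `column_names` stubbed here so it runs standalone; on a real model Active Record supplies it from the schema:

```ruby
require "minitest/autorun"

# Stand-in for an Active Record model: on a real model,
# `column_names` reads the schema, so no row is ever written.
class Person
  def self.column_names
    %w[id name created_at updated_at]
  end
end

class PersonTimestampsTest < Minitest::Test
  def test_rails_timestamp_columns_exist
    assert_includes Person.column_names, "created_at"
    assert_includes Person.column_names, "updated_at"
  end
end
```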
I want to write some tests for my current project.
These tests will be mostly related to saved data in the database.
For example:
If a user registered 7 years ago, give him a gold badge (a boolean field set to true).
If he registered 3 years ago, give him a bronze badge, etc.
Well, the calculations are a bit more complicated before the boolean field is set to true. There are about 30-40 test cases I need to cover, all very similar to the two examples above.
I did some reading about Rails testing but I could not figure out the exact difference between integration, unit and functional testing in Rails.
In my case which testing method will be appropriate?
I think it depends on how you would implement it.
If you're using a cron job to check for user badges, an integration test would be good, to exercise the whole job-enqueuing flow.
If you're updating when the user logs in, you can use a model (unit) test.
For example:
class UserTest < ActiveSupport::TestCase
  test "users badge is updated on login if needed" do
    user = User.create ... , golden_badge: false
    travel_to Time.now + 7.years do
      # simulate login, something like: user.login(user: "", pass: "")
      assert user.golden_badge
    end
  end
end
This test would live in test/models/user_test.rb.
It depends where exactly the code that updates this field runs. For example, if you check these 'rewards' on update of a user in the user model itself, as below:
class User < ActiveRecord::Base
  before_save :check_rewards

  def check_rewards
    # Assign rather than update to avoid re-triggering the save callback;
    # a created_at earlier than 5 years ago means a long-standing account
    self.golden_badge = true if created_at.present? && created_at < 5.years.ago
  end
end
Then the tests would only need to live in models/user_spec.rb, which is a unit test.
If the code ran as a cron job every hour, using a method in a file called check_rewards.rb, the tests would probably belong in two places.
First, a 'unit' test of the object that does the updating in services/check_rewards.rb (or commands/check_rewards.rb, or whatever directory), to verify the actual functionality works, like my user spec code above.
Second, you should have an integration test in integration/check_rewards_spec.rb to ensure that the job manager you are using works as expected, so the job itself actually runs and has the expected behavior.
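A sketch of what that service-object unit test could look like, in plain Ruby with Minitest (stdlib) instead of RSpec; the CheckRewards name and the five-year rule are assumptions taken from the examples above:

```ruby
require "minitest/autorun"

# Hypothetical plain-Ruby service object along the lines of the
# check_rewards.rb described above; names and the 5-year rule are
# assumptions for illustration.
User = Struct.new(:created_at, :golden_badge)

class CheckRewards
  FIVE_YEARS = 5 * 365 * 24 * 3600 # roughly five years, in seconds

  def call(user, now: Time.now)
    user.golden_badge = true if now - user.created_at >= FIVE_YEARS
    user
  end
end

class CheckRewardsTest < Minitest::Test
  def test_awards_badge_to_long_standing_users
    veteran = User.new(Time.now - CheckRewards::FIVE_YEARS, false)
    assert CheckRewards.new.call(veteran).golden_badge
  end

  def test_skips_recent_users
    newbie = User.new(Time.now, false)
    refute CheckRewards.new.call(newbie).golden_badge
  end
end
```

Because the service is a plain object, this test runs without touching the database or the job queue; the integration test is then only responsible for checking that the job manager actually invokes it.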
Unit testing: for testing the smallest unit of code, a specific piece of functionality, like testing the CRUD operations of a controller. Each action is tested separately in a unit test.
Integration testing: for testing a group of functionalities or a module. Say you want to compare what an anonymous user and a logged-in user can see on a website: in both cases you need to test the login functionality and then a particular feature after it.
Functional testing is a different lens: you can test either a module or a combination of modules, which makes it similar to integration testing.
Unit tests tell a developer that the code is doing things right; functional tests tell a developer that the code is doing the right things.
In my view you should go for functional testing.
I just, manually, discovered a migration error. I added a new field to a model, and forgot to add it into the model_params method of the controller. As a result the new field wasn't persisted to the database.
Easy enough to fix once I noticed the problem, but it got me to wondering if there was a way to detect this in testing. I would imagine something like a gem that would parse the schema and generate a set of tests to ensure that all of the fields could be written and that the same data could be read back.
Is this something that can be (or is) done? So far, my searches have led me to lots of interesting reading, but not to a gem like this...
It is possible to write what you want: iterate through all the fields in the model, generate params that mirror those fields, and then run functional tests on your controllers. The problem is that such a test is brittle. What if you don't actually want all the fields to be writable through params? What if you reference a model in another controller outside of the standard pattern? How will you handle generating data that would pass different validations? You would either have to be sure that your application is only ever written in a certain way, or the test would become more and more complex to handle additional edge cases.
I think the solution in testing is to keep things simple: recognize that you've made a change to the system and, as a result of that change, the corresponding tests need to be updated. In this case, you would update the functional and unit tests affected by that model. If you were strictly adhering to Test-Driven Development, you would update the tests first to produce a failing test and then implement the change. As a result, the updated functional test would hopefully have failed in this case.
Outside of testing, you may want to look into a linter. In essence, you're asking whether you can catch an error where the parameters passed to an object's method don't match its signature. This is easier to catch when the code can be parsed completely (i.e. compiled in a statically typed environment).
EDIT - I skipped a step on the linting: you would also have to write your code in a way a linter can catch, such as being more explicit about the method and the parameters passed to it.
You might want to consider that such a gem may not exist because it's not that practical or useful in real life.
Getting the columns off a model is pretty simple with the reflection methods that Active Record gives you, and yeah, you could theoretically use that to automagically run a bunch of tests in a loop.
But in reality it's just not going to cut it. In real life you don't want every column to be assignable; that's why you are using mass assignment protection in the first place.
Add to that the complexity of the different kinds of constraints and data types your models have, and you'll end up with something extremely complex that just adds a bunch of tests with limited value.
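For concreteness, the reflection-driven loop being warned against would look roughly like this. `column_names` is stubbed here so the snippet runs standalone; on a real model Active Record supplies it from the schema:

```ruby
# Stub standing in for an Active Record model; in a real app
# `Article.column_names` comes from the schema.
class Article
  def self.column_names
    %w[id title content created_at updated_at]
  end
end

# Columns Rails manages itself, which you would have to special-case
NON_WRITABLE = %w[id created_at updated_at]

# Build a params hash covering every remaining column via reflection
generated_params = (Article.column_names - NON_WRITABLE).to_h do |col|
  [col, "test value for #{col}"]
end

# generated_params == {"title"   => "test value for title",
#                      "content" => "test value for content"}
# A functional test would then post these params and assert the record
# round-trips, which is exactly where the brittleness described above
# begins: validations, non-assignable columns, and typed values all
# need their own special cases.
```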
If you find yourself omitting a property from mass assignment protection, then you should try to cover that part of your controller with either a functional test or an integration test:
class ArticlesControllerTest < ActionController::TestCase
  def valid_attributes
    {
      title: 'How to test like a Rockstar',
      content: 'Bla bla bla'
    }
  end

  test "created article should have the correct attributes" do
    post :create, article: valid_attributes
    article = Article.last

    valid_attributes.each do |key, value|
      assert_equal value, article[key]
    end
  end
end
So I have an application that revolves around events on a calendar. Events have a starts_at and ends_at. A new requirement has been added to restrict the creation of new events to only be in the future, more specifically >= midnight today. So naturally, I modified my validate method to check for this at instantiation. This however creates a problem when testing with RSpec using FactoryGirl.
A lot of tests require checking certain requirements with events in the past. Problem is, I can no longer create those events in the past, since doing so fails my validation. I've found a few ways to get around this, such as
allow(instance).to receive(:starts_at).and_return(some_future_time)
or
expect_any_instance_of(Event).to receive(:check_valid_times).and_return(true)
Here's some simple code from the app
# Model
class Event < ActiveRecord::Base
  validate :check_for_valid_times

  private

  def check_for_valid_times
    if starts_at.present? && (starts_at < Time.now.at_beginning_of_day)
      errors.add(:base, I18n.t('event.start_time_before_now'))
    end
    unless starts_at.present? && ends_at.present? && (starts_at < ends_at)
      errors.add(:base, I18n.t('event.end_time_before_start'))
    end
  end
end
My factory
# factory
require 'factory_girl'

FactoryGirl.define do
  factory :event do
    starts_at { 2.hours.from_now.to_s }
  end
end
The question: is there a nice clean way to do this, instead of having to handle each creation of an event and deal with them individually in all my tests? Currently I'm having to deal with quite a few of them in varying ways.
Let me know if there are other parts of the code I should be posting up.
As always, Thanks in advance for your help!
The way I generally solve problems like this is the timecop gem.
For instance:
Timecop.freeze 1.hour.ago do
  create(:event)
end
You can then place these in let blocks, or in a helper in your spec/support/events.rb to keep it DRY and maintainable.
Using timecop, you can simulate a scenario completely: create the event correctly at the past time you care about, then return time to normal and check that your tests show the event is in the past and can't be altered, and so on. It allows complete, accurate and understandable time-sensitive tests. RSpec helpers, shared contexts and before/let blocks can keep the tests clean.
Listening to the Giant Robots Smashing Into Other Giant Robots podcast, I heard that you want your FactoryGirl factories to be minimal, providing only those attributes that make the object valid in the database. That said, the talk also went on to say that traits are a really good way to define specific behavior based on an attribute that may change in the future.
I'm wondering if it's also a good idea to have traits defined that purposefully fail validations to clean up the spec code. Here's an example:
factory :winner do
  user_extension "7036"
  contest_rank 1
  contest

  trait :paid do
    paid true
  end

  trait :unpaid do
    paid false
  end

  trait :missing_user_extension do
    user_extension nil
  end

  trait :empty_user_extension do
    user_extension ""
  end
end
This will allow me to call build_stubbed(:winner, :missing_user_extension) in specs where I intend validations to fail. I suppose I could make the failure more explicit by nesting these bad factories under another factory called :invalid_winner, but I'm not sure that's necessary. I'm mostly interested in hearing others' opinions on this concept.
No, it's not a good idea: it won't keep your specs easy to understand after a while, and later, as your code evolves, the factories that fail today may not fail anymore, and you would have a hard time reviewing all your specs.
It is much better to write each test for one clearly identified thing. If you want to check that saving fails when a mandatory parameter is missing, just write it with your regular factory and pass parameters to override the factory's values:
it 'should fail' do
  create :winner, user_extension: nil
  ...
end
Say I have an instance method that does many different things that I need to test, something like Store#process_order. I'd like to test that it sends an email to the customer, adds an entry to the orders table, charges a credit card, etc. What's the best way to set this up in RSpec? Currently, using RSpec and factory_girl, I do something like this:
describe Store do
  describe "#process_order" do
    before do
      @store = Factory(:store)
      @order = Factory(:order)
      # call the process method
      @store.process_order(@order)
    end

    it 'sends customer an email' do
      ...
    end

    it 'inserts order to db' do
      ...
    end

    it 'charges credit card' do
      ...
    end
  end
end
But it feels really tedious. Is this really the right way to write a spec for a method that I need to make sure does several different things?
Note: I'm not interested in answers about whether or not this is good design. It's just an example I made up to help w/ my question - how to write these types of specs.
This is a good method because you can identify which element is broken if something breaks in the future. I am all for testing things individually. I tend not to check that things get inserted into the database, as you are then testing Rails functionality; I simply check the validity of the object instead.
This is the method that is used in the RSpec book too. I would certainly recommend reading it if you are unsure about anything related to RSpec.
I think what you are doing is fine and I think it's the way rspec is intended to be used. Every statement (specification) about your app gets its own block.
You might consider using before(:all) so that the order only has to be processed once, but this can introduce dependencies on the order in which the specs run.
You could combine all the code inside describe "#process_order" into one big it block if you wanted to, but then it would be less readable and rspec would give you less useful error messages when a spec fails. Go ahead and add a raise to one of your tests to see what a nice error message you get from rspec when you do it the way you are currently doing it.
If you want to test the entire process, then we're talking about an integration test, not a unit test. If you want to test the #process_order method, which does several things, then I'd expect those things to mean calling other methods. So I would add #should_receive expectations and make sure that all paths are covered. Then I would spec all those methods separately, so I have a nice unit spec suite for everything. Finally, I would definitely write an integration/acceptance spec which checks that all those pieces work together.
Also, I would use #let to set up test objects, which removes dependencies between spec examples (it blocks). Otherwise a failure in one example may cause a failure in another, giving you incorrect feedback.
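A sketch of that message-expectation style, using Minitest's stdlib mocks in place of RSpec's #should_receive so it runs standalone. The Store implementation and the collaborator method names (send_confirmation_email, charge_credit_card) are hypothetical, purely for illustration:

```ruby
require "minitest/autorun"
require "minitest/mock"

# Hypothetical Store whose #process_order delegates each step to a
# collaborator, which is what makes the mock-based unit test possible.
class Store
  def initialize(mailer:, gateway:)
    @mailer = mailer
    @gateway = gateway
  end

  def process_order(order)
    @mailer.send_confirmation_email(order)
    @gateway.charge_credit_card(order)
  end
end

class StoreProcessOrderTest < Minitest::Test
  def test_delegates_each_step_of_order_processing
    order   = Object.new
    mailer  = Minitest::Mock.new
    gateway = Minitest::Mock.new

    # Expect exactly one call to each collaborator, with the order passed through
    mailer.expect(:send_confirmation_email, true, [order])
    gateway.expect(:charge_credit_card, true, [order])

    Store.new(mailer: mailer, gateway: gateway).process_order(order)

    # verify raises if an expected message was never received
    assert mailer.verify
    assert gateway.verify
  end
end
```

Each collaborator (the mailer, the payment gateway) then gets its own unit spec, and a separate integration spec confirms the pieces work together end to end.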