I'm new to testing, and I'm having trouble debugging the way I normally would in a model or controller.
I've created a user from a factory (using FactoryGirl, if that makes a difference), and I'm pretty sure the create method is failing because of validation when saving. However, I'd love to know how to debug a model instantiated during testing.
I've tried:
user.inspect
puts user
raise user.to_yaml
(The latter works, but stops execution of the rest of my tests, and doesn't show validation errors--it only proves the existence or non-existence of the model I tried to instantiate.)
Other than raising the model as an error, there is no debug output during testing, and the only other thing I've been able to do is tail the log for my test DB and see what's happening there, but it seems clunky at best. What methods would you suggest for accomplishing what I'm after?
Thank you for any direction
Got the answer, and it couldn't be easier (this user phrased my question much better and more concisely):
How do I output a variable in a rspec test?
Short answer: use pretty print:
pp user
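Since the original problem was a failing validation, it also helps to pretty print the errors themselves. A minimal sketch (assuming a :user factory; build is used instead of create so a failed save doesn't raise):

it 'creates a valid user' do
  user = FactoryGirl.build(:user)
  user.save                        # save rather than save! so validation failures don't raise
  pp user.errors.full_messages     # prints any validation messages to the test output
  expect(user).to be_valid
end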
Check out plymouth: https://github.com/banister/plymouth
It works with Pry to give you a nice REPL and debugging interface when a test fails. I've used it on some projects and found it very handy for obscure issues in tests.
You can always crack open a Pry session by adding binding.pry to your test. That way you have access to whatever is available at that exact point in the test (and elsewhere if desired). There's more info on adding Pry to Rails in the official Pry wiki.
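As a minimal sketch of the setup (the gem choice is just one common option, not the only one):

# Gemfile
group :development, :test do
  gem 'pry'   # pry-nav or pry-byebug add step/next/continue navigation
end

After bundling, drop binding.pry on any line of a spec and run it; execution stops at that line with a REPL that can see everything in scope, including the model you just built.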
Using Paper Trail, I've built a change-tracking feature into my app. It works great in production. One aspect of it is that it uses the 'whodunnit' field to pull the user name and show which user made the change.
I've looked at the documentation (https://github.com/airblade/paper_trail) and I see the note about rspec and whodunnit, but I didn't think the reference meant the 'whodunnit' field is not available, only that it would be nil'ed between tests.
Details:
I'm using the test helper for rspec, and I have my rspec feature test set up with
"..., versioning: true do"
Moreover, while debugging during the test run, I can see that all the other fields for the event/change are there and saved in the database; only whodunnit is not being saved. Interestingly, I'm only having problems in test. It works fine in production; it just doesn't work in test. And of course, I have
before_action :set_paper_trail_whodunnit
set (in my application controller, not the specific controller, but I moved it just to see if it makes a difference, and unsurprisingly, it doesn't).
EDIT:
I should have mentioned that I checked to confirm that there is a column 'whodunnit' in the test database while the tests are running. Also, there is a method current_user (as expected, otherwise it would not work in production) available in the controller.
I found this in a PaperTrail issue on GitHub:
Unless you are running controller and / or integration specs prior to running your query it's likely that the whodunnit column is not being populated.
I ended up manually setting my whodunnit in my specs with
PaperTrail.controller_info[:whodunnit] = user.id
which kind of feels like it defeats the purpose a little. But I'm running this in a unit spec rather than a feature spec, so I think that's the only way to go.
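Put together in a spec, that looks roughly like this (a minimal sketch; Widget is a hypothetical versioned model, and it assumes PaperTrail's rspec helper is loaded so controller_info starts out as an empty hash and versioning: true is honoured):

RSpec.describe Widget, versioning: true do
  let(:user) { FactoryGirl.create(:user) }

  before do
    # no controller runs in a unit spec, so populate whodunnit by hand
    PaperTrail.controller_info[:whodunnit] = user.id
  end

  it 'records who made the change' do
    widget = Widget.create!(name: 'first')
    # whodunnit is a string column, so compare against the id as a string
    expect(widget.versions.last.whodunnit).to eq(user.id.to_s)
  end
end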
This is maybe a bit of an odd question; I've googled around and haven't seen it asked (but maybe I'm asking the wrong question).
So I've been rewriting integration tests using factories (which is interesting, and thanks to the fine people at this site for your help!). But I'm pretty much relying on screenshots to see that what I want to happen in the tests is actually happening.
I'm using a test database (SQLite, to be specific), and since I'm using factories the data of course gets deleted once the test is over, so it's like it never happened. So even running a server in the test environment, I can't really "manually" verify the data.
Is there a good way to manually verify the data? Like stopping the test RIGHT at the end, or pausing it temporarily? I suppose I could put in a gigantic sleep... but surely there is a better way?
Put a breakpoint in your test, then use whatever commands you like to query the db.
I use gem 'pry-rails' in my Gemfile,
then you simply put the line binding.pry in the test, and it will freeze there in the terminal so you can check variables and make db calls like User.all or User.count. If you also add the gem pry-nav, you can navigate with next, step, continue, etc.
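A minimal sketch of that in a Capybara feature spec (Product and its factory are hypothetical names here):

RSpec.feature 'Creating a product' do
  scenario 'saves the record' do
    product = FactoryGirl.create(:product, name: 'Widget')

    binding.pry   # the run pauses here; inspect Product.count, product.attributes, run queries, etc.

    expect(Product.count).to eq(1)
  end
end

Type exit (or continue, with pry-nav) to let the example finish, after which the test data is cleaned up as usual.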
We have a small method that some of our other teams use internally. I'm writing tests for it, but I have run in to a small issue:
The method itself checks to ensure the request comes from a specific server (request.host). I have tried stubbing, but perhaps I was stubbing the wrong controller? I tried .any_instance on the controller I was testing, then I tried controller.any_instance, but neither worked.
I have a hunch that I might be able to spoof it using devise, but so far google has yet to yield much usefulness.
I feel mildly stupid for not trying this first, but:
If you are trying to spoof request.host, the way to set it in your test is:
drumroll please...
request.host = 'dev.example.com'
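In context, that might look like this (a minimal sketch; InternalController, the action, and the expected status are hypothetical stand-ins for the real method under test):

RSpec.describe InternalController, type: :controller do
  it 'accepts requests from the allowed host' do
    request.host = 'dev.example.com'   # spoof the host before issuing the request
    get :index
    expect(response).to have_http_status(:ok)
  end
end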
If you are testing subdomains, I have a writeup with some code here: http://www.chrisaitchison.com/2013/03/17/testing-subdomains-in-rails/
I'm learning new things about Rails every day (I started with Rails 2 and am now migrating to Rails 3 concepts). As a Rails project, I'm coding a browser game. Until now, I've been using fixtures to load data into my database, and I've created a custom task to recreate the db, load the fixtures, etc. whenever I need to. I have to say that I like this approach, because I can easily define my monsters with weapons, bonuses, etc. through Active Record associations in the fixtures.
However, I see that people use a testing framework like RSpec for this kind of thing. Although I understand that RSpec is used as a language to define proper behaviour, I don't clearly see how it could help me. But since I like to do things the correct way, I'm pretty sure there is much more for me to understand and read about it.
Therefore, I would like to ask for a solid example of how RSpec could be helpful. For instance, let's say a user creates an alliance. In my code I check whether this user already has an alliance, whether an alliance with that name already exists, whether he has the money to create the alliance, and more. Where would RSpec fit in here? What would a nice usage example be?
Moreover, are fixtures handled differently when using RSpec?
I already know that Rails offers many great programmer conveniences, and I would like to take advantage of this one as well. But I'm still ignorant about RSpec, which is why I would appreciate some useful insight. Thank you :)
RSpec is a testing framework. It lets you write automated pieces of code that verify your application code is actually working. For example, you could write a test to make sure that no two alliances can have the same name:
describe Alliance do
  it 'should not have the same name as another alliance' do
    first = Alliance.create :name => "Test Name"
    second = Alliance.new :name => "Test Name"   # new, not build: ActiveRecord has no class-level build here
    second.should_not be_valid
  end
end
This code would then verify that no two alliances can have the same name - it's a way of testing that your code is actually working.
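For that spec to pass, the Alliance model itself needs a matching validation, something like this (a minimal sketch, assuming a name column on alliances):

class Alliance < ActiveRecord::Base
  # each alliance must have a unique name
  validates :name, :presence => true, :uniqueness => true
end

If someone later removes or breaks that validation, the spec above fails and tells you so.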
Fixtures, factories, mocks and stubs are all ways of creating temporary data that can be used in tests. RSpec can use fixtures, but it doesn't require them. If you want to load test data into your database, you can do it in whichever way best suits your needs.
Some other testing frameworks are Cucumber, TestUnit, MiniTest and Shoulda. You should already be using one of them to write tests for your code. You can also read up on the others to find out which framework best suits your needs.
I have been wondering about the usefulness of writing tests that mirror the code one-to-one.
Just an example: in Rails, you can define the seven RESTful routes in one line of routes.rb using:
resources :products
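For reference, that one line expands to roughly these seven routes (a comment sketch of the Rails 3 era defaults):

# config/routes.rb
resources :products
# generates, roughly:
#   GET    /products            products#index
#   GET    /products/new        products#new
#   POST   /products            products#create
#   GET    /products/:id        products#show
#   GET    /products/:id/edit   products#edit
#   PUT    /products/:id        products#update
#   DELETE /products/:id        products#destroy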
BDD/TDD prescribes that you test first and then write code. In order to test the full effect of this line, devs have come up with macros, e.g. for shoulda: http://kconrails.com/2010/01/27/route-testing-with-shoulda-in-ruby-on-rails/
class RoutingTest < ActionController::TestCase
  # simple
  should_map_resources :products
end
I'm not trying to pick on the guy who wrote the macros; this is just an example of a pattern that I see all over Rails.
I'm just wondering what the use of it is... in the end you're just duplicating code, and the only thing you test is that Rails works. You might as well write a tool that transforms your test macros into actual code...
When I ask around, people answer me that:
"the tests should document your code, so yes it makes sense to write them, even if it's just one line corresponding to one line"
What are your thoughts?
To me it's good practice because it protects against big mistakes.
For example, suppose that after a merge or an edit this resource declaration gets deleted. How would you know?
With this test you can see immediately that the resource is still needed. If you want to change or delete it, you have to make two changes; a single change can't slip through by mistake.
As my boss (who is also a coder, by the way) says, these are fine examples of testing, but sometimes you just have to know what to test. In your case, testing that Rails works is good, but as we all know, it IS already tested. We don't need to write tests for that.
In our dev cycle, we write tests only for the things that aren't simple and aren't already tested. Say you have a Paperclip attachment on a model. We don't test that attaching works; there's already a test for that made by the Paperclip people. What you test is whether your model can access that attachment, or the process by which you attach it, or something along those lines.
Something like that :) Hope that makes sense
Rails already has tests for its routing DSL. The only benefit of the macro is that it tests whether or not you have actually included the declaration in your routes file, which should already be implicitly covered by your integration test suite.
Remember, adding tests adds more code to maintain. Every time you have some code to write, you should ask yourself whether it is worth the time involved in writing and maintaining tests for it. When you are learning test-first, however, it is probably better to be strict, as it takes time to learn what is worth testing and what isn't.
Hope that helps.
It's worth looking under the hood to see what the shoulda macro actually does. It checks each route generated by resources :products, i.e. it doesn't simply verify that the resources statement exists in the routes file. So this is not really an instance of testing Rails code that has already been tested.
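If you happen to be on rspec-rails, the per-route version of what the macro checks looks roughly like this (a minimal sketch using the route_to matcher; only two of the seven routes are shown):

RSpec.describe 'product routes', type: :routing do
  it 'routes GET /products to the index action' do
    expect(get: '/products').to route_to(controller: 'products', action: 'index')
  end

  it 'routes POST /products to the create action' do
    expect(post: '/products').to route_to(controller: 'products', action: 'create')
  end
end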
I don't think that you should test something like that explicitly.
When a page is rendered in a test, form_for will complain if a route does not exist, and then I add it. Whether it's declared the resources way or as a named route doesn't matter, in my view. Later, when there are multiple named routes, you can refactor the routes file to use resources instead of the individual named routes.
There's more to testing than just helping you write better code. You need to make sure that code continues to work in the future... regression testing. That's what these kinds of tests provide.