Using PaperTrail, I've added a change-tracking feature to my app. It works great in production. One aspect is that it uses the 'whodunnit' field to pull the user name and show which user made the change.
I've looked at the documentation (https://github.com/airblade/paper_trail) and I see the note about RSpec and whodunnit, but I didn't take it to mean the 'whodunnit' field is unavailable, only that it would be nil'ed between tests.
Details:
I'm using the RSpec test helper, and I have the RSpec feature test set up with
"..., versioning: true do"
Moreover, when debugging while the test runs, I can see that all the other fields for the event/change are there and saved in the database; only whodunnit is not being saved. Interestingly, I'm only having problems in test: it works fine in production, it just doesn't work in test. And of course, I have
before_action :set_paper_trail_whodunnit
set in my application controller, not the specific controller (though I moved it there just to see if it made a difference, and unsurprisingly, it didn't).
EDIT:
I should have mentioned that I checked and confirmed that the 'whodunnit' column exists in the test database while the tests are running. Also, a current_user method is available in the controller (as expected; otherwise it would not work in production).
I found this in a PaperTrail issue on GitHub:
Unless you are running controller and / or integration specs prior to running your query it's likely that the whodunnit column is not being populated.
I ended up manually setting my whodunnit in my specs with
PaperTrail.controller_info[:whodunnit] = user.id
which kind of feels like it defeats the purpose a little. But I'm running this in a unit spec rather than a feature spec, so I think that's the only way to go.
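In context, that workaround looks something like this; a sketch that assumes a FactoryGirl :user factory and a Widget model, and note that the exact API varies by PaperTrail version (newer releases expose PaperTrail.request.whodunnit= instead):

# in a unit (model) spec
let(:user) { FactoryGirl.create(:user) }

before do
  # mirror what set_paper_trail_whodunnit would normally do in a controller
  PaperTrail.controller_info[:whodunnit] = user.id
end

it 'records who made the change', versioning: true do
  widget = Widget.create!(name: 'First')
  expect(widget.versions.last.whodunnit).to eq(user.id.to_s)  # whodunnit is stored as a string
end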
Related
I'm finding Rails integration tests useful for testing flows, and I have some questions about the industry standard for replacing controller tests (deprecated in Rails 5) with integration tests.
Usually we have tiny controllers that read the parameters, call the right collaborator, and prepare the response, so it is easy to test them by mocking the collaborator directly on the controller object.
I am concerned about the overhead of migrating every controller test to an integration test that persists to the database. What are the standards for this case?
What's the standard when testing just one route/action and not a complete flow?
How can we replace this?:
@controller.stubs(:authenticate).returns(true)
Integration tests are intended to mimic a real user. They're meant to exercise the application in its entirety.
Opinion varies on what this means. To me, it means you should avoid stubbing/mocking completely: not a single thing stubbed or mocked, everything executed in full. This means that every integration test I write goes through the actual authentication process of typing in a username and password. Some of the steps are redundant, yes.
Integration tests are slower all around than unit/controller tests. Cutting out the authentication steps likely won't save you enough time to make a difference in the long run (no pun intended).
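In practice, that means replacing the stub with a spec that walks through the real sign-in form. A rough sketch, assuming Capybara, a Devise-style sign-in page, and a FactoryGirl :user factory (labels, paths, and page content are placeholders):

# spec/features/placing_an_order_spec.rb
RSpec.feature 'Placing an order' do
  let(:user) { FactoryGirl.create(:user, password: 'password123') }

  # go through the actual authentication process, like a real user would
  before do
    visit new_user_session_path
    fill_in 'Email', with: user.email
    fill_in 'Password', with: 'password123'
    click_button 'Log in'
  end

  scenario 'shows the orders page after signing in' do
    visit orders_path
    expect(page).to have_content('Your orders')
  end
end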
I'm trying to run some tests in Ruby on Rails, but I don't know how to stay logged in while testing. I tried:
setup do
  sign_in FactoryGirl.create(:admin)
end
But I get an error (duplicate), because it's running before EACH test.
duplicate key value violates unique constraint "index_users_on_email"
https://github.com/plataformatec/devise#integration-tests
How can I be logged in only once for all tests?
I had this as a comment but couldn't get the formatting right, so I'm putting it here.
Why do you want to log in only once for all the tests?
In theory, the point of isolated testing is to have each unit run as if it were a single interaction.
If you wanted to call something a single time, before the entire suite, you could put it in the spec helper:
config.before(:suite) do
  # Do Something Once
end
Are you talking about feature specs? Models? Controllers?
But looking at what you posted, it appears you are probably talking about a feature spec, in which case I would say: do not do that. Let the user log in for each spec.
Side Note:
Given the error you have, you need to make sure you clean your DB before each spec runs. Check out the Database Cleaner gem for this; it will allow you to have that user recreated for each spec.
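A typical Database Cleaner setup looks roughly like this; a sketch for RSpec, where the strategy choice is an assumption you may want to adjust (e.g. :truncation for Selenium-driven specs):

# spec/rails_helper.rb
RSpec.configure do |config|
  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)   # start the suite from an empty database
    DatabaseCleaner.strategy = :transaction
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }  # wraps each spec, so the user can be recreated every time
  end
end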
I am still new to Rails, but I went through Michael Hartl's book (super helpful, btw).
However, I started using Devise (also super helpful) and I had a question about testing with rspec. In Hartl's book, there are all these tests for user validations like uniqueness of email or just whether or not a user is created with valid attributes.
Are simple tests like this needed if I am using Devise?
Part of the reason I am asking is that I can't figure out how to write the tests even though I know they are working. In general, do you need to test gems to see if their internal functionality is working? Or can I just assume that will work, only test that a user can be created and logged in, and be done with it?
I do test how I use CanCan, in that I test whether I was redirected or not, but that seems more like testing that the rules I created are right. Testing the inner functions of Devise seems excessive?
I would consider testing the inner functions of Devise to be excessive. However, if you make changes to Devise, whether the views, controllers, or validations, then it would be appropriate to test your changes.
I also always have request/acceptance tests that cover the sign-in/sign-up part of my app.
You also asked:
In general do you need to test gems to see if their internal functionality is working?
No, this is generally not done. One thing I like to look for on the gem's GitHub page or website: do they link to Travis CI, so you can see whether their current test suite passes? In general, test at your level, not one beneath. For example, don't test that Rails works; just use Rails.
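For example, rather than re-testing Devise or CanCan themselves, a request spec can check that your own authorization rules behave as intended. A sketch that assumes a recent Devise (which provides Devise::Test::IntegrationHelpers), FactoryGirl :user and :admin factories, and hypothetical paths:

# spec/requests/admin_access_spec.rb
RSpec.describe 'Admin area access', type: :request do
  include Devise::Test::IntegrationHelpers  # provides sign_in for request specs

  it 'redirects ordinary users away from the admin area' do
    sign_in FactoryGirl.create(:user)
    get admin_dashboard_path
    expect(response).to redirect_to(root_path)  # exercises my rule, not Devise/CanCan internals
  end

  it 'lets admins through' do
    sign_in FactoryGirl.create(:admin)
    get admin_dashboard_path
    expect(response).to have_http_status(:ok)
  end
end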
I'm new to testing, and I'm having trouble debugging the way I normally would in a model or controller.
I've created a user from a factory (using FactoryGirl, if that makes a difference), and I'm pretty sure the create method is failing because of validation when saving. However, I'd love to know how to debug a model instantiated during testing.
I've tried:
user.inspect
puts user
raise user.to_yaml
(The latter works, but it stops execution of the rest of my tests, and it doesn't show validation errors; it only proves the existence or non-existence of the model I tried to instantiate.)
Other than raising the model as an error, there is no debug output during testing, and the only other thing I've been able to do is tail the log for my test DB and see what's happening there, but it seems clunky at best. What methods would you suggest for accomplishing what I'm after?
Thank you for any direction
Got the answer; it couldn't be easier (this user phrased my question much better and more concisely :)
How do I output a variable in a rspec test?
Short answer: use pretty print:
pp user
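Since the underlying problem was a failing validation, it also helps to print the validation errors themselves; a quick sketch:

user = FactoryGirl.build(:user)
user.valid?                      # runs the validations without saving
pp user.errors.full_messages     # shows exactly which validations failed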
Checkout plymouth: https://github.com/banister/plymouth
It works with Pry to give you a nice REPL and debugging interface when a test fails. I've used it on some projects and found it very handy for obscure issues in tests.
You can always crack open a Pry session by adding binding.pry to your test. That way you have access to whatever is available at that exact point in the test (and elsewhere if desired). More info on adding Pry to Rails is in the official Pry wiki.
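For example, with the pry gem available in the test group, a minimal sketch looks like this:

it 'creates a valid user' do
  user = FactoryGirl.build(:user)
  binding.pry   # execution pauses here; inspect user, user.errors, etc., then type `exit` to continue
  expect(user).to be_valid
end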
I'm currently writing specs for my Ruby on Rails application using RSpec and Capybara, with Selenium driving the browser.
While executing one of the specs I want to change the value of a session variable.
E.g., I want to set session[:location] = "US" so that I can test my application with all monetary values shown in $. How do I go about it?
Capybara/Selenium specs are for acceptance testing. You shouldn't do any kind of mocking, stubbing, or... changing session values directly. You should interact with your application from within the spec just like a normal user would in the browser.
How is the location being set in your app? Can the user set it manually? If so, you should do it in the spec in a before block.
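For instance, if the location can be chosen in the UI, the spec can set it the way a user would before making any assertions; a sketch in which the path, the 'Location' select, and the page content are hypothetical:

# spec/features/pricing_spec.rb
RSpec.feature 'Prices in the selected location', js: true do
  before do
    visit root_path
    select 'United States', from: 'Location'  # hypothetical location picker in the UI
    click_button 'Save'
  end

  scenario 'shows prices in dollars' do
    visit products_path
    expect(page).to have_content('$')
  end
end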
It's not exactly as you say. In a Cucumber scenario we get the chance to test certain cases, and you need a way to create some background for those cases, like creating a user in a Given block or adding something to the database. The session is the same kind of resource as the database, and I think you should be able to prepare it for tests, no matter how strongly it's tied to the end user.
Imagine that you build a multi-step application where you persist some info between steps in the session. Your client couldn't even imagine his reservation without any one of those few steps, so from that point of view it seems to make no sense to write an acceptance test for each step separately. But after a month your client wants to add some extra, super user-friendly validation on the 4th step. Now he is only interested in that step and that validation. He could probably click through the earlier steps again just to reach the 4th, but why? He has already seen all that stuff and accepted it.
What do you think about this point of view?