We have a small method that some of our other teams use internally. I'm writing tests for it, but I have run into a small issue:
The method itself checks that the request comes from a specific server (request.host). I have tried stubbing, but perhaps I was stubbing the wrong thing? I tried .any_instance on the controller I was testing, then controller.any_instance, but neither worked.
I have a hunch that I might be able to spoof it using Devise, but so far Google has yet to yield anything useful.
I feel mildly stupid for not trying this first, but:
In a test where you are trying to spoof request.host, the way to set it is:
drumroll please...
request.host = "dev.example.com"
If you are testing subdomains, I have a writeup with some code here: http://www.chrisaitchison.com/2013/03/17/testing-subdomains-in-rails/
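For context, here is a minimal sketch of how that looks in a controller spec. The controller, action, and host value are assumptions, not from the original question:

describe WidgetsController do
  describe "GET #index" do
    it "accepts requests from the internal host" do
      request.host = "dev.example.com"  # spoof the host before issuing the request
      get :index
      expect(response.status).to eq(200)
    end
  end
end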
I have a rails application that uses Recurly for its transactions. I am trying to write automated tests for some of the helper functions that I have written.
A super simple example of a function...
def status_for_display
  transaction.status.capitalize
end
In order to test these functions, I need to have a Recurly::Account object as well as associated Recurly::Transaction objects.
I have tried going the route of using Recurly::Account.create and Recurly::Transaction.create, but I cannot seem to get the transactions to match up with the account.
I am also wondering if it doesn't just make better sense to use the VCR gem to make this happen. In which case, how would I go about doing that? I've never really managed to get VCR set up properly.
VCR is, by and large, plug and play. Once you have it configured and enabled, it'll intercept all HTTP requests and try to play back the data from a cassette. The problem with VCR, though, is that it is request-data specific. In order for it to work right, you need to ensure that your tests are always sending the exact same request params to Recurly. You can work around this by having it skip certain things, but it's generally a pain.
The other option is to just use something like WebMock directly and house your own "known responses" for your Recurly calls, but then it's up to you to ensure that your responses stay in sync with the API.
In the end, I'd probably recommend going the VCR route but structuring your tests such that you have known good and bad test scenarios so you can get the real benefits of the cassettes.
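To make that concrete, here is a minimal VCR setup sketch. The file locations, cassette name, and the specific Recurly lookup are assumptions; the first run records the real API responses and later runs replay them without hitting the network:

# spec/support/vcr.rb
require "vcr"

VCR.configure do |config|
  config.cassette_library_dir = "spec/cassettes"  # where recorded responses live
  config.hook_into :webmock                       # intercept HTTP with WebMock
end

# In a spec, wrap the Recurly calls in a cassette.
describe "#status_for_display" do
  it "returns a transaction with a status" do
    VCR.use_cassette("recurly/account_with_transaction") do
      # this account code must exist in Recurly when the cassette is first recorded
      account = Recurly::Account.find("known-account-code")
      expect(account.transactions.first.status).not_to be_nil
    end
  end
end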
Currently I'm writing a unit test for a method where the only thing I need from Rails is the URL helper methods, like 'product_url'.
I really don't want to load the whole Rails environment, which, as we all know, is a bit expensive, just to be able to run the URL helper methods.
Is there a way of loading only part of the Rails environment, just what's necessary to run these methods?
Thanks!
Alex
As far as I know, it is not so easy. But you can use a Spork server to keep the Rails testing environment in memory, so your tests run fast and smooth.
I'm with Nick_Kugaevsky; in fact, I'm a lot more negative than he is. I just don't think it's possible. To have the concept of a helper, Rails has to load ActionPack, which is not very useful by itself. I guess you could figure out a way to work directly with ActionPack, but I'd be extremely surprised if that were possible.
You can try to mock all the *_path and *_url helpers in your tests.
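For example, a sketch of that approach, with hypothetical names (PriceBadge, product_url) just to show the shape:

describe PriceBadge do
  it "links to the product without loading routes" do
    badge = PriceBadge.new(product_id: 42)
    # stub the URL helper on the object under test instead of loading Rails routing
    allow(badge).to receive(:product_url).and_return("http://test.host/products/42")

    expect(badge.to_html).to include("/products/42")
  end
end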
I'm new to testing, and having trouble debugging the way I might normally in a model or controller.
I've created a user from a factory (using FactoryGirl, if that makes a difference), and I'm pretty sure the create method is failing because of validation when saving. However, I'd love to know how to debug a model instantiated during testing.
I've tried:
user.inspect
puts user
raise user.to_yaml
(The latter works, but it stops execution of the rest of my tests, and it doesn't show validation errors; it only proves the existence or non-existence of the model I tried to instantiate.)
Other than raising the model as an error, there is no debug output during testing, and the only other thing I've been able to do is tail the log for my test DB and see what's happening there, but it seems clunky at best. What methods would you suggest for accomplishing what I'm after?
Thank you for any direction
Got the answer, and it couldn't be easier (this user phrased my question much better and more concisely :)
How do I output a variable in an RSpec test?
Short answer: use pretty print:
pp user
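Since the original problem was a save that seems to fail validation, pretty-printing the errors is usually the most direct route. A small sketch, assuming a FactoryGirl-built user:

user = FactoryGirl.build(:user)
user.valid?                    # runs validations without saving
pp user.errors.full_messages   # e.g. ["Email can't be blank"]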
Check out plymouth: https://github.com/banister/plymouth
It works with Pry to give you a nice REPL and debugging interface when a test fails. I've used it on some projects and found it very handy for obscure issues in tests.
You can always crack open a Pry session by adding binding.pry to your test. That way you have access to whatever is available at that exact point in the test (and elsewhere if desired). There is more info on adding Pry to Rails in the official Pry wiki.
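A quick sketch of what that looks like; the example spec and factory are assumptions:

require "pry"

it "creates a valid user" do
  user = FactoryGirl.build(:user)
  user.save
  binding.pry   # opens a REPL here; inspect user, user.errors, user.persisted?, etc.
  expect(user).to be_persisted
end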
I'm trying to get the best trade-off between code coverage and development time.
Currently I use rspec+shoulda to test my models and rspec+capybara to write my acceptance tests.
I tried writing a controller test for a simple CRUD action, but it took too long and I ended up with a confusing test (my bad, probably).
What's the best practice for controller testing with RSpec?
Here are gists of my test and my controller (one test does not pass yet):
https://gist.github.com/991687
https://gist.github.com/991685
Maybe not.
Sure you can write tests for your controller. It might help write better controllers. But if the logic in your controllers is simple, as it should be, then your controller tests are not where the battle is won.
Personally I prefer well-tested models and a thorough set of integration (acceptance) tests over controller tests any time.
That said, if you have trouble writing tests for controllers, then by all means do test them. At least until you get the hang of it. Then decide whether you want to continue or not. Same goes for every kind of test: try it until you understand it, decide afterwards.
The way I view this is that acceptance tests (i.e. Cucumber / Capybara) test the interactions that a user would normally perform on the application. This usually covers things like whether a user can create a specific resource with valid data, and whether they see errors if they enter invalid data. A controller test is more for things that a user shouldn't normally be able to do, or extreme edge cases that would be too (cu)cumbersome to test with Cucumber.
Usually when people write controller tests, they are effectively testing the same thing. The only reason to test a controller's method in a controller test is for edge cases.
Edge cases such as: if a user enters an invalid ID for a show page, they should be shown a 404 page. This is a very simple kind of thing to test with a controller test, and I would recommend doing that. You want to make sure that when they hit the action they receive a 404 response, boom, simple.
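A sketch of that 404 edge case; the controller name and its rescue behaviour are assumptions:

describe ProductsController do
  describe "GET #show" do
    it "returns a 404 for an unknown id" do
      get :show, id: "does-not-exist"
      expect(response.status).to eq(404)
    end
  end
end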
Making sure that your new action responds successfully and doesn't syntax error? Please. That's what your Cucumber features would tell you. If the action suddenly develops a Case of the Whoops, your feature will break and then you will fix that.
Another way of thinking about it is do you want to test a specific action responds in a certain way (i.e. controller tests), or do you care more about that a user can go to that new action and actually go through the whole motions of creating that resource (i.e. acceptance tests)?
Writing controller tests gives your application permission to lie to you. Some reasons:
Controller tests are not executed in the environment your application actually runs in, i.e. they do not sit at the end of a Rack middleware stack, so things like users are not available when using Devise (as a single, simple example). As Rails moves more to a Rack-based setup, more Rack middlewares are used, and your test environment deviates increasingly from the 'unit' behaviour.
You're not testing the behaviour of your application, you're testing the implementation. By mocking and stubbing your way through, you're re-implementing the implementation in spec form. One easy way to tell if you're doing this: if you don't change the expected behaviour of a URL's response, but do change the implementation of the controller (maybe even map the route to a different controller), do your tests break? If they do, you're testing implementation, not behaviour. You're also setting yourself up to be lied to. When you stub and mock, there are no assurances that the mocks or stubs you've set up do what you think they do, or even that the methods they're pretending to be still exist after refactoring occurs.
Calling controller methods is impossible via your application's 'public' API. The only way to get to a controller is via the stack and the route. If you can't break it from a request via a URL, is it really broken?
I use my tests as an assurance that my application is not going to break when I deploy it. Controller tests add nothing to my confidence that my application is indeed functional, and their presence actually decreases my confidence.
One other example: when testing the behaviour of your application, do you care that a particular template file was rendered or that a certain exception was raised, or do you instead care that the behaviour of your application is to return the right content to the client with a particular status code?
Testing controllers (or views) increases the burden of tests that you impose on yourself, and means that the cost of refactoring is higher than it needs to be because of the potential to break tests.
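By way of contrast, a sketch of exercising the application through its public API with a request-style spec; the route and expected status are assumptions:

describe "Products", type: :request do
  it "returns a 404 for a product that does not exist" do
    get "/products/does-not-exist"
    expect(response.status).to eq(404)
  end
end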
Should you test? Yes.
There are gems that make testing controllers faster:
http://blog.carbonfive.com/2010/12/10/speedy-test-iterations-for-rails-3-with-spork-and-guard/
Definitely test the controller. A few painfully learned rules of thumb (see the sketch below):
mock out model objects
stub model object methods that your controller action uses
sacrifice lots of chickens.
I like to have a test on every controller method at least just to eliminate stupid syntax errors that may cause the page to blow up.
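Here is a sketch of the mock-and-stub approach from the rules above; WidgetsController and the Widget model are assumptions:

describe WidgetsController do
  describe "GET #show" do
    it "assigns the requested widget" do
      widget = double("Widget", id: 1)
      # stub the model lookup the action uses so no database is touched
      allow(Widget).to receive(:find).with("1").and_return(widget)

      get :show, id: "1"
      expect(assigns(:widget)).to eq(widget)
    end
  end
end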
A lot of people seem to be moving towards the approach of using Cucumber for integration testing in place of writing controller and routing tests.
I'm developing a Rails app, and I was just talking with my colleague about the fact that we have a mix of fixtures and mocks in our tests, which we write using Cucumber and RSpec. The question is: when should each one be used?
I would use a mock object when using the real object is impracticable or not necessary. Let's say, for example, you need to call some remote API such as an address finder keyed by zip code. You would probably want to mock that object so the calls to it aren't actually made each time you run your tests. There are other reasons too, such as improving speed, dealing with data that changes when you need an exact response, or code that doesn't exist yet. Mocking allows you to test things in isolation: you decide that when these methods are called on the mock object, a known value comes back, and you don't actually need to run the real code, because for this test it's not important.
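A sketch of mocking that kind of remote lookup; AddressFinder and Checkout are hypothetical names, and the real HTTP call never happens:

describe Checkout do
  it "uses the city looked up from the zip code" do
    finder = double("AddressFinder")
    allow(finder).to receive(:city_for_zip).with("90210").and_return("Beverly Hills")

    checkout = Checkout.new(address_finder: finder)
    expect(checkout.shipping_city("90210")).to eq("Beverly Hills")
  end
end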
If you use fixtures, you will have a real object; its methods will be called and their code run, unless of course you stub the methods out, which is a topic for another question.
Hope that helps a little. There is a good PeepCode screencast on mocking and stubbing (http://peepcode.com/products/rspec-mocks-and-models); maybe check it out.