I am writing Rails tests using the standard Test::Unit/TestCase.
Is there any way to filter what gets printed to the log, so that output is only produced for specific test cases?
I have a functional test file with many test cases in it, and I'm really only interested in debugging one of them. Printing my own log statements still requires searching through a few thousand lines of generated log. Alternatively, something similar to RSpec's 'pending' functionality would do.
Run ruby test/unit/my_model.rb from the command line to run a single test suite. You can also use a debugger, such as pry or the one wrapped by RubyMine, to stop on a specific test case and look at the log.
But if a sledgehammer does not solve the problem, you can use tweezers: set config.logger.level = Logger::WARN in your test.rb, from "Set logging levels in Ruby on Rails".
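A minimal sketch of combining the two (the class, method, and test names are placeholders): keep test.log quiet globally via the test.rb setting above, then temporarily re-enable DEBUG around the one test you care about.

class MyModelTest < ActiveSupport::TestCase
  # Temporarily lower the log level for the duration of a block.
  def with_debug_logging
    old_level = Rails.logger.level
    Rails.logger.level = Logger::DEBUG
    yield
  ensure
    Rails.logger.level = old_level
  end

  def test_the_interesting_case
    with_debug_logging do
      # ... exercise the code you want verbose logs for ...
    end
  end
end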
It is probably better if, instead of throttling the output to log/test.log, you become familiar with a command such as grep. Grep lets you run very advanced search queries through files or directories, as long as you're running on some flavor of *nix. The simplest use would be
grep search_term file_name
The reason I say you shouldn't constrict the log output is that someday that could bite you in the **s. Hope this helps.
In Mocha we can use the --grep flag to select specific tests to run:
mocha --grep 'my test'
But as the number of tests grows, some test cases can end up sharing the same name, which makes a single grep insufficient. I'd love to have something like a "nested grep" to select tests more specifically, taking their parent suites into account. Is that possible? Or are there other options for selecting a test to run in a more specific way?
Okay, fortunately I discovered that we can do something like this:
mocha /path/to/specific/file -g 'pattern for specific test case'
It solves my problem as long as case names don't repeat within the scope of one file (but they generally shouldn't). Note also that Mocha matches the --grep pattern against a test's full title, i.e. the enclosing describe titles joined with the test name, so including a parent describe's name in the pattern narrows the match as well.
I'm trying to set up SimpleCov to generate reports for 3 applications that share most of their code (models, controllers) through a local gem, but the specs for the code that each app uses live inside each app's ./spec, not in the gem itself.
For a clearer example: when I run bundle exec rspec spec inside app_1, which uses the shared models from the local gem, I want to get (accurate) reports for all the specs that app_1 has inside ./spec.
The local gem also has some models that belong exclusively to app_2, inside a namespace, so I want to skip the report for those files when I run the test suite inside app_1.
I'm trying to achieve this with something like the following code in app_1/spec/spec_helper:
# These lines are needed to generate reports for the models, etc. inside the local gem.
SimpleCov.adapters.delete(:root_filter)
SimpleCov.filters.clear

SimpleCov.adapters.define 'my_filter' do
  root = SimpleCov.root.split("/")
  root.pop
  add_filter do |src|
    !(src.filename =~ /^#{root.join("/")}/)
  end
  add_filter "/app_2_namespace/"
end

if ENV["COVERAGE"] == "true"
  SimpleCov.start 'rails'
end
This works, until some questions begin to arise.
Why do I get 85% coverage for a model that's inside the gem when its spec is inside app_2? (I'm running the specs inside app_1.)
The first time this was a problem was when I tried to improve that model: I clicked on its report, saw which lines were uncovered, and tried to cover them by writing tests in app_2/spec/namespace/my_model_spec.rb.
But that didn't make any difference. As a more aggressive test I erased all the content of the spec file, but somehow I still got 85% coverage, so my_model_spec.rb is not related to the coverage results of my_model.rb. Kind of unexpected.
But since this file was in app_2, I decided to add a filter to the SimpleCov.start block in app_1's spec_helper, like:
add_filter "/app_2_name_space/"
I then moved to the app_2 folder and started setting up SimpleCov to see what results I would get there. And they turned out weirder.
For the same model I got 100% coverage. I ran the same experiment of emptying the my_model_spec.rb file and still got 100%. So either this is really broken, or I don't understand something.
How does this work? (With the Ruby 1.9 Coverage module, you say? Well, when I run the example from the official documentation locally, I get different results, so I think there's a bug or outdated documentation there.)
ruby-doc: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 1, nil, 0, nil]}
locally: {"foo.rb"=>[1, 1, 10, nil, nil, 1, 0, nil, 1, nil]}
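For reference, here is roughly how the underlying Coverage module is driven (a minimal sketch; foo.rb stands for whatever file you are measuring):

require 'coverage'

Coverage.start
load 'foo.rb'       # code is only measured if loaded after Coverage.start
p Coverage.result   # => {"/full/path/foo.rb"=>[1, 1, 10, nil, ...]}
                    # one entry per line: execution count, nil for non-code lines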
I would hope the reports don't show positive results for lines that merely get evaluated somewhere in the app code, no matter where.
I think the expected behavior is that the results for, say, a model are related to its spec, and the same for controllers, etc.
Is this the case? If so, why am I getting these strange results?
Or do you think the structure of my apps could be interfering with SimpleCov and Coverage?
Thank you for taking the time to read this; if you need more detailed info, just ask.
Regarding your confusion with the model being 100% covered (as I'm not sure that I understand correctly): there's no way for Coverage (and therefore SimpleCov) to know whether your code has been executed from a spec or from somewhere else. Say I have a method foo and a method bar that calls foo. If I invoke bar in my specs, of course foo will also be shown as covered.
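A minimal sketch of that effect (all names are hypothetical):

require 'rspec/autorun'

def foo
  42
end

def bar
  foo + 1   # executing bar runs foo's body as well
end

RSpec.describe "bar" do
  it "returns 43" do
    expect(bar).to eq(43)   # foo is now marked covered, though no spec targets it
  end
end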
As to your general problem: I think it should be possible to have the coverage reported. Just because the source code lives somewhere other than your project root should not lead to the loss of coverage reporting.
Two things in your base config: removing the base adapter (the SimpleCov.adapters.delete(:root_filter) line) is unnecessary, as adapters basically are glorified config chunks, and at this point it will already have been executed (it gets invoked when SimpleCov is loaded). Resetting the filters should be sufficient.
Also, the custom adapter you define is never used. Please refer to the README for how to properly set up adapters, but I think for now you'd be fine with simply putting your configuration in the SimpleCov config block when you start the coverage run:
SimpleCov.start 'rails' do
  your_custom_config
end
What you'll probably want though is a merged coverage report for all your apps. For this, you'll have to define a command_name for each of your spec suites first, inside your config block, like this: command_name 'App1 Specs'.
You'll also have to define a central coverage_path, which will store away your coverage reports across your app suites. Say you have ~/projects/my_project/app[1-3], then putting that into my_project/coverage might make sense. This will lead to your different test suite results getting merged into one single report, just like when using SimpleCov with Cucumber & RSpec for example. Merging has a default timeout of ~10 minutes, so you might need to set this to a higher value using merge_timeout 3600 in your config (those are seconds). For the specifics of these configuration options please again check out the README and the SimpleCov::Configuration documentation. These things are outlined there in fair detail.
So, to sum it up, each of your apps should look somewhat like this:
require 'simplecov'

SimpleCov.start 'rails' do
  reset_filters!
  command_name 'App1 Spec'
  # Assuming this is in my_project/app1/spec/spec_helper.rb; note the leading slash.
  coverage_path File.dirname(__FILE__) + '/../../coverage'
  merge_timeout 3600
end
Next, you might want to add filters to reject all non-project gems by path, and you should be up and running.
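A sketch of such a path-based filter, assuming the spec_helper location above (project_root is just an illustrative name):

SimpleCov.start 'rails' do
  # my_project/ is two levels up from app1/spec/spec_helper.rb
  project_root = File.expand_path('../..', File.dirname(__FILE__))
  # Exclude every file outside the project tree, e.g. bundled gems.
  add_filter do |source_file|
    !source_file.filename.start_with?(project_root)
  end
end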
When I code, I make quite intense use of "puts" statements for debugging. They let me see what happens on the server.
Once the code is debugged, I tend to remove these "puts" statements, for no particular reason.
Is that a good idea, or should I leave them in to give more clarity to my server logs?
You should use the logger instead of puts. Use this kind of statement:
Rails.logger.debug "DEBUG: #{self.inspect} #{caller(0).first}" if Rails.logger.debug?
If you want to see the debug output in (almost) real time, just use the tail command in another terminal window:
tail -F log/development.log | grep DEBUG
Then you do not need to remove these statements in production, and they will not degrade performance much, because the if Rails.logger.debug? guard prevents the (possibly expensive) construction of the message string when debug logging is off.
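An equivalent that drops the explicit guard is the block form of the standard Logger API; the block (and the string interpolation inside it) is only evaluated when the DEBUG level is enabled:

Rails.logger.debug { "DEBUG: #{self.inspect} #{caller(0).first}" }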
Using standard output for debugging is usually bad practice. In cases where you do need such debugging, write to the diagnostic stream STDERR, as in:
STDERR.puts "DEBUG: xyzzy"
Most of the classes in Rails (models, controllers and views) have the method logger, so if possible use it instead of the full Rails.logger.
If you are using older versions of Rails, use the constant RAILS_DEFAULT_LOGGER instead of Rails.logger.
Use the logger: http://guides.rubyonrails.org/debugging_rails_applications.html#the-logger
I use the rails_dt gem, which is designed specifically to make this kind of debugging easier.
Using Rails.logger or puts directly is somewhat cumbersome, since it requires you to put a lot of decorative stuff (DEBUG, ***, etc.) around debug messages to make them stand out from regular, useful messages.
Also, it's often difficult to find and remove the debug output generated by Rails.logger or puts if the message doesn't contain enough searchable characters.
rails_dt prints the origin (file, line), so finding the position in the code is easy. Also, you will never confuse DT.p with anything else; it clearly does debug output and nothing else.
Example:
DT.p "Hello, world!"
# Sent to console, Rails log, dedicated log and Web page, if configured.
[DT app/controllers/root_controller.rb:3] Hello, world!
Gem is available here.
Is there an easy way to log all method calls in a Rails app?
My main use for this would be in testing (and in debugging tests). I want to have more of a history than a stack trace provides (for instance, when running rspec with the '-b' option).
It's easy to do. Just add 5 lines of code into your script/server:
#!/usr/bin/env ruby

set_trace_func proc { |event, file, line, id, binding, classname|
  # Print a line for every method call and return.
  if event == "call" or event == "return"
    printf "%8s %s:%-2d %10s %8s\n", event, file, line, id, classname
  end
}

require File.expand_path('../../config/boot', __FILE__)
require 'commands/server'
It's described at http://phrogz.net/ProgrammingRuby/ospace.html#tracingyourprogramsexecution
Your application will become quite slow and you might get more output than you want. You can easily add more conditions on file/class/function names to avoid printing unwanted stuff.
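On modern Rubies you can do the same with TracePoint, the supported successor to set_trace_func since Ruby 2.0; a minimal sketch:

trace = TracePoint.new(:call, :return) do |tp|
  printf "%8s %s:%-2d %10s %8s\n", tp.event, tp.path, tp.lineno, tp.method_id, tp.defined_class
end
trace.enable   # call trace.disable to stop tracing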
Perftools might give you what you're looking for. It analyzes the entire process and can give you a graphical view of the calls. rack-perftools_profiler is a rubygem that uses perftools and makes it easy to integrate with a Rails application, so I would recommend going with that if you want to try it.
Firstly, a stack trace IS every method call that was on the stack at the time an error occurred; what other history could you want besides this?
Secondly, to answer your question: no, there is no easy way to log all method calls. You could raise your log level all the way to debug, which should give you more output in the logs, but that will only include what someone has actually chosen to log, which is unrelated to method calls.
It probably wouldn't be that difficult to patch Ruby so that every method call prints some log statements before and after the method execution, but this would once again be similar to what a stack trace gives you anyway, and potentially less, since you wouldn't get line numbers etc.
If you want more info than the stack trace, logging is the way most people would do it.
In a Rails application I have a Test::Unit functional test that's failing, but the output on the console isn't telling me much.
How can I view the request, the response, the flash, the session, the variables set, and so on?
Is there something like...
rake test specific_test_file --verbose
You can add puts statements to your test case as suggested, or add calls to Rails.logger.debug() to your application code and watch log/test.log to trace through what's happening.
In your test you have access to a bunch of resources you can use to debug it:
p @request
p @response
p @controller
p flash
p cookies
p session
Also, remember that your action should be as simple as possible, and every specific piece of logic it uses should be covered by its own unit test.
Functional tests should be reserved for the overall action execution.
What does this mean in practice? If something doesn't work in your action, and your action calls 3 model methods, you should be able to easily isolate the problem just by looking at the unit tests. If one (or more) unit test fails, then you know which method is the guilty one.
If all the unit tests pass, then the problem is the action itself, but it should be quite easy to debug since you have already tested the methods separately.
In the failing test, use p @request etc. It's ugly, but it can work.
An answer to a separate question suggested
rake test TESTOPTS=-v
The slick way is to use the pry and pry-nav gems. Be sure to include them in your test gem group; I use them in the development group as well. The great thing about pry and pry-nav is that you can step through your code with a console, so you can not only see the code as it's executed but also enter console commands during the test.
You just add binding.pry at the places in the code where you want to trigger the console. Then, using the 'step' command, you can move line by line through the code as it's executed.
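A minimal sketch (the group layout and test content are illustrative):

# Gemfile
group :development, :test do
  gem 'pry'
  gem 'pry-nav'
end

# In the code or test you want to inspect:
def test_order_total
  order = Order.new   # hypothetical model under test
  binding.pry         # execution stops here with an interactive console
  assert_equal 0, order.total
end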