Silence deprecation warnings in RSpec/Rails

Is it deliberate? I mean the output message says something along the lines of
If you need more of the backtrace for any of these deprecations you can
configure `config.raise_errors_for_deprecations!`, and it will turn the
deprecation warnings into errors, giving you the full backtrace.
Given the thousands of views these questions have been getting, I'm surprised nobody has contemplated a potentially more realistic prospect:
If you would prefer not to receive these friendly warnings configure
`Config.no_more_warnings_please_thanks_all_the_same`
I've tried
To disable warnings when running rake test add $VERBOSE=nil into your spec/spec_helper.rb
And
ActiveSupport::Deprecation.behavior = :silence
and
ActiveSupport::Deprecation::DEFAULT_BEHAVIORS[:silence] = Proc.new {|message, callstack| }
I even put it before the
Bundler.require(*Rails.groups)
Numerous expressions of concern appear that some may not understand the gravity of the situation. i.e.
"It's not good to ignore warnings. You should be reading all of them......"
Any ideas? Surely someone has worked it out. Someone from the RSpec community, possibly?
I would actually like to suppress all warnings. Not that I would be ignoring them, of course. But if I'm just making some simple, reluctant edit to a paying client's legacy product, I don't think it is fair to make them spend time and money on dealing with warnings.
I'm not very experienced at programming as is evident from my reputation points so please sympathise with my frustration.

--Use with caution--
First off, generic warning about removing warnings.
Secondly, this is what I did. In the RSpec.configure block, add the following lines of code.
config.before(:all, silent: true) do
  @with_warnings = $VERBOSE
  $VERBOSE = nil
end
config.after(:all, silent: true) do
  $VERBOSE = @with_warnings
end
I put this in the rails_helper. On the describe blocks that I don't want returning warnings, I tag them with silent: true.
describe "Is foo bar?", silent: true do
  # testing all the things
end
That will suppress EVERYTHING, including warnings about constants and undefined variables. The results of the test will still be intact and will let you know that foo, in fact, does not equal bar.
--Use with caution--

What is "with_after_commit" and why can't I find anything about it?

I'm helping out on a test suite that came from Rails 4 (which I am not so familiar with), and some tests had "with_after_commit: true" in their declaration.
After a bit of tinkering I removed it and the test suite ran a little bit faster.
It was used in rails_helper.rb as well, much like in this question: Why after_commit not running even with use_transactional_fixtures = false
The thing is: I can't find any information about it that would justify its use. I only find references to it in the thread above.
Thanks!
What you're looking for is the ActiveRecord callback called after_commit, not with_after_commit (this is just the name of your test case).
It's documented here and you can find plenty of resources if you just Google it.
It's also good to take a look at the Active Record Callbacks Guide.
I know this is old, but I had the same question, so I'm sharing what I figured out for posterity.
What you're seeing is an RSpec filter, which is basically a hook that can be selectively invoked.
So, in the question you cited, the example shows the following filter:
config.before(:each, :with_after_commit => true) do
  DatabaseCleaner.strategy = :truncation
end
Then the spec is defined as:
describe "", :with_after_commit => true do
  # ...
end
So, the :with_after_commit => true metadata is what allows the before(:each) hook to fire.

Treat warnings as failures

I'm trying to clean up my test suite, and to make this easier I'd like RSpec to raise an error whenever it encounters a warning. Rather than pass the test and carry on, I'd like the test to fail. Is there a way I can configure RSpec to do this?
Are you talking about deprecation warnings? Or warnings in general?
I know you can raise errors when you hit deprecation warnings by setting config.active_support.deprecation = :raise in your test.rb
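In a recent Rails app that setting lives in the test environment file; a sketch, assuming the standard layout (config fragment, not runnable on its own):

```ruby
# config/environments/test.rb (sketch) — turn ActiveSupport deprecation
# warnings into exceptions so the offending test fails with a backtrace.
Rails.application.configure do
  config.active_support.deprecation = :raise
end
```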
Per the RubyDoc for RSpec's RSpec::Matchers#output, you should be able to fail the test by using expect { method_call }.to_not output.to_stderr (the output matcher takes a block, and warnings are written to standard error). I'm no expert in using RSpec, but in theory that should work. RSpec-Puppet has very similar functionality built in that fails on warnings by using Puppet.expects(:warning).never.
I'm not sure this will work everywhere, since there is no contract that warnings have to be issued through Logger#warn, but this might do a decent job.
module FailOnWarn
  def warn(*args)
    super
    raise(RuntimeError, "failing upon warn") if Rails.env.test?
  end
end
# Prepending (rather than reopening Logger) keeps the original #warn
# reachable via super.
Logger.prepend(FailOnWarn)

log = Logger.new($stdout)
log.warn("foo") # raises in the test environment
RSpec gives us a simple way to do this globally:
config.before do
  expect_any_instance_of(Logger).to_not receive(:warn)
end
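For plain Ruby warnings (as opposed to Logger calls), Ruby 2.4+ exposes a Warning.warn hook that can be overridden. A minimal sketch that turns any interpreter warning into an exception (the message wording is made up):

```ruby
# Override the Warning.warn hook (Ruby >= 2.4) so any interpreter
# warning raises instead of printing to standard error.
def Warning.warn(message)
  raise RuntimeError, "warning treated as error: #{message}"
end

begin
  Object.const_set(:BAR, 1)
  Object.const_set(:BAR, 2) # "already initialized constant" becomes an exception
rescue RuntimeError => e
  puts e.message
end
```

Note this is process-wide, so in a test suite you would likely want it only in the test helper.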

puts statements for debug

When I code, I make quite intense use of "puts" statements for debugging. It allows me to see what happens in the server.
When the code is debugged, I usually remove these "puts" statements, though I couldn't say exactly why.
Is that a good idea, or should I leave them in to give more clarity to my server logs?
You should use the logger instead of puts. Use this kind of statements:
Rails.logger.debug "DEBUG: #{self.inspect} #{caller(0).first}" if Rails.logger.debug?
If you want to see the debugging in the real-time (almost), just use the tail command in another terminal window:
tail -F log/development.log | grep DEBUG
Then you do not need to remove these statements in production, and they will not degrade performance much, because the if Rails.logger.debug? guard prevents the (possibly expensive) construction of the message string when debug logging is disabled.
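The same laziness is built into Ruby's standard Logger: the block form only evaluates the message when the level is enabled. A small standalone sketch (variable names are illustrative):

```ruby
require 'logger'

logger = Logger.new($stdout)
logger.level = Logger::INFO

expensive_calls = 0
build_message = lambda do
  expensive_calls += 1
  "DEBUG: something expensive"
end

# Block form: the block is never called, because DEBUG < INFO.
logger.debug { build_message.call }

puts expensive_calls # => 0
```

With the string form, logger.debug("..." + expensive_string), the message would be built regardless of the level.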
Using standard output for debugging is usually a bad practice. In cases you need such debug, use the diagnostic STDERR, as in:
STDERR.puts "DEBUG: xyzzy"
Most of the classes in Rails (models, controllers and views) have the method logger, so if possible use it instead of the full Rails.logger.
If you are using older versions of Rails, use the constant RAILS_DEFAULT_LOGGER instead of Rails.logger.
use the logger : http://guides.rubyonrails.org/debugging_rails_applications.html#the-logger
I use the rails_dt gem, designed specifically to make this kind of debugging easier.
Using Rails.logger or puts directly is somewhat cumbersome, since it requires you to put a lot of decorative stuff (DEBUG, *** etc.) around debug messages to make them different from regular, useful messages.
Also, it's often difficult to find and defuse the debug output generated by Rails.logger or puts if the message doesn't appear to contain enough searchable characters.
rails_dt prints the origin (file, line), so finding the position in code is easy. Also, you will never confuse DT.p with anything, it clearly does debug output and nothing else.
Example:
DT.p "Hello, world!"
# Sent to console, Rails log, dedicated log and Web page, if configured.
[DT app/controllers/root_controller.rb:3] Hello, world!
Gem is available here.

Rails: "Stack level too deep" error when calling "id" primary key method

This is a repost on another issue, better isolated this time.
In my environment.rb file I changed this line:
config.time_zone = 'UTC'
to this line:
config.active_record.default_timezone = :utc
Ever since, this call:
Category.find(1).subcategories.map(&:id)
Fails on "Stack level too deep" error after the second time it is run in the development environment when config.cache_classes = false. If config.cache_classes = true, the problem does not occur.
The error is a result of the following code in active_record/attribute_methods.rb around line 252:
def method_missing(method_id, *args, &block)
  ...
  if self.class.primary_key.to_s == method_name
    id
  ...
The call to the "id" method re-invokes method_missing, and nothing prevents id from being called over and over again, resulting in stack level too deep.
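This failure mode is easy to reproduce outside Rails. A minimal sketch (the class is made up): method_missing handles :id by calling id, but id is itself undefined, so the call dispatches straight back into method_missing.

```ruby
# Minimal reproduction of the recursion described above.
class BrokenRecord
  def method_missing(name, *args, &block)
    if name == :id
      id # `id` is not defined, so this re-enters method_missing
    else
      super
    end
  end
end

begin
  BrokenRecord.new.id
rescue SystemStackError
  puts "stack level too deep"
end
```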
I'm using Rails 2.3.8.
The Category model has_many :subcategories.
The call fails on variants of that line above (e.g. Category.first.subcategory_ids, use of "each" instead of "map", etc.).
Any thoughts will be highly appreciated.
Thanks!
Amit
Even though this is solved, I just wanted to chime in and report how I fixed this issue. I had the same symptoms as the OP: the initial .id() call worked fine, but subsequent .id() calls would throw the "stack level too deep" error. It's a weird error, as it generally means you have an infinite loop somewhere. I fixed this by changing:
config.action_controller.perform_caching = true
config.cache_classes = false
to
config.action_controller.perform_caching = true
config.cache_classes = true
in environments/production.rb.
UPDATE: The root cause of this issue turned out to be the cache_store. The default MemoryStore will not preserve ActiveRecord models. This is a pretty old and fairly severe bug; I'm not sure why it hasn't been fixed. Anyway, the workaround is to use a different cache_store. Try using this in your config/environments/development.rb:
config.cache_store = :file_store
UPDATE #2: C. Bedard posted this analysis of the issue. Seems to sum it up nicely.
Having encountered this problem myself (and being stuck on it repeatedly), I have investigated the error (and hopefully found a good fix). Here's what I know about it:
It happens when ActiveRecord::Base#reset_subclasses is called by the dispatcher between requests (in dev mode only).
ActiveRecord::Base#reset_subclasses wipes out the inheritable_attributes Hash (where #skip_time_zone_conversion_for_attributes is stored).
It will not only happen on objects persisted through requests, as the "monkey test app" from #1290 shows, but also when trying to access generated association methods on AR, even for objects that live only on the current request.
This bug was introduced by this commit where the #skip_time_zone_conversion_for_attributes declaration was changed from base.cattr_accessor to base.class_inheritable_accessor. But then again, that same commit also fixed something else.
The patch initially submitted here that simply avoids clearing the instance_variables and instance_methods in reset_subclasses does introduce massive leaking, and the amounts leaked seem directly proportional to complexity of the app (i.e. number of models, associations and attributes on each of them). I have a pretty complex app which leaks nearly 1Mb on each request in dev mode when the patch is applied. So it's not viable (for me anyways).
While trying out different ways to solve this, I have corrected the initial error (skip_time_zone_conversion_for_attributes being nil on 2nd request), but it uncovered another error (which just didn't happen because the first exception would be raised before getting to it). That error seems to be the one reported in #774 (Stack overflow in method_missing for the 'id' method).
Now, for the solution, my patch (attached) does the following:
It adds wrapper methods for the #skip_time_zone_conversion_for_attributes methods, making sure it always reads/writes the value as a class_inheritable_attribute. This way, nil is never returned anymore.
It ensures that the 'id' method is not wiped out when reset_subclasses is called. AR is kinda strange on that one: it first defines id directly in the source, but redefines it with #define_read_method when it is first called. And that is precisely what makes it fail after reloading (since reset_subclasses then wipes it out).
I also added a test in reload_models_test.rb, which calls reset_subclasses to try and simulate reloading between requests in dev mode. What I cannot tell at this point is if it really triggers the reloading mechanism as it does on a live dispatcher request cycle. I also tested from script/server and the error was gone.
Sorry for the long paste; it sucks that the Rails Lighthouse project is private, which is why the patch mentioned above is not publicly accessible.
-- This answer is copied from my original post here.
Finally solved!
After posting a third question and with help of trptcolin, I could confirm a working solution.
The problem: I was using require to include models from within table-less models (classes that are in app/models but do not extend ActiveRecord::Base). For example, I had a class FilterCategory that performed require 'category'. This messed with Rails' class caching.
I had to use require in the first place since lines such as Category.find :all failed.
The solution (credit goes to trptcolin): replace Category.find :all with ::Category.find :all. This works without the need to explicitly require any model, and therefore doesn't cause any class caching problems.
The "stack too deep" problem also goes away when using config.active_record.default_timezone = :utc

Trace source of deprecation warnings in rails tests

When running my functional tests, I'm getting the following warning in one of the test cases but I can't pinpoint where it's coming from:
gems/actionpack-2.3.8/lib/action_controller/record_identifier.rb:76: warning: Object#id will be deprecated; use Object#object_id
Unfortunately that's the only line of the backtrace that's shown, even if I run it with rake test --trace, and there is no more information in log/test.log.
How can I get the full backtrace for this warning or otherwise figure out which line in my code is causing it?
To solve this, you can enable full debugging information (see the help):
ActiveSupport::Deprecation.debug = true
As @Eric Anderson says, it should be placed after Rails loads (i.e. after require 'rails/all' in application.rb) but before Bundler runs, to catch deprecation warnings in gems (i.e. before Bundler.require(:default, Rails.env) if defined?(Bundler) in application.rb).
You can add a condition, like if ENV["DEBUG"] or if environment == :test to leave this in your config.
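Putting that placement advice together, a sketch of application.rb (the surrounding lines are whatever your app already has; the line ordering is the point):

```ruby
# config/application.rb (sketch)
require 'rails/all'

# After Rails loads, before Bundler requires the gems, so deprecations
# raised inside gems get full backtraces too:
ActiveSupport::Deprecation.debug = true if ENV["DEBUG"]

Bundler.require(:default, Rails.env) if defined?(Bundler)
```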
Had the same issue. A gem was causing a deprecation warning but I had no idea which gem since Rail's message only shows the last bit of the callstack in my code. Add the following:
module ActiveSupport::Deprecation
  class << self
    def deprecation_message_with_debugger(callstack, message = nil)
      debugger
      deprecation_message_without_debugger(callstack, message)
    end
    alias_method_chain :deprecation_message, :debugger
  end
end
Place this after Rails loads (i.e. after require 'rails/all' in application.rb) but before Bundler runs, to catch deprecation warnings in gems (i.e. before Bundler.require(:default, Rails.env) if defined?(Bundler) in application.rb).
Now when a deprecation warning is encountered you are dropped in the debugger. You can either leave this in (and surround with a if Rails.env.test?) or remove it when you have found your issues.
When I get this kind of warning in my tests, it is usually because I am using mock model objects and am not providing all the methods that Active Record provides for real.
A good starting point would be the rails code itself. Looking at the source code for the action_pack gem which is referenced, the method that is causing the error is dom_id. That method generates an id for an object for use on a page. It seems to be called in a couple of places internally (unless you are calling it directly of course!) but the most likely cause appears to be calling form_for on an object.
