I generated a Rails 3.2 migration with an empty down method, because the migration is irreversible (and I don't want to throw an exception). I ran the migration successfully, but it had no effect. When I roll back and run db:migrate again, the effects do apply.
I solved this easily by filling the empty down method with code that does nothing, but it's still pretty ugly.
Does anyone know why this happens? Is this a Rails bug?
The exception is thrown to prevent destroying your database; if the migration is irreversible, then that's probably the right thing to do.
Your #down could look like this:
def down
  raise ActiveRecord::IrreversibleMigration, "Explain why it's irreversible!"
end
That will save others a lot of headache, as it clearly flags the migration as irreversible and explains the reason behind it :)
EDIT: I cannot confirm this behaviour for Rails 3.2.3. I've created several different migrations without #down, and no exception was raised. Maybe it's caused by something in your code that you haven't shown.
EDIT 2: Just to recap: when you're using the up/down methods, it's your responsibility to raise ActiveRecord::IrreversibleMigration. Otherwise nothing will happen (the #down defined in ActiveRecord will simply return nil). The behaviour is different when you use #change: in some cases the mentioned exception is raised by #inverse, defined here: https://github.com/rails/rails/blob/565bfb9cd49285ebaa170141b4996c22ba81de43/activerecord/lib/active_record/migration/command_recorder.rb#L39, which is expected behaviour.
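To illustrate the difference, here is a minimal sketch (the migration and table names are made up):

class RemoveLegacyData < ActiveRecord::Migration
  def up
    execute "DELETE FROM legacy_rows"
  end

  def down
    # With up/down, nothing is raised for you; an empty #down rolls back
    # silently, so raise explicitly if the migration cannot be undone.
    raise ActiveRecord::IrreversibleMigration, "Deleted rows cannot be restored"
  end
end

class RemoveLegacyDataViaChange < ActiveRecord::Migration
  def change
    # With #change, the CommandRecorder cannot invert raw SQL, so rolling
    # this back raises ActiveRecord::IrreversibleMigration automatically.
    execute "DELETE FROM legacy_rows"
  end
end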
I have been chasing an issue for a while now and still cannot figure out what's happening. I am unable to edit documents created by my gem through normal persistence methods, like update, or even just editing attributes and calling save.
For example, calling:
Scram::Policy.where(id: a.id).first.update!(priority: 12345)
Will not work at all (there are no errors, but the document has not updated). But the following will work fine:
Scram::Policy.collection.find( { "_id" => a.id } ).update_one( { "$set" => {"priority" => 12345}})
I am not sure what I'm doing wrong. Calling update and save on any other model works fine. The document in question is from my gem: https://github.com/skreem/scram/blob/master/lib/scram/app/models/policy.rb
I cannot edit its embedded documents either (targets). I have tried removing the store_in macro and specifying exactly which class to use via inverse_of and class_name in a fake app that reimplements these classes: https://github.com/skreem/scram-implementation/blob/master/lib/scram/lib/scram/app/models/policy.rb
I've tried reimplementing the entire gem into a clean fake rails application: https://github.com/skreem/scram-implementation
Running these in rails console demonstrates how updating does not work:
https://gist.github.com/skreem/c70f9ddcc269e78015dd31c92917fafa
Is this an issue with mongoid concerning embedded documents, or is there some small intricacy I am missing in my code?
EDIT:
The issue continues if you run irb from the root of my gem (scram) and then run the following:
require "scram.rb"
Mongoid.load!('./spec/config/mongoid.yml', :test)
Scram::Policy.first.update!(priority: 32) #=> doesn't update the document at all
Scram::Policy.where(id: "58af256f366a3536f0d54a61").update(priority: 322) #=> works just fine
Oddly enough, the following doesn't work:
Scram::Policy.where(id: "58af256f366a3536f0d54a61").first.update(priority: 322)
It seems like first isn't retrieving what I want. Doing an equality comparison shows that the first document is equal to the first returned by the where query.
Well, as it turns out, you cannot name a field collection_name, or else Mongoid will ensure bad things happen to you. Just renaming the field solved all my issues. Here's the code within Mongoid that was responsible for the collision: https://github.com/mongodb/mongoid/blob/master/lib/mongoid/persistence_context.rb#L82
Here's the commit within my gem that fixed my issue: https://github.com/skreem/scram/commit/25995e955c235b24ac86d389dca59996fc60d822
Edit:
Make sure to update your Mongoid version if you have dealt with this issue and did not get any warnings! After I created an issue on the Mongoid issue tracker, PersistenceContext was added to a list of prohibited methods. Now, attempting to use collection_name or collection as a field name will cause Mongoid to emit a couple of warnings.
Fix commit: https://github.com/mongodb/mongoid/commit/6831518193321d2cb1642512432a19ec91f4b56d
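For illustration, a minimal sketch of the clash (the field names here are made up, not the gem's actual schema):

class Policy
  include Mongoid::Document

  # field :collection_name, type: String  # BAD: shadows the collection_name
  #                                        # method that Mongoid's
  #                                        # PersistenceContext consults, so
  #                                        # writes silently go astray
  field :collection_hint, type: String    # a renamed field avoids the clash
end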
I have successfully added exception_notifier to my Rails app, and it emails a notification for all exceptions at the application level (which is exactly what I want). The only problem is that I also need a few short lines of code to run whenever an exception is raised, and I am certain there is a way to hook these lines into the notifier. But despite reading the documentation found here: https://github.com/smartinez87/exception_notification I am still unclear about what to put where. Can someone please explain this a bit better for me? I am relatively new to Ruby on Rails. I just need some additional code to run whenever the notifier is alerted to an exception.
This seems to be the correct link for background notifications: https://github.com/smartinez87/exception_notification#background-notifications
Using a begin/rescue block should do the trick.
def my_method
  sentence1        # code that always runs
  begin
    sentence2      # code that might raise an exception
    sentence3
  rescue => e
    ExceptionNotifier.notify_exception(e)
    extra_code     # your additional lines go here
    return action  # whatever the method should return after a failure
  end
end
As explained here:
https://github.com/smartinez87/exception_notification#background-notifications
So as a shorthand: to run code before or after the notification is sent, but outside of the notification method, you need to catch the exception in the method that raised it and do your work there.
But remember: Catching exceptions should be an exception. If you know what could go wrong, try to fix it, not to catch it.
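If you want this behaviour for every controller action rather than one method, a rescue_from handler is another option. A minimal sketch, with a hypothetical MyAuditLog standing in for your extra lines of code:

class ApplicationController < ActionController::Base
  rescue_from StandardError do |exception|
    MyAuditLog.record(exception)  # hypothetical: your additional code
    ExceptionNotifier.notify_exception(exception, env: request.env)
    raise exception               # re-raise so normal error handling still runs
  end
end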
I am getting a couple of different errors at a particular line of code in one of my models when running in Sidekiq-queued jobs. The code in question is:
#lookup_fields[:asin] ||= self.book_lookups.find_by_name("asin").try(:value)
I either get undefined method 'scope' for #<ActiveRecord::Associations::AssociationScope:0x00000005f20cb0> or undefined method 'aliased_table_for' for #<ActiveRecord::Associations::AliasTracker:0x00000005bc3f90>.
At another line of code in another Sidekiq job, I get the error undefined method 'decrypt_and_verify' for #<ActiveSupport::MessageEncryptor:0x00000007143208>.
None of these errors makes sense, as these are standard methods of the Rails runtime support libraries.
The model in question has a :has_many association defined for the "book_lookups" model, "name" and "value" are fields in the "book_lookups" model. This always happens on the first 1-3 records processed. If I run the same code outside of a Sidekiq job, these errors do not occur.
I cannot reproduce the error on my development machine, only on production which is hosted at Heroku.
I may have "solved" the first set of errors by putting the code BookLookup.new in an initializer, forcing the model to load before Sidekiq creates any threads. I have only one night's results to go on, so we'll have to see if the trend continues...
Even if this solves the immediate problem, I don't think it solves the real underlying issue, which is classes not getting fully loaded before being used. Is class-loading an atomic operation? Is it possible for one thread to start loading a class and another to start using the class before it is fully loaded?
I believe that I have discovered the answer: config.threadsafe!, which I had not done. I have now done that and most if not all of the errors have disappeared. References: http://guides.rubyonrails.org/configuring.html, http://m.onkey.org/thread-safety-for-your-rails (especially the section "Ruby's require is not atomic").
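A sketch of the two fixes discussed above (the application name is made up; the paths are the Rails conventions):

# config/initializers/preload_models.rb -- the workaround: referencing the
# constant forces the class to load before Sidekiq spawns any threads
BookLookup

# config/environments/production.rb -- the more robust fix on Rails 3.x
MyApp::Application.configure do
  config.threadsafe!  # eager-loads the app and stops class unloading between requests
end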
Just learning Rails; I am on to migrations, and it all started off pretty logically until I hit something odd going on in the code:
rails generate migration AddRegionToSupplier
The above produces a migration file with only a "def change" method in it.
I googled this and found that this is exactly what is supposed to happen:
http://guides.rubyonrails.org/migrations.html
I would have expected it to generate a "def up" and "def down" method, so that the migration could be rolled back. Have I done something wrong in the generation or am I missing something obvious?
From the link you pasted:
Rails 3.1 makes migrations smarter by providing a new change method. This method is preferred for writing constructive migrations (adding columns or tables). The migration knows how to migrate your database and reverse it when the migration is rolled back without the need to write a separate down method.
So it looks like you don't have to worry about having a def self.down as Rails is now smart enough to know how to roll it back.
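For example, once filled in, the generated migration might look like this (the column name and type are assumed):

class AddRegionToSupplier < ActiveRecord::Migration
  def change
    # add_column is a reversible operation, so rake db:rollback knows to
    # remove the column again; no separate down method is needed
    add_column :suppliers, :region, :string
  end
end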
This is a repost on another issue, better isolated this time.
In my environment.rb file I changed this line:
config.time_zone = 'UTC'
to this line:
config.active_record.default_timezone = :utc
Ever since, this call:
Category.find(1).subcategories.map(&:id)
Fails with a "stack level too deep" error starting from the second time it is run in the development environment when config.cache_classes = false. If config.cache_classes = true, the problem does not occur.
The error is a result of the following code in active_record/attribute_methods.rb around line 252:
def method_missing(method_id, *args, &block)
  ...
  if self.class.primary_key.to_s == method_name
    id
  ...
The call to the id method re-enters method_missing, and nothing prevents id from being called over and over again, resulting in the stack level too deep error.
I'm using Rails 2.3.8.
The Category model has_many :subcategories.
The call fails on variants of that line above (e.g. Category.first.subcategory_ids, use of "each" instead of "map", etc.).
Any thoughts will be highly appreciated.
Thanks!
Amit
Even though this is solved, I just wanted to chime in and report how I fixed this issue. I had the same symptoms as the OP: the initial request to .id() worked fine, but subsequent requests to .id() would throw the "stack too deep" error. It's a weird error, as it generally means you have an infinite loop somewhere. I fixed this by changing:
config.action_controller.perform_caching = true
config.cache_classes = false
to
config.action_controller.perform_caching = true
config.cache_classes = true
in environments/production.rb.
UPDATE: The root cause of this issue turned out to be the cache_store. The default MemoryStore will not preserve ActiveRecord models. This is a pretty old and fairly severe bug, and I'm not sure why it hasn't been fixed. Anyway, the workaround is to use a different cache_store. Try using this in your config/environments/development.rb:
config.cache_store = :file_store
UPDATE #2: C. Bedard posted this analysis of the issue. Seems to sum it up nicely.
Having encountered this problem myself (and being stuck on it repeatedly), I have investigated the error (and hopefully found a good fix). Here's what I know about it:
It happens when ActiveRecord::Base#reset_subclasses is called by the dispatcher between requests (in dev mode only).
ActiveRecord::Base#reset_subclasses wipes out the inheritable_attributes Hash (where #skip_time_zone_conversion_for_attributes is stored).
It will not only happen on objects persisted across requests, as the "monkey test app" from #1290 shows, but also when trying to access generated association methods on AR, even for objects that live only in the current request.
This bug was introduced by this commit where the #skip_time_zone_conversion_for_attributes declaration was changed from base.cattr_accessor to base.class_inheritable_accessor. But then again, that same commit also fixed something else.
The patch initially submitted here, which simply avoids clearing the instance_variables and instance_methods in reset_subclasses, does introduce massive leaking, and the amount leaked seems directly proportional to the complexity of the app (i.e. the number of models, associations and attributes on each of them). I have a pretty complex app which leaks nearly 1MB on each request in dev mode when the patch is applied, so it's not viable (for me anyway).
While trying out different ways to solve this, I corrected the initial error (skip_time_zone_conversion_for_attributes being nil on the 2nd request), but that uncovered another error (which previously didn't surface because the first exception was raised before reaching it). That error seems to be the one reported in #774 (stack overflow in method_missing for the 'id' method).
Now, for the solution, my patch (attached) does the following:
It adds wrapper methods for #skip_time_zone_conversion_for_attributes, making sure the value is always read/written as a class_inheritable_attribute. This way, nil is never returned anymore.
It ensures that the 'id' method is not wiped out when reset_subclasses is called. AR is kind of strange on that one, because it first defines the method directly in the source, but then redefines it with #define_read_method when it is first called. And that is precisely what makes it fail after reloading (since reset_subclasses then wipes it out).
I also added a test in reload_models_test.rb, which calls reset_subclasses to try and simulate reloading between requests in dev mode. What I cannot tell at this point is if it really triggers the reloading mechanism as it does on a live dispatcher request cycle. I also tested from script/server and the error was gone.
Sorry for the long paste; it sucks that the Rails Lighthouse project is private. The patch mentioned above is private too.
-- This answer is copied from my original post here.
Finally solved!
After posting a third question and with the help of trptcolin, I was able to confirm a working solution.
The problem: I was using require to include models from within table-less models (classes that live in app/models but do not extend ActiveRecord::Base). For example, I had a class FilterCategory that performed require 'category'. This messed up Rails' class caching.
I had to use require in the first place since lines such as Category.find :all failed.
The solution (credit goes to trptcolin): replace Category.find :all with ::Category.find :all. This works without the need to explicitly require any model, and therefore doesn't cause any class caching problems.
The "stack too deep" problem also goes away when using config.active_record.default_timezone = :utc