Rails ActiveRecord callbacks - ruby-on-rails

I'm having an issue with a date format. I have a time picker that has the date in a funky format (well, it's a nice format, actually, but not to the computer). I'm trying to have Chronic parse the date so that it can be saved properly.
At first, I was doing, in the create action of my controller:
params[:event][:start] = Chronic.parse(params[:event][:start])
but if and when validation fails, the parsed value is sent back to the view, and then my datetimepicker is all botched.
So, I thought... callback? In my model, I added:
private

def date_change
  self.start = Chronic.parse(self.start)
end
I tried before_save, before_validation, after_validation... but nothing seems to get that date formatted correctly.
As it stands, I keep getting ArgumentError in EventsController#create - Argument out of range. I assume that's because the database is expecting a properly formatted datetime object.
Any idea on how I can accomplish my goal, here, of not changing the params, but still being able to save a properly formatted object?

I'm guessing that the problem is occurring in the start= mutator method that ActiveRecord supplies. If you're doing things like this in your controller:
@event.update_attributes(params[:event])
@event = Event.create(params[:event])
# ...
then create and update_attributes should call start= internally. That should allow you to put the Chronic stuff in your own start=:
def start=(t)
  super(Chronic.parse(t))
end
You might need to adjust that for non-String values of t; I'm not sure what Chronic.parse(Time.now), for example, would do. You could also call write_attribute(:start, Chronic.parse(t)) or self[:start] = Chronic.parse(t) if you didn't want to punt to super.
Note that before_validation and similar handlers will be called too late to bypass whatever default string-to-timestamp conversion ActiveRecord is doing, but a mutator override should happen at the right time.
Alternatively, you could parse the time in the controller with something like this:
event = params[:event].dup
event[:start] = Chronic.parse(event[:start])
@event = Event.create(event)

Assumption is the mother of all mess ups :)
Are you sure the callback is hit? Because if it were, and the error still occurred (as it did), wouldn't it still send the parsed (and therefore incorrect) data back to the view? When in doubt: log something to make sure it is hit.
Are you sure which field causes the Argument out of range error?
In most cases bugs are hard to find and fix because we assume we know the cause, when we are actually looking at the error in the wrong way.
Easy ways to test which attribute causes the error:
Open a rails console, build an object with the parameters, save it, and ask for the errors. Something like
e = Event.new(params[:event]) # copy params verbatim from your logfile
e.save
e.errors
and that will display which field causes the error.
Alternatively: use pry and add a binding.pry line just after the save, so you can inspect the errors.
Answer (assuming your assumption was correct)
I see two options to do what you want:
Use the after_validation callback, if you are sure the data will always be correct and correctly parsed by Chronic. That way, once validation has passed, the field is converted, normally nothing more can go wrong, and the value is never sent back to the browser.
Note: if some other attribute is causing the error, this callback is never hit, of course, because validation does not pass.
Use a virtual attribute, e.g. start_str, which is the visual representation of your start, and convert it to start in a before_save. It does not really matter that much here, because if validation fails, you just show start_str and not the "real" start field.
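A rough sketch of that second option (the names here are illustrative and assume Chronic is available to the model):

class Event < ActiveRecord::Base
  # Virtual attribute the form binds to; the real start column is never
  # rendered back to the browser, so a failed validation can't botch the picker.
  attr_accessor :start_str

  before_save :parse_start_str

  private

  def parse_start_str
    self.start = Chronic.parse(start_str) if start_str.present?
  end
end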

Related

Not showing data with react.rb

I'm just trying to use ReactRB with reactive-record.
So I think the problem is in the render part. When I set param :user, type: User in my React component class, I can't see any data in my table. Of course the User model is in the public folder, as ReactRB requires.
Well, in the console I see that the server is fetching nothing, but the right data is returned.
What am I missing? Thanks for the help!
The key to the answer is in this screenshot.
The details are that the data comes back from the server as a JSON blob.
reactive-record decodes it, but counts on the fact that if you try to JSON-parse a simple string, it raises an error.
Opal 0.10 no longer raises a standard error, so the whole thing just hangs up.
Just thinking about this... there is a known problem in Opal (https://github.com/opal/opal/issues/1545) and it causes a problem in reactive-record. Please make sure that you are not using Opal 0.10.
One thing to keep in mind is that reactive-record lazy loads records and attributes. So unless someplace in your render you access a particular record/attribute, that attribute will not show up on the client.
It's hard to tell more without a bit more of your code posted, but here is some help.
Let's say your component looks like this:
class Foo < React::Component::Base
  param :user, type: User

  def render
    "user.name = #{user.name}"
  end
end
and someplace either in a controller or in a layout you do this:
render_component '::Foo', {user: User.first}
You might try something very simple like this, just to get familiar with how things work.
What happens should be this: you render your view, and a placeholder for the first User is sent to the component. During rendering the component looks for that user's name attribute, which it does not have yet, so a fetch from the server is queued up. Rendering completes, and eventually the data comes down from the server, the local model's data is updated, and components displaying that data are rerendered.
During prerendering all of the above happens internally on the server, and when the component has been rendered the final html is delivered along with all the model data that was used in rendering the component. So on first load, if all is working, you will not see any fetches from the server.
So if you try out the above small example, and then go into your javascript console you can say things like this:
Opal.User.$first()
and you will see the underlying model data structure returned (the above is the JavaScript form of the Ruby calls... Ruby methods all start with $).
You can then do this:
Opal.User.$first().$name()
And you can even do this (assuming there are at least 2 user models):
Opal.User.$find(2).$name()
You should have something like "DummyValue" returned, but then there will be a server fetch cycle in the console, and if you repeat the above command you will get back the actual value!
This may not be the best forum for more details; if you need more help, drop by https://gitter.im/reactrb/chat

rails params validation in controller

Is there a best practice to validate params in a controller?
@user = User.find_by_id(params[:id])
If I tamper with the param to give it an invalid :id param, say by visiting "/users/test", I can generate the following error:
Conversion failed when converting the nvarchar value 'test' to data type int.
I am thinking right now of params that won't go straight to a model and can be validated by model validations.
Yes, you should always validate your parameters. People can always mess around with the parameters in their web browser's address bar, or modify parameters stored in the DOM. Another example where parameters can be screwed up is if the webpage is left open a long time. Imagine someone is viewing the page "/users/3/edit", leaves it open for an hour, then hits refresh. In the meantime that user may have been deleted. You don't want your website to crash - it should handle that gracefully.
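For instance, a minimal sketch of handling a record that has since disappeared (the action name and redirect target are illustrative):

def edit
  @user = User.find_by_id(params[:id])
  if @user.nil?
    # The record is gone; fail gracefully instead of raising.
    redirect_to users_path, alert: "That user no longer exists."
  end
end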
Depending on your database and adapter, doing User.find_by_id("test") may or may not crash; in your case, the database/adapter was not able to convert the string into an integer. One thing you can do in this particular case is use Ruby's .to_i method.
User.find_by_id(params[:id].to_i)
If params[:id] = "12", Ruby will convert that to the integer 12 and the code will run fine. If params[:id] = "test", Ruby will convert that to the integer 0, and you should never have a database record with an ID of 0.
You can also use regular expressions to test if a string is an integer.
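For example, a minimal check along those lines (the action is illustrative):

def show
  # Accept only a string of digits; anything else gets a 404 before we touch the database.
  unless params[:id] =~ /\A\d+\z/
    head :not_found
    return
  end
  @user = User.find_by_id(params[:id])
end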
But in general, yes, try to always validate your parameters so you can handle errors gracefully and control the data coming in.

Delayed Job object not properly deserialized

I'm having a hard time believing what I'm seeing, but it sure looks like DJ is failing to deserialize an object properly. I look at the DJ record in mongo and I see in the YAML that the object has its text field set, but when the code runs, the text field is not set. Here is some minimal repro code:
class Board
  include Mongoid::Document

  field :text, type: String

  def process_text_field
    if not self.text
      raise "Text field is blank"
    end
    # Text field gets processed
  end
end
# in a controller
def start_doing_something_slow
  board = Board.find(params[:id])
  board.text = "Text field is set"
  board.save!
  raise "Text disappeared!" unless board.text
  board.delay.process_text_field
  render json: { :result => 'ok' }
end
I invoke the controller method with the browser, and check the DJ record directly in mongo. I see in the YAML that the Board object has the text field correctly set. But when it executes in DJ, it raises the Text field is blank exception.
Somehow it's not deserializing the object properly.
Well, this took me about a week to figure out, so I'm posting it here to help others who fall into this trap. It turns out this is a known bug in delayed_job_mongoid, and it has had a simple fix listed right there in the bug report for 10 months.
The problem arises if you use the identity map in Mongoid, which acts as an in-process caching layer in front of the database. For normal web requests the cache gets cleared between requests, so your controller methods don't use stale versions of the objects. But delayed_job_mongoid doesn't clear the cache between jobs without this patch (which I just put together): https://github.com/collectiveidea/delayed_job_mongoid/pull/38
The result is your delayed jobs are sometimes using old versions of the objects, depending on what ran before them, which creates truly bizarre, mysterious failures that are extremely difficult to track down until you understand what's happening.
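For reference, here is a rough sketch of the same idea written as a delayed_job plugin (it assumes Mongoid 3.x's Mongoid::IdentityMap and delayed_job's lifecycle plugin API; the actual pull request may differ in detail):

# e.g. in config/initializers/delayed_job.rb
class ClearIdentityMapPlugin < Delayed::Plugin
  callbacks do |lifecycle|
    lifecycle.around(:invoke_job) do |job, *args, &block|
      # Drop any documents cached by a previous job so this job reloads
      # fresh copies from the database.
      Mongoid::IdentityMap.clear
      block.call(job, *args)
    end
  end
end

Delayed::Worker.plugins << ClearIdentityMapPlugin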

Read json serialised objects back from a file

I am aiming to serialise a set of objects into a file so as to create a backup. I have the start of that working, using methods on the models (simplified here, assuming I have two ActiveRecords, foo and bar):
def backup(file, foo, bar)
  file.write(foo.to_json(root: true))
  file.write(bar.to_json(root: true))
end
This gives me a file as I desire, in this case with two records:
{"foo":{"Account_id":1,"Name":"F","created_at":"2013-04-16T10:06:19Z","id":1,"updated_at":"2013-04-20T11:36:23Z"}}
{"bar":{"Account_id":1,"Name":"B","created_at":"2013-04-16T10:06:19Z","id":1,"updated_at":"2013-04-20T11:36:23Z"}}
At a later date I then want to read that backup in and reinstantiate those objects, probably then persisting them back to the database. My aim is to iterate through the file checking the type of each object, then instantiating the right object.
I have part of the logic, but not yet all of it; I haven't worked out how to determine the type of each serialised object before I instantiate it. The code I have for a restore is as follows:
def restore(file)
  file.each_line do |line|
    # <some magic that parses my line into objectType and objectHash>
    case objectType
    when :foo
      foo = Foo.new.from_json(objectHash)
      foo.process
      foo.save!
    when :bar
      bar = Bar.new.from_json(objectHash)
      bar.process
      bar.save!
    end
  end
end
What I'm looking for is the bit that goes in the "some magic" section. I can just write the code to parse the line directly to determine whether it's a foo or a bar, but I feel like there's probably some tricky Rails/Ruby way to do this that is automatic. Unfortunately, in this case Google is not being my friend. All I can see are pages that focus on JSON in web requests, not on parsing JSON back in this way. Is there something I'm missing, or should I just write the code to split the string directly and read the object type?
If I do write the code to split the string directly, I would write something along the lines of:
objectType = line[/^{"(\w*)"=>(.*)}/, 1]
objectHash = line[/{"(\w*)"=>(.*)}/, 2]
This is pretty ugly and I'm sure there's a better way (which I'm still looking into), but I'm not sure that this is even the right approach v's there being something that automatically looks at a json representation and knows from the root value what object to instantiate.
Lastly, the actual instantiation using from_json isn't working either; it isn't populating any of the fields on my ActiveRecord. It gives me nil parameters, so I think the parse syntax isn't right.
So, that makes three questions:
Is there a way to determine which object it is that I'm just missing, that is much cleaner?
If there isn't and I need to use a regexp, is there a syntax to get both bits of the line parsed in a single go, rather than my two lines with the same regexp?
The from_json syntax appears unhappy. Is there a syntax I'm missing here? (no longer a question - the code above is fixed, I was using as_json when it should have been to_json, although the documentation is rather unclear on that....)
(Note: edits over time to clarify my question, and because I've now got a regexp that works (didn't before), but still not sure it's very elegant.)
Further information: one of the problems here, as I dig into it further, is that as_json isn't actually giving me JSON - what I have in the file is a hash, not JSON at all. Further, the values for created_at and updated_at in the hash aren't quoted - so basically that's what's causing the parse on the way back in to fail. I've worked out that I should use to_json instead of as_json, although the documentation suggests that as_json should work.
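In other words, the two calls return differently shaped results (output abbreviated):

foo.as_json(root: true)  # => {"foo"=>{"Name"=>"F", ...}}  a Ruby Hash; writing it to a file gives the =>-style inspect output
foo.to_json(root: true)  # => '{"foo":{"Name":"F",...}}'   a JSON String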
I'm not sure I fully understand your methodology, but I think using JSON.parse() would help.
There's some good information here http://mike.bailey.net.au/2011/02/json-with-ruby-and-rails/
This would help you translate the raw object back to a hash.
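For example, a minimal sketch of that idea, assuming each line was written with to_json(root: true) as above (Hash#except comes from ActiveSupport):

require 'json'

file.each_line do |line|
  # to_json(root: true) wraps each record under its model name, so the first
  # (and only) key/value pair gives the type and the attributes in one go.
  objectType, attributes = JSON.parse(line).first
  case objectType
  when 'foo'
    Foo.create!(attributes.except('id', 'created_at', 'updated_at'))
  when 'bar'
    Bar.create!(attributes.except('id', 'created_at', 'updated_at'))
  end
end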
OK, so I think I've got something that works. I'm not convinced at all that it's elegant, but it gives me the result. I'll spend some time later trying to make it cleaner.
The code looks like this:
file.each_line do |line|
  objectType = line[/^{"(\w*)":(.*)}/, 1]
  objectJSON = line[/{"(\w*)":(.*)}/, 2]
  objectHash = JSON.parse(objectJSON)
  case objectType
  when 'foo'
    restoredFoo = Foo.new(objectHash.except('id', 'created_at', 'updated_at'))
    restoredFoo.created_at = objectHash['created_at']
    restoredFoo.updated_at = objectHash['updated_at']
    restoredFoo.save!
  when 'bar'
    restoredBar = Bar.new(objectHash.except('id', 'created_at', 'updated_at'))
    restoredBar.created_at = objectHash['created_at']
    restoredBar.updated_at = objectHash['updated_at']
    restoredBar.save!
  end
end
Items of note:
I feel like there should be a way to create the object that isn't a JSON.parse, but rather would make use of the from_json method on the model. I'm not sure what the from_json is good for if it doesn't do this!!
I'm having fun with mass assignment. I don't really want to use :without_protection => true, although this would be an option. My concern is that I do want created_at and updated_at to be restored as they were, but I want a new id. I'm going to be doing this for a number of entities in my application, and I didn't really want to end up replicating the protected-attribute lists in the code - it seems not very DRY.
I'm still pretty sure my regexp can give me both objectType and objectJSON in one call (see the one-liner after these notes).
But having said all that, it works, which is a good step forwards.
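On that last note, a hedged one-liner that pulls both captures out in a single match (assuming every line has the {"type":{...}} shape shown above):

objectType, objectJSON = line.match(/\A\{"(\w+)":(.*)\}\s*\z/).captures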

Serialized column in Rails model works correctly only after refresh

In my model I have:
class Log < ActiveRecord::Base
  serialize :data
  ...

  def self.recover(table_name, row_id)
    d = Log.where(table_name: table_name, row_id: row_id)
           .where("log_type != #{symbol_to_constant(:delete)}").last
    raise "Nothing to recover" if d.nil?
    raise "No data to recover" if d.data.nil?
    row = d.data
    c = const_get(table_name)
    ret = c.create(row.attributes)
  end
end
And in my controller I calling it as:
def index
  Log.recover params[:t], params[:r]
  redirect_to request.referer
end
The problem is, if I access this page for the first time, I get the error specified below, but after a refresh everything is OK. Where can the problem be?
undefined method `attributes' for #<String:0x00000004326fc8>
Instances of models are saved in the data column. On the first access the column isn't properly deserialized, it's just YAML text, but after a refresh everything is fine. That's confusing; what is wrong? A bug in Rails?
It doesn't happen every time; sometimes everything is OK on the first access.
Deyamlizing an object of class Foo will do funny things if there is no class Foo. This can quite easily happen in development because classes are only loaded when needed and unloaded when Rails thinks they might have changed.
Depending on whether the class is loaded or not, the YAML load will have different results (YAML doesn't know about Rails's automatic loading).
One solution worth considering is to store the attributes hash rather than the ActiveRecord object. You'll probably avoid problems in the long run, and it will be more space efficient - there's a bunch of state in an ActiveRecord object that you probably don't care about in this case.
If that's not an option, your best bet is probably to make sure that the classes the serialized column might contain are loaded, via a few calls to require_dependency 'foo' at the top of the file.
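A rough sketch of the first suggestion, storing the attributes hash instead of the record itself (the record helper below is hypothetical, since the question never shows how Log rows are written):

class Log < ActiveRecord::Base
  serialize :data

  # Hypothetical writer: store a plain Hash of column values rather than the
  # ActiveRecord instance, so deserializing the YAML never needs the model class.
  def self.record(model, log_type)
    create!(table_name: model.class.name,
            row_id: model.id,
            log_type: log_type,
            data: model.attributes)
  end

  def self.recover(table_name, row_id)
    d = where(table_name: table_name, row_id: row_id).last
    raise "Nothing to recover" if d.nil? || d.data.nil?
    const_get(table_name).create(d.data.except('id'))
  end
end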
