Monkey patching ActiveResource::Errors

I've come across an issue with ActiveResource that has been resolved and was trying to monkey patch it into my application without much luck.
I've added a file in config/initializers/ containing the following:
class ActiveResource::Errors < ActiveModel::Errors
  # https://github.com/rails/rails/commit/b09b2a8401c18d1efff21b3919ac280470a6eb8b
  def from_hash(messages, save_cache = false)
    clear unless save_cache
    messages.each do |(key, errors)|
      errors.each do |error|
        if @base.attributes.keys.include?(key)
          add key, error
        elsif key == 'base'
          self[:base] << error
        else
          # reporting an error on an attribute not in attributes
          # format and add them to base
          self[:base] << "#{key.humanize} #{error}"
        end
      end
    end
  end

  # Grabs errors from a json response.
  def from_json(json, save_cache = false)
    decoded = ActiveSupport::JSON.decode(json) || {} rescue {}
    if decoded.kind_of?(Hash) && (decoded.has_key?('errors') || decoded.empty?)
      errors = decoded['errors'] || {}
      if errors.kind_of?(Array)
        # 3.2.1-style with array of strings
        ActiveSupport::Deprecation.warn('Returning errors as an array of strings is deprecated.')
        from_array errors, save_cache
      else
        # 3.2.2+ style
        from_hash errors, save_cache
      end
    else
      # <3.2-style respond_with - lacks 'errors' key
      ActiveSupport::Deprecation.warn('Returning errors as a hash without a root "errors" key is deprecated.')
      from_hash decoded, save_cache
    end
  end
end
But it still seems to be calling activeresource-3.2.2/lib/active_resource/validations.rb:31:in `from_json'. Any help on how to properly monkey patch this would be very much appreciated.
Thanks!

It turns out that the problem was Rails lazy-loading ActiveResource after my initializer had run, overriding my patch with the original definitions. The fix is simply to require the needed files before defining the patched code.
My revised code:
require 'active_resource/base'
require 'active_resource/validations'
module ActiveResource
  class Errors
    # https://github.com/rails/rails/commit/b09b2a8401c18d1efff21b3919ac280470a6eb8b
    def from_hash(messages, save_cache = false)
      clear unless save_cache
      messages.each do |(key, errors)|
        errors.each do |error|
          if @base.attributes.keys.include?(key)
            add key, error
          elsif key == 'base'
            self[:base] << error
          else
            # reporting an error on an attribute not in attributes
            # format and add them to base
            self[:base] << "#{key.humanize} #{error}"
          end
        end
      end
    end

    # Grabs errors from a json response.
    def from_json(json, save_cache = false)
      decoded = ActiveSupport::JSON.decode(json) || {} rescue {}
      if decoded.kind_of?(Hash) && (decoded.has_key?('errors') || decoded.empty?)
        errors = decoded['errors'] || {}
        if errors.kind_of?(Array)
          # 3.2.1-style with array of strings
          ActiveSupport::Deprecation.warn('Returning errors as an array of strings is deprecated.')
          from_array errors, save_cache
        else
          # 3.2.2+ style
          from_hash errors, save_cache
        end
      else
        # <3.2-style respond_with - lacks 'errors' key
        ActiveSupport::Deprecation.warn('Returning errors as a hash without a root "errors" key is deprecated.')
        from_hash decoded, save_cache
      end
    end
  end
end
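A quick way to confirm the patch actually won (my own sanity check, not part of the original answer; Method#source_location is standard Ruby):

# In a Rails console: prints the file and line that currently define from_json.
# If the initializer's definition took effect, this points at your file,
# not the gem's validations.rb.
puts ActiveResource::Errors.instance_method(:from_json).source_location.inspect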

Related

Collect KeyErrors from Ruby hash into array

I need to extract multiple fields from a hash. But I respect my client, so I want to gather all missing fields instead of reporting them one by one. My idea was to use #fetch, rescue the KeyError, push error.key into an instance-variable array, and return a proper error message with the full list of missing keys.
Something like this:
class Extractor
  def initialize(hash)
    @hash = hash
    @missed_keys = []
  end

  def call
    extract_values
    return "Missed keys: #{@missed_keys.join(', ')}" if @missed_keys.present?
  rescue KeyError => e
    puts 'Field was missed'
    @missed_keys << e.key
    return 'Error'
  end

  private

  def extract_values
    {
      value_1: @hash.fetch(:required_field_1),
      value_2: @hash.fetch(:required_field_2),
      value_3: @hash.fetch(:required_field_3)
    }
  end
end
When I try to process a hash without the required fields, I get 'Error' after the first missing field:
pry(main)> Extractor.new(hash: {}).call
Field was missed
=> "Error"
Any clues?
DrySchema and other hash validators are not an option.
An issue with the provided solution is that the extracted values are never returned in the happy path (which presumably is important?). The call method is also stateful / non-idempotent: subsequent calls to call will duplicate the missing keys.
Finally - not sure how it's being used, but I don't love a method that returns either a hash or a string.
An alternative that attempts to follow a more functional pattern might look like:
class Extractor
  # The custom error class has to be defined somewhere; defining it here
  # keeps the example self-contained.
  MissingKeysError = Class.new(StandardError)

  attr_reader :hash, :missed_keys, :required_keys

  def initialize(hash)
    @hash = hash
    @missed_keys = []
    @required_keys = [:required_field_1, :required_field_2, :required_field_3]
  end

  def call
    validate_keys_exist!
    extract_values
  end

  private

  def validate_keys_exist!
    missed_keys = find_missing_keys
    raise MissingKeysError, "Missed keys: #{missed_keys.join(', ')}" if missed_keys.any?
  end

  def find_missing_keys
    required_keys - hash.keys
  end

  def extract_values
    hash.slice(*required_keys)
    # not sure if you need to map the keys to new values.
    # if so you can iterate over a hash of `from: :to` pairs instead of the
    # required_keys array.
  end
end
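A quick usage sketch of this variant (the return values are my illustration):

Extractor.new(required_field_1: 1, required_field_2: 2, required_field_3: 3).call
# => {:required_field_1=>1, :required_field_2=>2, :required_field_3=>3}

Extractor.new({}).call
# raises Extractor::MissingKeysError:
#   Missed keys: required_field_1, required_field_2, required_field_3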
Ok, I got it. The reason is the level at which the error is intercepted. In the aforementioned implementation, Ruby tries to execute the call method, hits the first error, and exits.
If we rework it like this:
class Extractor
  def initialize(hash)
    @hash = hash
    @missed_keys = []
  end

  def call
    extract_values
    return "Missed keys: #{@missed_keys.join(', ')}" if @missed_keys.present?
  end

  private

  def extract_values
    {
      value_1: @hash.fetch(:required_field_1),
      value_2: @hash.fetch(:required_field_2),
      value_3: @hash.fetch(:required_field_3)
    }
  rescue KeyError => e
    puts 'Field was missed'
    @missed_keys << e.key
    nil
  end
end
it looks better, but still not what we wanted:
pry(main)> Extractor.new(hash: {}).call
Field was missed
=> "Missed keys: required_field_1"
This is because Ruby tries to execute the extract_values method, encounters the first missing value, and exits.
So the solution is as follows:
class Extractor
  def initialize(hash)
    @hash = hash
    @missed_keys = []
  end

  def call
    extract_values
    return "Missed keys: #{@missed_keys.join(', ')}" if @missed_keys.present?
  end

  private

  def extract_values
    {
      value_1: fetch_value(:required_field_1),
      value_2: fetch_value(:required_field_2),
      value_3: fetch_value(:required_field_3)
    }
  end

  def fetch_value(key)
    @hash.fetch(key)
  rescue KeyError => e
    puts 'Field was missed'
    @missed_keys << e.key
    nil
  end
end
Extractor.new(hash: {}).call
Field was missed
Field was missed
Field was missed
=> "Missed keys: required_field_1, required_field_2, required_field_3"
Error interception now happens at the fetch_value level, so Ruby moves past the missing values one by one.
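As a side note (my addition, not from the original answer): Hash#fetch also accepts a block that is called with the missing key, which collects the misses without raising and rescuing at all:

def fetch_value(key)
  # The block runs only when the key is absent; no KeyError is raised.
  @hash.fetch(key) do
    puts 'Field was missed'
    @missed_keys << key
    nil
  end
end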

How to speed up loading of Marshal objects in Ruby/Rails

I have a mongoid model/class in my Rails application. It looks like this:
class Operation
  include Mongoid::Document
  include Mongoid::Timestamps
  extend Mongoid::MarshallableField

  marshallable_field :message

  def load_message
    message
  end
end
message contains an array of several thousand elements, so it has been converted into a byte stream with Marshal.
I need to be able to load message fast, but currently it takes approx. 1.4 seconds, e.g. with the load_message method shown above.
How could I speed things up?
For your reference, here is my configuration:
## app/lib/mongoid/marshallable_field.rb
module Mongoid
  module MarshallableField
    def marshallable_field(field_name, params = {})
      set_method_name = "#{field_name}=".to_sym
      get_method_name = "#{field_name}".to_sym
      attr_name = "__#{field_name}_marshallable_path".to_sym

      send :define_method, set_method_name do |obj|
        if Rails.env == "development" || Rails.env == "test"
          path = File.expand_path(Rails.public_path + "/../file_storage/#{Time.now.to_i}-#{id}.class_dump")
        elsif Rails.env == "production"
          path = "/home/ri/prod/current/file_storage/#{Time.now.to_i}-#{id}.class_dump"
        end
        # Marshal data is binary; open in "wb" so it survives on platforms
        # that translate line endings, and let the block close the handle.
        File.open(path, "wb") do |f|
          Marshal.dump(obj, f)
        end
        update_attribute(attr_name, path)
        path
      end

      send :define_method, get_method_name do
        if self[attr_name] != nil
          # Block form closes the file handle; "rb" matches the binary dump.
          File.open(self[attr_name], "rb") do |file|
            begin
              Marshal.load(file)
            rescue ArgumentError => e
              Rails.logger.error "Error unmarshalling a field #{attr_name}: #{e}"
              nil
            end
          end
        else
          Rails.logger.error "self[attr_name] is nil"
          nil
        end
      end
    end
  end
end
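Not an answer from the thread, but one cheap win if the same document instance reads the field more than once: memoize the unmarshalled value so the file is only parsed on first access.

def load_message
  # Pays the ~1.4 s Marshal.load cost once per object, then serves the
  # cached array from memory.
  @load_message ||= message
end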

Silence ActionView::Template::Errors, like "isn't precompiled"

My question is about the standard behavior of the action-view gem when using the Rails asset pipeline.
It raises an exception and app execution stops whenever there's an image that isn't precompiled, so the user just gets to see the standard blank page saying: "... something went wrong".
Something as trivial as a missing image (could be an icon, maybe with just a misspelled name...) shouldn't be a showstopper. Should it?!
We would like to change this drastic behavior to a milder version: have the app continue working, but, of course, notify us about the missing image.
Question:
Is there any other way than monkey patching the relevant part of the helper method contained in the action-view gem?
Is there any config we could modify so there would be no need for this patch?
Having this kind of monkey patch is considered a maintenance nightmare in case of gem updates, isn't it?
This is our actual patch, named "assetpipe_easy_errors.rb" and residing in config/initializers; the relevant method is "digest_for":
Sprockets::Helpers::RailsHelper::AssetPaths.class_eval do
  attr_accessor :asset_environment, :asset_prefix, :asset_digests, :compile_assets, :digest_assets

  class AssetNotPrecompiledError < StandardError; end

  def asset_for(source, ext)
    source = source.to_s
    return nil if is_uri?(source)
    source = rewrite_extension(source, nil, ext)
    asset_environment[source]
  rescue Sprockets::FileOutsidePaths
    nil
  end

  def digest_for(logical_path)
    if digest_assets && asset_digests && (digest = asset_digests[logical_path])
      return digest
    end
    if compile_assets
      if digest_assets && asset = asset_environment[logical_path]
        return asset.digest_path
      end
      return logical_path
    else
      # original code: raise AssetNotPrecompiledError.new("#{logical_path} isn't precompiled")
      ### own patch: these next four lines:
      Rails.logger.info(" arrg!! an image is missing ")
      ### example: FeedbackMailer.generic_system_message(subject, bodytext).deliver
      FeedbackMailer.generic_system_message("asset error", logical_path).deliver
      return logical_path
    end
  end

  def rewrite_asset_path(source, dir, options = {})
    if source[0] == ?/
      source
    else
      if digest_assets && options[:digest] != false
        source = digest_for(source)
      end
      source = File.join(dir, source)
      source = "/#{source}" unless source =~ /^\//
      source
    end
  end

  def rewrite_extension(source, dir, ext)
    source_ext = File.extname(source)
    if ext && source_ext != ".#{ext}"
      if !source_ext.empty? && (asset = asset_environment[source]) &&
         asset.pathname.to_s =~ /#{source}\Z/
        source
      else
        "#{source}.#{ext}"
      end
    else
      source
    end
  end
end
Any ideas are highly appreciated
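One config-level answer to the second question, worth checking before reaching for a patch (this is standard Rails 3 asset-pipeline behavior, not from the thread): with live compilation enabled, Sprockets compiles assets missing from the manifest on demand instead of raising AssetNotPrecompiledError, at the cost of slower requests:

# config/environments/production.rb
# Compile assets that were not precompiled at runtime instead of raising.
config.assets.compile = true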

Papertrail and Carrierwave

I have a model that uses both: CarrierWave to store photos, and PaperTrail for versioning.
I also configured CarrierWave to store different files on update (that's because I want to version the photos) with config.remove_previously_stored_files_after_update = false
The problem is that PaperTrail tries to store the whole Ruby object of the photo (the CarrierWave uploader) instead of simply a string (which would be its URL):
(version table, column object)
---
first_name: Foo
last_name: Bar
photo: !ruby/object:PhotoUploader
  model: !ruby/object:Bla
    attributes:
      id: 2
      first_name: Foo1
      segundo_nombre: 'Bar1'
........
How can I fix this to store a simple string in the photo version?
You can override item_before_change on your versioned model so you don't call the uploader accessor directly and use write_attribute instead. Alternatively, since you might want to do that for several models, you can monkey-patch the method directly, like this:
module PaperTrail
  module Model
    module InstanceMethods
      private

      def item_before_change
        previous = self.dup
        # `dup` clears timestamps so we add them back.
        all_timestamp_attributes.each do |column|
          previous[column] = send(column) if respond_to?(column) && !send(column).nil?
        end
        previous.tap do |prev|
          prev.id = id
          changed_attributes.each do |attr, before|
            if defined?(CarrierWave::Uploader::Base) && before.is_a?(CarrierWave::Uploader::Base)
              prev.send(:write_attribute, attr, before.url && File.basename(before.url))
            else
              prev[attr] = before
            end
          end
        end
      end
    end
  end
end
Not sure if it's the best solution, but it seems to work.
Adding @beardedd's comment as an answer because I think this is a better way to handle the problem.
Name your database columns something like picture_filename and then mount the uploader in your model using:
class User < ActiveRecord::Base
  has_paper_trail
  mount_uploader :picture, PictureUploader, mount_on: :picture_filename
end
You still use the user.picture.url attribute to access your model but PaperTrail will store revisions under picture_filename.
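For completeness, the backing column would come from a migration along these lines (table and column names are just the ones from the example above):

class AddPictureFilenameToUsers < ActiveRecord::Migration
  def change
    # Stores only the filename string; PaperTrail versions this column.
    add_column :users, :picture_filename, :string
  end
end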
Here is a slightly updated version of the monkeypatch from @rabusmar; I use it with Rails 4.2.0 and paper_trail 4.0.0.beta2, in /config/initializers/paper_trail.rb.
The second method override is required if you use the optional object_changes column for versions. It works in a somewhat strange way for carrierwave + fog if you override filename in the uploader: the old value will come from the cloud and the new one from the local filename, but in my case that's OK.
Also, I have not checked whether it works correctly when you restore an old version.
module PaperTrail
  module Model
    module InstanceMethods
      private

      # override to keep only basename for carrierwave attributes in object hash
      def item_before_change
        previous = self.dup
        # `dup` clears timestamps so we add them back.
        all_timestamp_attributes.each do |column|
          if self.class.column_names.include?(column.to_s) and not send("#{column}_was").nil?
            previous[column] = send("#{column}_was")
          end
        end
        enums = previous.respond_to?(:defined_enums) ? previous.defined_enums : {}
        previous.tap do |prev|
          prev.id = id # `dup` clears the `id` so we add that back
          changed_attributes.select { |k, v| self.class.column_names.include?(k) }.each do |attr, before|
            if defined?(CarrierWave::Uploader::Base) && before.is_a?(CarrierWave::Uploader::Base)
              prev.send(:write_attribute, attr, before.url && File.basename(before.url))
            else
              before = enums[attr][before] if enums[attr]
              prev[attr] = before
            end
          end
        end
      end

      # override to keep only basename for carrierwave attributes in object_changes hash
      def changes_for_paper_trail
        _changes = changes.delete_if { |k, v| !notably_changed.include?(k) }
        if PaperTrail.serialized_attributes?
          self.class.serialize_attribute_changes(_changes)
        end
        if defined?(CarrierWave::Uploader::Base)
          Hash[
            _changes.to_hash.map do |k, values|
              [k, values.map { |value| value.is_a?(CarrierWave::Uploader::Base) ? value.url && File.basename(value.url) : value }]
            end
          ]
        else
          _changes.to_hash
        end
      end
    end
  end
end
This is what actually works for me; put this in config/initializers/paper_trail.rb:
module PaperTrail
  module Reifier
    class << self
      def reify_attributes(model, version, attrs)
        enums = model.class.respond_to?(:defined_enums) ? model.class.defined_enums : {}
        AttributeSerializers::ObjectAttribute.new(model.class).deserialize(attrs)
        attrs.each do |k, v|
          is_enum_without_type_caster = ::ActiveRecord::VERSION::MAJOR < 5 && enums.key?(k)
          if model.send("#{k}").is_a?(CarrierWave::Uploader::Base)
            if v.present?
              model.send("remote_#{k}_url=", v["#{k}"][:url])
              model.send("#{k}").recreate_versions!
            else
              model.send("remove_#{k}!")
            end
          else
            if model.has_attribute?(k) && !is_enum_without_type_caster
              model[k.to_sym] = v
            elsif model.respond_to?("#{k}=")
              model.send("#{k}=", v)
            elsif version.logger
              version.logger.warn(
                "Attribute #{k} does not exist on #{version.item_type} (Version id: #{version.id})."
              )
            end
          end
        end
      end
    end
  end
end
This overrides the reify method to work with S3 + Heroku.
For uploaders to keep old files from updated or deleted records, do this in the uploader:
configure do |config|
  config.remove_previously_stored_files_after_update = false
end

def remove!
  true
end
Then make up some routine to clear old files from time to time. Good luck!
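A sketch of what such a routine could look like (my own illustration; the glob path and the 30-day cutoff are assumptions to adapt):

# lib/tasks/uploads.rake
namespace :uploads do
  desc 'Delete stored upload files older than 30 days'
  task prune: :environment do
    cutoff = Time.now - 30 * 24 * 60 * 60
    Dir.glob(Rails.root.join('public/uploads/**/*').to_s).each do |path|
      File.delete(path) if File.file?(path) && File.mtime(path) < cutoff
    end
  end
end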
I want to add to the previous answers the following:
It can happen that you upload different files with the same name, and this may overwrite your previous file, so you won't be able to restore the old one.
You may use a timestamp in file names or create random and unique filenames for all versioned files.
Update
This doesn't seem to work in all edge cases for me, when assigning more than a single file to the same object within a single request.
I'm using this right now:
def filename
  [@cache_id, original_filename].join('-') if original_filename.present?
end
This seems to work, as @cache_id is generated anew for each and every upload (which doesn't seem to be the case for the ideas provided in the links above).
@Sjors Provoost
We also need to override the pt_recordable_object method in the PaperTrail::Model::InstanceMethods module:
def pt_recordable_object
  attr = attributes_before_change
  object_attrs = object_attrs_for_paper_trail(attr)
  hash = Hash[
    object_attrs.to_hash.map do |k, value|
      [k, value.is_a?(CarrierWave::Uploader::Base) ? value.url && File.basename(value.url) : value]
    end
  ]
  if self.class.paper_trail_version_class.object_col_is_json?
    hash
  else
    PaperTrail.serializer.dump(hash)
  end
end

How do I preserve case with http.get?

I have a requirement to send an HTTP header in a specific character-case. I am aware that this is against the RFC, but I have a requirement.
http.get seems to change the case of the headers dictionary I supply it. How can I preserve the character-case?
Based on the Tin Man's answer that the Net::HTTP library is calling #downcase on your custom header key (and all header keys), here are some additional options that don't monkey-patch the whole of Net::HTTP.
You could try this:
custom_header_key = "X-miXEd-cASe"

def custom_header_key.downcase
  self
end
To avoid clearing the method cache, either store the result of the above in a class-level constant:
custom_header_key = "X-miXEd-cASe"

def custom_header_key.downcase
  self
end

CUSTOM_HEADER_KEY = custom_header_key
or subclass String to override that particular behavior:
class StringWithIdentityDowncase < String
  def downcase
    self
  end
end

custom_header_key = StringWithIdentityDowncase.new("X-miXEd-cASe")
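A usage sketch for the subclass approach (my addition; note that, as the next answer points out, some Ruby versions also call #capitalize and #split on header keys, and newer ones pass arguments to #downcase, so this may need extending):

require 'net/http'

key = StringWithIdentityDowncase.new("X-miXEd-cASe")
request = Net::HTTP::Get.new('/')
request[key] = "some-value"  # the mixed-case key is stored verbatim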
The accepted answer does not work. Frankly, I doubt it ever did, since it looks like it would also have had to override split and capitalize. I followed that method back a few commits; it's been that way at least since 2004.
Here is my solution, in answer to this closed question:
require 'net/http'

class Net::HTTP::ImmutableHeaderKey
  attr_reader :key

  def initialize(key)
    @key = key
  end

  def downcase
    self
  end

  def capitalize
    self
  end

  def split(*)
    [self]
  end

  def hash
    key.hash
  end

  def eql?(other)
    key.eql?(other.key)
  end

  def to_s
    key
  end
end
Now you need to be sure to always use instances of this class as your keys.
request = Net::HTTP::Get.new('/')
user_key = Net::HTTP::ImmutableHeaderKey.new("user")
request[user_key] = "James"

require 'stringio'
StringIO.new.tap do |output|
  request.exec output, 'ver', 'path'
  puts output.string
end

# >> GET path HTTP/ver
# >> Accept-Encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3
# >> Accept: */*
# >> User-Agent: Ruby
# >> user: James
# >>
Mine is one way to do it, but I recommend doing it as @yfeldblum suggests: simply short-circuit downcase for the header keys whose case needs to be left alone.
In multiple places in Net::HTTPHeader, the headers get folded to lowercase using downcase.
I think it is pretty drastic to change that behavior, but this will do it. Add this to your source and it will redefine the methods in the Net::HTTPHeader module that had downcase in them (note it reopens Net::HTTPHeader; a bare top-level module HTTPHeader would define a new, unrelated module):
module Net::HTTPHeader
  def initialize_http_header(initheader)
    @header = {}
    return unless initheader
    initheader.each do |key, value|
      warn "net/http: warning: duplicated HTTP header: #{key}" if key?(key) and $VERBOSE
      @header[key] = [value.strip]
    end
  end

  def [](key)
    a = @header[key] or return nil
    a.join(', ')
  end

  def []=(key, val)
    unless val
      @header.delete key
      return val
    end
    @header[key] = [val]
  end

  def add_field(key, val)
    if @header.key?(key)
      @header[key].push val
    else
      @header[key] = [val]
    end
  end

  def get_fields(key)
    return nil unless @header[key]
    @header[key].dup
  end

  def fetch(key, *args, &block) #:yield: +key+
    a = @header.fetch(key, *args, &block)
    a.kind_of?(Array) ? a.join(', ') : a
  end

  # Removes a header field.
  def delete(key)
    @header.delete(key)
  end

  # true if +key+ header exists.
  def key?(key)
    @header.key?(key)
  end

  def tokens(vals)
    return [] unless vals
    vals.map {|v| v.split(',') }.flatten\
      .reject {|str| str.strip.empty? }\
      .map {|tok| tok.strip }
  end
end
I think this is a brute force way of going about it, but nothing else more elegant jumped to mind.
While this should fix the problem for any Ruby libraries using Net::HTTP, it will probably fail for any gems that use Curl or libcurl.
Joshua Cheek's answer is great, but it doesn't work anymore in Ruby 2.3.
This modification fixes it:
class Net::HTTP::ImmutableHeaderKey
  ...

  def to_s
    caller.first.match(/capitalize/) ? self : @key
  end
end
It all boils down to write_header in net/generic_request.
You could monkey patch the code
# 'net/generic_request' line 319
# Reopen the request class so the override actually replaces the original.
class Net::HTTPGenericRequest
  def write_header(sock, ver, path)
    customheaders = {
      "My-Custom-Header" => "MY-CUSTOM-HEADER",
      "Another-Custom-Header" => "aNoThErCuStOmHeAdEr"
    }
    buf = "#{@method} #{path} HTTP/#{ver}\r\n"
    each_capitalized do |k, v|
      customheaders.key?(k) ? kk = customheaders[k] : kk = k
      buf << "#{kk}: #{v}\r\n"
    end
    buf << "\r\n"
    sock.write buf
  end
end
and you don't need to rewrite the whole net/http/header, net/generic_request and net/http chain.
It's not the best solution, but it's the easiest one, I guess, and it involves the least amount of monkey patching.
Hope it helps.
