Serve stale data from cache - ruby-on-rails

I'm working on a Rails app where I want to serve stale data from the cache (the changes are small and don't matter that much), while updating the cache in the background.
One way to do this is something like this:
if Rails.cache.exist?('my_view')
  Rails.cache.read('my_view')
  MaybeUpdateInBackground.perform_async
end

# MaybeUpdateInBackground.rb
def perform
  Rails.cache.write('my_view', render_to_string('my_view'))
end
Is there a better pattern for this in RoR?

This ended up getting the job done. To break it down:
Dummy leverages a "stale cache"; the WithStaleCache module can be added to any class.
When you call expensive_but_unimportant_operation_cached, it checks Rails.cache: if a value is there it returns it, and it triggers a "refresh" if that value is stale.
A refresh is a Sidekiq job that re-runs the function and rewrites the cache entry.
class Dummy
  extend T::Sig
  include WithStaleCache

  sig { returns(Integer) }
  def expensive_but_unimportant_operation_cached
    cache_with_stale_cache(
      'expensive_but_unimportant_operation',
      expires_in: 1.week,
      stale_in: 1.day,
    )
  end

  sig { returns(Integer) }
  def self.expensive_but_unimportant_operation
    sleep 10
    10
  end
end
module WithStaleCache
  extend T::Sig

  sig do
    params(
      method: String,
      serialized_arguments: T.nilable(T::Array[String]),
      stale_in: ActiveSupport::Duration,
      expires_in: ActiveSupport::Duration,
    )
      .returns(T.untyped)
  end
  def cache_with_stale_cache(
    method,
    serialized_arguments = nil,
    stale_in:,
    expires_in:
  )
    SolidAssert.assert(
      T.unsafe(self).class.respond_to?(method),
      "#{T.unsafe(self).class} must implement class method: #{method}.",
    )

    cache_key =
      build_key(T.unsafe(self).class.to_s, method, serialized_arguments)
    cached_value = Rails.cache.read(cache_key)
    value = cached_value&.dig('value')

    if cached_value.nil?
      # Cache miss: fill the cache inline so the caller still gets a value.
      value =
        CacheFillWorker.new.perform(
          cache_key,
          serialized_arguments,
          { 'expires_in' => expires_in, 'stale_in' => stale_in },
        )
    elsif cached_value['stale_at'].before?(Time.current)
      # Stale hit: return the cached value below and refresh in the background.
      CacheFillWorker.perform_async(
        cache_key,
        serialized_arguments,
        { 'expires_in' => expires_in, 'stale_in' => stale_in },
      )
    end

    value
  end

  sig do
    params(
      klass: String,
      method_name: String,
      args: T.nilable(T::Array[String]),
    )
      .returns(String)
  end
  def build_key(klass, method_name, args)
    "stale_cache/v1/#{args}/#{klass}/#{method_name}"
  end

  sig { params(cache_key: String).returns([String, String]) }
  def self.class_and_method_from_cache_key(cache_key)
    T.cast(cache_key.split('/').last(2), [String, String])
  end

  sig do
    params(cache_key: String, serialized_arguments: T.nilable(T::Array[String]))
      .returns(T.untyped)
  end
  def self.perform(cache_key, serialized_arguments)
    deserialized_args =
      if serialized_arguments.nil?
        nil
      else
        serialized_arguments.map { |arg| JSON.parse(arg) }
      end

    class_name, method = class_and_method_from_cache_key(cache_key)
    klass = class_name.constantize

    if deserialized_args.nil?
      klass.send(method)
    else
      klass.send(method, *deserialized_args)
    end
  end
end
# typed: true
class CacheFillWorker
  extend T::Sig
  include Sidekiq::Worker

  # Arguments are serialized.
  sig do
    params(
      cache_key: String,
      serialized_arguments: T.nilable(T::Array[String]),
      cache_options: StringKeyHash,
    )
      .returns(T.untyped)
  end
  def perform(cache_key, serialized_arguments, cache_options = {})
    cached_value = Rails.cache.read(cache_key)

    if cached_value.nil?
      next_value = WithStaleCache.perform(cache_key, serialized_arguments)
    elsif cached_value['stale_at'].before?(Time.current)
      next_value = WithStaleCache.perform(cache_key, serialized_arguments)
    else
      # No-op since the value is not stale.
      return cached_value['value']
    end

    stale_in = cache_options.delete('stale_in').to_i
    Rails.cache.write(
      cache_key,
      {
        'value' => next_value,
        'stale_at' => stale_in.seconds.from_now,
        'cached_at' => Time.zone.now,
      },
      cache_options,
    )

    next_value
  end
end
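For reference, here is a rough sketch of how this plays out when calling the cached method (a hypothetical console session; it assumes the classes above are loaded, Sidekiq is running, and the cache store persists between calls):

dummy = Dummy.new

# First call: cache miss, so CacheFillWorker runs inline, computes the value
# (~10 seconds here because of the sleep) and writes it with a stale_at timestamp.
dummy.expensive_but_unimportant_operation_cached # => 10

# Calls within the next day (stale_in): fresh cache hit, returned immediately.
dummy.expensive_but_unimportant_operation_cached # => 10

# Calls after a day but within a week (expires_in): the stale value is still
# returned immediately, and a CacheFillWorker job is enqueued to refresh it.
dummy.expensive_but_unimportant_operation_cached # => 10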

Related

Is a ':methods' option in 'to_json' substitutable with an ':only' option?

The to_json method takes both an :only option and a :methods option. The former is intended to accept attributes and the latter methods.
I have a model that has an attribute foo, which is overwritten:
class SomeModel < ActiveRecord::Base
  ...
  def foo
    # Overrides the original attribute `foo`
    "the overwritten foo value"
  end
end
The overwritten foo method seems to be called irrespective of which option I list foo under.
SomeModel.first.to_json(only: [:foo])
# => "{..., \"foo\":\"the overwritten foo value\", ...}"
SomeModel.first.to_json(methods: [:foo])
# => "{..., \"foo\":\"the overwritten foo value\", ...}"
This seems to suggest it does not matter whether I use :only or :methods.
Is this the case? I feel like something is wrong with my thinking.
The source code leads to these:
File activemodel/lib/active_model/serialization.rb, line 124

def serializable_hash(options = nil)
  options ||= {}

  attribute_names = attributes.keys
  if only = options[:only]
    attribute_names &= Array(only).map(&:to_s)
  elsif except = options[:except]
    attribute_names -= Array(except).map(&:to_s)
  end

  hash = {}
  attribute_names.each { |n| hash[n] = read_attribute_for_serialization(n) }

  Array(options[:methods]).each { |m| hash[m.to_s] = send(m) }

  serializable_add_includes(options) do |association, records, opts|
    hash[association.to_s] = if records.respond_to?(:to_ary)
      records.to_ary.map { |a| a.serializable_hash(opts) }
    else
      records.serializable_hash(opts)
    end
  end

  hash
end

File activeresource/lib/active_resource/base.rb, line 1394

def read_attribute_for_serialization(n)
  attributes[n]
end
It seems that the :only option ends up calling attributes[n], while the :methods option calls send(m). What is the difference?
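There is a practical difference, which follows from the serializable_hash source above: :only can only narrow the list of existing attribute names (attribute_names &= ...), while :methods adds the result of send(m) on top of whatever attributes survive the filter. A small sketch (the bar method below is hypothetical and is not a database column):

class SomeModel < ActiveRecord::Base
  def bar
    # Not backed by a column, so it is not in attributes.keys.
    "not an attribute"
  end
end

# :only filters attributes.keys, so a plain method is silently dropped:
SomeModel.first.to_json(only: [:bar])
# => "{}"

# :methods calls send(:bar) and merges the result on top of the filtered attributes:
SomeModel.first.to_json(only: [:foo], methods: [:bar])
# => "{\"foo\":\"the overwritten foo value\",\"bar\":\"not an attribute\"}"

As for why the overridden foo shows up identically under both options: in ActiveModel::Serialization (unlike the ActiveResource version quoted above), read_attribute_for_serialization is simply aliased to send, so both paths end up calling the overridden method.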

OpenStruct issue with Ruby 2.3.1

In Ruby 2.1.5 and 2.2.4, creating a new Collector returns the correct result.
require 'ostruct'

module ResourceResponses
  class Collector < OpenStruct
    def initialize
      super
      @table = Hash.new { |h, k| h[k] = Response.new }
    end
  end

  class Response
    attr_reader :publish_formats, :publish_block, :blocks, :block_order

    def initialize
      @publish_formats = []
      @blocks = {}
      @block_order = []
    end
  end
end

> Collector.new
=> #<ResourceResponses::Collector>
> Collector.new.responses
=> #<ResourceResponses::Response:0x007fb3f409ae98 @block_order=[], @blocks={}, @publish_formats=[]>
When I upgrade to Ruby 2.3.1, it starts returning nil instead.
> Collector.new
=> #<ResourceResponses::Collector>
> Collector.new.responses
=> nil
I've done a lot of reading about how OpenStruct is now 10x faster in 2.3, but I'm not seeing which change would break the relationship between Collector and Response. Any help is much appreciated. Rails is at version 4.2.7.1.
Let's have a look at the current implementation of method_missing:
def method_missing(mid, *args) # :nodoc:
  len = args.length
  if mname = mid[/.*(?==\z)/m]
    if len != 1
      raise ArgumentError, "wrong number of arguments (#{len} for 1)", caller(1)
    end
    modifiable?[new_ostruct_member!(mname)] = args[0]
  elsif len == 0
    if @table.key?(mid)
      new_ostruct_member!(mid) unless frozen?
      @table[mid]
    end
  else
    err = NoMethodError.new "undefined method `#{mid}' for #{self}", mid, args
    err.set_backtrace caller(1)
    raise err
  end
end
The interesting part is the branch in the middle, which runs when the method name doesn't end with an = and there are no additional arguments:
if @table.key?(mid)
  new_ostruct_member!(mid) unless frozen?
  @table[mid]
end
As you can see, the implementation first checks whether the key exists before actually reading the value. This breaks your approach of using a hash that builds a new Response when a key is not yet set, because calling key? on its own does not trigger the default block:
hash = Hash.new { |h,k| h[k] = :bar }
hash.has_key?(:foo)
#=> false
hash
#=> {}
hash[:foo]
#=> :bar
hash
#=> { :foo => :bar }
Ruby 2.2 didn't have this optimization. It just returned @table[mid] without checking @table.key? first.
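One way to get the old behaviour back (a minimal sketch, assuming the Collector and Response classes from the question, and that reads of missing keys always arrive without arguments) is to bypass the key? check by indexing the table directly, since h[k] does trigger the Hash default block:

require 'ostruct'

module ResourceResponses
  class Collector < OpenStruct
    def initialize
      super
      @table = Hash.new { |h, k| h[k] = Response.new }
    end

    # Reads go straight to the table: indexing triggers the default block and
    # memoizes a Response, unlike OpenStruct#method_missing in Ruby 2.3, which
    # checks key? first. Writers and everything else fall through to super.
    def method_missing(mid, *args)
      if args.empty? && !mid.to_s.end_with?('=')
        @table[mid]
      else
        super
      end
    end
  end
end

Collector.new.responses # => #<ResourceResponses::Response ...>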

Call a generic function with or without parameters

I had code that looked like this:
def my_function(obj)
  if obj.type == 'a'
    return [:something]
  elsif obj.type == 'b'
    return []
  elsif obj.type == 'c'
    return [obj]
  elsif obj.type == 'd'
    return [obj] * 2
  end
end
I want to separate all these if...elsif blocks into functions like this:
def my_function_with_a
  return [:something]
end

def my_function_with_b
  return []
end

def my_function_with_c(a_parameter)
  return [a_parameter]
end

def my_function_with_d(a_parameter)
  return [a_parameter] * 2
end
I call these functions with
def my_function(obj)
  send(:"my_function_with_#{obj.type}", obj)
end
The problem is that some functions need parameters and others do not. I can easily define def my_function_with_a(nothing = nil), but I'm sure there is a better way to do this.
@Dogbert had a great idea with arity. I have a solution like this:
def my_function(obj)
  # Look up the Method object so its arity can be checked before calling it.
  method = self.method("my_function_with_#{obj.type}")
  return (method.arity.zero? ? method.call : method.call(obj))
end
Have a look at how methods are called in Ruby; the Ruby wikibooks are a good resource for this.
Take special note of variable-length argument lists (splat arguments), which let you define a method like this:
def method(*args)
end
and then you can call it like this:
method
method(arg1)
method(arg1, arg2)
def foo(*args)
[ 'foo' ].push(*args)
end
>> foo
=> [ 'foo' ]
>> foo('bar')
=> [ 'foo', 'bar' ]
>> foo('bar', 'baz')
=> [ 'foo', 'bar', 'baz' ]
def my_function(obj)
  method = method("my_function_with_#{obj.type}")
  # Pass obj only if the target method actually takes an argument:
  # [obj].first(0) is [], [obj].first(1) is [obj].
  method.call(*[obj].first(method.arity))
end
Change your function to something like:
def my_function_with_foo(bar = nil)
  if bar
    return ['foo', bar]
  else
    return ['foo']
  end
end
Now the following will both work:
send(:"my_function_with_#{foo_bar}")
=> ['foo']
send(:"my_function_with_#{foo_bar}", "bar")
=> ['foo', 'bar']
You can also write it like this if you don't want to use if/else and you're sure you'll never need nil in the array:
def my_function_with_foo(bar = nil)
  return ['foo', bar].compact
end
You can use a default value
def fun(a_param = nil)
  if a_param
    return ['raboof', a_param]
  else
    return ['raboof']
  end
end
or...
def fun(a_param: nil)
  if a_param
    return ['raboof', a_param]
  else
    return ['raboof']
  end
end
The latter (keyword arguments) is useful if you have multiple parameters, because callers can pass in just the ones that matter right now, by name; see the sketch below.
fun(a_param: "Hooray")
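To illustrate that point, here is a small sketch with a second, hypothetical keyword parameter (the names and defaults are made up):

def fun(a_param: nil, another_param: 'default')
  # compact drops the nil entries for keywords that weren't supplied.
  ['raboof', a_param, another_param].compact
end

fun                                            # => ["raboof", "default"]
fun(another_param: 'baz')                      # => ["raboof", "baz"]
fun(a_param: 'Hooray')                         # => ["raboof", "Hooray", "default"]
fun(another_param: 'baz', a_param: 'Hooray')   # => ["raboof", "Hooray", "baz"]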

Ruby Challenge - Method chaining and Lazy Evaluation

After reading the article http://jeffkreeftmeijer.com/2011/method-chaining-and-lazy-evaluation-in-ruby/, I started looking for a better solution for method chaining and lazy evaluation.
I think I've encapsulated the core problem with the five specs below; can anyone get them all passing?
Anything goes: subclassing, delegation, meta-programming (though the latter is discouraged).
It would be favourable to keep dependencies to a minimum:
require 'rspec'

class Foo
  # Epic code here
end

describe Foo do
  it 'should return an array corresponding to the reverse of the method chain' do
    # Why the reverse? So that we're forced to evaluate something
    Foo.bar.baz.should == ['baz', 'bar']
    Foo.baz.bar.should == ['bar', 'baz']
  end

  it 'should be able to chain a new method after initial evaluation' do
    foobar = Foo.bar
    foobar.baz.should == ['baz', 'bar']
    foobaz = Foo.baz
    foobaz.bar.should == ['bar', 'baz']
  end

  it 'should not mutate instance data on method calls' do
    foobar = Foo.bar
    foobar.baz
    foobar.baz.should == ['baz', 'bar']
  end

  it 'should behave as an array as much as possible' do
    Foo.bar.baz.map(&:upcase).should == ['BAZ', 'BAR']
    Foo.baz.bar.join.should == 'barbaz'
    Foo.bar.baz.inject do |acc, str|
      acc << acc << str
    end.should == 'bazbazbar'
    # === There will be cake! ===
    # Foo.ancestors.should include Array
    # Foo.new.should == []
    # Foo.new.methods.should_not include 'method_missing'
  end

  it "should be a general solution to the problem I'm hoping to solve" do
    Foo.bar.baz.quux.rab.zab.xuuq.should == ['xuuq', 'zab', 'rab', 'quux', 'baz', 'bar']
    Foo.xuuq.zab.rab.quux.baz.bar.should == ['bar', 'baz', 'quux', 'rab', 'zab', 'xuuq']

    foobarbaz = Foo.bar.baz
    foobarbazquux = foobarbaz.quux
    foobarbazquuxxuuq = foobarbazquux.xuuq
    foobarbazquuxzab = foobarbazquux.zab

    foobarbaz.should == ['baz', 'bar']
    foobarbazquux.should == ['quux', 'baz', 'bar']
    foobarbazquuxxuuq.should == ['xuuq', 'quux', 'baz', 'bar']
    foobarbazquuxzab.should == ['zab', 'quux', 'baz', 'bar']
  end
end
This is inspired by Amadan's answer but uses fewer lines of code:
class Foo < Array
  def self.method_missing(message, *args)
    new 1, message.to_s
  end

  def method_missing(message, *args)
    dup.unshift message.to_s
  end
end
Trivial, isn't it?
class Foo < Array
  def self.bar
    other = new
    other << 'bar'
    other
  end

  def self.baz
    other = new
    other << 'baz'
    other
  end

  def bar
    other = clone
    other.unshift 'bar'
    other
  end

  def baz
    other = clone
    other.unshift 'baz'
    other
  end
end
The to_s criterion fails because 1.9 has changed the way Array#to_s works. Change to this for compatibility:
Foo.baz.bar.to_s.should == ['bar', 'baz'].to_s
I want cake.
BTW - metaprogramming here would cut down the code size and increase flexibility tremendously:
class Foo < Array
  def self.method_missing(message, *args)
    other = new
    other << message.to_s
    other
  end

  def method_missing(message, *args)
    other = clone
    other.unshift message.to_s
    other
  end
end
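For what it's worth, a quick sanity check of the chaining behaviour (hypothetical IRB output, using the metaprogramming version above):

Foo.bar.baz        # => ["baz", "bar"]
Foo.baz.bar.join   # => "barbaz"

foobar = Foo.bar
foobar.baz         # => ["baz", "bar"]
foobar             # => ["bar"]  (unchanged, since method_missing works on a clone)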

Contextual Logging with Log4r

Here's how some of my existing logging code with Log4r works. As you can see in WorkerX::a_method, any time I log a message I want the class name and the calling method to be included (I don't want the full caller history or any other noise, which was my purpose behind LgrHelper).
class WorkerX
  include LgrHelper

  def initialize(args = {})
    @logger = Lgr.new({:debug => args[:debug], :logger_type => 'WorkerX'})
  end

  def a_method
    error_msg("some error went down here")
    # This prints out: "WorkerX::a_method - some error went down here"
  end
end
class Lgr
  require 'log4r'
  include Log4r

  def initialize(args = {}) # args: debug boolean, logger type
    @debug = args[:debug]
    @logger_type = args[:logger_type]
    @logger = Log4r::Logger.new(@logger_type)

    format = Log4r::PatternFormatter.new(:pattern => "%l:\t%d - %m")
    outputter = Log4r::StdoutOutputter.new('console', :formatter => format)
    @logger.outputters = outputter

    if @debug then
      @logger.level = DEBUG
    else
      @logger.level = INFO
    end
  end

  def debug(msg)
    @logger.debug(msg)
  end

  def info(msg)
    @logger.info(msg)
  end

  def warn(msg)
    @logger.warn(msg)
  end

  def error(msg)
    @logger.error(msg)
  end

  def level
    @logger.level
  end
end
module LgrHelper
  # This module should only be included in a class that has a @logger instance variable, obviously.

  protected

  def info_msg(msg)
    @logger.info(log_intro_msg(self.method_caller_name) + msg)
  end

  def debug_msg(msg)
    @logger.debug(log_intro_msg(self.method_caller_name) + msg)
  end

  def warn_msg(msg)
    @logger.warn(log_intro_msg(self.method_caller_name) + msg)
  end

  def error_msg(msg)
    @logger.error(log_intro_msg(self.method_caller_name) + msg)
  end

  def log_intro_msg(method)
    msg = class_name
    msg += '::'
    msg += method
    msg += ' - '
    msg
  end

  def class_name
    self.class.name
  end

  def method_caller_name
    if /`(.*)'/.match(caller[1]) then # caller.first
      $1
    else
      nil
    end
  end
end
I really don't like this approach. I'd rather just use the existing @logger instance variable to print the message and have it be smart enough to know the context. How can this, or a similarly simple approach, be done?
My environment is Rails 2.3.11 (for now!).
After posting my answer using extend (see "EDIT" below), I thought I'd try using set_trace_func to keep a sort of stack trace, like in the discussion I posted to. Here is my final solution; the set_trace_func call would be put in an initializer or similar.
#!/usr/bin/env ruby

# Keep track of the classes that invoke each "call" event
# and the method they called as an array of arrays.
# The array is in the format: [calling_class, called_method]
set_trace_func proc { |event, file, line, id, bind, klass|
  if event == "call"
    Thread.current[:callstack] ||= []
    Thread.current[:callstack].push [klass, id]
  elsif event == "return"
    Thread.current[:callstack].pop
  end
}

class Lgr
  require 'log4r'
  include Log4r

  def initialize(args = {}) # args: debug boolean, logger type
    @debug = args[:debug]
    @logger_type = args[:logger_type]
    @logger = Log4r::Logger.new(@logger_type)

    format = Log4r::PatternFormatter.new(:pattern => "%l:\t%d - %m")
    outputter = Log4r::StdoutOutputter.new('console', :formatter => format)
    @logger.outputters = outputter

    if @debug then
      @logger.level = DEBUG
    else
      @logger.level = INFO
    end
  end

  def debug(msg)
    @logger.debug(msg)
  end

  def info(msg)
    @logger.info(msg)
  end

  def warn(msg)
    @logger.warn(msg)
  end

  def error(msg)
    @logger.error(msg)
  end

  def level
    @logger.level
  end

  def invoker
    Thread.current[:callstack] ||= []
    (Thread.current[:callstack][-2] || ['Kernel', 'main'])
  end
end

class CallingMethodLogger < Lgr
  [:info, :debug, :warn, :error].each do |meth|
    define_method(meth) { |msg| super("#{invoker[0]}::#{invoker[1]} - #{msg}") }
  end
end

class WorkerX
  def initialize(args = {})
    @logger = CallingMethodLogger.new({:debug => args[:debug], :logger_type => 'WorkerX'})
  end

  def a_method
    @logger.error("some error went down here")
    # This prints out: "WorkerX::a_method - some error went down here"
  end
end

w = WorkerX.new
w.a_method
I don't know how much, if at all, the calls to the proc will affect the performance of an application; if it ends up being a concern, perhaps something less clever about the calling class (like my old answer, below) will work better.
[EDIT: What follows is my old answer, referenced above.]
How about using extend? Here's a quick-and-dirty script I put together from your code to test it out; I had to reorder things to avoid errors, but the code is the same with the exception of LgrHelper (which I renamed CallingMethodLogger) and the second line of WorkerX's initializer:
#!/usr/bin/env ruby

module CallingMethodLogger
  def info(msg)
    super("#{@logger_type}::#{method_caller_name} - " + msg)
  end

  def debug(msg)
    super("#{@logger_type}::#{method_caller_name} - " + msg)
  end

  def warn(msg)
    super("#{@logger_type}::#{method_caller_name} - " + msg)
  end

  def error(msg)
    super("#{@logger_type}::#{method_caller_name} - " + msg)
  end

  def method_caller_name
    if /`(.*)'/.match(caller[1]) then # caller.first
      $1
    else
      nil
    end
  end
end

class Lgr
  require 'log4r'
  include Log4r

  def initialize(args = {}) # args: debug boolean, logger type
    @debug = args[:debug]
    @logger_type = args[:logger_type]
    @logger = Log4r::Logger.new(@logger_type)

    format = Log4r::PatternFormatter.new(:pattern => "%l:\t%d - %m")
    outputter = Log4r::StdoutOutputter.new('console', :formatter => format)
    @logger.outputters = outputter

    if @debug then
      @logger.level = DEBUG
    else
      @logger.level = INFO
    end
  end

  def debug(msg)
    @logger.debug(msg)
  end

  def info(msg)
    @logger.info(msg)
  end

  def warn(msg)
    @logger.warn(msg)
  end

  def error(msg)
    @logger.error(msg)
  end

  def level
    @logger.level
  end
end

class WorkerX
  def initialize(args = {})
    @logger = Lgr.new({:debug => args[:debug], :logger_type => 'WorkerX'})
    @logger.extend CallingMethodLogger
  end

  def a_method
    @logger.error("some error went down here")
    # This prints out: "WorkerX::a_method - some error went down here"
  end
end

w = WorkerX.new
w.a_method
The output is:
ERROR: 2011-07-24 20:01:40 - WorkerX::a_method - some error went down here
The downside is that, with this method, the caller's class name isn't figured out automatically; it's explicit, based on the @logger_type passed into the Lgr instance. However, you may be able to use another approach to get the actual name of the class, perhaps something like the call_stack gem or Kernel#set_trace_func (see this thread).
