I'm just wondering, is there a chaining concept in Ruby?
I want to execute a series of async tasks or methods, one after the other. Is that possible?
Thanks,
Ravi
You might want to create a process class, something like:
class MyProcess
  PROCESS_STEPS = %w(
    step_one
    step_two
    step_three
  )

  class << self
    def next_step
      new.next_step
    end
  end # Class Methods

  #======================================================================
  # Instance Methods
  #======================================================================
  def next_step
    PROCESS_STEPS.each do |process_step|
      send(process_step) if send("do_#{process_step}?")
    end
  end

  def step_one
    # execute step one task
  end

  def do_step_one?
    # some logic
  end

  def step_two
    # execute step two task
  end

  def do_step_two?
    # some logic
  end

  def step_three
    # execute step three task
  end

  def do_step_three?
    # some logic
  end
end
You would probably put that in:
app
|- processes
| |- my_process.rb
Then, at the end of each task, do something like:
MyProcess.next_step
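If the steps themselves are asynchronous (Sidekiq jobs, say), each step can enqueue a worker whose perform method calls MyProcess.next_step when it finishes, so the chain advances one link at a time. A minimal sketch, assuming Sidekiq and a hypothetical StepOneWorker:

class StepOneWorker
  include Sidekiq::Worker

  def perform
    # ... do the actual step-one work here ...

    # When this step is done, re-enter the chain; the do_step_*? guards
    # decide which step (if any) runs next.
    MyProcess.next_step
  end
end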
JavaScript, where Promises were popularized, is itself single-threaded; in the strictest sense, Promises are an abstraction over callbacks.
There are concurrency libraries for Ruby, some of which capture the spirit of Promises to a certain extent; a Google search for promise.rb yields some promising results:
https://github.com/lgierth/promise.rb
https://github.com/ruby-concurrency/concurrent-ruby
Perhaps these are not idiomatic Ruby, but they do offer some useful paradigms.
As far as I can tell, promise.rb is the most commonly used gem for an async mechanism adhering to the JS Promises/A+ standard.
This article does a decent job of introducing it: https://medium.com/@gauravbasti2006/lets-keep-our-promise-in-ruby-e45925182fdc
concurrent-ruby is the most widely used gem for concurrency features such as promises, similar to what other widely used languages offer. Its documentation is pretty straightforward as well:
https://github.com/ruby-concurrency/concurrent-ruby/blob/master/docs-source/promises.in.md
For chaining asynchronous tasks you can use the following:
https://github.com/ruby-concurrency/concurrent-ruby/blob/master/docs-source/promises.in.md#chaining
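For instance, with concurrent-ruby's Promises framework, chaining might look roughly like this (a sketch; fetch_data, transform and persist are hypothetical step methods):

require 'concurrent'

# Each .then block runs after the previous step resolves and receives its value.
chain = Concurrent::Promises
          .future { fetch_data }
          .then   { |data| transform(data) }
          .then   { |result| persist(result) }

chain.value!  # block until the whole chain completes, re-raising any error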
Related
In my Rails app, I am using Sidekiq for job scheduling to offload heavy tasks. So I have this code in a lot of places:
if Rails.env.development? or Rails.env.test?
  # Call the method directly
else # Rails.env.production?
  # Call the job via sidekiq that calls the said method
end
Is there any way to clean this up? I have been reading about software design patterns these days, but I am not able to apply that knowledge here. Can you suggest how this can be cleaned up or written in a way that is more manageable?
You can put
require 'sidekiq/testing'
Sidekiq::Testing.inline!
in your development.rb and test.rb config files to get the behaviour you are after. In your application's business logic you would remove the environment conditional and just call the worker (which will now run synchronously in test and development).
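For example (a sketch; SomeModel and MyHeavyWorker are stand-ins for your own model and worker):

# config/environments/development.rb (and test.rb)
require 'sidekiq/testing'
Sidekiq::Testing.inline!  # perform_async now runs jobs immediately, in-process

# In the business logic, no environment check is needed any more:
class SomeModel < ActiveRecord::Base
  def heavy_task
    MyHeavyWorker.perform_async(id)  # async in production, inline in dev/test
  end
end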
How about refactoring it like the following?
module Util
  extend self

  def execute_method_or_delay_its_execution(obj:, method_name:)
    if Rails.env.development? || Rails.env.test?
      # In development/test, just call the method directly.
      obj.send(method_name)
    else # Rails.env.production?
      # Call the job via Sidekiq that calls the said method, e.g. with a
      # hypothetical worker:
      # MethodCallWorker.perform_async(obj.class.name, obj.id, method_name)
    end
  end
end

class MyClass1
  def my_method
    # Delegate to a separately named work method so the direct call in
    # development/test does not recurse back into my_method.
    Util.execute_method_or_delay_its_execution(obj: self, method_name: :perform_my_method)
  end

  def perform_my_method
    # the actual heavy work
  end
end

class MyClass2
  def my_method
    Util.execute_method_or_delay_its_execution(obj: self, method_name: :perform_my_method)
  end

  def perform_my_method
    # the actual heavy work
  end
end
Then just invoke the methods on the objects as usual, and the internal delegation will take care of your desired direct or delayed execution:
mc_1 = MyClass1.new
mc_1.my_method
mc_2 = MyClass2.new
mc_2.my_method
Hope that helps. Thanks.
So I maintain a Rails app with more than 150 database tables, and we are experiencing deadlocks in several places.
After reading through this post https://hackernoon.com/troubleshooting-and-avoiding-deadlocks-mysql-rails-766913f3cfbc and understanding the different situations better, it seems one common pattern we have is concurrent inserts waiting on each other's unique-index locks.
So I am looking for a way to say, in a model, that it should not try to insert two records at a time, since MySQL will lock the table. I want it to be as easy as:
class BingoCard < ActiveRecord::Base
  protect_table_locks
end
This would use a Redis-based lock to wrap the create operations.
I already looked into this answer for ideas: Mutex for ActiveRecord Model.
I plan on posting my own answer when I have it.
This is my draft implementation. If there is enough interest, I will make it a gem.
# frozen_string_literal: true

require 'redlock'

module ActiveRecord
  module PersistenceRedisLock
    private

    def _create_record
      # Wrap ActiveRecord's INSERT in a distributed Redis lock keyed on the table name.
      _lock_manager.lock(_locked_resource_id, _lock_duration) do |_lock_info|
        super
      end
    end

    def _locked_resource_id
      # TODO: make it a configurable option
      "PersistenceRedisLock#{self.class.table_name}"
    end

    def _lock_duration
      # TODO: make it a configurable option
      10.seconds # Maybe too long of a default, but this is a proof of concept for now
    end

    def _lock_manager
      @@_lock_manager ||= Redlock::Client.new [Ph::Redis.redis_url_for(:red_locks)]
    end
  end

  class Base
    def self.protect_table_locks
      self.prepend PersistenceRedisLock
    end
  end
end
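With that patch loaded, opting in looks exactly like the question's example; a quick sketch of how the prepend hooks in:

class BingoCard < ActiveRecord::Base
  protect_table_locks
end

# Module#prepend puts the lock module ahead of the class itself, so any
# create/save of a new record reaches the wrapped _create_record first:
BingoCard.ancestors.first(2)
# => [ActiveRecord::PersistenceRedisLock, BingoCard]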
Is there a way to have a model such that only code within the same module can access it?
Something like:
module SomeModule
  class SomeActiveRecordModel
    # has attribute `some_attribute`
    ...
  end
end

module SomeModule
  class SomeOtherClass
    def self.sum_of_attribute
      SomeActiveRecordModel.sum(:some_attribute)
    end
  end
end

class OutsideOfModule
  def self.sum_of_attribute
    SomeModule::SomeActiveRecordModel.sum(:some_attribute)
  end
end
SomeModule::SomeOtherClass.sum_of_attribute # works
OutsideOfModule.sum_of_attribute # raises error
The short answer is no. Here's why.
Ideally, you would implement this inside SomeModule. But when you call SomeModule::SomeOtherClass.sum_of_attribute from other classes, you are in the scope of SomeModule::SomeOtherClass.
SomeModule::SomeActiveRecordModel.sum(:some_attribute)
  ||
  \/
module SomeModule
  class SomeActiveRecordModel
    def self.sum(*args)
      # Here, self => SomeModule::SomeActiveRecordModel
      # That's why you won't be able to do any meta trick in the module
      # or its classes to identify whether it's being invoked from outside
    end
  end
end
So you wouldn't know who the original caller is.
You might be able to dig through the call stack to do that. Here's another SO thread you might find helpful if you want to go down that path.
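If you do want to go down that path, here is a rough sketch using caller_locations to reject calls whose immediate caller lives outside the module's directory. The app/models/some_module path is an assumption about where the module's code lives, and this is fragile; I would not recommend it for production:

module SomeModule
  class SomeActiveRecordModel < ActiveRecord::Base
    def self.sum(*args)
      # Assumed location of SomeModule's code; adjust to your layout.
      allowed_dir = Rails.root.join("app", "models", "some_module").to_s

      # caller_locations(1, 1) gives the immediate caller's source location.
      unless caller_locations(1, 1).first.path.start_with?(allowed_dir)
        raise NoMethodError, "#{name} may only be used from within SomeModule"
      end

      super
    end
  end
end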
In short, no. But this is more a question of Ruby's approach and philosophy. There are other ways of thinking about the code that allow you to achieve something similar to what you're looking for, in a more Rubyesque way.
This answer covers the different ways of making things private.
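One of those ways, if what you mainly want is the constant itself hidden, is Module#private_constant: the unqualified name stays usable from code nested inside module SomeModule (not the compact class SomeModule::SomeOtherClass style), while SomeModule::SomeActiveRecordModel raises a NameError from outside. A sketch, with the caveat that this can interact awkwardly with Rails autoloading:

module SomeModule
  class SomeActiveRecordModel < ActiveRecord::Base
  end
  private_constant :SomeActiveRecordModel

  class SomeOtherClass
    def self.sum_of_attribute
      SomeActiveRecordModel.sum(:some_attribute)  # fine: resolved inside SomeModule
    end
  end
end

SomeModule::SomeOtherClass.sum_of_attribute  # works
SomeModule::SomeActiveRecordModel            # => NameError: private constant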
What would be the most elegant way to define static methods such as "generate_random_string" or "generate_random_user_agent" that are called from different libraries?
What are the best practices?
Best practice as I've seen would include:
Put them in a module in /lib/
Include them as mixins in the rest of your application code.
Make sure they are thoroughly tested with their own rspecs (or whatever test tool you use).
Plan them as if you may at some point want to separate them out into their own gem, or potentially make them available as a service at some point. That doesn't mean design them as separate services from the beginning, but definitely make sure they have no dependencies on any other code in your application.
Some basic code might be something like:
module App::Services
  def generate_random_string
    # ...
  end

  def generate_random_user_agent
    # ...
  end
end
Then in your model or controller code (or wherever), you could include them like this:
class MyModelClass < ActiveRecord::Base
  include App::Services

  def do_something_here
    foo = random_string
    # whatever...
  end

  def random_string
    generate_random_string
  end
end
Notice I isolated the generate_random_string call in its own method so it can be used in the model class but potentially be switched out for some other implementation easily. (This may be a step further than you want to go.)
I have an API that I built in Rails. It runs some methods I've defined in a module and renders their return values as JSON. While I've been developing, the entire code for the API has been the module itself (contents irrelevant), a single route:
controller :cool do
  get "cool/query/*args" => :query
end
and this:
class CoolController < ApplicationController
  include CoolModule

  def query
    args = params[:args].split("/")
    # convert the API URL to the method name
    method_symbol = args[0].tr("-", "_").to_sym
    if !CoolModule.method_defined?(method_symbol)
      return nil
    end
    # is calling self.method a good idea here, or is there a better way?
    render json: self.method(method_symbol).call(args[1], args[2])
  end
end
My API (i.e. the module) contains ~30 functions, each accepting a variable number of arguments, and I'd like to keep the routing logic for them nicely wrapped in the module (as it is now).
It will be used as a "mid-end" (one might say) between my Ajax front end and another API which I don't control and which is really the back end proper. So special care is needed, since it both receives user input and sends queries to a third party (which I am accountable for).
My questions specifically are:
Will this general strategy (method names directly from queries) be secure/stable for production?
If the strategy is acceptable but my implementation is not, what changes are necessary?
If the strategy is fundamentally flawed, what alternatives should I pursue?
The pessimist in me says 'miles of case-when,' but I'll thank you for your input.
The problem with Module#method_defined? is that it may return true for indirect method definitions (methods from other included modules, or inherited methods if the module is a class) as well as protected methods. This means you (and, importantly, anyone else who touches the code) will have to be very careful about what you put in that module.
So, you could use this approach, but you need to be super explicit to your future maintainers that any method in the module is automatically an external interface. Personally, I would opt for something more explicit, like a simple whitelist of allowed API method names, e.g.:
require 'set'

module CoolModule
  ALLOWED_API_METHODS = Set[
    :foo,
    :bar,
    ...
  ]

  def self.api_allowed? meth
    ALLOWED_API_METHODS.include? meth.to_sym
  end
end
Yeah, you have to maintain the list, but it's not unsightly; it's documentation of an explicit interface, and it means you won't get bitten by a later coder deciding they need to add some utility methods to the module for convenience and thus accidentally exporting them to your external API.
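Wired into the question's controller, the whitelist check might look like this (a sketch; head :not_found is just one way to reject unknown names):

class CoolController < ApplicationController
  include CoolModule

  def query
    args = params[:args].split("/")
    method_symbol = args[0].tr("-", "_").to_sym

    # Only names on the explicit whitelist are dispatchable.
    return head :not_found unless CoolModule.api_allowed?(method_symbol)

    render json: public_send(method_symbol, args[1], args[2])
  end
end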
As an alternative to the single list, you could have a define_for_api method and use that instead of def to declare the API interface methods:
require 'set'

module CoolModule
  @registered_api_methods = Set.new

  def self.define_for_api meth, &block
    define_method meth, &block
    @registered_api_methods << meth
  end

  def self.api_allowed? meth
    @registered_api_methods.include? meth.to_sym
  end

  def api_dispatch meth, *args
    raise ArgumentError unless CoolModule.api_allowed? meth
    send(meth, *args)
  end

  define_for_api :foo do |*args|
    do_something_common
    ...
  end

  define_for_api :bar do
    do_something_common
    ...
  end

  # this one is just an ordinary method internal to the module
  private

  def do_something_common
  end
end
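The controller action then only has to hand the parsed name and arguments to api_dispatch, which enforces the registry; a sketch reusing the question's parsing:

def query
  args = params[:args].split("/")
  render json: api_dispatch(args[0].tr("-", "_").to_sym, args[1], args[2])
rescue ArgumentError
  head :not_found  # name was never registered via define_for_api
end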