RoR: How to trigger a python script after every order

I am a rookie in RoR. I want to run a custom task after every order. To be more specific, I want to run a python script that updates the quantity of the same product on another site.
So my question is: how can I trigger the script after every order?
Thanks

You can run shell commands from Ruby with backticks, e.g. `python /path/to/script.py` — note the backticks around the command.
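For example (a minimal sketch; the script path is a placeholder):
# Backticks run the command and return its standard output as a string.
output = `python /path/to/script.py`

# system runs the command and returns true or false based on its exit status.
ok = system("python", "/path/to/script.py")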

If Order is an ActiveRecord model, you can use an ActiveRecord callback to run code after each Order is created. For example:
class Order < ActiveRecord::Base
  after_create :update_count

  private

  def update_count
    system "python /path/to/script.py #{self.class.count}"
  end
end
If the script is likely to take a long time, you'll want to spawn a background job to run it instead. There are a number of Ruby libraries to manage this for you; have a look at sidekiq, resque, or delayed_job for a start.
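If you go that route, a minimal Sidekiq sketch might look like this (the worker name and script path are placeholders, and it assumes the sidekiq gem and a running Redis server):
class UpdateRemoteQuantityWorker
  include Sidekiq::Worker

  def perform(order_id)
    # Shell out to the (hypothetical) python script with the order id.
    system "python /path/to/script.py #{order_id}"
  end
end

# In the Order model, enqueue the worker instead of shelling out inline:
# after_create { UpdateRemoteQuantityWorker.perform_async(id) }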

You can add an after_filter (renamed after_action in newer Rails). But this will block the request while the script runs.
class OrdersController < ApplicationController
  after_filter :update_product_count, :only => :new_order

  def new_order
    ...
  end

  private

  def update_product_count
    system "python /path/to/update/script.py"
  end
end
Or you can do it in the background (I prefer the delayed_job gem).
Order model:
class Order < ActiveRecord::Base
  # Defined as a class method so it can be called via Order.delay below.
  def self.update_product_count(order_id)
    # find the products in this order and pass their ids to the python script
    product_ids = Order.find(order_id).products.map(&:id)
    system "python /path/to/update/script.py #{product_ids.join(' ')}"
  end
end
Orders controller:
class OrdersController < ApplicationController
  def new_order
    order = Order.new
    ...
    order.save!
    # Enqueue the update as a delayed job so the request isn't blocked.
    Order.delay.update_product_count(order.id)
  end
end

Related

Separated logs in Rails 6

A long time ago it was possible to set a different log file for each class or class hierarchy in Rails, just by doing, for instance:
class ApplicationJob < ActiveJob::Base
  self.logger = Logger.new('log/ActiveJob.log')
end
and the same in each descendant class.
Now, running with Rails 6 and Spring, I find that the last defined logger overwrites the previous ones. Is there any other recipe?
I also notice there is some pattern to the overwriting. For instance, perhaps all the descendants of ActiveJob get the wrong log file, while the descendants of ActiveRecord get a different logger.
Any clue about what should happen if I do self.logger = Logger.new('nameOfClass.log') for each model, controller and job?
EDIT: I have tried self.logger= in the class definition and MyClass.logger= after the definition of the class. Neither works.
Any clue about what should happen if I do self.logger = Logger.new('nameOfClass.log') for each model, controller and job?
I don't know about models, but I'm declaring logger = Logger.new('class_name.log') in each controller and job, and as a result I can check controller_a.log or controller_b.log / job_a.log or job_b.log separately.
UPDATE:
For example (controller)
# a_controller
class AController < ApplicationController
  before_action :clogger

  def index
    @logger.info("index")
    ...
  end

  def show
    @logger.info("show")
    ...
  end

  private

  def clogger
    @logger = Logger.new('path_to_log/a_controller.log')
  end
end

# b_controller
class BController < ApplicationController
  before_action :clogger

  def index
    @logger.info("index")
    ...
  end

  def show
    @logger.info("show")
    ...
  end

  private

  def clogger
    @logger = Logger.new('path_to_log/b_controller.log')
  end
end
And I get the A and B controller logs separately. It works the same with jobs.
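For a job, the same pattern might look like this (a minimal sketch; ActiveJob's before_perform stands in for the controller's before_action, and the class name and log path are placeholders):
class AJob < ActiveJob::Base
  before_perform :jlogger

  def perform(*args)
    @logger.info("performing AJob")
    # ... the actual work ...
  end

  private

  def jlogger
    @logger = Logger.new('path_to_log/a_job.log')
  end
end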

Rails control Job execution

I have a job created with rails g job cleanUp.
Is there any option to check if a job is running? Something like CleanUpJob.isRunning?
If there is no way to do it without additional gems, which would be the simplest? delayed_job?
The second thing I want to control is job progress; any thoughts on how to implement CleanUpJob.progress, or should progress_job be my choice?
Briefly:
I need to create a job with two methods (isRunning?, progress).
I don't really want additional tables if possible.
You can use the Sidekiq::ScheduledSet class for this purpose; it's the same class the Sidekiq web interface uses.
Example:
Save the job id (jid) when you enqueue the job. Then you can look it up from the queried instance:
def is_running?
  require 'sidekiq/api'
  ss = Sidekiq::ScheduledSet.new
  jobs = ss.select { |ret| ret.jid == self.jid }
  jobs.any?
end
Or, you can set a DB flag inside the job with an around_perform hook.
class SomeJob < ActiveJob::Base
  queue_as :some_job

  around_perform do |job, block|
    start_import_process_log job.arguments[0], job.arguments[1] || {}
    block.call
    finish_import_process_log
  end

  # ...

  private

  def start_import_process_log(import_process, options = {})
    # some actions
  end

  def finish_import_process_log
  end
end
In this example an associated log record is created.
Or you can use before_perform / after_perform; a sketch follows below.
In my practice I create log records for long tasks.
When I need to find and kill a job, for example, I use Sidekiq::ScheduledSet.
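A minimal sketch of the before_perform / after_perform variant (the JobStatus model and its running column are assumptions, not part of the original code):
class SomeJob < ActiveJob::Base
  queue_as :some_job

  # Flip a flag on a (hypothetical) JobStatus record around each run.
  before_perform do |job|
    JobStatus.find_or_create_by!(name: job.class.name).update!(running: true)
  end

  after_perform do |job|
    JobStatus.find_by(name: job.class.name)&.update!(running: false)
  end

  def perform(*args)
    # long-running work...
  end
end
An isRunning?-style check could then simply read that flag, e.g. JobStatus.find_by(name: 'SomeJob')&.running?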

Ruby Gem Delayed_Job: Does not process jobs stored in lib folder

I have installed the Ruby gem Delayed_Job to run tasks in a queue, but it shows some behavior that I don't understand. Delayed_Job is using my local active_record, so it is a very standard installation.
I have the code for a job in a file called test_job.rb in my /lib folder:
class TestJob
  # Create an entry in the database to track the execution of jobs
  DatabaseJob = Struct.new(:text, :emails) do
    def perform
      # Perform Test Code
    end
  end

  def enqueue
    # enqueue the job
    Delayed::Job.enqueue DatabaseJob.new('lorem ipsum...', 'test email')
  end
end
When I try to call the code from a controller like this, the first time the job seems to get submitted (it is listed in rake jobs:work) but it does not run:
require 'test_job'

class ExampleController < ApplicationController
  def index
  end

  def job
    # Create a new job instance
    job = TestJob.new
    # Enqueue the job into Delayed_Job
    job.enqueue
  end
end
Then when I change the controller code to do what my lib class does, it works perfectly. The job not only gets submitted to the queue, but also runs and completes without failures.
require 'test_job'

class ExampleController < ApplicationController
  # Create an entry in the database to track the execution of jobs
  DatabaseJob = Struct.new(:text, :emails) do
    def perform
      # Perform Test Code
    end
  end

  def index
  end

  def job
    # enqueue the job
    Delayed::Job.enqueue DatabaseJob.new('lorem ipsum...', 'test email')
  end
end
The strange thing is that when I switch back to calling the lib job class, it then works without a problem, regardless of whether the struct is defined directly in the controller or in the class in the lib folder.
Defining the struct inside the controller and submitting the job that way always seems to work, but afterwards the lib class also starts working, and sometimes the lib class even keeps working after a restart of the Rails server.
Any ideas? Thank you very much for the help.
Best,
Bastian

Storing data in a job for use when it is performed

I need to store information at the time an Active Job is scheduled for use when it is later performed. I would like to save this information in the Active Job itself, but I'm not sure if that's possible.
Here's a simplified version of what I'm trying, which reproduces a bug I see:
class TestJob < ActiveJob::Base
  queue_as :default

  attr_reader :save_for_later

  def initialize(info)
    @save_for_later = info
  end

  def perform
    logger.info(@save_for_later)
  end
end

class CollectionsController < ApplicationController
  def schedule_test_job
    TestJob.perform_later(Date.new)
  end
end
When I call schedule_test_job in the Collections controller, I get an error:
undefined method `map' for nil:NilClass
and the perform action is not called.
I'm assuming I need to persist the information I'm trying to save elsewhere in my database, but I'd like to know if there is a proper way to accomplish what I'm doing here. I also don't understand where the thrown error is coming from.
Actually all the job parameters are passed to perform, so you have to write:
class TestJob < ActiveJob::Base
  def perform(date)
    logger.info(date)
  end
end

TestJob.perform_later(Date.new)
The second problem is that Date is not AFAIK serializable by ActiveJob. But you can easily pass a string and then parse the date ;)
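For example, a minimal sketch of the string approach (assuming you pass an ISO 8601 date string):
class TestJob < ActiveJob::Base
  def perform(date_string)
    # Parse the string back into a Date inside the job.
    date = Date.parse(date_string)
    logger.info(date)
  end
end

TestJob.perform_later(Date.today.iso8601)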

Resque starts jobs, but they do nothing

I'm using Resque to do some long-running jobs, and I have a few classes with the same mixed-in module for queuing. The Service class is substituted in tests, which is why it is standalone and (maybe overly) complicated. So the story is: when I call
Campaign.perform(user_id)
directly, everything works fine, but when I try to use the queue:
Resque.enqueue(Campaign, user_id)
the job is created but seems to do nothing. At least, nothing is saved into the database, which is the main task of the Campaign class. I can see in the resque web interface that jobs are created and finished (too fast, almost right after creation), but there is no result.
I'm new to Resque and not really sure what it actually calls (and I'm confused about how to debug it).
Does anybody have a similar problem? Thanks for any help.
Module:
module Synchronisable
  def self.included(base)
    base.extend ClassMethods
  end

  module ClassMethods
    def perform(user_id)
      save_objects("#{self.name}::Service".constantize.get_objects(user_id))
    end

    protected

    def save_objects(objects)
      raise ArgumentError, "should be implemented"
    end
  end

  class Service
    def self.get_objects(user)
      raise ArgumentError, "should be implemented"
    end
  end
end
One of the classes:
class Campaign < ActiveRecord::Base
  include Synchronisable

  @queue = :app

  class << self
    protected

    def save_objects(objects)
      # some stuff to save objects
    end
  end

  class Service
    def self.get_objects(user_id)
      # some stuff to get objects
    end
  end
end
This is a very old question, so I'm not sure how the Rails folder structure worked back then, but I had the same problem and the issue was with inheritance. It seems that if you are using Resque, your job classes shouldn't inherit from ApplicationJob.
So if your code was like this (in app/jobs/campaign_job.rb):
class Campaign < ApplicationJob
  @queue = :a_job_queue

  def self.perform
    # some background job
  end
end
then remove the inheritance, i.e. the "< ApplicationJob" part.
These jobs are almost certainly failing due to an exception. What is resque-web showing you on the Failures tab? You can also get this from the Rails console with:
Resque.info
or
Resque::Failure.all(0)
You should run your worker like this:
nohup QUEUE=* rake resque:work &> log/resque_worker_QUEUE.log &
This will write all output to log/resque_worker_QUEUE.log and you will be able to find out what's wrong with your Campaign class.
Try this:
env TERM_CHILD=1 COUNT=2 "QUEUE=*" bundle exec rake resque:workers
