Rails, how to know if a particular request is still running - ruby-on-rails

How can I detect if a particular request is still active?
For example I have this request uuid:
# my_controller.rb
def my_action
request.uuid # -> ABC1233
end
From another request, how can I know if the request with uuid ABC1233 is still working?
For the curious:
Following beanstalk directives I am running cron jobs using URL requests.
I don't want to start the next iteration if the previous one is still running. I cannot just rely on a start/end flag updated by the request, because the request sometimes dies before it finishes.
Using normal cron tasks I was managing this properly using the PID of the process.
But I don't think I can use PID any more because processes in a web server can be reused among different requests.

I don't think Rails (or more correctly, Rack) has support for this, since (to the best of my knowledge) each Rails request doesn't know about any other requests. You may try to get access to all running threads (and even processes), but such an implementation (if even possible) seems ugly to me.
How about implementing it yourself?
class ApplicationController < ActionController::Base
  before_filter :register_request
  after_filter :unregister_request

  def register_request
    # SET needs both a key and a value
    $redis.set(request.uuid, 1)
  end

  def unregister_request
    # Redis has no "unset"; DEL removes the key
    $redis.del(request.uuid)
  end
end
You'll still need to figure out what to do with exceptions, since after_filters are skipped when one is raised (perhaps move this whole code into a middleware: in the before phase it writes the uuid to Redis, and in the after phase it removes the key). There are plenty of other ways to achieve this, I'm sure, and you can obviously substitute Redis with your persistence layer of choice.
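For illustration, a minimal middleware sketch along those lines (the RequestTracker name and the one-hour TTL are assumptions, not from the original answer), reusing the same global $redis client:

class RequestTracker
  def initialize(app)
    @app = app
  end

  def call(env)
    # ActionDispatch::RequestId sets this env key, so insert this middleware
    # after it: config.middleware.insert_after ActionDispatch::RequestId, RequestTracker
    uuid = env["action_dispatch.request_id"]
    # The TTL is a safety net in case the process dies before ensure runs.
    $redis.set(uuid, 1, ex: 3600)
    @app.call(env)
  ensure
    # Runs even when the app raises, unlike an after_filter
    $redis.del(uuid) if uuid
  end
end

Another request can then tell whether ABC1233 is still in flight by checking whether the key is still present in Redis.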

Finally I recovered my previous approach based on PIDs.
I implemented something like this:
# The Main Process
module MyProcess
  def self.run_forked
    Process.fork do
      run
    end
    Process.wait
  end

  def self.run
    RedisClient.set Process.pid # store the PID
    # ... my long process code is here
  end

  def self.still_alive?(pid)
    # Signal 0 sends nothing; it only checks that the process exists
    # (Process.kill raises Errno::ESRCH when it doesn't)
    !!Process.kill(0, pid) rescue false
  end
end
# In one thread I can do:
MyProcess.run_forked

# In another thread I can do:
pid = RedisClient.get
MyProcess.still_alive?(pid) # -> true if the process is still running
I can call this code from a Rails request, and even if the request process is reused, the child one is not, so I can monitor the PID of the child process to see if the Ruby process is still running.
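A sketch of how the cron endpoint could use this as a guard (the controller and action names are hypothetical, not from the original answer):

class CronController < ApplicationController
  def iterate
    pid = RedisClient.get
    if pid && MyProcess.still_alive?(pid)
      # The previous iteration is still running; skip this one.
      head :conflict
    else
      MyProcess.run_forked
      head :ok
    end
  end
end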

Related

How can I send/use/modify data which is the result of a Sidekiq background worker of my rails app?

I have a ruby on rails web application deployed on Heroku.
This web app fetches job feeds from given URLs as XML. Then it regulates these XMLs and creates a single XML file. It worked pretty well for a while. However, since the number of URLs and job ads has increased, it does not work at all. The process sometimes takes up to 45 seconds because there are over 35K job vacancies, and Heroku times out after 30 seconds, so I am getting an H12 timeout error. This error led me to read about worker dynos and background processing.
I figured out that I should apply the approach below:
Scalable-approach Heroku
Now I am using Redis and Sidekiq on my project. And I am able to create a background worker to do all the dirty work. But here is my question.
Instead of doing this call in the controller class:
def apply
  send_data Aggregator.new(providers: providers).call,
            type: 'text/xml; charset=UTF-8;',
            disposition: 'attachment; filename=indeed_apply_yes.xml'
end
I am doing this perform_async call:
def apply
  ReportWorker.perform_async(Time.now)
  redirect_to health_path # and returns status 200 OK
end
I implemented this class: ReportWorker calls the Aggregator service. data_xml is the field that I need to show somewhere or have downloaded automatically when it's ready.
class ReportWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  data_xml = nil

  def perform(start_date)
    url_one = 'https://www.examplea.com/abc/download-xml'
    url_two = 'https://www.exampleb.com/efg/download-xml'
    cursor = 'stop'
    providers = [url_one, url_two, cursor]
    puts "SIDEKIQ WORKER GENERATING THE XML-DATA AT #{start_date}"
    data_xml = Aggregator.new(providers: providers).call
    puts "SIDEKIQ WORKER GENERATED THE XML-DATA AT #{Time.now}"
  end
end
I know that it's not recommended to make send_data/send_file methods accessible outside of controller classes. Well, any suggestions on how to do it?
Thanks in advance!!
Can you set up a database for your application? Then you can store a record about each completed job there. You could also save the entire file in the database, but I recommend some cloud storage (like Amazon S3).
After that you can show the current status of queued jobs on a page for the user, with a 'download' button once the job is done.
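A minimal sketch of that idea, assuming a hypothetical Report model with status, providers, and data_xml columns (all names here are illustrative): the worker persists the result, and the controller serves the file only once it is ready, so send_data stays inside the controller:

class ReportWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(report_id)
    report = Report.find(report_id)
    report.update!(status: 'processing')
    xml = Aggregator.new(providers: report.providers).call
    report.update!(status: 'done', data_xml: xml)
  end
end

class ReportsController < ApplicationController
  def download
    report = Report.find(params[:id])
    if report.status == 'done'
      send_data report.data_xml,
                type: 'text/xml; charset=UTF-8;',
                disposition: 'attachment; filename=report.xml'
    else
      redirect_to health_path, notice: 'Still generating, try again shortly.'
    end
  end
end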

Ruby threads not working after upgrading to Rails 5

I have an API which uses a service, in which I have used Ruby threads to reduce the response time of the API. I have tried to share the context using the following example. It was working fine with Rails 4 / Ruby 2.2.1.
Now we have upgraded to Rails 5.2.3 and Ruby 2.6.5, after which the service has stopped working. I can call the service from the console and it works fine, but with an API call the service becomes unresponsive once it reaches CurrencyConverter.new. Any idea what the issue can be?
class ParallelTest
  def initialize
    puts "Initialized"
  end

  def perform
    # Our sample set of currencies
    currencies = ['ARS','AUD','CAD','CNY','DEM','EUR','GBP','HKD','ILS','INR','USD','XAG','XAU']
    # Create an array to keep track of threads
    threads = []
    currencies.each do |currency|
      # Keep track of the child threads as you spawn them
      threads << Thread.new do
        puts currency
        CurrencyConverter.new(currency).print
      end
    end
    # Join on the child threads to allow them to finish
    threads.each do |thread|
      thread.join
    end
    { success: true }
  end
end

class CurrencyConverter
  def initialize(params)
    @curr = params
  end

  def print
    puts @curr
  end
end
If I remove the CurrencyConverter.new(currency), then everything works fine. CurrencyConverter is a service object that I have.
Found the Issue
Thanks to @anothermh for these links:
https://guides.rubyonrails.org/threading_and_code_execution.html#wrapping-application-code
https://guides.rubyonrails.org/threading_and_code_execution.html#load-interlock
As per the guide, when one thread is performing an autoload by evaluating the class definition from the appropriate file, it is important that no other thread encounters a reference to the partially-defined constant.
Only one thread may load or unload at a time, and to do either, it must wait until no other threads are running application code. If a thread is waiting to perform a load, it doesn't prevent other threads from loading (in fact, they'll cooperate, and each perform their queued load in turn, before all resuming running together).
This can be resolved by permitting concurrent loads.
https://guides.rubyonrails.org/threading_and_code_execution.html#permit-concurrent-loads
threads = []
Rails.application.executor.wrap do
  currencies.each do |currency|
    threads << Thread.new do
      CurrencyConverter.new(currency)
      puts currency
    end
  end
  # Join outside the loop; permit_concurrent_loads allows the spawned
  # threads to autoload constants while this thread blocks on the joins.
  ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
    threads.map(&:join)
  end
end
Thank you everybody for your time, I appreciate it.
Don't re-invent the wheel and use Sidekiq instead. 😉
From the project's page:
Simple, efficient background processing for Ruby.
Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails to make background processing dead simple.
With 400+ contributors and 10k+ stars on GitHub, they have built a solid parallel job execution process that is production-ready and easy to set up.
Have a look at their Getting Started to see it by yourself.
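Applied to the example above, that could look something like this minimal sketch (CurrencyWorker is a hypothetical name; CurrencyConverter is assumed from the question):

class CurrencyWorker
  include Sidekiq::Worker

  def perform(currency)
    CurrencyConverter.new(currency).print
  end
end

# Fan out one job per currency instead of spawning threads by hand:
currencies.each { |currency| CurrencyWorker.perform_async(currency) }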

Rails spring wisper listener method caching

It turns out that Spring caches my wisper listener method (I'm writing a quite simple engine).
Example:
app/models/myengine/my_class.rb
class Myengine::MyClass
  include Wisper::Publisher

  def something
    # some logic
    publish(:after_something, self)
  end
end
config/initializers/wisper.rb
Wisper.subscribe(Myengine::MyObserver.new)
app/observers/myengine/my_observer.rb
class Myengine::MyObserver
  def after_something(my_class_instance)
    # any changes here require a manual Spring restart in order to be reflected in tests
    another_method
  end

  def another_method
    # all changes here or in any other class methods work OK with Spring and are instantly visible in tests
    return true
  end
end
By Spring restart I mean manual execution of the spring stop command, which is really annoying.
What is more mysterious, I can change another_method's return value to false and the tests then fail, which is OK; but when I change the after_something method body to, let's say, return false, it doesn't have any effect on the tests (as if the body of after_something were somehow cached).
It is not a high-priority problem, because this strange behaviour is only visible inside the listener method body and is easy to overcome by moving all logic to another method in the class. Anyway, it can be confusing (especially at the beginning, when I didn't know the exact problem).
The problem is probably caused by the fact that when you subscribe a listener globally, the object remains in memory pointing to the class it was originally constructed from, even if that class has been reloaded in the meantime.
Try this in config/initializers/wisper.rb:
Rails.application.config.to_prepare do
  Wisper.clear if Rails.env.development?
  Wisper.subscribe(Myengine::MyObserver.new)
end
to_prepare will run the block before every request in the development environment, but only once, as normal, in the production environment. Therefore, provided your listener does not maintain any state, it should work as expected.
The Wisper.clear is needed to remove the existing listeners before we re-subscribe a new instance of the reloaded class. Be aware that clear removes all subscribers, so if you have code similar to the above in more than one engine, only the last engine to be loaded will have its listeners subscribed.
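One way around that caveat, assuming you control all the engines involved (OtherEngine::OtherObserver is a hypothetical second listener), is to consolidate every subscription into a single to_prepare block, so Wisper.clear runs exactly once before all of them:

Rails.application.config.to_prepare do
  Wisper.clear if Rails.env.development?
  Wisper.subscribe(Myengine::MyObserver.new)
  Wisper.subscribe(OtherEngine::OtherObserver.new) # hypothetical second listener
end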

Render status 200 before executing code in rails controller

I'm integrating a communications API, and whenever a text/voice message reaches my server (Rails controller), I have to send back an OK (200) to the API. I want to send this response before executing my code block, because if my code breaks (and is unable to send the OK), the communications API keeps resending the message for up to 3 days. That just compounds the problem on my server, because it would keep breaking as the same message keeps coming in.
I did some research and found two solutions.
Solution 1: The first solution is below (my current implementation), and it doesn't seem to be working (unless I didn't read the log files properly or I'm hallucinating).
def receive_text_message
  head :ok, :content_type => 'text/html'
  # A bunch of code down here
end
I thought this should do the trick (per the Rails docs), but I'm not sure it does.
Solution 2: The second implementation I'm contemplating is to quickly create a new process/thread to execute the code block and kill off the process that received the message. That way the API gets its OK very quickly, and it doesn't have to wait on the successful execution of my code block. I could use the spawnling (or spawn) gem to do this. I would go with creating a process, since I use the Passenger (community) server, but new processes would eat up more RAM, plus I think it is harder to debug child processes/threads (I might be wrong on this).
Thanks for the help!
Side question: does rails attempt to restart a process after it just failed?
You could opt for returning a 200 in your controller and starting a Sidekiq job. That way the 200 will be returned immediately and your controller will be ready to process the next request, so there's no waste of time and resources in your controller. Then let the worker do the real hard job.
In your controller
def receive_text_message
  head :ok, :content_type => 'text/html'
  # Sidekiq arguments must be JSON-serializable, so pass a plain hash
  # (or just the values you need) rather than the params object itself.
  HardWorker.perform_async(params.to_unsafe_h)
end
In your sidekiq worker:
class HardWorker
  include Sidekiq::Worker

  def perform(params)
    # 'Doing hard work'
  end
end
I like Sidekiq mostly because it handles resources more nicely compared to Resque.

Permanent daemon for quering a web resource

I have a Rails 3 application and looked around on the internet for daemons, but didn't find the right one for me.
I want a daemon which permanently fetches data (exchange rates) from a web resource and saves it to the database,
like:
while true
  Model.update_attribute(:course, Net::HTTP.get(URI("asdasd")))
end
I've only seen cron-like jobs, but they only run at specific times... I want it to run permanently, with each iteration starting as soon as the previous query finishes...
Do you understand what I mean?
The gem light-daemon I wrote should work very well in your case.
http://rubygems.org/gems/light-daemon
You can write your code in a class which has a perform method, use a queue system like Resque, and at application startup enqueue the job with Resque.enqueue(Updater).
Obviously the job won't end until the application is stopped. Personally I don't like that, but if this is the requirement, it works.
For this reason, if you need to execute other tasks, you should configure more than one worker process and, optionally, more than one queue.
If you can edit your requirements and find a trigger for the update mechanism, the same approach still works; you only have to remove the while true loop.
Sample class needed:
class Updater
  @queue = :endless_queue

  def self.perform
    while true
      Model.update_attribute(:course, Net::HTTP.get(URI("asdasd")))
    end
  end
end
Finally I found a cool solution for my problem:
I use the god gem -> http://god.rubyforge.org/
with a bash script (link) for starting/stopping a simple rake task (with an infinite loop in it).
Now it works fine, and I even have some monitoring with god running that ensures the rake task runs OK.
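For illustration, such a rake task could look like this minimal sketch (the task name, model, and URL placeholder are assumptions carried over from the snippets above), with a god watch keeping it alive:

# lib/tasks/updater.rake
require 'net/http'

namespace :updater do
  desc 'Fetch exchange rates in an endless loop'
  task run: :environment do
    loop do
      # Each iteration starts as soon as the previous request finishes.
      Model.update_attribute(:course, Net::HTTP.get(URI("asdasd")))
    end
  end
end

# updater.god -- a hedged sketch of the god configuration
God.watch do |w|
  w.name = 'updater'
  w.start = 'bundle exec rake updater:run'
  w.keepalive # restart the task whenever it dies
end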
