I have a Rails 5 application (API) and a Postgres DB that run in separate Docker containers, all on the same AWS EC2 instance, and are controlled by an external manager (manage). manage needs to be able to make a request to the API and tell it to exit. I don't want to just kill the API or its Docker container externally, as I want all in-flight API requests to complete. I want the API to exit gracefully, and only it knows how to do that.
Ruby has exit, exit! and abort. All of them seem to be handled by Rails as exceptions, and Rails just continues motoring on.
How can I terminate my Rails application from within? Is there some sort of unhandleable exception that I can raise?
Most likely your ApplicationController is rescuing the SystemExit that exit raises. Looking at the Rails source, there is a rescue Exception in ActionController, which includes ActiveSupport::Rescuable; that module is where controller methods like rescue_from are defined.
I tested this in a controller in a Rails API app, sent it a request, and Rails did indeed exit immediately, with an empty response to the caller:
class ProcessesController < ApplicationController
  rescue_from SystemExit, with: :my_exit

  def destroy
    CleanupClass.method_that_exits
    render json: { status: :ok }
  end

  def my_exit
    # exit! terminates the process immediately, skips at_exit handlers,
    # and cannot be rescued, so Rails really does stop here
    exit!
  end
end

class CleanupClass
  def self.method_that_exits
    exit
  end
end
I'm looking to gracefully handle AWS Aurora's failover mechanism in a Rails 6 app.
Other solutions I've found swap out a specific Rack middleware adapter with one that will reset all ActiveRecord connections if a specific type of exception is raised. That middleware has been removed in Rails 6 so this solution no longer works.
The exception that gets raised is an ActiveRecord::StatementInvalid: "PG::ReadOnlySqlTransaction: ERROR: cannot execute UPDATE in a read-only transaction".
Would the following be an idiomatic way to rescue this exception and reset the connection pool as needed?
class ApplicationController < ActionController::Base
  rescue_from ActiveRecord::StatementInvalid, with: :check_read_only_failover

  private

  def check_read_only_failover(exception)
    if /Lost connection|gone away|read-only/.match?(exception.message)
      ActiveRecord::Base.clear_all_connections!
    end
  end
end
(The regex above was lifted from the linked solution; I have no reason to believe the error condition has changed.)
Recently I had a problem in a Rails application with the following code:
class MyBaseController < ApplicationController
  def error(a, b)
    # ...
  end
end

class MyController < MyBaseController
  def my_action
    begin
      another_method
    rescue => e
      # The next line had a typo: it should have been "e"
      # instead of "error", and hence it caused an unhandled exception
      logger.error("the error was #{error}")
      # ...
    end
    # ...
  end
end
As you can see, the logging statement will fail: instead of interpolating the value of e, it calls the error method inherited from MyBaseController (which expects two arguments), raising an unhandled exception.
This was an honest mistake, but from what I can see it could have been prevented: when I opened my application in IntelliJ with the Ruby plugin, it marked the error with a red squiggle. It's not the first time I've seen one of these errors.
My question is: is there any gem or tool (besides IntelliJ) to detect this kind of problem, so I can add it to my Rakefile and run it in my toolchain before my application gets deployed?
You could add a debugger call anywhere before or after the code you would like to trace.
Once you trigger the action from the browser, the server running in your shell will pause there and let you step through your code and test it.
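For example (a minimal sketch, assuming the byebug gem is in your Gemfile, as it is by default in recent Rails versions):
class MyController < MyBaseController
  def my_action
    debugger # byebug pauses here; step through and inspect locals from the server console
    another_method
  end
end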
I would recommend reading more about debugging Rails applications; the Rails guide on debugging is a good place to start.
How can I detect if a particular request is still active?
For example I have this request uuid:
# my_controller.rb
def my_action
request.uuid # -> ABC1233
end
From another request, how can I know if the request with uuid ABC1233 is still working?
For the curious:
Following Beanstalk directives, I am running cron jobs using URL requests.
I don't want to start the next iteration if the previous one is still running. I cannot just rely on an init/end flag updated by the request, because the request sometimes dies before it finishes.
Using normal cron tasks I was managing this properly using the PID of the process.
But I don't think I can use the PID any more, because processes in a web server can be reused among different requests.
I don't think Rails (or more correctly, Rack) has support for this, since (to the best of my knowledge) each Rails request doesn't know about any other requests. You could try to get access to all running threads (and even processes), but such an implementation (if even possible) seems ugly to me.
How about implementing it yourself?
class ApplicationController < ActionController::Base
  before_filter :register_request
  after_filter :unregister_request

  def register_request
    # assumes $redis is a configured redis-rb client
    $redis.set(request.uuid, 1)
  end

  def unregister_request
    $redis.del(request.uuid)
  end
end
You'll still need to figure out what to do with exceptions, since after_filters are skipped when one is raised (perhaps move this whole thing into a middleware: on the way in it writes the uuid to Redis, and on the way out it removes the key). There are plenty of other ways to achieve this, I'm sure, and you can obviously substitute Redis with your favorite persistence of choice.
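For illustration, here is a minimal sketch of that middleware idea (RequestRegistry is a made-up name, and it assumes $redis is a configured redis-rb client):
class RequestRegistry
  def initialize(app)
    @app = app
  end

  def call(env)
    request = ActionDispatch::Request.new(env)
    $redis.set(request.uuid, 1)
    @app.call(env)
  ensure
    # runs even when the app raises, unlike an after_filter
    $redis.del(request.uuid)
  end
end

# config/application.rb -- insert after ActionDispatch::RequestId so uuid is populated:
# config.middleware.insert_after ActionDispatch::RequestId, RequestRegistry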
Finally I recovered my previous approach based on PIDs.
I implemented something like this:
# The main process
module MyProcess
  def self.run_forked
    Process.fork do
      run # i.e. MyProcess.run, executed in the child process
    end
    Process.wait
  end

  def self.run
    RedisClient.set Process.pid # store the PID
    # ... my long process code is here
  end

  def self.still_alive?(pid)
    !!Process.kill(0, pid) rescue false
  end
end
# In one thread I can do:
MyProcess.run_forked

# In another thread I can do:
pid = RedisClient.get
MyProcess.still_alive?(pid) # -> true if the process is still running
I can call this code from a Rails request, and even if the request's process is reused, the forked child is not; I can monitor the child's PID to see whether the Ruby process is still running.
I'm wondering how I can stop a Rinda ring server, besides killing its process.
I've checked the ring.rb shipped with my Ruby 1.9.3 and found that RingServer lacks an API to stop itself. It opens a UDPSocket in initialize(), but it does not contain any code to close that socket.
Does anybody know? Thanks ahead. :D
Rinda is part of Distributed Ruby (DRb), so if the goal were to just stop all Rinda and other DRb services, you could do:
DRb.stop_service
If you use that, then in your Rinda service code (the looping method), you need to rescue DRb::DRbConnError to avoid problems trying to write to the TupleSpace, according to: http://www.ruby-forum.com/topic/97023
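For illustration, a rough sketch of what that rescue might look like in a Rinda service loop (the tuple shape and the process_job handler are made up):
require 'rinda/tuplespace'

def service_loop(tuple_space)
  loop do
    begin
      _, payload = tuple_space.take([:job, nil]) # blocks until a matching tuple arrives
      process_job(payload)                       # hypothetical handler
    rescue DRb::DRbConnError
      break # DRb.stop_service was called; exit the loop instead of crashing
    end
  end
end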
Not a Rinda service, but here is a simple example I tested that stops a DRb service. It just uses DRb (no Rinda) in Ruby 1.9.3, modified slightly from the example here: http://www.ruby-doc.org/stdlib-1.9.3/libdoc/drb/rdoc/DRb.html
server.rb
#!/usr/local/bin/ruby
require 'drb/drb'

URI = "druby://localhost:8787"

class StopAndGiveTimeServer
  def get_current_time
    DRb.stop_service
    return Time.now
  end
end

FRONT_OBJECT = StopAndGiveTimeServer.new

$SAFE = 1 # disable eval() and friends
DRb.start_service(URI, FRONT_OBJECT)
DRb.thread.join
client.rb
#!/usr/local/bin/ruby
require 'drb/drb'

SERVER_URI = "druby://localhost:8787"

DRb.start_service
timeserver = DRbObject.new_with_uri(SERVER_URI)
puts timeserver.get_current_time
Update: It sounds like you want to monkey patch the ring server to close the socket.
Just create a way to get the existing socket via monkey patch:
module Rinda
  class RingServer
    attr_accessor :soc
  end
end
Then you can keep the instance of the ring server in an instance variable, e.g. @ringserver, and use it to access the socket to close it, set a new socket, etc. For example:
def bind_to_different_port(port)
  begin
    @ringserver.soc.close
  rescue => e
    puts "#{e.message}\n\t#{e.backtrace.join("\n\t")}"
  end
  @ringserver.soc = UDPSocket.open
  @ringserver.soc.bind('', port)
end
Or skip the attr_accessor and just add a method or two to RingServer that close, open, and bind the socket.
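For example, something along these lines (untested; it assumes the @soc instance variable from the 1.9.3 source linked below):
module Rinda
  class RingServer
    def shutdown
      @soc.close unless @soc.closed?
    end
  end
end

# later, holding on to the instance:
# ring_server = Rinda::RingServer.new(tuple_space)
# ring_server.shutdown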
To see how it uses the socket in Ruby 1.9.3: https://github.com/ruby/ruby/blob/v1_9_3_374/lib/rinda/ring.rb
I have been happily using the DelayedJob idiom:
foo.send_later(:bar)
This calls the method bar on the object foo in the DelayedJob process.
And I've been using DaemonSpawn to kick off the DelayedJob process on my server.
But... if foo throws an exception, Hoptoad doesn't catch it.
Is this a bug in any of these packages... or do I need to change some configuration... or do I need to insert some exception handling in DS or DJ that will call the Hoptoad notifier?
In response to the first comment below.
class DelayedJobWorker < DaemonSpawn::Base
  def start(args)
    ENV['RAILS_ENV'] ||= args.first || 'development'
    Dir.chdir RAILS_ROOT
    require File.join('config', 'environment')
    Delayed::Worker.new.start
  end
end
Try monkeypatching Delayed::Worker#handle_failed_job:
# lib/delayed_job_airbrake.rb
module Delayed
  class Worker
    protected

    def handle_failed_job_with_airbrake(job, error)
      say "Delayed job failed -- logging to Airbrake"
      HoptoadNotifier.notify(error)
      handle_failed_job_without_airbrake(job, error)
    end

    alias_method_chain :handle_failed_job, :airbrake
  end
end
This worked for me.
(in a Rails 3.0.10 app using delayed_job 2.1.4 and hoptoad_notifier 2.4.11)
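One thing to double-check (an assumption on my part, not something from the original setup): since the patch lives under lib/, make sure it actually gets loaded, for example from an initializer:
# config/initializers/delayed_job_airbrake.rb
require Rails.root.join('lib', 'delayed_job_airbrake').to_s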
Check out the source for Delayed::Job... there's a snippet like:
# This is a good hook if you need to report job processing errors in additional or different ways
def log_exception(error)
  logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
  logger.error(error)
end
I haven't tried it, but I think you could do something like:
class Delayed::Job
  def log_exception_with_hoptoad(error)
    log_exception_without_hoptoad(error)
    HoptoadNotifier.notify(error)
  end

  alias_method_chain :log_exception, :hoptoad
end
Hoptoad uses the Rails rescue_action_in_public hook method to intercept exceptions and log them. This method is only executed when the request is dispatched by a Rails controller.
For this reason, Hoptoad is completely unaware of any exception generated, for example, by rake tasks or the rails script/runner.
If you want Hoptoad to track your exceptions, you need to integrate it manually.
It should be quite straightforward. The following code fragment demonstrates how Hoptoad is invoked:
def rescue_action_in_public_with_hoptoad(exception)
  notify_hoptoad(exception) unless ignore?(exception) || ignore_user_agent?
  rescue_action_in_public_without_hoptoad(exception)
end
Just including the Hoptoad library in your environment and calling notify_hoptoad(exception) should work. Make sure your environment provides the same API as a Rails controller, or Hoptoad might complain.
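For example, in a rake task or script/runner job you could wrap the work yourself (do_the_work is a placeholder for your own code):
begin
  do_the_work
rescue => exception
  HoptoadNotifier.notify(exception)
  raise
end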
Just throwing it out there - your daemon should require the Rails environment that you're working in. It should look something along the lines of:
RAILS_ENV = ARGV.first || ENV['RAILS_ENV'] || 'production'
require File.join('config', 'environment')
This way you can specify the environment in which the daemon is run.
Since it runs Delayed Job, chances are the daemon already does that (it needs ActiveRecord), but maybe you're only requiring a minimal ActiveRecord setup to make delayed_job happy without Rails.