Nginx+Passenger: rufus-scheduler not started - ruby-on-rails

I want to do background processing using rufus-scheduler. It works fine on my local machine with the WEBrick server, but it is not working on the production server, which uses Nginx + Passenger.
My code is as below:
scheduler = Rufus::Scheduler.new
scheduler.in '20s' do
  loop do
    Rails.logger.error "+++++++++++started+++++++++++"
    requests = QrCodeRequestStatus.where(:status => "pending").first(5)
    requests.each do |request|
      Rails.logger.error "+++++++++++++++++++processing request #{request.client_id} of client #{request.client_id}"
      AuthenticationCode.save_qr_codes(Client.find(request.client_id), request, request.quantity.to_i)
    end
    Rails.logger.error "+++++++++++++++sleeping+++++++++++++++++"
    sleep 5
  end
end
scheduler.join
I have set "PassengerSpawnMethod direct" in my Passenger config file but still, rufus-scheduler is not working.

Related

Rails: system process won't start in rails server, but will in rails console

I want to start an ngrok process when the server starts. To achieve this, I wrote an ngrok.rb lib and call it from an initializer.
app/lib/ngrok.rb
require "singleton"

class Ngrok
  include Singleton

  attr_accessor :api_url, :front_url

  def start
    if is_running?
      return fetch_urls
    end
    authenticate
    started = system("ngrok start --all -log=stdout > #{ENV['APP_ROOT']}/log/ngrok.log &")
    system("sleep 1")
    if !started
      return { api: nil, front: nil }
    end
    urls = fetch_urls
    sync_urls(urls["api_url"], urls["front_url"])
    return urls
  end

  def sync_urls(api_url, front_url)
    NgrokSyncJob.perform_later(api_url, front_url)
  end

  def is_running?
    return system("ps aux | grep ngrok")
  end

  def restart
    stop
    return start
  end

  def stop
    return system("pkill ngrok")
  end

  def authenticate
    has_file = system("ls ~/.ngrok2/ngrok.yml")
    if has_file
      return true
    else
      file_created = system("ngrok authtoken #{ENV['NGROK_TOKEN']}")
      if file_created
        return system("cat " + ENV['APP_ROOT'] + '/essentials/ngrok/example.yml >> ~/.ngrok2/ngrok.yml')
      else
        return false
      end
    end
  end

  def fetch_urls
    logfile = ENV['APP_ROOT'] + '/log/ngrok.log'
    file = File.open logfile
    text = file.read
    api_url = nil
    front_url = nil
    text.split("\n").each do |line|
      next if !line.include?("url=") || !line.include?("https")
      if line.split("name=")[1].split(" addr=")[0] == "ncommerce-api"
        api_url = line.split("url=")[1]
      elsif line.split("name=")[1].split(" addr=")[0] == "ncommerce"
        front_url = line.split("url=")[1]
      end
    end
    file.close
    self.api_url = api_url
    self.front_url = front_url
    res = {}
    res["api_url"] = api_url
    res["front_url"] = front_url
    return res
  end
end
config/initializers/app-init.rb
module AppModule
  class Application < Rails::Application
    config.after_initialize do
      puts "XXXXXXXXXXXXXXXXXXXXXXX"
      Ngrok.instance.start
      puts "XXXXXXXXXXXXXXXXXXXXXXX"
    end
  end
end
When I run rails server, the output shows the initializer's puts markers, so we know for sure my initializer is being called. But when I check whether ngrok is actually running, it is not!
When I type Ngrok.instance.start in rails console, however, it starts!
So, my question is: WHY ON EARTH is system("ngrok start --all -log=stdout > #{ENV['APP_ROOT']}/log/ngrok.log &") NOT working under rails server, but working in rails console?
UPDATE
If I use byebug within ngrok.rb and run rails server, when I exit byebug with "continue", the ngrok process is created and works.
You're creating an orphaned process by the way you use system() to start the ngrok process in the background:
system("ngrok start --all -log=stdout > #{ENV['APP_ROOT']}/log/ngrok.log &")
Note the & at the end of the commandline.
I'd need more details about your runtime environment to tell precisely which system policy kills the orphaned ngrok process right after starting it (which OS? if Linux, is it based on systemd? how do you start rails server, from a terminal or as a system service?).
But what's happening is this:
system() starts an instance of /bin/sh to interpret the commandline
/bin/sh starts the ngrok process in the background and terminates
ngrok is now "orphaned", meaning that its parent process /bin/sh is terminated, so that the ngrok process can't be wait(2)ed for
depending on the environment, the terminating /bin/sh may kill ngrok with a SIGHUP signal
or the OS re-parents ngrok, normally to the init-process (but this depends)
When you use the rails console or byebug, in both cases you're entering an interactive environment, which prepares "process groups", "session ids" and "controlling terminals" in a way suitable for interactive execution. These properties are inherited by child processes, like ngrok. This influences system policies regarding the handling of the orphaned background process.
When ngrok is started from rails server, these properties will be different (depending on the way rails server is started).
Here's a nice article about some of the OS mechanisms that might be involved: https://www.jstorimer.com/blogs/workingwithcode/7766093-daemon-processes-in-ruby
You would probably have better success by using Ruby's Process.spawn to start the background process, in combination with Process.detach in your case. This would avoid orphaning the ngrok process.
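For example, a minimal sketch of that approach (the command and log path mirror the question; not tested against the poster's setup):

# Sketch of the Process.spawn + Process.detach approach suggested above.
logfile = "#{ENV['APP_ROOT']}/log/ngrok.log"
pid = Process.spawn("ngrok start --all -log=stdout", out: logfile, err: logfile)
Process.detach(pid)  # reaps the child in a background thread so it is never orphaned or left as a zombie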

Using Redis in Heroku for a React and Rails API

I have a Rails API deployed on Heroku, which serves a React.js static page. They are both deployed on Heroku and communicate through an API link. My struggle comes when using Redis and Sidekiq.
On my Rails API I have the RedisToGo link configured with no issues, but when I go to my React app and try to send an email invite I get this message: Redis::CannotConnectError: Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED).
I thought that if I have it configured on my backend, it would work for my React static page app.
Sidekiq.yml
---
:verbose: false
:concurrency: 3
staging:
  :concurrency: 1
production:
  :concurrency: 5
:queues:
  - [mailers, 2]
  - slack_notifications
  - mixpanel
  - invoices
  - default
  - rollbar
Redis.rb
uri = URI.parse(ENV["REDISTOGO_URL"])
REDIS = Redis.new(:url => uri)
sidekiq.rb
Sidekiq::Extensions.enable_delay!

unless Rails.env == 'development' || Rails.env == 'test'
  Sidekiq.configure_server do |config|
    config.redis = {
      url: Rails.application.credentials.redis_url,
      password: Rails.application.credentials.redis_password
    }
  end

  Sidekiq.configure_client do |config|
    config.redis = {
      url: Rails.application.credentials.redis_url,
      password: Rails.application.credentials.redis_password
    }
  end
end

# Turn off backtrace if memory issues are popping up, as
# backtraces occupy too much memory in Redis.
# backtrace: number of lines of backtrace, retry: number of retries
Sidekiq.default_worker_options = { backtrace: false, retry: 3 }

Sidekiq.configure_server do |config|
  # runs after your app has finished initializing
  # but before any jobs are dispatched
  config.on(:startup) do
    puts 'Sidekiq is starting...'
    # make_some_singleton
  end
  config.on(:quiet) do
    puts 'Got USR1, stopping further job processing...'
  end
  config.on(:shutdown) do
    puts 'Got TERM, shutting down process...'
    # stop_the_world
  end
end
So my questions are:
If I have already configured a REDISTOGO_URL config var on my Rails app, do I need the same on my React config vars?
What's the best way to configure Sidekiq and Redis on a Rails API using React as a front end on Heroku? I haven't seen anything that covers this on the internet.
I would appreciate your help! ;)
You need to run heroku config:set REDIS_PROVIDER=REDISTOGO_URL to tell Sidekiq to use REDISTOGO_URL to connect to Redis.
https://github.com/mperham/sidekiq/wiki/Using-Redis#using-an-env-variable
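Alternatively (a sketch, not part of the original answer), you can point Sidekiq at the RedisToGo URL explicitly instead of relying on REDIS_PROVIDER, assuming REDISTOGO_URL is set in the Heroku config vars:

# config/initializers/sidekiq.rb -- explicit alternative to REDIS_PROVIDER (illustrative)
Sidekiq.configure_server do |config|
  config.redis = { url: ENV["REDISTOGO_URL"] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDISTOGO_URL"] }
end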

Oracle database connection is not released after it exceeds maximum idle time in Rails app

I have a Rails app using an Nginx HTTP server and a Unicorn app server. I'm getting the following error when the app server doesn't receive any requests for about 15 minutes:
OCIError: ORA-02396: exceeded maximum idle time, please connect again
After a page refresh, the page loads fine.
I'm using Rails 4.2.1, ruby-oci8 2.1.0, and active-record-oracle_enhanced-adapter 1.6.0.
I'm still relatively new to web development, but I'm thinking this error occurs when the Oracle connection idles out but the app server doesn't know that the connection is bad.
I've tried setting the reaping_frequency to every 15 minutes, but that didn't fix the problem.
How can I manage the database connections and make sure that these idle connections are dropped? Can I set a timeout to drop database connections before Oracle times out?
This is my config/unicorn.rb:
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir

worker_processes 2
preload_app true
timeout 15

listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64

stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stderr.log"
pid "#{shared_dir}/pids/unicorn.pid"

before_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.establish_connection
  end
end
Here is my fix, which isn't that great.
In the application controller:
before_action :refresh_connection

def refresh_connection
  puts Time.now.to_s + ' - refreshing connection'
  ActiveRecord::Base.connection.disconnect!
  if ActiveRecord::Base.establish_connection
    puts Time.now.to_s + ' - new connection established'
  else
    puts Time.now.to_s + ' - new connection cannot be established'
  end
end
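A lighter-weight variant of the same idea (a sketch, not part of the original fix) is to let Active Record verify the connection before each request and reconnect only when it has actually gone stale, instead of dropping and re-establishing it every time:

# In the application controller -- illustrative alternative, not the poster's code.
before_action :verify_connection

def verify_connection
  # verify! pings the database and reconnects only if the connection is dead,
  # so healthy requests avoid the cost of a full disconnect/reconnect.
  ActiveRecord::Base.connection.verify!
end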

Rails + xmpp4r + puma: when puma runs as a daemon, XMPP disconnects

I've got an initializer which starts an xmpp4r client. It works fine when I run the puma server as a regular process, but when I start puma as a daemon (-d option) it works for a few seconds and then disconnects from the XMPP server. I've got a separate thread which handles reconnects, but it doesn't work when puma is a daemon.
def init_reconnection_timer
  timers = Timers::Group.new
  periodic_timer = timers.every(5) do
    if @client.is_disconnected?
      begin
        Rails.logger.info "XmppConnector ##### reconnecting to #{APP_CONFIG['broker_address']} ..."
        connect
        Rails.logger.info "XmppConnector ##### connected!"
        presence
      rescue
      end
    end
  end
  Thread.new do
    loop do
      timers.wait
    end
  end
end
I get nothing from this code in the log when puma is a daemon.
Right after the Rails app starts, it works for a few seconds, receives messages and iqs, no errors, just as usual. Then it silently disconnects. Here is the constructor of my class:
class XmppConnector
  include Singleton

  def initialize
    @jid = Jabber::JID::new(APP_CONFIG['broker_username'] + '@' + APP_CONFIG['broker_address'])
    @jid.resource = 'rails'
    @client = Jabber::Client::new(@jid)
    connect
    init_presence_callback
    init_message_callback
    init_iq_callback
    init_reconnection_timer
    init_subscription_requests
    presence
    @protocol_interface = RemoteInterface.new
    Rails.logger.info "XmppConnector ##### initialized"
  end
But when puma runs without the -d option, there are no problems at all. The same thing happens in development and production, on my workstation and on the server.
ruby 2.0.0
puma 2.9.2 Min threads: 0, max threads: 16
rails 4.2.0beta4
There was a bug in puma regarding threads started in initializers, and it is now fixed: https://github.com/puma/puma/issues/617
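If upgrading puma is not an option, a common workaround (a sketch, assuming puma runs in clustered mode with preloading; not from the original answer) is to start the XMPP client from puma's on_worker_boot hook instead of a Rails initializer, so the connection and its reconnect thread are created after the daemonizing fork:

# config/puma.rb -- illustrative only
workers 2
threads 0, 16
preload_app!

on_worker_boot do
  # Runs in each worker after the fork (and after daemonizing), so the
  # connection and its reconnect thread belong to the worker process.
  XmppConnector.instance
end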

Redis publish in rails 4 stops server thread

Sorry for my English, I'm a newbie.
I'm trying to use the redis.publish feature with Rails 4 and the redis gem to push messages to clients with SSE.
I have this block in my controller:
logger.info "test1"
$redis.publish "user", "test"
logger.info "test2"
where $redis is defined in an initializer as
$redis = Redis.new(:host => '127.0.0.1', :port => 6379, :db => 1, :timeout => 0)
The server console in production prints
I, [2013-08-07T22:34:50.138232 #4679] INFO -- : test1
And then nothing. Other requests keep working, but this thread stops.
By the way, running $redis.publish "user", "test" in a RAILS_ENV=production rails console works perfectly and the message successfully appears at the client.
Can you help me?
UPD: Ruby 2.0-p247, Rails 4, Redis 3.0.4
Don't use a single connection from the initializer for both subscribing and publishing.
I created two connections, one for subscribing in the SSE writer and one in the controllers for publishing, and everything works normally.
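For illustration, a minimal sketch of that setup (the variable names are assumptions, not from the original answer); SUBSCRIBE blocks its connection, so the publishing side needs a connection of its own:

# config/initializers/redis.rb -- illustrative two-connection setup
$redis_pub = Redis.new(:host => '127.0.0.1', :port => 6379, :db => 1)                # used in controllers: $redis_pub.publish "user", "test"
$redis_sub = Redis.new(:host => '127.0.0.1', :port => 6379, :db => 1, :timeout => 0)  # dedicated to SUBSCRIBE in the SSE writer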
