I've got an initializer which starts an xmpp4r client. It works fine when I run the puma server as a regular process. But when I start puma as a daemon (-d option) it works for a few seconds and then disconnects from the XMPP server. I've got a separate thread which handles reconnects, but it doesn't work when puma is a daemon.
def init_reconnection_timer
  timers = Timers::Group.new
  periodic_timer = timers.every(5) do
    if @client.is_disconnected?
      begin
        Rails.logger.info "XmppConnector ##### reconnecting to #{APP_CONFIG['broker_address']} ..."
        connect
        Rails.logger.info "XmppConnector ##### connected!"
        presence
      rescue StandardError
        # ignore; the timer retries on the next tick
      end
    end
  end
  Thread.new do
    loop do
      timers.wait
    end
  end
end
I get nothing from this code in the log when puma is a daemon.
Right after the Rails app starts it works for a few seconds, receives messages and iqs, no errors, just as usual. Then it silently disconnects. Here is the constructor of my class:
class XmppConnector
  include Singleton

  def initialize
    @jid = Jabber::JID.new(APP_CONFIG['broker_username'] + '@' + APP_CONFIG['broker_address'])
    @jid.resource = 'rails'
    @client = Jabber::Client.new(@jid)
    connect
    init_presence_callback
    init_message_callback
    init_iq_callback
    init_reconnection_timer
    init_subscription_requests
    presence
    @protocol_interface = RemoteInterface.new
    Rails.logger.info "XmppConnector ##### initialized"
  end
But when puma runs without the -d option, there are no problems at all. The same thing happens in development and production, on my workstation and on the server.
ruby 2.0.0
puma 2.9.2 Min threads: 0, max threads: 16
rails 4.2.0.beta4
There was a bug in puma regarding threads in initializers, and it is now fixed: https://github.com/puma/puma/issues/617
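Daemonizing forks the server after the initializers have run, and threads and open sockets created at boot do not survive the fork. One workaround (a sketch, assuming Puma's clustered mode; XmppConnector is the singleton from the question) is to defer connection setup to Puma's on_worker_boot hook in config/puma.rb, which runs inside each worker after the fork:

```ruby
# config/puma.rb -- sketch; assumes clustered mode (workers > 0)
workers 2
threads 0, 16

on_worker_boot do
  # Runs in each worker after the fork, so the xmpp4r client thread
  # and its TCP socket belong to the live worker process.
  XmppConnector.instance
end
```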
Related
My Ruby on Rails application remotely starts some scripts on a distant SuSE server (SUSE Linux Enterprise Server 15 SP2). It relies on the net-ssh gem, which is declared in the Gemfile: gem 'net-ssh'.
The script is triggered remotely through the following block:
Net::SSH.start(remote_host, remote_user, password: remote_password) do |ssh|
  feed_back = ssh.exec!("#{event.statement}")
end
This works as expected as long as the Rails server runs on Windows Server 2016, which is my DEV environment. But when I deploy to the Validation environment, which is SUSE Linux Enterprise Server 15 SP2, I get this error message:
Errno::ENOTTY in myController#myMethod
Inappropriate ioctl for device
On the other hand, issuing the SSH request from the command line - from SUSE to SUSE - works as expected too. Reading around, I did not find a relevant Net::SSH parameter to solve this.
Your suggestions are welcome!
I finally found out that the message refers to the operating mode of SSH: it requires a sort of terminal emulation (a so-called pty) wrapped into an SSH channel.
So I implemented it this way:
Net::SSH.start(remote_host, remote_user, password: remote_password) do |session|
  session.open_channel do |channel|
    channel.request_pty do |ch, success|
      raise "Error requesting pty" unless success
      puts "------------ pty successfully obtained"
    end
    channel.exec "#{@task.statement}" do |ch, success|
      abort "could not execute command" unless success
      channel.on_data do |ch, data|
        puts "------------ got stdout: #{data}"
        @task.update_attribute(:return_value, data)
      end
      channel.on_extended_data do |ch, type, data|
        puts "------------ got stderr: #{data}"
      end
      channel.on_close do |ch|
        puts "------------ channel is closing!"
      end
    end
  end
  ### Wait until the session closes
  session.loop
end
This solved my issue.
Note:
The answer proposed above was only part of the solution. The same error occurred again with this source code when deploying to the production server.
The issue turned out to be the password for the SSH target: I retyped it by hand instead of doing the usual copy/paste from MS Excel, and the SSH connection is now successful!
As the error raised is not a simple "connection refused", I suspect that the password string had a specific character encoding or an unexpected trailing character.
As the first proposed solution provides a working example, I leave it there.
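One way to sanity-check a pasted credential (suspicious_chars is a hypothetical helper, not part of Net::SSH) is to list every character that is not printable ASCII:

```ruby
# Hypothetical helper: returns the characters of a string that are not
# printable ASCII. A password pasted from a spreadsheet with a
# NO-BREAK SPACE or a trailing newline will show up here.
def suspicious_chars(str)
  str.each_char.reject { |c| c.ord.between?(32, 126) }
end

suspicious_chars("s3cret\u00A0") # flags the NO-BREAK SPACE
suspicious_chars("plain123")    # empty array: nothing suspicious
```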
I want to start an ngrok process when the server starts. To achieve this, I coded an ngrok.rb lib and I call it within an initializer.
app/lib/ngrok.rb
require "singleton"

class Ngrok
  include Singleton

  attr_accessor :api_url, :front_url

  def start
    if is_running?
      return fetch_urls
    end
    authenticate
    started = system("ngrok start --all -log=stdout > #{ENV['APP_ROOT']}/log/ngrok.log &")
    sleep 1
    if !started
      return { api: nil, front: nil }
    end
    urls = fetch_urls
    sync_urls(urls["api_url"], urls["front_url"])
    return urls
  end

  def sync_urls(api_url, front_url)
    NgrokSyncJob.perform_later(api_url, front_url)
  end

  def is_running?
    # "ps aux | grep ngrok" always succeeds because grep matches its own
    # process; pgrep only matches a real ngrok process
    return system("pgrep ngrok > /dev/null")
  end

  def restart
    stop
    return start
  end

  def stop
    return system("pkill ngrok")
  end

  def authenticate
    has_file = system("ls ~/.ngrok2/ngrok.yml")
    if has_file
      return true
    else
      file_created = system("ngrok authtoken #{ENV['NGROK_TOKEN']}")
      if file_created
        return system("cat " + ENV['APP_ROOT'] + '/essentials/ngrok/example.yml >> ~/.ngrok2/ngrok.yml')
      else
        return false
      end
    end
  end

  def fetch_urls
    logfile = ENV['APP_ROOT'] + '/log/ngrok.log'
    text = File.read(logfile)
    api_url = nil
    front_url = nil
    text.split("\n").each do |line|
      next if !line.include?("url=") || !line.include?("https")
      name = line.split("name=")[1].split(" addr=")[0]
      if name == "ncommerce-api"
        api_url = line.split("url=")[1]
      elsif name == "ncommerce"
        front_url = line.split("url=")[1]
      end
    end
    self.api_url = api_url
    self.front_url = front_url
    return { "api_url" => api_url, "front_url" => front_url }
  end
end
config/initializers/app-init.rb
module AppModule
  class Application < Rails::Application
    config.after_initialize do
      puts "XXXXXXXXXXXXXXXXXXXXXXX"
      Ngrok.instance.start
      puts "XXXXXXXXXXXXXXXXXXXXXXX"
    end
  end
end
When I type rails server, here is a sample of the output.
So we know for sure my initializer is being called, but when I check whether ngrok is running, it's not!
But when I type Ngrok.instance.start in the rails console, it starts!
So, my question is: WHY ON EARTH is system("ngrok start --all -log=stdout > #{ENV['APP_ROOT']}/log/ngrok.log &") NOT working under rails server, but working in the rails console?
UPDATE
If I use byebug within ngrok.rb and run rails server, when I exit byebug with "continue", the ngrok process is created and works.
You're creating an orphaned process by the way you use system() to start the ngrok process in the background:
system("ngrok start --all -log=stdout > #{ENV['APP_ROOT']}/log/ngrok.log &")
Note the & at the end of the commandline.
I'd need more details about your runtime environment to tell precisely which system policy kills the orphaned ngrok process right after starting it (which OS? if Linux, is it based on systemd? how do you start rails server, from a terminal or as a system service?).
But what's happening is this:
system() starts an instance of /bin/sh to interpret the commandline
/bin/sh starts the ngrok process in the background and terminates
ngrok is now "orphaned", meaning that its parent process /bin/sh is terminated, so that the ngrok process can't be wait(2)ed for
depending on the environment, the terminating /bin/sh may kill ngrok with a SIGHUP signal
or the OS re-parents ngrok, normally to the init-process (but this depends)
When you use the rails console or byebug, in both cases you're entering an interactive environment, which prepares "process groups", "session ids" and "controlling terminals" in a way suitable for interactive execution. These properties are inherited by child processes, like ngrok. This influences system policies regarding the handling of the orphaned background process.
When ngrok is started from rails server, these properties will be different (depending on the way rails server is started).
Here's a nice article about some of the OS mechanisms that might be involved: https://www.jstorimer.com/blogs/workingwithcode/7766093-daemon-processes-in-ruby
You would probably have better success by using Ruby's Process.spawn to start the background process, in combination with Process.detach in your case. This would avoid orphaning the ngrok process.
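A minimal sketch of that suggestion (spawn_detached is an illustrative helper, not an existing API; the command line mirrors the question's):

```ruby
# Start a long-running command as a direct child of the Rails process
# and detach it, instead of letting /bin/sh background and orphan it.
def spawn_detached(*cmd, logfile:)
  pid = Process.spawn(*cmd, out: logfile, err: logfile)
  Process.detach(pid) # a watcher thread reaps the child when it exits
  pid
end

# e.g. spawn_detached("ngrok", "start", "--all", "-log=stdout",
#                     logfile: "#{ENV['APP_ROOT']}/log/ngrok.log")
```

Because the child is spawned directly (no intermediate /bin/sh that exits), it is never orphaned, and Process.detach ensures it is reaped without blocking the server.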
I want to do background processing using rufus-scheduler. It works fine on my local machine with the WEBrick server, but it is not working on the production server. My production server uses Nginx + Passenger.
My code is as below
scheduler = Rufus::Scheduler.new

scheduler.in '20s' do
  loop do
    Rails.logger.error "+++++++++++started+++++++++++"
    requests = QrCodeRequestStatus.where(:status => "pending").first(5)
    requests.each do |request|
      Rails.logger.error "+++++++++++++++++++processing request #{request.client_id} of client #{request.client_id}"
      AuthenticationCode.save_qr_codes(Client.find(request.client_id), request, request.quantity.to_i)
    end
    Rails.logger.error "+++++++++++++++sleeping+++++++++++++++++"
    sleep 5
  end
end

scheduler.join
I have set PassengerSpawnMethod direct in my Passenger config file, but rufus-scheduler is still not working.
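With Passenger's smart spawning, a preloader process boots the app and then forks workers, so a scheduler thread started at boot can be lost in the fork. A common pattern (a sketch; PhusionPassenger.on_event(:starting_worker_process) is Passenger's documented fork hook) is to start the scheduler only after the worker has been forked:

```ruby
# config/initializers/scheduler.rb -- sketch
if defined?(PhusionPassenger)
  PhusionPassenger.on_event(:starting_worker_process) do |forked|
    if forked
      # We are in the forked worker: safe to start background threads here.
      scheduler = Rufus::Scheduler.new
      scheduler.every '5s' do
        Rails.logger.error "+++++++++++started+++++++++++"
      end
      # Do not call scheduler.join here -- it would block the worker.
    end
  end
end
```

Note that the scheduler.join in the question blocks the calling thread; inside a Passenger worker the scheduler's own thread keeps it alive, so join should be dropped.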
I have a Rails app using an Nginx HTTP server and a Unicorn app server. I'm getting the following error when the app server doesn't receive any requests for about 15min:
OCIError: ORA-02396: exceeded maximum idle time, please connect again
After a page refresh, the page loads fine.
I'm using rails 4.2.1, ruby-oci8 2.1.0, and active-record-oracle_enhanced-adapter 1.6.0.
I'm still relatively new to web development, but I'm thinking this error occurs when the Oracle connection idles out and the app server doesn't know that the connection is bad.
I've tried setting the reaping_frequency to every 15min, but that didn't fix the problem.
How can I manage the database connections and make sure that these idle connections are dropped? Can I set a timeout to drop database connections before oracle times out?
This is my config/unicorn.rb:
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir

worker_processes 2
preload_app true
timeout 15

listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64

stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stderr.log"
pid "#{shared_dir}/pids/unicorn.pid"

before_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.connection.disconnect!
  end
end

after_fork do |server, worker|
  if defined? ActiveRecord::Base
    ActiveRecord::Base.establish_connection
  end
end
Here is my fix, which isn't that great.
In the application controller:

before_action :refresh_connection

def refresh_connection
  puts Time.now.to_s + ' - refreshing connection'
  ActiveRecord::Base.connection.disconnect!
  if ActiveRecord::Base.establish_connection
    puts Time.now.to_s + ' - new connection established'
  else
    puts Time.now.to_s + ' - new connection cannot be established'
  end
end
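A lighter-weight variant of this fix (a sketch; verify! is the Active Record connection method that pings the server and reconnects only when the connection has actually gone stale) avoids tearing down a healthy connection on every request:

```ruby
# app/controllers/application_controller.rb -- sketch
class ApplicationController < ActionController::Base
  before_action :verify_connection

  private

  # verify! pings Oracle and reconnects only if the connection is dead,
  # so requests on a healthy connection pay almost nothing.
  def verify_connection
    ActiveRecord::Base.connection.verify!
  end
end
```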
I'm using LibreOffice/OpenOffice as a headless process to handle some document conversion tasks on my server, which I "submit" via unoconv. Once in a while, the process that actually does the work, soffice.bin, seems to get wedged. I tried playing around with strace and saw that when launching new unoconv instances, they still connect and talk to the soffice process; it's just that nothing else happens after the 'bad' document goes in. If it were as simple as detecting that soffice no longer talks to incoming sockets, it'd be easy to write a watchdog. But it's not that simple, apparently. Any ideas how to tell when things have gone south?
Here's what I set up as a cron job:
def monitor_unoconv
  retval = false
  target_dir = "/tmp/monitor_unoconv"
  begin
    Timeout::timeout(30) do
      FileUtils.mkdir_p(target_dir)
      FileUtils.cp(File.dirname(__FILE__) + "/../hello.odt", target_dir)
      Dir.chdir target_dir do
        retval = system("unoconv -f html hello.odt")
      end
    end
  rescue => e
    STDERR.puts "Caught error #{e.inspect}"
    retval = false
  end
  if !retval
    STDERR.puts "soffice process appears hung. Killing it"
    STDERR.puts `killall soffice.bin`
    sleep 5
    STDERR.puts `killall -9 soffice.bin`
  end
end
It seems to work OK.
The issue might be with soffice's multiple threads, so a workaround is to run unoconv as a service: create an init.d script and start it as a daemon. Instead of unoconv starting a new soffice for every call, the unoconv listener keeps a single soffice instance running and maintains it.
Create the start script as follows:
#!/bin/sh
case "$1" in
  start)
    /usr/bin/unoconv --listener &
    ;;
  stop)
    killall soffice.bin
    ;;
  restart)
    killall soffice.bin
    sleep 1
    /usr/bin/unoconv --listener &
    ;;
esac