How to configure queue_classic logging - ruby-on-rails

I couldn't find a solution anywhere on how to make queue_classic write logs to a file. Scrolls, which queue_classic uses for logging, doesn't seem to have an example either.
Could someone provide a working example?

The method called by QC is the source of the logging. In Rails, for example, any call to Rails.logger will go to the log file appropriate to your RAILS_ENV. The log data coming from Scrolls goes to STDOUT, so you could pipe STDOUT from your queues to a log file when you start them.
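As a minimal sketch of that idea (the log path below is just an assumption), you could reopen STDOUT inside the worker process before it starts listening, so everything Scrolls emits ends up in a file:
# e.g. at the top of the rake task that starts the worker; the path is hypothetical
log_path = File.join("log", "queue_classic.log")
STDOUT.reopen(File.open(log_path, "a"))
STDOUT.sync = true # flush each line so the file can be tailed cleanly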
You could control your queues with god.rb giving a god.rb configuration instance similar to this (I've left your configuration for number of queues, directories, etc up to you):
number_queues.times do |queue_num|
  God.watch do |w|
    w.name = "QC-#{queue_num}"
    w.group = "QC"
    w.interval = 5.minutes
    w.start = "bundle exec rake queue:work" # This is your rake task to start QC listening
    w.gid = 'nginx'
    w.uid = 'nginx'
    w.dir = rails_root
    w.keepalive
    w.env = {"RAILS_ENV" => rails_env}
    w.log = "#{log_dir}/qc.stdout.log" # Or.... "#{log_dir}/qc-#{queue_num}.stdout.log"
    # determine the state on startup
    w.transition(:init, { true => :up, false => :start }) do |on|
      on.condition(:process_running) do |c|
        c.running = true
      end
    end
  end
end
FWIW, I find the STDOUT log data to be less than helpful and may end up just sending it to the bit bucket.
If the log data is useless, we should consider removing it.
DEBUG is very nice when I'm running it from a command line, for times when I'm trying to figure out what I've done to bork everything up (usually bundling issues, path issues, etc.), or when I'm going to demo what is going on in a queue.
For me, INFO logging consists of the standard lib=queue_classic level=info action=insert_job elapsed=16 message and any STDOUT/STDERR from forked executions or from PostgreSQL. I've not extended any of the logging classes since scrolls goes to STDOUT and my tasks are in an environment that provides logging.
Sure, it could be removed. I think that REALLY depends on the environment and what the queue is doing. If I were doing something that did not have Rails.logger, then I would use QC.log and Scrolls more effectively and instrument my tasks in that manner.
As I play with it, I may keep my configuration as-is just because of the output coming from methods/applications called by the tasks themselves. I might decide to override QC.log code to add date/time. I am still working to determine what fits my needs.
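If I do go that route, a hypothetical override would look something like this; it assumes QC.log is the single entry point queue_classic routes its Scrolls output through, so check your queue_classic version first:
require 'time' # for Time#iso8601

module QC
  # Hypothetical override: prepend a timestamp to each Scrolls-style line.
  def self.log(data)
    line = data.map { |k, v| "#{k}=#{v}" }.join(" ")
    $stdout.puts("#{Time.now.utc.iso8601} lib=queue_classic #{line}")
  end
end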
Sorry, my last line was really focused on the environment example I had given.

Related

Rails Custom Logger Writes To Log Sporadically

I've set up a custom logger in an initializer:
# /config/initializers/logging.rb
log_file = File.open("#{Example::Application.config.root}/log/app.log", "a")
AppLogger = ActiveSupport::BufferedLogger.new(log_file)
AppLogger.level = Logger::DEBUG
AppLogger.auto_flushing = true
AppLogger.debug "App Logger Up"
Although it creates the log file when I start the application, nothing is written to it, neither from AppLogger.debug "App Logger Up" in the initializer nor from similar code elsewhere in the running application.
However, I do intermittently find log statements in the file, seemingly without any pattern. It would seem that it is buffering the log messages and dumping them when it feels like it. However, I'm setting auto_flushing to true, which should cause it to flush the buffer immediately.
What is going on and how can I get it working?
Note: Tailing the log with $ tail -f "log/app.log" also doesn't pick up changes.
I'm using Pow, but I've also tried with WEBrick and get the same (lack of) results.
Found the answer in a comment on the accepted answer to this question:
I added:
log_file.sync = true
And it now works.
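Putting the two together, the working initializer looks something like this (same Rails 3-era setup as in the question, with the file handle made unbuffered):
# /config/initializers/logging.rb
log_file = File.open("#{Example::Application.config.root}/log/app.log", "a")
log_file.sync = true # write through to the OS instead of buffering in the File object
AppLogger = ActiveSupport::BufferedLogger.new(log_file)
AppLogger.level = Logger::DEBUG
AppLogger.auto_flushing = true
AppLogger.debug "App Logger Up"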

Delayed_job is no longer logging

For some reason, my Rails app is no longer logging my DelayedJob gem's activity, either to a separate log (delayed_job.log) or to the main Rails development log. I am also using the Workless gem, should this be relevant.
It used to say things like this:
2013-06-07T19:16:25-0400: [Worker(delayed_job.workless host:MyNames-MacBook-Pro.local pid:28504)] Starting job worker
2013-06-07T19:16:38-0400: [Worker(delayed_job.workless host:MyNames-MacBook-Pro.local pid:28504)] MyApp#scrape completed after 13.0290
2013-06-07T19:16:38-0400: [Worker(delayed_job.workless host:MyNames-MacBook-Pro.local pid:28504)] 1 jobs processed at 0.0761 j/s, 0 failed ...
2013-06-07T19:16:43-0400: [Worker(delayed_job.workless host:MyNames-MacBook-Pro.local pid:28504)] Exiting...
But nothing is appearing anymore in any of the logs.
How can I make sure that DelayedJob activity logs to the main development log? I don't mind if it also logs to a separate log, but the important thing is that I see its activity in my Mac's console.
I have searched for answers to this issue online (such as here) and nothing is working.
Here is my delayed_job_config.rb: (in config/initializers)
Delayed::Worker.sleep_delay = 60
Delayed::Worker.max_attempts = 2
Delayed::Worker.max_run_time = 20.minutes
Delayed::Worker.logger = Rails.logger
Delayed::Worker.logger.auto_flushing = true
Please let me know if you'd like more code from my app - I'd be happy to provide it. Much thanks from a Rails newbie.
I finally got this to work, all thanks to Seamus Abshere's answer to the question here. I put what he posted below in an initializer file. This got delayed_job to log to my development.log file (huzzah!).
However, delayed_job still isn't logging into my console (for reasons I still don't understand). I solved that by opening a new console tab and entering tail -f logs/development.log.
Different from what Seamus wrote, though, auto_flushing = true is deprecated in Rails 4 and crashed my Heroku app. I resolved this by removing it from my initializer file and placing config.autoflush_log = true in my environments/development.rb file instead. However, I found that neither of the two types of flushing was necessary to make this work.
Here is his code (without the auto-flushing):
file_handle = File.open("log/#{Rails.env}_delayed_jobs.log", (File::WRONLY | File::APPEND | File::CREAT))
# Be paranoid about syncing
file_handle.sync = true
# Hack the existing Rails.logger object to use our new file handle
Rails.logger.instance_variable_set :@log, file_handle
# Calls to Rails.logger go to the same object as Delayed::Worker.logger
Delayed::Worker.logger = Rails.logger
If the above code doesn't work, try replacing Rails.logger with RAILS_DEFAULT_LOGGER.
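As a side note on the Rails 4 change mentioned above, the flushing flag moves out of the logger and into the environment config; a minimal sketch, assuming a standard Rails 4 app:
# config/environments/development.rb
Rails.application.configure do
  config.autoflush_log = true
end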

Fork, Ruby, ActiveRecord and File Descriptors on Fork

I understand that when we fork a process the child process inherits a copy of the parent's open file descriptors and offsets. According to the man pages, these refer to the same underlying open files used by the parent. Based on that theory, in the following program:
puts "Process #{Process.pid}"
file = File.open('sample', 'w')
forked_pid = fork do
sleep(10)
puts "Writing to file now..."
file.puts("Hello World. #{Time.now}")
end
file.puts("Welcome to winter of my discontent #{Time.now}")
file.close
file = nil
Question 1:
Shouldn't the forked process, which is sleeping for 10 seconds, lose its file descriptor and be unable to write to the file once the parent process completes, closes the file, and exits?
Question 2: If, for whatever reason, this works, then how does ActiveRecord lose its connection in the scenario below? Only if I set :reconnect => true on the ActiveRecord connection can it actually connect, which means it is losing the connection.
require "rubygems"
require "redis"
require 'active_record'
require 'mysql2'
connection = ActiveRecord::Base.establish_connection({
:adapter =&gt 'mysql2',
:username =&gt 'root_user',
:password =&gt 'Pi',
:host =&gt 'localhost',
:database => 'list_development',
:socket =&gt '/var/lib/mysql/mysql.sock'
})
class User &lt ActiveRecord::Base
end
u = User.first
puts u.inspect
fork do
sleep 3
puts "*" * 50
puts User.first.inspect
puts "*" * 50
end
puts User.first.inspect
However, the same is not true with Redis (v2.4.8), which does not lose its connection on a fork. Does it try to reconnect internally on a fork?
If that's the case, then why isn't the file-writing program throwing an error?
Could somebody explain what's going on here? Thanks.
If you close a file descriptor in one process, it stays valid in the other process; this is why your file example works fine.
The mysql case is different because it's a socket with another process at the end. When you call close on the mysql adapter (or when the adapter gets garbage collected when ruby exits) it actually sends a "QUIT" command to the server saying that you're disconnecting, so the server tears down its side of the socket. In general you really don't want to share a mysql connection between two processes - you'll get weird errors depending on whether the two processes are trying to use the socket at the same time.
If closing a redis connection just closes the socket (as opposed to sending an "I'm going away" message to the server), then the child's connection should continue to work, because the socket won't actually have been closed.
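One common way to sidestep the shared-MySQL-socket issue, shown here only as a sketch, is to have the child establish its own connection right after the fork, reusing whatever options you already pass to establish_connection:
fork do
  # Give the child a private MySQL connection instead of sharing the parent's socket.
  ActiveRecord::Base.establish_connection(
    :adapter  => 'mysql2',
    :username => 'root_user',
    :password => 'Pi',
    :host     => 'localhost',
    :database => 'list_development',
    :socket   => '/var/lib/mysql/mysql.sock'
  )
  sleep 3
  puts User.first.inspect
end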

Why is my Ruby script utilizing 90% of my CPU?

I wrote an admin script that tails a Heroku log and, every n seconds, summarizes averages and notifies me if I cross a certain threshold (yes, I know and love New Relic -- but I want to do custom stuff).
Here is the entire script.
I have never been a master of IO and threads, so I wonder if I am making a silly mistake. I have a couple of daemon threads that run while(true){} loops, which could be the culprit. For example:
# read new lines
f = File.open(file, "r")
f.seek(0, IO::SEEK_END)
while true do
  select([f])
  line = f.gets
  parse_heroku_line(line)
end
I use one daemon to watch for new lines of a log, and the other to periodically summarize.
Does someone see a way to make it less processor-intensive?
This probably runs hot because you never really block while reading from the temporary file. IO::select is a thin layer over POSIX select(2). It looks like you're trying to block until the file is ready for reading, but select(2) considers EOF to be ready ("a file descriptor is also ready on end-of-file"), so you always return right away from select then call gets which returns nil at EOF.
You can get a truer EOF reading and nice blocking behavior by avoiding the thread that writes to the temp file. Instead, use IO::popen to fork the %x[heroku logs --ps router --tail --app pipewave-cedar] log tailer, connected to a Ruby IO object that you can loop over with gets, exiting when gets returns nil (indicating the log tailer finished). gets on the pipe from the tailer will block when there's nothing to read, so your script will only run as hot as it takes to do your line parsing and reporting.
EDIT: I'm not set up to actually try your code, but you should be able to replace the log tailer thread and your temp file read loop with this code to get the behavior described above:
IO.popen( %w{ heroku logs --ps router --tail --app my-heroku-app } ) do |logf|
  while line = logf.gets
    parse_heroku_line(line) if line =~ /^/
  end
end
I also notice your reporting thread does not do anything to synchronize access to @total_lines, @total_errors, etc. So, you have some minor race conditions where you can get inconsistent values from the instance vars that the parse_heroku_line method updates.
select is about whether a read would block, but f is just a plain old file, so when you get to the end, reads don't block; they just return nil instantly. As a result, select returns instantly rather than waiting for something to be appended to the file, as I assume you're expecting. Because of this you're sitting in a tight busy loop, so high CPU usage is to be expected.
If you are at EOF (you could either check f.eof? or whether gets returns nil), then you could either start sleeping (perhaps with some sort of back-off) or use something like listen to be notified of filesystem changes.
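A minimal sketch of the sleep-on-EOF approach (the one-second delay is an arbitrary choice; tune it or add a back-off):
f = File.open(file, "r")
f.seek(0, IO::SEEK_END)
loop do
  if line = f.gets
    parse_heroku_line(line)
  else
    sleep 1 # at EOF: wait for more data to be appended instead of spinning
  end
end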

Dynamic Ruby Daemon Management

I have a Ruby process that listens on a given device. I would like to spin up/down instances of it for different devices with a rails app. Everything I can find for Ruby daemons seems to be based around a set number of daemons running or background processing with message queues.
Should I just be doing this with Kernel.spawn and storing the PIDs in the database? It seems a bit hacky, but if there isn't an existing framework that allows me to bring daemons up and down, I may not have much choice.
Instead of spawning another script and keeping the PIDs in the database, you can do it all within the same script, using fork, and keeping the PIDs in memory. Here's a sample script - you add and delete "worker instances" by typing the commands "add" and "del" in the console, and exit with "quit":
@pids = []
@counter = 0

def add_process
  @pids.push(Process.fork {
    loop do
      puts "Hello from worker ##{@counter}"
      sleep 1
    end
  })
  @counter += 1
end

def del_process
  return false if @pids.empty?
  pid = @pids.pop
  Process.kill('SIGTERM', pid)
  true
end

def kill_all
  while del_process
  end
end

while cmd = gets.chomp
  case cmd.downcase
  when 'quit'
    kill_all
    exit
  when 'add'
    add_process
  when 'del'
    del_process
  end
end
Of course, this is just an example; for sending commands and/or monitoring instances you can replace this simple gets loop with a small Sinatra app, a socket interface, named pipes, etc.
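For instance, a named-pipe variant might look like the sketch below; it assumes a FIFO you create yourself (e.g. with mkfifo tmp/daemon_ctl) and reuses the add_process/del_process/kill_all helpers from above:
PIPE_PATH = "tmp/daemon_ctl" # hypothetical FIFO created beforehand with mkfifo

loop do
  # Opening a FIFO for reading blocks until a writer opens the other end.
  File.open(PIPE_PATH, "r") do |pipe|
    pipe.each_line do |cmd|
      case cmd.strip.downcase
      when 'add'
        add_process
      when 'del'
        del_process
      when 'quit'
        kill_all
        exit
      end
    end
  end
end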
