I was surprised to find that my Rails 3.1 log file is very large, around 21 MB.
Is this size normal? What would the log file look like in the production environment?
Also, can I get rid of the log? Thanks.
The log folder of your Rails application holds three log files corresponding to each of the standard environments. Log files can grow very large over time. A rake task is provided to allow the easy clearing of the log files.
rake log:clear
# Truncates all *.log files in log/ to zero bytes
# Specify which logs with LOGS=test,development,production
You can just delete the file!
Rails will create a new log if one doesn't exist.
Obviously save / back up the file if it's important, but usually it's not.
You can also zip the backed-up file (and then delete the source) if you want to keep it on the same drive but still save space.
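As a sketch of that zip-then-delete step (the backup path is hypothetical), using Ruby's stdlib so it works cross-platform:

```ruby
require "zlib"

# Hypothetical backup path; adjust to wherever you copied the log.
src = "log/development.log.bak"

if File.exist?(src)
  Zlib::GzipWriter.open("#{src}.gz") do |gz|
    File.open(src, "rb") do |f|
      # Stream in 16 KB chunks so a large log never loads fully into memory.
      while (chunk = f.read(16 * 1024))
        gz.write(chunk)
      end
    end
  end
  File.delete(src) # keep only the compressed copy
end
```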
To automatically rotate log files (the best long-term solution), use logrotate as described here:
Ruby on Rails production log rotation
Then you can set it and forget it!
To actually change what gets logged see:
http://dennisreimann.de/blog/silencing-the-rails-log-on-a-per-action-basis/
According to the documentation, if you want to limit the size of the log folder, put this in your development.rb file:
config.logger = ActiveSupport::Logger.new(config.paths['log'].first, 1, 50 * 1024 * 1024)
With this, your log files will never grow bigger than 50 MB. You can change the size to your own preference. The 1 in the second parameter means that one historic log file will be kept, so you'll have up to 100 MB of logs: the current log and the previous chunk of 50 MB.
I automatically clear the logs in development on each server start with config/initializers/clear_development_log.rb:
if Rails.env.development?
`rake log:clear`
end
You may want to use logrotate. Have a look at the answer to this question: Ruby on Rails production log rotation.
Yes, you can, using syntax like this:
config.logger = ActiveSupport::Logger.new(config.log_file, num_of_file_to_keep, num_of_MB*1024*1024)
Example:
config.logger = ActiveSupport::Logger.new(config.log_file, 2, 20*1024*1024)
This is not only for the Rails log; you can use it for the log file of any service that runs alongside Rails, such as the rpush log.
config.logger = ActiveSupport::Logger.new(nil) does the trick and completely disables logging to a file (console output is preserved).
A fair compromise, in an initializer:
Rake::Task['log:clear'].invoke if Rails.env.development? || Rails.env.test?
If you don't like waiting for rake log:clear to load its environment and only want to clear one log on the fly, you can do the following:
cat /dev/null > log/mylog.log # Or whatever your log's name is
(This allows the log to stay online while the app is running, whereas rm log/mylog.log would require restarting the app.)
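If you'd rather do the same thing from Ruby (a sketch; the log name is hypothetical), File.truncate empties the file in place, so a running app can keep writing to the same handle:

```ruby
log_path = "log/mylog.log" # hypothetical log name

# Truncate to zero bytes without deleting, like `cat /dev/null > ...`.
File.truncate(log_path, 0) if File.exist?(log_path)
```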
Related
This is the reverse of a question I have seen several times elsewhere, in which someone wants to create another, separate Rails log alongside the main development log. For some reason, my Rails app is logging my DelayedJob gem's activity to a separate log (delayed_job.log), but I want it to log to the main development.log file. I am using the Workless gem and New Relic as well, in case that is relevant (although I experimented by removing New Relic, and the issue remained).
I'm not clear on how this happened. However, I was having some trouble earlier with seeing SQL insertions and deletions in my log, and another user kindly suggested that I use the following in an initializer file:
if defined?(Rails) && !Rails.env.nil?
logger = Logger.new(STDOUT)
ActiveRecord::Base.logger = logger
ActiveResource::Base.logger = logger
end
Once I did this, I saw the SQL statements, but no longer saw the DelayedJob information in the main development log.
So my question is: How can I make sure that DelayedJob activity logs to the main development log? I don't mind if it also logs to a separate log, but the important thing is that I see its activity in my Mac's console.
Please let me know if you'd like more code from my app - I'd be happy to provide it. Much thanks from a Rails newbie.
Try adding the following line to config/initializers/delayed_job_config.rb
Delayed::Worker.logger = Logger.new(STDOUT)
I finally got this to work. All thanks to Seamus Abshere's answer to the question here. I put what he posted below in an initializer file. This got delayed_job to log to my development.log file (huzzah!).
However, delayed_job still isn't logging into my console (for reasons I still don't understand). I solved that by opening a new console tab and entering tail -f log/development.log.
Different from what Seamus wrote, though: auto-flushing=true is deprecated in Rails 4, and my Heroku app crashed. I resolved this by removing it from my initializer file and placing it in my environments/development.rb file as config.autoflush_log = true. However, I found that neither type of flushing was necessary to make this work.
Here is his code (without the auto-flushing):
file_handle = File.open("log/#{Rails.env}_delayed_jobs.log", (File::WRONLY | File::APPEND | File::CREAT))
# Be paranoid about syncing
file_handle.sync = true
# Hack the existing Rails.logger object to use our new file handle
Rails.logger.instance_variable_set :@log, file_handle
# Calls to Rails.logger go to the same object as Delayed::Worker.logger
Delayed::Worker.logger = Rails.logger
If the above code doesn't work, try replacing Rails.logger with RAILS_DEFAULT_LOGGER.
I'm using the Ruby standard logger with daily rotation, so in my code I have:
Logger.new("#{$ROOT_PATH}/log/errors.log", 'daily')
It is working perfectly, but it created two files errors.log.20130217 and errors.log.20130217.1.
How can I force it to create just one file a day ?
Your code is correct for a long-running application.
What's happening is you're running code more than once on a given day.
The first time you run it, Ruby creates a log file "errors.log".
When the day changes, Ruby renames the file to "errors.log.20130217".
But somehow you ran the code again, perhaps you're running two apps (or processes, or workers, or threads) that use similar code, and your logger saw that the file name "errors.log.20130217" already existed.
Your logger didn't want to clobber that file, but still needed to rename "errors.log" to a date, so the logger instead created a different file name "errors.log.20130217.1"
To solve this, run your code just once.
If you're running multiple apps called "foo" and "bar" then use log file names like "foo-errors.log" and "bar-errors.log". Or if you're using multiple workers, give each worker its own log file name (e.g. by using the worker's process id, or worker pool array index, or however you're keeping track of your workers).
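A minimal sketch of that per-process idea (the file name scheme is an assumption): key each worker's log file on its process id, so no two processes ever rotate the same file:

```ruby
require "fileutils"
require "logger"

FileUtils.mkdir_p("log")

# One file per process: daily rotation in this process can never
# collide with another process's rotated file.
log_path = "log/errors-#{Process.pid}.log"
logger = Logger.new(log_path, "daily")
logger.info("worker #{Process.pid} started")
logger.close
```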
If you really want to solve this using the Ruby logger, you'll need to override the logger's #shift_log_period so it doesn't choose a ".1" suffix. You could subclass Logger and write your own #shift_log_period to detect that there is an existing log file for the date and, if so, use it instead of doing the file rename.
This is the code causing it from the logger:
def shift_log_period(period_end)
  postfix = period_end.strftime("%Y%m%d") # YYYYMMDD
  age_file = "#{@filename}.#{postfix}"
  if FileTest.exist?(age_file)
    # try to avoid filename crash caused by Timestamp change.
    idx = 0
    # .99 can be overridden; avoid too much file search with 'loop do'
    while idx < 100
      idx += 1
      age_file = "#{@filename}.#{postfix}.#{idx}"
      break unless FileTest.exist?(age_file)
    end
  end
  @dev.close rescue nil
  File.rename("#{@filename}", age_file)
  @dev = create_logfile(@filename)
  return true
end
There is no solution (AFAIK) using the Ruby logger, with its built-in rotator, to manage logs written by multiple apps (a.k.a. workers, processes, threads) simultaneously. This is because each of the apps gets it own log file handle.
Alternatively, use any of the good log-rotation tools, such as logrotate, as suggested by the Tin Man in the question comments: http://linuxcommand.org/man_pages/logrotate8.html
In general, logrotate will be your best bet IMHO.
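For reference, a minimal logrotate entry might look like the following (the path and options are assumptions to adapt for your deployment; copytruncate lets the running app keep its open file handle):

```
/var/www/myapp/log/*.log {
  daily
  rotate 7
  compress
  missingok
  notifempty
  copytruncate
}
```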
I have read the documentation in this guide and the Logger class. I want to create a logger that records logging information each day and, after (say) a week, automatically deletes the oldest logging information.
logfile = File.open(RAILS_ROOT + '/log/'+ (Date.today << 1).to_s + '_custom.log', 'a') #create log file
logfile.sync = true #automatically flushes data to file
CUSTOM_LOGGER = CustomLogger.new(logfile, 'daily') #constant accessible anywhere
Plus, I wish to create a custom logging, so for instance something that looks like this (format):
class MyLogger < Logger
def format_message(severity, timestamp, progname, msg)
"#{timestamp} : #{msg}\n"
end
end
So basically, I would like to have a better idea of where to place everything, and under which directory. For instance, where should MyLogger logically go? (Anywhere? A helper? Under app/config/?)
Is that a valid way to implement this?
I made it work by putting everything in config/initializers, in a file named my_logger.rb. I'm still stuck on deleting/managing log files.
Does the server handle that part with log rotation (I know Linux has logrotate)? Or can Rails handle it internally?
Where should MyLogger be logically placed?
Probably put it under /lib. You can then require it from the initializer where you set the custom logger.
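A standalone sketch of how the pieces could fit together (the wiring in the comments uses hypothetical file names): the subclass lives in lib/my_logger.rb, and an initializer requires it and builds the shared logger:

```ruby
require "logger"

# lib/my_logger.rb -- the custom format from the question.
class MyLogger < Logger
  # Logger#add calls format_message for every record, so overriding
  # it changes the line format for this logger.
  def format_message(severity, timestamp, progname, msg)
    "#{timestamp} : #{msg}\n"
  end
end

# A config/initializers/my_logger.rb could then do roughly:
#
#   require Rails.root.join("lib", "my_logger")
#   logfile = File.open(Rails.root.join("log", "custom.log"), "a")
#   logfile.sync = true
#   CUSTOM_LOGGER = MyLogger.new(logfile, "daily")
```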
How can you periodically delete the oldest logging information?
There are countless ways to do this, and the choice depends on your constraints. You haven't said much about your constraints, so it's hard to give you the just-right answer. E.g. you could clean up old logs every time you add a new log entry, you could run a cron job, you could install non-Rails software that does log rotation and other log maintenance, you could use Papertrail, or, if you use Heroku, you could look at https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes.
Remember that Rails is designed more to handle requests and respond to them in the context of that request than to run maintenance outside the context of a request. You could do maintenance as a side effect of every format_message call to MyLogger: check for entries older than a week and, if you find any, delete them. You haven't given a constraint that rules this out, and if you're prototyping something early and portable, this would get you going fast.
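As a sketch of that cleanup step (the directory and the one-week threshold are assumptions), deleting log files older than a week:

```ruby
require "fileutils"

LOG_DIR = "log"               # hypothetical log directory
MAX_AGE = 7 * 24 * 60 * 60    # one week, in seconds

Dir.glob(File.join(LOG_DIR, "*.log*")).each do |path|
  # File.mtime is the last time the file was written to.
  FileUtils.rm_f(path) if Time.now - File.mtime(path) > MAX_AGE
end
```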
I am trying to debug a model in Rails so I'm using this code:
logger.debug('asasd')
However, when I tail the log file development.log I'm not seeing the message added to the file.
I am certain this code is being run.
I have confirmed that runtime errors are logged to this file, and I see them when I tail it.
How do I get this to work?
Make sure that you have set the log level to debug in environments/appropriate_env_file.rb:
config.log_level = :debug
and also make sure you are tailing the correct log file based on the environment you are running against.
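To see why the message can silently disappear, the stdlib Logger drops anything below the configured level (a self-contained sketch):

```ruby
require "logger"
require "stringio"

out = StringIO.new
logger = Logger.new(out)
logger.level = Logger::INFO   # DEBUG < INFO, so debug lines are dropped

logger.debug("invisible at this level")
logger.info("shown at this level")
```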
You could attempt to call flush on the logger to force it to write to this file. Usually this would happen after every request:
logger.debug("asasd")
logger.flush
There's also the auto_flushing setting on the Rails.logger instance itself:
Rails.logger.auto_flushing = true
This will make the call to logger.flush unnecessary, as Rails will automatically flush the buffered output to the log file whenever it is written to.
Is there an easy way to see the actual SQL generated by a rails migration?
I have a situation where a migration to change a column type worked on my local development machine but partially failed on the production server.
My PostgreSQL versions differ between local and production (7 in production, 8 locally), so I'm hoping that by looking at the SQL generated by the successful local migration I can work out a SQL statement to run on production to fix things.
Look at the log files: log/development.log locally vs log/production.log on your server.
I did some digging and found another way to achieve this. (This way gives you only the SQL, so it was a bit easier for me to read.)
PostgreSQL will log all executed queries if you put this line in your config file (there's a commented-out example in the "What to log" section of the config file):
log_statement = 'all'
Then I rolled back and re-ran my migration locally to find the SQL I was looking for.
This method also gives you the SQL in a format where you can easily paste it into something like PGAdmin's query builder and mess around with it.
You could set the logger to STDOUT at the top of your migration's change, up, or down methods. Example:
class SomeMigration < ActiveRecord::Migration
def change
ActiveRecord::Base.logger = Logger.new(STDOUT)
# ...
end
end
Or see this answer for adding SQL logging to all rake tasks