How to delete the Rails log file after a certain size - ruby-on-rails

I have a daemon that runs constantly and fills up the log file (development.log or production.log) pretty quickly. What is the best way to delete the log file after it reaches a certain size, or to delete the portion older than a certain day?

You can pass a file count and size to Logger.new to use Ruby's built-in log rotation:
config.logger = Logger.new(config.log_path, 50, 1.megabyte)
This keeps up to 50 rotated files of 1 megabyte each, but beware that multiple Mongrel processes can have issues with this, since each process will try to rotate the same file.

The best way is to set up log rotation, but how you do this is very platform-dependent, so you should add a comment about what platform you're using, both for development and production.
For our apps running on Linux, we have a file /etc/logrotate.d/appname for each app, which looks something like this:
/path/to/rails_root_for_app/log/production.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 640 capistrano capistrano
}
This will move the log into a new file once a day, keeping a compressed backup file for each
of the last 7 days.
If you just want to empty the file without keeping any of the data in it while the daemon is
running, simply do this from a shell:
> /path/to/rails_root_for_app/log/development.log
This will truncate the file to a length of zero bytes.

I prefer a monthly log file, configured in my production.rb file:
config.logger = Logger.new(config.log_path, 'monthly')

Or even better, if all your environments run on either Mac or Linux and have /usr/sbin/rotatelogs, just use that. It's much more flexible, and it doesn't have the data-loss issue that logrotate has (even if you use copytruncate).
Add this inside config/application.rb (or just config/environments/production.rb if you only want rotation in prod):
log_pipe = IO.popen("/usr/sbin/rotatelogs #{Rails.root}/log/#{Rails.env}.%Y%m%d.log 86400", 'a')
config.logger = Logger.new(log_pipe)
(From this blog post)

Or you can delegate logging to syslog.
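For example, a minimal sketch using Ruby's standard Syslog::Logger (the program name 'myapp' is a placeholder; the syslog daemon then owns rotation):
# config/environments/production.rb
require 'syslog/logger'
# Hand log lines to the local syslog daemon; the system's own
# rotation (e.g. logrotate on the syslog files) takes over from here.
config.logger = ActiveSupport::TaggedLogging.new(Syslog::Logger.new('myapp'))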

What are the default paths added to AIDE's database?

Please excuse my English ^^'
I'm trying to answer the question in my title.
Here is the content of my /etc/aide/aide.conf:
# AIDE conf
# The daily cron job depends on these paths
database=file:/var/lib/aide/aide.db
database_out=file:/var/lib/aide/aide.db.new
database_new=file:/var/lib/aide/aide.db.new
gzip_dbout=no
# Set to no to disable summarize_changes option.
summarize_changes=yes
# Set to no to disable grouping of files in report.
grouped=yes
# standard verbose level
verbose = 6
# Set to yes to print the checksums in the report in hex format
report_base16 = no
# if you want to sacrifice security for speed, remove some of these
# checksums. Whirlpool is broken on sparc and sparc64 (see #429180,
# #420547, #152203).
Checksums = sha256+sha512+rmd160+haval+gost+crc32+tiger
# The checksums of the databases to be printed in the report
# Set to 'E' to disable.
database_attrs = Checksums
# check permissions, owner, group and file type
OwnerMode = p+u+g+ftype
# Check size and block count
Size = s+b
# Files that stay static
InodeData = OwnerMode+n+i+Size+l+X
StaticFile = m+c+Checksums
# Files that stay static but are copied to a ram disk on startup
# (causing different inode)
RamdiskData = InodeData-i
# Check everything
Full = InodeData+StaticFile
# Files that change their mtimes or ctimes but not their contents
VarTime = InodeData+Checksums
# Files that are recreated regularly but do not change their contents
VarInode = VarTime-i
# Files that change their contents during system operation
VarFile = OwnerMode+n+l+X
# Directories that change their contents during system operation
VarDir = OwnerMode+n+i+X
# Directories that are recreated regularly and change their contents
VarDirInode = OwnerMode+n+X
# Directories that change their mtimes or ctimes but not their contents
VarDirTime = InodeData
# Logs grow in size. Log rotation of these logs will be reported, so
# this should only be used for logs that are not rotated daily.
Log = OwnerMode+n+S+X
# Logs that are frequently rotated
FreqRotLog = Log-S
# The first instance of a rotated log: After the log has stopped being
# written to, but before rotation
LowLog = Log-S
# Rotated logs change their file name but retain all their other properties
SerMemberLog = Full+I
# The first instance of a compressed, rotated log: After a LowLog was
# compressed.
LoSerMemberLog = SerMemberLog+ANF
# The last instance of a compressed, rotated log: After this name, a log
# will be removed
HiSerMemberLog = SerMemberLog+ARF
# Not-yet-compressed log created by logrotate's dateext option:
# These files appear one rotation (renamed from the live log) and are gone
# the next rotation (being compressed)
LowDELog = SerMemberLog+ANF+ARF
# Compressed log created by logrotate's dateext option: These files appear
# once and are not touched any more.
SerMemberDELog = Full+ANF
I don't understand why AIDE adds just over 400,000 entries to the new database when I execute the following commands: update-aide.conf ; aideinit
There are no selection lines or restricted selection lines anywhere in the config file, so I'm wondering whether AIDE adds some by default.
I'm on Ubuntu 18.04.4, so the aide package comes with the aide-common wrapper package.
I would like to have a clean aide.conf file, but when I try to delete SerMemberDELog = Full+ANF, for example, I get the following error:
846:Error in expression:
Configuration error
error checking aide config, not running aide
AIDE --init return code 255
Big thanks to anyone who can help me :)!
If you need more details, I'm always here.
Finally I managed to solve my problem.
The /etc/aide/aide.conf config file isn't the only file used by AIDE:
when you run the update-aide.conf wrapper, it actually combines this file with many other conf files present in the /etc/aide/aide.conf.d directory.
The easy fix is to move or delete those files; from then on you will be able to keep your /etc/aide/aide.conf file clean :)
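For example, something like this (a sketch, assuming the Debian/Ubuntu aide-common layout; I moved the fragments aside rather than deleting them):
sudo mkdir /etc/aide/aide.conf.d.bak
sudo mv /etc/aide/aide.conf.d/* /etc/aide/aide.conf.d.bak/
sudo update-aide.conf
sudo aideinit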
Have a good day!

Paperclip Nginx 504 Gateway Time-out

I have a Rails 4 application that allows users to upload videos using the jQuery Dropzone plugin and the Paperclip gem. Each uploaded video is encoded into multiple formats and uploaded to Amazon S3 in the background using the delayed_paperclip, av-transcoder and sidekiq gems.
All works fine with most videos, but with larger ones (around 1.1GB), after the upload reaches what seems like the end of the Dropzone progress bar it returns an Nginx 504 Gateway Time-out.
As far as the server goes, the Rails app runs on Nginx + Passenger on a couple of servers that sit behind a load balancer (also Nginx). I do not have timeouts set in the upstream section of the load balancer, client_max_body_size is set to 2000M (both on the load balancer and the servers), I've tried setting passenger_pool_idle_time to a large value (600), which didn't help, and I have also tried setting send_timeout (600s); nothing made any difference.
Note: when making those changes, I made them on both servers as well as on the load balancer, and always restarted Nginx afterwards.
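For reference, here is a sketch of the load-balancer config involved (values illustrative, upstream name and addresses hypothetical; as I understand it, an upstream 504 usually means proxy_read_timeout, which defaults to 60s, expired):
http {
    client_max_body_size 2000M;        # allow large request bodies through
    upstream app_servers {             # hypothetical upstream name
        server 10.0.0.1;               # placeholder app server addresses
        server 10.0.0.2;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
            proxy_read_timeout 600s;   # how long to wait for the upstream response
            proxy_send_timeout 600s;   # timeout while streaming the body upstream
            send_timeout 600s;         # client-facing write timeout
        }
    }
}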
I've also read several answers regarding similar problems, like this one and this one, but still can't figure this out; Google wasn't much more helpful either.
Some extra notes for those unfamiliar with the whole paperclip/delayed_paperclip process: the file is uploaded to the server, and then the operation is done as far as the user is concerned; in the background, the post-processing of the videos (encoding/uploading to S3) is pushed to Redis as a job, and Sidekiq processes it whenever it has time/resources.
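For context, the model setup is roughly like this (a sketch; the Video class, attachment name and style are placeholders, not my exact code):
# app/models/video.rb (illustrative)
class Video < ActiveRecord::Base
  has_attached_file :file, styles: { mp4: { format: 'mp4' } }
  do_not_validate_attachment_file_type :file
  # delayed_paperclip: defer encoding/uploading to a background (Sidekiq) job
  process_in_background :file
end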
What could be causing this issue? How can I debug this and solve it?
UPDATE
Thanks to Sergey's answer I was able to solve the issue. Since I was restricted to a specific version of Paperclip, I couldn't update it to the newest version that has the fix, so I'll leave here what I ended up doing.
In the engine that I use to handle the uploads, I've added the following code to the engine_name.rb file to override the methods from Paperclip that needed fixing:
Paperclip::AbstractAdapter.class_eval do
  def copy_to_tempfile(src)
    link_or_copy_file(src.path, destination.path)
    destination
  end

  def link_or_copy_file(src, dest)
    Paperclip.log("Trying to link #{src} to #{dest}")
    FileUtils.ln(src, dest, force: true) # overwrite existing
    # destination.close
    # destination.open.binmode
  rescue Errno::EXDEV, Errno::EPERM, Errno::ENOENT => e
    Paperclip.log("Link failed with #{e.message}; copying #{src} to #{dest}")
    FileUtils.cp(src, dest)
  end
end

Paperclip::AttachmentAdapter.class_eval do
  def copy_to_tempfile(source)
    if source.staged?
      link_or_copy_file(source.staged_path(@style), destination.path)
    else
      source.copy_to_local_file(@style, destination.path)
    end
    destination
  end
end

Paperclip::Storage::Filesystem.class_eval do
  def flush_writes #:nodoc:
    @queued_for_write.each do |style_name, file|
      FileUtils.mkdir_p(File.dirname(path(style_name)))
      begin
        move_file(file.path, path(style_name))
      rescue SystemCallError
        File.open(path(style_name), "wb") do |new_file|
          while chunk = file.read(16 * 1024)
            new_file.write(chunk)
          end
        end
      end
      unless @options[:override_file_permissions] == false
        resolved_chmod = (@options[:override_file_permissions] &~ 0111) || (0666 &~ File.umask)
        FileUtils.chmod(resolved_chmod, path(style_name))
      end
      file.rewind
    end
    after_flush_writes # allows attachment to clean up temp files
    @queued_for_write = {}
  end

  private

  def move_file(src, dest)
    # Support hardlinked files
    if File.identical?(src, dest)
      File.unlink(src)
    else
      FileUtils.mv(src, dest)
    end
  end
end
I faced a similar issue a while ago. Maybe my experience will help.
We had an m3.medium instance on Amazon with 4GB of memory.
Users could upload large video files, and we got 504 errors when uploading files larger than 400MB.
Monitoring and logging the upload process showed that Paperclip creates 4 files per attachment, so all the instance's resources were being spent on file system work.
There is a description of this problem here:
https://github.com/thoughtbot/paperclip/issues/1642
along with a proposed solution: use links instead of copies when possible. You can see the corresponding code changes here:
https://github.com/arnonhongklay/paperclip/commit/cd80661df18d7cd112944bfe26d90cb87c928aad
However, 2 days ago Paperclip was updated to version 5.2.0, and they implemented a similar solution.
So it now creates only one file per attachment. Our file system is thus no longer overloaded, and after updating to version 5.2.0 we stopped receiving 504 errors.
Conclusion:
Use the monkey patch from the link above if you're restricted to an older Paperclip version for some reason.
Update Paperclip to version 5.2.0. That should help.
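If you can upgrade, the change is just a Gemfile bump followed by a bundle update paperclip:
# Gemfile: pull in the release that links instead of copying per attachment
gem 'paperclip', '~> 5.2'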

What to do about huge stacktrace.log file in grails

The project I'm working on has a stacktrace.log file that is over 160GB. This is killing my hard drive space. What can I do to avoid this?
You should use a rolling file appender so that the log file does not grow to that huge size.
Use configuration like this (for Grails 2.x, it goes in grails-app/conf/Config.groovy inside the log4j closure's appenders block):
rollingFile name: 'stacktrace', file: 'stacktrace.log',
            maxFileSize: '100MB', maxBackupIndex: 5
Here every log file will be at most 100 MB, and 'maxBackupIndex' controls how many previous files are kept.
You can empty the existing huge file with (on Linux):
cat /dev/null > /path/to/file/stacktrace.log

How to recursively download FTP folder in parallel in Ruby?

I need to cache an FTP folder locally in Ruby. Right now I'm using ftp_sync to download the FTP folder, but it's painfully slow. Do you guys know of any library that can download the folder's files in parallel?
Thanks!
The syncftp gem may help you:
http://rubydoc.info/gems/syncftp/0.0.3/frames
Ruby has a decent built-in FTP library in case you want to roll your own:
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/net/ftp/rdoc/Net/FTP.html
To download files in parallel, you can use multiple threads with timeouts:
Ruby Net::FTP Timeout Threads
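A minimal sketch of that approach (host, credentials and file names are placeholders; each thread gets its own FTP connection):
require 'net/ftp'
require 'timeout'

files = %w[a.log b.log c.log]          # placeholder remote file names
threads = files.map do |name|
  Thread.new do
    Timeout.timeout(300) do            # give up on a stuck transfer
      Net::FTP.open('ftp.example.com') do |ftp|
        ftp.login('user', 'password')
        ftp.chdir('/remote/dir')
        ftp.getbinaryfile(name, name)  # save to the current directory
      end
    end
  end
end
threads.each(&:join)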
A great way to get parallel work done is Celluloid, the concurrent framework:
https://github.com/celluloid/celluloid
All that said, if the download speed is limited by your overall network bandwidth, then none of these approaches will help much.
To speed up the transfers in that case, be sure you're only downloading the information that's changed: new files and changed sections of existing files.
Segmented downloading can give massive speedups in some cases, such as downloading big log files where only a small percentage of the file has changed and all the changes are appends at the end.
You can also consider shelling out to the command line. There are many tools that can help you with this. A good general-purpose one is "curl", which also supports simple ranges for FTP files; for example, you can get the first 100 bytes of a document over FTP like this:
curl -r 0-99 ftp://www.get.this/README
Are you open to other protocols besides FTP? Take a look at the "rsync" command, which is excellent for download synchronization and has many optimizations to transfer just the changed data. For example, rsync can sync a remote directory to a local directory like this:
rsync -auvC me@my.com:/remote/foo/ /local/foo/
Take a look at Curb. It's a wrapper around Curl, and can do multiple connections in parallel.
This is a modified version of one of their examples:
require 'curb'

urls = %w[
  http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.3-p286.tar.bz2
  http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tar.bz2
]

responses = {}
m = Curl::Multi.new

# add a few easy handles
urls.each do |url|
  responses[url] = Curl::Easy.new(url)
  puts "Queuing #{ url }..."
  m.add(responses[url])
end

spinner_counter = 0
spinner = %w[ | / - \\ ]
m.perform do
  print 'Performing downloads ', spinner[spinner_counter], "\r"
  spinner_counter = (spinner_counter + 1) % spinner.size
end
puts

urls.each do |url|
  print "[#{ url } #{ responses[url].total_time } seconds] Saving #{ responses[url].body_str.size } bytes..."
  File.open(File.basename(url), 'wb') { |fo| fo.write(responses[url].body_str) }
  puts 'done.'
end
That'll pull in both the Ruby and Python sources (which are pretty big, so they'll take about a minute, depending on your internet connection and host). You won't see any files appear until the last block, where they get written out.

How to rotate log file on time?

Right now, in production.rb under environments, I have this code:
config.logger = Logger.new("#{Rails.root.to_s}/log/production.log", 'daily')
so that the log file rotates every day at midnight.
How do I write code to rotate the log at a defined time?
https://stackoverflow.com/a/4883967/1241447 - using system logrotate.d
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/logger/rdoc/Logger.html#method-c-new - using built-in functionality
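Note that the built-in Logger rotation only accepts the preset periods 'daily', 'weekly' and 'monthly'; it cannot rotate at an arbitrary clock time. A sketch of the weekly variant:
# Rotation happens when a write crosses the period boundary.
config.logger = Logger.new("#{Rails.root}/log/production.log", 'weekly')
For rotation at a specific time of day, use the system logrotate from the first link; the time its cron job runs determines when rotation happens.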
