Reading large CSV files in a Rails app takes up a lot of memory - strategy to reduce memory consumption? - ruby-on-rails

I have a Rails app which allows users to upload CSV files and schedule the reading of multiple CSV files with the help of the delayed_job gem. The problem is that the app reads each file in its entirety into memory and then writes to the database. If it's just one file being read it's fine, but when multiple files are read the RAM on the server fills up and causes the app to hang.
I am trying to find a solution to this problem.
One solution I researched is to break the CSV file into smaller parts, save them on the server, and read the smaller files. See this link.
Example: split -b 40k myfile segment
This is not my preferred solution. Are there any other approaches where I don't have to break the file apart? Solutions must be Ruby code.
Thanks,

You can make use of CSV.foreach to read your CSV file row by row instead of loading it all into memory at once:
path = Rails.root.join('data/uploads/.../upload.csv') # or, whatever
CSV.foreach(path) do |row|
  # process row here (row[0], row[1], ...)
end
If it's run in a background job, you could additionally call GC.start every n rows.
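A minimal sketch of that combination, assuming the import runs in a delayed_job worker; the Record model, its columns, and the 1,000-row interval are illustrative, not from the question:

rows_processed = 0
CSV.foreach(path, headers: true) do |row|
  Record.create!(name: row["name"], size: row["size"]) # hypothetical model and columns
  rows_processed += 1
  GC.start if (rows_processed % 1_000).zero?           # nudge the GC every 1,000 rows
end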
How it works
CSV.foreach operates on an IO stream, as you can see here:
def CSV.foreach(path, options = Hash.new, &block)
  # ...
  open(path, options) do |csv|
    csv.each(&block)
  end
end
The csv.each part is a call to IO#each, which reads the file line by line (an rb_io_getline_1 invocation) and leaves each line it has read to be garbage collected:
static VALUE
rb_io_each_line(int argc, VALUE *argv, VALUE io)
{
    // ...
    while (!NIL_P(str = rb_io_getline_1(rs, limit, io))) {
        rb_yield(str);
    }
    // ...
}

Related

Memory consumption when using Nokogiri and XML

I have been trying to figure out what's up with my Rails app as it relates to memory and Nokogiri's XML parsing. For some reason, this one function alone consumes about 1 GB of memory and does not release it when it completes. I'm not quite sure what's going on here.
def split_nessus
  # To avoid consuming too much memory, we're going to split the Nessus file
  # if it's over 10MB into multiple smaller files.
  file_size = File.size(@nessus_files[0]).to_f / 2**20
  files = []
  if file_size >= 10
    file_num = 1
    d = File.open(@nessus_files[0])
    content = Nokogiri::XML(d.read)
    d.close
    data = Nokogiri::XML("<data></data>")
    hosts_num = 1
    content.xpath("//ReportHost").each do |report_host|
      data.root << report_host
      hosts_num += 1
      if hosts_num == 100
        File.open("#{@nessus_files[0]}_nxtmp_#{file_num}", "w") { |f| f.write(data.to_xml) }
        files << "#{@nessus_files[0]}_nxtmp_#{file_num}"
        data = Nokogiri::XML("<data></data>")
        hosts_num = 1
        file_num += 1
      end
    end
    @nessus_files = files
  end
end
Since Rails crashes when trying to parse a 100MB+ XML file, I've decided to break XML files into separate files if they're over 10MB and just handle those individually.
Any thoughts as to why this will not release about 1GB of memory when it's completed?
Nokogiri uses system libraries like libxml2 and libxslt under the hood. Because of that, I would assume the issue is probably not in Ruby's garbage collection but somewhere else.
If you are working with large files, it's usually a good idea to stream-process them so that you do not load the whole file into memory, which in itself results in huge memory consumption, since large strings are very memory-consuming.
Because of this, when working with large XML files you should use a stream parser. In Nokogiri, this is Nokogiri::XML::SAX.
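A minimal sketch of the SAX approach, assuming a handler that only needs to react to ReportHost elements; the class name and the counting logic are illustrative:

require 'nokogiri'

class ReportHostCounter < Nokogiri::XML::SAX::Document
  attr_reader :hosts

  def initialize
    @hosts = 0
  end

  # called for every opening tag as the file is streamed, so only the
  # current element needs to be held in memory
  def start_element(name, attrs = [])
    @hosts += 1 if name == "ReportHost"
  end
end

handler = ReportHostCounter.new
Nokogiri::XML::SAX::Parser.new(handler).parse(File.open("report.nessus"))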

How can I ZIP and stream many files without appending to memory on Rails5/Ruby2.4? [duplicate]

I need to serve some data from my database in a zip file, streaming it on the fly such that:
I do not write a temporary file to disk
I do not compose the whole file in RAM
I know that I can do streaming generation of zip files to the filesystem using ZipOutputStream, as here. I also know that I can do streaming output from a Rails controller by setting response_body to a Proc, as here. What I need (I think) is a way of plugging those two things together. Can I make Rails serve a response from a ZipOutputStream? Can I get ZipOutputStream to give me incremental chunks of data that I can feed into my response_body Proc? Or is there another way?
Short Version
https://github.com/fringd/zipline
Long Version
So jo5h's answer didn't work for me in Rails 3.1.1.
I found a YouTube video that helped, though:
http://www.youtube.com/watch?v=K0XvnspdPsc
The crux of it is creating an object that responds to each. This is what I did:
class ZipGenerator
  def initialize(model)
    @model = model
  end

  def each(&block)
    output = Object.new
    output.define_singleton_method :tell, Proc.new { 0 }
    output.define_singleton_method :pos=, Proc.new { |x| 0 }
    output.define_singleton_method :<<, Proc.new { |x| block.call(x) }
    output.define_singleton_method :close, Proc.new { nil }
    Zip::IoZip.open(output) do |zip|
      @model.attachments.all.each do |attachment|
        zip.put_next_entry "#{attachment.name}.pdf"
        file = attachment.file.file.send :file
        file = File.open(file) if file.is_a? String
        while buffer = file.read(2048)
          zip << buffer
        end
      end
    end
    sleep 10
  end
end
def getzip
  self.response_body = ZipGenerator.new(@model)

  # this is a hack to prevent middleware from buffering
  headers['Last-Modified'] = Time.now.to_s
end
EDIT:
The above solution didn't ACTUALLY work. The problem is that rubyzip needs to jump around the file to rewrite the headers for entries as it goes; in particular, it needs to write the compressed size BEFORE it writes the data. That is just not possible in a truly streaming situation, so ultimately this task may be impossible. There is a chance it might be possible to buffer a whole file at a time, but that seemed less worth it. Ultimately I just wrote to a tmp file (on Heroku I can write to Rails.root/tmp). It's less instant feedback and not ideal, but necessary.
ANOTHER EDIT:
I got another idea recently: we COULD know the compressed size of the files if we do not compress them. The plan goes something like this:
Subclass the ZipStreamOutput class as follows:
- always use the "stored" compression method, in other words do not compress
- ensure we never seek backwards to change file headers; get it all right up front
- rewrite any code related to the TOC that seeks
I haven't tried to implement this yet, but will report back if there's any success.
OK ONE LAST EDIT:
In the ZIP standard: http://en.wikipedia.org/wiki/Zip_(file_format)#File_headers
they mention that there's a bit you can flip to put the size, compressed size and CRC AFTER a file. So my new plan was to subclass ZipOutputStream so that it:
- sets this flag
- writes sizes and CRCs after the data
- never rewinds output
Furthermore, I needed to fix up all the hacks required to stream output in Rails...
Anyway, it all worked!
Here's a gem!
https://github.com/fringd/zipline
I had a similar issue. I didn't need to stream directly, but only had your first case of not wanting to write a temp file. You can easily modify ZipOutputStream to accept an IO object instead of just a filename.
module Zip
  class IOOutputStream < ZipOutputStream
    def initialize(io)
      super '-'
      @outputStream = io
    end

    def stream
      @outputStream
    end
  end
end
From there, it should just be a matter of using the new Zip::IOOutputStream in your Proc. In your controller, you'd probably do something like:
self.response_body = proc do |response, output|
  Zip::IOOutputStream.open(output) do |zip|
    my_files.each do |file|
      zip.put_next_entry file
      zip << IO.read(file)
    end
  end
end
It is now possible to do this directly:
class SomeController < ApplicationController
  def some_action
    compressed_filestream = Zip::ZipOutputStream.write_buffer do |zos|
      zos.put_next_entry "some/filename.ext"
      zos.print data
    end
    compressed_filestream.rewind
    respond_to do |format|
      format.zip do
        send_data compressed_filestream.read, filename: "some.zip"
      end
    end
    # or some other return of send_data
  end
end
This is the link you want:
http://info.michael-simons.eu/2008/01/21/using-rubyzip-to-create-zip-files-on-the-fly/
It builds and generates the zipfile using ZipOutputStream and then uses send_file to send it directly out from the controller.
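In case the link goes away, the approach looks roughly like this (a sketch assuming a recent rubyzip, where the class is Zip::OutputStream rather than Zip::ZipOutputStream; my_files and the temp file name are illustrative):

require 'zip'

def download_zip
  tmp_path = Rails.root.join('tmp', "export-#{SecureRandom.hex(8)}.zip").to_s

  Zip::OutputStream.open(tmp_path) do |zos|
    my_files.each do |path|
      zos.put_next_entry File.basename(path)
      zos << File.read(path)
    end
  end

  send_file tmp_path, type: 'application/zip', filename: 'export.zip'
end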
Use chunked HTTP transfer encoding for the output: set the HTTP header "Transfer-Encoding: chunked" and structure the output according to the chunked encoding specification, so there is no need to know the resulting ZIP file size at the beginning of the transfer. This can be coded fairly easily in Ruby with the help of Open3.popen3 and threads.
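One way that idea could look, assuming a zip binary is available on the server's PATH; the class name and chunk size are illustrative, not from the answer:

require 'open3'

class ChunkedZipBody
  def initialize(paths)
    @paths = paths
  end

  # Rack calls #each and streams every yielded chunk to the client
  def each
    Open3.popen3('zip', '-q', '-', *@paths) do |stdin, stdout, stderr, wait_thr|
      stdin.close
      drain = Thread.new { stderr.read }     # keep zip from blocking on stderr
      while (chunk = stdout.read(16 * 1024))
        yield chunk
      end
      drain.join
      wait_thr.value
    end
  end
end

# in the controller action:
#   headers['Transfer-Encoding'] = 'chunked'
#   self.response_body = ChunkedZipBody.new(file_paths)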

Tracking Upload Progress of File to S3 Using Ruby aws-sdk

Firstly, I am aware that there are quite a few questions on SO similar to this one. I have read most, if not all, of them over the past week, but I still can't make this work for me.
I am developing a Ruby on Rails app that allows users to upload mp3 files to Amazon S3. The upload itself works perfectly, but a progress bar would greatly improve user experience on the website.
I am using the aws-sdk gem which is the official one from Amazon. I have looked everywhere in its documentation for callbacks during the upload process, but I couldn't find anything.
The files are uploaded one at a time directly to S3, so they don't need to be loaded into memory. No multiple file upload is necessary either.
I figured that I might need to use jQuery to make this work, and I am fine with that.
I found this that looked very promising: https://github.com/blueimp/jQuery-File-Upload
And I even tried following the example here: https://github.com/ncri/s3_uploader_example
But I just could not make it work for me.
The documentation for aws-sdk also BRIEFLY describes streaming uploads with a block:
obj.write do |buffer, bytes|
  # writing fewer than the requested number of bytes to the buffer
  # will cause write to stop yielding to the block
end
But this is barely helpful. How does one "write to the buffer"? I tried a few intuitive options that would always result in timeouts. And how would I even update the browser based on the buffering?
Is there a better or simpler solution to this?
Thank you in advance.
I would appreciate any help on this subject.
The "buffer" object yielded when passing a block to #write is an instance of StringIO. You can write to the buffer using #write or #<<. Here is an example that uses the block form to upload a file.
file = File.open('/path/to/file', 'r')

obj = s3.buckets['my-bucket'].objects['object-key']
obj.write(:content_length => file.size) do |buffer, bytes|
  buffer.write(file.read(bytes))
  # you could do some interesting things here to track progress
end

file.close
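As a hedged sketch of what "track progress" could mean here, keeping the same block form; report_progress is a hypothetical method (e.g. store a percentage the browser can poll):

bytes_sent = 0

obj.write(:content_length => file.size) do |buffer, bytes|
  chunk = file.read(bytes) || ""
  buffer.write(chunk)
  bytes_sent += chunk.bytesize
  report_progress((100.0 * bytes_sent / file.size).round) # hypothetical reporting hook
end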
After reading the source code of the AWS gem, I adapted (or mostly copied) the multipart upload method to yield the current progress based on how many chunks have been uploaded:
s3 = AWS::S3.new.buckets['your_bucket']

file = File.open(filepath, 'r', encoding: 'BINARY')
file_to_upload = "#{s3_dir}/#{filename}"
upload_progress = 0

opts = {
  content_type: mime_type,
  cache_control: 'max-age=31536000',
  estimated_content_length: file.size,
}

part_size = self.compute_part_size(opts)
parts_number = (file.size.to_f / part_size).ceil.to_i
obj = s3.objects[file_to_upload]

begin
  obj.multipart_upload(opts) do |upload|
    until file.eof? do
      break if (abort_upload = upload.aborted?)

      upload.add_part(file.read(part_size))
      upload_progress += 1.0 / parts_number

      # Yields the Float progress and the upload object for the
      # current file that's being uploaded
      yield(upload_progress, upload) if block_given?
    end
  end
end
The compute_part_size method is defined here and I've modified it to this:
def compute_part_size(options)
  max_parts = 10000
  min_size = 5242880 # 5 MB
  estimated_size = options[:estimated_content_length]

  [(estimated_size.to_f / max_parts).ceil, min_size].max.to_i
end
This code was tested on Ruby 2.0.0p0

Generating a CSV and uploading it to S3 when finished in a background job

I'm providing users with the ability to download an extremely large amount of data via CSV. To do this, I'm using Sidekiq and putting the task off into a background job once they've initiated it. What I've done in the background job is generate a csv containing all of the proper data, storing it in /tmp and then call save! on my model, passing the location of the file to the paperclip attribute which then goes off and is stored in S3.
All of this works perfectly fine locally. My problem now lies with Heroku and its ability to store files only for a short duration, depending on which node you're on. My background job is unable to find the tmp file that gets saved because of how Heroku deals with these files. I guess I'm searching for a better way to do this. If there's some way everything can be done in memory, that would be awesome. The only problem is that Paperclip expects an actual file object as an attribute when you're saving the model. Here's what my background job looks like:
class CsvWorker
  include Sidekiq::Worker

  def perform(report_id)
    puts "Starting the jobz!"
    report = Report.find(report_id)
    items = query_ranged_downloads(report.start_date, report.end_date)
    csv = compile_csv(items)
    update_report(report.id, csv)
  end

  def update_report(report_id, csv)
    report = Report.find(report_id)
    report.update_attributes(csv: csv, status: true)
    report.save!
  end

  def compile_csv(items)
    clean_items = items.compact
    path = File.new("#{Rails.root}/tmp/uploads/downloads_by_title_#{Process.pid}.csv", "w")
    csv_string = CSV.open(path, "w") do |csv|
      csv << ["Item Name", "Parent", "Download Count"]
      clean_items.each do |row|
        if !row.item.nil? && !row.item.parent.nil?
          csv << [
            row.item.name,
            row.item.parent.name,
            row.download_count
          ]
        end
      end
    end
    return path
  end
end
I've omitted the query method for readability's sake.
I don't think Heroku's temporary file storage is the problem here. The warnings around it mostly center on the facts that a) dynos are ephemeral, so anything you write can and will disappear without notice; and b) dynos are interchangeable, so the presence of inter-request tempfiles is a matter of luck when you have more than one web dyno running. However, in no situation do temporary files just vanish while your worker is running.
One thing I notice is that you're actually creating two temporary files with the same name:
> path = File.new("/tmp/filename", "w")
=> #<File:/tmp/filename>
> path.fileno
=> 3
> CSV.open(path, "w") do |csv| csv << %w(foo bar baz); puts csv.fileno end
4
=> nil
You could change the path = line to just set the filename (instead of opening it for writing), and then make update_report open the filename for reading. I haven't dug into what Paperclip does when you give it an empty, already-overwritten, opened-for-writing file handle, but changing that flow may well fix the issue.
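A minimal sketch of that change, keeping the worker's structure from the question; whether Paperclip is happy with the re-opened file is exactly the part worth testing:

def compile_csv(items)
  clean_items = items.compact
  path = "#{Rails.root}/tmp/uploads/downloads_by_title_#{Process.pid}.csv"

  CSV.open(path, "w") do |csv|
    csv << ["Item Name", "Parent", "Download Count"]
    clean_items.each do |row|
      next if row.item.nil? || row.item.parent.nil?
      csv << [row.item.name, row.item.parent.name, row.download_count]
    end
  end

  path
end

def update_report(report_id, csv_path)
  report = Report.find(report_id)
  # hand Paperclip a handle opened for reading, not the exhausted write handle
  report.update_attributes(csv: File.open(csv_path), status: true)
end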
Alternately, you could do this in memory instead: generate the CSV as a string and give it to Paperclip as a StringIO. (Paperclip supports certain non-file objects, including StringIOs, using e.g. Paperclip::StringioAdapter.) Try something like:
# returns a CSV as a string
def compile_csv(items)
  CSV.generate do |csv|
    # ...
  end
end

def update_report(report_id, csv)
  report = Report.find(report_id)
  report.update_attributes(csv: StringIO.new(csv), status: true)
  report.save!
end

CSV.open(..) block does not execute, and ends immediately

I am opening a CSV file for writing with the following code, as indicated in the CSV documentation (http://ruby-doc.org/stdlib-1.9.2/libdoc/csv/rdoc/CSV.html):
CSV.open(csv_file_out, 'wb') { |csv_line_out|
  # stuff happens here
}
The block does not execute, i.e. after the CSV.open command the debugger goes directly to the end of the block.
Thinking "just in case", I tried the same with the IO modes w+b, wt, w+t, ab and a+b. I also tried creating the file beforehand in case there was a problem with opening it. Nothing changed.
When I stop the debugger at the CSV.open line, I am able to execute a block manually and create a CSV file myself. I can also create the file from the exact same command in the console. But it won't work when executed on the server.
Any ideas on what's going on are very welcome. Thanks in advance!
You don't need the {} brackets before your |csv_line_out|.
You're trying to iterate over something (the file's contents) that does not exist, so it's logical that it ends immediately! Try it this way:
CSV.open(csv_file_out, "wb") do |csv|
csv << [column1, column2,...]
end
