Delete folder after send_file in Rails - ruby-on-rails

I'm sending the file file.txt from my Rails controller using send_file, and then deleting the folder containing it:
send_file("#{Rails.root}/public/folder/file.txt")
FileUtils.remove_dir "#{Rails.root}/public/folder", true
When I did this, file.txt was sent and deleted; however, folder itself was not deleted. If I remove the send_file line, folder is deleted as expected.
How do I make it delete folder?
EDIT: Interestingly, I found that inside folder there is a hidden file called .__afs2B0C, probably preventing the deletion. I have no idea how this file is created! The file stays for only around 15 minutes before disappearing.
EDIT2: I've tried inspecting the contents of the temp file with vi, but it's unreadable gibberish. When I removed only the send_file line, the folder was correctly deleted. When I removed only the FileUtils.remove_dir line, the folder contained no temp file.

Are you sure send_file is not still sending the file when you remove the dir? It may be asynchronous if it uses X-SendFile, which would cause the removal of the dir to fail.
So you should probably queue this delete action, or do it later with a sweeper, rather than trying to do it straight after initiating the file send.
I'm not completely clear on which file you are sending, so it would be useful to include in your question an actual example of the file path, the file type, and how the file is created.
Possible help with debugging:
Log in and monitor the folder while you perform the following actions:
Write out a very large file (say > 60 MB) and check whether an invisible file is created during your file creation process - I'm not clear on which file you are actually sending
Set up a large file transfer on a slow connection and watch for the creation, and possible growth, of this file (it might be related to compressing the served file on the fly, for example).
Given that send_file may still be sending (for large files) via the web server (X-SendFile is now the default) when you try to delete, I'd look into delayed solutions.
Possible solutions:
Use send_data rather than send_file (if files are small)
Schedule the deletion of the folder for later with something like delayed_job (see the sketch after this list)
Set up a sweeper which removes the folders at the end of each day
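A hedged sketch of the delayed_job option, assuming the delayed_job gem is set up; the job class name and the 10-minute delay are placeholders, not anything from the question:
require 'fileutils'

# a tiny delayed_job payload class
class FolderCleanupJob < Struct.new(:path)
  def perform
    FileUtils.remove_dir(path, true) if File.directory?(path)
  end
end

# in the controller action
send_file("#{Rails.root}/public/folder/file.txt")
# enqueue the cleanup instead of deleting inline, giving the web server
# time to finish sending the file
Delayed::Job.enqueue(FolderCleanupJob.new("#{Rails.root}/public/folder"),
                     run_at: 10.minutes.from_now)
A nightly sweeper or cron task that removes stale folders under public/ would work the same way, just on a fixed schedule.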

Not sure why that hidden file is there, it could be an offshoot of X-send-file or even of wget (partial progress or something).
Ideally, you should use Tempfile for things like this. The code below is based off your comment about what you are doing. I am also using two libraries: open-uri for downloading and rubyzip for zipping.
This way you don't need to create a folder at all, just a zip file directly. The temp files that go into the zip are cleaned up on their own; after the download you only have to delete the zip. I should also mention that you can still hit a glitch here, since send_file hands the transfer over to the web server, and you don't want the Rails process to delete the file while it is still being served. So even though this works well on localhost, I would strongly advise using a scheduled background garbage collector in production.
require 'open-uri'
require 'zip/zip' # rubyzip (older API; newer versions use `require 'zip'` and Zip::File)

zip_path = "#{Rails.root}/public/test.zip"
urls_to_fetch = ['http://abc.com/file1', 'http://xyz.com/file2']
temp_files = []

Zip::ZipFile.open(zip_path, Zip::ZipFile::CREATE) do |zipfile|
  urls_to_fetch.each_with_index do |url, index|
    # initialize a new temp file
    file = Tempfile.new(index.to_s)
    file.binmode
    # fetch the file using open-uri and save it into the temp file
    open(url, 'rb') do |read_file|
      file.write(read_file.read)
    end
    file.flush
    # keep a reference so the temp file isn't garbage-collected (and unlinked)
    # before the zip archive is written out at the end of the block
    temp_files << file
    # add the temp file to the zip under its basename
    zipfile.add(File.basename(file.path), file.path)
  end
end

# send the zipfile for download
send_file zip_path
# delete the zipfile (the temp files clean themselves up)
FileUtils.rm zip_path
That said, using Tempfile should not be mandatory. If you are doing things without Tempfiles, check the rights that the user running Rails has on the target directory.
The FileUtils documentation has details regarding local security vulnerabilities to be aware of when deleting files and folders.

See here... this works for me:
filepath = Rails.root.join('public', 'uploads', filename)
file = File.open(filepath, "rb")
contents = file.read
file.close
File.delete(filepath) if File.exist?(filepath)
send_data(contents, :filename => filename)

Maybe you could try this solution:
http://info.michael-simons.eu/2008/01/21/using-rubyzip-to-create-zip-files-on-the-fly/

It is simple but dangerous: use a shell command. Put it after send_file in the controller:
system("rm -rf public/folder")

Related

How to change extension of file at run time in ruby on rails

I am working on a project and I want to show two links to download the same file, but with different extensions: the first link downloads the file with its actual extension, and the second link downloads the file with a changed extension. For example, for a file 1.txt, the first link downloads 1.txt and I want the second link to download it as 1.docx, using Ruby on Rails.
The first link works properly and downloads the actual file; I have created a method for the second link.
def downloaddocxfile
  require 'fileutils'
  Dir.glob(params[:file]).each do |f|
    if File.extname(f) != '.docx'
      FileUtils.cp f, "#{File.dirname(f)}/#{File.basename(f, '.*')}.docx"
      send_file "#{File.dirname(f)}/#{File.basename(f, '.*')}.docx"
      # system("rm -rf #{File.dirname(f)}/#{File.basename(f, '.*')}.docx")
    else
      send_file "#{params[:file]}"
    end
  end
end
This method creates a copy of the original file and changes the extension to .docx.
I don't want to show two files with different extensions in the list of files, so I want to remove the .docx copy once it has been downloaded. How can I do that?
send_file allows you to specify a filename via the :filename option.
Assuming that there's a file x.foo on your server, then:
send_file('x.foo', filename: 'y.bar')
would send the file x.foo under the name y.bar to the browser.
It's up to the browser to use the suggested name, but most browsers will save the file as y.bar.
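Applied to the downloaddocxfile action from the question, a minimal sketch (assuming params[:file] holds the path, as in the original method) that avoids the copy entirely:
def downloaddocxfile
  path = params[:file]
  # note: a path taken straight from params should be validated before use
  # serve the original file but suggest a .docx name to the browser
  send_file path, filename: "#{File.basename(path, '.*')}.docx"
end
Since nothing extra is written to disk, there is no .docx copy to delete afterwards.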

Move all files with same extension at once in ruby

In the terminal I could have used something like:
mv *.ext /some/output/dir/
I want to do the same in Ruby. I could use a shell command for that with backticks (`) or system(), but how do I achieve the same in a Ruby way?
I tried:
FileUtils.mv '*.sql', '/some/output/dir/'
This does not work, as it looks for a file literally named '*.sql'.
You can do:
FileUtils.mv Dir.glob('*.sql'), '/some/output/dir/'
You need to use a Glob, as in:
Dir.glob('*.sql').each do |f|
  # ... your logic here
end
or, more succinctly:
Dir.glob('*.sql').each { |f| FileUtils.mv(f, 'your/path/here') }
Check the official documentation on FileUtils#mv, which even has an example using a glob.
Update: If you want to be sure you don't iterate file by file (although I wouldn't worry about that too much), you can always execute what you consider optimal in the shell, directly from Ruby, e.g.:
`mv *.ext /some/output/dir/`
I would do this using FileUtils::cp, as it copies the file content from src to dest. If dest is a directory, it copies src to dest/src. If src is a list of files, then dest must be a directory.
FileUtils.cp Dir['*.sql'], '/some/output/dir/'
I wouldn't use ::mv, because when the file and dest are on different disk partitions, the file is copied and then the original file is removed.
But if the deletion of the original files doesn't bother you, then go with ::mv.
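A small sketch of the copy-then-delete variant described above (the destination directory is assumed to exist):
require 'fileutils'

sources = Dir.glob('*.sql')
FileUtils.cp sources, '/some/output/dir/'
# remove the originals once the copies are in place, which is effectively
# what ::mv does when source and destination are on different partitions
FileUtils.rm sources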

How do I get this ruby on rails app to copy a file to my hard drive?

I have a file coming into my rails app from another website. The POST data looks like this:
Parameters: {"file"=>#<ActionDispatch::Http::UploadedFile:0x007fa03cf0c8d0 @original_filename="Georgia.svg", @content_type="application/octet-stream", @headers="Content-Disposition: form-data; name=\"file\"; filename=\"Georgia.svg\"\r\nContent-Type: application/octet-stream\r\n", @tempfile=#<File:/var/folders/g0/m3jlqvpd4cbc3khznvn5c_7m0000gn/T/RackMultipart20130507-52672-1sw119a>>, "originalFileName"=>"Georgia.ttf"}
My controller code is this:
def target
  @incoming_file = params[:file]
  file_name = params[:originalFileName]
  File.open("/Users/my_home_directory/#{file_name}", "w+b") { |f| f.write(@incoming_file) }
end
Right now, I can create a file on my hard drive that contains a line of text that shows the object.
This is the content of the file created on my hard drive:
#<ActionDispatch::Http::UploadedFile:0x007fa03cd1c318>
I can write a file with the name of the uploaded file.
I can't seem to figure out how to write the data from the file to my drive. I'm new to ruby on rails. Help me see what I'm missing.
Thx.
The obvious solution would be the one suggested by Richie Min, but it is a bad one in terms of memory usage, which may become critical once you start uploading large files, since
File.open(...) {|f| f.write(@incoming_file.read)}
reads the whole uploaded file into memory with @incoming_file.read. A better option would be:
def target
  @incoming_file = params[:file]
  file_name = params[:originalFileName]
  FileUtils.mv @incoming_file.tempfile, "/Users/my_home_directory/#{file_name}"
end
Uploaded data is always stored in a temporary file, and UploadedFile.read is actually just a proxy to the underlying File object, which is accessible through UploadedFile.tempfile. This, however, may also not be the best solution if the temporary folder and the destination directory are on different partitions or even different disk drives, but it is still much better than reading the whole file into memory in a Rails controller.
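If that cross-partition caveat matters, a hedged sketch of a streaming copy (same params, and the same example destination path, as above):
def target
  incoming_file = params[:file]
  file_name = params[:originalFileName]
  # stream the tempfile to its destination in chunks: memory use stays flat
  # and it works even when the temp dir and destination are on different partitions
  File.open("/Users/my_home_directory/#{file_name}", "wb") do |out|
    IO.copy_stream(incoming_file.tempfile, out)
  end
end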
You can test it with:
curl -X POST -F file=@[some large file] -F originalFileName=somefilename.ext http://[your url]
Use:
File.open("/Users/my_home_directory/#{file_name}", "w+b") {|f| f.write(@incoming_file.read)}

Write data from rails to S3 line by line or once as a larger whole?

I have a script which takes a file, manipulates some data, and writes an output .csv file. The .csv file should be available for the user to view and download. This is a rails app on heroku with S3.
Right now the script writes a hard-coded local filesystem output file line by line. When I integrate this script with Rails, Heroku, and Amazon S3, do I have to restructure it to build an array line by line in the controller and write it to S3 once as a whole? Or do I keep writing to S3 line by line as I do locally?
It seems like I need to build an array in the controller and post it to S3? Then a controller 'show' action would reference the file for the instance variables used in the view. It almost makes me wonder whether the user could just build the CSV on the client side and never store a file on S3 at all? Is this a job for AJAX?
I'm looking at the aws-sdk now to access the file as I would any other file on my local system.
A rough example of the as-is, write-per-line code:
file_in.each_line do |line|
  # some line manipulation
  file_out << output
end
It would be easy to switch this code to build an array and then write once... I originally wrote it line by line so I don't hold the whole file in a large array...
@array = []
file_in.each_line do |line|
  # some line manipulation
  @array.push(output)
end
file_out << @array
S3 is not a local filesystem - you need to build the file locally and then send it to S3 (there is software that will make S3 look like a filesystem, although I don't know whether you can get that to run on Heroku).
If your file is large you can do a multipart upload, but each part (other than the last) must be at least 5 MB.
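A small sketch of that build-locally-then-upload flow with the aws-sdk-s3 gem (the bucket name, key, and region are assumptions; file_in stands in for the input from the question's snippet):
require 'tempfile'
require 'aws-sdk-s3'

# build the CSV locally first (a Tempfile is fine on Heroku's ephemeral filesystem)
tmp = Tempfile.new(['report', '.csv'])
file_in.each_line do |line|
  output = line # placeholder for the real line manipulation
  tmp << output
end
tmp.close

# then push the finished file to S3; upload_file switches to multipart
# uploads automatically for large files
s3 = Aws::S3::Resource.new(region: 'us-east-1')
s3.bucket('my-bucket').object('reports/output.csv').upload_file(tmp.path)
tmp.unlink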

rails reading lines of file before upload with paperclip

I am trying to upload a file in Rails (using Paperclip), and I want to process some of the file data before letting Paperclip send it off to S3 storage. In my controller, I just grab the file parameter (which does give me a file) and then I try to read the lines into an array:
csv_file = params[:activity][:data]
array = IO.readlines(csv_file.path)
The problem is, I'm only getting the last line of the file. I tried using .rewind, but still get just the last line.
I dislike readlines; I always use regular expressions instead. Try splitting the file content on the end-of-line character \n yourself.
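A hedged sketch of that approach (the regex covering \r\n, \n, and a bare \r is an assumption; mismatched line endings are a common reason readlines appears to return only one line):
contents = File.read(csv_file.path)
# split on any common end-of-line sequence instead of relying on readlines' default
array = contents.split(/\r\n|\n|\r/)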
Handy block structure to ensure that the file handle is closed:
File.open(csv_file.path) do |f|
  a = f.readlines
  # process a ...
end
Reading a whole file into memory might not be a good idea depending on the size of the files.
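If the files can get large, a small sketch of processing line by line instead, still using the block form so the handle is closed:
File.open(csv_file.path) do |f|
  f.each_line do |line|
    # handle one line at a time without loading the whole file into memory
  end
end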
