Ruby: storing remote files in a zip using RubyZip

I have a model called Image. Images have files attached using Dragonfly that are stored in S3.
I have a requirement that I need to zip up all images.
I'm using:
Zip::ZipFile.open(tmp_zip, Zip::ZipFile::CREATE) do |zipfile|
  zipfile.add("image.jpg", image_path)
end
The problem I'm running into is that this only works if image_path is local. When the file has to be fetched from S3, image_path is a remote URL, such as http://example.s3.amazonaws.com/foo/image.jpg, and I don't think RubyZip has a method that handles that.
I'm debating writing something that creates a temp file from the remote path, adds that temp file to the zip, and then deletes the temp file.
But before I do that, does anyone know whether RubyZip or some other zip library handles zipping up remote files? Or is there a better/easier method?
Thanks!
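For reference, the temp-file fallback described in the question might look like this (a minimal sketch; image_path and tmp_zip are as above, and the remote file is fetched with open-uri):

require "open-uri"
require "tempfile"

tmp = Tempfile.new(["image", ".jpg"])
tmp.binmode
tmp.write(URI.parse(image_path).read) # fetch the remote file
tmp.flush

Zip::ZipFile.open(tmp_zip, Zip::ZipFile::CREATE) do |zipfile|
  zipfile.add("image.jpg", tmp.path) # add the local copy to the zip
end

tmp.close! # close and delete the temp file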

I faced the same issue and found a solution, so I'm sharing it in case it helps someone.
You can add a remote file to a zip without saving it to a temp file, reading it back, and finally deleting the temp file.
require "open-uri" # needed for URI#read on remote URLs

# create the zip and add remote files to it
Zip::OutputStream.open(tmp_zip) do |zos|
  zos.put_next_entry("image.jpg")
  zos.print(URI.parse(image_url).read)
end
If you also want to add local files to the temp_zip above, you can open it again:
# open the zip again and add any local files you want
zipfile = Zip::File.open(tmp_zip)
zipfile.add("report.pdf", my_pdf_path)
zipfile.close
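If this is happening inside a Rails controller action, you can then hand the finished archive to the browser; a hypothetical usage line:

send_file tmp_zip, :type => "application/zip", :filename => "images.zip"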

One option would be to mount S3 locally. There are various ways to do this, from FTP-like programs to dedicated tools, and it depends on the OS you're running.
I don't see a way to stream a remote URL directly into a zip.

Related

ICS FTP - Upload a folder containing multiple subfolders and files to an FTP server

I'm trying to upload a folder to an FTP server using the Overbyte ICS FTP component.
From what I understand there is no built-in function to upload a folder containing files and sub-folders, so I have to recurse in order to upload them all in one call.
What is the correct approach to this problem?
I'm thinking about doing this:
scan the local folder that I want to upload and separate folders from files
for each folder name, check whether it exists on the FTP server; if not, create it
after creating all the folders on the FTP server, check whether each local file exists there; if not, upload it to the created directory
Is this the proper way to do it?
Is there an easier approach to this task? (A sketch of the recursion is below.)
Thank you!
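The question concerns Delphi's Overbyte ICS, but the recursion itself is language-agnostic. Here is a minimal sketch of the same approach using Ruby's standard-library Net::FTP; the host, credentials, and paths are placeholders, and it simply re-uploads files rather than checking whether they already exist:

require "net/ftp"

def upload_tree(ftp, local_dir, remote_dir)
  begin
    ftp.mkdir(remote_dir) # create the remote folder
  rescue Net::FTPPermError
    # it probably exists already; ignore
  end
  Dir.children(local_dir).each do |entry|
    local_path = File.join(local_dir, entry)
    remote_path = "#{remote_dir}/#{entry}"
    if File.directory?(local_path)
      upload_tree(ftp, local_path, remote_path) # recurse into sub-folders
    else
      ftp.putbinaryfile(local_path, remote_path) # upload a single file
    end
  end
end

Net::FTP.open("ftp.example.com", "user", "password") do |ftp|
  upload_tree(ftp, "/local/folder", "/remote/folder")
end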

How do you create a file without touching the hard disk?

I'm trying to create PDFs that can be stored on an external server.
I do this:
File.new("temp.pdf", "w").close
File.open("temp.pdf", "wb") do |f|
f.write(bytes)
end
File.open("temp.pdf", "r") do |f|
# upload `f` to server
end
File.delete("temp.pdf")
then upload them to the server.
On my local machine this works fine, but when I recently ran it on another machine I got a permissions error in the log.
Is there a way to:
Write bytes to a file.
Never touch the hard disk.
Why don't you just upload the bytes to the server?
You may have to go a little lower-level than normal, but check for instance the UploadIO class of the multipart-post gem.
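A minimal sketch of that suggestion, assuming the multipart-post gem: UploadIO accepts any IO object, so wrapping the bytes in a StringIO keeps everything in memory. The URL and the "file" field name here are placeholders:

require "stringio"
require "net/http/post/multipart" # from the multipart-post gem

url = URI.parse("http://example.com/upload")
io = StringIO.new(bytes) # the PDF bytes, never written to disk
req = Net::HTTP::Post::Multipart.new(url.path,
  "file" => UploadIO.new(io, "application/pdf", "temp.pdf"))
res = Net::HTTP.start(url.host, url.port) { |http| http.request(req) }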
I realize I have to write to a file and delete it afterwards, since UploadIO takes an open file.
So I created a new file, wrote the content to it, passed it via File.open to UploadIO, and then deleted the file after sending it.

Ruby file copy produces different file

I'm not very familiar with file handling in Ruby. A problem I've come across is that reading and writing a binary file doesn't produce exactly the same file.
clone = Tempfile.new(tempfile.original_filename)
FileUtils.copy_stream(tempfile, clone)
clone.flush
Comparing the two files (the screenshot is omitted here) makes it clear that this is not an exact copy: when I try to open the newly created file in an image viewer, it reports that the file is corrupt. I have tried copying the file in different ways, such as clone.write(tempfile.read), without success.
The file viewer also indicates that the original is ANSI Dos/Windows while the clone is ANSI Macintosh, and the file sizes differ by about 200 bytes.
What I'm actually trying to accomplish is simply to use a Tempfile twice. A file is uploaded via Rails and given to me as a Tempfile. I want to submit it to two different RESTful services, but RestClient.post closes the file automatically. Another option would be to submit some sort of in-memory stream clone to RestClient so that it cannot close my file. If I submit File.open(tempfile.path) to RestClient it produces the same broken file, which indicates that the reading is the problem rather than the writing. If I submit the original Tempfile object to RestClient it works perfectly, but then it is closed and deleted and I cannot send it again.
Please help!
Regards,
Pierre
It would be much more helpful to see a hex view of these files instead of a text editor's interpretation. My guess is that at least one of the files is not opened in binary mode. In Ruby 1.9, try
open(filename, 'rb')
open(filename, 'wb')
Tempfile.new(filename, :binmode => true)
for opening a file for reading / writing, and for creating a binary temporary file, respectively.
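Along those lines, a binary-safe version of the question's copy might look like this (a sketch; tempfile is the uploaded file from the question):

require "tempfile"

clone = Tempfile.new("clone")
clone.binmode # raw bytes: no newline or encoding translation
tempfile.rewind # read the upload from the beginning
IO.copy_stream(tempfile, clone)
clone.flush
clone.rewind # ready to be read again, e.g. by RestClient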

How do I generate files and then zip/compress with Heroku?

I sort of want to do the reverse of this.
Instead of unzipping and adding the collection of files to S3, I want to:
On user's request:
generate a bunch of xml files
zip the xml files with some images (pre-existing images hosted on s3)
download zip
Does anybody know a good way of doing this? I think I could manage this no problem on a normal machine, but Heroku complicates things somewhat in that it has a read-only filesystem.
From the heroku documentation on the read-only filesystem:
There are two directories that are writeable: ./tmp and ./log (under your application root). If you wish to drop a file temporarily for the duration of the request, you can write to a filename like #{RAILS_ROOT}/tmp/myfile_#{Process.pid}. There is no guarantee that this file will be there on subsequent requests (although it might be), so this should not be used for any kind of permanent storage.
You should be able to write your generated XML files to tmp/ fairly easily and keep track of the names, download and write the S3 files to the same directory, and (maybe?) invoke a zip command as long as the output lands in tmp/, then serve the file to the browser with the correct MIME type to prompt a download. My only concerns would be how big the file is and whether Heroku has an undocumented limit on what it allows in the tmp directory. Since you are only performing this action for a one-time download within a single request, I think you have a good chance of being able to do it.
Edit: Looking around a bit, you might be able to use something like RubyZip to create your zip file if you want to avoid calling system commands.
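Putting that together, a hedged sketch of the whole flow as a controller action, using Zip::OutputStream as in the first answer above; xml_documents and image_urls are hypothetical helpers, and the RAILS_ROOT path follows the Heroku documentation quoted earlier:

require "zip" # rubyzip (older versions: require "zip/zip")
require "open-uri"

def download
  tmp_zip = "#{RAILS_ROOT}/tmp/export_#{Process.pid}.zip"
  Zip::OutputStream.open(tmp_zip) do |zos|
    xml_documents.each do |name, xml| # { "report-1" => "<xml .../>", ... }
      zos.put_next_entry("#{name}.xml")
      zos.print(xml)
    end
    image_urls.each do |url| # pre-existing images hosted on S3
      zos.put_next_entry(File.basename(url))
      zos.print(URI.parse(url).read)
    end
  end
  send_file tmp_zip, :type => "application/zip", :filename => "export.zip"
end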

Ruby on Rails: Image downloads with Authentication/Authorization/Timeouts

I have a few doubts about implementing file downloads. I'm creating an app where I use attachment_fu with Amazon S3 to upload files. Things are working pretty well so far on the uploading side. Now it's time to start on the file downloads. Here is what I need: a logged-in user searches and browses for images, and should be able to add the files to a download basket (let's say it's a download shopping cart). Finally, the user should be able to download these file(s) from S3, probably as a zipped file.
Is there any plugin/gem where I can use for this?
The downside of giving the customer a zip file of all the files is that you'll first need to pull all of the files from S3 back onto your server and then zip them.
You can certainly do that if you want, but it will take a bit of time, so you would not want to do it synchronously as part of the browser request. Instead, do it as a background job using delayed_job or similar.
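For example, the classic delayed_job pattern of enqueueing a Struct-based job; ZipImagesJob and its contents are hypothetical:

# a hypothetical job: pull the files from S3, zip them, notify the user
class ZipImagesJob < Struct.new(:image_ids, :user_id)
  def perform
    # download each image, build the zip, then e-mail the user a link
  end
end

Delayed::Job.enqueue(ZipImagesJob.new(image_ids, current_user.id))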
To do the actual compression, you can use Zlib::GzipWriter (see http://ruby-doc.org/core/classes/Zlib/GzipWriter.html), which is part of standard Ruby. Note, though, that gzip compresses a single stream; to bundle several files into one archive you'd combine it with tar, or use a zip library such as RubyZip.
You could then:
email the user the actual zip file as an attachment
email the user the link to the zip file on your server
or upload the zip file to s3, then email a link to the zip file on s3
Remember to create a clean-up task/job to remove the old zip files from your system.
An alternative is not to zip the files together at all, and instead give the user one or more links to download the files separately.
S3 enables you to create a URL to an S3 file that can be used for a set period of time. (The file stays private on S3, so a straight link to it won't work.) Here's how to create one using attachment_fu and the aws-s3 gem:
# I added this as a method to my model for the files stored in S3
def authenticated_s3_url
  # return a publicly usable url
  connect_to_aws # a local method which connects/re-connects to s3
  S3Object.url_for(full_filename,
                   bucket_name,
                   :expires_in => 60 * 60) # 1 hour
end
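Hypothetical usage, e.g. when building the user's download basket (@basket is a placeholder name):

urls = @basket.images.map { |image| image.authenticated_s3_url }
# each link works for one hour; e-mail them or render them in a view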
