I'm transferring a WAV file from the server to storage after manipulation. I transfer the file to Heroku, then confirm the file exists before running the manipulation; however, File.exists? reports that the file does not exist. I suspect it's a naming or path issue, but I can't figure it out.
I save the file URL in the file object, which gives a URL like the example below:
/uploads/wav_file/wbWavAudioFile/116/REff7e0b513481000322f530c849ddcccd.wav
On the HTML page I can access and read this, as well as download the file from the Heroku instance. (I re-rake the files on deploy; this is proof-of-concept work and I will use persistent storage later.)
However, if I call

if File.exists?(Rails.root + call.wbAudioFile.url)
  puts "file exists"
  # do some manipulation to the file
else
  puts "file DOES NOT EXIST"
end
I get "file not found": the check always falls through to the "does not exist" branch.
Is there a case-sensitivity issue? A / instead of \ issue?
Or am I needing to define the path differently?
Advice appreciated.
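One hedged guess worth checking: uploader URLs like /uploads/... are typically served out of the app's public/ directory, so a path built directly off Rails.root misses that segment. A sketch (rails_root here is a stand-in for Rails.root, and it assumes the file really does live under public/):

```ruby
# Stand-ins for Rails.root and the stored uploader URL from the question.
rails_root = "/app"
url = "/uploads/wav_file/wbWavAudioFile/116/REff7e0b513481000322f530c849ddcccd.wav"

# File.join collapses the doubled separator, producing
# /app/public/uploads/... rather than /app/uploads/...
path = File.join(rails_root, "public", url)

# File.exist?(path) would then be the check to run before manipulating.
```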
I circumvented this issue: since the production model will post the files to a bucket anyway, I just coded for the production specification instead of testing locally.
Related
I am using the aws-sdk gem to read a CSV file stored in AWS S3.
Referencing the AWS doc. So far I have:
Aws::S3::Resource.new.bucket(ENV['AWS_BUCKET_NAME']).object(s3_key).get({ response_target: "#{Rails.root}/tmp/items.csv" })
In Pry, this returns:
output error: #<IOError: closed stream>
However, navigating to tmp/, I can see the items.csv file and it contains the right content. I am not certain whether the return value is an actual error.
My second concern: is it fine to store temporary files in "#{Rails.root}/tmp/"?
Or should I consider another approach?
I can load the file in memory and then CSV.parse. Will this have implications if the CSV file is huge?
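On the huge-file worry: CSV.foreach streams one row at a time instead of loading the whole file into memory, so memory use stays flat regardless of file size. A minimal stdlib sketch (the Tempfile stands in for the downloaded items.csv):

```ruby
require "csv"
require "tempfile"

# Write a small CSV to a temp file to simulate the downloaded items.csv.
tmp = Tempfile.new(["items", ".csv"])
tmp.write("id,name\n1,widget\n2,gadget\n")
tmp.close

names = []
# CSV.foreach yields one row at a time, so only a single row is ever
# held in memory (unlike CSV.parse on a full File.read).
CSV.foreach(tmp.path, headers: true) { |row| names << row["name"] }
# names => ["widget", "gadget"]

tmp.unlink
```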
I'm not sure how to synchronously return a file object using the aws gem.
But I can offer some advice on the other topics you mentioned.
First of all, /tmp - I've found that saving files here is a working approach. On AWS, I've used this directory to create a local LRU cache for S3-stored images. The key thing is to preempt the situation where the file has been automatically deleted: the file needs to be refetched if this happens. By the way, Heroku has a 'read-only filesystem' but still permits you to write into /tmp.
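That refetch-on-miss pattern might look roughly like this (all names here are hypothetical, and the block stands in for the actual S3 fetch):

```ruby
require "fileutils"
require "tmpdir"

CACHE_DIR = File.join(Dir.tmpdir, "s3_cache_demo")

# Return a local cached path for key; if the ephemeral tmp copy has
# been cleaned up, refetch the bytes via the supplied block.
def cached_path(key)
  path = File.join(CACHE_DIR, key)
  unless File.exist?(path)            # tmp files can vanish at any time
    FileUtils.mkdir_p(CACHE_DIR)
    File.binwrite(path, yield(key))   # cache miss: refetch and store
  end
  path
end

first = cached_path("asset.bin") { |k| "bytes for #{k}" }
File.delete(first)                     # simulate the platform wiping tmp
second = cached_path("asset.bin") { |k| "bytes for #{k}" }
```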
The second part is the question of synchronously returning a file object.
While it may be possible to do this using the S3 gem, I've found success fetching it over HTTP using something like open-uri or mechanize. If it's not supposed to be a publicly-available asset, you can change the permissions on S3 to restrict access to your server.
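A fetch-over-HTTP sketch with open-uri; the tiny local server here just stands in for the S3 endpoint so the example is self-contained and needs no network access:

```ruby
require "open-uri"
require "socket"

# Minimal one-shot HTTP server standing in for the S3-hosted asset.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]

thread = Thread.new do
  client = server.accept
  # Consume the request line and headers up to the blank line.
  while (line = client.gets) && line.chomp != ""
  end
  client.write("HTTP/1.1 200 OK\r\nContent-Length: 5\r\nConnection: close\r\n\r\nhello")
  client.close
end

# In practice the URL would be the (restricted) S3 asset URL.
body = URI.open("http://127.0.0.1:#{port}/asset").read
thread.join
server.close
```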
I am using Cloud Foundry. I upload a file and save it; my routine returns the path and filename:
/var/vcap/data/dea/apps/Dwarfquery-0-99065f0be8880d91916257931ed91162/app/tmp/region1-legends10-11-2012-20:53.xml
However, the scheduled Resque routine which tries to read it using File.open returns the following error:
Errno::ENOENT
Error
No such file or directory - /var/vcap/data/dea/apps/Dwarfquery-0-99065f0be8880d91916257931ed91162/app/tmp/region1-legends10-11-2012-20:53.xml
This is the path returned by the upload server. I have added require 'open-uri' at the top of my Job class.
The line that is failing is
File.open(fpath, 'r+') do |f|
where fpath is the file path returning the error.
I'm not proficient with ruby at all, but just to clarify:
Are the uploading bit and the Resque routine part of the same "app" (in the Cloud Foundry sense)?
Are you trying to read the file soon after it has been uploaded, or long after (in particular, after your app has/could have been restarted?)
This is important because:
Each "app" has its own temporary folder and obviously one app can't access another app's filesystem. This also holds if you deployed your app with multiple "instances". Each instance is a separate process that has its own filesystem.
local filesystem storage is ephemeral and is wiped clean every time the app restarts
If you need to access binary data between apps, you will want to use some kind of storage (e.g. Mongo's GridFS) to have it persisted and visible by both apps.
I'm trying to create PDFs that can be stored on an external server.
I do this:
File.new("temp.pdf", "w").close
File.open("temp.pdf", "wb") do |f|
  f.write(bytes)
end
File.open("temp.pdf", "r") do |f|
  # upload `f` to server
end
File.delete("temp.pdf")
then upload them to the server.
On my local machine this works fine, but, I recently tried running on another machine, and I got a permissions error in the log.
Is there a way to:
Write bytes to a file.
Never touch the hard disk.
Why don't you just upload the bytes to the server?
You may have to go a little lower-level than normal, but check for instance the UploadIO class of the multipart-post gem.
I realized I have to write to a file and delete it afterwards, since UploadIO takes an open file.
So I created a new file, wrote the content to it, passed it via File.open to UploadIO, and then deleted the file after sending it.
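For what it's worth, an in-memory StringIO can often replace the temp file entirely when the consumer only needs an object that responds to read; if I recall correctly, UploadIO also accepts an IO plus an explicit filename, but verify that against the multipart-post docs. A minimal stdlib sketch:

```ruby
require "stringio"

bytes = "%PDF-1.4 fake pdf bytes"

# StringIO behaves like an open file but lives entirely in memory,
# so nothing is ever written to (or deleted from) the disk.
io = StringIO.new(bytes)
data = io.read           # anything that only calls #read accepts this

# With multipart-post this would look roughly like (from memory, unverified):
#   UploadIO.new(StringIO.new(bytes), "application/pdf", "temp.pdf")
```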
I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3, but the user requests the file via the Rails application (hiding the actual URL). If the file were on my server's local file-system, I could use the Apache header X-Sendfile to free up the Ruby process for other requests while Apache took over the task of sending the file to the client. But in my case, where the file is not on the local file-system but on S3, it seems that I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client that is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem.

First (and I'm not 100% sure you care about this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs at your bucket and use a custom URL instead of the Amazon one. To do that, you need to point your DNS to the correct Amazon URL. When I set mine up it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example). How to call that URL will differ depending on which gem you use, but a word of warning: the attachment_fu plugin I was using was incorrectly sending me to files.domain.com/files.domain.com/name_of_file...; I couldn't find the setting to fix it, so a simple .sub on the S3 portion of the plugin fixed it.
On to your other questions: to execute some Rails code (like recording the hit in the DB) before downloading, you can simply do this:
def download
  file = File.find(...)
  # code to record 'hit' to database
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but still gives you the ability to run some Ruby first. (Of course the above code won't work as is; you will need to point it at the correct file and bucket, and my Amazon keys are saved in a config file. The above also uses the syntax of the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit more tricky. Hopefully, your situation is a bit more simple than mine and the following solution can work. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment by
file.content_disposition = "attachment"
file.save
The above code can be executed after the file exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be set when you upload the file (syntax depends on your plugin). I'm still trying to find a way to tell S3 to send it as an attachment only when requested (not every time); if you find that, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) into HTML. I'm not using the above-mentioned redirect, but fortunately it seems that if you embed a file with the Content-Disposition: attachment header (such as with an HTML image tag), the browser still displays the image normally (though I haven't thoroughly tested that across enough browsers to send it into the wild).
Hope that helps! Good luck.
I need to inspect the bits of an uploaded file before it's ever saved off to the file system. PHP's documentation has a nice page that tells me exactly what properties are available for me to use (http://us3.php.net/manual/en/features.file-upload.post-method.php), but I can't find something similar for Ruby and/or Rails.
I've also tried logging a JSON-formatted string of the upload, but that just gives me a redundant UTF-8 error. I can't think of anything else to try.
Can anyone offer any insight or point me to the right place?
Thanks.
UPDATE: I'm running Apache 2.2.11 on OS X (Leopard) in case Peter is right (see below).
UPDATE: In case it helps, my input parameter is logged as "upload"=>#<File:/tmp/RackMultipart.64239.1>. I'm just not sure how to access it to get to its "parts".
As far as I've been able to tell or find, there is no physical file until an upload is read. This is in line with derfred's reply. The only metadata that can be accessed is:
uploaded_file.content_type # the uploaded file's MIME type
uploaded_file.original_path # which is really just the name of the file
Additionally, there's a read method on uploaded_file that allows the file's content to be accessed and, presumably, written to the permanent file system.
Something else that I've noticed is that this content_type property on the uploaded file is the only MIME-type information the upload gives you. Once the file is on the file system, there's no built-in way of accessing or determining the MIME type.
I think this depends on the web server you're using. I remember having different fields for mongrel, apache and nginx.
AFAIK Rails and the various app servers totally abstract the upload part. However here is a thorough discussion of the topic:
http://www.jedi.be/blog/2009/04/10/rails-and-large-large-file-uploads-looking-at-the-alternatives/
This is just a File object, something that you can duplicate by going:
File.open("some_file")
The /tmp/RackMultipart.64239.1 is just a filename.
If you want to see/output its contents from the controller:
puts params[:upload].read
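To inspect the bits themselves before saving anything permanently, you can read a few leading bytes and rewind. A sketch using a Tempfile to stand in for the Rack upload (which behaves the same way, since params[:upload] is just an open file):

```ruby
require "tempfile"

# A Tempfile standing in for the Rack upload (e.g. /tmp/RackMultipart.64239.1).
upload = Tempfile.new("RackMultipart")
upload.binmode
upload.write("\x89PNG\r\n\x1a\n rest of the image".b)  # fake PNG bytes
upload.rewind

# Peek at the first 8 bytes and compare against the PNG magic number.
magic = upload.read(8)
png = magic == "\x89PNG\r\n\x1a\n".b

upload.rewind   # rewind so later code can still read the whole body
```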