How to retrieve a FileBlob from an 'ActionDispatch::Http::UploadedFile' instance? - ruby-on-rails

I have used the 'remotipart' gem to upload files asynchronously to the server. The instance passed to the server side is an 'UploadedFile'.
The Rails API documents all the methods (like read() and open()) and attributes for the class, but I am not sure how to retrieve the file and store it in the database.
Googling turned up no tutorials that use this class.

This is rarely documented because most people use gems to handle file uploads.
Let's say your object is called file. You could determine the path using this:
file.tempfile.to_path.to_s
You should move the file, because it is stored in /tmp and might be deleted by the system. You can use the FileUtils.mv method for this, and then save the file's new path to the database.
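Putting it together, here is a minimal sketch of moving the upload out of /tmp and recording where it went; the storage/uploads directory and the Upload model with a path column are assumptions for illustration, not part of the original question:

require "fileutils"

# `file` is the ActionDispatch::Http::UploadedFile from params
uploads_dir = Rails.root.join("storage", "uploads")
FileUtils.mkdir_p(uploads_dir)

# Move the tempfile out of /tmp so the system can't clean it up
destination = uploads_dir.join(file.original_filename)
FileUtils.mv(file.tempfile.path, destination.to_s)

# Persist the new location (Upload is a hypothetical model)
Upload.create!(path: destination.to_s)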

Related

What is the recommended approach to parse a CSV file stored in S3?

I am using the aws-sdk gem to read a CSV file stored in AWS S3.
Referencing the AWS docs, so far I have:
Aws::S3::Resource.new.bucket(ENV['AWS_BUCKET_NAME']).object(s3_key).get({ response_target: "#{Rails.root}/tmp/items.csv" })
In Pry, this returns:
output error: #<IOError: closed stream>
However, navigating to tmp/, I can see the items.csv file, and it contains the right content. I am not certain whether the return value is an actual error.
My second concern: is it fine to store temporary files in "#{Rails.root}/tmp/"?
Or should I consider another approach?
I could load the file into memory and then CSV.parse it. Will this have implications if the CSV file is huge?
I'm not sure how to synchronously return a file object using the aws gem.
But I can offer some advice on the other topics you mentioned.
First of all, /tmp - I've found that saving files here is a workable approach. On AWS, I've used this directory to create a local LRU cache for S3-stored images. The key thing is to preempt the situation where the file has been automatically deleted: if that happens, the file needs to be refetched. By the way, Heroku has a 'read-only filesystem' but still permits you to write into /tmp.
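A hedged sketch of that refetch-on-eviction check, where s3_object stands in for the Aws::S3::Object built in the question's snippet:

path = Rails.root.join("tmp", "items.csv")

# /tmp contents can vanish at any time, so refetch on a cache miss
s3_object.get(response_target: path.to_s) unless File.exist?(path)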
The second part is the question of synchronously returning a file object.
While it may be possible to do this using the S3 gem, I've found success fetching the file over HTTP using something like open-uri or mechanize. If it's not supposed to be a publicly-available asset, you can change the permissions on S3 to restrict access to your server.
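For example, a minimal open-uri sketch along those lines; the URL here is made up, and CSV.new is used so rows are streamed rather than loaded into memory all at once (which also addresses the huge-file concern above):

require "open-uri"
require "csv"

# Fetch over HTTP and parse row by row without holding the whole file in memory
URI.open("https://my-bucket.s3.amazonaws.com/items.csv") do |io|
  CSV.new(io, headers: true).each do |row|
    # process each row here
  end
end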

Laravel 5.1: renaming a file before uploading to S3

For safety reasons I would like to rename the files submitted to my application before uploading them to S3. For local storage I can use Storage::move afterwards, but for S3 I am having trouble. How do I do it? Also, instead of using move, is it possible to rename the files before storing them? Right now my app renames the files, without any action on my part, to something like phpK69RGR.jpg. Maybe I can just configure the random string method? I also tried using the PHP rename command before upload, but my web service started erroring out. I know it's a very basic question, but for some reason I am having trouble with it.
This is outlined in the docs.
$request->file('photo')->move($destinationPath, $fileName);
$fileName is an optional parameter that renames the file.
So with that, you could simply place this inside your controller:
// Generate a random file name
$fileName = str_random(30);
$request->file('photo')->move($destinationPath, $fileName);

Custom filepath on server parse.com

I'm working with parse.com for my server end. I'm wondering if there's a way for files to be saved into subfolders. For example my file is currently saved with a url like this:
http://files.parsetfss.com/bb2767e6-fc18-4ff5-a071-199803c9aac2/tfss-d056e28e-1e02-49dd-930b-e46790a2e38d-Drums.png
is there a way I can get it to look like this instead:
http://files.parsetfss.com/bb2767e6-fc18-4ff5-a071-199803c9aac2/tfss-d056e28e-1e02-49dd-930b-e46790a2e38d/Drums.png
and have the same identifier (tfss-d056e28e-1e02-49dd-930b-e46790a2e38d) apply to each row?
The reason I need this is that I'm actually uploading HTML files, and they can't find their assets if the assets get renamed...
Have a look at the Cloud Hosting documentation here:
https://parse.com/docs/hosting_guide
Basically whatever files/folders you put in the "public" folder will be publicly available.
You can use it to upload files you want to be served normally, instead of the way you described in your question, which is for files you want to attach to objects.
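As a rough illustration, a layout like this under public/ would keep an HTML file and its assets together so relative paths keep working after a parse deploy (the file names are just examples):

public/
  drums/
    index.html
    Drums.png    (referenced from index.html as simply "Drums.png")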

What is the appropriate extension for creating a database in iOS?

I am creating a database application using the SQLite3 library.
I have created a database file using each of the file extensions: .db, .sqlite, and .sql.
All are working fine for me, but my question is: which extension should I use in general?
Is there any difference between these file extensions?
The SQLite documentation seems to use the '.db' extension consistently, but I've seen plenty of SQLite files that use '.sqlite' instead. Use whatever is meaningful to you. For example, if you're using Core Data to create the database, you might use '.cd' or '.coredata' to remind yourself not to modify the database outside of Core Data. Unless you're planning to transfer the file to some other machine (and really, even then), it won't matter.
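To underline that the engine doesn't care: SQLite identifies a database file by its header bytes, not its name. A quick sketch using the Ruby sqlite3 gem (chosen only because it makes for a short example; any binding behaves the same way):

require "sqlite3"

# Any extension (or none at all) produces a perfectly valid database file
db = SQLite3::Database.new("app_data.records")
db.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")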
The database will live in your application's sandbox, so users will never have to know about the filename or the extension, and other applications typically won't ever see it either. Just give it a distinct name so you can tell it apart from other files that your app might be saving to the same location.

Alternative to X-Sendfile in Apache for sending a file given a URL?

I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3, but the user requests the file via the Rails application (hiding the actual URL). If the file were on my server's local file system, I could use the Apache header X-Sendfile to free up the Ruby process for other requests while Apache took over the task of sending the file to the client. But in my case - where the file is not on the local file system, but on S3 - it seems that I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client that is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem. First (and I'm not 100% sure you care about this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs to your bucket and use a custom URL instead of the Amazon one. To do that, you need to point your DNS at the correct Amazon URL. When I set mine up, it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example). How to call that URL will differ depending on which gem you use, but one word of warning: the attachment_fu plugin I was using was incorrectly sending me to files.domain.com/files.domain.com/name_of_file.... I couldn't find the setting to fix it, so a simple .sub method on the S3 portion of the plugin fixed it.
On to your other questions, to execute some rails code (like recording the hit in the db) before downloading you can simply do this:
def download
  file = File.find(...)
  # code to record 'hit' to database
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but it still gives you the ability to run some Ruby first. (Of course, the above code won't work as-is; you will need to point it at the correct file and bucket, and my Amazon keys are saved in a config file. The above also uses the syntax of the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit trickier. Hopefully your situation is simpler than mine and the following solution will work. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment with:
file.content_disposition = "attachment"
file.save
The above code can be executed after the file already exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be set when you upload the file (the syntax depends on your plugin). I'm still trying to find a way to tell S3 to send the file as an attachment only when requested (not every time); if you find one, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) into HTML. I'm not using the above-mentioned redirect, but fortunately it seems that if you embed a file that has the content-disposition/attachment header (via an HTML image tag, for example), the browser still displays the image normally (though I haven't thoroughly tested that across enough browsers to send it into the wild).
Hope that helps! Good luck.
