How to update a file/image in ROR using AWS S3? - ruby-on-rails

I'm developing an image editing app in Ruby on Rails, and I want to update my image on AWS S3 cloud storage.
Currently I have a system where the user signs in, then uploads the image to S3 via a CarrierWave uploader (using fog in production).
An AJAX call to the controller then triggers the editing with the mini_magick gem, and finally the image is reloaded.
The problem is that I don't know how to re-upload it to S3 (update the image). Locally it works fine; the problem only appears in production on Heroku with S3.
One of the answers I found was this, but it doesn't work for me: AWS::S3::S3Object.store 's3/path/to/untitled.txt', params[:textarea_contents_params_name], your_bucket_name.
This is my code:
def flip # controller
  imagesource = params["imagesource"].to_s # URL
  image = MiniMagick::Image.open("#{imagesource}")
  image.combine_options do |i|
    i.flip
  end
  # image.write "#{imagesource}" # development

  # production
  AWS::S3::Base.establish_connection!(
    :access_key_id     => 'xxxxxxxxxxxxxx',
    :secret_access_key => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  )
  AWS::S3::S3Object.store '#{imagesource}', image, 'mybucket'
  AWS::S3::S3Object.store('#{imagesource}', open(image), 'mybucket') # 2nd attempt

  respond_to do |format|
    format.json { render :json => { :result => image } }
  end
end

You need to write the file to disk (as you do in development) before attempting to upload it to S3. Your code should explicitly write the data to a temporary location (e.g. use a Tempfile) rather than let that location be controlled by user input.
If, as the comment suggests, the user input is a URL, recent security updates may prevent you from passing a URL directly to MiniMagick. If so, download the image first (to a temporary file) and then pass that to MiniMagick.
It looks like you are using aws-s3 for your S3 uploads (which is very old and unmaintained). If so, it's not completely clear what the values of the parameters are, but when you upload the image to S3 you should probably specify only the path portion of the URL as the key. The second argument to store can be either an IO object (such as an instance of File) or a string containing the actual data.
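Putting those pieces together, a minimal sketch of that approach (not a drop-in fix: the key derivation, bucket name, and environment-variable credentials are assumptions, and it keeps the legacy aws-s3 API from the question):
require 'open-uri'
require 'tempfile'

def flip
  uri = URI.parse(params["imagesource"].to_s)

  # download the remote image to a Tempfile instead of handing the raw URL to MiniMagick
  tempfile = Tempfile.new(['flip', File.extname(uri.path)])
  tempfile.binmode
  tempfile.write(uri.open.read)
  tempfile.rewind

  image = MiniMagick::Image.open(tempfile.path)
  image.combine_options { |i| i.flip }
  image.write(tempfile.path)

  AWS::S3::Base.establish_connection!(
    :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
  )

  # the key is just the path portion of the URL; the second argument can be an IO
  key = uri.path.sub(%r{\A/}, '')
  AWS::S3::S3Object.store(key, File.open(tempfile.path), 'mybucket')

  respond_to do |format|
    format.json { render :json => { :result => key } }
  end
end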

Related

Updating Paperclip path file names from on server to s3

I have a Paperclip instance whose files I am migrating to a different area. Originally the files were stored on my server and given a filename based on the id of the record created and the original id. Now I'm moving them to S3 and want to update the filenames to work appropriately. I set up my Paperclip config like so:
:path => ":class/:attachment/:hash-:style.:extension",
:url => ":s3_domain_url",
:hash_secret => SECRET,
:hash_data => ":class/:attachment/:id/:updated_at"
I updated the original records' filenames to be unique and moved the files over to my S3 instance. Unfortunately, I am now unable to pull the files down from S3, and I think it is because Paperclip is using the wrong path for the filenames, one based on the path default now set via my config file. I want to update the file_name field for my files so that the path is correct for the new files and I can download them appropriately. Is there a way to call Paperclip's hashing function directly, based on my secret and hash_data, so I can update those file_name fields and pull those records now? Everything uploaded since the move from my original servers seems to work appropriately.
Say you have a model User with an attachment named profile_pic.
Go into the Rails console (e.g. rails c) and get an object for the model that has the attachment, e.g. u = User.find(100).
Now type u.profile_pic.url to get the URL, or u.profile_pic_file_name to get the filename.
To see the effect of other options (for example your old options) you can do:
p = u.profile_pic # gets the paperclip attachment for profile_pic
puts p.url # gets the current url
p.options.merge!(url: '/blah/:class/:attachment/:id_partition/:style/:filename')
puts p.url # now shows url with the new options
Similarly p.path will show the local file path with whatever options you pick.
Long story short, something like:
User.where('created_at < some_date').map do |x|
  "#{x.id} #{x.profile_pic_file_name} #{x.profile_pic.path}"
end
should give you what you want :)
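If the goal is to persist the corrected names, a hypothetical follow-up in the same spirit (cutoff_date, the old path pattern, and the column name are placeholders; adapt them to your schema and back up first):
old_path_pattern = '/blah/:class/:attachment/:id_partition/:style/:filename'

User.where('created_at < ?', cutoff_date).find_each do |user|
  attachment = user.profile_pic
  attachment.options.merge!(path: old_path_pattern)  # evaluate paths under the old options
  user.update_column(:profile_pic_file_name, File.basename(attachment.path))
end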

Carrierwave + Fog + S3 remove file without going through a model

I am building an application that has a chat component. The application allows users to upload files to the chat. The chat is all JavaScript, but I wanted to use CarrierWave for the uploads because I am using it elsewhere in the application. I handle the uploads through AJAX so that I can get into Rails land and let CarrierWave take over.
I have been able to get the chat to successfully upload the files to the correct location in my S3 bucket. The thing I can't figure out is how to delete the files. Here is the code that uploads the files; this is the method called from the route that the AJAX call hits.
def upload
  file = File.open(params[:file_0].tempfile)
  uploader = ChatUploader.new
  uploader.store!(file)
end
There is little to no documentation in CarrierWave on how to upload files without going through a model, and basically NO documentation on how to remove files without going through a model. I assume it is possible, though; I just need to know what to call. So I guess my question is: how do I delete files?
UPDATE (11/23)
I got the code to save and delete files from S3 using these methods:
# code to save the file
def upload
  file = File.open(params[:file_0].tempfile)
  uploader = ChatUploader.new
  uploader.store!(file)
  uploader.store_path
end

# code to remove files
def remove_file
  file = params[:file]
  uploader = ChatUploader.new
  uploader.retrieve_from_store!(file)
  uploader.remove!
end
My only issue now is that the filename for the uploaded file is not correct. It saves all files with a "RackMultipart" prefix followed by some numbers that look like a date, a time, and an identifier (example: RackMultipart20141123-17740-1tq4j1g). I need to use the original filename, plus maybe a timestamp for uniqueness.
I believe it has something to do with these two lines:
file = File.open(params[:file_0].tempfile)
and
uploader.store!(file)
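One way to keep the original filename (an untested sketch, not from the original thread): pass the uploaded file object itself to CarrierWave instead of re-opening its tempfile, since the uploaded file carries original_filename and CarrierWave will use it when building the store path:
def upload
  uploader = ChatUploader.new
  uploader.store!(params[:file_0])  # ActionDispatch::Http::UploadedFile responds to original_filename
  uploader.store_path
end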

Rails: allow download of files stored on S3 without showing the actual S3 URL to user

I have a Rails application hosted on Heroku. The app generates and stores PDF files on Amazon S3. Users can download these files for viewing in their browser or to save on their computer.
The problem I am having is that although downloading these files via the S3 URL (like "https://s3.amazonaws.com/my-bucket/F4D8CESSDF.pdf") is possible, it is obviously NOT a good way to do it. It is not desirable to expose so much information about the backend to the user, not to mention the security issues that arise.
Is it possible to have my app somehow retrieve the file data from S3 in a controller, then create a download stream for the user, so that the Amazon URL is not exposed?
You can create your S3 objects as private and generate temporary public URLs for them with the url_for method (aws-s3 gem). This way you don't stream files through your app servers, which is more scalable. It also allows session-based authorization (e.g. Devise in your app), tracking of download events, etc.
To do this, change direct links to S3-hosted files into links to a controller action that creates the temporary URL and redirects to it. Like this:
class HostedFilesController < ApplicationController
  def show
    s3_name = params[:id] # sanitize name here, restrict access to only some paths, etc.
    AWS::S3::Base.establish_connection!( ... )
    url = AWS::S3::S3Object.url_for(s3_name, YOUR_BUCKET, :expires_in => 2.minutes)
    redirect_to url
  end
end
Hiding the Amazon domain in download URLs is usually done with DNS aliasing. You need to create a CNAME record aliasing your subdomain, e.g. downloads.mydomain, to s3.amazonaws.com. Then you can specify the :server option in AWS::S3::Base.establish_connection!(:server => "downloads.mydomain", ...) and the S3 gem will use it when generating links.
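For example, a connection configured for that CNAME alias might look like this (credentials via environment variables are an assumption; the domain is the alias from above):
AWS::S3::Base.establish_connection!(
  :access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
  :server            => 'downloads.mydomain'
)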
Yes, this is possible: just fetch the remote file with Rails and either store it temporarily on your server or send it directly from the buffer. The problem with this is of course that you need to fetch the file before you can serve it to the user. See this thread for a discussion; their solution is something like this:
# environment.rb
require 'open-uri'

# controller
def index
  data = open(params[:file]).read
  send_data data, :filename => params[:name], ...
end
This issue is also somewhat related.
First, you need to create a CNAME record for your domain, as explained here.
Second, you need to create a bucket with the same name that you put in the CNAME.
Finally, add this configuration to config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  ...
  config.asset_host = 'http://bucket_name.your_domain.com'
  config.fog_directory = 'bucket_name.your_domain.com'
  ...
end

Rails and Paperclip, use a different image, not an uploaded one to be processed

I was wondering if there is a way to specify a different image to be processed by Paperclip?
So instead of the user uploading an image, an image URL would be used, or an existing image on your server could just be pointed to?
Cheers.
Edit:
Just to be clear, what I'm looking for is: when an image is uploaded, it is processed, moved to a folder, and attached. That base image is not uploaded via a form but instead is fetched from a URL and then processed, moved, etc.
Here's a method to use a RemoteFile instead of an uploaded file. Check out the blog post below for how to create a RemoteFile, which is a subclass of Tempfile:
# console
remote_file = RemoteFile.new("http://www.google.com/intl/en_ALL/images/logo.gif")
remote_file.original_filename #=> logo.gif
remote_file.content_type      #=> image/gif

# controller
def import
  # ...snip
  imported_user.images.create(:file => RemoteFile.new(url_to_image))
  # ...snip
end
http://coryodaniel.com/index.php/2010/03/05/attaching-local-or-remote-files-to-paperclip-and-milton-models-in-rails-mocking-content_type-and-original_filename-in-a-tempfile/
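In case the post is unavailable, here is a hypothetical sketch of such a RemoteFile (a Tempfile subclass that downloads the URL and exposes the names Paperclip expects; this is an assumption about the approach, not the blog's exact code):
require 'open-uri'
require 'tempfile'

class RemoteFile < Tempfile
  attr_reader :original_filename, :content_type

  def initialize(url)
    uri = URI.parse(url)
    @original_filename = File.basename(uri.path)
    super('remote_file')                 # Tempfile constructor needs a basename
    binmode
    remote = uri.open                    # open-uri adds #open to URI::HTTP
    @content_type = remote.content_type
    write(remote.read)
    rewind
  end
end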
I'm not sure if your requirement is for the user to provide this image or not, but there is a way to use a default image if they don't provide one. This is done using the :default_url and :default_style options, like so:
has_attached_file :my_image,
  :default_url => "/path/to/default_image.jpg",
  :default_style => :thumb

amazon s3 virtual hosting of bucket

I have uploaded a file to S3 using Paperclip, and the upload process works fine.
Now I want to download it. In my model I have set :s3_host_alias. Since the file is private, if I try to fetch it using Paperclip's url method, I get an access denied error.
And if I use the S3Object.url_for method, the URL returned is s3.amazonaws.com/mybucket/path_of_file.
I don't want that s3.amazonaws.com shown in the URL, so I used :s3_host_alias in my model and created a CNAME in my DNS server. Now if I use #object.url directly, it gives the correct URL but throws an access denied error, I guess because the access key and signature are not passed.
Is there a way to fetch a private file from S3 using Paperclip via the canonical URL?
I don't use Paperclip, but yes, you can sign an S3 request using a virtual hostname.
I had this problem using Paperclip and the AWS::S3 gem. Paperclip set everything up fine for non-authenticated requests, but falling back to AWS::S3 to generate an authenticated URL didn't use the S3 host alias.
You can pass AWS::S3 a :server option on connect, but I didn't need or want a connection just to get the URL. I also couldn't see a way to set it via configuration (so it would apply outside of a connection). Even glancing at the source, it looks like it's non-configurable.
So, I created a monkey patch. My Ruby-fu (and maybe my OO-fu) aren't super high, so there may be a better way to do this, but it works for what I need. Basically, I pass url_for an :s3_host_alias param in the options hash, and the monkey patch uses that if it's passed. If it is, it also has to remove the bucket from the path that's generated.
So....
You can create this one-line file, RAILS_ROOT/config/initializers/load_patches.rb, to load all patches in RAILS_ROOT/lib/patches:
Dir[File.join(Rails.root, 'lib', 'patches', '**', '*.rb')].sort.each { |patch| require(patch) }
Then create the file RAILS_ROOT/lib/patches/aws.rb with this code:
http://pastie.org/1622881
And you can call for an authenticated url with something along these lines (Configuration is a custom class for storing, natch, configuration values) :
AWS::S3::S3Object.url_for(media.path(style || media.default_style), media.bucket_name, :expires_in => expires_in, :use_ssl => false, :s3_host_alias => Configuration.s3_host_alias)