I have a Paperclip setup from which I am migrating my files to a different location. Originally the files were stored on my server and simply given a filename based on the id of the record created and the original id. Now I'm moving them to S3 and want to update the filenames to work appropriately. I set up my Paperclip config like so:
:path => ":class/:attachment/:hash-:style.:extension",
:url => ":s3_domain_url",
:hash_secret => SECRET,
:hash_data => ":class/:attachment/:id/:updated_at"
I updated the original records' filenames for my files to be unique and moved them over to my S3 instance. Unfortunately I am now unable to pull down the files from S3, and I think it is because Paperclip is using the wrong path for the filenames: one based on the :path default that is now set in my config file. I want to be able to update my files' file_name field so that the path is correct for the new files and I am able to download them appropriately. Is there a way to call Paperclip's hashing function directly, based on my secret and hash_data, so I can update those file_name fields and be able to pull those records now? Everything that has been uploaded since the move from my original servers seems to work appropriately.
Say you have a model User with an attachment named profile_pic;
Go into the Rails console (rails c) and get an object for the model you have the attachment on, e.g. u = User.find(100).
Now type u.profile_pic.url to get the url or u.profile_pic_file_name to get the filename.
To see the effect of other options (for example your old options) you can do;
p = u.profile_pic # gets the paperclip attachment for profile_pic
puts p.url # gets the current url
p.options.merge!(url: '/blah/:class/:attachment/:id_partition/:style/:filename')
puts p.url # now shows url with the new options
Similarly p.path will show the local file path with whatever options you pick.
Long story short, something like;
User.where('created_at < some_date').map do |x|
  "#{x.id} #{x.profile_pic_file_name} #{x.profile_pic.path}"
end
should give you what you want :)
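To answer the original question about the hash itself: the attachment exposes the digest behind the :hash interpolation directly. A rough sketch, assuming the User/profile_pic example above and the SECRET and hash_data from your config; hash_key is the method behind :hash in the Paperclip versions I've looked at, and interpolate may be private (hence send), so verify both against your version:

require 'openssl'

u = User.find(100)
a = u.profile_pic

# Most Paperclip versions expose the value behind the :hash interpolation:
digest = a.hash_key(:original)

# It is just an HMAC (SHA1 by default) of the interpolated :hash_data string,
# keyed with :hash_secret, so you can also compute it by hand:
data = a.send(:interpolate, ":class/:attachment/:id/:updated_at", :original)
digest == OpenSSL::HMAC.hexdigest(OpenSSL::Digest::SHA1.new, SECRET, data) # => true

# With the digest you can work out the path Paperclip now expects
# (e.g. "users/profile_pics/#{digest}-original.jpg") and fix up any
# profile_pic_file_name values that no longer match.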
I'm developing an image editing app in Ruby on Rails, and I want to update my image on AWS S3 cloud storage.
Currently the user signs in, then uploads the image to S3 via a CarrierWave uploader (using fog in production).
An AJAX call to the controller then triggers the editing with the mini_magick gem, and finally the image is reloaded.
The problem is that I don't know how to re-upload the edited image to S3 (updating the image). Locally it works fine; the problem is in production on Heroku with S3.
One of the answers was this, but for me it doesn't work: (AWS::S3::S3Object.store 's3/path/to/untitled.txt', params[:textarea_contents_params_name], your_bucket_name).
This is my code:
def flip # controller
  imagesource = params["imagesource"].to_s # URL
  image = MiniMagick::Image.open("#{imagesource}")
  image.combine_options do |i|
    i.flip
  end
  # image.write "#{imagesource}" # development

  # production
  AWS::S3::Base.establish_connection!(
    :access_key_id => 'xxxxxxxxxxxxxx',
    :secret_access_key => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
  )
  AWS::S3::S3Object.store '#{imagesource}', image, 'mybucket'
  AWS::S3::S3Object.store('#{imagesource}', open(image), 'mybucket') # 2nd attempt

  respond_to do |format|
    format.json { render :json => {:result => image} }
  end
end
You need to write the file to disk (as you do in development) before attempting to upload it to S3. Your code should explicitly write the data to a temporary location (e.g. use a Tempfile) rather than letting that location be controlled by input from the user.
If, as the comment suggests, the user input is a URL, recent security updates may prevent you from passing a URL directly to MiniMagick. If so, download the image first (to a temporary file) and then pass that to MiniMagick.
It looks like you are using aws-s3 for your S3 uploads (super old and unmaintained, etc.). If so, it's not completely clear what the values of the parameters are, but when you upload the image to S3 you should probably specify only the path portion of the URL as the key. The second argument to store can be either an IO object (such as an instance of File) or a string containing the actual data.
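Putting those three points together, a rough sketch of the controller. This is hedged: it assumes imagesource is a full virtual-hosted-style S3 URL, keeps the aws-s3 gem from your question, and the bucket name and credential env vars are placeholders.

require 'open-uri'
require 'tempfile'

def flip
  imagesource = params["imagesource"].to_s # full URL to the image on S3

  # Download to a temp file we control instead of handing the raw URL to MiniMagick.
  tempfile = Tempfile.new(["flip", File.extname(imagesource)])
  tempfile.binmode
  URI.parse(imagesource).open { |remote| tempfile.write(remote.read) }
  tempfile.rewind

  image = MiniMagick::Image.open(tempfile.path)
  image.combine_options { |i| i.flip }
  image.write(tempfile.path) # write the edited image back to disk first

  # Upload using only the key (path) portion of the URL, passing an IO object.
  # (If the URL is path-style, strip the leading bucket segment as well.)
  key = URI.parse(imagesource).path.sub(%r{\A/}, "")
  AWS::S3::Base.establish_connection!(
    :access_key_id     => ENV["AWS_ACCESS_KEY_ID"],
    :secret_access_key => ENV["AWS_SECRET_ACCESS_KEY"]
  )
  AWS::S3::S3Object.store(key, File.open(tempfile.path), 'mybucket')

  respond_to do |format|
    format.json { render :json => { :result => key } }
  end
ensure
  tempfile.close! if tempfile
end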
I have a field on my model Site called file_link. In the edit form, I want there to be a field for file_link with a Browse button next to it, which pulls up a file browser on the user's local computer. I want them to be able to select a file, and then have Rails save the user's local path to the file, not the actual file.
For Example: file_link should save the path: N:\Projects\excelfile.xlsx
How can this be achieved?
When you visit a website, there is no way for the website to access your filesystem unless they somehow hack you. This isn't specific to Rails sites; it's a fundamental security precaution of the internet. If you want to access your users' filesystems, you may be able to do it with JavaScript after getting their permission, but it might not be easy. See Can javascript access a filesystem?
However, if you are building this app for localhost use only, you can use Ruby to manipulate/show the filesystem all you want. But it's going to be limited to the filesystem the Ruby program is running on.
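For the localhost-only case, the simplest thing is to treat file_link as a plain string column and let the user type or paste the path, since the browser won't hand you the real client-side path. A minimal sketch using the Site model and file_link field from the question; the existence validation is my own assumption about what you'd want:

class Site < ActiveRecord::Base
  validates :file_link, presence: true
  validate :file_link_must_exist_locally

  private

  # Only meaningful when the app and the files live on the same machine.
  def file_link_must_exist_locally
    return if file_link.blank?
    errors.add(:file_link, "does not exist on this machine") unless File.exist?(file_link)
  end
end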
There are a few gems that can help you with that. One of them is called CarrierWave.
Gem LINK
u = Site.new
u.file_link = params[:file] # Assign a file like this, or
# like this
File.open('somewhere') do |f|
  u.file_link = f
end
u.save!
u.file_link.url # => '/url/to/file.png'
u.file_link.current_path # => 'path/to/file.png'
u.file_link_identifier # => 'file.png'
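For the snippet above to work, the column needs an uploader mounted on it. A minimal sketch, assuming CarrierWave is in your Gemfile (the uploader name is arbitrary):

# app/uploaders/file_link_uploader.rb
class FileLinkUploader < CarrierWave::Uploader::Base
  storage :file # or :fog for S3
end

# app/models/site.rb
class Site < ActiveRecord::Base
  mount_uploader :file_link, FileLinkUploader
end

Note that this stores the uploaded file itself; CarrierWave cannot capture the user's original local path either, for the reasons above.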
Is it possible to allow the user to rename the uploaded file?
If there is a share link, will it be automatically updated? I am not able to do this since I can't first figure out how to rename the file.
You can rename the files and then change the record file name. For instance, based on this answer, you can do:
(record.image.styles.keys + [:original]).each do |style|
  path = record.image.path(style)
  FileUtils.move(path, File.join(File.dirname(path), new_file_name))
end
record.image_file_name = new_file_name
record.save
If you're using Amazon S3, you can do:
AWS::S3::S3Object.move_to record.image.path(style), new_file_path, record.image.bucket_name
Check this out: Paperclip renaming files after they're saved
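For completeness, a hedged sketch of the S3 variant, assuming Paperclip's S3 storage on top of the aws-sdk v1 gem, where the attachment exposes the underlying object via s3_object and AWS::S3::S3Object has an instance-level move_to; check both against the gem versions you actually run:

new_file_name = "renamed.png" # hypothetical new name

(record.image.styles.keys + [:original]).each do |style|
  old_key = record.image.path(style).sub(%r{\A/}, "")
  new_key = File.join(File.dirname(old_key), new_file_name)
  record.image.s3_object(style).move_to(new_key) # copies then deletes the old key
end

record.image_file_name = new_file_name
record.save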
I'm making a small app to upload plain text files using Paperclip. I have an Upload model that has a document attachment. I want to rename the uploaded file so that it is the same as Upload.title.
I've used a Paperclip interpolation to do this.
#config/initializers/paperclip.rb
Paperclip.interpolates('upload_title') do |attachment, style|
  attachment.instance.title.parameterize
end
#app/models/upload.rb
has_attached_file :document,
  :url => "/:attachment/:id/:upload_title.:extension",
  :path => ":rails_root/public/:attachment/:id/:upload_title.:extension"
However, while the file itself is renamed, the document_file_name in the database remains as it was.
I've made a test app and uploaded it to GitHub here.
Here I create a new Upload and attach the file "Original File Name.txt"
garethrees.co.uk/misc/new.JPG
Here you see the new Upload created, still with the original file name.
garethrees.co.uk/misc/created.JPG
And also in the database, the document_file_name remains the same as it was.
garethrees.co.uk/misc/db.JPG
However, in the actual filesystem the document is renamed.
garethrees.co.uk/misc/finder.JPG
I really need both records to match as I need to use the Paperclip path in order for users to download the files.
Thanks
Create a callback for after_document_post_process where you set document_file_name yourself to the title plus the extension.
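A sketch of that callback (untested), reusing the Upload model and :upload_title interpolation from the question; after_document_post_process is Paperclip's per-attachment callback, and instance_write is how the attachment updates its own *_file_name column:

# app/models/upload.rb
class Upload < ActiveRecord::Base
  has_attached_file :document,
    :url  => "/:attachment/:id/:upload_title.:extension",
    :path => ":rails_root/public/:attachment/:id/:upload_title.:extension"

  after_document_post_process :rename_document_to_title

  private

  # Keep document_file_name in sync with the parameterized title used in the path.
  def rename_document_to_title
    extension = File.extname(document_file_name)
    document.instance_write(:file_name, "#{title.parameterize}#{extension}")
  end
end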
I have uploaded a file to S3 using Paperclip, and the file upload process works fine.
Now I want to download it. In my model I have set :s3_host_alias. Since the file is private, trying to fetch the file using Paperclip's url method gives me an access denied error,
and if I use the S3Object.url_for method then the url returned is s3.amazonaws.com/mybucket/path_of_file.
I don't want that s3.amazonaws.com to be shown in the url, so I used :s3_host_alias in my model
and created a CNAME in my DNS server. Now if I use #object.url directly it gives the correct url but throws an access denied error, because I guess the access_key and signature are not passed.
Is there a way to fetch a private file from S3 with Paperclip using the canonical url?
I don't use Paperclip, but yes, you can sign an S3 request using a virtual hostname.
I had this problem using Paperclip and the AWS::S3 gem. Paperclip set up everything fine for non-authenticated requests. But falling back to AWS::S3 to generate an authenticated URL didn't use the S3 host alias.
You can pass AWS::S3 a server option on connect, but I didn't need or want a connection just to get the URL. I also couldn't see a way to set it via configuration (so it would apply outside of a connection). Even glancing at the source, it looks like it's non-configurable.
So, I created a monkey patch. My Ruby-fu (and maybe my OO-fu) aren't super high, so there may be a better way to do this, but it works for what I need. Basically, I pass url_for an :s3_host_alias param on the option hash, and then the monkey patch uses that if it's passed. If it's passed, it also has to remove the bucket from the path that's generated.
So....
You can create this one-line file, RAILS_ROOT/config/initializers/load_patches.rb, to load all patches in RAILS_ROOT/lib/patches:
Dir[File.join(Rails.root, 'lib', 'patches', '**', '*.rb')].sort.each { |patch| require(patch) }
Then create the file RAILS_ROOT/lib/patches/aws.rb with this code:
http://pastie.org/1622881
And you can call for an authenticated url with something along these lines (Configuration is a custom class for storing, natch, configuration values):
AWS::S3::S3Object.url_for(media.path(style || media.default_style), media.bucket_name, :expires_in => expires_in, :use_ssl => false, :s3_host_alias => Configuration.s3_host_alias)
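As a footnote, newer Paperclip versions expose signed, expiring URLs directly on the attachment, which may remove the need for the patch if upgrading is an option. Whether it honors :s3_host_alias depends on the version, so test it; a one-liner sketch, assuming an attachment named media:

signed_url = record.media.expiring_url(600, :original) # signed URL valid for 10 minutes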