I have a Project model which has_many :attachments.
In the attachment model I have the following code:
has_attached_file :document, default_url: "",
  storage: :s3,
  s3_credentials: {
    access_key_id: "...",
    secret_access_key: "..."
  },
  bucket: "projects",
  path: ":id/:filename"
Is path: ":id/:filename" enough to create a unique path? I can't find what options are available for path.
If every record in the attachments table maps to exactly one S3 key, then :id on its own is sufficient for uniqueness.
You might also consider obfuscating the file URLs with a hash, though; otherwise the URLs are predictable, which may not be desirable.
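Paperclip provides a :hash interpolation for exactly this kind of obfuscation; it requires a hash_secret option. A sketch of how the question's configuration might look with it (the secret value is a placeholder you'd generate yourself):

```ruby
has_attached_file :document, default_url: "",
  storage: :s3,
  s3_credentials: {
    access_key_id: "...",
    secret_access_key: "..."
  },
  bucket: "projects",
  hash_secret: "some-long-random-secret", # required by the :hash interpolation
  path: ":id/:hash/:filename"             # e.g. 42/<digest>/report.pdf
```

With the digest in the path, a user can't enumerate other records' files just by incrementing the :id segment.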
Related
I have three questions related to paperclip and AWS S3.
1) In my model that uses Paperclip, I have the following code:
has_attached_file :attachment,
  :url => "/songs/:user_id/:basename.:extension",
  :path => "/songs/:user_id/:basename.:extension"
What's the difference between URL and PATH?
2) What is :basename.:extension?
3) Let's say there are two models: User and File. User has_many :files, and the Paperclip path and url are configured in the File model.
In config/initializers/paperclip.rb, I put the code below:
Paperclip.interpolates :user_id do |attachment, style|
  attachment.instance.criteria.user_id
end
I can confirm the above code works fine: my file gets saved at songs/5/song.mp3. I would like to save the mp3 file at songs/user_id_5/song.mp3 instead. I tried the following, but it doesn't work.
Paperclip.interpolates :user_id do |attachment, style|
  'user_id_' + attachment.instance.criteria.user_id
end
How do I make it do what I want?
In S3 terms, the path is the key of your object and the url is your S3 endpoint.
From the docs:
url: There are four options for the S3 url. You can choose to have the bucket's name placed domain-style (bucket.s3.amazonaws.com) or
path-style (s3.amazonaws.com/bucket). You can also specify a CNAME
(which requires the CNAME to be specified as :s3_alias_url). You can
read more about CNAMEs and S3 at
docs.amazonwebservices.com/AmazonS3/latest/index.html?VirtualHosting.html
Normally, this won't matter in the slightest and you can leave the
default (which is path-style, or :s3_path_url). But in some cases
paths don't work and you need to use the domain-style
(:s3_domain_url). Anything else here will be treated like path-style.
path: This is the key under the bucket in which the file will be stored. The URL will be constructed from the bucket and the path. This is what you will want to interpolate. Keys should be unique, like filenames, and despite the fact that S3 (strictly speaking) does not support directories, you can still use a / to separate parts of your file name.
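To make the docs' distinction concrete, here is how the two URL styles differ for the same bucket and key (the bucket name and key below are made up for illustration):

```ruby
bucket = "projects"      # hypothetical bucket name
key    = "42/report.pdf" # the Paperclip :path becomes the S3 key

# path-style (:s3_path_url, the default): the bucket appears in the path
path_style = "https://s3.amazonaws.com/#{bucket}/#{key}"

# domain-style (:s3_domain_url): the bucket appears in the hostname
domain_style = "https://#{bucket}.s3.amazonaws.com/#{key}"

puts path_style    # https://s3.amazonaws.com/projects/42/report.pdf
puts domain_style  # https://projects.s3.amazonaws.com/42/report.pdf
```

Either way, the key (Paperclip's :path) is the part you interpolate; the url option only decides how the host and bucket are arranged around it.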
You can configure the bucket and credentials in your config and just pass the path (i.e. where to store the file) when you call the method:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: 'mybucket',
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region: 'aws_region_id'
  }
}
For question 2, I don't know.
For question 3, you need string interpolation:
Paperclip.interpolates :user_id do |attachment, style|
  "user_id_#{attachment.instance.criteria.user_id}"
end
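The reason the original version fails is that Ruby's String#+ does not coerce its argument: user_id is an Integer, so 'user_id_' + 5 raises a TypeError, while string interpolation calls #to_s for you. A quick standalone illustration (plain Ruby, no Paperclip needed):

```ruby
user_id = 5

begin
  'user_id_' + user_id # String#+ does not coerce; raises TypeError
rescue TypeError => e
  puts e.message # "no implicit conversion of Integer into String"
end

puts "user_id_#{user_id}" # interpolation calls #to_s => "user_id_5"
```

An equivalent fix is 'user_id_' + user_id.to_s; interpolation is just the more idiomatic form.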
We use CloudFront for our images hosted on S3 through Paperclip; however, it has some aggressive caching, and part of our code needs fresh data (some image manipulation).
Is there any way of overriding s3_host_alias when calling url?
Everything I've found so far on this topic talks about adding CloudFront, not about bypassing it; and even then, everything is system-wide.
Our paperclip config:
# Paperclip Config
Paperclip.options[:command_path] = "/usr/bin/"
config.paperclip_defaults = {
  storage: :s3,
  s3_protocol: :https,
  url: ':s3_alias_url',
  default_url: "https://#{SETTINGS['s3']['bucket']}.s3.amazonaws.com/missing/:class/:attachment/:style.png",
  s3_host_alias: SETTINGS['s3']['cdn_url'],
  s3_credentials: {
    bucket: SETTINGS['s3']['bucket'],
    access_key_id: SETTINGS['s3']['access_key_id'],
    secret_access_key: SETTINGS['s3']['secret_access_key']
  }
}
This might be dirty, but you can try this:
# Set the value in your controller
Photo.my_custom_attr = "My custom value"

# In your model
class Photo < ActiveRecord::Base
  cattr_accessor :my_custom_attr

  Paperclip.interpolates :my_custom_attr do |attachment, style|
    Photo.my_custom_attr
  end
end
I had this same need so I did some digging through the paperclip code and here's what I found:
The S3 storage module still registers the interpolation you're looking for; it just isn't used when you call #url on the attachment, since you probably have something like url: ':s3_alias_url' in your call to has_attached_file. So, what you can do is use the Paperclip interpolator manually by calling interpolate.
For example:
Paperclip::Interpolations.interpolate(':s3_path_url', User.first.avatar, :original)
You can also substitute the other interpolations the s3 module defines for ':s3_path_url', e.g. ':s3_domain_url'.
I have a file already on S3 that I'd like to associate to a pre-existing instance of the Asset model.
Here's the model:
class Asset < ActiveRecord::Base
  attr_accessible(:attachment_content_type, :attachment_file_name,
                  :attachment_file_size, :attachment_updated_at, :attachment)

  has_attached_file :attachment, {
    storage: :s3,
    s3_credentials: {
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    },
    convert_options: { all: '-auto-orient' },
    url: ':s3_alias_url',
    s3_host_alias: ENV['S3_HOST_ALIAS'],
    path: ":class/:attachment/:id_partition/:style/:filename",
    bucket: ENV['S3_BUCKET_NAME'],
    s3_protocol: 'https'
  }
end
Let's say the path is assets/attachments/000/111/file.png, and the Asset instance I want to associate with the file is asset. Referring to the source, I've tried:
options = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  convert_options: { all: '-auto-orient' },
  url: ':s3_alias_url',
  s3_host_alias: ENV['S3_HOST_ALIAS'],
  path: "assets/attachments/000/111/file.png",
  bucket: ENV['S3_BUCKET_NAME'],
  s3_protocol: 'https'
}
# The above is identical to the options given in the model, except for the path
Paperclip::Attachment.new("file.png", asset, options).save
As far as I can tell, this did not affect asset in any way. I cannot set asset.attachment.path manually.
Other questions on SO do not seem to address this specifically.
"paperclip images not saving in the path i've set up", "Paperclip and Amazon S3 how to do paths?", and so on involve setting up the model, which is already working fine.
Anyone have any insight to offer?
As far as I can tell, I do need to turn the S3 object into a File, as suggested by #oregontrail256. I used the Fog gem to do this.
s3 = Fog::Storage.new(
  :provider => 'AWS',
  :aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)
directory = s3.directories.get(ENV['S3_BUCKET_NAME'])
fog_file = directory.files.get(path)

file = File.open("temp", "wb")
file.write(fog_file.body)
file.rewind # rewind to the start so Paperclip reads the whole file, not EOF
asset.attachment = file
asset.save
file.close
Paperclip attachments have a copy_to_local_file() method that allows you to make a local copy of the attachment. So what about:
file_name = "temp_file"
asset1.attachment.copy_to_local_file(:style, file_name)
file = File.open(file_name)
asset2.attachment = file
file.close
asset2.save!
Even if you destroy asset1, you now have a copy of the attachment saved by asset2 separately. You probably want to do this in a background job if you're doing many of them.
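If you do move this into a background job, a rough sketch using ActiveJob might look like the following (the CopyAttachmentJob class name, the Asset model lookup, and the :original style are assumptions for illustration, not part of the original answer):

```ruby
# Hypothetical job wrapping the copy_to_local_file approach above.
class CopyAttachmentJob < ActiveJob::Base
  queue_as :default

  def perform(source_id, target_id)
    source = Asset.find(source_id)
    target = Asset.find(target_id)

    file_name = "temp_file_#{source_id}"
    source.attachment.copy_to_local_file(:original, file_name)

    File.open(file_name) do |file|
      target.attachment = file
      target.save!
    end
  ensure
    # Clean up the local temp copy whether or not the save succeeded.
    File.delete(file_name) if file_name && File.exist?(file_name)
  end
end
```

Using a per-record temp file name avoids collisions when several copies run concurrently on the same worker.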
Credit to this answer too: How to set a file upload programmatically using Paperclip
How is it possible to allow users to upload images on a website while the actual upload is handled entirely by Amazon's servers (so as not to burden your own servers with upload throughput)?
Can someone explain how this is performed?
I.e., when a user uploads an image, instead of streaming the file to my server and then from my server to Amazon's S3 service, it bypasses my server altogether and goes straight to Amazon.
You can check out these docs provided by Amazon.
You can implement the process by using a SWF uploader, or this gem.
CarrierWave can be used with CarrierWaveDirect to upload images directly to S3. This will also allow you to process the image in a background job.
However, if you want to completely eliminate both the upload and processing burden from your dynos, check out Cloudinary which is unique in that it does all image processing on their servers as well as providing storage for them.
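The mechanism behind all of these options is typically a presigned POST: your server signs an upload policy, and the browser submits the file directly to S3. A minimal sketch using the official aws-sdk-s3 gem (the bucket name, region, and key prefix are placeholders; check the gem's docs for the exact options you need):

```ruby
require "aws-sdk-s3"

# Server side: generate the URL and form fields the browser will POST to.
bucket = Aws::S3::Resource.new(region: "us-east-1").bucket("my-upload-bucket")

post = bucket.presigned_post(
  key: "uploads/${filename}",          # S3 substitutes the uploaded filename
  success_action_status: "201",        # have S3 return 201 + XML on success
  content_type_starts_with: "image/"   # restrict uploads to images
)

# Render these into a multipart form whose action bypasses your server:
post.url    # the form's action attribute
post.fields # hidden inputs (policy, signature, etc.) for the form
```

Your app then only receives the resulting key (e.g. via the S3 success response or a callback parameter), never the file bytes themselves.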
If you're using Paperclip, can't you just do the following?
Create an s3.yml file in config:
development:
  bucket: bucket-dev
  access_key_id: xxx
  secret_access_key: xxx

test:
  bucket: bucket-test
  access_key_id: xxx
  secret_access_key: xxx

production:
  bucket: bucket-pro
  access_key_id: xxx
  secret_access_key: xxx
# Paperclip
has_attached_file :photo,
  :styles => {
    :thumb => "100x100#",
    :small => "400x400>"
  },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => "/:style/:id/:filename"
What are some possible reasons that image upload with Paperclip works on my local machine but not when deployed to Heroku?
When it's deployed to Heroku, the image won't save.
As far as I know, you can't write persistently to Heroku's file system, so I am assuming that is your problem. It makes sense to use something like Amazon S3 for image storage. Take a look at this: Amazon S3 in Heroku
Once you have configured your S3 bucket, you'll want to change Paperclip's has_attached_file to something like this:
has_attached_file :my_picture,
  :styles => { :medium => "275x275>" },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => "user/:attachment/:style/:id.:extension"
Where s3.yml would be the configuration file where you define access keys, buckets...
It should look something like this:
production:
  access_key_id: [Your Key]
  secret_access_key: [Your Secret]
  bucket: [Your bucket name]
Here's another guide/article written by one of Paperclip's developers; it explains in detail how to integrate Paperclip with Heroku and S3.