Paperclip: Choose between saving files locally or to S3 at runtime - ruby-on-rails

I'm using Paperclip to save files, and I have successfully configured it to save files directly to Amazon S3. But in some situations I need files to be saved locally instead. My question is: how can I do this?
Here is my example Paperclip configuration:
Paperclip.interpolates(:upload_url) { |attachment, style| "#{ENV.fetch('UPLOAD_PROTOCOL', 'http')}://#{ENV.fetch('UPLOAD_DOMAIN', 'localhost:3000')}/uploads/:class/:attachment/:id_:style.:extension" }

Paperclip::Attachment.default_options.merge!(
  storage: :s3,
  s3_region: ENV['CEPH_REGION'],
  s3_protocol: 'http',
  s3_host_name: ENV['CEPH_HOST_NAME'],
  s3_credentials: {
    access_key_id: ENV['CEPH_ACCESS_KEY_ID'],
    secret_access_key: ENV['CEPH_SECRET_KEY'],
  },
  s3_options: {
    endpoint: ENV['CEPH_END_POINT'],
    force_path_style: true
  },
  s3_permissions: 'public-read',
  bucket: ENV['CEPH_BUCKET'],
  url: ':s3_path_url',
  path: ':class/:id/:basename.:extension',
  use_timestamp: false
)

module Paperclip
  def self.string_to_io(options)
    data = StringIO.new(options[:data])
    data.class.class_eval { attr_accessor :original_filename }
    data.original_filename = options[:original_file_name]
    data
  end
end

You could use a lambda, as shown in the Paperclip README's section on dynamic configuration: https://github.com/thoughtbot/paperclip#dynamic-configuration
It would look something like this:
class YOURMODEL < ActiveRecord::Base
  has_attached_file :FILE, storage: lambda { |attachment| attachment.instance.use_s3? ? :s3 : :filesystem }
end
Then you would add a use_s3? method to your model that decides whether the file should be stored locally or on S3.
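The predicate itself is plain Ruby. A minimal sketch, without Rails, assuming a hypothetical model with an archived flag (both the class and the flag are made up for illustration; in a real app the flag would likely be a database column):

```ruby
class Document
  # Hypothetical flag standing in for whatever condition your app uses.
  attr_accessor :archived

  # Paperclip evaluates the storage lambda per attachment instance,
  # so this predicate can vary record by record.
  def use_s3?
    !archived # e.g. keep archived files on the local filesystem
  end
end
```

With the lambda above, a record where use_s3? returns false is stored with :filesystem, everything else with :s3.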

Related

Paperclip - Upload to different S3 URLs from Different models

I have a model that uses Paperclip to store attachments on S3.
In paperclip.rb:
bucket_name = (Rails.env != 'production') ? 'mcds_staging_fulltext' : 'mcds_fulltext'

Paperclip::Attachment.default_options.merge!({
  storage: :s3,
  s3_credentials: {
    bucket: bucket_name
  },
  url: "#{CUSTOMER}/static_cover_images/:style/:basename.:extension",
  path: "#{CUSTOMER}/static_cover_images/:style/:basename.:extension"
})
In model1.rb:

has_attached_file :cover_image, styles: { original: ['100%'], thumbnail: ['100x100', :png] }
validates_attachment_content_type :cover_image, content_type: /\Aimage\/.*\Z/, message: 'Invalid Content Type. Please upload jpg/jpeg/png/gif'
validates_attachment_size :cover_image, in: 0.megabytes..5.megabytes, message: 'must be smaller than 5 MB'
This works properly and stores my images in the correct location on S3.
Now I have another model from which I need to upload a Paperclip attachment to a different S3 location.
In model2.rb:

has_attached_file :xslt
Paperclip::Attachment.default_options.merge!({ url: "#{CUSTOMER}/xslts/:style/:basename.:extension" })
validates_attachment_content_type :xslt, content_type: "application/xslt+xml", message: 'Invalid Content Type. Please upload jpg/jpeg/png/gif'
validates_attachment_size :xslt, in: 0.megabytes..5.megabytes, message: 'must be smaller than 5 MB'
But this attachment is still stored at model1's S3 location, not at the URL specified in model2.
What am I doing wrong?
I completely removed the paperclip.rb file and added the Paperclip configuration to the respective models.
model1:

has_attached_file :file1,
  storage: :s3,
  s3_credentials: {
    bucket: bucket_name
  },
  url: "#{CUSTOMER}/cover_images/:style/:basename.:extension",
  path: "#{CUSTOMER}/cover_images/:style/:basename.:extension"

model2:

has_attached_file :file2,
  storage: :s3,
  s3_credentials: {
    bucket: bucket_name
  },
  url: "#{CUSTOMER}/file2_folder/:style/:basename.:extension",
  path: "#{CUSTOMER}/file2_folder/:style/:basename.:extension"
This stores the attachments correctly in different S3 folders.
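Since per-model configuration repeats the shared S3 options, one way to keep it DRY is a small helper that builds the options hash for each model. This is a sketch, not Paperclip API: S3_DEFAULTS, attachment_options, and the CUSTOMER / S3_BUCKET environment variables are assumed names to adapt to your setup.

```ruby
# Shared S3 options; each model merges in its own folder.
S3_DEFAULTS = {
  storage: :s3,
  s3_credentials: { bucket: ENV['S3_BUCKET'] }
}.freeze

def attachment_options(folder)
  template = "#{ENV['CUSTOMER']}/#{folder}/:style/:basename.:extension"
  S3_DEFAULTS.merge(url: template, path: template)
end
```

Each model would then call, e.g., has_attached_file :cover_image, attachment_options('static_cover_images').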

How to change S3 bucket URL to bucket first then url second

I am creating a rails API app using Paperclip and aws-sdk gems.
The app saves the attachment URL as a string. The saved URL is the following:
http://s3.amazonaws.com/S3_BUCKET_/profiles/avatars/000/000/001/original/avatar.png?1457514823
I can't open the image at that URL. It's because the working URL, as taken from S3, is the following:
http://S3_BUCKET_.s3.amazonaws.com/profiles/avatars/000/000/001/original/avatar.png?1457514823
See how the bucket comes first? But the URL saved in the database has the bucket second. How do I change the saved URL to put the bucket first?
config/initializers/paperclip.rb:

Paperclip::Attachment.default_options.update(
  default_url: "https://#{Rails.application.secrets.bucket}.s3-ap-southeast-2.amazonaws.com/" \
               "profiles/avatars/default/missing.jpg")
config/aws.yml:

development: &defaults
  access_key_id: s3_access_key
  secret_access_key: s3 secret key
  s3_region: ap-southeast-2
test:
  secret_access_key: s3 secret key
staging:
  <<: *defaults
  access_key_id: s3_access_key
  secret_access_key: <%= ENV["SECRET_KEY_BASE"] %>
production:
  <<: *defaults
  access_key_id: s3_access_key
  secret_access_key: <%= ENV["SECRET_KEY_BASE"] %>
profile.rb has the attachment:
require "base64"

class Profile < ActiveRecord::Base
  belongs_to :user
  validates :user, presence: true

  has_attached_file :avatar, styles: { thumb: "100x100>" }
  validates_attachment_content_type :avatar, content_type: /image/i

  def avatar_url
    avatar && avatar.url
  end

  def avatar_base64=(image_base64)
    file = Paperclip.io_adapters.for(image_base64)
    file.original_filename = file.content_type.sub("image/", "avatar.")
    self.avatar = file
  end
end
You can add a default URL in config/initializers/paperclip.rb like this:

Paperclip::Attachment.default_options[:url] = ':s3_domain_url'

Or you can configure it directly in your environment configuration, e.g. config/environments/production.rb:

config.paperclip_defaults = {
  storage: :s3,
  url: ':s3_domain_url',
  ...
}
It's important to note that :s3_domain_url is a string, not a symbol.
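To see the difference the two interpolations make, here are the two URL shapes side by side. The bucket name and key are made-up values for illustration (the real ones come from your configuration):

```ruby
bucket = 'my-bucket'
key    = 'profiles/avatars/000/000/001/original/avatar.png'

# :s3_path_url puts the bucket in the path (the broken URL in the question):
path_url   = "http://s3.amazonaws.com/#{bucket}/#{key}"

# :s3_domain_url puts the bucket in the hostname (bucket first):
domain_url = "http://#{bucket}.s3.amazonaws.com/#{key}"
```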

Associate object to pre-existing file on S3, using Paperclip

I have a file already on S3 that I'd like to associate to a pre-existing instance of the Asset model.
Here's the model:
class Asset < ActiveRecord::Base
  attr_accessible(:attachment_content_type, :attachment_file_name,
                  :attachment_file_size, :attachment_updated_at, :attachment)

  has_attached_file :attachment, {
    storage: :s3,
    s3_credentials: {
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    },
    convert_options: { all: '-auto-orient' },
    url: ':s3_alias_url',
    s3_host_alias: ENV['S3_HOST_ALIAS'],
    path: ":class/:attachment/:id_partition/:style/:filename",
    bucket: ENV['S3_BUCKET_NAME'],
    s3_protocol: 'https'
  }
end
Let's say the path is assets/attachments/000/111/file.png, and the Asset instance I want to associate with the file is asset. Referring to the Paperclip source, I've tried:
options = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  convert_options: { all: '-auto-orient' },
  url: ':s3_alias_url',
  s3_host_alias: ENV['S3_HOST_ALIAS'],
  path: "assets/attachments/000/111/file.png",
  bucket: ENV['S3_BUCKET_NAME'],
  s3_protocol: 'https'
}
# The above is identical to the options given in the model, except for the path.

Paperclip::Attachment.new("file.png", asset, options).save
As far as I can tell, this did not affect asset in any way. I cannot set asset.attachment.path manually.
Other questions on SO do not seem to address this specifically.
"paperclip images not saving in the path i've set up", "Paperclip and Amazon S3 how to do paths?", and so on involve setting up the model, which is already working fine.
Anyone have any insight to offer?
As far as I can tell, I do need to turn the S3 object into a File, as suggested by @oregontrail256. I used the Fog gem to do this:
s3 = Fog::Storage.new(
  provider: 'AWS',
  aws_access_key_id: ENV['AWS_ACCESS_KEY_ID'],
  aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
)

directory = s3.directories.get(ENV['S3_BUCKET_NAME'])
fog_file = directory.files.get(path)

file = File.open("temp", "wb")
file.write(fog_file.body)
asset.attachment = file
asset.save
file.close
Paperclip attachments have a copy_to_local_file method that allows you to make a local copy of the attachment. So what about:

file_name = "temp_file"
asset1.attachment.copy_to_local_file(:style, file_name)

file = File.open(file_name)
asset2.attachment = file
file.close
asset2.save!
Even if you destroy asset1, you still have a copy of the attachment saved separately with asset2. You probably want to do this in a background job if you're copying many of them.
Credit to this answer too: How to set a file upload programmatically using Paperclip
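One caveat with the snippet above: the fixed "temp_file" name can collide if several copies run concurrently, e.g. in background jobs. A sketch using the standard library's Tempfile, which generates a unique path and deletes the file after the block (with_local_copy is a made-up helper name):

```ruby
require 'tempfile'

# Yields a uniquely named temporary file and deletes it when the block ends.
# Returns whatever the block returns.
def with_local_copy(basename)
  Tempfile.create(basename) do |file|
    yield file
  end
end
```

Inside the block you would call asset1.attachment.copy_to_local_file(:original, file.path), reopen the file, and assign it to asset2 exactly as above.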

Paperclip multiple storage

I want to move my assets folder to Amazon S3. Since it is quite large, during the transition I need Paperclip to upload files both to my local storage and to Amazon S3.
Is there a way to configure Paperclip to store uploaded files both on the filesystem and on Amazon S3?
Maybe you'd benefit from this:
http://airbladesoftware.com/notes/asynchronous-s3/
What you'll have to do is first upload to your local storage, and then "asynchronously" upload to S3.
This is typically done with the likes of Resque or DelayedJob (as the tutorial demonstrates), and it requires running some background-processing infrastructure on your server (typically backed by Redis or similar).
From the tutorial:
### Models ###

class Person < ActiveRecord::Base
  has_attached_file :local_image,
    path: ":rails_root/public/system/:attachment/:id/:style/:basename.:extension",
    url: "/system/:attachment/:id/:style/:basename.:extension"

  has_attached_file :image,
    styles: { large: '500x500#', medium: '200x200#', small: '70x70#' },
    convert_options: { all: '-strip' },
    storage: :s3,
    s3_credentials: "#{Rails.root}/config/s3.yml",
    s3_permissions: :private,
    s3_host_name: 's3-eu-west-1.amazonaws.com',
    s3_headers: { 'Expires' => 1.year.from_now.httpdate,
                  'Content-Disposition' => 'attachment' },
    path: "images/:id/:style/:filename"

  after_save :queue_upload_to_s3

  def queue_upload_to_s3
    Delayed::Job.enqueue ImageJob.new(id) if local_image? && local_image_updated_at_changed?
  end

  def upload_to_s3
    self.image = local_image.to_file
    save!
  end
end

class ImageJob < Struct.new(:person_id)
  def perform
    person = Person.find person_id
    person.upload_to_s3
    person.local_image.destroy
  end
end
### Views ###

# app/views/people/edit.html.haml
# ...
= f.file_field :local_image

# app/views/people/show.html.haml
- if @person.image?
  = image_tag @person.image.expiring_url(20, :small)
- else
  = image_tag @person.local_image.url, size: '70x70'

Rails Paperclip S3 ArgumentError (missing required :bucket option):

I've been stuck on this for ages and can't figure out what's wrong. A lot of people seem to have this same problem, but I can't find any answers that actually work.
production.rb:

config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV['my bucket name is here'],
    access_key_id: ENV['my key is here'],
    secret_access_key: ENV['my secret key is here']
  }
}
game.rb:

require 'aws/s3'

class Game < ActiveRecord::Base
  attr_accessible :swf, :swf_file_name, :name, :description, :category, :age_group, :dimension_x, :dimension_y, :image, :image_file_name, :feature_image, :feature_image_file_name, :developer, :instructions, :date_to_go_live, :date_to_show_countdown, :plays

  has_attached_file :swf
  has_attached_file :image
  has_attached_file :feature_image

  def swfupload_file=(data)
    data.content_type = MIME::Types.type_for(data.original_filename).first.content_type
    logger.warn("Data content type is: #{data.content_type}")
    self.file = data
  end
end
paperclip.rb
Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Paperclip::Attachment.default_options[:path] = '/:class/:attachment/:id_partition/:style/:filename'
Here is my paperclip initialization stuff:
Paperclip::Attachment.default_options.merge!({
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['S3_KEY'],
    secret_access_key: ENV['S3_SECRET'],
    bucket: "#{ENV['S3_BUCKET']}-#{Rails.env}"
  },
  url: ":s3_domain_url",
  path: "/:class/:attachment/:id_partition/:style/:filename"
})
This assumes that we have three environment variables set up, named, you guessed it... S3_KEY, S3_SECRET, and S3_BUCKET. I used a little trick so that I could have a different bucket in each environment: appending Rails.env to the bucket name.
You seem to indicate in your question that you're putting the actual name of the bucket in the reference to ENV, which would not work. You should put the name of the bucket in the environment variable and use the name of the environment variable as the key.
I hope this helps.
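The distinction in miniature, using a plain hash to stand in for ENV (the variable name and bucket value are made up): the environment variable's name is the lookup key, while the bucket name is the value stored under it.

```ruby
env = { 'S3_BUCKET' => 'my-actual-bucket-name' }

env['my-actual-bucket-name'] # looked up by value, not key -- the bug; returns nil
env['S3_BUCKET']             # looked up by key -- returns the bucket name
```

With a nil bucket, Paperclip raises exactly the ArgumentError (missing required :bucket option) from the question.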
