I am creating a Rails API app using the Paperclip and aws-sdk gems.
The app saves the URL as a string. The URL saved is the following:
http://s3.amazonaws.com/S3_BUCKET_/profiles/avatars/000/000/001/original/avatar.png?1457514823
I can't open the above image. That's because the URL for it, when taken from S3, is the following:
http://S3_BUCKET_.s3.amazonaws.com/profiles/avatars/000/000/001/original/avatar.png?1457514823
See how the bucket comes first? But the URL saved in the database has the bucket second. How do I change the saved URL to have the bucket first?
config/initializers/paperclip.rb
Paperclip::Attachment.default_options.update(
  default_url: "https://#{Rails.application.secrets.bucket}.s3-ap-southeast-2.amazonaws.com/" \
               "/profiles/avatars/default/missing.jpg")
config/aws.yml
development: &defaults
  access_key_id: s3_access_key
  secret_access_key: s3 secret key
  s3_region: ap-southeast-2

test:
  secret_access_key: s3 secret key

staging:
  <<: *defaults
  access_key_id: s3_access_key
  secret_access_key: <%= ENV["SECRET_KEY_BASE"] %>

production:
  <<: *defaults
  access_key_id: s3_access_key
  secret_access_key: <%= ENV["SECRET_KEY_BASE"] %>
profile.rb (the model that has the attachment):
require "base64"
class Profile < ActiveRecord::Base
belongs_to :user
validates :user, presence: true
has_attached_file :avatar, styles: { thumb: "100x100>" }
validates_attachment_content_type :avatar, content_type: /image/i
def avatar_url
avatar && avatar.url
end
def avatar_base64=(image_base64)
file = Paperclip.io_adapters.for(image_base64)
file.original_filename = file.content_type.sub("image/", "avatar.")
self.avatar = file
end
You can set the default :url option in config/initializers/paperclip.rb like this:
Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Or you can configure directly in your environment configuration, i.e. config/environments/production.rb:
config.paperclip_defaults = {
  storage: :s3,
  url: ':s3_domain_url',
  ...
}
It's important to note that :s3_domain_url is a string, not a symbol.
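For reference, here is roughly how the two interpolations shape the generated URL (the bucket name and path here are placeholders, not your actual values):
# url: ':s3_path_url'    =>  http://s3.amazonaws.com/your-bucket/profiles/avatars/.../avatar.png
# url: ':s3_domain_url'  =>  http://your-bucket.s3.amazonaws.com/profiles/avatars/.../avatar.png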
Related
I'm using Paperclip for saving files. I have configured Paperclip successfully to save files directly to Amazon S3, but in some situations I need files to be saved locally only. My question is: how can I do this?
Here is my example Paperclip configuration:
Paperclip.interpolates(:upload_url) { |attachment, style| "#{ENV.fetch('UPLOAD_PROTOCOL', 'http')}://#{ENV.fetch('UPLOAD_DOMAIN', 'localhost:3000')}/uploads/:class/:attachment/:id_:style.:extension" }
Paperclip::Attachment.default_options.merge!(
  storage: :s3,
  s3_region: ENV['CEPH_REGION'],
  s3_protocol: 'http',
  s3_host_name: ENV['CEPH_HOST_NAME'],
  s3_credentials: {
    access_key_id: ENV['CEPH_ACCESS_KEY_ID'],
    secret_access_key: ENV['CEPH_SECRET_KEY'],
  },
  s3_options: {
    endpoint: ENV['CEPH_END_POINT'],
    force_path_style: true
  },
  s3_permissions: 'public-read',
  bucket: ENV['CEPH_BUCKET'],
  url: ':s3_path_url',
  path: ':class/:id/:basename.:extension',
  use_timestamp: false
)
module Paperclip
  def self.string_to_io(options)
    data = StringIO.new(options[:data])
    data.class.class_eval { attr_accessor :original_filename }
    data.original_filename = options[:original_file_name]
    data
  end
end
You could use a lambda, as shown in the Paperclip README's section on dynamic configuration: https://github.com/thoughtbot/paperclip#dynamic-configuration
It would look something like this:
class YOURMODEL < ActiveRecord::Base
  has_attached_file :FILE, storage: lambda { |attachment| (attachment.instance.use_s3? ? :s3 : :filesystem) }
end
Then you would have to add a use_s3? method to your model that decides whether the file should be stored locally or on S3.
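A minimal sketch of that predicate, assuming you simply want S3 in production and local storage everywhere else (you could just as well key it off a boolean column on the model):
def use_s3?
  Rails.env.production?
end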
I am getting a 'fetch': key not found: "S3_BUCKET_NAME" (KeyError) error in Rails 4.2.3, using 'aws-sdk', '~> 2.3' and 'paperclip', '~> 5.0.0'.
I have set the keys in my environment via the terminal, and running heroku config shows them listed.
In both config/environments/development.rb and config/environments/production.rb I have included:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('AWS_ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('AWS_REGION'),
  }
}
I have also included the above code in my user.rb model, but for the sake of reference it looks like this in the model:
has_attached_file :avatar,
  styles: { medium: "300x300#", thumb: "100x100#" },
  :convert_options => { :thumb => "-quality 75 -strip" },
  :storage => :s3,
  :s3_credentials => {
    :bucket => ENV['S3_BUCKET_NAME'],
    :access_key_id => ENV['AWS_ACCESS_KEY_ID'],
    :secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region => ENV['AWS_REGION']
  },
  :path => ":filename.:extension",
  # :path => ":rails_root/public/system/:attachment/:id/:style/:filename",
  :default_url => "default_img.png"

validates_attachment_content_type :avatar, content_type: /\Aimage\/.*\Z/
I have also included the env vars in my secrets.yml:
development:
  secret_key_base: 817c07d41b8524495628fbe91fb1f0535ade65aa96a3fee379a8d16c29cc1f7b167f537442e547422ab17ee9700028a95896eb1c0717de06dfe7895d15ddb5ce
  secret_key: sk_test_xxx
  publishable_key: pk_test_xxx
  access_key_id: xxx
  secret_access_key: xxx
  s3_bucket_name: 'bucket-name'

test:
  secret_key_base: a38e71848a4d9bc63fa8dce4522add10a4931b10e6786f0cab6a9eb1643e271b992f52fa6eff672b0d03687003099c0632477dd26b246ac4e637c52c69ec4ab0

# Do not keep production secrets in the repository,
# instead read values from the environment.
production:
  secret_key_base: <%= ENV["SECRET_KEY_BASE"] %>
  secret_key: <%= ENV["SECRET_KEY"] %>
  publishable_key: <%= ENV["PUBLISHABLE_KEY"] %>
  access_key_id: <%= ENV['AWS_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['AWS_SECRET_ACCESS_KEY'] %>
  s3_bucket_name: <%= ENV["S3_BUCKET_NAME"] %>
although that may not have been required. Googling around and going through the few other posts on SO related to this error has given me little to go on. Does anybody have any ideas on what the issue may be?
'fetch': key not found: "S3_BUCKET_NAME" means that the environment variable S3_BUCKET_NAME does not have a value.
In your case you are using Heroku. Follow the instructions in the link below.
For Heroku: https://devcenter.heroku.com/articles/config-vars.
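For example, from a terminal (the values below are placeholders for your real ones):
heroku config:set S3_BUCKET_NAME=your-bucket-name AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=xxx AWS_REGION=us-east-1
heroku config   # should now list the variables you just set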
If you are using ENV['variables'], you need to have them set in every environment: test, production, and development. If you are developing on a PC, Mac, or Linux machine, you need to make sure the environment variables are set there as well.
For linux:
https://www.digitalocean.com/community/tutorials/how-to-read-and-set-environmental-and-shell-variables-on-a-linux-vps
For OSX:
I do it the same way as on Linux; I just set them in my ~/.zshrc by adding a line like the one below. If you aren't using zsh, add the line to your ~/.bashrc or ~/.bash_profile instead.
export ENV_VARIABLE_NAME="value"
For Windows:
I don't know how, but I am sure google does.
I'm using the aws-sdk gem in my Rails project, and I want a kind of initializer file that connects directly to my S3 storage so I can make changes directly from the Rails console, something like this:
# At config/initializers/aws.rb
Aws::S3::Client.new(
  :access_key_id => 'ACCESS_KEY_ID',
  :secret_access_key => 'SECRET_ACCESS_KEY'
)
I've looked for documentation and tutorials, but it's not clear to me. How do I do it? Thank you!
I think you can try it like this (note that this uses the older aws-sdk v1 syntax, where the top-level module is AWS rather than Aws).
Put this in aws.rb:
AWS.config(
  :access_key_id => ENV['ACCESS_KEY_ID'],
  :secret_access_key => ENV['SECRET_ACCESS_KEY']
)
and when you initialize the object wherever you need it, it will pick up that configuration:
s3 = AWS::S3.new
To share configuration between AWS service clients in a Rails application, configure the AWS SDK for Ruby from a config initializer.
# config/initializers/aws-sdk.rb
Aws.config.update(
  credentials: Aws::Credentials.new('access-key-id', 'secret-access-key'),
  region: 'us-east-1',
)
Now you can construct a client object from any service without any options:
s3 = Aws::S3::Client.new
ec2 = Aws::EC2::Client.new
Please note, you should avoid hard-coding credentials into your application. It is a security risk if your source code is ever exposed, and it makes it difficult to rotate credentials.
I recommend using hands-off configuration via ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY'], or an EC2 instance profile.
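A minimal sketch of that kind of hands-off setup, assuming the standard AWS variable names are exported in the environment:
# config/initializers/aws-sdk.rb
# No secrets in the source tree; everything comes from ENV.
Aws.config.update(
  region: ENV['AWS_REGION'],
  credentials: Aws::Credentials.new(
    ENV['AWS_ACCESS_KEY_ID'],
    ENV['AWS_SECRET_ACCESS_KEY']
  )
)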
Finally, I've found the solution:
Create the file aws.rb in your /config/initializers folder.
In aws.rb write:
S3Client = Aws::S3::Client.new(
  access_key_id: 'ACCESS_KEY_ID',
  secret_access_key: 'SECRET_ACCESS_KEY',
  region: 'REGION'
)
That's it. Thank you all for your answers!
Also with aws-sdk-rails (1.0.0)
# config/initializers/aws.rb
Aws.config[:credentials] = Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
Try this configuration.
In config/initializers/s3.rb:
Paperclip.interpolates(:s3_eu_url) { |attachment, style|
  "#{attachment.s3_protocol}://s3-eu-west-1.amazonaws.com/#{attachment.bucket_name}/#{attachment.path(style).gsub(%r{^/}, "")}"
}
config/initializers/paperclip.rb
require 'paperclip/media_type_spoof_detector'

module Paperclip
  class MediaTypeSpoofDetector
    def spoofed?
      false
    end
  end
end

Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Paperclip::Attachment.default_options[:path] = '/:class/:id/:style/:filename'
S3_CREDENTIALS = Rails.root.join("config/s3.yml")
/config/s3.yml
development:
  bucket: development_bucket
  access_key_id: AKIA-----API KEYS---------MCLXQ
  secret_access_key: qTNF1-------API KEYS--------DTy+rPubaaG

production:
  bucket: production_bucket
  access_key_id: AKI-----API KEYS--------LXQ
  secret_access_key: qTNF1dW---API KEYS---+rPubaaG
Make sure you have gem "aws-sdk" in the Gemfile.
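For example, in the Gemfile (a sketch; pin the version to whatever your Paperclip version expects, since Paperclip 4.x pairs with aws-sdk v1 and Paperclip 5.x with v2):
gem 'paperclip'
gem 'aws-sdk'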
Add the attachment to your model:
has_attached_file :avatar,
  :styles => { :view => "187x260#" },
  :storage => :s3,
  :s3_permissions => :private,
  :s3_credentials => S3_CREDENTIALS
Verify it from the Rails console with a static image in public:
Image.create(avatar: File.new("#{Rails.root}/public/images/colorful_blue.jpg"))
I have a file already on S3 that I'd like to associate with a pre-existing instance of the Asset model.
Here's the model:
class Asset < ActiveRecord::Base
  attr_accessible(:attachment_content_type, :attachment_file_name,
                  :attachment_file_size, :attachment_updated_at, :attachment)

  has_attached_file :attachment, {
    storage: :s3,
    s3_credentials: {
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    },
    convert_options: { all: '-auto-orient' },
    url: ':s3_alias_url',
    s3_host_alias: ENV['S3_HOST_ALIAS'],
    path: ":class/:attachment/:id_partition/:style/:filename",
    bucket: ENV['S3_BUCKET_NAME'],
    s3_protocol: 'https'
  }
end
Let's say the path is assets/attachments/000/111/file.png, and the Asset instance I want to associate with the file is asset. Referring to the source, I've tried:
options = {
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
  },
  convert_options: { all: '-auto-orient' },
  url: ':s3_alias_url',
  s3_host_alias: ENV['S3_HOST_ALIAS'],
  path: "assets/attachments/000/111/file.png",
  bucket: ENV['S3_BUCKET_NAME'],
  s3_protocol: 'https'
}

# The above is identical to the options given in the model, except for the path
Paperclip::Attachment.new("file.png", asset, options).save
As far as I can tell, this did not affect asset in any way. I cannot set asset.attachment.path manually.
Other questions on SO do not seem to address this specifically.
"paperclip images not saving in the path i've set up", "Paperclip and Amazon S3 how to do paths?", and so on involve setting up the model, which is already working fine.
Anyone have any insight to offer?
As far as I can tell, I do need to turn the S3 object into a File, as suggested by @oregontrail256. I used the Fog gem to do this.
s3 = Fog::Storage.new(
  :provider => 'AWS',
  :aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
)
directory = s3.directories.get(ENV['S3_BUCKET_NAME'])
fog_file = directory.files.get(path)

file = File.open("temp", "wb")
file.write(fog_file.body)
asset.attachment = file
asset.save
file.close
Paperclip attachments have a copy_to_local_file() method that allows you to make a local copy of the attachment. So what about:
file_name = "temp_file"
asset1.attachment.copy_to_local_file(:style, file_name)
file = File.open(file_name)
asset2.attachment = file
file.close
asset2.save!
Even if you destroy asset1, you now have a copy of the attachment saved by asset2 separately. You probably want to do this in a background job if you're doing many of them.
Credit to this answer too: How to set a file upload programmatically using Paperclip
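If you do want to push the copy into a background job as suggested above, a rough sketch with ActiveJob might look like this (CopyAttachmentJob is a hypothetical class name, and :original is assumed as the style to copy):
class CopyAttachmentJob < ActiveJob::Base
  queue_as :default

  def perform(source_id, target_id)
    source = Asset.find(source_id)
    target = Asset.find(target_id)

    file_name = "temp_file_#{source_id}"
    source.attachment.copy_to_local_file(:original, file_name)

    File.open(file_name) do |file|
      target.attachment = file
      target.save!
    end
  ensure
    # Clean up the temporary local copy
    File.delete(file_name) if file_name && File.exist?(file_name)
  end
end

# Enqueue with: CopyAttachmentJob.perform_later(asset1.id, asset2.id)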
I've been stuck on this for ages now and can't figure out what's wrong. A lot of people seem to have this same problem, but I can't actually find any answers that work.
production.rb
config.paperclip_defaults = {
  :storage => :s3,
  :s3_credentials => {
    :bucket => ENV['my bucket name is here'],
    :access_key_id => ENV['my key is here'],
    :secret_access_key => ENV['my secret key is here']
  }
}
game.rb
require 'aws/s3'

class Game < ActiveRecord::Base
  attr_accessible :swf, :swf_file_name, :name, :description, :category, :age_group,
                  :dimension_x, :dimension_y, :image, :image_file_name, :feature_image,
                  :feature_image_file_name, :developer, :instructions, :date_to_go_live,
                  :date_to_show_countdown, :plays

  has_attached_file :swf
  has_attached_file :image
  has_attached_file :feature_image

  def swfupload_file=(data)
    data.content_type = MIME::Types.type_for(data.original_filename).first.content_type
    logger.warn("Data content type is: #{data.content_type}")
    self.file = data
  end
end
paperclip.rb
Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Paperclip::Attachment.default_options[:path] = '/:class/:attachment/:id_partition/:style/:filename'
Here is my paperclip initialization stuff:
Paperclip::Attachment.default_options.merge!({
  storage: :s3,
  s3_credentials: {
    access_key_id: ENV['S3_KEY'],
    secret_access_key: ENV['S3_SECRET'],
    bucket: "#{ENV['S3_BUCKET']}-#{Rails.env}"
  },
  url: ":s3_domain_url",
  path: "/:class/:attachment/:id_partition/:style/:filename"
})
This assumes that we have three environment variables set up called, you guessed it... S3_KEY, S3_SECRET, and S3_BUCKET. I did a little trick so that I could have a different bucket in each environment by appending Rails.env to the bucket variable.
You seem to indicate in your question that you're putting the actual name of the bucket in the reference to ENV, which would not work. You should put the name of the bucket in the environment variable and use the name of the environment variable as the key.
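In other words, something like this (the bucket name is just an illustration):
# Wrong: the literal bucket name is used as the lookup key, so ENV returns nil
:bucket => ENV['my-actual-bucket-name']

# Right: set S3_BUCKET=my-actual-bucket-name in the environment,
# then look it up by the variable's name
:bucket => ENV['S3_BUCKET']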
I hope this helps.