Prevent image upload to AWS in development/test in Paperclip - ruby-on-rails

I have inherited a project that uses Paperclip for image processing and uploads to an AWS S3 bucket. Normally I use CarrierWave and choose to save files locally in the test and development environments:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  else
    config.fog_credentials = {
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
      :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
      :region                => 'eu-west-1'
    }
    config.fog_directory  = ENV['AWS_BUCKET']
    config.fog_public     = true
    config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
  end
end
How can I achieve the same thing with Paperclip? I have read that you can define defaults in an initializer file, but I am a bit unsure about which options to pass.

You can create an initializer like this:
# config/initializers/paperclip.rb
if Rails.env.development? || Rails.env.test?
  Paperclip::Attachment.default_options[:storage] = 'filesystem'
else
  Paperclip::Attachment.default_options[:storage] = 's3'
  Paperclip::Attachment.default_options[:s3_credentials] = {
    bucket: ENV['AWS_BUCKET'],
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  }
  # other config...
end
For more options about S3, see also http://www.rubydoc.info/gems/paperclip/Paperclip/Storage/S3
Just add those options to the Paperclip::Attachment.default_options hash :)

Paperclip can use different storage for each attachment field, so look for where the S3 storage is selected. It is probably something like has_attached_file :foo, storage: :s3, ...; to save locally, the storage option should be :filesystem.
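If the project configures storage per attachment rather than globally, a minimal sketch of such an environment switch could look like this (the Document model and :foo attachment names are illustrative, not from the original project):

```ruby
class Document < ActiveRecord::Base
  # Use S3 only in production; :filesystem saves under public/system by default.
  has_attached_file :foo,
                    storage: Rails.env.production? ? :s3 : :filesystem,
                    s3_credentials: {
                      bucket:            ENV['AWS_BUCKET'],
                      access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
                      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
                    }
end
```

The filesystem storage module simply ignores the S3-only options, so they can stay in place for development and test.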

Related

amazon s3 variables not working in heroku environment rails

Hello, I have included the code given below:
def store_s3(file)
  # Create a connection with Amazon S3
  AWS.config(access_key_id: ENV['S3_ACCESS_KEY'], secret_access_key: ENV['S3_SECRET'])
  s3 = AWS::S3.new
  bucket = s3.buckets[ENV['S3_BUCKET_LABELS']]
  object = bucket.objects[File.basename(file)]
  # `file` here is the path to the file, not its contents
  # file_data = File.open(file, 'rb')
  object.write(file: file)
  # Save the file and return a URL to download it
  object.url_for(:read, response_content_type: 'text/csv')
end
This code works correctly locally and the data is stored in Amazon, but it stopped working when I deployed the code to a Heroku server, even though I set the environment variables on the server too.
Is there anything I am missing here? Please let me know the cause of the issue.
I don't see a region in your example. Is S3_Hostname your region? For me, the region was just something like 'us-west-2'.
If you want to set up S3 with CarrierWave and the fog gem, you can do it like this in config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_provider  = 'fog/aws'
  config.fog_directory = 'name for s3 directory'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'your access key',
    :aws_secret_access_key => 'your secret key',
    :region                => 'your region ex: eu-west-2'
  }
end

Merge uploaded Amazon S3 images into CloudFront

I started to integrate CloudFront into my existing Rails app. Everything with CloudFront is working fine, except that the old uploaded images can't be accessed.
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
  }
  config.asset_host     = ENV['CLOUDFRONT_ENDPOINT']
  config.fog_directory  = 'oktob-editor'
  config.fog_public     = true
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" }
end
Example of old uploaded image
https://oktob-editor.s3.amazonaws.com/uploads/post/image/127/thumb_Ruby_on_Rails.svg.png
After I integrated CloudFront and set asset_host it becomes
http://ID.cloudfront.net/uploads/post/image/127/thumb_Ruby_on_Rails.svg.png
with
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>D368D2E641BBBB64</RequestId><HostId></HostId></Error>
So is there a way to enable the old images to work properly with CloudFront?
It seems that changing Restrict Bucket Access to Yes makes it work.

Ruby on Rails - AWS-SDK configuration file

I'm using the AWS-SDK gem in my Rails project, and I want a kind of initializer file that connects directly to my repo so I can make changes directly in the Rails console, something like this:
# At config/initializers/aws.rb
Aws::S3::Client.new(
  :access_key_id     => 'ACCESS_KEY_ID',
  :secret_access_key => 'SECRET_ACCESS_KEY'
)
I've looked for documentation and tutorials, but it's not clear to me. How do I do it? Thank you!
I think you can try it like this. Put this in aws.rb:
AWS.config(
  :access_key_id     => ENV['ACCESS_KEY_ID'],
  :secret_access_key => ENV['SECRET_ACCESS_KEY']
)
and when you initialize the object wherever you need it, it will use this configuration:
s3 = AWS::S3.new
To share configuration between AWS service clients in a Rails application, configure the AWS SDK for Ruby from a config initializer:
# config/initializers/aws-sdk.rb
Aws.config.update(
  credentials: Aws::Credentials.new('access-key-id', 'secret-access-key'),
  region: 'us-east-1',
)
Now you can construct a client object from any service without any options:
s3 = Aws::S3::Client.new
ec2 = Aws::EC2::Client.new
Please note, you should avoid hard-coding credentials into your application. This can be a security risk if your source code is accessed and it makes it difficult to rotate credentials.
I recommend using hands-off configuration via ENV['AWS_ACCESS_KEY_ID'] and ENV['AWS_SECRET_ACCESS_KEY'], or an EC2 instance profile.
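As a minimal sketch of that hands-off style (assuming the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY variable names), the initializer then only needs to set the region, since the SDK's default credential provider chain reads those environment variables and falls back to an instance profile on EC2:

```ruby
# config/initializers/aws-sdk.rb
# No explicit credentials: the default provider chain resolves them from
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY env vars or an EC2 instance profile.
Aws.config.update(region: ENV.fetch('AWS_REGION', 'us-east-1'))
```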
Finally, I've found the solution:
Create the file aws.rb in your /config/initializers folder.
In aws.rb write:
S3Client = Aws::S3::Client.new(
  access_key_id:     'ACCESS_KEY_ID',
  secret_access_key: 'SECRET_ACCESS_KEY',
  region:            'REGION'
)
That's it. Thank you all for your answers!
Also with aws-sdk-rails (1.0.0)
# config/initializers/aws.rb
Aws.config[:credentials] = Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
Try this configuration:
In config/initializers/s3.rb:
Paperclip.interpolates(:s3_eu_url) { |attachment, style|
  "#{attachment.s3_protocol}://s3-eu-west-1.amazonaws.com/#{attachment.bucket_name}/#{attachment.path(style).gsub(%r{^/}, "")}"
}
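As a side note, the gsub in that interpolation only strips a leading slash from the attachment path so it joins cleanly after the bucket name; a standalone illustration:

```ruby
# Strip a single leading slash so the path concatenates cleanly after the bucket.
path = "/avatars/1/view/photo.jpg".gsub(%r{^/}, "")
url  = "https://s3-eu-west-1.amazonaws.com/my-bucket/#{path}"
# url == "https://s3-eu-west-1.amazonaws.com/my-bucket/avatars/1/view/photo.jpg"
```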
config/initializers/paperclip.rb
require 'paperclip/media_type_spoof_detector'

# Note: this monkey-patch disables Paperclip's media type spoofing check entirely.
module Paperclip
  class MediaTypeSpoofDetector
    def spoofed?
      false
    end
  end
end
Paperclip::Attachment.default_options[:url] = ':s3_domain_url'
Paperclip::Attachment.default_options[:path] = '/:class/:id/:style/:filename'
S3_CREDENTIALS = Rails.root.join("config/s3.yml")
/config/s3.yml
development:
  bucket: development_bucket
  access_key_id: AKIA-----API KEYS---------MCLXQ
  secret_access_key: qTNF1-------API KEYS--------DTy+rPubaaG

production:
  bucket: production_bucket
  access_key_id: AKI-----API KEYS--------LXQ
  secret_access_key: qTNF1dW---API KEYS---+rPubaaG
Make sure you have gem "aws-sdk" in the Gemfile, then add the attachment to your model:
has_attached_file :avatar,
                  :styles         => { :view => "187x260#" },
                  :storage        => :s3,
                  :s3_permissions => :private,
                  :s3_credentials => S3_CREDENTIALS
Verify using the Rails console with a static image in public:
Image.create(avatar: File.new("#{Rails.root}/public/images/colorful_blue.jpg"))

Carrierwave fog Amazon S3 images not displaying

I have installed CarrierWave and fog, and successfully uploaded the images and viewed them the first time, but now the images do not show anymore.
Here is my config file app/config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                          # required
    :aws_access_key_id     => 'AKIAJKOHTE4WTXCCXAMA',         # required
    :aws_secret_access_key => 'some secret key here',         # required
    :region                => 'eu-east-1',                    # optional, defaults to 'us-east-1'
    :host                  => 'https://s3.amazonaws.com',     # optional, defaults to nil
    :endpoint              => 'https://s3.amazonaws.com:8080' # optional, defaults to nil
  }
  config.fog_directory = 'createmysite.co.za' # required
  config.fog_public    = false                # optional, defaults to true
  # config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
This is what the url looks like of the image that is supposed to display
<img alt="Normal_selection_003" src="https://createmysite.co.za.s3.amazonaws.com/uploads/portfolio/image/3/normal_Selection_003.png?AWSAccessKeyId=AKIAJKOHTE4WTXCCXAMA&Signature=8PLq8WCkfrkthmfVGfXX9K6s5fc%3D&Expires=1354859553">
When I open the image URL, this is the output from Amazon:
https://createmysite.co.za.s3.amazonaws.com/uploads/portfolio/image/3/normal_Selection_003.png?AWSAccessKeyId=AKIAJKOHTE4WTXCCXAMA&Signature=8PLq8WCkfrkthmfVGfXX9K6s5fc%3D&Expires=1354859553
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>3F179B7CE417BC12</RequestId>
<HostId>
zgh46a+G7UDdpIHEEIT0C/rmijShOKAzhPSbLpEeVgUre1iDc9f7TSOwaJdQpR65
</HostId>
</Error>
Update
new config file (added fog url expiry) app/config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',                  # required
    :aws_access_key_id     => 'AKIAJKOHTE4WTXCCXAMA', # required
    :aws_secret_access_key => 'chuck norris',         # required
  }
  config.fog_directory = 'createmysite.co.za' # required
  config.fog_public    = false                # optional, defaults to true
  config.fog_authenticated_url_expiration = 600 # (in seconds) => 10 minutes
end
works like a charm!
You've set config.fog_public to false and are using Amazon S3 for storage. URLs for private files through S3 are temporary (they're signed and have an expiry). Specifically, the URL posted in your question has an Expires=1354859553 parameter.
1354859553 is Fri, 07 Dec 2012 05:52:33 GMT, which is in the past from the current time, so the link has effectively expired, which is why you're getting the Access Denied error.
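You can verify this in any Ruby console by decoding the Expires value from the URL:

```ruby
# Decode the Expires epoch seconds from the signed URL.
expires = Time.at(1354859553).utc
puts expires            # 2012-12-07 05:52:33 UTC
puts expires < Time.now # true, so the signed URL has expired
```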
You can adjust the expiry out further (the default is 600 seconds) by setting
config.fog_authenticated_url_expiration = ... # some integer here
If you want non-expiring links, either:
- set config.fog_public to true, or
- have your application act as a middleman, serving the files up through send_file. There is at least one question on SO covering this.
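A minimal sketch of the middleman approach (the AttachmentsController and Upload model with a mounted file uploader are hypothetical names; send_data is used rather than send_file because the file lives on S3, not on local disk):

```ruby
class AttachmentsController < ApplicationController
  # Stream the private S3 file through the app instead of exposing a signed URL.
  def show
    upload = Upload.find(params[:id])
    # CarrierWave's #read pulls the file body from the fog storage backend.
    send_data upload.file.read,
              filename: File.basename(upload.file.path),
              disposition: 'inline'
  end
end
```

Note that this buffers the whole file in the Rails process, so for large files a short fog_authenticated_url_expiration with a redirect to the signed URL is usually cheaper.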

carrierwave, Excon::Errors::MovedPermanently in RegistrationsController#update error

I've been trying to get CarrierWave to work with Amazon S3. Instead of
storage :s3
i have
storage :fog
Changing it to storage :s3 gives an immediate error:
https://stackoverflow.com/questions/10629827/carrierwave-cant-convert-nil-into-string-typeerror-when-using-s3
So I changed it to storage :fog as the rdoc below says:
http://rubydoc.info/gems/carrierwave/frames
However, when I try to upload an image, I get this crazy error. I'm using the Devise gem as well.
My full stack trace is:
Excon::Errors::MovedPermanently in RegistrationsController#update
Excon::Errors::MovedPermanently (Expected(200) <=> Actual(301 Moved Permanently)
request => {:connect_timeout=>60, :headers=>{"Content-Length"=>95472, "Content-Type"=>"image/jpeg", "x-amz-acl"=>"private", "Cache-Control"=>"max-age=315576000", "Date"=>"Thu, 17 May 2012 05:28:55 +0000", "Authorization"=>"AWS AKIAIN6SC3YSGBSUKV4Q:kZOG9mG01jYn48ImFMYbgxAAQRk=", "Host"=>"user.a.777.s3-eu-west-1.amazonaws.com:443"}, :instrumentor_name=>"excon", :mock=>false, :read_timeout=>60, :retry_limit=>4, :ssl_ca_file=>"/Users/sasha/.rvm/gems/ruby-1.9.3-p125/gems/excon-0.13.4/data/cacert.pem", :ssl_verify_peer=>true, :write_timeout=>60, :host=>"user.a.777.s3-eu-west-1.amazonaws.com", :path=>"/uploads%2Fuser%2Fimage%2F59%2Fidea.jpg", :port=>"443", :query=>nil, :scheme=>"https", :body=>#<File:/Users/sasha/Desktop/rails_projects/blue_eyes/public/uploads/tmp/20120516-2228-19160-9893/idea.jpg>, :expects=>200, :idempotent=>true, :method=>"PUT"}
response => #<Excon::Response:0x007fd72a146820 #body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><RequestId>F5F5AF888E837622</RequestId><Bucket>user.a.777</Bucket><HostId>IShK3GIthzCQysLOKXnR+ijJiHmMuUtXBOpFxQM4uCvJgkEHfmFn43LL4oWmpT82</HostId><Endpoint>s3.amazonaws.com</Endpoint></Error>", #headers={"x-amz-request-id"=>"F5F5AF888E837622", "x-amz-id-2"=>"IShK3GIthzCQysLOKXnR+ijJiHmMuUtXBOpFxQM4uCvJgkEHfmFn43LL4oWmpT82", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Thu, 17 May 2012 05:29:00 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, #status=301>):
app/controllers/registrations_controller.rb:30:in `update'
I don't know what that even means.
In my initializers/carrierwave.rb I have:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',       # required
    :aws_access_key_id     => 'somekey',   # required
    :aws_secret_access_key => 'secretkey', # required
    :region                => 'eu-west-1'  # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'bucket.name' # required
  # config.fog_host = 'https://s3.amazonaws.com' # optional, defaults to nil
  config.fog_public     = false # optional, defaults to true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' } # optional, defaults to {}
end
and my uploader file has
# storage :s3
storage :fog

def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
My Gemfile has:
gem 'carrierwave'
gem 'thin'
gem 'fog'
When I boot my server, it uses Thin instead of WEBrick in development as well.
Are my configurations wrong?
Help would be much appreciated!
I've been super stuck on this CarrierWave/S3 issue.
I ran into this earlier today and it was a problem with the region. Just take it out and let it be set to the default:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',      # required
    :aws_access_key_id     => 'somekey',  # required
    :aws_secret_access_key => 'secretkey' # required
  }
  config.fog_directory = 'bucket.name' # required
  # config.fog_host = 'https://s3.amazonaws.com' # optional, defaults to nil
  config.fog_public     = false # optional, defaults to true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' } # optional, defaults to {}
end
This configuration worked for me:
config.fog_directory = 'bucket_name'
config.fog_host = 'https://s3-eu-west-1.amazonaws.com/bucket_name'
I had the same problem. Following the three steps below worked for me:
1. Change the default region when creating a bucket.
2. Edit my carrierwave.rb file (as shown below).
initializers/carrierwave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
3. Configure Heroku on the command line, as in: heroku config:set S3_REGION='your region'
Just like #Jason Bynum said, do not specify the region and let it default.
If it still fails, don't worry: at this point Heroku will give you a hint, telling you that the region you specified is wrong and should be xxx.
And then you know how to fill in the region :)
The following works for me:
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              'AWS',            # required
    aws_access_key_id:     ENV['S3_KEY'],    # required
    aws_secret_access_key: ENV['S3_SECRET'], # required
    region:                'ap-southeast-1', # optional, defaults to 'us-east-1'
  }
  config.fog_directory  = 'your_bucket_name' # required
  config.fog_public     = false # optional, defaults to true
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
Gemfile:
gem 'carrierwave', '0.10.0'
gem 'fog', '1.36.0'