Ruby on Rails, Spree, and S3

I am having a difficult time connecting Amazon S3 buckets to an e-commerce Spree application. I have added the necessary code to config/initializers/spree.rb and set the necessary environment variables. This is the code inside my spree.rb:
# Configure Solidus Preferences
# See http://docs.solidus.io/Spree/AppConfiguration.html for details
Spree.config do |config|
  # Without this preferences are loaded and persisted to the database. This
  # changes them to be stored in memory.
  # This will be the default in a future version.
  config.use_static_preferences!

  # Core:
  # Default currency for new sites
  config.currency = "USD"

  # From address for transactional emails
  config.mails_from = "expressmobiletechs@gmail.com"
  # Uncomment to stop tracking inventory levels in the application
  # config.track_inventory_levels = false

  # When set, product caches are only invalidated when they fall below or rise
  # above the inventory_cache_threshold that is set. Default is to invalidate cache on
  # any inventory changes.
  # config.inventory_cache_threshold = 3

  # Frontend:
  config.logo = "logo.jpg"
  config.admin_interface_logo = "logo.jpg"

  # Template to use when rendering layout
  # config.layout = "spree/layouts/spree_application"

  # Admin:
  # Custom logo for the admin
  # config.admin_interface_logo = "logo/solidus_logo.png"

  # Gateway credentials can be configured statically here and referenced from
  # the admin. They can also be fully configured from the admin.
  #
  # config.static_model_preferences.add(
  #   Spree::Gateway::StripeGateway,
  #   'stripe_env_credentials',
  #   secret_key: ENV['STRIPE_SECRET_KEY'],
  #   publishable_key: ENV['STRIPE_PUBLISHABLE_KEY'],
  #   server: Rails.env.production? ? 'production' : 'test',
  #   test_mode: !Rails.env.production?
  # )
end

Spree::Frontend::Config.configure do |config|
  config.use_static_preferences!
  config.locale = 'en'
end

Spree::Backend::Config.configure do |config|
  config.use_static_preferences!
  config.locale = 'en'
end

Spree::Api::Config.configure do |config|
  config.use_static_preferences!
  config.requires_authentication = true
end

Spree.user_class = "Spree::LegacyUser"

attachment_config = {
  s3_credentials: {
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'],
    bucket: ENV['S3_BUCKET_NAME']
  },
  storage: :s3,
  s3_headers: { "Cache-Control" => "max-age=31557600" },
  s3_protocol: "https",
  bucket: ENV['S3_BUCKET_NAME'],
  url: ":s3_domain_url",
  styles: {
    mini: "48x48>",
    small: "100x100>",
    product: "240x240>",
    large: "600x600>"
  },
  path: "/spree/:class/:id/:style/:basename.:extension",
  default_url: "/spree/:class/:id/:style/:basename.:extension",
  default_style: "product"
}

attachment_config.each do |key, value|
  Spree::Image.attachment_definitions[:attachment][key.to_sym] = value
end
I am trying to get product image uploads in my Spree store to work with S3. No other discussion on Stack Overflow has been able to help me.
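One thing worth ruling out first (a minimal sketch, not part of the original post): confirm that the environment variables the initializer relies on are actually set in the running environment, since Paperclip will otherwise receive nil credentials without complaint.
# Top of config/initializers/spree.rb -- fail fast if any S3 setting is missing
%w[AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY S3_BUCKET_NAME].each do |var|
  raise "Missing environment variable: #{var}" if ENV[var].to_s.empty?
end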

Related

Prevent image upload to AWS in development/test in Paperclip

I have inherited a project that uses Paperclip for image processing, which also uploads to an AWS bucket. Normally I use CarrierWave and choose to save files locally when in the test or development environments:
CarrierWave.configure do |config|
  if Rails.env.test?
    config.storage = :file
    config.enable_processing = false
  else
    config.fog_credentials = {
      :provider => 'AWS',
      :aws_access_key_id => ENV['AWS_ACCESS_KEY_ID'],
      :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
      :region => 'eu-west-1'
    }
    config.fog_directory = ENV['AWS_BUCKET']
    config.fog_public = true
    config.fog_attributes = {'Cache-Control'=>'max-age=315576000'}
  end
end
How can I achieve the same thing with Paperclip? I have read that you can define defaults in an initializer file, but I am a bit unsure about which options to pass.
You can create an initializer like this:
# config/initializers/paperclip.rb
if Rails.env.development? || Rails.env.test?
  Paperclip::Attachment.default_options[:storage] = 'filesystem'
else
  Paperclip::Attachment.default_options[:storage] = 's3'
  Paperclip::Attachment.default_options[:s3_credentials] = {
    bucket: ENV['AWS_BUCKET'],
    access_key_id: ENV['AWS_ACCESS_KEY_ID'],
    secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
  }
  # other config...
end
For more options about S3, see also http://www.rubydoc.info/gems/paperclip/Paperclip/Storage/S3
Just add those options to the Paperclip::Attachment.default_options hash :)
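For instance, the remaining S3 options from the question could be merged in the same way (a sketch; the keys mirror the attachment_config hash above):
Paperclip::Attachment.default_options.merge!(
  s3_protocol: 'https',
  s3_headers: { 'Cache-Control' => 'max-age=31557600' },
  url: ':s3_domain_url',
  path: '/spree/:class/:id/:style/:basename.:extension'
)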
Paperclip can have a different storage for each field, so look for where the S3 storage is selected.
It's probably has_attached_file :foo, storage: :s3, ...; to save locally, the storage should be :filesystem.
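A minimal sketch of choosing storage per attachment (the model and attachment names are illustrative):
class Document < ActiveRecord::Base
  # Store on S3 in production, on the local filesystem everywhere else
  has_attached_file :foo,
    storage: Rails.env.production? ? :s3 : :filesystem,
    s3_credentials: {
      bucket: ENV['AWS_BUCKET'],
      access_key_id: ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    } # ignored when storage is :filesystem
end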

Can CarrierWave upload to Amazon S3 but serve through CloudFront?

I'm working on a small Rails site which allows some users to upload images and others to see them. I started using CarrierWave with S3 as the storage medium and everything worked great, but then I wanted to experiment with using CloudFront. I first added a distribution to my S3 bucket and then changed the CarrierWave configuration I was using to this:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider => 'AWS', # required
    :aws_access_key_id => ENV['S3_ACCESS_KEY_ID'], # required
    :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY'], # required
    :region => 'eu-west-1',
  }
  config.asset_host = 'http://static.my-domain.com/some-folder'
  config.fog_public = true # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
I should mention that http://static.my-domain.com is a CNAME entry pointing to a CloudFront endpoint (some-id.cloudfront.net). The result is that the pictures are shown correctly, and URLs look like this: http://static.my-domain.com/some-folder/uploads/gallery_image/attachment/161/large_image.jpg. But whenever I try to upload a photo, or for that matter get the size of an uploaded attachment, I get the following exception:
Excon::Errors::MovedPermanently: Expected(200) <=> Actual(301 Moved Permanently)
response => #<Excon::Response:0x007f61fc3d1548 @data={:body=>"",
:headers=>{"x-amz-request-id"=>"some-id", "x-amz-id-2"=>"some-id",
"Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked",
"Date"=>"Mon, 31 Mar 2014 21:16:45 GMT", "Connection"=>"close", "Server"=>"AmazonS3"},
:status=>301, :remote_ip=>"some-ip"}, @body="", @headers={"x-amz-request-id"=>"some-id",
"x-amz-id-2"=>"some-id", "Content-Type"=>"application/xml",
"Transfer-Encoding"=>"chunked", "Date"=>"Mon, 31 Mar 2014 21:16:45 GMT",
"Connection"=>"close", "Server"=>"AmazonS3"}, @status=301, @remote_ip="some-ip">
Just to add some more info, I tried the following:
removing the region entry
using the CloudFront URL directly instead of the CNAME
specifying the Amazon endpoint (https://s3-eu-west-1.amazonaws.com)
but none of them had any effect.
Is there something I'm missing or is it that CarrierWave does not support this at this time?
The answer to the question is YES. The reason why it didn't work with my configuration is that I was missing the fog_directory entry. When I added my asset_host, I removed fog_directory since the CDN URLs being generated were malformed. I later found out that this was due to having fog_public set to false. After getting the proper CDN URLs, I forgot to add fog_directory back since I could see my images and thought everything was fine. Anyway, the correct configuration is:
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider => 'AWS', # required
    :aws_access_key_id => ENV['S3_ACCESS_KEY_ID'], # required
    :aws_secret_access_key => ENV['S3_SECRET_ACCESS_KEY'], # required
    :region => 'eu-west-1'
  }
  config.fog_directory = '-bucket-name-/-some-folder-'
  config.asset_host = 'https://static.my-domain.com/-some-folder-'
  config.fog_public = true # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
Try setting :asset_host in your Uploader like so:
class ScreenshotUploader < CarrierWave::Uploader::Base
  storage :fog

  # Configure uploads to be stored in a public Cloud Files container
  def fog_directory
    'my_public_container'
  end

  # Configure uploads to be delivered over Rackspace CDN
  def asset_host
    "c000000.cdn.rackspacecloud.com"
  end
end
Inspired by https://github.com/carrierwaveuploader/carrierwave/wiki/How-to%3A-Store-private-public-uploads-in-different-Cloud-Files-Containers-with-Fog

Carrierwave fog Amazon S3 images not displaying

I have installed CarrierWave and fog, and successfully uploaded the images and viewed them the first time, but now the images no longer display.
Here is my config file, config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS', # required
    :aws_access_key_id => 'AKIAJKOHTE4WTXCCXAMA', # required
    :aws_secret_access_key => 'some secret key here', # required
    :region => 'eu-east-1', # optional, defaults to 'us-east-1'
    :host => 'https://s3.amazonaws.com', # optional, defaults to nil
    :endpoint => 'https://s3.amazonaws.com:8080' # optional, defaults to nil
  }
  config.fog_directory = 'createmysite.co.za' # required
  config.fog_public = false # optional, defaults to true
  #config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
This is what the URL of the image that is supposed to display looks like:
<img alt="Normal_selection_003" src="https://createmysite.co.za.s3.amazonaws.com/uploads/portfolio/image/3/normal_Selection_003.png?AWSAccessKeyId=AKIAJKOHTE4WTXCCXAMA&Signature=8PLq8WCkfrkthmfVGfXX9K6s5fc%3D&Expires=1354859553">
When I open the image URL, this is the output from Amazon:
https://createmysite.co.za.s3.amazonaws.com/uploads/portfolio/image/3/normal_Selection_003.png?AWSAccessKeyId=AKIAJKOHTE4WTXCCXAMA&Signature=8PLq8WCkfrkthmfVGfXX9K6s5fc%3D&Expires=1354859553
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>3F179B7CE417BC12</RequestId>
  <HostId>
    zgh46a+G7UDdpIHEEIT0C/rmijShOKAzhPSbLpEeVgUre1iDc9f7TSOwaJdQpR65
  </HostId>
</Error>
Update
New config file (added fog URL expiry), config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS', # required
    :aws_access_key_id => 'AKIAJKOHTE4WTXCCXAMA', # required
    :aws_secret_access_key => 'chuck norris', # required
  }
  config.fog_directory = 'createmysite.co.za' # required
  config.fog_public = false # optional, defaults to true
  config.fog_authenticated_url_expiration = 600 # (in seconds) => 10 minutes
end
works like a charm!
You've set config.fog_public to false and are using Amazon S3 for storage. URLs for private files through S3 are temporary (they're signed and have an expiry). Specifically, the URL posted in your question has an Expires=1354859553 parameter.
1354859553 is Fri, 07 Dec 2012 05:52:33 GMT, which is in the past from the current time, so the link has effectively expired, which is why you're getting the Access Denied error.
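You can verify such a timestamp yourself with nothing but Ruby's standard library:
Time.at(1354859553).utc
# => 2012-12-07 05:52:33 UTC -- already in the past, so S3 denies the request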
You can adjust the expiry out further (the default is 600 seconds) by setting
config.fog_authenticated_url_expiration = ... # some integer here
If you want non-expiring links, either:
set config.fog_public to true, or
have your application act as a middle man, serving the files up through send_file (see the sketch below). There is at least one question on SO covering this.
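A minimal sketch of that middle-man approach, assuming a CarrierWave mount named image on a Portfolio model (both names are illustrative; since fog-backed files live remotely rather than on local disk, this uses send_data with the uploader's read instead of send_file):
class ImagesController < ApplicationController
  def show
    uploader = Portfolio.find(params[:id]).image
    # Fetch the private file from S3 server-side and stream it to the client
    send_data uploader.read,
              filename: File.basename(uploader.path),
              type: uploader.file.content_type || 'application/octet-stream',
              disposition: 'inline'
  end
end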

carrierwave, Excon::Errors::MovedPermanently in RegistrationsController#update error

I've been trying to get CarrierWave to work with Amazon S3. Instead of
storage :s3
I have
storage :fog
Changing it to storage :s3 gives an immediate error:
https://stackoverflow.com/questions/10629827/carrierwave-cant-convert-nil-into-string-typeerror-when-using-s3
So I changed it to storage :fog, as the rdoc below says:
http://rubydoc.info/gems/carrierwave/frames
However, when I try to upload an image, I get this crazy error. I'm using the Devise gem as well.
My full stack trace is:
Excon::Errors::MovedPermanently in RegistrationsController#update
Excon::Errors::MovedPermanently (Expected(200) <=> Actual(301 Moved Permanently)
request => {:connect_timeout=>60, :headers=>{"Content-Length"=>95472, "Content-Type"=>"image/jpeg", "x-amz-acl"=>"private", "Cache-Control"=>"max-age=315576000", "Date"=>"Thu, 17 May 2012 05:28:55 +0000", "Authorization"=>"AWS AKIAIN6SC3YSGBSUKV4Q:kZOG9mG01jYn48ImFMYbgxAAQRk=", "Host"=>"user.a.777.s3-eu-west-1.amazonaws.com:443"}, :instrumentor_name=>"excon", :mock=>false, :read_timeout=>60, :retry_limit=>4, :ssl_ca_file=>"/Users/sasha/.rvm/gems/ruby-1.9.3-p125/gems/excon-0.13.4/data/cacert.pem", :ssl_verify_peer=>true, :write_timeout=>60, :host=>"user.a.777.s3-eu-west-1.amazonaws.com", :path=>"/uploads%2Fuser%2Fimage%2F59%2Fidea.jpg", :port=>"443", :query=>nil, :scheme=>"https", :body=>#<File:/Users/sasha/Desktop/rails_projects/blue_eyes/public/uploads/tmp/20120516-2228-19160-9893/idea.jpg>, :expects=>200, :idempotent=>true, :method=>"PUT"}
response => #<Excon::Response:0x007fd72a146820 @body="<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>PermanentRedirect</Code><Message>The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.</Message><RequestId>F5F5AF888E837622</RequestId><Bucket>user.a.777</Bucket><HostId>IShK3GIthzCQysLOKXnR+ijJiHmMuUtXBOpFxQM4uCvJgkEHfmFn43LL4oWmpT82</HostId><Endpoint>s3.amazonaws.com</Endpoint></Error>", @headers={"x-amz-request-id"=>"F5F5AF888E837622", "x-amz-id-2"=>"IShK3GIthzCQysLOKXnR+ijJiHmMuUtXBOpFxQM4uCvJgkEHfmFn43LL4oWmpT82", "Content-Type"=>"application/xml", "Transfer-Encoding"=>"chunked", "Date"=>"Thu, 17 May 2012 05:29:00 GMT", "Connection"=>"close", "Server"=>"AmazonS3"}, @status=301>):
app/controllers/registrations_controller.rb:30:in `update'
I don't know what that even means.
In my initializers/carrierwave.rb I have:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS', # required
    :aws_access_key_id => 'somekey', # required
    :aws_secret_access_key => 'secretkey', # required
    :region => 'eu-west-1' # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'bucket.name' # required
  #config.fog_host = 'https://s3.amazonaws.com' # optional, defaults to nil
  config.fog_public = false # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
And my uploader file has:
#storage :s3
storage :fog

def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
My Gemfile has:
gem 'carrierwave'
gem 'thin'
gem 'fog'
When I boot my server, it uses Thin in development instead of WEBrick as well.
Are my configurations wrong?
Help would be much appreciated!
I've been super stuck on this CarrierWave/S3 issue.
I ran into this earlier today and it was a problem with the region. Just take it out and let it be set to the default.
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider => 'AWS', # required
    :aws_access_key_id => 'somekey', # required
    :aws_secret_access_key => 'secretkey' # required
  }
  config.fog_directory = 'bucket.name' # required
  #config.fog_host = 'https://s3.amazonaws.com' # optional, defaults to nil
  config.fog_public = false # optional, defaults to true
  config.fog_attributes = {'Cache-Control'=>'max-age=315576000'} # optional, defaults to {}
end
This configuration worked for me:
config.fog_directory = 'bucket_name'
config.fog_host = 'https://s3-eu-west-1.amazonaws.com/bucket_name'
I had the same problem.
Following the three steps below worked for me:
1. Change the default region when creating a bucket.
2. Edit my carrierwave.rb file (as shown below).
initializers/carrierwave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      :provider => 'AWS',
      :aws_access_key_id => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region => ENV['S3_REGION']
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
3. Configure Heroku on the command line, as in: heroku config:set S3_REGION='your region'
Just like @Jason Bynum said, do not specify the region and let it default.
If it still fails, don't worry: at this point Heroku will give you a hint that the region you specified is wrong and should be xxx,
and then you'll know how to fill in the region :)
The following works for me:
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: 'AWS', # required
    aws_access_key_id: ENV['S3_KEY'], # required
    aws_secret_access_key: ENV['S3_SECRET'], # required
    region: 'ap-southeast-1', # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'your_bucket_name' # required
  config.fog_public = false # optional, defaults to true
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
Gemfile:
gem 'carrierwave', '0.10.0'
gem 'fog', '1.36.0'

AWS::S3::NoConnectionEstablished error using aws-s3 gem in Rails

I am getting an AWS::S3::NoConnectionEstablished exception when trying to download a file using Paperclip + S3. I can fire up s3sh and create a connection just fine with the S3 credentials in my config. What is the best next step I can take to debug this issue? This is what my model looks like:
has_attached_file :file,
  :storage => :s3,
  :s3_permissions => :private,
  :path => lambda { |attachment| ":id_partition/:basename.:extension" },
  :url => lambda { |attachment| "products/:id/:basename.:extension" },
  :s3_credentials => "#{Rails.root}/config/amazon_s3.yml",
  :bucket => "products.mycompany.com"
And the error occurs here:
def temporary_s3_url(options={})
  options.reverse_merge! :expires_in => 10.minutes #, :use_ssl => true
  hard_url = AWS::S3::S3Object.url_for file.path, file.options[:bucket], options
  # Use our vanity URL
  hard_url.gsub("http://s3.amazonaws.com/products.mycompany.com", "http://products.mycompany.com")
end
I tried hard-coding a connection as the first line in the temporary_s3_url method, but I get a "bucket not found" error. I think the problem is definitely that Paperclip is having a problem initializing my S3 configuration.
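One way to take Paperclip out of the equation (a sketch, assuming config/amazon_s3.yml holds per-environment access_key_id and secret_access_key entries) is to establish the aws-s3 connection by hand in a Rails console and list your buckets:
require 'aws/s3'

creds = YAML.load_file(Rails.root.join('config', 'amazon_s3.yml'))[Rails.env]
AWS::S3::Base.establish_connection!(
  :access_key_id => creds['access_key_id'],
  :secret_access_key => creds['secret_access_key']
)
# The bucket referenced by the model should appear in this list
puts AWS::S3::Service.buckets.map(&:name)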
Remember that storing to S3 is not dependable--the connection can be lost, the store fail before completing, etc.
I created my own library routines that attempt to do the store, but catch various errors. For the no connection error, I reconnect. For other storage errors, I retry (up to three times). You may also want to wait a second between retries.
Added
Below is the library routine I use for AWS calls.
You'd need to add/modify the rescue clauses to catch the errors that you're experiencing. Your connection_reset and error reporting methods will also be specific to your software.
# Usage example:
#   aws_repeat("Storing #{bucket}/#{obj}") {
#     AWS::S3::S3Object.store(obj, data, bucket, opt) }
def aws_repeat(description = nil)
  # Calls the block up to 3 times, allowing for AWS connection reset problems
  for i in 1..3
    begin
      yield
    rescue Errno::ECONNRESET => e
      ok = false
      ActiveRecord::Base.logger.error \
        "AWS::S3 *** Errno::ECONNRESET => sleeping"
      sleep(1)
      if i == 1
        # reset connection
        connect_to_aws # re-log in to AWS
        ActiveRecord::Base.logger.error \
          "AWS::S3 *** Errno::ECONNRESET => reset connection"
      end
    else
      ok = true
      break
    end
  end
  unless ok
    msg = "AWS::S3 *** FAILURE #{description.to_s}"
    ActiveRecord::Base.logger.error msg
    security_log(msg)
  end
  ok
end

############################################
############################################

def connect_to_aws
  # Load params. Cache at class (app) level
  @@s3_config_path ||= RAILS_ROOT + '/config/amazon_s3.yml'
  @@s3_config ||=
    YAML.load_file(@@s3_config_path)[ENV['RAILS_ENV']].symbolize_keys
  AWS::S3::Base.establish_connection!(
    :access_key_id => @@s3_config[:access_key_id],
    :secret_access_key => @@s3_config[:secret_access_key],
    :server => @@s3_config[:server],
    :port => @@s3_config[:port],
    :use_ssl => @@s3_config[:use_ssl],
    :persistent => false # from http://www.ruby-forum.com/topic/110842
  )
  true
end
I have paperclip with S3 and Heroku on two apps. This is what worked for me:
In your model:
has_attached_file :image,
  :styles => { :thumb => "250x250>" },
  :storage => :s3,
  :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
  :path => "username/:attachment/:style/:id.:extension"
In config/s3.yml:
development:
  bucket: name
  access_key_id: xyz
  secret_access_key: xyz

test:
  bucket: name
  access_key_id: xyz
  secret_access_key: xyz

production:
  bucket: name
  access_key_id: xyz
  secret_access_key: xyz
And of course, in your environment.rb you need to have the gem included, or however you include gems.
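If the app predates Bundler, that might look like this (a sketch using the classic config.gem mechanism; adjust gem names and versions to your setup):
# config/environment.rb
Rails::Initializer.run do |config|
  config.gem 'paperclip'
  config.gem 'aws-s3', :lib => 'aws/s3'
end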
