I have just recently set up my Rails 3.2 app to use the carrierwave gem and upload files to S3. What I don't see is the ability to use a different bucket per uploader. Does anyone know if this is a possibility?
The bucket is specified via the fog_directory config. This configuration option is defined on the uploader and can simply be overridden with your own method.
Just add the following to your uploader:
def fog_directory
  'your-bucket-name' # return this uploader's bucket name
end
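For example, two uploaders could then write to two different buckets (the class and bucket names below are just placeholders):

class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog

  # files from this uploader land in the avatars bucket
  def fog_directory
    'myapp-avatars'
  end
end

class DocumentUploader < CarrierWave::Uploader::Base
  storage :fog

  # files from this uploader land in the documents bucket
  def fog_directory
    'myapp-documents'
  end
end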
The CarrierWave wiki explains how to use a separate S3 bucket for each uploader:
def initialize(*)
  super
  self.fog_credentials = {
    :provider              => 'AWS',           # required
    :aws_access_key_id     => 'YOURAWSKEYID',  # required
    :aws_secret_access_key => 'YOURAWSSECRET', # required
  }
  self.fog_directory = "YOURBUCKET"
end
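Dropped into a concrete uploader class, that looks like this (the class name and placeholder credentials are just examples):

class VideoUploader < CarrierWave::Uploader::Base
  storage :fog

  # per-uploader credentials and bucket, set when the uploader is instantiated
  def initialize(*)
    super
    self.fog_credentials = {
      :provider              => 'AWS',
      :aws_access_key_id     => 'YOURAWSKEYID',
      :aws_secret_access_key => 'YOURAWSSECRET',
    }
    self.fog_directory = "YOURBUCKET"
  end
end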
Multiple buckets are not currently supported by CarrierWave. You can separate files between uploaders by adding prefixes (folders) to the store_dir. Pull requests are welcome though if this is something you'd like to work on!
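For example, each uploader could claim its own prefix inside the single shared bucket (a minimal sketch; the class and prefix names are illustrative):

class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog

  # everything from this uploader is grouped under avatars/ in the shared bucket
  def store_dir
    "avatars/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end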
I followed this tutorial:
http://lifesforlearning.com/uploading-images-with-carrierwave-to-s3-on-rails/
I had a working CarrierWave uploader that was storing files to disk.
What I did step by step:
1) Added the fog gem, then ran bundle install and bundle update.
2) In config/initializers I created an r3.rb file with this:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'mykey',
    :aws_secret_access_key => 'mysecretkey',
    :region                => 'us-west-2' # Change this for a different AWS region.
  }
  config.fog_directory = "bucket-main"
end
I ran rails s and tried to save a photo, but my bucket stayed empty, so the files must still be going to my disk.
What do I do now?
Update: I changed storage to fog.
Here is my PhotoUploader class code:
# encoding: utf-8
class PhotoUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
And now I get this error:
hostname "bucket-main.bucket-main.s3-us-west-1.amazonaws.com" does not
match the server certificate (OpenSSL::SSL::SSLError)
I eventually solved my problem by running bundle update fog and bundle update carrierwave.
Try adding path_style to your fog_credentials, along with the fog_directory:

config.fog_credentials = {
  ...
  :path_style => true
}
config.fog_directory = 'bucket-main'

With :path_style => true, fog builds path-style URLs (https://s3.amazonaws.com/bucket-main/...) instead of virtual-hosted-style ones (https://bucket-main.s3.amazonaws.com/...), which sidesteps SSL certificate mismatches on the bucket subdomain.
I just spent a few hours tracking down the cause of this error, which I was also getting:
hostname "bucket-main.bucket-main.s3-us-west-1.amazonaws.com" does not match the server certificate (OpenSSL::SSL::SSLError)
The odd thing is how the bucket name is repeated twice in the hostname. It turned out I had configured the wrong region name. Notice in your config.fog_credentials you have
:region => 'us-west-2'
...but the hostname in the exception has s3-us-west-1? If your bucket is in one AWS region, but you configure a different region in your Fog credentials, Fog will try to follow a redirect from AWS, and somehow the bucket name gets doubled up in this situation. Fog produces a warning about the redirect, but Carrierwave ends up hiding this from you.
Set :region in your Fog credentials to the region where the bucket actually lives in AWS, and the "does not match the server certificate" exception will stop happening.
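Here the exception shows s3-us-west-1, so the bucket presumably lives in us-west-1 and the credentials should say so. A sketch, assuming that really is the bucket's region:

config.fog_credentials = {
  :provider              => 'AWS',
  :aws_access_key_id     => 'mykey',
  :aws_secret_access_key => 'mysecretkey',
  :region                => 'us-west-1' # must match the region the bucket was created in
}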
I have a working Heroku app. But since Heroku doesn't provide persistent file storage, I'd like to use Amazon S3.
I found the Heroku tutorial https://devcenter.heroku.com/articles/s3
But it seems confusing to me and maybe a little bit complicated.
Right now I use the carrierwave gem for storing files.
So maybe you can give me a small and simple example of the code you use to store files on Amazon S3?
UPDATE:
I found this code (is it really that simple: just these few lines for CarrierWave, add the fog gem, and that's all? Or will I need something else?):
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => "YOUR AMAZON ACCESS KEY",
    :aws_secret_access_key => "YOUR AMAZON SECRET KEY",
    :region                => 'us-west-1' # Change this for a different AWS region. Default is 'us-east-1'.
  }
  config.fog_directory = "bucket-name"
end
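Almost: on top of that initializer you also need gem 'fog' in your Gemfile and storage :fog in the uploader itself. A minimal sketch (the uploader name is just an example):

# app/uploaders/avatar_uploader.rb
class AvatarUploader < CarrierWave::Uploader::Base
  storage :fog # store via the fog/S3 credentials configured above

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end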
So this seems like it should be quite easy... everyone says just to use config.asset_host. When I set that, though, all the links inside my app still point to S3.
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => AWS_ACCESS_KEY_ID,
    :aws_secret_access_key => AWS_SECRET_ACCESS_KEY,
    :region                => 'us-east-1'
  }
  config.fog_authenticated_url_expiration = 3.hours
  config.asset_host = "http://xyz123.cloudfront.net"
  config.fog_directory = S3_BUCKET_NAME
  config.fog_public = false
  config.fog_attributes = {
    'Cache-Control' => "max-age=#{1.year.to_i}"
  }
end
Here is how I call my files:
image_tag book.attachments.first.filename.file.authenticated_url(:thumb175)
It looks to me like public_url prepends the proper host, but it takes no arguments... so how am I supposed to pass the proper response-content-disposition, response-content-type, and link expiry time?
I had the same problem and spent far too long figuring out the answer! It turns out that when you set fog_public = false CarrierWave will ignore config.asset_host. You can demo this by setting config.fog_public = true: your URLs will now be CloudFront URLs rather than S3 URLs. This issue has been raised previously:
https://github.com/carrierwaveuploader/carrierwave/issues/1158
https://github.com/carrierwaveuploader/carrierwave/issues/1215
In a recent project I was happy using CarrierWave to handle uploads to S3, but wanted it to return a signed CloudFront URL when using Model.attribute_url. I came up with the following (admittedly ugly) workaround that I hope others can benefit from or improve upon:
Add the 'cloudfront-signer' gem to your project and configure it per the instructions. Then add the following override of /lib/carrierwave/uploader/url.rb in a new file in config/initializers (note the multiple insertions of AWS::CF::Signer.sign_url):
module CarrierWave
  module Uploader
    module Url
      extend ActiveSupport::Concern
      include CarrierWave::Uploader::Configuration
      include CarrierWave::Utilities::Uri

      ##
      # === Parameters
      #
      # [Hash] optional, the query params (only AWS)
      #
      # === Returns
      #
      # [String] the location where this file is accessible via a url
      #
      def url(options = {})
        if file.respond_to?(:url) and not file.url.blank?
          file.method(:url).arity == 0 ? AWS::CF::Signer.sign_url(file.url) : AWS::CF::Signer.sign_url(file.url(options))
        elsif file.respond_to?(:path)
          path = encode_path(file.path.gsub(File.expand_path(root), ''))

          if host = asset_host
            if host.respond_to? :call
              AWS::CF::Signer.sign_url("#{host.call(file)}#{path}")
            else
              AWS::CF::Signer.sign_url("#{host}#{path}")
            end
          else
            AWS::CF::Signer.sign_url((base_path || "") + path)
          end
        end
      end
    end # Url
  end # Uploader
end # CarrierWave
Then override /lib/carrierwave/storage/fog.rb by adding the following to the bottom of the same file:
require "fog"
module CarrierWave
module Storage
class Fog < Abstract
class File
include CarrierWave::Utilities::Uri
def url
# Delete 'if statement' related to fog_public
public_url
end
end
end
end
end
Lastly, in config/initializers/carrierwave.rb:
config.asset_host = "http://d12345678.cloudfront.net"
config.fog_public = false
That's it. You can now use Model.attribute_url and it will return a signed CloudFront URL to a private file uploaded by CarrierWave to your S3 bucket.
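For example, the authenticated_url call from the question would become something like this (names taken from the question above):

image_tag book.attachments.first.filename.url(:thumb175)
# => a signed CloudFront URL such as
#    http://d12345678.cloudfront.net/uploads/...?Expires=...&Signature=...&Key-Pair-Id=...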
I think you found this out for yourself, but public URLs will not expire. If you want that, you'll need to use authenticated URLs. For public URLs, I think you can simply get the URL and append whatever query params you would like, at least for now. If that works well for you, we can certainly see about patching things to do the right thing.
Everything is working as expected locally, but once I push to Heroku I can no longer upload images.
The error code I get from heroku logs is:
Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))
The XML response contains: <Code>AccessDenied</Code><Message>Access Denied</Message>
My fog.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => ENV["ACCESS_KEY_ID"],
    :aws_secret_access_key => ENV["SECRET_ACCESS_KEY"]
    # :region => 'eu-west-1'
  }
  # Required for Heroku
  config.cache_dir = "#{Rails.root}/tmp/uploads"
  config.fog_directory = ENV["BUCKET_NAME"]
end
My Uploader:
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
Heroku has the correct environment variables - I used the figaro gem. I also set them manually after I got the 403 the first few times, to make sure figaro had no errors.
I thought this might be a problem with the region, but my bucket is in the US and the CarrierWave documentation says the default is us-east-1.
What is causing the issue on Heroku but not locally?
Forbidden may mean an issue with the configured directory (rather than the other credentials). Are you using the same BUCKET_NAME value both locally and on Heroku? Running heroku config:get BUCKET_NAME will print the value Heroku actually sees. I know I've certainly tried to use things with a different bucket that I had not yet created (which might also have given this error). So checking that the value is what you expect (and that the bucket already exists) are a couple of good starting points. Certainly happy to discuss and continue helping if that doesn't solve it for you though.
I have a Rails app that is using CarrierWave for file uploads. It has been working fine, but I want to start using Amazon S3 for my image storage. I am getting this error:
ArgumentError ( is not a recognized storage provider):
app/controllers/salons_controller.rb:52:in `update'
I have made sure I have the latest gems for Carrierwave and Fog. This is in my Gemfile:
gem 'carrierwave'
gem 'aws-sdk'
gem 'fog'
fog.rb looks like:
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'MYACCESSKEY',
    :aws_secret_access_key => 'MYSECRETKACCESSKEY',
    :region                => 'us-east-1'
  }
  config.fog_directory = 'andrunix'
  config.fog_public = true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' }
end
The Uploader class looks like:
class SalonImageUploader < CarrierWave::Uploader::Base
  include CarrierWave::RMagick
  storage :fog

  def store_dir
    # "andrunix" is the bucket name on S3
    "andrunix/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
If I change the storage back to :file, it works fine. Setting storage to :fog generates this error.
OK, I'm an idiot. :)
At some point, I don't know where, I added a fog.rb file with my CarrierWave configuration to the lib/carrierwave/storage directory. I got desperate, paid for a Railscasts subscription so I could watch episode #383 (http://railscasts.com/episodes/383-uploading-to-amazon-s3?autoplay=true), and at 3:02 I found the error of my ways. The CarrierWave configuration needs to be placed in config/initializers/carrierwave.rb.
I don't know where I got that other location, but once I moved the config to the proper place, everything was good.
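For reference, the fix is purely about file location; the shadowing note below is my guess at why the error occurs:

# Wrong: lib/carrierwave/storage/fog.rb
#   (a file here can shadow CarrierWave's own fog storage class, which may be
#   why :fog was "not a recognized storage provider")
# Right: config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'MYACCESSKEY',
    :aws_secret_access_key => 'MYSECRETKACCESSKEY',
    :region                => 'us-east-1'
  }
  config.fog_directory = 'andrunix'
end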
I just ran into the same problem, and people should be aware that any typo in the config file config/initializers/carrierwave.rb leads to that error.