I am attempting to use CarrierWave with Amazon S3 in my Rails app, and I keep getting the error
"Excon::Errors::Forbidden (Expected(200) <=> Actual(403 Forbidden))"
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.
I also receive the warning
"[WARNING] fog: the specified s3 bucket name() is not a valid dns name, which will negatively impact performance. For details see: http://docs.amazonwebservices.com/AmazonS3/latest/dev/BucketRestrictions.html"
config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider: 'AWS',
    aws_access_key_id: ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_ACCESS_KEY"]
  }
  config.fog_directory = ENV["AWS_BUCKET"]
end
My bucket name is "buildinprogress"
I've double-checked that my access key ID and secret access key are correct.
How can I fix this error?
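For reference, here is roughly how I'm verifying the values (a quick rails console sketch using the same ENV names as the initializer; the empty name() in the fog warning makes me suspect one of them is nil):

ENV["AWS_ACCESS_KEY_ID"]  # => should be the key ID, not nil
ENV["AWS_ACCESS_KEY"]     # => should be the secret key, not nil
ENV["AWS_BUCKET"]         # => should be "buildinprogress", not nil or empty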
This is a problem with Fog/Excon that kept throwing random errors for me too.
My fix was to remove gem 'fog' and replace it with gem 'carrierwave-aws' instead.
Then, in your *_uploader.rb, change
storage :fog ---> storage :aws
and update your carrierwave.rb file, for example:
CarrierWave.configure do |config|
  config.storage    = :aws # required
  config.aws_bucket = ENV['S3_BUCKET'] # required
  config.aws_acl    = :public_read
  config.aws_credentials = {
    access_key_id:     ENV['S3_KEY'],   # required
    secret_access_key: ENV['S3_SECRET'] # required
  }
  config.aws_attributes = {
    'Cache-Control' => "max-age=#{365.day.to_i}",
    'Expires'       => 'Tue, 29 Dec 2015 23:23:23 GMT'
  }
end
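For completeness, the matching uploader change mentioned above would look something like this (a minimal sketch; the class name and store_dir are examples, not from the original post):

class ImageUploader < CarrierWave::Uploader::Base
  storage :aws # was `storage :fog` before switching to carrierwave-aws

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end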
For more info, check out the carrierwave-aws GitHub page.
Related
Everything had been great until I checked out a new branch, and now I get this error. I use Figaro, which generated an application.yml to store the env variables for the AWS credentials. I have successfully been able to deploy to Heroku and use my AWS keys to upload pics, etc. to my bucket. Then I checked out a new branch and got this error. I even went back to the old branch where everything was just peachy, and the error won't go away. I'm frustrated. I even go into the terminal and do echo $aws_access_key_id and I don't get nil, I get the access key. Something doesn't add up...
fog.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['aws_access_key_id'],
    aws_secret_access_key: ENV['aws_secret_access_key'],
    region:                'us-east-1'
  }
  config.fog_directory = ENV['AWS_BUCKET']

  if Rails.env.development? || Rails.env.test?
    CarrierWave.configure do |config|
      config.storage = :file
    end
  end

  # Use AWS storage if in production
  if Rails.env.production?
    CarrierWave.configure do |config|
      config.storage = :fog
    end
  end
end
application.yml
aws_access_key_id: "key"
aws_secret_access_key: "key"
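For what it's worth, this is the kind of check I run to see whether Figaro is actually loading those values on the new branch (just a rails console sketch using the key names from application.yml above):

# Both should return the key, not nil:
ENV['aws_access_key_id']
Figaro.env.aws_access_key_id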
I'm trying to set up CarrierWave and Fog to handle image and file uploads on a Rails app that I have hosted on AWS Elastic Beanstalk.
I'm a little confused about how to properly set up the Fog config.
I tried using my AWS access and secret keys (commented out in the example below). That threw an error in my EB CLI (ERROR: NotAuthorizedError - Operation Denied. The security token included in the request is invalid.)
I'm trying to use an IAM instance profile instead of having my access/secret keys in my Ruby code. Can anyone tell me how to set this up properly?
Here's my config file:
CarrierWave.configure do |config|
  # Use local storage if in development or test
  if Rails.env.development? || Rails.env.test?
    CarrierWave.configure do |config|
      config.storage = :file
    end
  end

  # Use AWS storage if in production
  if Rails.env.production?
    CarrierWave.configure do |config|
      config.storage = :fog
    end
  end

  config.fog_credentials = {
    :provider        => 'AWS', # required
    # :aws_access_key_id     => 'My Access', # required
    # :aws_secret_access_key => 'My Secret', # required
    :use_iam_profile => true,
    :region          => 'eu-west-2' # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'elasticbeanstalk-us-west-2-XXXXXXXXXX' # required
  # config.fog_host = 'https://assets.example.com' # optional, defaults to nil
  config.fog_public = false # optional, defaults to true
  config.fog_attributes = { 'Cache-Control' => 'max-age=315576000' } # optional, defaults to {}
end
This is a setup that works for me:
config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws' # required
  config.fog_credentials = {
    provider:              'AWS',                        # required
    aws_access_key_id:     ENV['aws_access_key_id'],     # required
    aws_secret_access_key: ENV['aws_secret_access_key'], # required
    # region:   'Singapore',      # optional, defaults to 'us-east-1'
    # host:     's3.example.com', # optional, defaults to nil
    # endpoint: 'olucube-images.s3-website-ap-southeast-1.amazonaws.com', # optional, defaults to nil
  }
  config.fog_directory = ENV['fog_directory'] # required
  # config.fog_public = false # optional, defaults to true
  # config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
and I used the Figaro gem to hold my credentials as follows:
config/application.yml
aws_access_key_id: 'XXXXXXXXXXXXXXXXXXXX'
aws_secret_access_key: 'XXXXXXXXXXXXXXXXXX'
fog_directory: 'myAppName'
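In case Figaro isn't set up yet, the usual installation looks roughly like this (a sketch of the standard Figaro workflow, not part of the original setup):

# Gemfile
gem 'figaro'

# then, from the shell:
#   bundle exec figaro install
# which generates config/application.yml and adds it to .gitignore,
# so the keys above stay out of version control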
This was a bit of a wild ride. I had a hard time figuring out the Figaro gem. It's probably simple, but I didn't really understand it. So, as a test, I put my keys directly in the code. It still didn't work.
I pushed my code to GitHub (publicly) and didn't think too much of it. I was going to change the keys just in case. Before I was able to do that, someone found my code on GitHub and gained access to my AWS account. They started a bunch of EC2 instances and racked up $3,000 worth of usage in a few hours!
My AWS account got suspended and I'm still dealing with having the charges reversed.
Anyway, I found out that you can actually set environment variables in the Elastic Beanstalk web interface. It's under Configuration → Software Configuration. So I did that instead of using Figaro (much safer, IMO). Now it works great. I simplified my CarrierWave config file to only use AWS, reading the environment variables from EB. Here's the file:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider:              'AWS',
    aws_access_key_id:     ENV['S3_KEY'],
    aws_secret_access_key: ENV['S3_SECRET'],
    region:                ENV['S3_REGION']
  }
  config.fog_directory = ENV['S3_BUCKET']
  config.fog_public    = false
  config.storage       = :fog
end
I changed my uploader files to use fog too. Here's an example:
# app/uploaders/image_uploader.rb
class ImageUploader < CarrierWave::Uploader::Base
  storage :fog

  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  def extension_white_list
    %w(jpg jpeg gif png)
  end
end
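The uploader is mounted on the model in the usual CarrierWave way (a minimal sketch; the model and attribute names here are just examples):

# app/models/post.rb (example model)
class Post < ActiveRecord::Base
  mount_uploader :image, ImageUploader
end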
Everything works great now. I hope this helps someone else.
I have a problem with authentication. When I try to upload, I get this:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
initializer carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              'AWS',                        # required
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],     # required
    aws_secret_access_key: ENV["AWS_ACCESS_SECRET_KEY"], # required
  }
  config.fog_directory  = ENV["AWS_BUCKET_NAME"] # required
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
gem "carrierwave"
gem "fog"
gem "figaro"
I set the environment variables, and I also created a bucket in S3.
Is there anything I am missing here? It works locally, though.
I also changed image_uploader.rb to use storage :fog.
I do uploads to Amazon S3 with CarrierWave and that works fine.
But now I want to add a delete function. I tried this:
AWS::S3::S3Object.delete(@vid.video, 'bucket')
I got this error:
uninitialized constant MoviesController::AWS
The reason is clear. But how do I set the AWS constant correctly, and where?
config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => '----',
    :aws_secret_access_key => '----',
    :region                => 'eu-central-1',
  }
  config.fog_use_ssl_for_aws = false
  config.fog_directory       = 'bucekt'
  config.storage             = :fog
end
You must first configure the AWS SDK gem; the uninitialized constant error means it isn't loaded, and note that in version 2+ of the aws-sdk gem the top-level module is Aws, not AWS. Add this code to the config/initializers/aws.rb file.
Aws.config.update({
  region: '<default-region>',
  credentials: Aws::Credentials.new('<access-key-id>', '<secret-access-key>')
})
You can also set the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION on your server and the SDK will pick them up automatically.
Then, anywhere in your app or a controller action, you can call the S3 API like this:
def some_action
  # You can simply call Aws::S3::Client.new if you are already
  # configuring the SDK using the methods above, or configure the
  # client by passing parameters explicitly.
  s3_client = Aws::S3::Client.new(
    credentials: Aws::Credentials.new('<aws_access_key_id>', '<aws_secret_key>'),
    region: '<aws_region>'
  )

  # Delete an object by passing the bucket and object key
  s3_response = s3_client.delete_object({
    bucket: '<bucket-name>', # required
    key: '<object-key>',     # required
  })
end
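Since the upload itself goes through CarrierWave, another option (a sketch, assuming the file is mounted on the model with mount_uploader :video) is to let CarrierWave delete the stored file instead of calling the S3 client directly:

# Removes the file from S3 via the mounted uploader
@vid.remove_video!
@vid.save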
I have just migrated from Paperclip to CarrierWave and managed to get uploading to S3 working locally on my machine. But after deploying the Rails application to my server (which uses Ubuntu and Passenger with nginx; it was a mission getting it to work), when I try to upload an image it tries to save it to public/uploads/..., which comes up with a permission denied error. I have looked and searched everywhere to find out why it's not working, and have found nothing.
My Uploader file:
class AvatarUploader < CarrierWave::Uploader::Base
  include CarrierWave::Compatibility::Paperclip

  storage :fog

  def store_dir
    "/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end
fog.rb
CarrierWave.configure do |config|
  config.storage = :fog
  config.fog_credentials = {
    provider:              'AWS',            # required
    aws_access_key_id:     '*******',        # required
    aws_secret_access_key: '********',       # required
    region:                'ap-southeast-2', # optional, defaults to 'us-east-1'
  }
  config.fog_directory = 'publicrant' # required
  # config.fog_public = false
  config.fog_attributes = { 'Cache-Control' => "max-age=#{365.day.to_i}" } # optional, defaults to {}
end
OK, so after hours of googling and failing miserably to find a solution, it turns out that in a production environment CarrierWave does need to write the file temporarily to uploads/tmp before it pushes it to the S3 bucket.
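A related knob, in case the default cache location isn't writable on the server, is CarrierWave's cache directory, which can be pointed at Rails' tmp directory instead (a sketch, not part of the original fix):

class AvatarUploader < CarrierWave::Uploader::Base
  # Cache uploads under tmp/ (usually writable in production)
  # instead of the default public/uploads/tmp
  def cache_dir
    "#{Rails.root}/tmp/uploads"
  end
end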
It seems like you don't have read permissions for other users (o+r). Check it using the command:
namei -lm <absolute path to your current/public>
and grant read permissions:
chmod o+r <directory>
In your case, I think it will be the /home/<user> directory.