I'm trying to set up Amazon Simple Storage Service (S3) for use with Rails. I'm getting this error message:
The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
The problem is that I chose the Frankfurt S3 region, and there only the V4 scheme is supported.
It's the same error message as in this post, which directs you to the solution
here, with instructions to "set the :s3_signature_version parameter to :v4 when constructing the client". The command is:
s3 = AWS::S3::Client.new(:s3_signature_version => :v4)
My question is, how do I do this? Where do I put this code?
EDIT:
I tried putting :s3_signature_version => :v4 in carrier_wave.rb as follows, but when deploying to Heroku it said [fog][WARNING] Unrecognized arguments: s3_signature_version, and it made no difference: I still get the error.
config/initializers/carrier_wave.rb:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :s3_signature_version  => :v4
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end
EDIT:
I've created a new bucket using the Northern California region, for which this isn't supposed to be a problem, but I'm still getting exactly the same error message.
EDIT:
This doesn't make any difference either:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      # Configuration for Amazon S3
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY']
    }
    config.fog_directory = ENV['S3_BUCKET']
    config.fog_attributes = {:s3_signature_version => :v4}
  end
end
I had the problem that Spree v2.3 was pinned to aws-sdk v1.27.0, but the parameter s3_signature_version was only introduced in v1.31.0 (and set by default for China).
So in my case the following configuration for Frankfurt was completely ignored:
AWS.config(
  region: 'eu-central-1',
  s3_signature_version: :v4
)
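If you are stuck on the same pin, one possible way out (assuming a newer aws-sdk is actually compatible with your Spree version; check its gemspec first) is to force a more recent gem in your Gemfile:

  # Gemfile -- s3_signature_version was introduced in aws-sdk v1.31.0,
  # so anything older silently ignores the option above
  gem 'aws-sdk', '>= 1.31.0'

Then run bundle update aws-sdk so the lockfile picks it up.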
I found this old question from the other direction, trying to take the advice in https://github.com/fog/fog/issues/3450 and set the signature to version 2 (to test a hypothesis). Delving into the source a bit, it turns out the magic phrase is :aws_signature_version => 4, like this:
config.fog_credentials = {
  # Configuration for Amazon S3
  :provider              => 'AWS',
  :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
  :aws_secret_access_key => ENV['S3_SECRET_KEY'],
  :aws_signature_version => 4
}
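To answer the asker's "where do I put this?": with CarrierWave and fog the hash goes in the initializer, so the whole file would look something like this sketch (the ENV names are the asker's; the region line is my assumption for a Frankfurt bucket):

  # config/initializers/carrier_wave.rb
  if Rails.env.production?
    CarrierWave.configure do |config|
      config.fog_credentials = {
        :provider              => 'AWS',
        :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
        :aws_secret_access_key => ENV['S3_SECRET_KEY'],
        :region                => 'eu-central-1',  # Frankfurt only speaks V4
        :aws_signature_version => 4                # note aws_, not s3_
      }
      config.fog_directory = ENV['S3_BUCKET']
    end
  end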
I had this same problem and could not find any guidance on where to implement the s3_signature_version: :v4 setting.
In the end, I basically deleted the existing bucket in Frankfurt and created one in the US Standard region, and it works (after updating the permissions policy attached to the user accessing the bucket to reflect that the bucket has changed).
I would love to have the bucket in Frankfurt, but I don't have another 16 hours to spend going round in circles with this issue, so if anybody is able to add a bit more direction on how to incorporate the s3_signature_version: :v4 line, that would be great.
For other users following Michael Hartl's Rails Tutorial:
you (might*) need at least v1.26 of the 'fog' gem. Modify your Gemfile accordingly, and don't forget to run $ bundle install.
*The reason is that some S3 buckets require authorization signature version 4. In the future probably all of them will, and Frankfurt (region eu-central-1) already requires v4 authorization.
This has been supported since fog v1.26:
https://github.com/fog/fog/blob/v1.26.0/lib/fog/aws/storage.rb
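A sketch of the corresponding Gemfile change (the exact constraint is your call; anything from 1.26.0 up carries the V4 signing code linked above):

  # Gemfile -- fog gained V4 signature support in v1.26.0
  gem 'fog', '>= 1.26.0'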
I created a Rails API, but I have a problem with image upload.
I'm using CarrierWave; the picture upload works, but I get a wrong link.
Example:
This is the link I find in the RESTful API:
https://s3.eu-west-2.amazonaws.com/gpsql/uploads/driver/picture/35/imagename.png
But when I check the S3 storage I find a different link:
https://s3.eu-west-2.amazonaws.com/gpsql/gpsql/gpsql/uploads/driver/picture/35/imagename.png
This is the initializer for S3 in CarrierWave:
CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'  # required
  config.fog_credentials = {
    provider:              'AWS',  # required
    aws_access_key_id:     '...',  # required
    aws_secret_access_key: '...',  # required
    region:                'us-west-2',
    path_style:            true,
  }
  config.fog_directory = 'gpsql'  # required
  config.asset_host = 'https://s3.eu-west-2.amazonaws.com/gpsql'
  config.fog_attributes = {'Cache-Control' => "max-age=#{365.day.to_i}"}  # optional, defaults to {}
end
In the picture uploader:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
end
How can I fix the link that is shown in the RESTful API? And why does the bucket name appear so many times in the Amazon link, instead of something straightforward like link/bucketname/image.png?
The first link, the one from the RESTful API, doesn't work at all; I get access denied or key not found. The second one, from the S3 console, works without any problem.
One of the problems is this:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com/gpsql'
It should be:
config.asset_host = 'https://s3.eu-west-2.amazonaws.com'
Anyway, I don't know why it's repeating twice...
So, if you can, you should fix it in the configuration and move the folder in S3 to the proper place.
If you can't move it, I would try to change the store dir to "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}", as in the sketch below.
I'm not sure if that works, but that would be my first step.
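A sketch of that fallback in the uploader (the doubled gpsql/gpsql prefix is taken from the paths above; verify it against what the S3 console actually shows):

  # app/uploaders/picture_uploader.rb -- prepend the stray prefixes so
  # generated URLs line up with where the files actually landed
  def store_dir
    "gpsql/gpsql/uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end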
I'm writing a Rails 3 app that uses Paperclip to transcode a video file attachment into a bunch of other formats, and then to store the resulting files. It all works fine for local storage, but I am trying to make it work using Paperclip's Fog support to store files in a bucket on our own Ceph cluster. However, I can't seem to find the right configuration options to make Fog talk to my Ceph server.
Here is a snippet from my Rails class:
has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => { :aws_access_key_id => 'xxx', :aws_secret_access_key => 'xxx', :provider => 'AWS' },
  :fog_public => true,
  :url => ":id/:filename",
  :fog_directory => 'replay',
  :fog_host => 'my-hostname'
Writes using this setup fail because Paperclip attempts to save to Amazon S3 rather than the host I've provided. I have a non-Rails / non-Paperclip toy script working just fine:
conn = Fog::Storage.new({
  :aws_access_key_id     => 'xxx',
  :aws_secret_access_key => 'xxx',
  :host                  => 'my-hostname',
  :path_style            => true,
  :provider              => "AWS",
})
This correctly connects to my local Ceph server. So I suspect there is something I'm not configuring in Paperclip properly - but what?
Here's the relevant hunk from fog.rb that I think is causing the connection to only go to AWS:
def host_name_for_directory
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "#{@options[:fog_directory]}.s3.amazonaws.com"
  else
    "s3.amazonaws.com/#{@options[:fog_directory]}"
  end
end
It turns out the error was just from an improperly configured Ceph cluster. For anyone who finds this thread, Paperclip will work out of the box as long as you:
- have your wildcard DNS set up properly for your Ceph frontend;
- have Ceph configured to recognize it as such;
- pass :host in :fog_credentials, which should be the FQDN of the Ceph frontend;
- set :fog_host, which apparently needs to be the URL for your bucket, e.g. https://bucket.ceph-server.foobar.com.
I don't think it is documented anywhere that you can use :host, but it works; see the sketch below.
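Putting those points together with the snippet from the question, the model declaration would look something like this (hostnames are placeholders; :host inside :fog_credentials is the undocumented part):

  has_attached_file :videofile,
    :storage => :fog,
    :fog_credentials => {
      :provider              => 'AWS',
      :aws_access_key_id     => 'xxx',
      :aws_secret_access_key => 'xxx',
      :host                  => 'ceph-server.foobar.com',  # FQDN of the Ceph frontend
      :path_style            => true
    },
    :fog_public => true,
    :url => ":id/:filename",
    :fog_directory => 'replay',
    :fog_host => 'https://replay.ceph-server.foobar.com'  # URL of the bucket itself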
I have (what I thought was) a perfectly working Heroku-Carrierwave-AWS.
I can upload images like a charm.
I now need to send the respective images, via a JSON request, to an app. This was working on my test server, but for some reason I'm getting the following in my Heroku logs from my Rails call:
Started POST "/downloadUserPhotos" for ?? at 2014-05-03 03:27:38 +0000
Errno::ENOENT (No such file or directory @ rb_sysopen - uploads/photo/mainphoto/1/largeimage.jpg):
app/controllers/stats_controller.rb:22:in `read'
app/controllers/stats_controller.rb:22:in `downloadPhotos'
I'm pretty sure this has something to do with the following Ruby/Rails code:
def downloadPhotos
  @photos = Photo.find_by_user_id(current_user.id)
  @mainphoto = Base64.strict_encode64(File.read(@photos.mainphoto.current_path))
end
When I use my console on Heroku and type the following:
@photos = Photo.find(1)
It works and I get the correct record shown. When I ask for current_path for mainphoto, I get:
irb(main):002:0> @photos.mainphoto.current_path
=> "uploads/photo/mainphoto/1/largeimage.jpg"
So, it knows it exists. And it's in the right place.
Can anyone enlighten me (or point me in the right direction) as to why I can't use File.read? And, more importantly, how do I get it to read the image file and encode it?
This has perplexed me somewhat.
I've tried using @photos.mainphoto.url, but other than giving me the whole URL, it still doesn't find the file using File.read.
My CarrierWave config is:
CarrierWave.configure do |config|
  config.fog_credentials = {
    # Configuration for Amazon S3
    :provider              => 'AWS',
    :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
    :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
    :region                => ENV['S3_REGION']
  }
  config.cache_dir = "#{Rails.root}/tmp/uploads"
  config.fog_directory = ENV['S3_BUCKET_NAME']
end
And I have the following in my Uploader:
def store_dir
  "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.user.id}"
end
Thanks in advance.
In the line:
@mainphoto = Base64.strict_encode64(File.read(@photos.mainphoto.current_path))
change current_path to url.
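The reason: on Heroku, current_path points at a local file that no longer exists once the upload has gone to S3, so the image has to be fetched over HTTP rather than read from disk. A minimal sketch using open-uri (my addition, not part of the original answer):

  require 'open-uri'

  def downloadPhotos
    @photos = Photo.find_by_user_id(current_user.id)
    # Fetch the image from S3 over HTTP instead of the local filesystem
    @mainphoto = Base64.strict_encode64(URI.open(@photos.mainphoto.url).read)
  end

(URI.open needs Ruby 2.5+; on older Rubies, open-uri's plain open does the same job.)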
I am currently working on a project with Rails, and I need to import the existing server details from Amazon using the fog library.
I have tried some initial code to get access to AWS, and at this point I have got a connection with the credentials.
The issue is that when I go on to get the instance details, nothing is returned.
require 'fog'

aws_credentials = {
  :aws_access_key_id     => "ACCESS ID",
  :aws_secret_access_key => "SECRET ID"
}

conn2 = Fog::Compute.new(aws_credentials.merge(:provider => 'AWS'))

conn2.servers.all.each do |i|
  puts i.id
end
Could anyone please help me fix this behavior?
The Amazon provider in fog defaults to the us-east-1 region, so it could be that your servers are in another region. You can specify a different region by passing :region to your Fog::Compute constructor. Valid regions include ['ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2', 'eu-west-1', 'sa-east-1', 'us-east-1', 'us-west-1', 'us-west-2'].
So for instance, if you are using region ap-northeast-1, your code would look like the following:
require 'fog'

aws_credentials = {
  :aws_access_key_id     => "ACCESS ID",
  :aws_secret_access_key => "SECRET ID"
}

conn2 = Fog::Compute.new(aws_credentials.merge(:provider => 'AWS', :region => 'ap-northeast-1'))

conn2.servers.all.each do |i|
  puts i.id
end
I get this warning whenever I start the Rails server or console:
[WARNING] fog: the specified s3 bucket name(hesaplabakalim-production/assets/new_opengraph) is not a valid dns name, which will negatively impact performance.
My fog configuration looks like this:
connection = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => "dummy",
  :aws_secret_access_key => "dummy"
})

$directory = connection.directories.create(
  :key    => "dummy/assets/new_opengraph",
  :public => true
)
What I actually want is to create a bucket named dummy and then store files under its assets/new_opengraph folder, but I could not find how to do that in the fog documentation.
I searched on the fog gem's GitHub page and found this issue and solution:
"on the empty folder, I have used zero byte files that we hide in the console that gives the semblance of creating empty folders. Cheap trick but works."
https://github.com/fog/fog/issues/1370
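A sketch of that approach: the warning comes from the slashes in the directory key (the key should be just the DNS-valid bucket name, dummy), and the "folder" only exists as a prefix on object keys, optionally seeded with a zero-byte placeholder as the issue suggests. Names are the asker's placeholders:

  require 'fog'

  connection = Fog::Storage.new({
    :provider              => 'AWS',
    :aws_access_key_id     => "dummy",
    :aws_secret_access_key => "dummy"
  })

  # The directory key is only the bucket name -- no slashes
  $directory = connection.directories.create(
    :key    => "dummy",
    :public => true
  )

  # "Folders" in S3 are just key prefixes; a zero-byte object makes the
  # empty folder visible in the console (the cheap trick from the issue)
  $directory.files.create(
    :key    => "assets/new_opengraph/.keep",
    :body   => "",
    :public => true
  )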