I get this warning whenever I run rails server or rails console:
[WARNING] fog: the specified s3 bucket name(hesaplabakalim-production/assets/new_opengraph) is not a valid dns name, which will negatively impact performance.
My fog configuration looks like this:
connection = Fog::Storage.new({
:provider => 'AWS',
:aws_access_key_id => "dummy",
:aws_secret_access_key => "dummy"
})
$directory = connection.directories.create(
:key => "dummy/assets/new_opengraph",
:public => true
)
I actually need to create a bucket named dummy and then walk down into the assets/new_opengraph folder, but I could not find how to do that in the fog documentation.
I searched the fog gem's GitHub page and found this issue and solution:
"on the empty folder, I have used zero byte files that we hide in the console that gives the semblance of creating empty folders. Cheap trick but works."
https://github.com/fog/fog/issues/1370
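A minimal sketch of that trick with fog, reusing the connection from above (untested; the bucket name stays slash-free so it is a valid DNS name, and the zero-byte object whose key ends in "/" is what clients display as a folder):
directory = connection.directories.create(
  :key    => "dummy",                # bucket name only -- the path does not belong here
  :public => true
)
# Zero-byte placeholder object: the "cheap trick" empty folder.
directory.files.create(
  :key    => "assets/new_opengraph/",
  :body   => "",
  :public => true
)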
I am using Fog Storage to upload my files to S3, and I have to check whether a folder exists or not. The prefix lookup doesn't fit, because it only matches the beginning of a key; I need something that matches the folder name exactly.
Structure of the S3 bucket:
mynewbucket (bucket name)
nhdata-231 (folder/directory name)
rsadata-56787 (folder/directory name)
pfadata-1456 (folder/directory name)
I have to check whether a specific folder is present or not. Here is my code:
s3 = Fog::Storage.new({
:provider => 'AWS',
:aws_access_key_id => ENV["ACCESSKEYID"],
:aws_secret_access_key => ENV["SECRETACCESSKEY"],
:region => 'us-east-2'
})
directory = s3.directories.get(ENV["BUCKET"])
# Here I need to check whether the folder already exists before creating anything.
file = directory.files.create(key: full_bucket_path, public: true)
file.body = image_contents
file.save
file.public_url
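In case it helps, here is a rough, untested sketch of an exact-match check. It assumes "folders" either exist as zero-byte placeholder keys ending in "/" or only implicitly as prefixes of real object keys; the helper name is made up for illustration:
# Hypothetical helper, not part of fog itself.
def folder_exists?(s3, bucket, folder_name)
  directory = s3.directories.get(bucket)
  # Case 1: the folder was created as a zero-byte placeholder like "nhdata-231/".
  return true if directory.files.head("#{folder_name}/")

  # Case 2: the folder only exists implicitly via keys underneath it.
  # Listing with a delimiter returns the top-level "folders" as CommonPrefixes,
  # which can be compared exactly rather than as a starts-with match.
  resp = s3.get_bucket(bucket, 'delimiter' => '/')
  resp.body['CommonPrefixes'].include?("#{folder_name}/")
end

folder_exists?(s3, ENV["BUCKET"], "nhdata-231")  # => true or false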
I have created a new S3 bucket on aws.
I have a certificate issue I didn't have with the same code on my original bucket.
Here is the code:
AWS.config(access_key_id: AWS_ACCESS_KEY_ID, secret_access_key:AWS_SECRET_ACCESS_KEY, region: S3_REGION)
s3 = AWS::S3.new
bucket = s3.buckets[S3_BUCKET_NAME]
@resp = bucket.objects.with_prefix('categories/' + @category.id.to_s + "/")
@resp.each do |item|
end
This returns the following error when @resp.each is executed:
hostname does not match the server certificate (OpenSSL::SSL::SSLError)
ENV variables were updated with new region and new bucket name
Uploading images is working
@resp is returning AWS::S3::ObjectCollection:0x007f815e099d18
my bucket name doesn't contain dots
Is there something to configure on AWS S3 to avoid this error?
I was having the same issue, and I solved it by doing:
Aws::S3::Client.new(
:access_key_id => 'YOUR_ACCESS_KEY_ID',
:secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
:region => 'YOUR_REGION',
:force_path_style => true)
Basically, by also specifying path-style addressing.
Let me know if it works!
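If you are still on the older aws-sdk v1 interface that the question uses (AWS::S3), I believe the corresponding option is s3_force_path_style, along these lines:
AWS.config(
  access_key_id: AWS_ACCESS_KEY_ID,
  secret_access_key: AWS_SECRET_ACCESS_KEY,
  region: S3_REGION,
  s3_force_path_style: true  # path-style URLs keep the hostname as the plain S3 endpoint,
                             # so the wildcard certificate matches
)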
I'm trying to set up the Amazon Simple Storage Service for use with rails. I'm getting this error message:
The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
The problem is that I chose the Frankfurt S3 region, and there only the V4 scheme is supported.
It's the same error message as in this post, which directs you to the solution here, with instructions on how to "set the :s3_signature_version parameter to :v4 when constructing the client". The command is:
s3 = AWS::S3::Client.new(:s3_signature_version => :v4)
My question is, how do I do this? Where do I put this code?
EDIT:
I tried putting :s3_signature_version => :v4 in carrier_wave.rb as follows, but during the deploy to Heroku it said [fog][WARNING] Unrecognized arguments: s3_signature_version, and it didn't make any difference; I still get the error.
config/initializers/carrier_wave.rb:
if Rails.env.production?
CarrierWave.configure do |config|
config.fog_credentials = {
# Configuration for Amazon S3
:provider => 'AWS',
:aws_access_key_id => ENV['S3_ACCESS_KEY'],
:aws_secret_access_key => ENV['S3_SECRET_KEY'],
:s3_signature_version => :v4
}
config.fog_directory = ENV['S3_BUCKET']
end
end
EDIT:
I've created a new bucket using the Northern California region, for which this isn't supposed to be a problem, but I'm still getting exactly the same error message.
EDIT:
This doesn't make any difference either:
if Rails.env.production?
CarrierWave.configure do |config|
config.fog_credentials = {
# Configuration for Amazon S3
:provider => 'AWS',
:aws_access_key_id => ENV['S3_ACCESS_KEY'],
:aws_secret_access_key => ENV['S3_SECRET_KEY']
}
config.fog_directory = ENV['S3_BUCKET']
config.fog_attributes = {:s3_signature_version => :v4}
end
end
I had the problem that Spree v2.3 was pinned to aws-sdk v1.27.0, but the parameter s3_signature_version was only introduced in v1.31.0 (and set by default for China).
So in my case the following configuration for Frankfurt was completely ignored:
AWS.config(
region: 'eu-central-1',
s3_signature_version: :v4
)
I found this old question from the other direction: I was trying to take the advice in https://github.com/fog/fog/issues/3450 and set the signature to version 2 (to test a hypothesis). Delving into the source a bit, it turns out the magic phrase is :aws_signature_version => 4, so like this:
config.fog_credentials = {
# Configuration for Amazon S3
:provider => 'AWS',
:aws_access_key_id => ENV['S3_ACCESS_KEY'],
:aws_secret_access_key => ENV['S3_SECRET_KEY'],
:aws_signature_version => 4
}
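For what it's worth, a sketch of how that key might slot into the asker's initializer; the :region value here is just the Frankfurt example from the question, so adjust it to your bucket's region:
if Rails.env.production?
  CarrierWave.configure do |config|
    config.fog_credentials = {
      :provider              => 'AWS',
      :aws_access_key_id     => ENV['S3_ACCESS_KEY'],
      :aws_secret_access_key => ENV['S3_SECRET_KEY'],
      :region                => 'eu-central-1',   # Frankfurt; v4 signing is region-aware
      :aws_signature_version => 4                 # note: aws_..., not s3_...
    }
    config.fog_directory = ENV['S3_BUCKET']
  end
end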
I had this same problem and could not find any guidance on where to implement the s3_signature_version: :v4 command.
In the end, I basically deleted the existing bucket in Frankfurt and created one in the Standard US region, and it works (after updating the permissions policy attached to the user accessing the bucket to reflect that the bucket has changed).
I would love to have the bucket in Frankfurt, but I don't have another 16 hours to spend going around in circles with this issue. So if anybody is able to add a bit more direction on how to incorporate the s3_signature_version: :v4 line, that would be great.
For other users following Michael Hartl's Rails Tutorial:
you (might*) need at least v1.26 of the 'fog' gem. Modify your Gemfile accordingly, and don't forget to run '$ bundle install'.
*the reason is that some S3 buckets require authorization signature version 4. In the future probably all of them will, and at least Frankfurt (zone eu-central-1) requires v4 authorization.
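A minimal sketch of that Gemfile change (pin to whatever newer version suits your app):
# Gemfile
gem 'fog', '>= 1.26.0'
# then run: bundle install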
This has been supported since fog v1.26:
https://github.com/fog/fog/blob/v1.26.0/lib/fog/aws/storage.rb
I'm writing a Rails 3 app that uses Paperclip to transcode a video file attachment into a bunch of other formats, and then to store the resulting files. It all works fine for local storage, but I am trying to make it work using Paperclip's Fog support to store files in a bucket on our own Ceph cluster. However, I can't seem to find the right configuration options to make Fog talk to my Ceph server.
Here is a snippet from my Rails class:
has_attached_file :videofile,
:storage => :fog,
:fog_credentials => { :aws_access_key_id => 'xxx', :aws_secret_access_key => 'xxx', :provider => 'AWS'},
:fog_public => true,
:url => ":id/:filename",
:fog_directory => 'replay',
:fog_host => 'my-hostname',
Writes using this setup fail because Paperclip attempts to save to Amazon S3 rather than the host I've provided. I have a non-Rails / non-Paperclip toy script working just fine:
conn = Fog::Storage.new({
:aws_access_key_id => 'xxx',
:aws_secret_access_key => 'xxx',
:host => 'my-hostname',
:path_style => true,
:provider => "AWS",
})
This correctly connects to my local Ceph server. So I suspect there is something I'm not configuring in Paperclip properly - but what?
Here's the relevant hunk from fog.rb that I think is causing the connection to only go to AWS:
def host_name_for_directory
  if @options[:fog_directory].to_s =~ Fog::AWS_BUCKET_SUBDOMAIN_RESTRICTON_REGEX
    "#{@options[:fog_directory]}.s3.amazonaws.com"
  else
    "s3.amazonaws.com/#{@options[:fog_directory]}"
  end
end
The error was just from an improperly configured Ceph cluster. For anyone who finds this thread, as long as you:
have your wildcard DNS set up properly for your Ceph frontend;
have Ceph configured to recognize it as such;
pass in :host in :fog_credentials, which should be the FQDN of the Ceph frontend; and
set :fog_host, which apparently needs to be the URL for your bucket, e.g. https://bucket.ceph-server.foobar.com,
then Paperclip will work out of the box. I don't think it is documented anywhere that you can use :host, but it works.
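Putting that together, a sketch of what the Paperclip options might look like; the hostnames below are placeholders for your own Ceph frontend and bucket:
has_attached_file :videofile,
  :storage => :fog,
  :fog_credentials => {
    :provider               => 'AWS',
    :aws_access_key_id      => 'xxx',
    :aws_secret_access_key  => 'xxx',
    :host                   => 'ceph.example.com',   # FQDN of the Ceph frontend (placeholder)
    :path_style             => true
  },
  :fog_directory => 'replay',
  :fog_host      => 'https://replay.ceph.example.com',  # URL for the bucket (placeholder)
  :fog_public    => true,
  :url           => ":id/:filename"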
I am currently working on a Rails project, and I need to import the existing server details from Amazon using the fog library.
I have tried some initial code to get access to AWS, and at this point I have established the connection with the credentials.
The issue is that when I go on to fetch the instance details, it does not return anything.
require 'fog'
aws_credentials = {
:aws_access_key_id => "ACCESS ID"
:aws_secret_access_key "SECRET ID"
}
conn2 = Fog::Compute.new(aws_credentials.merge(:provider => 'AWS'))
conn2.servers.all.each do |i|
puts i.id
end
Could anyone please help me fix this behavior?
The Amazon provider in fog defaults to the us-east-1 region, so it could be that your servers are in another region. You can specify a different region by passing :region into your Fog::Compute constructor. Valid regions include 'ap-northeast-1', 'ap-southeast-1', 'ap-southeast-2', 'eu-west-1', 'sa-east-1', 'us-east-1', 'us-west-1', and 'us-west-2'.
So for instance if you are using region ap-northeast-1, your code would look like the following:
require 'fog'
aws_credentials = {
:aws_access_key_id => "ACCESS ID"
:aws_secret_access_key "SECRET ID"
}
conn2 = Fog::Compute.new(aws_credentials.merge(:provider => 'AWS', :region => 'ap-northeast-1' ))
conn2.servers.all.each do |i|
puts i.id
end