Check specific folder exists on s3 bucket - ruby-on-rails

I am using Fog Storage to upload my files to S3, and I need to check whether a folder exists or not. A plain prefix lookup is not enough, because it only matches the start of a key; I need something that matches the folder name exactly.
Structure of the S3 bucket:
mynewbucket (bucket name)
  nhdata-231 (folder/directory name)
  rsadata-56787 (folder/directory name)
  pfadata-1456 (folder/directory name)
I have to check whether a specific folder is present or not. Here is my code:
s3 = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => ENV["ACCESSKEYID"],
  :aws_secret_access_key => ENV["SECRETACCESSKEY"],
  :region                => 'us-east-2'
})
directory = s3.directories.get(ENV["BUCKET"])
# Here I have to check, before creating any folder, whether it already exists or not.
file = directory.files.create(key: full_bucket_path, public: true)
file.body = image_contents
file.save
file.public_url
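One possible way to check, sketched under the assumption that a "folder" in S3 is either a zero-byte marker object (e.g. created from the console) or just a shared key prefix of the objects inside it; the folder name below is only an example:

folder_name = "nhdata-231"        # hypothetical folder to look for
folder_key  = "#{folder_name}/"   # trailing slash makes the match exact, so "nhdata" will not match "nhdata-231"

# Case 1: the folder exists as a zero-byte marker object.
folder_exists = !directory.files.head(folder_key).nil?

# Case 2: the folder only exists implicitly through the keys of the objects inside it.
folder_exists ||= directory.files.all('prefix' => folder_key).any?

puts "#{folder_name} already exists" if folder_exists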

Related

amazon s3 variables not working in heroku environment rails

Hello, I have included the code below:
def store_s3(file)
  # Create a connection with Amazon S3
  AWS.config(access_key_id: ENV['S3_ACCESS_KEY'], secret_access_key: ENV['S3_SECRET'])
  s3 = AWS::S3.new
  bucket = s3.buckets[ENV['S3_BUCKET_LABELS']]
  object = bucket.objects[File.basename(file)]
  # `file` here is the path to the file, not its contents
  # file_data = File.open(file, 'rb')
  object.write(file: file)
  # save the file and return a URL to download it
  object.url_for(:read, response_content_type: 'text/csv')
end
This code works correctly on my local machine and the data is stored in Amazon, but after deploying to Heroku (where I also set the environment variables) it stopped working.
Is there anything I am missing here? Please let me know the cause of the issue.
I don't see a region in your example; is S3_Hostname your region? For me, the region was just something like 'us-west-2'.
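For example, with the same aws-sdk v1 style used in the question, a minimal sketch would be to pass the region explicitly (S3_REGION is a hypothetical env var here):

# Same connection as in the question, but with an explicit region, e.g. 'us-west-2'.
AWS.config(
  access_key_id:     ENV['S3_ACCESS_KEY'],
  secret_access_key: ENV['S3_SECRET'],
  region:            ENV['S3_REGION']   # hypothetical variable name
)
s3 = AWS::S3.new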
If you want to set up S3 with CarrierWave and the fog gem, you can do it like this in config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.fog_provider  = 'fog/aws'
  config.fog_directory = 'name for s3 directory'
  config.fog_credentials = {
    :provider              => 'AWS',
    :aws_access_key_id     => 'your access key',
    :aws_secret_access_key => 'your secret key',
    :region                => 'your region ex: eu-west-2'
  }
end

"hostname does not match the server certificate" with aws-sdk-ruby when accessing S3 objects

I have created a new S3 bucket on AWS.
I have a certificate issue that I didn't have with the same code on my original bucket.
Here is the code:
AWS.config(access_key_id: AWS_ACCESS_KEY_ID, secret_access_key: AWS_SECRET_ACCESS_KEY, region: S3_REGION)
s3 = AWS::S3.new
bucket = s3.buckets[S3_BUCKET_NAME]
@resp = bucket.objects.with_prefix('categories/' + @category.id.to_s + "/")
@resp.each do |item|
end
This returns the following error when @resp.each is executed:
hostname does not match the server certificate (OpenSSL::SSL::SSLError)
ENV variables were updated with the new region and new bucket name.
Uploading images is working.
@resp returns an AWS::S3::ObjectCollection:0x007f815e099d18.
My bucket name doesn't contain dots.
Is there something to configure on AWS S3 to avoid this error?
I was having the same issue, and I solved it by doing:
Aws::S3::Client.new(
  :access_key_id     => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :region            => 'YOUR_REGION',
  :force_path_style  => true
)
Basically, by also forcing path-style addressing.
Let me know if it works!
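If you are still on the older aws-sdk v1 API from the question (AWS::S3 rather than Aws::S3::Client), the equivalent option is, as far as I know, s3_force_path_style; a hedged sketch:

# Force path-style URLs (s3.amazonaws.com/<bucket>/...) instead of
# virtual-hosted style (<bucket>.s3.amazonaws.com/...), which is what
# trips the wildcard certificate check in some setups.
AWS.config(
  access_key_id:       AWS_ACCESS_KEY_ID,
  secret_access_key:   AWS_SECRET_ACCESS_KEY,
  region:              S3_REGION,
  s3_force_path_style: true
)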

Extra object in Google Cloud Storage when using Fog

I am following the fog.io/storage example for creating a directory and then uploading a file. Everything works great when I push my file into Google Cloud Storage, except there is always an extra "binary/octet-stream" object named exactly like the deepest directory I create.
My code is very similar to the AWS example in that I create a directory and, from that new directory, create a file. The directory structure is created properly and the file is uploaded properly, but there is always an extra, 0-byte file. My code looks like:
job_number = 100
connection = Fog::Storage.new({
  :provider                         => 'Google',
  :google_storage_access_key_id     => YOUR_GCE_ACCESS_KEY_ID,
  :google_storage_secret_access_key => YOUR_GCE_SECRET_ACCESS_KEY
})
directory = connection.directories.create(
  :key    => "test-project/uploads/#{job_number}",
  :public => false
)
file = directory.files.create(
  :key          => 'file.pdf',
  :content_type => 'application/pdf',
  :body         => File.open("/path/to/my/file.pdf"),
  :public       => false
)
The directory structure is perfect (gs://test-project/uploads/100 folder exists) and the file.pdf file exists in that directory as well (gs://test-project/uploads/100/file.pdf).
The problem is that after the:
directory = connection.directories.create(
  :key    => "test-project/uploads/#{job_number}",
  :public => false
)
command runs, there is a file at gs://test-project/uploads/100 as well as a directory gs://test-project/uploads/100/. When I walk through the code, the connection.directories.create(...) command is definitely creating the extra file but I cannot figure out why.
I have also tried to add a trailing slash to the key value for the connection.directories.create(...) command but that actually creates a different directory structure problem that is worse than this (this isn't bad, just annoying).
Has anyone seen this or know how to correctly have the directory structure created through Fog?
Instead of creating the directory path all the way down to the file, just create/get the base directory (the bucket) and then save the file with the rest of the path in its key. Cloud storage has no real directories, so creating a "directory" with a nested key writes a zero-byte placeholder object at that key, which is the extra binary/octet-stream file you are seeing. It would look like this:
job_number = 100
connection = Fog::Storage.new({
  :provider                         => 'Google',
  :google_storage_access_key_id     => YOUR_GCE_ACCESS_KEY_ID,
  :google_storage_secret_access_key => YOUR_GCE_SECRET_ACCESS_KEY
})
directory = connection.directories.create(
  :key    => "test-project",
  :public => false
)
file = directory.files.create(
  :key          => "uploads/#{job_number}/file.pdf", # double quotes so #{job_number} is interpolated
  :content_type => 'application/pdf',
  :body         => File.open("/path/to/my/file.pdf"),
  :public       => false
)
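If the stray zero-byte object from an earlier run is still in the bucket, a rough cleanup sketch (assuming it sits at the key uploads/100 inside the test-project bucket) could be:

# Remove the leftover zero-byte marker object, but only if it really is empty.
bucket = connection.directories.get("test-project")
stray  = bucket.files.get("uploads/#{job_number}")
stray.destroy if stray && stray.content_length.to_i == 0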

Paperclip/Carrierwave: How do I prevent my images from being uploaded to a nameless folder?

I'm using Paperclip and Carrierwave to upload images to S3. Currently, images for a certain model are being uploaded to a nameless folder in the root of the bucket. How can I ensure the folder they're uploaded to doesn't have an empty name? Here's the relevant code from the Paperclip/Carrierwave initializer
fog_credentials = {
  :provider              => "AWS",
  :aws_access_key_id     => ENV['AWS_ACCESS_KEY_ID'],
  :aws_secret_access_key => ENV['AWS_SECRET_ACCESS_KEY'],
}

# Carrierwave
config.storage         = :fog
config.fog_credentials = fog_credentials
config.fog_directory   = ENV['AWS_S3_BUCKET']
config.fog_public      = true

# Paperclip
Paperclip::Attachment.default_options[:storage]         = :fog
Paperclip::Attachment.default_options[:fog_credentials] = fog_credentials
Paperclip::Attachment.default_options[:fog_directory]   = ENV['AWS_S3_BUCKET']
Paperclip::Attachment.default_options[:fog_host]        = ENV['AWS_S3_ASSET_HOST']
Paperclip::Attachment.default_options[:url]             = ":class/:id_partition/:attachment/:style/:filename"
Paperclip::Attachment.default_options[:path]            = ":url"
Edited:
I forgot to mention that I'm using Spree, which seems to rewrite these options somewhere along the line.
I changed the URL and path options by setting them explicitly:
Spree::Image.attachment_definitions[:attachment][:url] = "spree/products/:id/:style/:basename.:extension"
Spree::Image.attachment_definitions[:attachment][:path] = "spree/products/:id/:style/:basename.:extension"
Their defaults were both prefixed by a slash. According to the paperclip-aws gem, prefixing these options with a slash will create a nameless folder in the root of the bucket.
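For a plain (non-Spree) setup the same rule applies to the initializer options from the question; as an illustration of the key mapping, not a definitive fix:

# Without a leading slash the object keys start with e.g. "products/..." as expected.
Paperclip::Attachment.default_options[:path] = ":class/:id_partition/:attachment/:style/:filename"
# With a leading slash the keys start with "/products/...", which S3 shows
# as a nameless folder at the root of the bucket.
# Paperclip::Attachment.default_options[:path] = "/:class/:id_partition/:attachment/:style/:filename"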

Amazon s3 bucket name warning from Fog gem

I get this warning whenever I start the Rails server or console:
[WARNING] fog: the specified s3 bucket name(hesaplabakalim-production/assets/new_opengraph) is not a valid dns name, which will negatively impact performance.
My fog configuration looks like this:
connection = Fog::Storage.new({
  :provider              => 'AWS',
  :aws_access_key_id     => "dummy",
  :aws_secret_access_key => "dummy"
})
$directory = connection.directories.create(
  :key    => "dummy/assets/new_opengraph",
  :public => true
)
I actually need to create a bucket named dummy and then work inside the assets/new_opengraph folder, but I could not find how to do that in the fog documentation.
I searched on the fog gem's GitHub page and found this issue and solution:
"on the empty folder, I have used zero byte files that we hide in the console that gives the semblance of creating empty folders. Cheap trick but works."
https://github.com/fog/fog/issues/1370
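In practice that means creating the directory with just the bucket name and moving the folder path into the file key; a rough sketch along those lines (the file name is made up):

# Create (or get) the bucket with a DNS-valid name only, no slashes,
# so fog no longer warns about the bucket name.
$directory = connection.directories.create(
  :key    => "dummy",
  :public => true
)

# Put the "folder" path into the object key instead.
$directory.files.create(
  :key    => "assets/new_opengraph/example.png",   # hypothetical file
  :body   => File.open("/path/to/example.png"),
  :public => true
)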
