How to verify AWS S3 put response in Rails - ruby-on-rails

I am using the aws-sdk-s3 gem in my Rails project.
Create an S3 resource
s3 = Aws::S3::Resource.new(access_key_id: #####,
secret_access_key: #####,
region: 'us-east-1')
Create an S3 object
path = 'sample'
key = 'test.csv'
obj = s3.bucket(bucket_name).object("#{path}#{key}")
Store CSV in S3
obj.put(body: csv_response, content_type: 'text/csv')
How can I verify that the put method stored the CSV in S3 without any issues?
Is there a status code available on the put response to check?

Two ways to go about it:
Store the result. It should be a PutObjectOutput type object; see the official documentation for the put method.
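A minimal sketch of that first approach, reusing obj and csv_response from the question: a failed upload raises an error, so if put returns at all S3 already accepted the object, and the returned struct carries the ETag.
begin
  resp = obj.put(body: csv_response, content_type: 'text/csv')
  # put only returns if S3 accepted the object; resp is the PutObjectOutput
  Rails.logger.info("Stored CSV, ETag: #{resp.etag}")
rescue Aws::S3::Errors::ServiceError => e
  # the upload failed; e.message explains why
  Rails.logger.error("S3 upload failed: #{e.message}")
end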
The second way to go about it is to make an exists? call right after your put request completes. Something like this:
s3 = Aws::S3::Resource.new(region: 'ap-southeast-1') # change to the region you use
obj = s3.bucket('bucket-name').object("path/to/object/in/bucket")
if obj.exists?
  # Object was uploaded successfully!
else
  # No it wasn't!
end
Hope that helps!

One way I've seen other people do it is to calculate an MD5 hash of the original file before upload and then match it against the ETag value in the response from obj.put.
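A rough sketch of that idea, assuming a single-part upload without SSE-KMS (multipart and KMS-encrypted uploads produce ETags that are not plain MD5 hashes):
require 'digest/md5'

local_md5 = Digest::MD5.hexdigest(csv_response)
resp = obj.put(body: csv_response, content_type: 'text/csv')

# S3 wraps the ETag in double quotes; strip them before comparing
if resp.etag.delete('"') == local_md5
  # body stored intact
else
  # checksums differ, treat as a failed upload
end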

Related

Add a tag while uploading an object to Amazon S3 in Rails

I'm trying to upload CSV files to Amazon S3.
I'm able to add metadata using the below code snippet:
s3_obj.upload_file(file_to_be_uploaded, { content_type: "application/octet-stream" })
How can I add tags (key-value pairs), for example tag = { marked_to_delete: "true" }, while uploading?
You should be able to do that by passing tagging: "marked_to_delete=true" as an option.
Options are passed through to Aws::S3::Client#put_object. The docs give a similar example:
resp = client.put_object({
  body: "filetoupload",
  bucket: "examplebucket",
  key: "exampleobject",
  server_side_encryption: "AES256",
  tagging: "key1=value1&key2=value2",
})
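Applied to the upload_file call from the question, something like this should work (a sketch; upload_file forwards extra options such as tagging on to put_object):
s3_obj.upload_file(file_to_be_uploaded,
                   content_type: "application/octet-stream",
                   tagging: "marked_to_delete=true")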

How to test the AWS S3 get object method?

I am using aws sdk for ruby to retrieve an object from a bucket then read it. My code is something like:
def import_from_s3
  # initiate the client
  s3 = Aws::S3::Client.new({
    region: region,
    access_key_id: key_id,
    secret_access_key: secret
  })
  # get the object
  resp = s3.get_object(bucket: bucket, key: key)
end
My question is how do I test this method without mocking it?
Here is the documentation on how to go about it.
Stubbing the aws client response
I used the default stub and it worked just fine.
Aws.config[:s3] = {stub_responses: {get_object: {body: StringIO.new("XYZ")}}}
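For context, a minimal RSpec-style sketch of how that stub can be used (importer is a hypothetical object that defines import_from_s3):
it "reads the object body from S3" do
  Aws.config[:s3] = { stub_responses: { get_object: { body: StringIO.new("XYZ") } } }
  resp = importer.import_from_s3
  expect(resp.body.read).to eq("XYZ")
end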
You don't need to (and you shouldn't even try to) test #get_object. That is not implemented by your code, and you should assume it has been tested and works. As for your method #import_from_s3, you have two options: either don't test it, since it is just a thin wrapper around #get_object, or make assertions/expectations on its return value.

Uploading a file to AWS S3 with ACL set to public_read

In my Rails app I save customer RMA shipping labels to an S3 bucket on creation. I just updated to V2 of the aws-sdk gem, and now my code for setting the ACL doesn't work.
Code that worked in V1.X:
# Saves label to S3 bucket
s3 = AWS::S3.new
obj = s3.buckets[ENV['S3_BUCKET_NAME']].objects["#{shippinglabel_filename}"]
obj.write(open(label.label('pdf').postage_label.label_pdf_url, 'rb'), :acl => :public_read)
.write seems to have been deprecated, so I'm using .put now. Everything is working, except when I try to set the ACL.
New code for V2.0:
# Saves label to S3 bucket
s3 = Aws::S3::Resource.new
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object("#{shippinglabel_filename}")
obj.put(Base64.decode64(label_base64), { :acl => :public_read })
I get an Aws::S3::Errors::InvalidArgument error, pointed at the ACL.
This code works for me:
photo_obj = bucket.object object_name
photo_obj.upload_file path, {acl: 'public-read'}
So you need to use the string 'public-read' for the ACL. I found this by looking at an example in object.rb.
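Applied to the put call from the question, that would look something like this (a sketch; note that put takes the body as an option rather than a positional argument):
obj.put(body: Base64.decode64(label_base64), acl: 'public-read')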

How to get an S3 object URL that works with CloudFront?

I'm currently storing files privately on S3. In my Rails app, in the attachment.rb model I can obtain a public URL for the private file like so:
def cdn_url ( style='original' )
attachment.s3_object(style).url_for( :read, secure: true, response_content_type: self.meta['file_content_type'], expires: 1.hour ).to_s
end
The problem is that this provides a URL directly to S3, and rewriting the URL to use my CloudFront origin URL errors with:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
How can I get a public asset URL like the one below, but serve the asset via CloudFront?
First Way (Easy)
Just use the aws_cf_signer gem. Add it to your Gemfile.
With this you can do something like:
def cdn_url(options = {})
  style = options[:style] || 'original'
  cloudfront_domain = options[:cloudfront_domain] || 'example.cloudfront.net'
  cloudfront_pem_key_path = options[:cloudfront_pem_key_path]
  cloudfront_key_pair_id = options[:cloudfront_key_pair_id]
  path = attachment.path(style) # path of the file
  # you can get these values from your AWS account, most likely at
  # https://console.aws.amazon.com/iam/home?#security_credential
  signer = AwsCfSigner.new(cloudfront_pem_key_path, cloudfront_key_pair_id)
  # this configuration may vary;
  # visit https://github.com/dylanvaughn/aws_cf_signer
  # and check all available settings/options
  url = signer.sign(path, :ending => Time.now + 3600)
  cloudfront_domain + url
end
With this you can access the URL with something like:
cdn_url(cloudfront_pem_key_path: '/users/downloads/pri.pem', cloudfront_key_pair_id: '33243424XXX')
Second way
# A simple function to return a signed, expiring URL for Amazon CloudFront.
# Requires the openssl and base64 standard libraries.
require 'openssl'
require 'base64'

module CloudFront
  def self.get_signed_expiring_url(domain, path, expires_in, private_key_filename, key_pair_id)
    # AWS works on UTC, so make sure you are not using local time
    expires = (Time.now.getutc + expires_in).to_i.to_s
    private_key = OpenSSL::PKey::RSA.new(File.read(private_key_filename))
    # path should be your S3 path without a leading slash and without a file extension,
    # e.g. files/private/52
    policy = %Q[{"Statement":[{"Resource":"#{path}","Condition":{"DateLessThan":{"AWS:EpochTime":#{expires}}}}]}]
    signature = Base64.strict_encode64(private_key.sign(OpenSSL::Digest::SHA1.new, policy))
    # I'm not sure exactly why this is required, but it's in Amazon's perl script and seems necessary.
    # Different base64 implementations maybe?
    signature.tr!("+=/", "-_~")
    "#{domain}#{path}?Expires=#{expires}&Signature=#{signature}&Key-Pair-Id=#{key_pair_id}"
  end
end
With this you can do something like:
def cdn_url(style = 'original', cloudfront_pem_key_path, key_pair_id)
  path = attachment.path(style) # path of the file
  CloudFront.get_signed_expiring_url('example.cloudfront.net', path, 45.seconds, cloudfront_pem_key_path, key_pair_id)
end
Give it a try; it may work. Be sure to set the bucket access policy properly. Check this out if you are seeing an AccessDenied error: http://www.jppinto.com/2011/12/access-denied-to-file-amazon-s3-bucket/
Use the aws-sdk gem.
See the API documentation for details about generating a presigned URL for an operation on the object.
Provide the access_key_id and secret_access_key:
S3 = AWS::S3.new(
  :access_key_id => 'access_key_id',
  :secret_access_key => 'secret_access_key')
In the controller, put these lines:
bucket = S3.buckets['bucket_name']
s3_obj = bucket.objects["Path-to-file"]
return s3_obj.url_for(:read, :expires => 60*60).to_s
This link will expire in 1 hour. After that the link will not be accessible.
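If you are on v2 or later of the aws-sdk gem, the resource interface exposes the same idea through Object#presigned_url; a rough sketch:
s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('bucket_name').object('Path-to-file')
obj.presigned_url(:get, expires_in: 3600) # expires in 1 hour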

S3 bucket: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint

Hi, I'm using the code below to get the size of a bucket. I researched all over, but the only way was to loop through each file. While looping through, some buckets seem to have been created in a different region, and I end up with the following error:
AWS::S3::PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint. from /home//.rvm/gems/ruby-1.9.2-p180/gems/aws-s3-0.6.2/lib/aws/s3/error.rb:38:in `raise'
The endpoint is us-west-1.
I need help fixing the above issue, and I also need to know how to switch my code dynamically to the region where the bucket belongs. Suggestions on adding exception handling in case of failure are welcome too. Below is my code.
Please feel free to comment.
def get_bucket
  s3 = AWS::S3::Base.establish_connection!(:access_key_id => @config[:ACCESS_KEY_ID], :secret_access_key => @config[:SECRET_ACCESS_KEY])
  if !s3.nil?
    AWS::S3::Service.buckets.each do |bucket|
      puts bucket.inspect
      if !bucket.nil?
        size = 0
        # I'm hard-coding the bucket names below so the code does not fail
        if ![
          'cf-templates-m01ixtvp0jr0-us-west-1',
          'cf-templates-m01ixtvp0jr0-us-west-2',
          'elasticbeanstalk-us-west-1-767904627276',
          'elasticbeanstalk-us-west-1-akiai7bucgnrthi66w6a',
          'medidata-rave-cdn'
        ].include? bucket.name
          bucket_size = AWS::S3::Bucket.find(bucket.name)
          if !bucket_size.nil?
            bucket_size.each do |obj|
              if !obj.nil?
                size += obj.size.to_i
              end
            end
          end
        end
        load_bucket(bucket.name, bucket.creation_date, size, @config[:ACCOUNT_NAME])
      end
    end
  end
end
The problem is that buckets can exist in different regions, and while you can list all buckets from the same connection (unlike other AWS entities that are locked to the location they were created in), other operations on buckets require you to log in to the specific "endpoint" (region) to which they are constrained.
My solution is to check where the bucket is located and then re-login to that region:
s3 = AWS::S3.new(@awscreds)
if s3.buckets[bucket].location_constraint != @awscreds[:region] then
  # need to re-login, otherwise the S3 upload will fail
  s3 = AWS::S3.new(@awscreds.merge(region: s3.buckets[bucket].location_constraint))
end
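Folded into the question's loop, that check could look roughly like this (a sketch against the aws-sdk v1 API used in this answer, not the older aws-s3 gem from the question; a nil location_constraint means US Standard):
s3 = AWS::S3.new(@awscreds)
s3.buckets.each do |bucket|
  # check where this bucket lives and re-connect there if needed
  region = bucket.location_constraint
  unless region.nil? || region == @awscreds[:region]
    bucket = AWS::S3.new(@awscreds.merge(region: region)).buckets[bucket.name]
  end
  size = bucket.objects.inject(0) { |sum, obj| sum + obj.content_length }
  puts "#{bucket.name}: #{size} bytes"
end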
I don't understand how you're building the URL to access your bucket.
If it's in US Standard, you can say http://s3.amazonaws.com/BUCKETNAME/path/to/file. If it's anywhere else, that doesn't work and you use http://BUCKETNAME.s3.amazonaws.com/path/to/file instead (non-coincidentally, bucket names outside US Standard are limited to domain-allowed characters, lowercase letters and numbers only).
This article may be of help: http://docs.aws.amazon.com/general/latest/gr/rande.html
