Recommended way to generate a presigned url to S3 Bucket in Ruby - ruby-on-rails

I am trying to generate a pre-signed url on my Rails server to send to the browser, so that the browser can upload to S3.
It seems like aws-sdk-s3 is the gem to use going forward, but unfortunately I haven't come across documentation for it that provides clarity. There seem to be a few different ways of doing this, and I would appreciate any guidance on the differences between the following methods:
Using Aws::S3::Presigner.new (https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/lib/aws-sdk-core/s3/presigner.rb), but it doesn't seem to take an object parameter or auth credentials.
Using Aws::S3::Resource.new, but it seems like aws-sdk-resources is not going to be maintained. (https://aws.amazon.com/blogs/developer/upgrading-from-version-2-to-version-3-of-the-aws-sdk-for-ruby-2/)
Using Aws::S3::Object.new and then calling the put method on that object.
Using AWS::SigV4 directly.
I am wondering how they differ, and what the implications are of choosing one over the other. Any recommendations are much appreciated, especially with aws-sdk-s3.
Thank you!

So, thanks to the tips by #strognjz above, here is what worked for me using `aws-sdk-s3`.
require 'aws-sdk-s3'

# credentials below for the IAM user I am using
s3 = Aws::S3::Client.new(
  region: 'us-west-2', # or any other region
  access_key_id: AWS_ACCESS_KEY_ID,
  secret_access_key: AWS_SECRET_ACCESS_KEY
)

signer = Aws::S3::Presigner.new(client: s3)

url = signer.presigned_url(
  :put_object,
  bucket: S3_BUCKET_NAME,
  key: "${filename}-#{SecureRandom.uuid}"
)
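To sanity-check the URL before wiring it into the browser, a plain HTTP PUT against it should succeed. A minimal sketch using Net::HTTP from the standard library (the local file path is hypothetical):

require 'net/http'

uri = URI(url)
request = Net::HTTP::Put.new(uri)
request.body = File.read('path/to/local-file') # hypothetical file

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code # "200" means the upload succeeded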

This will work using the aws-sdk-s3 gem:
aws_client = Aws::S3::Client.new(
  region: 'us-west-2', # or any other region
  access_key_id: AWS_ACCESS_KEY_ID,
  secret_access_key: AWS_SECRET_ACCESS_KEY
)

s3 = Aws::S3::Resource.new(client: aws_client)
bucket = s3.bucket('bucket-name')
obj = bucket.object("${filename}-#{SecureRandom.uuid}")
url = obj.presigned_url(:put)

Additional HTTP verbs:

obj.presigned_url(:put)
obj.presigned_url(:head)
obj.presigned_url(:delete)
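presigned_url also accepts options such as expires_in (in seconds; the SDK's default is 900, i.e. 15 minutes, if I remember correctly). For example:

url = obj.presigned_url(:put, expires_in: 3600) # URL valid for one hour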

Related

AWS::S3::Errors::InvalidAccessKeyId with valid credentials

I'm getting the following error when trying to upload a file to an S3 bucket:
AWS::S3::Errors::InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The file exists, the bucket exists, the bucket allows uploads, the credentials are correct, and using CyberDuck with the same credentials I can connect and upload files to that bucket just fine. Most answers around here point to the credentials being overridden by environment variables; that is not the case here. I've tried passing them directly as strings, and outputting them just to make sure, and it's the right credentials.
v1
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret'
)

s3 = AWS::S3.new
bucket = s3.buckets['bucket-name']
obj = bucket.objects['filename']
obj.write(file: 'path-to-file', acl: 'private')
This is using the v1 version of the gem (aws-sdk-v1), but I've also tried v3 and I get the same error.
v3
Aws.config.update({
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key_id', 'secret')
})

s3 = Aws::S3::Resource.new(region: 'eu-west-1')
bucket = s3.bucket('bucket-name')
obj = bucket.object('filename')
ok = obj.upload_file('path-to-file')
Note: the error is thrown on the obj.write line.
Note 2: This is a rake task from a Ruby on Rails 4 app.
Finally figured it out: the problem was that, because we are using a custom endpoint, the credentials were not found. I guess that works differently with custom endpoints.
To specify the custom endpoint you'll need to use a config option that, for some reason, is not documented (or at least I didn't find it anywhere). I actually had to go through paperclip's code to see how those guys were handling this.
Anyway, here's what the config for v1 looks like with the added endpoint option:
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret',
  :s3_endpoint => 'custom.endpoint.com'
)
Hopefully that will save somebody some time.
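For anyone on v2/v3 of the SDK, the equivalent appears to be the documented :endpoint option on the client. A minimal sketch, assuming the same custom endpoint as above (I haven't tested this against a custom endpoint myself):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key_id', 'secret'),
  endpoint: 'https://custom.endpoint.com' # v2/v3 replacement for :s3_endpoint
)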

How to upload images for ruby on rails app using paperclip gem onto IBM Bluemix Object Storage using S3 API?

I have been using the tutorial https://devcenter.heroku.com/articles/paperclip-s3 for adding images to S3 storage on AWS, but now I want to use IBM's Object Storage, which supports the S3 API (using gem 'aws-sdk').
Using below:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('REGION'),
  }
}
where REGION = 'us-geo'
gives the error
Seahorse::Client::NetworkingError (Failed to open TCP connection to david-provest-movies.s3.us-geo.amazonaws.com:443 (getaddrinfo: Name or service not known)).
How would I change the 'david-provest-movies.s3.us-geo.amazonaws.com:443' to the desired 'david-provest-movies.s3-api.us-geo.objectstorage.softlayer.net' URL?
Link to the API:
https://developer.ibm.com/recipes/tutorials/cloud-object-storage-s3-api-intro/
Thank you :)
That is not a valid region for S3. The valid S3 regions are:
us-east-2
us-west-1
us-west-2
ca-central-1
ap-south-1
ap-northeast-2
ap-southeast-1
ap-southeast-2
ap-northeast-1
eu-central-1
eu-west-1
eu-west-2
sa-east-1
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
Here is the relevant note from the Bluemix documentation:
Note: Using third-party tools or SDKs may enforce setting a ‘LocationConstraint’ when creating a bucket. If required, this value must be set to ‘us-standard’. Unlike AWS S3, IBM COS uses this value to indicate the storage class of an object. This field has no relation to physical geography or region - those values are provided within the endpoint used to connect to the service. Currently, the only permitted values for ‘LocationConstraint’ are ‘us-standard’, ‘us-vault’, ‘us-cold’, ‘us-flex’, ‘us-south-standard’, ‘us-south-vault’, ‘us-south-cold’, and ‘us-south-flex’.
I haven't actually tried to use paperclip yet, but the issue here is that the IBM Cloud endpoint needs to be specified. I'll take a closer look at the paperclip docs, but it looks like you need something along the lines of specifying the bucket URL (where {bucket-name} is hard-coded in the snippet below but could be constructed) or some other method to explicitly specify the host name or root URL. There's a chance that paperclip has the AWS endpoint hard-coded, and the team at IBM will need to contribute a method for setting a custom endpoint (which is fully supported in the aws-sdk gem when creating clients).
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('REGION'),
    bucket_url: 'https://{bucket-name}.s3-api.us-geo.objectstorage.softlayer.net'
  }
}
Let me know if it seems like using an explicit bucket URL doesn't work, and I'll look into whether the endpoint is hardcoded in paperclip and what we can do to fix it.
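If the explicit bucket URL doesn't work, another avenue worth trying is paperclip's s3_options hash, which (as far as I can tell from paperclip's source) is passed through to the underlying aws-sdk client, so the SDK's documented :endpoint option should apply. A hedged sketch:

config.paperclip_defaults = {
  storage: :s3,
  s3_region: ENV.fetch('REGION'),
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
  },
  # forwarded to the aws-sdk client, where :endpoint is a documented option
  s3_options: {
    endpoint: 'https://s3-api.us-geo.objectstorage.softlayer.net'
  }
}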

How to test the AWS S3 get object method?

I am using the AWS SDK for Ruby to retrieve an object from a bucket and then read it. My code is something like:
def import_from_s3
  # initiate the client
  s3 = Aws::S3::Client.new({
    region: region,
    access_key_id: key_id,
    secret_access_key: secret
  })

  # get the object
  resp = s3.get_object(bucket: bucket, key: key)
end
My question is how do I test this method without mocking it?
Here is the documentation on how to go about it: stubbing the AWS client response.
I used the default stub and it worked just fine.
Aws.config[:s3] = { stub_responses: { get_object: { body: StringIO.new("XYZ") } } }
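If you prefer a per-test client over the global config, the SDK's built-in stubbing can also be enabled on the client itself. A minimal sketch (the bucket and key names are hypothetical):

require 'aws-sdk-s3'

# stub_responses: true makes the client return stubbed data and skip the network
s3 = Aws::S3::Client.new(stub_responses: true)
s3.stub_responses(:get_object, body: StringIO.new("XYZ"))

resp = s3.get_object(bucket: 'any-bucket', key: 'any-key')
puts resp.body.read # => "XYZ"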
You don't need to (and shouldn't even try to) test #get_object. That is not implemented by your code, and you should assume it has been tested and works. As for your method #import_from_s3, you have two options: either don't test it, since it is just a thin wrapper around #get_object, or make assertions/expectations on its return value.

Rails Amazon S3 authorizing private files using presigned urls

I have the following problem,
In my Rails 4 app I am hosting images and videos on S3. Currently all the files are public, and I can access an image, for example, by storing its public link in the database.
However, I want some of the images and videos to be private.
I looked at the presigned URL options using the following:
s3 = Aws::S3::Client.new(
  region: AWS_REGION,
  access_key_id: S3_CONFIG['access_key_id'],
  secret_access_key: S3_CONFIG['secret_access_key']
)

resource = Aws::S3::Resource.new(client: s3)
bucket = resource.bucket(BUCKET_NAME)

utilities = bucket.objects(prefix: '/folder').each do |obj|
  obj.presigned_url(:get, expires_in: 3600).to_s
end
This works fine, but how would I use the presigned URLs, since I obviously can't store them in the DB like the public links?
I am using aws-sdk version 2.
I am also wondering whether this is a good solution in general.
Thanks for any hints,
Jean
Here is the Presigner Doc
Example:
signer = Aws::S3::Presigner.new
url = signer.presigned_url(:put_object, bucket: "bucket", key: "path")
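As for storing them: the usual approach is to keep only the object key in the database and generate a fresh presigned URL on each request, since the URLs expire. A minimal sketch (the helper name, s3_client, and the s3_key column are hypothetical):

# store photo.s3_key in the DB, then generate a URL per request
def presigned_url_for(s3_key)
  obj = Aws::S3::Resource.new(client: s3_client).bucket(BUCKET_NAME).object(s3_key)
  obj.presigned_url(:get, expires_in: 3600)
end

# in a view: <%= image_tag presigned_url_for(photo.s3_key) %>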

Amazon S3: set permissions using aws-sdk-ruby

I'm getting Access Denied for a new images path in S3. This is in the same bucket that I have worked with before, and for other images everything is fine. Could it be a permissions issue? If so, how can I get (and set) permissions using aws-sdk-ruby?
The documentation has poor examples, and I don't have access to the Amazon dashboard.
Can I do these manipulations through the gem anyway?
From their docs (the aws-sdk doc); you may also want to reference the bucket doc.
S3 supports a number of canned ACLs for buckets and objects. These include:
:private
:public_read
:public_read_write
:authenticated_read
:bucket_owner_read (object-only)
:bucket_owner_full_control (object-only)
:log_delivery_write (bucket-only)
Here is an example of providing a canned ACL to a bucket:
s3.buckets['bucket-name'].acl = :public_read
The syntax has changed in gem 'aws-sdk', '~> 2'. It now uses round brackets instead of square brackets, and singular method names:
s3.bucket(object)
instead of
s3.buckets[object]
Full example:
s3 = Aws::S3::Resource.new
source = s3.bucket('bucket-name').object('some/path/filename.txt')
destination = s3.bucket('same-or-other-bucket').object('destination/filename2.txt')
source.copy_to(destination, acl: 'public-read')
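To read or change the ACL of an existing object without copying it, the client API also exposes get_object_acl and put_object_acl. A minimal sketch (the bucket and key names are hypothetical):

s3 = Aws::S3::Client.new(region: 'us-west-2')

# inspect the current grants on an object
acl = s3.get_object_acl(bucket: 'bucket-name', key: 'some/path/filename.txt')
acl.grants.each { |grant| puts "#{grant.grantee.type}: #{grant.permission}" }

# switch the object to a canned ACL
s3.put_object_acl(bucket: 'bucket-name', key: 'some/path/filename.txt', acl: 'public-read')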
