I'm getting Access Denied for a new images path in S3. This is in the same bucket I have been working with, and everything is fine for the other images. Could it be a permissions issue? If so, how can I get (and set) permissions using aws-sdk-ruby?
The documentation has poor examples, and I don't have access to the Amazon dashboard.
Can I do these manipulations through the gem anyway?
From their docs (aws-sdk doc; you may also want to reference the bucket doc):
S3 supports a number of canned ACLs for buckets and objects. These include:
:private
:public_read
:public_read_write
:authenticated_read
:bucket_owner_read (object-only)
:bucket_owner_full_control (object-only)
:log_delivery_write (bucket-only)
Here is an example of providing a canned ACL to a bucket:
s3.buckets['bucket-name'].acl = :public_read
The syntax changed in
gem 'aws-sdk', '~> 2'
It now uses round brackets instead of square brackets, and singular method names:
s3.bucket('bucket-name')
instead of
s3.buckets['bucket-name']
Full example:
s3 = Aws::S3::Resource.new
source = s3.bucket('bucket-name').object('some/path/filename.txt')
destination = s3.bucket('same-or-other-bucket').object('destination/filename2.txt')
source.copy_to(destination, acl: 'public-read')
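Since the question also asks how to get and set permissions through the gem, here is a minimal sketch against the v2 resource API; the bucket name and object key are placeholders:
s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('images/new/photo.png')

# Inspect the current grants on the object
obj.acl.grants.each do |grant|
  puts "#{grant.grantee.type}: #{grant.permission}"
end

# Apply a canned ACL to the object
obj.acl.put(acl: 'public-read')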
Related
I am using the aws-sdk-s3 gem in my Rails project.
Create an S3 resource
s3 = Aws::S3::Resource.new(access_key_id: #####,
                           secret_access_key: #####,
                           region: 'us-east-1')
Create an S3 object
path = 'sample'
key = 'test.csv'
obj = s3.bucket(bucket_name).object("#{path}#{key}")
Store CSV in S3
obj.put(body: csv_response, content_type: 'text/csv')
How to verify that put method stored the csv in S3 without any issues?
Is there any status code available for put method in S3 to verify?
Two ways to go about it:
Store the result. It should be a PutObjectOutput type object; you can check the official documentation of the put method.
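Note that the SDK raises an Aws::S3::Errors::ServiceError subclass when a request fails, so a put call that returns at all has succeeded. A minimal sketch, reusing obj and csv_response from the question above:
resp = obj.put(body: csv_response, content_type: 'text/csv')
resp.class # => Aws::S3::Types::PutObjectOutput
resp.etag  # => the entity tag S3 computed for the stored object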
The second way to go about it is to make an exists? call right after your put request completes. Something like this:
s3 = Aws::S3::Resource.new(region: 'ap-southeast-1') # change to the region you use
obj = s3.bucket('bucket-name').object("path/to/object/in/bucket")
if obj.exists?
  # Object was uploaded successfully!
else
  # No it wasn't!
end
Hope that helps!
One way I've seen others do it is to calculate an MD5 hash of the content before upload, and then match it against the etag value in the response from obj.put.
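A hedged sketch of that approach, assuming a single-part upload without SSE-KMS (for multipart or KMS-encrypted uploads the ETag is not the MD5 of the body):
require 'digest'

md5 = Digest::MD5.hexdigest(csv_response)
resp = obj.put(body: csv_response, content_type: 'text/csv')

# S3 wraps the ETag value in double quotes
verified = resp.etag.delete('"') == md5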
I am trying to generate a pre-signed url on my Rails server to send to the browser, so that the browser can upload to S3.
It seems like aws-sdk-s3 is the gem to use going forward, but unfortunately I haven't come across documentation for it that provides clarity. There seem to be a few different ways of doing this, and I would appreciate any guidance on the differences between the following methods:
Using Aws::S3::Presigner.new (https://github.com/aws/aws-sdk-ruby/blob/master/aws-sdk-core/lib/aws-sdk-core/s3/presigner.rb) but it doesn't seem to take in an object parameter or auth credentials.
Using Aws::S3::Resource.new, but it seems like aws-sdk-resources is not going to be maintained. (https://aws.amazon.com/blogs/developer/upgrading-from-version-2-to-version-3-of-the-aws-sdk-for-ruby-2/)
Using Aws::S3::Object.new and then calling the put method on that object.
Using AWS::SigV4 directly.
I am wondering how they differ, and the implications of choosing one over the other? Any recommendations are much appreciated, especially with aws-sdk-s3.
Thank you!
So, thanks to the tips by @strognjz above, here is what worked for me using 'aws-sdk-s3'.
require 'aws-sdk-s3'

# credentials below for the IAM user I am using
s3 = Aws::S3::Client.new(
  region: 'us-west-2', # or any other region
  access_key_id: AWS_ACCESS_KEY_ID,
  secret_access_key: AWS_SECRET_ACCESS_KEY
)

signer = Aws::S3::Presigner.new(client: s3)

url = signer.presigned_url(
  :put_object,
  bucket: S3_BUCKET_NAME,
  key: "${filename}-#{SecureRandom.uuid}"
)
This will work using the aws-sdk-s3 gem:

aws_client = Aws::S3::Client.new(
  region: 'us-west-2', # or any other region
  access_key_id: AWS_ACCESS_KEY_ID,
  secret_access_key: AWS_SECRET_ACCESS_KEY
)

s3 = Aws::S3::Resource.new(client: aws_client)
bucket = s3.bucket('bucket-name')
obj = bucket.object("${filename}-#{SecureRandom.uuid}")
url = obj.presigned_url(:put)
Additional HTTP verbs:
obj.presigned_url(:put)
obj.presigned_url(:head)
obj.presigned_url(:delete)
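For completeness, a hedged sketch of how a client could use the presigned :put URL (a browser would do the equivalent with fetch or XHR); the local file name is a placeholder:
require 'net/http'

uri = URI(url) # the presigned URL generated above
request = Net::HTTP::Put.new(uri)
request.body = File.read('local-file.txt')

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
response.code # => "200" when the upload succeeds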
I have been using the tutorial:
https://devcenter.heroku.com/articles/paperclip-s3
for adding images to S3 storage on AWS, but I now want to use IBM's Object Storage, which supports the S3 API (using gem 'aws-sdk').
Using below:
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('REGION'),
  }
}
where REGION = 'us-geo'
gives the error
Seahorse::Client::NetworkingError (Failed to open TCP connection to david-provest-movies.s3.us-geo.amazonaws.com:443 (getaddrinfo: Name or service not known)).
How would I change the 'david-provest-movies.s3.us-geo.amazonaws.com:443' to the desired 'david-provest-movies.s3-api.us-geo.objectstorage.softlayer.net' URL?
Link to the API:
https://developer.ibm.com/recipes/tutorials/cloud-object-storage-s3-api-intro/
Thank you :)
That is not a valid region for S3. The valid S3 regions are:
us-east-2
us-west-1
us-west-2
ca-central-1
ap-south-1
ap-northeast-2
ap-southeast-1
ap-southeast-2
ap-northeast-1
eu-central-1
eu-west-1
eu-west-2
sa-east-1
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
Here is the relevant note from the Bluemix documentation:
Note: Using third-party tools or SDKs may enforce setting a ‘LocationConstraint’ when creating a bucket. If required, this value must be set to ‘us-standard’. Unlike AWS S3, IBM COS uses this value to indicate the storage class of an object. This field has no relation to physical geography or region - those values are provided within the endpoint used to connect to the service. Currently, the only permitted values for ‘LocationConstraint’ are ‘us-standard’, ‘us-vault’, ‘us-cold’, ‘us-flex’, ‘us-south-standard’, ‘us-south-vault’, ‘us-south-cold’, and ‘us-south-flex’.
I haven't actually tried to use paperclip yet, but the issue here is that the IBM Cloud endpoint needs to be specified. I'll take a closer look at the paperclip docs, but it looks like something along the lines of specifying the bucket URL (where {bucket-name} is hard coded in the below snippet but could be constructed) or some other method to explicitly specify the host name or root URL. There's a chance that paperclip has the AWS endpoint hardcoded and the team at IBM will need to contribute a method for setting a custom endpoint (which is fully supported in the aws-sdk gem when creating clients).
config.paperclip_defaults = {
  storage: :s3,
  s3_credentials: {
    bucket: ENV.fetch('S3_BUCKET_NAME'),
    access_key_id: ENV.fetch('ACCESS_KEY_ID'),
    secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
    s3_region: ENV.fetch('REGION'),
    bucket_url: 'https://{bucket-name}.s3-api.us-geo.objectstorage.softlayer.net'
  }
}
Let me know if it seems like using an explicit bucket URL doesn't work, and I'll look into whether the endpoint is hardcoded in paperclip and what we can do to fix it.
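To make that last point concrete, here is a sketch of overriding the endpoint when constructing a client directly with the aws-sdk gem (clients do support this option); the 'us-standard' region value follows the Bluemix note quoted above, and whether paperclip lets you pass this option through is exactly the open question:
require 'aws-sdk'

client = Aws::S3::Client.new(
  region: 'us-standard', # IBM COS storage class, per the note above
  access_key_id: ENV.fetch('ACCESS_KEY_ID'),
  secret_access_key: ENV.fetch('SECRET_ACCESS_KEY'),
  endpoint: 'https://s3-api.us-geo.objectstorage.softlayer.net'
)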
In my Rails app I save customer RMA shipping labels to an S3 bucket on creation. I just updated to V2 of the aws-sdk gem, and now my code for setting the ACL doesn't work.
Code that worked in V1.X:
# Saves label to S3 bucket
s3 = AWS::S3.new
obj = s3.buckets[ENV['S3_BUCKET_NAME']].objects["#{shippinglabel_filename}"]
obj.write(open(label.label('pdf').postage_label.label_pdf_url, 'rb'), :acl => :public_read)
.write seems to have been deprecated, so I'm using .put now. Everything works except when I try to set the ACL.
New code for V2.0:
# Saves label to S3 bucket
s3 = Aws::S3::Resource.new
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object("#{shippinglabel_filename}")
obj.put(Base64.decode64(label_base64), { :acl => :public_read })
I get an Aws::S3::Errors::InvalidArgument error, pointed at the ACL.
This code works for me:
photo_obj = bucket.object(object_name)
photo_obj.upload_file(path, acl: 'public-read')
So you need to use the string 'public-read' for the ACL. I found this by seeing an example in object.rb.
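Applied to the original code, that would look something like this (a sketch assuming, as in the question, that label_base64 holds the Base64-encoded PDF):
s3 = Aws::S3::Resource.new
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object(shippinglabel_filename)
obj.put(body: Base64.decode64(label_base64), acl: 'public-read')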
I have been able to have third-party clients upload files directly to AWS S3 and then process those files with paperclip, using the following line in the model:
my_object.file_attachment = URI.parse(URI.escape("my_bucket.s3.amazonaws.com/whatever.ext"))
That line downloads the file, processes it, and then saves it appropriately. The problem is that for that line to work, I have to allow anonymous read on the upload location. So my question is: how do I avoid that? My thought is to use the aws-sdk to download the file, so I have been trying things like:
file = Tempfile.new('temp', :encoding => 'ascii-8bit')
bucket.objects[aws_key].read do |chunk|
  file.write chunk
end
my_object.file_attachment = file
and variations on that theme, but nothing is working so far. Any insights would be most helpful.
Solution I am not very happy with
You can generate a temporary privileged URL using the AWS SDK:
s3 = AWS::S3.new
bucket = s3.buckets['bucket_name']
my_object.file_attachment = bucket.objects['relative/path/of/uploaded/file.ext'].url_for(:read)
As @laertiades says in his amended question, one solution is to create a temporary, pre-signed URL using the AWS SDK.
AWS SDK version 1
In AWS SDK version 1, that looks like this:
s3 = AWS::S3.new
bucket = s3.buckets['bucket_name']
my_object.file_attachment = bucket.objects['relative/path/of/uploaded/file.ext'].url_for(:read)
AWS documentation: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html#url_for-instance_method
AWS SDK version 2
In AWS SDK version 2, it looks like this with the optional expires_in parameter (credit to this answer on another question):
presigner = Aws::S3::Presigner.new

my_object.file_attachment = presigner.presigned_url(
  :get_object, # the get_object method means read-only access
  bucket: 'bucket-name',
  key: "relative/path/of/uploaded/file.ext",
  expires_in: 10.minutes.to_i # time should be in seconds
).to_s
AWS documentation: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Presigner.html