Uploading a file to AWS S3 with ACL set to public_read - ruby-on-rails

In my Rails app I save customer RMA shipping labels to an S3 bucket on creation. I just updated to V2 of the aws-sdk gem, and now my code for setting the ACL doesn't work.
Code that worked in V1.X:
# Saves label to S3 bucket
s3 = AWS::S3.new
obj = s3.buckets[ENV['S3_BUCKET_NAME']].objects["#{shippinglabel_filename}"]
obj.write(open(label.label('pdf').postage_label.label_pdf_url, 'rb'), :acl => :public_read)
.write seems to have been deprecated, so I'm using .put now. Everything is working, except when I try to set the ACL.
New code for V2.0:
# Saves label to S3 bucket
s3 = Aws::S3::Resource.new
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object("#{shippinglabel_filename}")
obj.put(Base64.decode64(label_base64), { :acl => :public_read })
I get an Aws::S3::Errors::InvalidArgument error, pointed at the ACL.

This code works for me:
photo_obj = bucket.object object_name
photo_obj.upload_file path, {acl: 'public-read'}
So you need to use the string 'public-read' for the ACL. I found this by looking at an example in object.rb.
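Applied to the original put call from the question, only the ACL value needs to change. A minimal sketch reusing the question's variable names (note that in v2, put also takes its body as the body: keyword argument):
s3 = Aws::S3::Resource.new
obj = s3.bucket(ENV['S3_BUCKET_NAME']).object(shippinglabel_filename)
# v2 expects the string 'public-read', not the symbol :public_read
obj.put(body: Base64.decode64(label_base64), acl: 'public-read')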

Related

AWS::S3::Errors::InvalidAccessKeyId with valid credentials

I'm getting the following error when trying to upload a file to an S3 bucket:
AWS::S3::Errors::InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
The file exists, the bucket exists, the bucket allows uploads, and the credentials are correct; using CyberDuck with the same credentials I can connect and upload files to that bucket just fine. Most answers around here point to the credentials being overridden by environment variables, but that is not the case here: I've tried passing them directly as strings and printing them out just to make sure, and they are the right credentials.
v1
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret'
)
s3 = AWS::S3.new
bucket = s3.buckets['bucket-name']
obj = bucket.objects['filename']
obj.write(file: 'path-to-file', acl: 'private')
This is using the v1 version of the gem (aws-sdk-v1), but I've also tried v3 and I get the same error.
v3
Aws.config.update({
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key_id', 'secret')
})
s3 = Aws::S3::Resource.new(region: 'eu-west-1')
bucket = s3.bucket('bucket-name')
obj = bucket.object('filename')
ok = obj.upload_file('path-to-file')
Note: the error is thrown on the obj.write line.
Note 2: This is a rake task from a Ruby on Rails 4 app.
Finally figured it out: the problem was that we are using a custom endpoint, and the credentials were not being found; apparently that works differently with custom endpoints.
To specify the custom endpoint you'll need to use a config option that, for some reason, is not documented (or at least I didn't find it anywhere). I actually had to go through Paperclip's code to see how those guys were handling this.
Anyway here's how the config for v1 looks like with the added config for the endpoint:
AWS.config(
  :access_key_id => 'key',
  :secret_access_key => 'secret',
  :s3_endpoint => 'custom.endpoint.com'
)
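For the v2/v3 SDK, the equivalent option is endpoint: on the client or resource. A sketch with placeholder values (force_path_style is an assumption on my part, but it is commonly needed for non-AWS endpoints):
s3 = Aws::S3::Resource.new(
  region: 'eu-west-1',
  credentials: Aws::Credentials.new('key', 'secret'),
  endpoint: 'https://custom.endpoint.com', # must include the scheme in v2/v3
  force_path_style: true                   # commonly required for custom, non-AWS endpoints
)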
Hopefully that will save somebody some time.

How to verify AWS S3 put response in Rails

I am using the aws-sdk-s3 gem in my Rails project.
Create an S3 resource
s3 = Aws::S3::Resource.new(access_key_id: #####,
                           secret_access_key: #####,
                           region: 'us-east-1')
Create an S3 object
path = 'sample'
key = 'test.csv'
obj = s3.bucket(bucket_name).object("#{path}" + key)
Store CSV in S3
obj.put(body: csv_response, content_type: 'text/csv')
How can I verify that the put method stored the CSV in S3 without any issues?
Is there a status code available from the put method to verify the upload?
Two ways to go about it:
Store the result. It should be a PutObjectOutput type object; you can check out the official documentation of the put method, as in the sketch below.
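For example, a minimal sketch using the question's variables; keep in mind that put raises an Aws::S3::Errors subclass on failure rather than returning an error code:
resp = obj.put(body: csv_response, content_type: 'text/csv')
# Reaching this line means S3 accepted the object; resp is a PutObjectOutput
puts resp.etag # the ETag S3 computed for the stored object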
The second way is to make an exists? call right after your put request completes. Something like this:
s3 = Aws::S3::Resource.new(region: 'ap-southeast-1') # change to the region you use
obj = s3.bucket('bucket-name').object("path/to/object/in/bucket")
if obj.exists?
  # Object was uploaded successfully!
else
  # No it wasn't!
end
Hope that helps!
One way I've seen other people do it is to calculate an MD5 hash of the original file before upload and then compare it with the etag value in the response from obj.put.
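A sketch of that check, assuming a single-part, non-KMS-encrypted upload (for multipart or KMS-encrypted uploads the ETag is not a plain MD5, so this comparison does not apply):
require 'digest'

data = File.read('path-to-file')
resp = obj.put(body: data)
# S3 returns the ETag wrapped in double quotes, so strip them before comparing
if resp.etag.delete('"') == Digest::MD5.hexdigest(data)
  # Upload verified
end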

How to use AWS S3, copy file from one bucket to another and get the url back to store in carrierwave

I am using the Ruby on Rails AWS SDK to copy files from one bucket to another and store the URL in CarrierWave. The app uses CarrierWave, and I need to store the newly copied URL in the database field that CarrierWave uses. The problem is that the URL is private, so I tried to just generate the URL and store it in remote_file_url, but CarrierWave can't access the file. The files are very large, so I can't use CarrierWave itself to upload them through the SDK; I tried that and had no success.
obj.public_url did not work for me: it worked when I copied the file, but not when I moved it from one bucket to another.
Here is what I have:
picture_name = "my_picture.jpg"
@picture = Picture.find(id)
upload_dir = "uploads/picture/file/#{@picture.id}/my_picture.jpg"
s3 = Fog::Storage.new(provider: 'AWS',
                      :aws_access_key_id => access_key,
                      :aws_secret_access_key => secret_access_key,
                      :region => region)
obj = s3.copy_object('my-temp-bucket',
                     picture_name,
                     "my-target-bucket",
                     upload_dir,
                     acl: 'public-read')
@picture.remote_image_url = obj.url_for(:read, :expires => 10*60)
@picture.save
I also tried with no luck.
@picture.remote_image_url = obj.public_url
@picture.save
I get an error undefined method `bucket' for #
Thank you for your help!!!
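One possible approach with Fog, sketched with the question's bucket and variable names (untested): copy_object returns a low-level response rather than a file object, so the copied file has to be looked up again before asking for a URL:
file = s3.directories.get('my-target-bucket').files.get(upload_dir)
@picture.remote_image_url = file.url(Time.now + 10*60) # presigned URL, valid for 10 minutes
# or, since the copy used acl: 'public-read':
# @picture.remote_image_url = file.public_url
@picture.save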

Rename Object in S3 using Ruby

I want to rename an item in S3 using the Ruby SDK. How do I do this?
I have tried:
require 'aws-sdk'
s3 = AWS.config(
  :region => 'region',
  :access_key_id => 'key',
  :secret_access_key => 'key'
)
b = AWS::S3::Bucket.new(client: s3, name: 'taxalli')
b.objects.each do |obj|
  obj.rename_to('imports/files/' + line.split(' ').last.split('/').last)
end
But I don't see anything in the new SDK for moves or renames.
In AWS SDK version 2 there is now a method called "move_to" (http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#move_to-instance_method) which you can use in this case. Technically it will still copy & delete the file on S3, but you don't need to do the copy & delete manually, and most importantly it will not delete the file if the copy action fails for some reason.
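A minimal sketch of move_to, using the question's bucket name and a placeholder key:
require 'aws-sdk'

s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('taxalli').object('old_key.pdf')
# Copies to the target, then deletes the original only if the copy succeeded
obj.move_to(bucket: 'taxalli', key: 'imports/files/old_key.pdf')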
There is no such thing as renaming in the Amazon S3 SDK. Basically what you have to do is copy the object and then delete the old one.
require 'aws-sdk'
require 'open-uri'

creds = Aws::SharedCredentials.new(profile_name: 'my_profile')
s3 = Aws::S3::Client.new(region: 'us-east-1',
                         credentials: creds)
s3.copy_object(bucket: "my_bucket",
               copy_source: URI::encode("my_bucket/MyBookName.pdf"),
               key: "my_new_book_name.pdf")
s3.delete_object(bucket: "my_bucket",
                 key: "MyBookName.pdf")
Have you seen "Rails Paperclip S3 rename thousands of files" or https://gist.github.com/ericskiff/769191 ?

how to assign paperclip to file on aws using aws sdk

I have been able to have third party clients upload files directly to AWS s3 and then process those files with paperclip with the following line in the model:
my_object.file_attachment = URI.parse(URI.escape("my_bucket.s3.amazonaws.com/whatever.ext"))
That line downloads the file, processes it, and then saves it appropriately. The problem is that, in order for that line to work, I have to grant anonymous read privileges on the upload location. So my question is: how do I avoid that? My thought is to use the aws-sdk to download the file, so I have been trying stuff like:
file = Tempfile.new('temp', :encoding => 'ascii-8bit')
bucket.objects[aws_key].read do |chunk|
  file.write chunk
end
my_object.file_attachment = file
and variations on that theme, but nothing is working so far. Any insights would be most helpful.
Solution I am not very happy with
You can generate a temporary privileged URL using the AWS SDK:
s3 = AWS::S3.new
bucket = s3.buckets['bucket_name']
my_object.file_attachment = bucket.objects['relative/path/of/uploaded/file.ext'].url_for(:read)
As @laertiades says in his amended question, one solution is to create a temporary, pre-signed URL using the AWS SDK.
AWS SDK version 1
In AWS SDK version 1, that looks like this:
s3 = AWS::S3.new
bucket = s3.buckets['bucket_name']
my_object.file_attachment = bucket.objects['relative/path/of/uploaded/file.ext'].url_for(:read)
AWS documentation: http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html#url_for-instance_method
AWS SDK version 2
In AWS SDK version 2, it looks like this with the optional expires_in parameter (credit to this answer on another question):
presigner = Aws::S3::Presigner.new
my_object.file_attachment = presigner.presigned_url(
  :get_object, # get_object method means read-only
  bucket: 'bucket-name',
  key: "relative/path/of/uploaded/file.ext",
  expires_in: 10.minutes.to_i # time should be in seconds
).to_s
AWS documentation: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Presigner.html
