I used to write this to generate a presigned URL in aws-sdk v1:
AWS.config(S3Config::S3_CONFIG)
bucket = AWS.s3.buckets[S3Config::S3_CONFIG[:bucket]]
presigned_url = bucket.presigned_post(
  key: "attachments/#{SecureRandom.uuid}/${filename}",
  success_action_status: 201,
  acl: 'public-read'
)
That sent a working OPTIONS/POST request like this:
https://gist.github.com/gotoAndBliss/cdd8818b8adce58d1b625f68e2633199
The biggest difference is that the Status Code is 201 Created
I then updated to v2 and rewrote it like this:
presigned_url = Aws::S3::PresignedPost.new(aws_creds, aws_region, S3Config::BUCKET, {
  key: "attachments\/#{SecureRandom.uuid}\/\${filename}",
  metadata: { "original-filename" => "${filename}" },
  acl: 'public-read',
  success_action_status: ['201']
})
I'm pretty sure that was written correctly, but it generates this request:
https://gist.github.com/gotoAndBliss/43a4a88adc5c2be0b70b66d551a72a84
The biggest difference is the Status Code: 204 No Content
I went through this line for line and it seems like everything else is identical. Would anyone know why these are failing? Or what sets them apart?
Below is my process for doing presigned uploads. One thing I'm noticing right off the bat is that you are escaping the / in your key. I don't, and it works fine. I use ENV variables for the secret stuff.
config/initializers/aws.rb:
Aws.config.update(
  region: 'us-west-2',
  credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'],
                                    ENV['AWS_SECRET_ACCESS_KEY'])
)
S3_BUCKET = Aws::S3::Resource.new.bucket(ENV['S3_BUCKET_NAME'])
In my controller I generate the url like this:
@s3_direct_post = S3_BUCKET.presigned_post(
  key: "my_bucket_folder/#{SecureRandom.uuid}/${filename}",
  success_action_status: '201',
  acl: 'public-read'
)
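If it helps, here's a rough sketch of exercising that presigned post from plain Ruby; the local path /tmp/test.txt and the multipart plumbing are just my illustration, not part of the app:
require 'net/http'

uri = URI.parse(@s3_direct_post.url)
# Every presigned field must be sent as a form field, and the file part must come last.
form_data = @s3_direct_post.fields.to_a
form_data << ['file', File.open('/tmp/test.txt')]
request = Net::HTTP::Post.new(uri.request_uri)
request.set_form(form_data, 'multipart/form-data')
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
puts response.code #=> "201", because success_action_status was set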
Related
I have a Rails app with embedded images. What I want is to upload these images to S3 and serve them from there instead of from the original source. Do I have to download the image to my server before uploading it to S3?
Short answer: If you're scraping someone's content, then...yes, you need to pull the file down before uploading it to S3.
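For that first case, here's a minimal sketch of pull-down-then-upload; the source URL, region, and bucket name are placeholders:
require 'aws-sdk-s3'
require 'net/http'

source = URI.parse('https://example.com/images/photo.jpg') # hypothetical original source
image_data = Net::HTTP.get(source) # pull the file down to your server first

s3 = Aws::S3::Resource.new(region: 'us-west-2')
obj = s3.bucket('my-bucket').object("images/#{File.basename(source.path)}")
obj.put(body: image_data, content_type: 'image/jpeg') # then push it up to S3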
Long answer: If the other site (the original source) is working with you, you can give them a Presigned URL that they can use to upload to your S3 bucket.
From Amazon's docs: https://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjectPreSignedURLRubySDK.html
# Uploading an object using a presigned URL for SDK for Ruby - Version 3.
require 'aws-sdk-s3'
require 'net/http'
s3 = Aws::S3::Resource.new(region:'us-west-2')
obj = s3.bucket('BucketName').object('KeyName')
# Replace BucketName with the name of your bucket.
# Replace KeyName with the name of the object you are creating or replacing.
url = URI.parse(obj.presigned_url(:put))
body = "Hello World!"
# This is the contents of your object. In this case, it's a simple string.
Net::HTTP.start(url.host, url.port, use_ssl: true) do |http|
http.send_request("PUT", url.request_uri, body, {
# This is required, or Net::HTTP will add a default unsigned content-type.
"content-type" => "",
})
end
puts obj.get.body.read
# This will print out the contents of your object to the terminal window.
I have been trying to implement FineUploader in Rails and I am running into the following error:
Invalid according to Policy: Extra input fields: success_action_status
I am using the example from the FineUploader docs, and my signature and policy signing are coming through properly. It looks like FineUploader is passing "success_action_status" in the POST to S3, and that seems to be what is causing the issue.
Does anyone know if I need to add something additional to my bucket policy on S3 or do I need to change a parameter on FineUploader?
Here is the implementation that I am using for the FineUploader JS control:
var uploader = new qq.s3.FineUploader({
element: document.getElementById('fine-uploader'),
request: {
endpoint: 'https://MyBUCKET.s3.amazonaws.com/',
accessKey: 'MY_ACCESS_KEY'
},
signature: {
endpoint: 'home/generatesignature'
},
uploadSuccess: {
endpoint: '/s3/success'
},
iframeSupport: {
localBlankPagePath: '/success.html'
}
});
Update
After going around trying to get Rails to work with FineUploader, I was finally able to get it working. For anyone who runs into issues implementing FineUploader with Rails, the key is that the S3 policy is sent in the POST body to the server, and that is what needs to be encoded and signed before returning. Here is the action that makes everything work server-side in Rails:
def generatesignature
  # FineUploader sends the policy document as the raw POST body;
  # Base64-encode it and sign that exact string with your S3 secret key.
  policy = Base64.encode64(request.raw_post).gsub("\n", "")
  s3_signature = Base64.encode64(
    OpenSSL::HMAC.digest(
      OpenSSL::Digest.new('sha1'),
      'YOUR_SECRET_KEY', policy)
  ).gsub("\n", "")
  params[:signature] = s3_signature
  params[:policy] = policy
  render :json => params, :status => 200 and return
end
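For context on why this fixes the "Invalid according to Policy: Extra input fields" error: S3 rejects any POST field that is not declared in the signed policy, and the policy document FineUploader sends in the request body already lists success_action_status among its conditions. Signing that document as-is, instead of building your own policy, is what makes the error go away. As a Ruby hash, the raw_post looks roughly like this (values are illustrative, not from my app):
policy_document = {
  "expiration" => "2015-06-01T12:00:00.000Z",
  "conditions" => [
    { "acl" => "private" },
    { "bucket" => "MyBUCKET" },
    { "Content-Type" => "image/jpeg" },
    { "success_action_status" => "200" }, # declared here, so S3 does not treat it as "extra"
    { "key" => "some-uuid/photo.jpg" },
    { "x-amz-meta-qqfilename" => "photo.jpg" }
  ]
}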
I am using Ruby on Rails and the AWS gem.
I can get pre-signed URL for upload and download.
But when I generate the URL there is no file yet, so setting the ACL to 'public-read' on the download URL doesn't work.
The use case is this: (1) the server provides the user a path to upload content to my bucket, which is not readable without credentials; (2) that content needs to be public later: readable by anyone.
To clarify:
I am not uploading the file; I am providing a URL for my users to upload to. At that time, I also want to give the user a URL that is readable by the public. It seems like it would be easier if I uploaded the file myself. Also, the read URL needs to never expire.
When you generate a pre-signed URL for a PUT object request, you can specify the key and the ACL the uploader must use. If I wanted the user to upload an object to my bucket with the key "files/hello.txt" and the file should be publicly readable, I can do the following:
s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('files/hello.txt')
put_url = obj.presigned_url(:put, acl: 'public-read', expires_in: 3600 * 24)
#=> "https://bucket-name.s3.amazonaws.com/files/hello.txt?X-Amz-..."
obj.public_url
#=> "https://bucket-name.s3.amazonaws.com/files/hello.txt"
I can give the put_url to someone else. This URL will allow them to PUT an object into my bucket at that key. It has the following conditions:
The PUT request must be made within the given expiration. In the example above I specified 24 hours. The :expires_in option may not exceed 1 week.
The PUT request must specify the HTTP header of 'x-amz-acl' with the value of 'public-read'.
Using the put_url, I can upload an object using Ruby's Net::HTTP:
require 'net/http'
uri = URI.parse(put_url)
request = Net::HTTP::Put.new(uri.request_uri, 'x-amz-acl' => 'public-read')
request.body = 'Hello World!'
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
resp = http.request(request)
Now that the object has been uploaded by someone else, I can make a vanilla GET request to the #public_url. This could be done by a browser, curl, wget, etc.
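For example, a credential-less fetch from Ruby (a browser or curl would behave the same):
require 'net/http'
puts Net::HTTP.get(URI.parse(obj.public_url)) #=> "Hello World!"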
You have two options:
Set the ACL on the object to 'public-read' when you PUT the object. This allows you to use the public URL without a signature to GET the object.
Let the ACL on the object default to private and provide pre-signed GET URLs for users. These expire, so you have to generate new URLs as needed. A pre-signed URL allows someone to send a GET request to the object without credentials of their own.
Upload a public object and generate a public URL:
require 'aws-sdk'
s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/file', acl: 'public-read')
obj.public_url
#=> "https://bucket-name.s3.amazonaws.com/key"
Upload a private object and generate a GET URL that is good for 1 hour:
s3 = Aws::S3::Resource.new
obj = s3.bucket('bucket-name').object('key')
obj.upload_file('/path/to/file')
obj.presigned_url(:get, expires_in: 3600)
#=> "https://bucket-name.s3.amazonaws.com/key?X-Amz-Algorithm=AWS4-HMAC-SHA256&..."
I am very new to Amazon web services. I am able to generate presigned url for Amazon S3 but not for Amazon SQS.
Thanks in advance for your time and help.
Amazon SQS does not support the creation of a presigned URL. You can check out this forum post for more information.
You can generate presigned URLs for SQS using the REST API. Here's an example of a presigned URL being created for sending a message to a queue in Ruby using the AWS SDK:
require "httparty"
require("aws-sdk")
message_body = { "hello": "world" }
msg_body_encoded = CGI.escape(message_body.to_json)
signer = Aws::Sigv4::Signer.new(
service: "sqs",
region: "us-east-1",
access_key_id: "ACCESS KEY ID",
secret_access_key: "SECRET ACCESS KEY"
)
presigned_url = signer.presign_url(
http_method: "get",
url: "https://sqs.us-east-1.amazonaws.com/123456789/SomeQueueNmae.fifo/?Action=SendMessage&MessageBody=#{msg_body_encoded}&MessageDeduplicationId=#{dedup_id}&MessageGroupId=Default",
expires_in: 1000.day.to_i
)
response = HTTParty.get(presigned_url, headers: { "Accept": "application/json" })
puts presigned_url
puts response
# {"SendMessageResponse":{"ResponseMetadata":{"RequestId":"12345678910-100c-510b-96af-12345678910"},"SendMessageResult":{"MD5OfMessageAttributes":null,"MD5OfMessageBody":"fbc24bcc7a1794758fc1327fcfebdaf6","MD5OfMessageSystemAttributes":null,"MessageId":"12345678910-db6c-4e17-b6ac-12345678910","SequenceNumber":"12345678910"}}}
I've written an example for ReceiveMessage and SendMessage via Ruby in this gist.
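Reusing the signer from the snippet above, ReceiveMessage follows the same pattern; a minimal sketch (the queue URL is a placeholder):
receive_url = signer.presign_url(
  http_method: "GET",
  url: "https://sqs.us-east-1.amazonaws.com/123456789/SomeQueueName.fifo/?Action=ReceiveMessage&MaxNumberOfMessages=1",
  expires_in: 604_800 # again, capped at 7 days
)
puts HTTParty.get(receive_url, headers: { "Accept" => "application/json" })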
I have a Rails application and I would love to download part of a file from Amazon S3 with the following code:
url = URI.parse('https://topdisplay.s3-eu-west-1.amazonaws.com/uploads/song/url/15/09_-_No_Goodbyes.mp3?AWSAccessKeyId=dfsfsdf#fdfsd&Signature=fsdfdfdgfvvsersf') # turn the string into a URI
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = true # S3 uses SSL, doesn't it?
req = Net::HTTP::Get.new(url.path) # init a request with the url
req.range = (0..4096) # limit the load to only 4096 bytes
res = http.request(req) # load the mp3 file
Mp3Info.open( StringIO.open(res.body) ) do |m| #do the parsing
puts m
end
The URL is correct; I can download the file through a browser. But I get a 403 error from Amazon at the http.request call:
res = http.request(req)
=> #<Net::HTTPForbidden 403 Forbidden readbody=true>
How can I download that file with Rails? =)
By the way, I finally came up with another solution. I needed that code to check the track's length after uploading it to the website, so the flow looked like this:
upload track to S3 -> download part of it -> check length
But later I noticed that CarrierWave automatically uploads everything to the tmp folder first, so the uploading process actually looks like this:
upload to tmp -> upload from website to amazon s3 -> save
And if we add a :before_save callback, we can open the track before it's uploaded to S3.
So the code should look like this:
before_save :set_duration
def set_duration
  Mp3Info.open('public' + url.to_s) do |m| # do the parsing
    self.duration = m.length.to_i
    self.name = m.tag.title if self.name == ""
  end
end
In that case I simplified the process a lot :)
Have a sexy day!
Right now you are only making a request to the path; I think you need to include the query portion as well:
full_path = (url.query.blank?) ? url.path : "#{url.path}?#{url.query}"
req = Net::HTTP::Get.new(full_path)
see also - http://house9.blogspot.com/2010/01/ruby-http-get-with-nethttp.html
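Equivalently, URI#request_uri already returns the path plus the query string in one call:
req = Net::HTTP::Get.new(url.request_uri) # path + query together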