I am developing an iOS app that saves pictures to an Amazon S3 server. Unfortunately, the owner of the server prefers not to give me his secret key. Instead, he generates and provides me with a signature that he says I can use to upload a file.
The problem is that I cannot find a way to do this, especially with the Amazon S3 sample "S3Uploader".
Do you have any idea how to accomplish this?
Thanks in advance
The secret key is only needed to calculate the signature, so if you already have a signature you don't need it. You do, however, need the access key ID (so that Amazon knows which secret key to use to validate the signature).
I had a quick look at the iOS SDK docs and it doesn't look like they provide a way to short-circuit the signature calculation process. Uploading a file is easy though; you just make a PUT request:
PUT /ObjectName HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: date
Authorization: AWS AccessKeyId:signatureValue
Content-Length: 1234
There are details of extra headers you can use in the documentation.
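If it helps, here is a minimal sketch of issuing that PUT from iOS with URLSession. It assumes the server owner hands you your access key ID and the exact Date string he used when signing; all literal values below are placeholders:

import Foundation

// Placeholder values supplied by the server owner. The Date header must be
// the exact string that was signed, or S3 will reject the request.
let accessKeyId = "AKIAEXAMPLE"
let signature = "SIGNATURE_FROM_SERVER_OWNER"
let signedDate = "Tue, 27 Mar 2007 21:15:45 +0000"
let localFile = URL(fileURLWithPath: "/path/to/picture.jpg")

var request = URLRequest(url: URL(string: "https://BucketName.s3.amazonaws.com/ObjectName")!)
request.httpMethod = "PUT"
request.setValue(signedDate, forHTTPHeaderField: "Date")
request.setValue("AWS \(accessKeyId):\(signature)", forHTTPHeaderField: "Authorization")

URLSession.shared.uploadTask(with: request, fromFile: localFile) { _, response, error in
    // A 200 OK response means S3 accepted the object.
    print(response ?? error ?? "no response")
}.resume()

Keep in mind that if the signature also covered Content-Type or Content-MD5 values, you must send exactly those values in the corresponding headers.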
Another workflow would be for the remote service, instead of providing you with signature values, to use the Amazon STS API to return temporary credentials authorised only to upload files to the specified bucket. If you go this route then you would be able to just use the SDK provided by Amazon.
The recommended way to get a publicly readable reference to a Google Storage file seems to be to use Signed URLs.
I need to retrieve a storage reference based on the URL, so that when my database record is deleted I can delete its files from Storage as well.
The signed URL for a file stored in path/file.jpeg seems to follow the pattern:
https://storage.googleapis.com/bucket.name/path%2Ffile.jpeg?foobar
So I am currently using a regex to take the text between bucket.name and the ? character, then replacing %2F with /. I would like to know:
Is this reliable?
Is there any API in official libraries that does this for me? Could not find any.
Is there any better approach? Like storing the storage path in the database record, along with the signed URL (seems overkill to me).
The recommended way to get a publicly readable reference to a Cloud Storage object is simply to allow public access to it; by doing this you will get a URL of the form storage.googleapis.com/[your-bucket]/[path-to-file]/[file].
-Is this reliable?
Signed URLs are meant to be used when you need access (read, write or delete) for a limited time only, so they may not be the best approach for your application's needs: you are going through the computational work of signing the URL, only to use a regex to pull out the path and throw away all the text after the "?".
-Is there any API in official libraries that does this for me? Could not find any.
If you mean an official helper for extracting the object path from a signed URL, then no, there isn't one.
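That said, you don't strictly need a regex: standard URL parsing already percent-decodes the path for you. A rough Swift sketch, with a made-up signed URL following the pattern from the question:

import Foundation

// Made-up signed URL of the shape described in the question.
let signed = "https://storage.googleapis.com/bucket.name/path%2Ffile.jpeg?GoogleAccessId=abc&Expires=123&Signature=xyz"

if let components = URLComponents(string: signed) {
    // components.path is already percent-decoded: "/bucket.name/path/file.jpeg"
    let path = String(components.path.dropFirst()) // drop the leading "/"
    let parts = path.split(separator: "/", maxSplits: 1)
    if parts.count == 2 {
        let bucket = String(parts[0])   // "bucket.name"
        let object = String(parts[1])   // "path/file.jpeg"
        print(bucket, object)
    }
}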
-Is there a better approach?
Using public access permission could be another option. If you are using the signed URL to get delete permissions but are not really using the limited-time functionality, then the better approach is to use object public access for reads, create a service account with enough permissions (to delete Cloud Storage objects), and use the Storage client library to delete the object from the bucket when the DB record is deleted.
I am writing an app on iOS that uses the Amazon API to display a list of products in a category.
The problem I'm having is in signing the API request. I am using the Product Advertising API in India, and am using the Scratchpad to test out the API call.
For the unsigned URL generated by the Amazon Scratchpad I have:
http://webservices.amazon.in/onca/xml?Service=AWSECommerceService&Operation=BrowseNodeLookup&SubscriptionId=IAMHIDINGTHISINFO&AssociateTag=HIDINGTHIS-XX&BrowseNodeId=1350388031&ResponseGroup=BrowseNodeInfo
For the signed URL I have:
http://webservices.amazon.in/onca/xml?AWSAccessKeyId= IAMHIDINGTHISINFO&AssociateTag=HIDINGTHIS-XX&BrowseNodeId=1350388031&Operation=BrowseNodeLookup&ResponseGroup=BrowseNodeInfo&Service=AWSECommerceService&Timestamp=2016-11-21T16%3A06%3A05.000Z&Signature=LETSSAYITGENERATEDTHIS
Following the steps in Amazon's documentation on signing the request, I get the final canonical form as:
GET
webservices.amazon.co.in
/onca/xml
AWSAccessKeyId=IAMHIDINGTHISINFO&AssociateTag=HIDINGTHIS-XX&BrowseNodeId=1350388031&Operation=BrowseNodeLookup&ResponseGroup=BrowseNodeInfo&Service=AWSECommerceService&Timestamp=2016-11-20T22%3A55%3A41.000Z
This follows their steps EXACTLY. I prepend GET\nwebservices.amazon.co.in\n/onca/xml\n to the byte-order-sorted rest of the query parameters and then use HMAC SHA-256 to obtain the signature. Despite this, the signature generated is incorrect. I know the hashing algorithm I use is correct, since when I hash the example given in their documentation it generates the exact hash (I am using AWSSignatureSignerUtility from their iOS SDK).
Can someone please tell me if I should not prepend GET\nwebservices.amazon.co.in\n/onca/xml\n or if it should be something else?
Just figured out the problem with the help of Signed Requests Helper.
Apparently, I was supposed to prepend
GET\nwebservices.amazon.in\n/onca/xml\n
and not
GET\nwebservices.amazon.co.in\n/onca/xml\n
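For anyone hitting the same wall, here is a rough sketch of the string-to-sign and HMAC step in Swift, using CryptoKit (iOS 13+; the secret key and query string below are placeholders, and on older systems you would use CommonCrypto instead):

import Foundation
import CryptoKit

// Placeholder secret key and a byte-order-sorted query string (abbreviated).
let secretKey = "YOUR_SECRET_KEY"
let sortedQuery = "AWSAccessKeyId=XXXX&AssociateTag=YYYY-XX&BrowseNodeId=1350388031&Operation=BrowseNodeLookup&ResponseGroup=BrowseNodeInfo&Service=AWSECommerceService&Timestamp=2016-11-21T16%3A06%3A05.000Z"

// The host line must match the endpoint you actually call:
// webservices.amazon.in here, NOT webservices.amazon.co.in.
let stringToSign = "GET\nwebservices.amazon.in\n/onca/xml\n" + sortedQuery

let key = SymmetricKey(data: Data(secretKey.utf8))
let mac = HMAC<SHA256>.authenticationCode(for: Data(stringToSign.utf8), using: key)
let signature = Data(mac).base64EncodedString()

// URL-encode the signature before appending it as &Signature=... to the request.
let encoded = signature.addingPercentEncoding(withAllowedCharacters: .alphanumerics) ?? signature
print(encoded)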
I am trying to stream a file from a remote storage service (not s3 :-)) to the client using Ruby on Rails 4.2.
My server needs to stay in the middle of things to authenticate the client request, but also to build up the request to the remote storage service, since all requests to that service need to be authenticated using a custom header param. This makes it impossible to do a simple redirect_to and let the client download the file directly (but do let me know if this IS in fact possible using Rails!). Also, I want to keep the URL of the file cloaked from the client.
Up until now I have been using a gem called ZipLine, but this also does not work, as it still buffers the remote file before sending it to the client. As I am using unicorn/nginx, this might also be due to a setting in either of those two that prevents proper streaming.
As per the Rails docs' instructions I have tried adding
listen 3000, tcp_nopush: false
to config/unicorn.rb but to no avail.
A solution might be to cache the remote file locally for a certain period and just serve that file. This would make some things easier but would also create new headaches, like keeping the remote and cached files in sync, setting the right triggers for cache expiration, etc.
So to sum up:
1) How do I accomplish the scenario above?
2) If this is not an intelligent/efficient way of doing things, should I just cache a remote copy?
3) What are your experiences/recommendations in given scenario?
I have come across various solutions scattered around the interweb, but none adds up to a complete solution.
Thanks!
I am assuming the third-party storage service has HTTP access. If you did consider using redirect_to, I assume the service also provides a means of per-download authorization, like a unique key in a header that expires and does not expose your secret API keys, or an HMAC-signed URL with an expiration time as a param.
Anyhow, most cloud storage services provide this kind of file access. I would highly recommend letting the service stream the file. Your app should simply authorize the user and redirect to the service. Rails allows you to add custom headers while redirecting; it is discussed in the Rails guides:
10.2.1 Setting Custom Headers
If you want to set custom headers for a response then response.headers is the place to do it. The headers attribute is a hash which maps header names to their values, and Rails will set some of them automatically. If you want to add or change a header, just assign it to response.headers.
So your action code would end up being something like this:
def download
  # do_auth_check
  response.headers["Your-API-Auth-Key"] = "SOME-RANDOM-STRING"
  redirect_to url # the (cloaked) URL of the file on the storage service
end
Don't use up unnecessary server resources by streaming all those downloads through your app. We are paying cloud services to do that, after all :)
The answer proposed in Embed API credentials in iOS code is not an option for me.
My app communicates with a back-end SOAP API over HTTPS. My API credentials are sent in every request.
I don't have control over the server implementation, so I'm not able to add an intermediary authentication server and migrate to a token-based implementation.
Because I have to embed my credentials in my app's binary (I understand that this is far from ideal, on principle), I am looking for best practices to make my credentials as secure as possible.
From what I've read, I've gathered:
Don't include credentials in an external file (such as a .plist)
Don't include credentials as simple NSString * const declarations. (Is using a char * safer?)
Don't do something obvious, like put my credentials in an Objective-C singleton called AuthenticationKeyManager
I also saw this article: http://applidium.com/en/news/securing_ios_apps_debuggers/
=> tldr: add release-mode code in the main.m to prevent the app from running if a debugger is attached
Note: I am able to implement SSL pinning.
Are there any other measures I can take to safeguard my access credentials?
This article describes how to create and use an encrypted plist: http://aptogo.co.uk/2010/07/protecting-resources/
But the AES key for it is stored in a static NSString *sharedKey;
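One mitigation for that particular weakness is to avoid keeping the key as a single plaintext literal (trivially recovered with the strings tool) and to reassemble it at runtime instead. A minimal Swift sketch; the byte arrays are made-up placeholders you would generate offline:

import Foundation

// Hypothetical obfuscated credential and XOR mask, generated offline
// so that the plaintext never appears in the binary.
let obfuscated: [UInt8] = [0x1B, 0x3E, 0x2A, 0x0D, 0x55, 0x62]
let mask: [UInt8] = [0x5E, 0x5B, 0x59, 0x6F, 0x3C, 0x0B]

func deobfuscate(_ bytes: [UInt8], _ mask: [UInt8]) -> String {
    var plain = [UInt8]()
    for (b, m) in zip(bytes, mask) {
        plain.append(b ^ m) // XOR each byte against the mask
    }
    return String(bytes: plain, encoding: .utf8) ?? ""
}

let apiKey = deobfuscate(obfuscated, mask)

// This only raises the bar: an attacker with a debugger can still recover
// the key at runtime, which is why SSL pinning and the anti-debugging
// check mentioned above are worth layering on top.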
When uploading files to Amazon S3 using the browser http upload feature, I know I can specify a success_action_redirect field/value that will tell my browser where to go when the upload is done.
I'm wondering: is it possible to ask Amazon to make a web hook style POST request to my web server whenever a file gets uploaded?
Basically, I want a way of being notified whenever a client uploads a new file, so that my server can process the upload. I'd like to do this without relying on the client to make the request to my server to tell me the file has been uploaded (never trust the client, right?).
They just recently announced AWS Lambda, which lets you run code in response to events, with S3 uploads being one of the supported events.
Amazon can publish a notification to SNS or SQS when an object has been created in your specified S3 bucket.
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
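For reference, here is a minimal notification configuration of the kind that page describes, publishing object-created events to an SNS topic; the topic ARN is a placeholder, and you would apply it with the s3api put-bucket-notification-configuration CLI command:

{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:123456789012:upload-notifications",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}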
There is no support from Amazon regarding this as yet, but we can get around it with other tools like s3cmd, which let us write cronjobs to notify us of any change in the keys on S3. So if a new key is created (detected via timestamp), we could have the cronjob send a request to our server endpoint that is listening for updates from S3, along with the associated metadata.
We could use GET or POST here, as the data would be very minimal; form data with a POST should probably do.