AWS S3 Public Object vs Private Object? - ios

Back in S3, I have URLs to images in my bucket that I will be presenting in my application; however, they are set to private. When I try to click on a link, it reads "Access Denied". When I change the setting of the link to public, it goes through, but I've read that public access isn't the safest thing. So this is essentially a two-part question:
1) What is the difference between a public and a private link/object in a bucket?
2) And how can I make a private link/object in my bucket accessible to both myself and my users?

Private objects require authentication; public objects do not.
With regard to your comment "public access isn't the safest thing", you typically need to consider a couple of things when deciding whether or not to make an S3 object public:
[major] Is it OK for anyone to download it? If the content of the object is something that you should not be sharing with the world, for example a user's family photos, then the answer is "no".
[minor] Do you want to pay every time some unknown person downloads an object? As the bucket owner, you pay for data transfer out (unless you opt for 'requester pays', in which case the requester needs to authenticate).
There are at least two ways that you can make private S3 objects available to your users without them being accessible to the entire world:
use time-limited, pre-signed URLs for the objects (see this article; a sketch follows this list)
proxy the object downloads yourself so that all requests for objects go to your app server and can therefore be restricted to authenticated sessions.
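For the first option, here is a minimal sketch of generating a time-limited pre-signed URL with the Ruby aws-sdk-s3 gem (the bucket name, key, and region are placeholders, not values from the question):

require "aws-sdk-s3"

# Presign a GET for a private object; anyone holding the URL can download the
# object, but only until the URL expires (here: 10 minutes).
client = Aws::S3::Client.new(region: "us-east-1")
signer = Aws::S3::Presigner.new(client: client)
url = signer.presigned_url(
  :get_object,
  bucket: "my-bucket",        # placeholder bucket
  key: "images/photo1.jpg",   # placeholder object key
  expires_in: 600
)
puts url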

Related

Public access for Active Storage in Rails 6.1

I use Active Storage on my Rails site with AWS. After upgrading to 6.1, I'd like to configure public access per the guide so my images have permanent URLs.
I've determined that I need to keep the existing service as-is so previously uploaded images continue to work. I've created a new service and configured the app to use it.
Previous images continue to work, but new image uploads result in Aws::S3::Errors::AccessDenied. Note that the credentials used are exactly the same as in the previous, working, non-public service. The guide mentions that the bucket needs to have the proper permissions, but not exactly what needs to be set.
Looking in AWS, the section "Block public access (bucket settings)" is all set to "Off". In "Access control list (ACL)", "Bucket owner (your AWS account)" has "List, Write" for both "Objects" and "Bucket ACL". No other permissions are listed. I've tried changing "Everyone (public access)" to include "List" for "Objects" and "Read" for "Bucket ACL" - doesn't seem to solve the problem.
How do I get public URLs working with Active Storage?
The permission you need when switching from private access to public is PutObjectAcl. Adding this in the IAM Management Console makes it work.
In addition, rather than creating a new service, you can mark all images in the existing service as public-readable via the UI or via a script.
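As a rough illustration of the script option, something like the following (assuming the Ruby aws-sdk-s3 gem; the bucket name is a placeholder, and the credentials used need s3:PutObjectAcl, per the answer above):

require "aws-sdk-s3"

client = Aws::S3::Client.new(region: "us-east-1")
bucket = "my-active-storage-bucket"   # placeholder

# Walk every object in the bucket and mark it public-readable.
client.list_objects_v2(bucket: bucket).each do |page|
  page.contents.each do |object|
    client.put_object_acl(bucket: bucket, key: object.key, acl: "public-read")
  end
end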

Can a Slack bot re-share a file that was shared with it?

Context of what I'm trying to accomplish:
User shares a file with the bot
Other users interact with the bot via a dialog
The bot shares the original file to the other users
For example, we want to share a file to the bot that contains this week's cafeteria menu. Each time users would interact with the bot in a certain way, it would share the cafeteria menu with them so that they can consult it.
I've tried calling the files.share method, but bots can't perform this action (I get an invalid token type error).
As far as I can tell, there is no way to do this currently. I've tried link unfurling in the message body but that only works if the file itself was already shared to the user. If not, the link simply won't unfurl and clicking it will fail.
The bot can perform a files.upload call and re-upload the contents of the file to each user individually. This seems incredibly wasteful but appears to be the only way to work currently.
Is there something I'm missing?
The reason your bot cannot use files.share is that this is an undocumented API method and you need a legacy token to use it. No other token (user token, bot token) will work, because it requires the post scope, which only exists for legacy tokens.
Approach A: Legacy Token
So one approach would be to use a legacy token with your bot, which you can create here for your current workspace. That should work nicely if your Slack app is only used on your "own" Slack workspace where you can create and use a legacy token.
Approach B: File Mention
Another approach is to use the mention feature in messages to share a file. This works by sending the private link (url_private property) of an already shared file in a message to a new channel. This will automatically re-share the file in that channel. I believe this only works with files that have previously been shared in a public channel and can therefore be re-shared. Be aware though that the file mention feature is currently being reworked, so this behavior might change.
Example:
https://slack.com/api/chat.postMessage?token=TOKEN&channel=CHANNEL&as_user=true&text=URL_PRIVATE
For more details see the Slack tutorial Storing, retrieving, and modifying file uploads.
Approach C: External File / image file
If you host your file externally or create a public URL for a file uploaded to Slack, you can share it in every channel by just adding the URL to a message. Slack will automatically unfurl it and therefore share it with the user in any channel. This is different from Approach B, because it's not a file mention and requires a public URL. You get the public URL of an uploaded file by calling files.sharedPublicURL.
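A rough sketch of Approach C using plain HTTP calls (the token, file ID, and channel ID are placeholders; note that files.sharedPublicURL generally requires a user token rather than a bot token):

require "net/http"
require "json"
require "uri"

def slack_call(method, token, params)
  uri = URI("https://slack.com/api/#{method}")
  request = Net::HTTP::Post.new(uri)
  request["Authorization"] = "Bearer #{token}"
  request.set_form_data(params)
  response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(request) }
  JSON.parse(response.body)
end

token = ENV["SLACK_TOKEN"]

# Create a public URL for an already-uploaded file ...
result = slack_call("files.sharedPublicURL", token, "file" => "FILE_ID")
public_url = result.dig("file", "permalink_public")

# ... and post it; Slack should unfurl the link in the channel.
slack_call("chat.postMessage", token, "channel" => "CHANNEL_ID", "text" => public_url)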
If I'm not wrong, you can do it like this:
you share a file with your bot
you retrieve the shared file's ID, and thus its url_private property (cf https://api.slack.com/types/file#authentication)
you then download the file (see the sketch after this list)
you can then re-share it several times later (without re-uploading to each user)...
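A small sketch of steps 2-3 (per the authentication note at https://api.slack.com/types/file#authentication, url_private can be downloaded by sending the token in the Authorization header; the token env var and output filename are placeholders):

require "net/http"
require "uri"

url = URI("URL_PRIVATE")   # the file's url_private value from step 2
request = Net::HTTP::Get.new(url)
request["Authorization"] = "Bearer #{ENV['SLACK_BOT_TOKEN']}"

response = Net::HTTP.start(url.host, url.port, use_ssl: true) { |http| http.request(request) }
File.binwrite("menu.pdf", response.body)   # keep the bytes so they can be re-shared later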

Get storage path from Google Storage signed url

The recommended way to get a public readable reference to a Google Storage file seems to be to use Signed URLs.
I need to retrieve a storage reference based on the URL, so that when my database record is deleted I can delete its files from Storage as well.
The signed URL for a file stored in path/file.jpeg seems to follow the pattern:
https://storage.googleapis.com/bucket.name/path%2Ffile.jpeg?foobar
So I am currently using a regex to take the text between bucket.name and the ? character, then replacing %2F with /. I would like to know:
Is this reliable?
Is there any API in official libraries that does this for me? Could not find any.
Is there any better approach? Like storing the storage path in the database record, along with the signed url (seems overkill to me).
The recommended way to get a publicly readable reference to a Cloud Storage object is simply to allow public access to it; by doing this you will get a URL in the form storage.googleapis.com/[your-bucket]/[path-to-file]/[file].
-Is this reliable?
Signed URLs are meant to be used when access (read, write or delete) is only required for a limited time. Using a Signed URL for your current needs may therefore not be the best approach, since you are using a regex to extract the URL path while throwing away everything after the "?", i.e. the signature that took computational work to generate in the first place.
-Is there any API in official libraries that does this for me? Could not find any.
Not sure if you are referring to extracting the path from the signed URL; if that is the case, then the answer is no.
-Is there a better approach?
Using public access permissions could be another option. If you are using the signed URL to get delete permissions but aren't really using the limited-time functionality, then the best approach is to use public object access, create a service account with enough permissions (to delete Cloud Storage objects), and use the Storage client library to delete the object from the bucket when the DB record is deleted.
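A sketch along those lines with the Ruby google-cloud-storage gem (project, key file, and bucket name are placeholders; the object path is recovered here with URI/CGI rather than a hand-rolled regex):

require "google/cloud/storage"
require "cgi"
require "uri"

signed_url = "https://storage.googleapis.com/bucket.name/path%2Ffile.jpeg?Signature=..."  # placeholder
# URI#path keeps the percent-encoding, so strip the bucket prefix and unescape.
object_path = CGI.unescape(URI.parse(signed_url).path.sub("/bucket.name/", ""))
# => "path/file.jpeg"

storage = Google::Cloud::Storage.new(project_id: "my-project", credentials: "service-account.json")
bucket  = storage.bucket("bucket.name")
bucket.file(object_path)&.delete   # remove the object when the DB record is deleted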

Rails: Best way to allow users to upload images to either a Dropbox linked folder or "our" storage on Amazon S3

I am working on a project where the user joins a "stream". During stream setup, the person who is creating the stream (the stream creator) can choose to either:
Upload all photos added to the stream by members to our hosting solution (S3)
Upload all photos added to the stream by members to the stream creator's own Dropbox authenticated folder
In the future I would like to add more storage providers (such as Drive, Onesky etc)
There are a couple of different questions I have regarding how to solve this.
What should the structure be in the database for photos? I currently only have photo_url, but that won't be easy to manage from a data perspective with pre-signed URLs and when there are different ways a photo can be uploaded (S3, Dropbox, etc.)
How should the access tokens for each storage provider be stored? Remember that only the stream creator's access_token will be stored and everyone who is on the stream will share that token when uploading photos
I will add iOS and web clients in the future that will do a direct upload to the storage provider and bypass the server to avoid a heavy load on the server
As far as database storage, your application should dictate the structure based on the interface that you present both to the user and to the stream.
If you have users upload a photo and they don't get to choose the URI, and you don't have any hierarchy within a stream, then I'd recommend storing just an ID and a stream_id in your main photo table.
So at a minimum you might have something looking like
create table photos(id integer primary key, stream_id integer references streams(id) not null);
But you probably also want description and other information that is independent of storage.
The streams table would have all the generic information about a stream, but would have a polymorphic association to a class dependent on the type of stream. So you could use that association to get an instance of S3Stream or DropBoxStream based on what actual stream was used.
That instance (also an ActiveRecord resource) could store the access key, and for things like dropbox, the path to the folder etc. In addition, that instance could provide methods to construct a URI given your Photo object.
If a particular technology needs to cache signed URIs, then say the S3Stream object could reference a S3SignedUrl model where the URIs are signed.
If it turns out that the signed URL code is similar between DropBox and S3, then perhaps you have a single SignedUrl model.
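A minimal sketch of that shape (all model, column, and method names here are hypothetical, chosen only to illustrate the polymorphic association):

class Stream < ApplicationRecord
  belongs_to :storable, polymorphic: true   # S3Stream or DropboxStream
  has_many :photos
end

class Photo < ApplicationRecord
  belongs_to :stream

  def url
    stream.storable.url_for(self)   # delegate URL construction to the storage type
  end
end

class S3Stream < ApplicationRecord
  has_one :stream, as: :storable

  def url_for(photo)
    # e.g. build or sign an S3 URL from the stored bucket and credentials
  end
end

class DropboxStream < ApplicationRecord
  has_one :stream, as: :storable

  def url_for(photo)
    # e.g. resolve a path in the creator's Dropbox folder using the stored access token
  end
end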
When you design the ios and android clients, it is critical that they are not given access to the stream owner's access tokens. Instead, you'll need to do all the signing inside your server app. You wouldn't want a compromise of a device to lead to exposing the access token creating billing problems as well as privacy exposures.
Hope this helps.
We set up a lot of Rails applications with different kinds of file storage behind them.
Yes, just a URL is not manageable in the future. To save a lot of time you could use gems like carrierwave or paperclip. They handle all the thumbnail generation and file validation. One approach is that you upload the file from the client directly to S3 or Dropbox into a tmp folder and just tell your Rails app "Hey, here is the URL of a newly uploaded file", and paperclip or carrierwave will take care of the thumbnail generation and storage. (Example for paperclip)
I don't know exactly how your stream works, so I cannot give a good answer to this -.-
With the setup I mentioned in 1., you should upload from your different clients directly to S3 or Dropbox etc., and after uploading, the client tells the Rails backend that it should import the file from that URL. (And before paperclip or carrierwave finish their processing, you could use the file's tmp URL to display something directly in your stream.)
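A small sketch of that "import from the tmp URL" step, assuming carrierwave and hypothetical Photo/PhotosController names:

class Photo < ApplicationRecord
  mount_uploader :image, ImageUploader   # carrierwave handles thumbnails and validation
end

class PhotosController < ApplicationController
  def import
    photo = Photo.new(stream_id: params[:stream_id])
    # carrierwave downloads the file from the tmp S3/Dropbox URL and processes it
    photo.remote_image_url = params[:tmp_url]
    photo.save!
    render json: { id: photo.id }
  end
end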

Rails implementation for securing S3 documents

I would like to protect my S3 documents behind my Rails app such that if I go to:
www.myapp.com/attachment/5 that should authenticate the user prior to displaying/downloading the document.
I have read similar questions on stackoverflow but I'm not sure I've seen any good conclusions.
From what I have read there are several things you can do to "protect" your S3 documents.
1) Obfuscate the URL. I have done this. I think this is a good thing to do so no one can guess the URL. For example, it would be easy to "walk" the URLs if your S3 URLs are obvious: https://s3.amazonaws.com/myapp.com/attachments/1/document.doc. Having a URL such as:
https://s3.amazonaws.com/myapp.com/7ca/6ab/c9d/db2/727/f14/document.doc seems much better.
This is great to do but doesn't resolve the issue of passing around URLs via email or websites.
2) Use an expiring URL as shown here: Rails 3, paperclip + S3 - Howto Store for an Instance and Protect Access
For me, however, this is not a great solution because the URL is exposed (even if just for a short period of time) and another user could reuse it within that window. You have to adjust the time to allow for the download without providing too much time for copying. It just seems like the wrong solution.
3) Proxy the document download via the app. At first I tried to just use send_file: http://www.therailsway.com/2009/2/22/file-downloads-done-right but the problem is that these files can only be static/local files on your server and not served via another site (S3/AWS). I can however use send_data and load the document into my app and immediately serve the document to the user. The problem with this solution is obvious - twice the bandwidth and twice the time (to load the document to my app and then back to the user).
I'm looking for a solution that provides the full security of #3 but does not require the additional bandwidth and time for loading. It looks like Basecamp is "protecting" documents behind their app (via authentication) and I assume other sites are doing something similar but I don't think they are using my #3 solution.
Suggestions would be greatly appreciated.
UPDATE:
I went with a 4th solution:
4) Use amazon bucket policies to control access to the files based on referrer:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?UsingBucketPolicies.html
UPDATE AGAIN:
Well, #4 can easily be worked around via a browser's developer tools. So I'm still in search of a solid solution.
You'd want to do two things:
Make the bucket and all objects inside it private. The naming convention doesn't actually matter, the simpler the better.
Generate signed URLs, and redirect to them from your application. This way, your app can check if the user is authenticated and authorized, and then generate a new signed URL and redirect them to it (use a temporary redirect such as 303, since the signed URL expires). This means that the file will never go through your servers, so there's no load or bandwidth on you. Here are the docs to presign a GET_OBJECT request; a controller sketch follows the link below:
https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/S3/Presigner.html
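A sketch of that flow (hypothetical AttachmentsController and Attachment model storing the S3 key; the bucket and region are placeholders):

require "aws-sdk-s3"

class AttachmentsController < ApplicationController
  before_action :authenticate_user!   # whatever authentication your app uses

  def show
    attachment = current_user.attachments.find(params[:id])   # authorization check
    client = Aws::S3::Client.new(region: "us-east-1")
    url = Aws::S3::Presigner.new(client: client).presigned_url(
      :get_object,
      bucket: "myapp-attachments",
      key: attachment.s3_key,
      expires_in: 60
    )
    # temporary redirect so the short-lived URL isn't cached
    redirect_to url, status: :see_other   # on Rails 7+ also pass allow_other_host: true
  end
end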
I would vote for number 3; it is the only truly secure approach, because once you pass the user the S3 URL, it remains valid until its expiration time. A crafty user could exploit that hole; the only question is, will that affect your application?
Perhaps you could set the expire time to be lower which would minimise the risk?
Take a look at an excerpt from this post:
Accessing private objects from a browser
All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated url for an object like this:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
By default authenticated urls expire 5 minutes after they were generated. Expiration options can be specified either with an absolute time since the epoch with the :expires options, or with a number of seconds relative to now with the :expires_in options:
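For example, per the interface quoted above (a five-minute expiry):

S3Object.url_for('beluga_baby.jpg', 'marcel_molina', :expires_in => 60 * 5)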
I have been in the process of trying to do something similar for quite some time now. If you don't want to use the bandwidth twice, then the only way this is possible is to let S3 do it. Now I am totally with you about the exposed URL. Were you able to come up with any alternative?
I found something that might be useful in this regard - http://docs.aws.amazon.com/AmazonS3/latest/dev/AuthUsingTempFederationTokenRuby.html
Once a user logs in, an AWS session with their IP as part of the AWS policy should be created, and then this can be used to generate the signed URLs. So in case somebody else grabs the URL, the signature will not match since the source of the request will be a different IP. Let me know if this makes sense and is secure enough.
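A rough sketch of that idea with the Ruby AWS SDK (bucket, key, and the example IP are placeholders; in a Rails controller the IP would come from request.remote_ip):

require "aws-sdk-s3"
require "json"

user_ip = "203.0.113.10"   # e.g. request.remote_ip for the logged-in user

# Temporary credentials whose policy only allows GETs from that IP.
policy = {
  "Version" => "2012-10-17",
  "Statement" => [{
    "Effect" => "Allow",
    "Action" => "s3:GetObject",
    "Resource" => "arn:aws:s3:::myapp-attachments/*",
    "Condition" => { "IpAddress" => { "aws:SourceIp" => user_ip } }
  }]
}.to_json

sts = Aws::STS::Client.new(region: "us-east-1")
credentials = sts.get_federation_token(name: "user-session", policy: policy, duration_seconds: 900).credentials

signer = Aws::S3::Presigner.new(
  client: Aws::S3::Client.new(
    region: "us-east-1",
    credentials: Aws::Credentials.new(credentials.access_key_id, credentials.secret_access_key, credentials.session_token)
  )
)
url = signer.presigned_url(:get_object, bucket: "myapp-attachments", key: "document.doc", expires_in: 300)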
