Azure Blob Storage authorization with SAS - asp.net-mvc

I have a web application (ASP.NET MVC) that uses Azure Blob Storage for storing documents and images. Each user has specific access rights to the blobs, and these rights are stored in the web application's database.
Currently I have a quick temporary solution that uses the web application as a middle layer: it runs the authorization check, and if the client has read access to the blob, the blob is first retrieved from Azure and then delivered to the client. This is of course not the optimal way of doing it, for many reasons.
I have started to rebuild this part using SAS (Shared Access Signatures), but I can't find a good source for setting up a system that will scale well as the number of users and files grows. I am expecting around 100 users and around 100,000 blobs.
As I see it I have two options.
1) Each file has one signature stored in the web application's database, and that signature is used by every user who has access to the file. This would be the easy way to do it, but if a user's access is later revoked, they will still be able to reach the file if they kept the link from an earlier visit.
2) Each file has a separate signature for every user who has access to it. This makes it easy to revoke access, but the number of signatures will be massive. Will this have any side effects?
Are there any more options?
Any thoughts on this are greatly appreciated!

Rather than creating a SAS per user, it would be better to group the files by role and map users to roles; this scales easily regardless of the number of users.
Also, giving users direct access to blobs is not recommended, since you want to distribute your blob content through your application. So grant access via the application, in the context of the user's role.
See the article below for generating a short-lived SAS that expires after two minutes, so users holding an old link do not keep access to the image for long:
http://www.dotnetcurry.com/windows-azure/901/protect-azure-blob-storage-shared-access-signature
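Following that approach, here is a minimal sketch of generating a two-minute read-only SAS with the classic Microsoft.WindowsAzure.Storage SDK (the library the linked article targets); the container and blob names are placeholders:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static string GetTwoMinuteReadUrl(string connectionString, string containerName, string blobName)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlockBlob blob = client.GetContainerReference(containerName)
                                .GetBlockBlobReference(blobName);

    // Read-only policy valid for two minutes (start backdated slightly
    // to tolerate clock skew between your server and Azure).
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Read,
        SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-1),
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(2)
    };

    return blob.Uri + blob.GetSharedAccessSignature(policy);
}
```

Your MVC action can run its usual database authorization check and, only on success, return this URL (or redirect to it), so any link a user keeps stops working within minutes.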
Hope this helps. :)

Related

Google Cloud Storage: is it OK to expose an API key?

I'm developing an application which lets users upload pictures. I'd like to use Google Cloud Storage to store these pictures. I am creating a unique GUID for each image in the database and would like to store the images in the cloud under that name. It makes sense to me to make an AJAX request for a GUID and then upload the image from the same page directly to Google Cloud Storage.
https://github.com/GoogleCloudPlatform/storage-getting-started-javascript/blob/master/index.html
As shown in this example.
My first question is: should I be sending this to my back end (C# code) and uploading it from there, or is this the correct approach?
And my second question, if this is the correct approach: wouldn't exposing my details like that in JavaScript allow other people to upload from outside my application as well?
An API key, by itself, identifies a call as being associated with a certain project for purposes of billing. It's only necessary for anonymous calls. An API key does not grant any sort of authorizations. If there's an object in a bucket in your project that only your project members can see, the API key won't give anyone permission to read it.
That said, it's not a great idea to share your API key if you can help it, and if you need to share it, you should lock it down as much as possible. API keys can be limited to use with only certain IP addresses, only with certain web referrers (for instance, it will only work with JavaScript clients on www.yoursite.com), or only when run from a particular iPhone or Android app. These precautions aren't cryptographically fool-proof (there's no reason a hacker couldn't spoof a referer header), but they do make them pretty much useless for someone else who just wants to paste an API key somewhere to enable a web app and doesn't want to pay for it themselves.
The problem with using the javascript client's approach for your application is that individual users would either end up uploading objects completely anonymously or with their own Google accounts. Neither is super great, since the anonymous option would basically require you to create a bucket with anonymous writes enabled, and you don't want to do that.
There is a great approach to letting users upload pictures, though: signed URLs. Signed URLs allow your server to securely sign, in advance, a request to upload an object with your credentials. This is your best option for letting anonymous end users securely upload objects to your buckets.
Documentation on signed URLs: https://cloud.google.com/storage/docs/accesscontrol#Signed-URLs
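In C# (the asker's back end), one way to produce such a signed URL is with the Google.Cloud.Storage.V1 library's UrlSigner. This is a minimal sketch; the exact factory and signing overloads vary between library versions, so treat the details as assumptions:

```csharp
using System;
using System.Net.Http;
using Google.Cloud.Storage.V1;

// The service-account key stays on the server; browsers only ever see
// the resulting short-lived URL. The path and bucket are placeholders.
UrlSigner signer = UrlSigner.FromServiceAccountPath("service-account.json");

string uploadUrl = signer.Sign(
    "my-pictures-bucket",        // placeholder bucket
    Guid.NewGuid().ToString(),   // object name: the GUID from your database
    TimeSpan.FromMinutes(10),    // how long the URL stays valid
    HttpMethod.Put);             // permit only an upload, nothing else
```

Your AJAX flow then becomes: request a GUID and signed URL from your server, then PUT the image bytes directly to that URL.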

Looking for advice on Amazon S3 bucket setup and management in a Rails multi-tenancy app

Each tenant will have their own photo gallery which stores photos on Amazon S3. Seeing as S3 is relatively new to me I'm looking for some advice and best practices on how to manage this in terms of buckets, IAM groups/users, security, usage reporting, and possibly billing.
The way I see it is I have two options.
Option 1:
One master bucket. Each tenant has a sub-directory where their photos are stored. I would have one IAM group for the whole application and create a new IAM user for each tenant with access to only their sub-directory. If, in the future, I want to know how much S3 space a tenant is using, will that be easy to report on? Would I want a unique AWS access key and secret key for each tenant even though they all use the same bucket?
Option 2:
Each tenant gets their own bucket. Each tenant would get their own IAM user with access only to their bucket. Is this option better for reporting on usage?
General questions:
Are there any major drawbacks to either option?
Is there another option I'm unaware of?
Can I report on storage via an IAM user's activity, or does it happen at the bucket level?
I think you're trying to turn your S3 account into a multi-user thing, which it's not.
Each tenant gets their own bucket
You are limited to 100 buckets, so this is probably not what you want. (Unless it's a very exclusive web service :)
One master bucket
OK
IAM user for each tenant
Um, I think there's a limit for IAM users too.
if I want to know how much S3 space a tenant is using will it be easy to report on?
You can write a script easy enough.
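A minimal sketch of such a script, assuming the one-master-bucket layout with a key prefix per tenant (bucket and prefix names are placeholders). It is shown in C# with the AWS SDK for .NET, consistent with the rest of this thread, but the ListObjects pagination works the same in any SDK:

```csharp
using Amazon.S3;
using Amazon.S3.Model;

static long BytesUsedByTenant(IAmazonS3 client, string bucket, string tenantPrefix)
{
    long total = 0;
    var request = new ListObjectsRequest { BucketName = bucket, Prefix = tenantPrefix };
    ListObjectsResponse response;
    do
    {
        // List one page of keys under the tenant's prefix and sum their sizes.
        response = client.ListObjects(request);
        foreach (var obj in response.S3Objects)
            total += obj.Size;
        request.Marker = response.NextMarker; // advance to the next page
    } while (response.IsTruncated);
    return total;
}
```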
billing
You can use DevPay buckets, in which case you can have 100 buckets per user. But this requires each user to sign up for AWS, among other complications.
Can I report on storage via an IAM user's activity or does it happen at the bucket level?
IAM is only checked at "ingress". After that, it's all just "your account". So the files don't have different "owners".
Is there another option I'm unaware of?
The usual way is to have a thin EC2 service that controls the security:
You write a web app and run it on EC2. It knows how to authenticate your users.
When a user wants to upload, they either POST it to EC2 (and it copies the file to S3, and probably resizes it anyway), or you generate a signed POST/PUT URL for the browser to upload directly into S3 (really easy to do once you understand it; see the sketch after this list).
When a user wants to view a file, they hit your service to get a signed URL that allows them access to their file. But that access times out after a while. That's OK, since they are only accessing the files via your EC2 webpage.
The upshot is that your EC2 box can be small because it's just creating URLs for the browser.
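A minimal sketch of that signed-URL step with the AWS SDK for .NET (bucket and key are placeholders); one GetPreSignedURL call covers both the download case and, with HttpVerb.PUT, the upload case:

```csharp
using System;
using Amazon.S3;
using Amazon.S3.Model;

// Credentials come from server-side config; they never reach the browser.
var s3 = new AmazonS3Client();

// Short-lived download link for one tenant's photo (placeholder names).
var request = new GetPreSignedUrlRequest
{
    BucketName = "master-bucket",
    Key = "tenant-42/photos/beach.jpg",
    Verb = HttpVerb.GET,              // use HttpVerb.PUT for uploads
    Expires = DateTime.UtcNow.AddMinutes(15)
};
string url = s3.GetPreSignedURL(request);
```

Because tenants never hold long-lived credentials, the bucket and IAM-user limits stop mattering: one bucket plus a key prefix per tenant is enough.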

IIS Restrict Access to Directory for table of users

I am trying to restrict access to files in a directory and its subdirectories based on user rights. My user rights are stored in an MS SQL database in a custom format; however, it is easy to query the list of users with rights to this directory.
I need to know how to apply this in the web.config on the server so that it authenticates against a query of a database table to determine whether the username is allowed to view the file. Of course, if they are not, they should be blocked / given a 404.
I am using IIS and ASP.NET MVC 3 with form-based security that was custom-made for us, as opposed to the built-in roles and membership, and it works great. There are over 10k users tied to this non-Active-Directory authentication, so I am not planning to change my authentication type; please don't go there.
It is not my decision on the choice of platform, or I would have gone with a LAMP server and been done with this.
Edit 11-13-2012 @ 8:57a:
Can you put the result of a SQL query in the web.config?
I have answered something similar in the past (uploading and accessing files), but the principles still apply to providing access to files at the file-system level:
in asp.net-mvc, is there a good library or pattern to follow when saving users content (images, files, etc)
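On the edit: web.config authorization rules are declarative and cannot evaluate a SQL query. The usual pattern is to keep the files somewhere IIS won't serve directly (App_Data, for instance) and route every request through a controller action that runs the rights check first. A minimal sketch, where UserRightsRepository.HasAccess is a hypothetical helper wrapping your existing SQL query:

```csharp
using System;
using System.IO;
using System.Web.Mvc;

[Authorize] // your form-based authentication has already identified the user
public class ProtectedFilesController : Controller
{
    public ActionResult Download(string relativePath)
    {
        // Files live under App_Data, which IIS never serves directly.
        string root = Server.MapPath("~/App_Data/ProtectedFiles");
        string fullPath = Path.GetFullPath(Path.Combine(root, relativePath));

        // Reject directory-traversal attempts like "..\..\web.config".
        if (!fullPath.StartsWith(root, StringComparison.OrdinalIgnoreCase))
            return HttpNotFound();

        // Hypothetical helper running your rights query against MS SQL.
        if (!UserRightsRepository.HasAccess(User.Identity.Name, fullPath))
            return HttpNotFound(); // blocked users get a 404, as requested

        return File(fullPath, "application/octet-stream",
                    Path.GetFileName(fullPath));
    }
}
```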

Azure - uploading files to blob storage via shared hosting

I'm struggling to find an answer to this. I have a website that is deployed in a shared hosting environment. I want to allow people to upload files to my Azure Blob Storage account.
I have this working locally, using the storage emulator; however, when I publish the site I get a Security Exception.
Is this actually possible in a shared hosting environment?
Cheers
A bit more detail would help in understanding how these uploads are taking place. That said, I'll make the assumption that people are uploading directly to Blob Storage, and not through your website (or web service).
To allow direct uploads, you need to provide either a public blob or container (which everyone in the world can see), or create a temporary Shared Access Signature (SAS) on a specific blob or container, that grants access for a short time window.
If your app is Silverlight, then you are probably running into a cross-domain issue (and you'll need to correct that with an access policy).
If you provide more details around the way uploads are being sent, as well as the client and server technology, I can edit my answer to be more specific.
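If the uploads do go directly from browser to storage, here is a minimal sketch of issuing a short-lived write SAS on a container, again with the classic Microsoft.WindowsAzure.Storage SDK (the container name and lifetime are placeholders):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static string GetUploadContainerUrl(string connectionString)
{
    CloudBlobContainer container = CloudStorageAccount.Parse(connectionString)
        .CreateCloudBlobClient()
        .GetContainerReference("uploads"); // placeholder container

    // Write-only SAS: clients can add blobs for 15 minutes, but cannot
    // read or list what anyone else has uploaded.
    var policy = new SharedAccessBlobPolicy
    {
        Permissions = SharedAccessBlobPermissions.Write,
        SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
    };

    return container.Uri + container.GetSharedAccessSignature(policy);
}
```

One aside: a common source of Security Exceptions on shared hosts is the site running under partial trust, which the storage client of that era disliked; handing the browser a SAS URL sidesteps this too, since the upload traffic never touches your host.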

How does Dropbox upload data to its servers?

Just recently I was thinking and wondered: how does Dropbox upload my files to its S3 storage, and how might that be organized?
Let's just completely forget about the sync aspect for a second and scale the problem down to one S3 bucket.
Say, in that bucket's root directory you have lots of folders, each belonging to an arbitrary user.
Now if that user wants to upload a file to their folder... how does that happen internally? I mean, Dropbox can't just store the Amazon S3 access credentials/keys hard-coded in the application (be it on iOS or Windows), as they might get reverse-engineered and thus exposed.
Any thoughts on this?
Thanks!
Some guys from EADS did some reverse engineering on Dropbox; the presentation slides are available for download: A Critical Analysis of Dropbox Software Security.
In the same way websites don't allow users to directly access their databases but rather provide interfaces that can control permissions and handle authentication, I'm sure Dropbox has some kind of application that the client on your computer interacts with. Their server daemon will have permissions to write to the disk, but your computer has to go through it (and it's security procedures) before anything your computer sends is written.
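To make that pattern concrete, here is a minimal sketch of such a server-side gatekeeper (not Dropbox's actual code), shown as an ASP.NET MVC action using the AWS SDK for .NET; the bucket name and key layout are invented for illustration:

```csharp
using System.Web;
using System.Web.Mvc;
using Amazon.S3;
using Amazon.S3.Model;

public class FilesController : Controller
{
    [Authorize] // the client authenticates to *our* service, never to S3
    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        // The S3 keys live only in server-side configuration, so nothing
        // shipped inside the iOS/Windows clients can be reverse-engineered.
        var s3 = new AmazonS3Client();

        s3.PutObject(new PutObjectRequest
        {
            BucketName = "example-sync-bucket",                       // invented
            Key = "users/" + User.Identity.Name + "/" + file.FileName,
            InputStream = file.InputStream
        });

        return new HttpStatusCodeResult(200);
    }
}
```

The same gatekeeper is where quotas, deduplication, and per-folder permissions can be enforced before anything touches the bucket.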
