Google Cloud Storage - Rails web app - different buckets and different access keys for different environments - ruby-on-rails

I plan to use a cloud based storage service to store some static user-uploaded content of my web application. I have settled upon Google Cloud Storage for now.
My web application is Rails, and I am using Paperclip with fog to connect to Google Cloud Storage.
I understand that I need to use the Interoperable Storage Access Keys in the fog config to connect to my bucket. Any additional key I add is given access to all the buckets.
I want to have a separate bucket per environment (development, staging and production). I want to have separate access and secret keys, with each key having access to only one bucket.
Basically, I don't want to put my production keys in my web-app source code, which all developers will have access to.
I read the Google Cloud Storage documentation on ACLs, but I could not find out how to achieve what I want.
I can't imagine that others wouldn't have had the same kind of requirement. Maybe I am using the wrong search terms, but I cannot get any info about this.
I would appreciate some help.
P.S. - Is what I want possible on AWS S3? I am open to switching to S3 if this is possible on it.

The normal solution for something like this would be to have 3 service accounts (development-app, staging-app, production-app), each of which would have its own set of credentials and permissions. You could either have a test, staging, and production project, or you could just have test, staging, and production buckets within a single project. You can create a whole range of per-project service accounts, each with its own set of credentials and permissions.
Unfortunately, interoperable storage access keys are not available for service accounts, only regular Google user accounts. In order to do what you want, you'd need to have three user accounts, each of which was granted access to exactly one of those buckets.
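On the Rails side, the usual pattern is to keep the keys out of the repository entirely and select the bucket per environment. Here is a minimal sketch, assuming Paperclip's fog storage and interoperable (HMAC) keys supplied through environment variables; the variable and bucket names are illustrative:

```ruby
# config/environments/production.rb
# (mirror this in development.rb / staging.rb with that environment's keys and bucket)
config.paperclip_defaults = {
  storage: :fog,
  fog_credentials: {
    provider:                         "Google",
    google_storage_access_key_id:     ENV["GCS_ACCESS_KEY_ID"],
    google_storage_secret_access_key: ENV["GCS_SECRET_ACCESS_KEY"]
  },
  # One bucket per environment, e.g. myapp-production / myapp-staging / myapp-development
  fog_directory: ENV["GCS_BUCKET"]
}
```

Developers then only ever see the development keys on their machines, while the production keys live in the production environment's configuration.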

Related

Using a private Google Cloud Storage bucket with a custom domain

I have Google Cloud Storage buckets and one Rails app that accesses them. My app works with files from 1 MB up to 300 MB for uploads and downloads.
In my Rails app I use the CarrierWave gem, so all the throughput goes through my app and then on to the bucket. Until now, everything has been fine.
Recently I implemented GCP direct upload, but the base URL is storage.googleapis.com. This is a problem for my customers, who enforce strict security on their local networks.
I need storage.googleapis.com to become storage.mycustomdomain.com. With this approach my customers only have to allow *.mycustomdomain.com on their networks.
Could someone help me?
Thanks
Cloud Storage public objects are served directly from GCP through storage.googleapis.com, as explained in the documentation. From John Hanley’s comment, and according to this guide, Cloud Storage does not directly support custom domains:
Because Cloud Storage doesn't support custom domains with HTTPS on its own, this tutorial uses Cloud Storage with HTTP(S) Load Balancing to serve content from a custom domain over HTTPS.
The guide walks through creating a load balancer which you can use to serve user content from your own domain, with the bucket as the backend. Alternatively, you can put a CDN in front of Cloud Storage and serve it from a custom domain, as described in the blog post whose stated objectives are:
I want to serve images on my website (comparison for contact lenses) from a cloud bucket.
I want to serve it from my domain, cdn.kontaktlinsen-preisvergleich.de
I need HTTPS for that domain, because my website uses HTTPS everywhere and I don’t want to mix that.
This related thread also mentions implementation of a CDN to use a custom domain to serve Cloud Storage objects.
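Once a load balancer or CDN answers on the custom domain, the Rails side only needs to generate URLs against that host. Here is a minimal CarrierWave sketch, assuming fog-google and a hypothetical storage.mycustomdomain.com that already points at the bucket:

```ruby
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  config.fog_provider    = "fog/google"
  config.fog_credentials = {
    provider:                         "Google",
    google_storage_access_key_id:     ENV["GCS_ACCESS_KEY_ID"],
    google_storage_secret_access_key: ENV["GCS_SECRET_ACCESS_KEY"]
  }
  config.fog_directory = "my-bucket"
  # Generate public URLs against the custom domain instead of storage.googleapis.com
  config.asset_host = "https://storage.mycustomdomain.com"
end
```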

How to manage secrets in a multi-tenant app for multiple environments (Local, Dev, Prod)

I have a multi-tenant application that stores each client's data in a distinct database. To access the data I retrieve the credentials from AWS Secrets Manager, where each secret is stored under the tenant_id as its name. In code, I can then retrieve it simply by passing the tenant_id.
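For reference, the current lookup is essentially the following (a minimal Ruby sketch of the pattern described above; the method name and JSON shape are illustrative):

```ruby
require "json"
require "aws-sdk-secretsmanager"

# The secret is named after the tenant_id, so fetching it only needs that id.
def tenant_db_credentials(tenant_id)
  client = Aws::SecretsManager::Client.new
  secret = client.get_secret_value(secret_id: tenant_id)
  JSON.parse(secret.secret_string)
end
```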
Now I'm looking for a clean way to implement multiple environments. But I'm not able to find a way that suits my use case. The restrictions I'm having are:
The tenant_id is actually the client's Azure Tenant ID and is also used to connect to the GraphQL API. Just using an ID like "Test-Tenant" is therefore not possible, as I would not be able to run all the code.
Relying on the same database for staging and testing (which is probably a bad idea anyway) is also not possible, because the staging database is a document DB that cannot be reached from my local machine (except via SSH tunneling, but then my endpoint URLs would not match).
What would be a clean way to implement multiple environments in this multi-tenancy setup?

How to register an app from an S3 bucket without public access in Spring Cloud Data Flow server

I have a Data Flow server running in PCF, and I want to register an app (http://....jar) that is in an S3 bucket without public access.
I see there are only three parameters available (--name, --type, --uri) for app register; how could I pass credentials such as --aws.accessKeyId and --aws.secretKey?
At the time of this writing, we do not have an AWS-native approach to resolve Spring Cloud Stream or Spring Cloud Task applications from S3 buckets directly. We have an open story on this matter, however. Today, you can only resolve publicly accessible application artifacts from S3 buckets.
Alternatively, you could host and resolve the applications using the SCDF App Tool that we ship.
There are a few other alternatives, too, so feel free to try out the options and choose the method that works best for your use case.

Looking for advice on Amazon S3 bucket setup and management in a Rails multi-tenancy app

Each tenant will have their own photo gallery which stores photos on Amazon S3. Seeing as S3 is relatively new to me I'm looking for some advice and best practices on how to manage this in terms of buckets, IAM groups/users, security, usage reporting, and possibly billing.
The way I see it is I have two options.
Option 1:
One master bucket. Each tenant has a sub-directory where their photos are stored. I would have one IAM group for the whole application and create a new IAM user for each tenant with access to only their sub-directory. In the future, if I want to know how much S3 space a tenant is using, will it be easy to report on? Would I want a unique AWS access key and secret key for each tenant even though they are going to the same bucket?
Option 2:
Each tenant gets their own bucket. Each tenant would get their own IAM user with access only to their bucket. Is this option better for reporting on usage?
General questions:
Are there any major drawbacks to either option?
Is there another option I'm unaware of?
Can I report on storage via an IAM user's activity or does it happen at the bucket level?
I think you're trying to turn your S3 account into a multi-user thing, which it's not.
Each tenant gets their own bucket
You are limited to 100 buckets, so this is probably not what you want. (Unless it's a very exclusive web service :)
One master bucket
OK
IAM user for each tenant
Um, I think there's a limit for IAM users too.
if I want to know how much S3 space a tenant is using will it be easy to report on?
You can write a script easy enough.
billing
You can use DevPay buckets, in which case you can have 100 buckets per user. But that requires each user to sign up for AWS, among other complications.
Can I report on storage via an IAM user's activity or does it happen at the bucket level?
IAM is only checked at "ingress". After that, it's all just "your account". So the files don't have different "owners".
Is there another option I'm unaware of?
The usual way is to have a thin EC2 service that controls the security:
You write a web app and run it on EC2. It knows how to authenticate your users.
When a user wants to upload, they either POST it to EC2 (which copies it to S3, and probably resizes it anyway), or you generate a signed POST/PUT URL that lets the browser upload directly into S3 (really easy to do once you understand it).
When a user wants to view a file, they hit your service to get a signed URL that allows them access to their file. But that access times out after a while. That's OK, since they are only accessing the files via your EC2 webpage.
The upshot is that your EC2 box can be small because it's just creating URLs for the browser.
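As a rough illustration of the signed-URL part, here is a minimal sketch with the aws-sdk-s3 gem; the bucket name and key layout are made up for the example:

```ruby
require "aws-sdk-s3"

s3     = Aws::S3::Resource.new(region: "us-east-1")
object = s3.bucket("my-app-photos").object("tenants/42/photo.jpg")

# Short-lived URL the browser can use to view the file
download_url = object.presigned_url(:get, expires_in: 300)

# Short-lived URL the browser can use to upload directly to S3
upload_url = object.presigned_url(:put, expires_in: 300)
```

Your app authenticates the tenant and decides which keys they may see; S3 only ever receives requests carrying one of these time-limited signatures.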

Securing S3 via your own application

Imagine the following use case:
You have a Basecamp-style application hosting files with S3. Accounts each have their own files, all stored on S3.
How, therefore, would a developer go about securing files so users of account 1 couldn't somehow get to the files of account 2?
We're talking Rails if that's a help.
S3 supports signed, time-expiring URLs, meaning you can furnish a user with a URL that effectively lets only people with that link view the file, and only within a certain time period from issue.
http://www.miracletutorials.com/s3-amazon-expiring-urls/
If you want to restrict access to those remote resources you could proxy the files through your app. For something like S3 this may defeat the purpose of what you are trying to do, but it would still allow you to keep the data with Amazon and restrict access.
You should be careful with an approach like this, as it could cause your Ruby thread to block while it is proxying the file, which could become a real problem for the application.
Serve the files using an EC2 Instance
If you set your S3 bucket to private, then start up an EC2 instance, you could serve your files on S3 via EC2, using the EC2 instance to verify permissions based on your application's rules. Because there is no charge for EC2 to transfer to/from S3 (within the same region), you don't have to double up your bandwidth consumption costs at Amazon.
I haven't tackled this exact issue. But that doesn't stop me from having an opinion :)
Check out cancan:
http://github.com/ryanb/cancan
http://railscasts.com/episodes/192-authorization-with-cancan
It allows custom authorization schemes, without too much hassle.
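For the account-isolation part of the question, a minimal CanCan ability might look like this (the Attachment model and its account_id column are assumptions for the example):

```ruby
# app/models/ability.rb
class Ability
  include CanCan::Ability

  def initialize(user)
    # A user may only read attachments that belong to their own account
    can :read, Attachment, account_id: user.account_id
  end
end
```

You would then check `can? :read, attachment` in the controller before handing out a signed URL or streaming the file.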
