Using a private Google Cloud Storage bucket with a custom domain - ruby-on-rails

I have Google Cloud Storage buckets and one Rails app that accesses those buckets. My app handles files from 1 MB up to 300 MB for uploads and downloads.
In my Rails app I use the CarrierWave gem, so all the throughput goes through my app and then on to the bucket... until now, everything was fine.
Recently I implemented GCP direct upload, but the base URL is storage.googleapis.com. This is terrible for my customers, who have very strict security on their local networks.
I need storage.googleapis.com to become storage.mycustomdomain.com. With this approach my customers would only have to allow *.mycustomdomain.com on their networks.
Could someone help me?
Thanks

Cloud Storage public objects are served directly from GCP through storage.googleapis.com, as explained in the documentation. As John Hanley's comment points out, and according to this guide, Cloud Storage does not directly support custom domains:
Because Cloud Storage doesn't support custom domains with HTTPS on its own, this tutorial uses Cloud Storage with HTTP(S) Load Balancing to serve content from a custom domain over HTTPS.
The guide walks through creating a load balancer which you can use to serve user content from your own domain, using the buckets as the backend service. Alternatively, it is also possible to put a CDN in front of Cloud Storage and use a custom domain, as described in the blog post's objectives:
I want to serve images on my website (comparison for contact lenses) from a cloud bucket.
I want to serve it from my domain, cdn.kontaktlinsen-preisvergleich.de
I need HTTPS for that domain, because my website uses HTTPS everywhere and I don’t want to mix that.
This related thread also discusses implementing a CDN so that Cloud Storage objects can be served from a custom domain.
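Once the load balancer (or CDN) is answering on storage.mycustomdomain.com, the Rails side only has to generate URLs against that host. A minimal CarrierWave sketch, assuming fog-google with interoperable keys kept in environment variables (the variable names and bucket name are placeholders):

    # config/initializers/carrierwave.rb
    require 'carrierwave'

    CarrierWave.configure do |config|
      config.fog_provider    = 'fog/google'
      config.fog_credentials = {
        provider:                         'Google',
        google_storage_access_key_id:     ENV['GCS_ACCESS_KEY_ID'],
        google_storage_secret_access_key: ENV['GCS_SECRET_ACCESS_KEY']
      }
      config.fog_directory = 'my-app-uploads'
      # Public URLs built by CarrierWave now use the custom domain instead of
      # storage.googleapis.com.
      config.asset_host = 'https://storage.mycustomdomain.com'
    end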

Related

API gateway to my elastic beanstalk docker deployed app

My backend is a simple dockerized Node.js Express app deployed on Elastic Beanstalk. It is exposed on port 80 and located somewhere like
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com
I can call my APIs on the backend
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/hello
mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/postSomeDataToMe
and they work! Yay.
The URL is not very user friendly, so I was hoping to set up API Gateway to let me simply forward API requests from
api.myapp.com/apiFamily/ to mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com
so I can call api.myapp.com/apiFamily/hello or api.myapp.com/apiFamily/postMeSomeData
Unfortunately, I can't figure out (i) whether I can do this, or (ii) how to actually do it.
Can anybody point me to a resource that explains clearly how to do this?
Thanks
Yes, you can do this. For this to happen you need two things:
a custom domain that you own and control, e.g. myapp.com.
a valid, public SSL certificate issued for that domain.
If you don't have them and want to stay within the AWS ecosystem, you can use Route 53 to buy and manage your custom domain. For SSL you can use AWS ACM, which will provide you with a free SSL certificate for the domain.
The AWS instructions on how to set it all up are here:
Setting up custom domain names for REST APIs
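Once the certificate is issued, the custom domain and base path mapping can be created in the console or scripted. A rough sketch with the aws-sdk-apigateway gem, assuming an existing REST API (the API ID, stage name and ACM certificate ARN below are placeholders):

    require 'aws-sdk-apigateway'

    client = Aws::APIGateway::Client.new(region: 'us-east-1')

    # Register the custom domain with its ACM certificate.
    domain = client.create_domain_name(
      domain_name: 'api.myapp.com',
      regional_certificate_arn: 'arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE',
      endpoint_configuration: { types: ['REGIONAL'] }
    )

    # Map api.myapp.com/apiFamily/* to the 'prod' stage of the REST API.
    client.create_base_path_mapping(
      domain_name: 'api.myapp.com',
      base_path: 'apiFamily',
      rest_api_id: 'abc123',
      stage: 'prod'
    )

    # Finally, point a Route53 alias (or CNAME) at this hostname.
    puts domain.regional_domain_name

The REST API itself would use an HTTP proxy integration pointing at the Elastic Beanstalk URL, so that api.myapp.com/apiFamily/hello forwards to mybackend.eba-p4e52d.us-east-1.elasticbeanstalk.com/hello.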

How can we use AWS services to create a complete chat module in our iOS app with Video, Audio and Text chat capabilities?

I need to develop an iOS health care application with HIPAA compliance. Since HIPAA compliance doesn't allow the use of third-party chat SDKs, I need to implement the entire chat module, including video conferencing, using AWS services.
I have read about AWS CloudFront for media streaming. Can anyone suggest a better approach for this?
You can build a serverless real-time chat application with AWS. Create separate Lambda functions as microservices for text chat and video conferencing, then configure these Lambda functions to be triggered by CloudFront events. You can also check which type of event was triggered in CloudFront and perform the desired function.
First of all you need an AWS Certified Solutions Architect - Associate, whose responsibilities are to:
Maintain the AWS account for you.
Manage all the resources (EC2, CloudFront, S3, DynamoDB, etc.) for you.
Then store all the chat messages in DynamoDB.
Use S3 to store files (images, video and other media).
CloudFront is used to deliver those files to end users with low latency.
You can simply think of the architect as the person who manages the server, much like a backend PHP developer who gives you all the API calls and manages the DB.
Doc reference: https://aws.amazon.com/documentation/sdk-for-ios/
Your responsibility is then to use the above-mentioned API calls to build the app. For the UI you can use a third-party design or customise it yourself.
Edit:
Or you can use the serverless approach described in the other answer.
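To illustrate the DynamoDB step, each chat message can be written as one item keyed by conversation and timestamp. The table and attribute names below are hypothetical, and the call is shown with the Ruby aws-sdk-dynamodb gem for brevity; the AWS SDK for iOS exposes the same PutItem operation.

    require 'aws-sdk-dynamodb'

    dynamodb = Aws::DynamoDB::Client.new(region: 'us-east-1')

    # One chat message in a hypothetical 'ChatMessages' table
    # (conversation_id partition key, sent_at sort key).
    dynamodb.put_item(
      table_name: 'ChatMessages',
      item: {
        'conversation_id' => 'room-42',
        'sent_at'         => Time.now.to_i,
        'sender_id'       => 'user-123',
        'body'            => 'Hello from the chat module',
        'attachment_key'  => 'uploads/scan-001.png' # S3 object delivered through CloudFront
      }
    )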

Google Cloud Storage - Rails web app - different buckets and different access keys for different environments

I plan to use a cloud-based storage service to store some static user-uploaded content for my web application. I have settled on Google Cloud Storage for now.
My web application is Rails, and I am using Paperclip with fog to connect to Google Cloud Storage.
I understand that I need to use the Interoperable Storage Access Keys in the fog config to connect to my bucket. Any additional key I add is given access to all the buckets.
I want to have a separate bucket per environment (development, staging and production). I want to have separate access and secret keys, with each key having access to only one bucket.
Basically, I don't want to put my production keys in my web-app source code, which all developers will have access to.
I read the Google Cloud Storage documentation on ACLs, but I could not find out how to achieve what I want.
I can't imagine that others wouldn't have had the same kind of requirement. Maybe I am using the wrong search terms, but I cannot get any info about this.
I would appreciate some help.
P.S. - Is what I want possible on AWS S3? I am open to switching to S3 if this is possible on it.
The normal solution for something like this would be to have 3 service accounts (development-app, staging-app, production-app), each of which would have its own set of credentials and permissions. You could either have separate test, staging, and production projects, or just test, staging, and production buckets within a single project; either way, you can create as many per-project service accounts as you need, each with its own credentials and permissions.
Unfortunately, interoperable storage access keys are not available for service accounts, only regular Google user accounts. In order to do what you want, you'd need to have three user accounts, each of which was granted access to exactly one of those buckets.
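Whichever accounts you end up with, the keys themselves can stay out of the source tree by reading them from the environment in each Rails environment's config. A minimal sketch with Paperclip and fog (the environment variable names and bucket name are placeholders):

    # config/environments/production.rb -- repeat per environment,
    # each pointing at its own bucket and its own key pair.
    config.paperclip_defaults = {
      storage: :fog,
      fog_credentials: {
        provider:                         'Google',
        google_storage_access_key_id:     ENV['GCS_ACCESS_KEY_ID'],
        google_storage_secret_access_key: ENV['GCS_SECRET_ACCESS_KEY']
      },
      fog_directory: 'myapp-production'
    }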

Azure - uploading files to blob storage via shared hosting

I'm struggling to find an answer to this. I have a website that is deployed in a shared hosting environment. I want to allow people to upload files to my Azure Blob Storage account.
I have this working locally using the storage emulator; however, when I publish the site I get a Security Exception.
Is this actually possible in a shared hosting environment?
Cheers
A bit more detail would help in understanding how these uploads are taking place. That said, I'll assume that people are uploading directly to Blob Storage, and not through your website (or web service).
To allow direct uploads, you need to provide either a public blob or container (which everyone in the world can see), or create a temporary Shared Access Signature (SAS) on a specific blob or container, that grants access for a short time window.
If your app is Silverlight, then you are probably running into a cross-domain issue (and you'll need to correct that with an access policy).
If you provide more details around the way uploads are being sent, as well as the client and server technology, I can edit my answer to be more specific.
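For the SAS route, some trusted process (your site's backend, or a small script) signs a short-lived URL and hands it to the client, which then uploads directly to Blob Storage. A rough sketch, assuming the azure-storage-common Ruby gem's SharedAccessSignature helper, with placeholder account, container and blob names:

    require 'azure/storage/common'

    signer = Azure::Storage::Common::Core::Auth::SharedAccessSignature.new(
      'mystorageaccount', ENV['AZURE_STORAGE_ACCESS_KEY']
    )

    # Grant create/write access to a single blob for 30 minutes.
    sas = signer.generate_service_sas_token(
      'uploads/user-file.dat',
      service:     'b',   # blob service
      resource:    'b',   # a single blob
      permissions: 'cw',  # create + write
      expiry:      (Time.now.utc + 30 * 60).strftime('%Y-%m-%dT%H:%M:%SZ'),
      protocol:    'https'
    )

    upload_url = "https://mystorageaccount.blob.core.windows.net/uploads/user-file.dat?#{sas}"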

Securing S3 via your own application

Imagine the following use case:
You have a Basecamp-style application hosting files on S3. Accounts each have their own files, all stored on S3.
How, therefore, would a developer go about securing files so users of account 1 couldn't somehow get to the files of account 2?
We're talking Rails if that's a help.
S3 supports signed, time-expiring URLs, which means you can furnish a user with a URL that effectively lets only people with that link view the file, and only within a certain time period from issue.
http://www.miracletutorials.com/s3-amazon-expiring-urls/
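A minimal sketch with the aws-sdk-s3 gem (the bucket name and key layout are just examples); the application verifies that the current user belongs to the account before handing out the link:

    require 'aws-sdk-s3'

    s3     = Aws::S3::Resource.new(region: 'us-east-1')
    object = s3.bucket('my-app-uploads').object("accounts/#{account.id}/report.pdf")

    # The URL stops working after 10 minutes.
    url = object.presigned_url(:get, expires_in: 600)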
If you want to restrict control of those remote resources you could proxy the files through your app. For something like S3 this may defeat the purpose of what you are trying to do, but it would still allow you to keep the data with amazon and restrict access.
You should be careful with an approach like this, as it can cause your Ruby thread to block while it is proxying the file, which could become a real problem for the application.
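If you do decide to proxy, a rough controller sketch looks like this (model and attribute names are hypothetical):

    # app/controllers/attachments_controller.rb
    class AttachmentsController < ApplicationController
      def show
        # Scoping the lookup to the user's account enforces the access rule.
        attachment = current_user.account.attachments.find(params[:id])

        s3_object = Aws::S3::Resource.new(region: 'us-east-1')
                                     .bucket('my-app-uploads')
                                     .object(attachment.s3_key)

        # Streams the private object through the Rails process; simple,
        # but ties up a worker for the whole download.
        send_data s3_object.get.body.read,
                  filename: attachment.filename,
                  type: attachment.content_type
      end
    end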
Serve the files using an EC2 Instance
If you set your S3 bucket to private, then start up an EC2 instance, you could serve your files on S3 via EC2, using the EC2 instance to verify permissions based on your application's rules. Because there is no charge for EC2 to transfer to/from S3 (within the same region), you don't have to double up your bandwidth consumption costs at Amazon.
I haven't tackled this exact issue. But that doesn't stop me from having an opinion :)
Check out cancan:
http://github.com/ryanb/cancan
http://railscasts.com/episodes/192-authorization-with-cancan
It allows custom authorization schemes, without too much hassle.
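For the account-scoping part, a minimal Ability sketch (Attachment and account_id are hypothetical names for the file model and its owning account):

    # app/models/ability.rb
    class Ability
      include CanCan::Ability

      def initialize(user)
        return unless user
        # Users may only read attachments belonging to their own account.
        can :read, Attachment, account_id: user.account_id
      end
    end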
