Currently I am serving private S3 objects worldwide using signed URLs, and I am looking to accelerate certain S3 reads with CloudFront to take advantage of the CloudFront<>S3 connectivity and the caching.
Unfortunately, S3 Transfer Acceleration is not an option for me because my bucket name is not DNS-compliant (it's a long-lived bucket with '.'s in the name).
I plan on using both S3 signed URLs and CloudFront; only a subset of users will go through CloudFront, given the cost.
Is there any way I can use CloudFront to serve private content while still using S3 signed URLs?
Any help on this is much appreciated.
Yes, of course. Here is a great article which you may find helpful.
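In short: you can keep the bucket private and have CloudFront sign its own URLs. CloudFront signed URLs work independently of S3 presigned URLs, so your existing S3 links keep working for the users who stay on plain S3. A minimal sketch with the aws-sdk-cloudfront gem, assuming a CloudFront key pair is already registered with the distribution (the key pair ID, key path, and domain below are placeholders):

    require 'aws-sdk-cloudfront'

    # Placeholders: use your own key pair ID, private key, and distribution domain.
    signer = Aws::CloudFront::UrlSigner.new(
      key_pair_id: 'APKAEXAMPLE',
      private_key_path: '/path/to/cloudfront_private_key.pem'
    )

    # Canned-policy signed URL: valid only until the expiry time.
    url = signer.signed_url(
      'https://d111111abcdef8.cloudfront.net/private/object.jpg',
      expires: Time.now + 3600 # one hour
    )

You would restrict the distribution to signed URLs (trusted signers) and lock the bucket down to CloudFront with an origin access identity, so the only public entry points are the two kinds of signed links.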
I have Google Cloud Storage buckets and a Rails app that accesses them. My app handles uploads and downloads of files from 1 MB up to 300 MB.
In my Rails app I use the carrierwave gem, so all the throughput goes through my app and then on to the bucket. Until now, everything was fine.
Recently I implemented GCP direct upload, but the base URL is storage.googleapis.com. This is terrible for my customers, who have strict security policies on their local networks.
I need storage.googleapis.com to become storage.mycustomdomain.com. With this approach my customers would just allow *.mycustomdomain.com on their networks.
Could someone help me?
Thanks
Cloud Storage public objects are served directly from GCP through storage.googleapis.com, as explained in the documentation. From John Hanley’s comment, and according to this guide, Cloud Storage does not directly support custom domains:
Because Cloud Storage doesn't support custom domains with HTTPS on its own, this tutorial uses Cloud Storage with HTTP(S) Load Balancing to serve content from a custom domain over HTTPS.
The guide walks through creating a load balancer service which you can use to serve user content from your own domain, using the buckets as the service backend. Alternatively, it is also possible to create a CDN that is backed by Cloud Storage and uses a custom domain, as mentioned in the blog's objectives:
I want to serve images on my website (comparison for contact lenses) from a cloud bucket.
I want to serve it from my domain, cdn.kontaktlinsen-preisvergleich.de
I need HTTPS for that domain, because my website uses HTTPS everywhere and I don’t want to mix that.
This related thread also mentions implementation of a CDN to use a custom domain to serve Cloud Storage objects.
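At a high level, the load balancer approach amounts to putting a backend bucket in front of your Cloud Storage bucket and attaching a certificate for the custom domain. A rough sketch with gcloud (all names are placeholders, and you still need a DNS record pointing storage.mycustomdomain.com at the forwarding rule's IP):

    # Backend bucket in front of the Cloud Storage bucket (optionally with Cloud CDN)
    gcloud compute backend-buckets create storage-backend \
        --gcs-bucket-name=my-bucket --enable-cdn

    # URL map, managed certificate, HTTPS proxy, and global forwarding rule
    gcloud compute url-maps create storage-lb \
        --default-backend-bucket=storage-backend
    gcloud compute ssl-certificates create storage-cert \
        --domains=storage.mycustomdomain.com
    gcloud compute target-https-proxies create storage-proxy \
        --url-map=storage-lb --ssl-certificates=storage-cert
    gcloud compute forwarding-rules create storage-https \
        --global --target-https-proxy=storage-proxy --ports=443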
I am using Rackspace Cloud Files as my CDN. My app is image-heavy, and right now all images are uploaded to my server and from there uploaded to Cloud Files.
I think this is redundant and a waste of my server resources. A better solution would be to give the client a URL to upload to, so the client can upload directly (bypassing my server completely) and then tell the server everything is done.
I am wondering if this is possible using Cloud Files and how it can be done. I am using Rails on the server side, by the way.
Thanks
You may use the FormPost feature of Rackspace Cloud Files.
FormPost lets you offer your website audience a way to upload objects to your Cloud Files account through a web form.
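Your Rails app only needs to render and sign the form; the browser then POSTs the file straight to Cloud Files. A minimal sketch of generating the FormPost signature, which is an HMAC-SHA1 over the form parameters as in OpenStack Swift's formpost middleware (the account key, paths, and URLs below are placeholders):

    require 'openssl'

    # All values are hypothetical; the key is your account's temp-URL/FormPost key.
    key            = 'MYSECRETKEY'
    path           = '/v1/MossoCloudFS_account/container'
    redirect       = 'https://myapp.example.com/uploads/done'
    max_file_size  = 300 * 1024 * 1024   # 300 MB
    max_file_count = 1
    expires        = Time.now.to_i + 600 # form is valid for 10 minutes

    data      = [path, redirect, max_file_size, max_file_count, expires].join("\n")
    signature = OpenSSL::HMAC.hexdigest('sha1', key, data)

    # Embed path, redirect, max_file_size, max_file_count, expires, and signature
    # as hidden fields in the multipart form the browser submits.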
A CDN Container has four URIs associated with it: iOS Streaming, Streaming, HTTPS and HTTP.
Can't you use the, say, HTTPS URI to allow your clients to upload directly to the Container?
I am building a Ruby on Rails website that will store and stream videos. I am using carrierwave and Amazon S3 to upload and store the videos. If I am not mistaken, I can stream the files directly from S3 to my website.
So can anyone explain why it seems that everyone uses CloudFront along with S3? What are the benefits?
What would be the average cost of such a storage/serving solution?
I will be streaming the videos via HTML5, so I will not be looking at encoding solutions.
The main advantage to CloudFront is that it's a CDN. So the content is positioned closer to your customers, rather than just in Amazon's main data stores. You can use CloudFront with or without S3. It has a concept of an origin, which is basically the master server for your content. That master server can be S3 or a non-Amazon server.
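For illustration, here is a rough sketch of pointing a distribution at an S3 origin with the aws-sdk-cloudfront gem; the bucket name and IDs are placeholders, and the real DistributionConfig accepts many more options than shown:

    require 'aws-sdk-cloudfront'

    cf = Aws::CloudFront::Client.new(region: 'us-east-1')

    # Minimal distribution with a single S3 origin (names are hypothetical).
    cf.create_distribution(distribution_config: {
      caller_reference: Time.now.to_i.to_s, # any unique string
      comment: 'video CDN',
      enabled: true,
      origins: {
        quantity: 1,
        items: [{
          id: 's3-videos',
          domain_name: 'my-videos.s3.amazonaws.com',
          s3_origin_config: { origin_access_identity: '' }
        }]
      },
      default_cache_behavior: {
        target_origin_id: 's3-videos',
        viewer_protocol_policy: 'redirect-to-https',
        min_ttl: 0,
        forwarded_values: { query_string: false, cookies: { forward: 'none' } }
      }
    })

Once the distribution deploys, you serve the same objects through its *.cloudfront.net domain instead of the bucket's S3 URL.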
For pricing, you should look at the CloudFront pricing details, and optionally the pricing for S3 (if you use that as origin).
You can use the calculator to estimate the actual cost. Let us know if you want help with that.
I have a Rails application that I want to add file upload to, so that users have access to a "resources" section where they can upload and share (although not publicly) any type of file. I know I could build a solution using paperclip and S3, for example, but to avoid the admin overhead of all that I'm looking at API interfaces to drop.io and box.net. Does anyone have any experience with these? I've got a basic demo working rather well with drop.io, but I was just wondering if anyone had any better ideas or experiences.
Many thanks
D
I use attachment_fu with an S3 backend. For user-interface goodness, I use YUI's file uploader.
Some of the files are uploaded with world read access, others with no public read access.
I use attachment_fu to create signed URLs that enable clients to access the private S3 files.
I did write some small helper routines for the S3 library for re-connecting after a timeout, handling various errors that the S3 library can raise, etc.
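For reference, the private-file pattern looks roughly like this with attachment_fu's S3 backend (the controller, model, and expiry value are hypothetical):

    # Sketch: hand out a short-lived signed URL for a private S3 object.
    class AttachmentsController < ApplicationController
      def show
        attachment = current_user.attachments.find(params[:id])
        # authenticated_s3_url signs the request with your AWS credentials;
        # the link stops working after :expires_in seconds.
        redirect_to attachment.authenticated_s3_url(:expires_in => 15.minutes)
      end
    end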
Building your own library for drop.io and/or box.net
Your idea of using the API for a commercial service is interesting but I haven't run into any problems with the above config. And the price for direct S3 access is very low.
If you do decide to go this route, you may want to open source your code. You'd benefit by getting testing, ideas, and possible code contributions from the community.
Note that if you have a lot of uploads, you can end up with a performance issue if the uploads are synchronous with the Rails thread: the Rails process is busy uploading and can't do anything else until the upload is done.
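One common workaround is to hand the S3 transfer off to a background worker so the Rails process is freed as soon as the local write finishes. A minimal sketch using delayed_job and the aws-s3 gem (bucket name and paths are placeholders):

    require 'aws/s3'

    # Job object: performs the S3 upload outside the request cycle.
    class S3UploadJob < Struct.new(:local_path, :key)
      def perform
        AWS::S3::S3Object.store(key, File.open(local_path), 'mybucket')
      end
    end

    # In the controller, after stashing the file in a temp location:
    Delayed::Job.enqueue(S3UploadJob.new('/tmp/upload-123.png', 'images/upload-123.png'))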
HTH,
Larry
Imagine the following use case:
You have a Basecamp-style application hosting files with S3. Accounts all have their own files, but they are stored together on S3.
How, therefore, would a developer go about securing the files so users of account 1 couldn't somehow get to the files of account 2?
We're talking Rails if that's a help.
S3 supports signed, time-expiring URLs, which means you can give a user a URL that effectively lets only people with that link view the file, and only within a certain time period from issue.
http://www.miracletutorials.com/s3-amazon-expiring-urls/
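A minimal sketch with the aws-sdk-s3 gem (bucket, key, and region are placeholders); your app checks that the file belongs to the requesting account before handing out the link:

    require 'aws-sdk-s3'

    s3 = Aws::S3::Resource.new(region: 'us-east-1')
    object = s3.bucket('myapp-files').object('account-1/report.pdf')

    # Anyone holding this URL can GET the object, but only for 10 minutes.
    url = object.presigned_url(:get, expires_in: 600)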
If you want to restrict control of those remote resources you could proxy the files through your app. For something like S3 this may defeat the purpose of what you are trying to do, but it would still allow you to keep the data with amazon and restrict access.
You should be careful with an approach like this, as it could cause your Ruby thread to block while it is proxying the file, which could become a real problem for the application.
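A sketch of the proxying approach with aws-sdk-s3 (model and attribute names are hypothetical); note that the whole object body passes through the Rails process, which is exactly where the blocking comes from:

    class FilesController < ApplicationController
      def show
        # Scoping the lookup to the account enforces ownership.
        file = current_account.files.find(params[:id])
        s3_object = Aws::S3::Resource.new.bucket('myapp-files').object(file.s3_key)
        # The entire object is read into memory before the response is sent.
        send_data s3_object.get.body.read,
                  filename: file.name,
                  type: file.content_type
      end
    end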
Serve the files using an EC2 Instance
If you set your S3 bucket to private and then start up an EC2 instance, you could serve your S3 files via EC2, using the EC2 instance to verify permissions based on your application's rules. Because there is no charge for data transfer between EC2 and S3 (within the same region), you don't have to double up your bandwidth costs at Amazon.
I haven't tackled this exact issue. But that doesn't stop me from having an opinion :)
Check out cancan:
http://github.com/ryanb/cancan
http://railscasts.com/episodes/192-authorization-with-cancan
It allows custom authorization schemes, without too much hassle.
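For this use case, an Ability class could scope file access to the owning account; a minimal sketch (model and attribute names are hypothetical):

    class Ability
      include CanCan::Ability

      def initialize(user)
        # Users may only read files that belong to their own account.
        can :read, AccountFile, :account_id => user.account_id
      end
    end

Note that cancan only guards your Rails actions; you would still pair it with signed or proxied URLs as described above so the S3 links themselves can't be shared around.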