I'm storing images in Amazon S3 using Fog and CarrierWave. It returns a URL like bucket.s3.amazonaws.com/my_image.jpg.
DNS entries have been set up so that images.mysite.com points to bucket.s3.amazonaws.com.
I want to adjust my views and APIs to use the images.mysite.com/my_image.jpg URL. CarrierWave, however, only spits out the Amazon-based one. Is there a simple way to tell CarrierWave and/or Fog to use a different host than normal for the URLs it generates? If not, how would I modify the uploader to spit it out?
It turns out that, as of June 6th, 2012, Amazon AWS does not support custom SSL certs, which makes this a moot point.
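For the original question (custom host, SSL aside): recent CarrierWave versions expose an asset_host setting that overrides the host used when building URLs for stored files. A minimal sketch, assuming the CNAME above and credentials in environment variables:

```ruby
# config/initializers/carrierwave.rb
# Sketch: asset_host makes CarrierWave generate images.mysite.com URLs
# instead of the default bucket.s3.amazonaws.com ones.
CarrierWave.configure do |config|
  config.fog_credentials = {
    provider:              "AWS",
    aws_access_key_id:     ENV["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key: ENV["AWS_SECRET_ACCESS_KEY"]
  }
  config.fog_directory = "bucket"                    # your bucket name
  config.asset_host    = "http://images.mysite.com"  # the CNAME host
end
```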
Related
I am uploading images via CarrierWave in my Rails 4 app to an AWS S3 bucket. I also have CloudFront set up, which currently serves all of my static assets (excluding public uploads).
How do I serve uploaded images via CloudFront instead of S3, even though they are stored in an S3 bucket? I have found tutorials like this, but since I already have a CloudFront distribution running, I was wondering whether I should add another one for my public image uploads, or whether there is a way to add it to my current distribution.
You can add the bucket as an additional custom origin to your existing CloudFront distribution.
You can then use path patterns to determine which prefixes (e.g. /images/uploads/*) should route to the alternate origin.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesPathPattern
Since creating distributions doesn't cost anything except a few minutes of your time while you wait for the distribution to become globally available, I'd suggest creating a new distribution for experimentation before adding this to your production distribution... but this is definitely doable.
I plan to use S3 + CloudFront for hosting static images for an e-commerce web app. Each user-uploaded image can be used for several endpoints.
Example user actions in the back office:
Uploads image flower.jpg in his media library
Creates a flower product with id 1
Creates another flower product with id 2
Assigns image flower.jpg to illustrate both products
I was thinking about a convention-over-configuration mechanism such as:
Uploaded images have a unique name, like flower.jpg in this case
When an image is used to illustrate an item, use a convention like: point p1.jpg and p2.jpg to flower.jpg, the same way symlinks work
All three following URLs would return the same file :
http://aws-s3/my_app/flower.jpg
http://aws-s3/my_app/p1.jpg
http://aws-s3/my_app/p2.jpg
Can I do that with AWS ?
I did not find any such thing in the API docs, except for temporary public URLs, which come with two no-gos: first, they expire, and second, I cannot choose the URL
Can I do that with another CDN ?
Thanks.
I believe that to accomplish such a thing your best bet is going to be to use EC2 (pricing) with S3.
My reasoning is that S3, as you say, doesn't allow for redirect URLs. To accomplish what you want, you would need to actually upload the file to each place, which would greatly increase your costs.
However, what you can do is use EC2 as a webserver. I'll leave it up to you to decide on your configuration, but you can do anything on EC2 you could do on any server - like set up redirects.
For reference, here's a good tutorial on setting up Apache on Ubuntu Server, and here's one on setting up Apache redirects.
I think you can now use S3, Route 53 and CloudFront to achieve the same thing. The question has aged, but I thought this may be useful for someone looking now.
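As a sketch of that newer approach: with static website hosting enabled on the bucket, individual S3 objects can carry a website redirect, so p1.jpg can point at flower.jpg much like a symlink. A minimal sketch using the aws-sdk-s3 gem (bucket name, keys, and the placeholder body are illustrative; stub_responses keeps this offline):

```ruby
require "aws-sdk-s3"

# stub_responses: true avoids real network calls in this sketch
s3 = Aws::S3::Client.new(region: "us-east-1", stub_responses: true)

# Upload the real image once...
s3.put_object(bucket: "my_app", key: "flower.jpg", body: "raw image bytes")

# ...then create "symlink" objects that redirect to it. The redirect
# header is honored only when objects are served through the bucket's
# website endpoint, not the plain REST endpoint.
%w[p1.jpg p2.jpg].each do |alias_key|
  s3.put_object(
    bucket: "my_app",
    key: alias_key,
    website_redirect_location: "/flower.jpg"
  )
end
```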
I am having some problems integrating Paperclip with a non-US S3 server. Paperclip seems to assume that the S3 server is in the US and returns a URL like http://s3.amazonaws.com/path/to/my/file.
My question is: how do I change it to point to a non-US S3 server (Singapore, for example)? The files are uploading fine; I just need Paperclip to return a correct path.
Using:
paperclip-2.4.5
aws-s3-0.6.2
Tian Wei, check this out:
http://techspry.com/ruby_and_rails/amazons-s3-european-buckets-and-paperclip-in-rails-3/
But a better answer might be to just use the US one and avoid the hassle.
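For reference, later Paperclip versions (3.x+, on the aws-sdk gem rather than aws-s3) expose an :s3_host_name option for exactly this. A sketch assuming the Singapore (ap-southeast-1) endpoint and a credentials file at config/s3.yml:

```ruby
# Sketch for newer Paperclip; the model and attachment names are illustrative.
class Photo < ActiveRecord::Base
  has_attached_file :image,
    storage: :s3,
    s3_credentials: "#{Rails.root}/config/s3.yml",
    # Generate URLs against the Singapore regional endpoint
    s3_host_name: "s3-ap-southeast-1.amazonaws.com"
end
```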
Server side is Rails.
Client side is Flash, users will upload directly to S3
I need a flexible way to generate S3 policy files, base64 encode them, and then distribute the resulting signed policy to the client.
Is there a good library/gem for this, or do I need to roll my own?
I'll be using paperclip to store the file, as per:
http://www.railstoolkit.com/posts/fancyupload-amazon-s3-uploader-with-paperclip
I've had a look at:
https://github.com/geemus/fog
https://github.com/jnicklas/carrierwave
https://github.com/marcel/aws-s3
These look like they'll help me get bits done, but I can't tell if they'll help me generate flexible policies.
EDIT: Going to give the "Generate an upload signature..." bit here a shot:
http://www.kiakroas.com/blog/44/
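For what it's worth, the signed policy itself needs nothing beyond the Ruby standard library: build the policy JSON, Base64-encode it, and HMAC-SHA1 sign the encoded string with your AWS secret key. A minimal sketch of that signing step (bucket name, key prefix, and size limit are illustrative):

```ruby
require "base64"
require "openssl"
require "json"
require "time"

# Builds the encoded policy and signature a Flash client needs to POST
# directly to S3. The conditions shown are examples; adjust to taste.
def s3_upload_policy(bucket:, key_prefix:, aws_secret_key:, expires_in: 3600)
  policy = {
    "expiration" => (Time.now.utc + expires_in).iso8601,
    "conditions" => [
      { "bucket" => bucket },
      ["starts-with", "$key", key_prefix],
      { "acl" => "private" },
      ["content-length-range", 0, 10 * 1024 * 1024]  # max 10 MB
    ]
  }
  encoded = Base64.strict_encode64(JSON.generate(policy))
  signature = Base64.strict_encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("sha1"), aws_secret_key, encoded)
  )
  { policy: encoded, signature: signature }
end
```

The client then includes the policy and signature (plus your access key id, never the secret) as fields in its multipart POST to the bucket.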
Here is a sample project for how to do this using Rails 3, Flash/Silverlight/GoogleGears/BrowserPlus and jQuery-based Plupload to upload directly to S3: https://github.com/iwasrobbed/Rails3-S3-Uploader-Plupload
Can one of the Amazon services (their S3 data service, or otherwise) be used to offload server of static files for a Ruby on Rails app, but still support the app's authentication & authorization?
That is, once the user's browser has downloaded the initial HTML for one page of the Ruby on Rails application, when it goes back for static content (e.g. an image or CSS file), that request would be:
(a) routed directly to the Amazon service (no RoR cycles used to serve it, or bandwidth), BUT
(b) the browser request for this item (e.g. an image) would still have to go through an authentication/authorization layer based on the user model in the Ruby on Rails application - in other words to ensure not just anyone could get the image...
thanks
The answer is a yes, with a but. You can use a feature of S3 that allows you to create links to private S3 objects with a short time to live; the default is 5 minutes. This will work for any S3 object that is uploaded as private, and means the browser will only have that many seconds to request the file from S3. Example code from the docs for the AWS gem:
S3Object.url_for('beluga_baby.jpg', 'marcel_molina')
You can also specify an expires_in or expires option per file. The downside is that you would need to create helpers for your stylesheet, image, and JS links to build the proper S3 URLs.
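Such a helper is also easy to build by hand, since the query-string authentication url_for uses is just an HMAC-SHA1 over a short string. A standard-library-only sketch (the helper name is mine; the bucket and key echo the example above):

```ruby
require "base64"
require "openssl"
require "cgi"

# Builds a time-limited URL for a private S3 object using S3's legacy
# query-string authentication, equivalent to what S3Object.url_for returns.
def expiring_s3_url(bucket, key, access_key, secret_key, expires_in = 300)
  expires = Time.now.to_i + expires_in
  string_to_sign = "GET\n\n\n#{expires}\n/#{bucket}/#{key}"
  signature = Base64.strict_encode64(
    OpenSSL::HMAC.digest(OpenSSL::Digest.new("sha1"), secret_key, string_to_sign)
  )
  "https://#{bucket}.s3.amazonaws.com/#{key}" \
    "?AWSAccessKeyId=#{access_key}&Expires=#{expires}" \
    "&Signature=#{CGI.escape(signature)}"
end
```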
I would recommend that you set up a domain name for your S3 bucket, like "examples3.amazonaws.com", put all your standard image files and CSS there as public, and set that as the asset host in your Rails config. Then use the secure links only for static files that really need them.