So I set up Paperclip to work with S3 to store images on upload. That is working fine.
I then went to add CloudFront for assets (with the line below)
config.action_controller.asset_host = ENV['CLOUDFRONT_ENDPOINT']
and built the assets. Everything seems to compile correctly, and when I load the page the links are there:
<link rel="stylesheet" media="all" href="http://d2j2dcfn0tfw0d.cloudfront.net/assets/application-ef64d41d2d57abb59ffe5bd71a4f727580ef276a6440e70210cf8d0ab22a6dc2.css" />
<script src="http://d2j2dcfn0tfw0d.cloudfront.net/assets/application-8cd15647254a9c6f940c58bcae0567e6ca66943b8a7576ce87ec903bd19f9937.js"></script>
but when I go to that link I get this XML error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>
assets/application-ef64d41d2d57abb59ffe5bd71a4f727580ef276a6440e70210cf8d0ab22a6dc2.css
</Key>
<RequestId>374DF77BF548DE75</RequestId>
<HostId>
TqrV7id3elsBjugWNkUObG259mU6Vk8MhxcXjrre1qv+XvxGBERDjWoW50iiCyp4
</HostId>
</Error>
I looked in my S3 bucket and the file isn't there either.
All my CloudFront settings were default except the origin, which is my S3 bucket.
For CloudFront to fetch your assets from S3, you need to copy the assets to S3 yourself. A popular choice for this is the asset_sync gem, which will do this as part of your deploys (see the sketch below).
Another option is to let CloudFront fetch the assets from your server instead; this requires adding a new origin and behaviour to the CloudFront distribution.
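For the asset_sync route, here is a minimal sketch of an initializer (assuming the asset_sync and fog-aws gems are in your Gemfile and the usual FOG_*/AWS_* environment variables are set; adjust names to your setup):

# config/initializers/asset_sync.rb -- sketch only; variable names are assumptions
AssetSync.configure do |config|
  config.fog_provider = 'AWS'
  config.fog_directory = ENV['FOG_DIRECTORY']            # your S3 bucket name
  config.fog_region = ENV['FOG_REGION']
  config.aws_access_key_id = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end

With that in place, asset_sync uploads the compiled files to the bucket as part of rake assets:precompile, so the fingerprinted paths CloudFront requests actually exist in S3.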
I am trying to serve apple-app-site-association from S3 through a CloudFront distribution on my custom domain.
But when I hit a path like the one below, the file starts downloading rather than displaying in the browser.
https://mycustomdomain.com/.well-known/apple-app-site-association
Do I need to change any setting at the S3 or CloudFront level to make this work?
Note: The application is developed in Angular.
Thanks
As Christos says, you need to set the Content-Type response header on the object in S3, which then also applies to the CloudFront HTTPS URLs.
Here is an example of mine, that I use for deep linking and OpenID Connect with an HTTPS redirect URI:
https://mobile.authsamples.com/.well-known/apple-app-site-association
There are further details on how this looks in my blog post, where you set the content type by editing the file properties.
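If you prefer to set the header from code rather than the console, here is a hedged sketch using the aws-sdk-s3 gem (the bucket name, region and content type below are assumptions; adjust them to your file):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')
# Copy the object onto itself with REPLACE so the new Content-Type is stored.
s3.copy_object(
  bucket: 'my-bucket',
  key: '.well-known/apple-app-site-association',
  copy_source: 'my-bucket/.well-known/apple-app-site-association',
  content_type: 'application/json',
  metadata_directive: 'REPLACE'
)

After that, invalidate the path in CloudFront so the cached copy picks up the new header.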
This is about a Rails app on Heroku that runs behind CloudFront and serves ActiveStorage images from the Bucketeer add-on.
The cache config in both the Rails app itself and CloudFront is right on target for CSS, JS, and even key, important requests (like search results, third-party info fetched from APIs, etc.).
What I can't figure out how to cache are the images that come from the Bucketeer add-on.
Right now the images seem to come from the Bucketeer bucket every time. They show up with no Cache TTL.
I'd like for them to be cached for up to a year both at the CloudFront level and the visitor's browser level.
Is this possible?
It seems like the Bucketeer add-on itself gives us no control over how the bucket and/or the service handles caching.
How can I force these files to be served with caching instructions?
Thanks for sharing your findings here
Additionally, I found that S3Service accepts upload options
https://github.com/rails/rails/blob/6-0-stable/activestorage/lib/active_storage/service/s3_service.rb#L12
So you can add the following to your storage.yml:
s3:
  service: S3
  access_key_id: ID
  secret_access_key: KEY
  region: REGION
  bucket: BUCKET
  upload:
    cache_control: 'public, max-age=31536000'
For a full list of available options, refer to the AWS SDK documentation.
After a lot of searching, I learned that Bucketeer does give you control over the bucket. You just have to use the AWS CLI.
Here is the link to AWS docs on CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
And here is the link where Bucketeer tells you how to get started with that on their service:
https://devcenter.heroku.com/articles/bucketeer#using-with-the-aws-cli
This means you can install the AWS CLI, run aws configure with the credentials Bucketeer provides, and then go on to change cache-control on the bucket's objects directly.
AWS does not seem to have a feature for setting cache-control defaults for an entire bucket or folder, so you actually have to set it on each object.
In my case, all of my files/objects in the bucket are images that I display on the website and need to cache, so it's safe to run a command that does it all at once.
Such a command can be found in this answer:
How to set expires headers to all images in a bucket in Amazon S3
For me, it looked like this:
aws s3 cp s3://my-bucket-name s3://my-bucket-name --recursive --acl public-read --metadata-directive REPLACE --cache-control max-age=43200000
The command basically copies the entire bucket onto itself while adding the cache-control max-age=43200000 header to each object in the process.
This works for all existing files, but it will not affect future changes or additions. You'd have to run it again every so often to catch new objects, and/or write code that sets the headers when saving each object to the bucket. Apparently there are people who have had luck with this. Not me.
Thankfully, I found this post:
https://www.neontsunami.com/posts/caching-variants-with-activestorage
This monkey-patch basically changes ActiveStorage::RepresentationsController#show to use Rails action caching for variants. Take a look. If you're having similar issues, it's worth the read.
There are drawbacks. For my case, they were not a problem, so this is the solution I went with.
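For orientation, here is a loose sketch of the general shape of such a patch (my paraphrase, not the linked post verbatim; it assumes Rails 6 ActiveStorage internals and the actionpack-action_caching gem):

# config/initializers/active_storage_variant_caching.rb -- sketch only
Rails.application.config.to_prepare do
  ActiveStorage::RepresentationsController.class_eval do
    # Cache the rendered variant response instead of redirecting to a
    # short-lived signed service URL on every request.
    caches_action :show, expires_in: 1.year

    def show
      expires_in 1.year, public: true   # browser/CloudFront caching
      variant = @blob.representation(params[:variation_key]).processed
      send_data variant.service.download(variant.key),
                type: @blob.content_type, disposition: 'inline'
    end
  end
end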
I am creating a PDF which contains images which are stored on Amazon S3.
My Rails application uses HTTPS, so the URL to the S3 image is also HTTPS, which is configured in production.rb:
config.paperclip_defaults = {
  :storage => :s3,
  :s3_protocol => :https
}
The issue is that the S3 bucket has a security bucket policy that only serves the image when the request comes from my web domain. This works well when showing the image in a view, because the Referer is then my web domain, which is whitelisted.
The issue when creating the PDF is that wicked_pdf tries to retrieve the image, but S3 can't see that the request is coming from my web domain and returns a 403 Forbidden. What can I do to solve this?
Since you've tagged your question with wicked-pdf I assume that's what you're using. It looks like this is a known problem with some versions of that gem. The linked question gives several options to solve it.
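One workaround I've seen suggested (a hypothetical sketch, not necessarily one of the linked options): fetch the image server-side, where you control the Referer header, and embed it in the PDF template as a data URI, so wkhtmltopdf never has to request S3 itself. The helper name, domain and content type below are placeholders.

require 'base64'
require 'open-uri'

# Hypothetical view helper: returns an <img> tag with the image inlined,
# sending our own Referer so the S3 bucket policy accepts the request.
def s3_image_data_uri_tag(url, content_type: 'image/png')
  data = URI.open(url, 'Referer' => 'https://www.example.com').read
  %(<img src="data:#{content_type};base64,#{Base64.strict_encode64(data)}" />).html_safe
end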
Is your CORS configured in AWS? https://aws.amazon.com/blogs/aws/amazon-s3-cross-origin-resource-sharing/
Note: This problem is related to Firefox being unable to download fonts from cross-domain servers.
Source: Mozilla
In Gecko, web fonts are subject to the same domain restriction (font files must be on the same domain as the page using them), unless HTTP access controls are used to relax this restriction. Note: Because there are no defined MIME types for TrueType, OpenType, and WOFF fonts, the MIME type of the file specified is not considered.
I have a Rails application where assets are fetched from Amazon CloudFront. CloudFront, in turn, fetches the assets from an S3 bucket. I defined the asset_host in production.rb so that assets are fetched from the CloudFront URL.
config.action_controller.asset_host = "//d582x5cj7qfaa.cloudfront.net"
Because I am serving assets from a different domain (i.e. CloudFront), Firefox refuses to download the fonts. I came across this and applied the same logic as given in the answer, but I was still unable to download the fonts. The reason is that, since I have multiple domains (e.g. abc.almaconnect.com, xyz.almaconnect.com) that use the same CloudFront URL to access fonts, CloudFront caches the response headers sent for the first request and returns those same headers even when a different domain makes the request the next time.
Then I came across this link, which solves my problem with the cached headers: if I pass a query string along with the URL, it should work correctly.
My question is: how do I append the query string to the URL dynamically? Since the same application is used by multiple domains, I need a per-request way to append the query string. How do I do that?
For reference:
Here is the CORS configuration that I used on the Amazon S3 bucket, which CloudFront uses to serve the assets:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*.almaconnect.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Content-*</AllowedHeader>
    <AllowedHeader>Host</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
This is how I came to know what happens with and without appending the query string to the URL:
curl -i -H "Origin: https://niet.almaconnect.com" https://d582x5cj7qfaa.cloudfront.net/assets/AlmaConnect-Icons-d059ef02df0837a693877324e7fe1e84.ttf?https_niet.almaconnect.com
When I used the same curl request with different origins, without appending the query string, the returned response contained the origin header that was cached from the first request.
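For illustration, here is a hedged sketch of one way to append the query string per request (the helper name is made up, and CloudFront must also be configured to forward query strings so they become part of the cache key). Note that this only helps where the font URL is emitted per request, e.g. from an inline @font-face block in a layout, rather than from precompiled CSS:

# Hypothetical helper -- append the requesting host so each domain gets
# its own CloudFront cache entry (and therefore its own CORS headers).
module FontAssetHelper
  def cors_font_url(source)
    "#{asset_url(source)}?#{request.host}"
  end
end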
I have a – hopefully small – problem.
I am using Ruby on Rails and Paperclip to handle file uploads.
Now I want to automatically set the Content-Disposition header to "attachment" so that when the user clicks a link, the file is downloaded instead of shown directly in the browser.
I found the following solution for Amazon S3:
Download file on click - Ruby on Rails
But I don't use S3.
Can anybody help?
Thanks in advance,
/Lasse
If you use File Storage, Paperclip stores the files within the RAILS_ROOT/public/system folder (configurable using the :path option).
Files from the /public folder are served directly as static files. "Rails/Rack never sees requests to your public folder" (to quote cwninja).
The files from the /public folder are served by the web server running the app (for example Apache, or WEBrick in development), and the web server is responsible for setting the headers when serving the file. So you should configure the web server to set the correct headers for your attachment.
Another option is to build a controller or some Rack middleware to serve your Paperclip attachments. There you can do something like response.headers['Content-Disposition'] = 'attachment' (see the sketch below).
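For example, here is a minimal sketch of such a controller (the model and attachment names are placeholders; this assumes Paperclip's default File storage):

class DownloadsController < ApplicationController
  def show
    document = Document.find(params[:id])   # model with has_attached_file :file
    send_file document.file.path,
              filename: document.file_file_name,
              type: document.file_content_type,
              disposition: 'attachment'
  end
end

Wire it up with a route such as get 'downloads/:id' => 'downloads#show' and link to that instead of the raw /system path.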
A third option is to use S3; then you can store headers (like Content-Disposition) on the S3 object, and S3 will serve the Paperclip attachment using those headers.
According to this link, you can do the following:
<Files *.xls>
  ForceType application/octet-stream
  Header set Content-Disposition attachment
</Files>

<Files *.eps>
  ForceType application/octet-stream
  Header set Content-Disposition attachment
</Files>