append query string while requesting for fonts in rails - ruby-on-rails

Note: This problem relates to Firefox being unable to download fonts from cross-domain servers.
Source: Mozilla
In Gecko, web fonts are subject to the same domain restriction (font files must be on the same domain as the page using them), unless HTTP access controls are used to relax this restriction. Note: Because there are no defined MIME types for TrueType, OpenType, and WOFF fonts, the MIME type of the file specified is not considered.
I have a Rails application where assets are fetched from Amazon CloudFront. CloudFront, in turn, fetches the assets from an S3 bucket. I defined the asset_host in production.rb so that assets are fetched from the CloudFront URL.
config.action_controller.asset_host = "//d582x5cj7qfaa.cloudfront.net"
Because I am serving assets from a different domain (CloudFront), I am unable to download fonts in Firefox. I came across this and applied the same logic as given in the answer, but I was still unable to download the fonts. The reason is that I have multiple domains (e.g. abc.almaconnect.com, xyz.almaconnect.com) that use the same CloudFront URL to fetch fonts, and CloudFront caches the response headers sent for the first request, returning those same headers even when a different domain makes the next request.
Then I came across this link, which solves my problem with cached headers: if I pass a query string along with the URL, it works correctly.
My question is: how do I append the query string to the URL dynamically? Since the same application is used by multiple domains, I need a run-time way to append the query string. How do I do that?
For reference:
Here is the CORS configuration that I used in amazon s3 bucket, which was used by cloudfront url to serve the assets:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*.almaconnect.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Content-*</AllowedHeader>
    <AllowedHeader>Host</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
This is how I came to know what happens with and without appending query_string to the url:
curl -i -H "Origin: https://niet.almaconnect.com" https://d582x5cj7qfaa.cloudfront.net/assets/AlmaConnect-Icons-d059ef02df0837a693877324e7fe1e84.ttf?https_niet.almaconnect.com
When I used the same curl request with different origins, without appending the query_string, the returned response contained the origin header from the first request.
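One way to append the query string dynamically (a sketch; the helper name and URL format are my assumptions, not from the thread) is a small helper that adds the requesting host as the query string, mirroring the curl test above:

```ruby
# Hypothetical helper: append the requesting host as a query string so
# CloudFront keys its cache per domain and returns the CORS headers
# that match the current origin rather than the first cached one.
def font_url_with_origin(asset_host, asset_path, request_host)
  "#{asset_host}#{asset_path}?#{request_host}"
end

# In a Rails view this might be called as, e.g.:
#   font_url_with_origin("//d582x5cj7qfaa.cloudfront.net",
#                        asset_path("AlmaConnect-Icons.ttf"),
#                        request.host)
```

Because the query string varies with `request.host`, each domain gets its own CloudFront cache entry, which sidesteps the stale-header problem.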

Related

Apple's apple-app-site-association file on S3 with CloudFront Distribution

I am trying to serve apple-app-site-association from S3 via a CloudFront distribution on my custom domain.
But when I visit a path like the one below, the file starts downloading rather than showing in the browser.
https://mycustomdomain.com/.well-known/apple-app-site-association
Do I need to change any settings at the S3 or CloudFront level to make this work?
Note: The application is developed in Angular.
Thanks
As Christos says, you need to set the Content-Type response header in S3, which then also applies to CloudFront HTTPS URLs.
Here is an example of mine, that I use for deep linking and OpenID Connect with an HTTPS redirect URI:
https://mobile.authsamples.com/.well-known/apple-app-site-association
Further details on how this looks are in my blog post, where the content type is set by editing the file properties.
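For instance (a sketch; the bucket name and JSON body are placeholders), with the aws-sdk-s3 gem the object can be re-uploaded with an explicit Content-Type:

```ruby
# Parameters for S3's put_object call; content_type is the important
# part: application/json makes browsers render the file instead of
# downloading it. (Bucket name and body are placeholders.)
params = {
  bucket: 'my-bucket',
  key: '.well-known/apple-app-site-association',
  body: '{"applinks":{"apps":[],"details":[]}}',
  content_type: 'application/json'
}
# With the aws-sdk-s3 gem this would be sent as:
#   Aws::S3::Client.new.put_object(**params)
# followed by a CloudFront invalidation of the path so the old
# response is not served from the edge cache.
```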

large file upload via Zuul

I'm trying to upload a large file through Zuul.
Basically I have the applications set up like this:
UI: this is where the Zuul Gateway is located
Backend: this is where the file must finally arrive.
I used the functionality described here, so everything works fine if I use "Transfer-Encoding: chunked". However, this header can only be set via curl; I haven't found any way to set it from the browser (it is rejected with the console error "Refused to set unsafe header ...").
Any idea how to get the browser to set this header?
It seems there are actually two possible ways to upload large files via Zuul:
By using "Transfer-Encoding: chunked" in the header (but, as mentioned in the initial question, this cannot be used from a browser because the header is considered unsafe)
By bypassing the DispatcherServlet used by Zuul (adding the /zuul prefix in front of the usual path, e.g. /zuul/api/upload instead of /api/upload)
I found the documentation not very clear on this point (that you can use either of the two options). In my case, since the file was being uploaded via AngularJS (hence from the browser), I had to use the second approach.

Rails AWS assets on Cloudfront & s3

So I set up Paperclip to work with S3 to store images on upload. That is working fine.
I then added CloudFront for assets (with the code below)
config.action_controller.asset_host = ENV['CLOUDFRONT_ENDPOINT']
and precompiled the assets. Everything seems to build correctly, and when I go to the page the links are there:
<link rel="stylesheet" media="all" href="http://d2j2dcfn0tfw0d.cloudfront.net/assets/application-ef64d41d2d57abb59ffe5bd71a4f727580ef276a6440e70210cf8d0ab22a6dc2.css" />
<script src="http://d2j2dcfn0tfw0d.cloudfront.net/assets/application-8cd15647254a9c6f940c58bcae0567e6ca66943b8a7576ce87ec903bd19f9937.js"></script>
but when I go to that link I get this XML error:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>
assets/application-ef64d41d2d57abb59ffe5bd71a4f727580ef276a6440e70210cf8d0ab22a6dc2.css
</Key>
<RequestId>374DF77BF548DE75</RequestId>
<HostId>
TqrV7id3elsBjugWNkUObG259mU6Vk8MhxcXjrre1qv+XvxGBERDjWoW50iiCyp4
</HostId>
</Error>
I looked in my S3 bucket and the file isn't there either.
All my CloudFront settings were defaults except the origin, which is my S3 bucket.
For CloudFront to fetch your assets from S3, you need to copy the assets to S3. A popular choice for this is the asset_sync gem, which will do this as part of deploys.
Another option is to let CloudFront fetch the assets from your server; this requires adding a new origin and behavior to the CloudFront distribution.
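A minimal asset_sync initializer might look like this (a sketch; the environment variable names are assumptions):

```ruby
# config/initializers/asset_sync.rb - uploads precompiled assets to S3
# after rake assets:precompile, so CloudFront can find them at the
# same fingerprinted paths the page links to.
if defined?(AssetSync)
  AssetSync.configure do |config|
    config.fog_provider          = 'AWS'
    config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
    config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
    config.fog_directory         = ENV['FOG_DIRECTORY'] # the S3 bucket name
    config.fog_region            = ENV['FOG_REGION']
  end
end
```

After this runs as part of a deploy, the fingerprinted files exist in the bucket and the NoSuchKey error goes away.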

Setting Content-Disposition to attachment using Ruby on Rails and Paperclip

I have a – hopefully small – problem.
I am using Ruby on Rails and Paperclip to handle file uploads.
Now I want to automatically set the Content-Disposition header to "attachment" so that when the user clicks a link, the file is downloaded instead of shown directly in the browser.
I found the following solution for Amazon S3:
Download file on click - Ruby on Rails
But I don't use S3.
Can anybody help?
Thanks in advance,
/Lasse
If you use File Storage, Paperclip stores the files within the RAILS_ROOT/public/system folder (configurable using the :path option).
Files from the /public folder are served directly as static files. "Rails/Rack never sees requests to your public folder" (to quote cwninja).
The files in /public are served by the webserver running the app (for example Apache, or WEBrick in development), and the webserver is responsible for setting the headers when serving a file. So you should configure the webserver to set the correct headers for your attachments.
Another option is to build a controller or some Rack middleware to serve your paperclip attachments. There you can do something like response.headers['Content-Disposition'] = 'attachment'.
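The middleware option can be sketched like this (the /system path prefix is an assumption based on Paperclip's default storage location):

```ruby
# Rack middleware: force Content-Disposition: attachment for anything
# served from Paperclip's default /system path, so the browser
# downloads the file instead of displaying it.
class ForceAttachmentDownload
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    if env['PATH_INFO'].to_s.start_with?('/system/')
      headers['Content-Disposition'] = 'attachment'
    end
    [status, headers, body]
  end
end

# In a Rails app it would be registered in config/application.rb:
#   config.middleware.use ForceAttachmentDownload
```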
A third option is to use S3; then you can store headers (like Content-Disposition) on the S3 object, and S3 serves the Paperclip attachment with those headers.
According to this link, you can do the following:
<Files *.xls>
  ForceType application/octet-stream
  Header set Content-Disposition attachment
</Files>

<Files *.eps>
  ForceType application/octet-stream
  Header set Content-Disposition attachment
</Files>

Alternative to X-sendfile in Apache for sending file given a URL?

I'm writing a Rails application that serves files stored on a remote server to the end user.
In my case the files are stored on S3, but the user requests the file via the Rails application (hiding the actual URL). If the file were on my server's local file system, I could use the Apache X-Sendfile header to free up the Ruby process for other requests while Apache took over sending the file to the client. But in my case, where the file is not on the local file system but on S3, it seems I'm forced to download it temporarily inside Rails before sending it to the client.
Isn't there a way for Apache to serve a "remote" file to the client that is not actually on the server itself? I don't mind if Apache has to download the file for this to work, as long as I don't have to tie up the Ruby process while it's going on.
Any suggestions?
Thomas, I have similar requirements/issues and I think I can answer your problem. First (and I'm not 100% sure you care about this part), hiding the S3 URL is quite easy, as Amazon allows you to point CNAMEs at your bucket and use a custom URL instead of the Amazon one. To do that, you need to point your DNS at the correct Amazon URL. When I set mine up it was similar to this: files.domain.com points to files.domain.com.s3.amazonaws.com. Then you need to create the bucket with the name of your custom URL (files.domain.com in this example). How to call that URL differs depending on which gem you use, but a word of warning: the attachment_fu plugin I was using incorrectly sent me to files.domain.com/files.domain.com/name_of_file.... I couldn't find the setting to fix it, so a simple .sub on the S3 portion of the plugin fixed it.
On to your other questions: to execute some Rails code (like recording the hit in the db) before the download, you can simply do this:
def download
  file = File.find(...)  # look up the attachment record (elided in the original)
  # code to record 'hit' to database
  redirect_to S3Object.url_for(file.filename,
                               bucket,
                               :expires_in => 3.hours)
end
That code will still cause the file to be served by S3, but still gives you the ability to run some Ruby. (Of course the above code won't work as is; you will need to point it at the correct file and bucket, and my Amazon keys are saved in a config file. The above also uses the syntax of the AWS::S3 gem - http://amazon.rubyforge.org/.)
Second, the Content-Disposition: attachment issue is a bit trickier. Hopefully your situation is simpler than mine and the following solution works. Assuming the object 'file' (in this example) is the correct S3 object, you can set the disposition to attachment with
file.content_disposition = "attachment"
file.save
The above code can be executed after the file exists on the S3 server (unlike some other headers and permissions), which is nice, and it can also be done when you upload the file (the syntax depends on your plugin). I'm still trying to find a way to tell S3 to send it as an attachment only when requested (not every time); if you find one, please let me know your solution. I need to be able to sometimes download a file and other times embed it (an image, for example) in HTML. I'm not using the redirect mentioned above, but fortunately it seems that if you embed a file served with the Content-Disposition: attachment header (such as in an HTML image tag), the browser still displays the image normally (though I haven't tested that thoroughly across enough browsers to ship it in the wild).
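For the "only when requested" case, one avenue worth checking (this is my addition, not from the original answer) is S3's response-content-disposition query parameter, which signed GET URLs can carry to override the stored disposition per request:

```ruby
require 'uri'

# Append a response-content-disposition override to an S3 URL. On a
# signed request, S3 uses the override instead of the header stored on
# the object, so the same file can be displayed inline in one place
# and downloaded in another. (The signature itself would still be
# added by your S3 library when it builds the authenticated URL.)
def with_disposition(url, disposition)
  uri = URI(url)
  params = uri.query ? URI.decode_www_form(uri.query) : []
  params << ['response-content-disposition', disposition]
  uri.query = URI.encode_www_form(params)
  uri.to_s
end
```

The redirect action earlier in this answer could then send the browser to `with_disposition(signed_url, 'attachment')` for downloads and to the plain signed URL for inline embeds.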
Hope that helps! Good luck.
