I'm trying to figure out a way to change the asset host when a certain controller is accessed.
The controller is to be accessed strictly over the https protocol, so I need the asset host to switch to https as well. At the moment the asset host is set to a CNAME subdomain that points to S3 and has no SSL certificate associated with it. What I'm trying to achieve is to replace the current asset host with the https Amazon S3 URL. The only assets I'm worried about are the CSS and JS includes.
I was thinking of using a helper to strip the host from stylesheet_link_tag and javascript_include_tag and replace it with the https Amazon S3 URL, but that seems a bit hackish to me.
Or perhaps there is a way to change asset hosts when request.ssl? is true?
I'm using Rails 3.2.x.
Figured out a solution for my case.
I ended up using a Proc for config.action_controller.asset_host in my production environment file to check request.ssl? and respond accordingly. Here is the code:
config.action_controller.asset_host = Proc.new { |source, request = nil, *_|
  request && request.ssl? ? 'https://s3.amazonaws.com/my_bucket' : 'http://s3.my-domain.com'
}
'request' defaults to nil to accommodate the cases where asset_host is called from within asset files (CSS and JS, if you are using the asset helpers there). In that context no request exists, and if the request argument isn't made optional, the following error is thrown when assets are compiled:
This asset host cannot be computed without a request in scope. Remove the second argument to your asset_host Proc if you do not need the request, or make it optional.
The *_ is present due to a bug with optional arguments in Proc: http://bugs.ruby-lang.org/issues/5694
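With this in place the standard asset helpers pick the host per request. The output below is illustrative only (the bucket and CNAME are the ones from the Proc above):
stylesheet_link_tag 'application'     # host is https://s3.amazonaws.com/my_bucket over SSL
javascript_include_tag 'application'  # host is http://s3.my-domain.com otherwise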
Related
In our web application built in Rails we have several clients using the same application, each with different assets that are chosen depending on which subdomain is used.
To achieve this we swap out which folder is used on the CDN, like so:
config.action_controller.asset_host = Proc.new { |source, request|
  if request.subdomain.present?
    "http#{request.ssl? ? 's' : ''}://cdn.domain.com/#{request.subdomain}/"
  else
    "http#{request.ssl? ? 's' : ''}://#{request.host_with_port}/"
  end
}
Each time we create a new client we compile the assets manually, using a custom build tool that uses Sprockets to build the assets the same way Rails would, and then upload them to our CDN under a folder that matches the subdomain (a rough sketch of the idea is below). This allows us to have different sets of assets based purely on the subdomain.
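For illustration only, not the actual build tool: compiling a per-subdomain set of assets with Sprockets outside of Rails might look roughly like this. The paths, the subdomain value and the output directory are assumptions.
require 'sprockets'

app_root  = File.expand_path('../..', __FILE__)    # assumes this script lives one level below the app root
subdomain = 'client1'                              # hypothetical client subdomain

env = Sprockets::Environment.new(app_root)
env.append_path File.join(app_root, 'app/assets/javascripts')
env.append_path File.join(app_root, 'app/assets/stylesheets')

# Write fingerprinted files plus a manifest into tmp/cdn_build/<subdomain>,
# which is then uploaded to cdn.domain.com/<subdomain>/.
manifest = Sprockets::Manifest.new(env, File.join(app_root, 'tmp', 'cdn_build', subdomain))
manifest.compile('application.js', 'application.css')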
Now this works fine, except that when we update an asset the digest changes for that file, but Rails still tries to load the old digests: the sprockets manifest file in /public/assets (e.g. .sprockets-manifest-12345.json) is being loaded instead of the one on the CDN. Even though the asset host is different, Rails still reads the local manifest.
Rails, it seems, doesn't care about other manifest files: the manifest only maps logical filenames to their fingerprinted versions, so even when things like the host change it can normally still find the correct asset. It would seem Rails has been designed this way deliberately.
However we really need to get Rails to use the manifest file that is on the CDN itself rather than use the one in the public folder local to the application.
After reading the docs, it seems you can change the manifest location. We tried doing it by using the same logic as above for the manifest like so:
config.assets.manifest = Proc.new { |source, request|
  if request.subdomain.present?
    "http#{request.ssl? ? 's' : ''}://cdn.domain.com/#{request.subdomain}/"
  else
    "http#{request.ssl? ? 's' : ''}://#{request.host_with_port}/"
  end
}
But Rails/Sprockets is still using the local sprockets file... Any ideas why?
In my Rails application I am using a CDN. I have configured it by adding the CDN URL to
config.action_controller.asset_host = "http://cdn.mydomain.com"
in the production.rb file.
Now I am trying to serve certain pages, like Sign In and Sign Up, over https://.
But as the assets are served from the CDN, the https pages conflict with the CDN path.
My solution is to make the sign-in and sign-up pages not use the CDN assets and point to local assets instead.
Is my solution correct? If so, how do I restrict certain layout files from using the CDN asset path?
I would look at this response: Configure dynamic assets_host in Rails 3
What I think you would want to do is change asset_host to be dynamic based on whether your page is served over https or not. Something like:
config.action_controller.asset_host = Proc.new { |source, request|
  request.ssl? ? '/assets' : 'http://cdn.mydomain.com'
}
My syntax may be a little off as I'm typing it up on the fly but it should be close to what you need.
NOTE: The code request.try(:ssl?) always returns false even when I run the https version.
I am working on finding the solution and will post it once I find it.
Found the solution:
config.action_controller.asset_host = Proc.new do |*args|
  source, request = args
  if request.try(:ssl?)
    'https://mydomain.com'
  else
    'http://cdn.mydomain.com'
  end
end
I'm using CarrierWave for images and Amazon CloudFront as a CDN (without S3).
The issue is that something like #user.image_url returns the non-CDN URL, even though I've configured my assets accordingly:
# /config/environments/production.rb
config.action_controller.asset_host = Proc.new { |source, request|
  if ['jpg','jpeg','png','gif','bmp'].include?(source.split('.').last)
    if request.ssl?
      "https://ge95v2x8h9t3.cloudfront.net"
    else
      "http://cdn.domain.com"
    end
  end
}
How to make CarrierWave use my asset_host proc just like other assets?
You can configure CarrierWave to use a custom asset host (config.fog_host... documented in the README). Although not documented, you can also use a Proc, or anything responding to :call, to determine the string at runtime:
https://github.com/jnicklas/carrierwave/blob/master/lib/carrierwave/storage/fog.rb#L107
I'm not sure of a way to just point config.fog_host directly at Rails' config.asset_host, but I'm sure there must be a way to get a reference to it, even if you have to use a non-public interface. Though, I don't know how helpful that would be during development... you likely want assets served from localhost and uploads served from CloudFront.
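As a minimal sketch, assuming fog-based storage and reusing the CloudFront host from the question; since the argument CarrierWave passes to the callable may differ between versions, it is simply ignored here:
# config/initializers/carrierwave.rb
CarrierWave.configure do |config|
  # Anything responding to :call works; ignore whatever argument CarrierWave passes in.
  config.fog_host = lambda { |*| 'https://ge95v2x8h9t3.cloudfront.net' }
end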
For one of my models I have a method:
def download_url
  url = xxxxx
end
which works nicely to make /xxxx/xxxx/3
What I want to do is update this to include an absolute URL so I can use the method in an email:
https://example.com/xxxx/xxxx/3
But I don't want to hard-code it. I want it to be an environment variable so it works in both development and production.
Emails are effectively views, and can use helpers. The model shouldn't really have any knowledge of the views; instead, you should use url_for or one of its descendant methods in the email view template to generate the URL. Those helpers can generate absolute URLs based on where the application is running (and its configuration: you'll want to set config.action_mailer.default_url_options[:host] in your environment file) without having to mess with environment variables and the like.
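For example, a minimal sketch (the host is a placeholder, and download_url here stands in for whatever route helper maps to the model's /xxxx/xxxx/:id path):
# config/environments/production.rb
config.action_mailer.default_url_options = { :host => 'example.com', :protocol => 'https' }
Then, in the mailer view, url_for(@record) or the route helper (download_url(@record) in this sketch) generates the absolute https://example.com/... URL without the model knowing anything about hosts.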
I would define the domain as a constant in development.rb & production.rb:
APP_DOMAIN = "https://mysite.com"
And then just use this constant in your method within the model:
def download_url
  "#{APP_DOMAIN}/download/#{id}"
end
It may be ugly, but it's necessary. Rails apps don't and shouldn't know their root URL. That's a job for the web server. But, hardcoding sucks...
If you're using Capistrano or some other deployment tool, you can define the server host in a variable and write it out to a file that the app can read.
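A minimal sketch of the reading side, replacing the per-environment constant above; the config/app_domain.yml file and its app_domain key are assumptions, written out by the deploy tool on each deploy:
# config/initializers/app_domain.rb
require 'yaml'

# Fall back to a sensible default if the deploy hasn't written the file (e.g. in development).
domain_file = Rails.root.join('config', 'app_domain.yml')
APP_DOMAIN  = File.exist?(domain_file) ? YAML.load_file(domain_file)['app_domain'] : 'http://localhost:3000'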
I have uploaded a file to S3 using Paperclip, and the file upload process works fine.
Now I want to download it. In my model I have set :s3_host_alias. The file is private, so if I try to fetch it using Paperclip's url method, I get an access denied error,
and if I use the S3Object.url_for method, the URL returned is s3.amazonaws.com/mybucket/path_of_file.
I don't want s3.amazonaws.com to be shown in the URL, which is why I used :s3_host_alias in my model
and created a CNAME in my DNS server. Now if I use #object.url directly it gives the correct URL but throws an access denied error, because I guess the access key and signature are not passed.
Is there a way to fetch a private file from S3 with Paperclip using the canonical (CNAME) URL?
I don't use Paperclip, but yes, you can sign an S3 request using a virtual hostname.
I had this problem using Paperclip and the AWS::S3 gem. Paperclip set up everything fine for non-authenticated requests. But falling back to AWS::S3 to generate an authenticated URL didn't use the S3 host alias.
You can pass AWS::S3 a server option on connect, but I didn't need or want a connection just to get the URL. I also couldn't see a way to set it via configuration (so it would apply outside of a connection). Even glancing at the source, it looks like it's non-configurable.
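For reference, the per-connection option mentioned above looks roughly like this (credentials and host are placeholders):
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'YOUR_ACCESS_KEY',
  :secret_access_key => 'YOUR_SECRET_KEY',
  :server            => 's3.my-domain.com'    # the CNAME / host alias
)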
So, I created a monkey patch. My Ruby-fu (and maybe my OO-fu) isn't super high, so there may be a better way to do this, but it works for what I need. Basically, I pass url_for an :s3_host_alias param in the options hash, and the monkey patch uses it if it's present; in that case it also removes the bucket from the generated path.
So....
You can create this one-line file, RAILS_ROOT/config/initializers/load_patches.rb, to load all patches in RAILS_ROOT/lib/patches:
Dir[File.join(Rails.root, 'lib', 'patches', '**', '*.rb')].sort.each { |patch| require(patch) }
Then create the file RAILS_ROOT/lib/patches/aws.rb with this code:
http://pastie.org/1622881
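In case the pastie disappears, here is a rough sketch of the kind of patch described above, not the exact code at the link; it assumes the old aws-s3 gem is already loaded, with the url_for(path, bucket, options) signature used in the call below, and that the unsigned URLs are path-style:
# lib/patches/aws.rb
module AWS
  module S3
    class S3Object
      class << self
        alias_method :url_for_without_host_alias, :url_for

        # Accept an extra :s3_host_alias option; when present, rewrite the signed URL
        # to use the CNAME host and drop the bucket from the path.
        def url_for(path, bucket = nil, options = {})
          host_alias = options.delete(:s3_host_alias)
          url = url_for_without_host_alias(path, bucket, options)
          host_alias ? url.sub("s3.amazonaws.com/#{bucket}", host_alias) : url
        end
      end
    end
  end
end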
And you can call for an authenticated URL with something along these lines (Configuration is a custom class for storing, natch, configuration values):
AWS::S3::S3Object.url_for(
  media.path(style || media.default_style),
  media.bucket_name,
  :expires_in    => expires_in,
  :use_ssl       => false,
  :s3_host_alias => Configuration.s3_host_alias
)