The default Rails app installed by rails new has config.assets.compile = false in production.
The ordinary way to do things is to run rake assets:precompile before deploying your app, so that all asset pipeline assets are compiled ahead of time.
So what happens if I set config.assets.compile = true in production?
I won't need to run precompile anymore. What I believe will happen is that the first time an asset is requested, it will be compiled. This will be a performance hit that first time (and it means you generally need a JS runtime in production to do it). But other than those downsides, after the asset has been lazily compiled, I think all subsequent access to that asset will have no performance hit; the app's performance will be exactly the same as with precompiled assets after this initial first-hit lazy compilation. Is this true?
Is there anything I'm missing? Any other reasons not to set config.assets.compile = true in production? If I've got a JS runtime in production, and am willing to take the tradeoff of degraded performance for the first access of an asset, in return for not having to run precompile, does this make sense?
I wrote that bit of the guide.
You definitely do not want to live compile in production.
When you have compile on, this is what happens:
Every request for a file in /assets is passed to Sprockets. On the first request for each and every asset it is compiled and cached in whatever Rails is using for cache (usually the filesystem).
On subsequent requests Sprockets receives the request and has to look up the fingerprinted filename, check that the file (image) or files (CSS and JS) that make up the asset were not modified, and then serve the cached version if there is one.
That is everything in the assets folder and in any vendor/assets folders used by plugins.
That is a lot of overhead as, to be honest, the code is not optimized for speed.
This will have an impact on how fast assets go over the wire to the client, and will negatively impact the page load times of your site.
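For concreteness, this mode is turned on with a single setting; a minimal production.rb sketch (Rails 3/4 option names, shown only to illustrate what is being discussed, not as a recommendation):

# config/environments/production.rb -- live compilation (not recommended)
config.assets.compile = true   # compile assets on first request through Sprockets
config.assets.digest  = true   # still fingerprint the generated filenames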
Compare with the default:
When assets are precompiled and compile is off, assets are compiled, fingerprinted, and written to public/assets. Sprockets returns a mapping table of plain to fingerprinted filenames to Rails, and Rails writes this to the filesystem. The manifest file (YML in Rails 3 or JSON with a randomised name in Rails 4) is loaded into memory by Rails at startup and cached for use by the asset helper methods.
This makes the generation of pages with the correct fingerprinted assets very fast, and the serving of the files themselves is web-server-from-the-filesystem fast. Both are dramatically faster than live compiling.
To get the maximum advantage of the pipeline and fingerprinting, you need to set far-future headers on your web server, and enable gzip compression for JS and CSS files. Sprockets writes gzipped versions of assets which you can set your server to use, removing the need for it to compress them on each request.
This gets assets out to the client as fast as possible, and in the smallest size possible, speeding up client-side display of the pages, and reducing requests (thanks to the far-future headers).
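Far-future headers and gzip are normally handled by the web server itself (for example nginx's expires and gzip_static directives). If Rails ends up serving the static files, as it does on Heroku, a sketch of the roughly equivalent settings using the Rails 3/4 option names (Rails 5 moved these under config.public_file_server):

# config/environments/production.rb
config.serve_static_assets  = true                         # let Rails serve files from public/
config.static_cache_control = "public, max-age=31536000"   # far-future Cache-Control header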
So if you are live compiling it is:
Very slow
Lacks compression
Will impact render time of pages
Versus
As fast as possible
Compressed
Removes compression overhead from the server (optionally)
Minimizes render time of pages
Edit: (Answer to follow up comment)
The pipeline could be changed to precompile on the first request, but there are some major roadblocks to doing so. The first is that there has to be a lookup table for fingerprinted names or the helper methods are too slow. Under a compile-on-demand scenario there would need to be some way to append to the lookup table as each new asset is compiled or requested.
Also, someone would have to pay the price of slow asset delivery for an unknown period of time until all the assets are compiled and in place.
The default, where the price of compiling everything is paid off-line at one time, does not impact public visitors and ensures that everything works before things go live.
The deal-breaker is that it adds a lot of complexity to production systems.
[Edit, June 2015] If you are reading this because you are looking for a solution for slow compile times during a deploy, then you could consider precompiling the assets locally. Information on this is in the asset pipeline guide. This allows you to precompile locally only when there is a change, commit that, and then have a fast deploy with no precompile stage.
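A rough sketch of that local-precompile workflow (standard rake task; how you skip the precompile step on deploy depends on your setup):
RAILS_ENV=production bundle exec rake assets:precompile
git add public/assets && git commit -m "Precompile assets"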
To make precompiling less of a hassle:
Precompile everything initially with these settings in production.rb
# Precompile *all* assets, except those that start with underscore
config.assets.precompile << /(^[^_\/]|\/[^_])[^\/]*$/
You can then reference images and stylesheets directly in your *.html.erb files, e.g. "/assets/stylesheet.css" or "/assets/web.png".
For anyone using Heroku:
If you deploy to Heroku, it will run the precompile for you automatically during the deploy, as long as compiled assets are not included in the push (i.e. public/assets is not committed), so there is no need for config.assets.compile = true, or to commit the precompiled assets.
Heroku's docs are here. A CDN is recommended to take the load off the dyno.
It won't be the same as precompiling, even after that first hit: because the compiled files aren't written out to public/, they can't be served directly by the web server. Some Ruby code will always be involved, even if it just reads a cache entry.
Set config.assets.compile = false
Add to your Gemfile
group :assets do
gem 'turbo-sprockets-rails3'
end
Run bundle install
Run rake assets:precompile
Then start your server
From the official guide:
On the first request the assets are compiled and cached as outlined in development above, and the manifest names used in the helpers are altered to include the MD5 hash.
Sprockets also sets the Cache-Control HTTP header to max-age=31536000. This signals all caches between your server and the client browser that this content (the file served) can be cached for 1 year. The effect of this is to reduce the number of requests for this asset from your server; the asset has a good chance of being in the local browser cache or some intermediate cache.
This mode uses more memory, performs poorer than the default and is not recommended.
Also, the precompile step is no trouble at all if you use Capistrano for your deploys. It takes care of it for you. You just run
cap deploy
or (depending on your setup)
cap production deploy
and you're all set. If you still don't use it, I highly recommend checking it out.
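For example, with Capistrano 3 and the capistrano-rails gem (assuming that is your setup), enabling the precompile step during deploys is a single line in the Capfile:

# Capfile
require 'capistrano/rails/assets'   # adds deploy tasks that run assets:precompile on the server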
Because it opens a directory traversal vulnerability: https://blog.heroku.com/rails-asset-pipeline-vulnerability
The situation
I couldn't get my vendor assets to precompile on Heroku without specifying each individual file to precompile in config/initializers/assets, so I resorted to setting
config.assets.compile = true
Note: I didn't require the vendor assets in application.js because I'm loading them on a per-page basis, only when they are needed.
Anyhow, I set up a CloudFront account and now everything works as it does in development. But on deploying to Heroku, there is a warning with a link to a Stack Overflow post warning against setting config.assets.compile to true.
Compile Set to True in Production
If you have enabled your application
to config.assets.compile = true in production, your application might
be very slow. This was best described in a stack overflow post:
When you have compile on, this is what happens: Every request for a
file in /assets is passed to Sprockets. On the first request for each
and every asset it is compiled and cached in whatever Rails is using
for cache (usually the filesystem). On subsequent requests Sprockets
receives the request and has to look up the fingerprinted filename,
check that the file (image) or files(css and js) that make up the
asset were not modified, and then if there is a cached version serve
that.
This setting is also known to cause other run-time instabilities and
is generally not recommended. Instead we recommend either precompiling
all of your assets on deploy (which is the default) or if that is not
possible compiling assets locally.
My question is: since I'm now using CloudFront, does that protect me from what they are warning about, the slowness etc.?
Thanks in advance for any advice :)
Yes, you're likely covered. The request for an asset will hit CloudFront first. After the first request, it will be cached and shouldn't hit your server for compilation. Every time your assets change, however, they'll have to recompile, which is of course very slow.
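For reference, pointing Rails at the CDN is a one-line setting; a minimal sketch (the CloudFront domain below is a placeholder):

# config/environments/production.rb
config.action_controller.asset_host = "https://d1234example.cloudfront.net"  # hypothetical distribution domain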
I understand that, when pushing to production, Rails generates digests for assets, so that application.js will become application-2695540c610db8087315134277d8afe6.js. The digest/fingerprint is added to a manifest file so that Rails can keep track of them.
My question is: what happens if you request an asset with a different digest?
Please note that our app is set up so that Rails serves all assets, which then get cached by our CDN (which fronts all requests). From our observations, assets requested with the correct digest are served immediately, while others take time, which makes us think they might be compiled live.
The digests exist to help with caching issues. If a user has already requested a previously digested asset, it will be cached on their machine, and if for some reason the page references that version of the asset, the user will be served the cached copy. If the asset is not cached on the user's machine and the page tries to fetch it, the request will fail, because that version of the file no longer exists.
The whole point of pre-compiled assets is that they are just files in the filesystem. If you are not deleting your old assets (which is what effectively happens if each deploy gets a fresh directory), they are still there, and requests for them will succeed.
If you delete your old assets, then the server will respond with a 404 unless you have config.assets.compile turned on in production. Then you would be compiling assets on request, which is not how the asset pipeline is meant to work. That setting should be off in production, and you should be relying on statically compiled assets.
Typically your public folder, where your compiled assets reside, should be shared across all of your deploys, via symlink for example, so that pages requesting older assets are able to find them.
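With Capistrano that sharing is typically done by symlinking the assets directory into each release; a sketch assuming Capistrano 3 (Capistrano 2 used shared_children for the same purpose):

# config/deploy.rb
set :linked_dirs, fetch(:linked_dirs, []).push('public/assets')  # keep one public/assets across releases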
When I'm testing my app on a VPS through Sublime and SFTP, these Sprockets cache files always take forever (figuratively) to sync. What are the consequences of disabling Asset Pipeline? Will my app perform noticeably poorly?
What are the consequences of disabling Asset Pipeline? Will my app perform noticeably poorly?
Yeah, the asset pipeline is there for a reason, quoting the guide:
The asset pipeline provides a framework to concatenate and minify or compress JavaScript and CSS assets. It also adds the ability to write these assets in other languages and pre-processors such as CoffeeScript, Sass and ERB.
The concatenation of assets leads to fewer HTTP requests (connection setups), which is, at least for HTTP/1.1, considered a best practice. Minification speaks for itself, I guess. Take a look at the guide to get a full grasp of the consequences.
I'm not sure what exactly you mean by Sprockets cache files, or which environment (as in Rails.env) you're using on your VPS.
You can also compile the assets on the VPS, which might be quicker than uploading. (See compile/precompile section in the guide).
For testing purposes you could also run in the development environment, where the assets will be compiled on demand.
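In development the relevant behaviour is on by default; a minimal sketch of what that looks like (Rails 3/4 option name):

# config/environments/development.rb
config.assets.debug = true   # serve each required file individually; assets are compiled on demand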
Recently, on my latest deploy to Heroku, I got a warning advising not to use AssetSync.
remote: ###### WARNING:
remote: You are using the `asset_sync` gem.
remote: See https://devcenter.heroku.com/articles/please-do-not-use-asset-sync for more information.
The original problem we were trying to solve by using AssetSync was that we were getting a huge slug size caused by the large assets in our application. Out of the 300MB that Heroku allows us, we were probably using close to 230MB - even though our git repo is only around 80MB.
We solved this by using AssetSync to synchronise all our compiled assets to a S3 bucket to be served through Cloudfront. After AssetSync runs, we have a hook that deletes all the precompiled assets to reduce the slug size. Basically, the workflow during slug compilation looked like this:
Let Heroku precompile the assets
AssetSync syncs all compiled assets to S3
All local copies of the compiled assets are deleted
The linked article argues a few points on why it's bad and what to use instead.
Using Asset Sync can cause failures. It is difficult to debug,
unnecessary, and adds extra complexity. Don’t use it. Instead, use a
CDN.
[...]
You should now use a CDN instead. Rather than
copying your assets over to S3 after they are precompiled, the CDN
grabs them from your website. Here are some reasons why that’s better.
Canonical assets
[...] It allows you to have single, authoritative places where you
store information. If you need to change that information, you only
need to change it in one place. [...] What happens if someone has a
failed deploy after assets get synced? What if someone modifies a file
in the S3 bucket? Instead of fixing one copy of assets, now you must
fix two.
Deploy determinism
If you’re debugging inside of a dyno with heroku run bash and you run
rake assets:precompile this doesn’t just modify your local copy. It
actually modifies the copy on S3 as well. [...] The sync part of
asset_sync can also fail if there’s a glitch in the network. What if
you only write part of a file, or only half of your assets are synced?
These things happen.
Although I agree with their points, the question remains: what's the recommended way to deploy a Heroku application that becomes huge when precompiled assets are stored in the slug?
The question is: which asset files are making the slug huge?
By default, the Rails asset pipeline should only be used for small, limited, internal assets (JS, CSS, some logos, etc.).
It's not a great idea to store a huge number of external or large files as Rails assets, for many reasons besides the pipeline (it also makes your Git repository large).
When I precompile my assets for a Rails 3.1 app with rake assets:precompile, it spits out an old cached version if nothing changes in the asset files. I can tell because my ERB makes use of a constant that I was trying to change elsewhere in my app. One workaround is to alter one of the CSS files (e.g. by adding a space) before re-precompiling, but this is a pain and I would like to try to disable this caching if possible. Any ideas?
This is the expected behavior of the pipeline - the ERB is evaluated only once when you precompile. The value at compile time is the value you get in the file.
The caching is based on checking the timestamps of the files. You could run Sprockets in production without precompiling (this is called live compiling), but you cannot turn off the caching, because the performance would be dreadful: every single request would require Sprockets to recompile all the files.
Sorry :-(
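If you do need to force a full rebuild without touching a file, one option is clearing the Sprockets cache before precompiling (it normally lives under tmp/cache):
rake tmp:cache:clear assets:precompile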