Grails YUI performance

Using the Grails YUI plugin, I've noticed that my GUI tags are replaced with some JavaScript code that is inserted in the HTML page.
Does this behavior contradict the Yahoo rule of making JavaScript and CSS external?
In other words, how do I separate the script code from the HTML page in order to allow external JavaScript script caching?
Should I use the Grails UI performance plugin for that matter? Is there another way to do it?

Everything in software design is a trade-off.
It depends on whether the performance benefit outweighs the value of having well-segregated, maintainable code.
In your case, I wouldn't mind having some extra JavaScript code automatically added if it dramatically improves performance.
Complete separation of code and UI always comes at a price: more levels of abstraction and intermediate code often translate into slower performance, but better maintainability.
Sometimes the only way to reach maximum efficiency is to throw away those abstractions and write code optimized for your platform, minimizing the number of functions and function calls and doing as much work as possible in one loop instead of in two clearly separated loops, etc. (which usually reads as ugly code).
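For example, something like this (a rough JavaScript sketch with made-up helper names) shows the kind of trade-off I mean: two clearly separated passes versus one fused loop that is faster but uglier:

    // Readable version: two passes, each with a single clear purpose.
    // computeTotal() and applyTax() are made-up helpers for illustration.
    var totals = items.map(computeTotal);
    var taxed = totals.map(applyTax);

    // "Optimized" version: one pass, no intermediate array, fewer calls,
    // but the two concerns are now tangled together in one loop.
    var taxedFast = [];
    for (var i = 0; i < items.length; i++) {
        taxedFast.push(applyTax(computeTotal(items[i])));
    }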

Well, this is one of the features of the UI Performance plugin, among other things:
The UI Performance Plugin addresses some of the 14 rules from Steve Souders and the Yahoo performance team.
[...]
Features
minifies and gzips .js and .css files
configures .js, .css, and image files (including favicon.ico) for caching by renaming with an increasing build number and setting a far-future expires header
[...]
So I'd use it indeed.
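If you'd rather not pull in the plugin, the manual way to satisfy the "make JavaScript external" rule is to keep only page-specific values inline and move the shared wiring into a script file the browser can cache. A rough hand-written sketch (the file and function names are made up, and this is not what the YUI plugin generates):

    // /js/ui-init.js (made-up path) - shared across pages, so the browser can cache it.
    function initDatePicker(fieldId, options) {
        var field = document.getElementById(fieldId);
        // ...attach the YUI calendar/date-picker widget to `field` using `options`...
    }

The GSP then only emits a short inline call such as initDatePicker("startDate", {format: "yyyy-MM-dd"});, so the bulk of the script stays external and cacheable.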

Related

Can we transpile Dart code to multiple JS files?

Let's say that a developer has created a general-purpose Dart library, aimed at the client, that contains a large number of class and functionality definitions, of which only a small subset is ever expected to be used in a single web page.
As a JavaScript example, MathJax provides files that contain a large amount of functionality related to displaying mathematical expressions in the browser, though any given web page displaying mathematical expressions is likely to use only a very small amount of the functionality defined by MathJax. Thus, what page authors tend to do is to link each page they create to a single copy of the large, general-purpose JS file (in the page header, for example) and then write their own, usually relatively small JS files defining behaviour particular to the page. They can thus potentially have hundreds of pages each linking to a single general-purpose JS file.
When we transpile Dart to JS, however, the output is a single, large JS file that appears to contain the functionality of any dependencies along with the behaviour desired for the particular page. If we had hundreds of pages, each linked to their own dart2js-produced JS file, we appear to have a tremendous amount of redundancy in the code. Further, if we need to make a change to the general-purpose library, it seems that we would have to regenerate the JS for each of the pages. (Or maybe I completely misunderstand things.)
My question is: is it possible to transpile Dart code to multiple JS files? For example, Dart code for a given page might be transpiled into one JS file containing the dependency functionality (that many pages can link to) and one small JS file defining the behaviour particular to the page?
Sounds like you should rethink your website as a single-page app. Then, you get a payload that can process any of your pages, but is still tree-shaken to have only exactly what you need for all of them.

PHP: One large file or several small files

I taught myself PHP, so I don't know many of the advantages and disadvantages of different programming styles. Recently I have been looking at large PHP projects, like webERP, WordPress, Drupal, etc., and I have noticed they all have a main PHP page that is very large (1000+ lines of code) and performs many different functions, whereas my own projects' pages all seem to be very specific in function and are usually less than 1000 lines. What is the reasoning behind the large page, and are there any advantages over smaller, more specific pages?
Thanks for the information.
It's partly about style and partly about readability and relationships. Ideally everything in a single file is related (e.g. a class, related utility functions, etc.), and unrelated items belong in another file.
Obviously, if you are writing something to be included by others, making a single file can have its advantages, such as a condensed version of jQuery.

Firefox Extension: Performance: Overlay vs Bootstrapped

I understand the convenience of installing bootstrapped extensions but there is a question that has been bugging me for a long while.
Has there ever been a performance & resource/memory usage comparison between overlay & bootstrapped extensions?
In overlay extensions, a lot of the work (e.g. applying XUL overlays) is handled natively by the application (i.e. Firefox).
In a bootstrapped extension, all of that work is left to the extension developer, which often involves manually adding many event listeners and observers to achieve the same result (and that code may not be as efficient as the application's native core).
I have noticed restartless add-ons that occasionally fail to initialize in some windows. I have also noticed restartless add-ons where the insertion itself is noticeable (i.e. the functionality, image, or icon appears slightly after the window is loaded). Furthermore, the type of event listener used is not uniform and varies greatly between add-ons.
I have a nagging feeling that manually (rather than natively) adding menus, context menus, functions, string bundles, preferences, localization, etc., and enumerating windows would use more resources (besides the fact that its efficiency would depend greatly on the developer's skills).
I look forward to your comments
:)
How an add-on performs mostly depends on the actual implementation (what the add-on does and how) and data it keeps around. As such you cannot really just compare performance of overlay vs. restartless add-ons.
I converted add-ons from overlay to restartless ones that performed better afterwards, because I optimized some things along the way. The opposite might be true, of course, in other cases.
Memory consumption depends on what the add-on does, incl. how many event listeners it creates. Unless you create thousands upon thousands of event listeners (that also pseudo-leak stuff in closures), the memory consumed by these listeners is usually negligible as about:memory will tell you. You can have memory hungry overlay add-ons and lightweight restartless add-ons, or vice-versa.
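To illustrate what I mean by pseudo-leaking in a closure, here is a hedged sketch (openTabFor() is a made-up helper):

    // The handler only needs the URL, but the closure keeps the whole
    // `bigData` object reachable for as long as the listener is registered.
    function wireButton(button, bigData) {
        button.addEventListener("command", function () {
            openTabFor(bigData.url); // openTabFor() is a made-up helper
        }, false);
    }

    // Capturing only what the handler needs avoids pinning the large object.
    function wireButtonLean(button, bigData) {
        var url = bigData.url;
        button.addEventListener("command", function () {
            openTabFor(url);
        }, false);
    }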
You're right that the efficiency depends greatly on the skills a developer has, i.e. the quality of the implementation and data-structure design, which is usually directly correlated with said skills.
It is easy to create a simple "button" SDK add-on, but the SDK has lots and lots of abstractions to make it easy, and these abstractions consume resources (memory, CPU or even file I/O).
It is a bit harder to create an equivalent overlay add-on, but still you get quite a few things for free (overlay, style). These niceties are higher-level abstractions, too, and come at a cost.
It is quite difficult to create an equivalent bootstrapped add-on, but if done correctly it might outperform the other add-on types, even during startup (no chrome.manifest to read, parse and interpret, no sync loading of overlays and associated styles, etc.)
It's a bit like comparing C (restartless) to Java (overlay) to Ruby (SDK). People love the convenience of Ruby, but proper Java code easily outperforms it. Then again, Java will often outperform equivalent C programs written by novice developers (and those novice developers are also more likely to leak memory all over the place ;), but a C program written by a skilled developer may outperform Java.
What you're asking here is essentially indicative of premature optimization. Instead: code, measure, and then optimize if necessary and according to your skill level.
Once you notice that your add-on consumes tons of memory or runs slowly, then measure and/or debug the cause. Or just measure pro-actively. The point is: measure.
If it isn't your add-on that misbehaves, tell the author, or file a tech evangelism bug if it is real bad.
Since you ask about DOM manipulation/overlays, addEventListener vs. "native":
Overlays may be faster than calling a bunch of DOM methods from JS. But then again, overlays are XML and need to be read from disk, then parsed into a DOM, then the DOM needs to be merged with the DOM that is overlaid, following all kinds of rules, etc. That requires all kinds of I/O, (temp.) memory, CPU, etc. So overlays may be slower. Depends on the overlay.
addEventListener is usually blazingly fast. In fact, overlay "event" listeners (those nasty oncommand/onclick/onwhatever attributes) use the same implementation internally (well, kinda), and additionally the string values of these attributes will be thrown into the JS engine anyway by creating anonymous functions from them (and that takes time, too ;)
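To make that concrete, here is a rough sketch (the ids and the MyAddon namespace are made up) of the same button wired via an overlay attribute versus an explicit listener in a bootstrapped add-on:

    // Overlay XUL, for comparison:
    //   <toolbarbutton id="my-button" oncommand="MyAddon.handleCommand(event);"/>
    // The attribute string gets compiled into an anonymous function and
    // registered as an event listener internally anyway.

    // Bootstrapped add-on: register the listener explicitly, per browser window.
    function attachButton(window) {
        var doc = window.document;
        var button = doc.createElement("toolbarbutton"); // XUL document, so this creates a XUL element
        button.setAttribute("id", "my-addon-button");    // made-up id
        button.setAttribute("label", "My Add-on");
        // No string-to-function step: the handler is already a function object.
        button.addEventListener("command", function (event) {
            MyAddon.handleCommand(event);                // MyAddon is a made-up namespace
        }, false);
        doc.getElementById("nav-bar").appendChild(button); // assumes the main toolbar is present
    }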
Anyway, on the few occasions I actually did measure UI initialization in restartless add-ons (only the DOM stuff in JS), it always came out taking something in the (lower) double-digit millisecond range for any add-on with a reasonable amount of DOM nodes and listeners (<100).
BTW:
I have also noticed restartless add-ons where the insertion itself is noticeable (i.e. the functionality, image, or icon appears slightly after the window is loaded).
Yeah, some restartless add-ons e.g. load (toolbar-button) images asynchronously, or delay some of their initialization to a later point (e.g. why populate the context menu before popupshowing?). This can be a little less efficient (because it can cause redraws) or more efficient (because the browser can continue to execute other initialization code while e.g. images load in the background).
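A minimal sketch of that kind of deferred initialization, assuming the standard contentAreaContextMenu element of a browser window:

    // Populate the add-on's context-menu entries lazily, on the first open.
    function initContextMenuLazily(window) {
        var menu = window.document.getElementById("contentAreaContextMenu");
        if (!menu) {
            return; // not a browser window
        }
        menu.addEventListener("popupshowing", function populateOnce() {
            menu.removeEventListener("popupshowing", populateOnce, false);
            // ...create and insert the add-on's menu items here, only when actually needed...
        }, false);
    }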
If a restartless add-on fails to initialize, then, well, that is a bug. But as I mentioned, restartless add-ons are rather difficult to write in the first place.
PS: Gecko Profiler and about:memory are your friends ;)

Spark view engine performance with prefix in MVC 2 project?

I use the Spark view engine in my MVC 2 project. By default, Spark tags are unqualified, without a prefix. My question is: do I gain performance if I put a prefix in my web.config file? Put differently, do Spark files compile faster with a prefix?
Compilation will not be any faster or slower with the prefix.
Spark not only outputs HTML, but is equally capable of outputting valid XML, and in the latter case, you very often need to qualify your tags to maintain valid XML.
None of this will affect compilation time. Regardless, it is recommended that you precompile your views for production anyway to improve performance.

Wicket: “large memory footprint!”, "Does Wicket scale?", etc.

Wicket uses the Session heavily, which could mean a “large memory footprint” (as stated by some developers) for larger apps with lots of pages. If you were to explain to a bunch of CTOs from the Fortune 500 that they have to adopt Apache Wicket for their large web-application deployments and that their fears about Wicket's problems with scaling are just bad assumptions, what would you argue?
PS:
The question concerns only scaling.
Technical details and real-world examples are very welcome.
IMO, credibility for Apache Wicket in very large scale deployments is established by the following URL: http://mobile.walmart.com (view the source).
See also http://mexico.com, http://vegas.com, and http://adscale.de, and look those domains up with Alexa to see their rankings.
So, yes it is quite possible to build internet scale applications using Wicket. But whether or not you are using Wicket, Struts, SpringMVC, or just plain old JSPs: internet scale software development is hard. No framework can make that easy for you. No framework can give you software with a next-next-finish wizard that services 5M users.
Well, first of all, explain where the footprint comes from: it is mainly the PageMap.
The next step would be to explain what a page map does, what it is for, and what problems it solves (back buttons and popup dialogs, for example). These are problems which would otherwise have to be solved manually, at a similar memory cost but at a much bigger development cost and risk.
And finally, tell them how you can affect what goes into the page map and the secondary page cache, and thus how the size can be kept under control.
Obviously you can also show them benchmarks, but probably an even better bet is to drop a line to Martijn Dashorst (although I believe he's reading this post anyway :)).
In any case, I'd try to put two points across:
There's nothing Wicket stores in memory which you wouldn't have to store in memory anyway. It's just better organised, easier to develop, keep consistent, and test.
Java itself means that you're carrying some inevitable excess baggage around all the time. If they are so worried about footprint, maybe Java isn't the language they want to use at all. There are hundreds of large traffic websites written in other languages, so that's a perfectly workable solution. The worst thing they can do is to go with Java, take on the excess baggage and then not use the advantages that come with an advanced framework.
Wicket saves the last N pages in the session. This is done to be able to load the page faster when it is needed. It is needed mostly in two cases - using browser back button or in Ajax applications.
The back button is clear, no need to explain, I think.
About Ajax: each Ajax request needs the current page (the last page in the session cache) to find a component in it and call its callback method, update some model, etc.
From there on, the session size depends entirely on your application code. It will be the same for any web framework.
The number of pages to cache (N above) is configurable, i.e. depending on the type of your application you may tweak it as you find appropriate. Even when there is no in-memory cache (N=0), the pages are stored on disk (again configurable) and the page will still be found; it will just be a bit slower.
Some references:
http://fabulously40.com/ - a social network with many users,
several education sites - I know two in the USA and one in the Netherlands, which also have quite a lot of users,
and currently I work on a project that is expected to be used by several million users. Wicket 1.5 will be improved wherever we find hotspots.
Send this to your CTO ;-)
