Let's say that a developer has created a general-purpose, client-side Dart library that contains a large number of classes and functions, of which only a small subset is ever expected to be used on any single web page.
As a JavaScript example, MathJax provides files that contain a large amount of functionality related to displaying mathematical expressions in the browser, even though any given web page displaying mathematical expressions is likely to use only a small fraction of what MathJax defines. What page authors tend to do, then, is link each page they create to a single copy of the large, general-purpose JS file (in the page header, for example) and then write their own, usually relatively small, JS files defining behaviour particular to the page. They can thus have hundreds of pages each linking to a single general-purpose JS file.
When we transpile Dart to JS, however, the output is a single, large JS file that appears to contain the functionality of all dependencies along with the behaviour desired for the particular page. If we had hundreds of pages, each linked to its own dart2js-produced JS file, we would appear to have a tremendous amount of redundancy in the code. Further, if we needed to make a change to the general-purpose library, it seems we would have to regenerate the JS for every page. (Or maybe I completely misunderstand things.)
My question is: is it possible to transpile Dart code to multiple JS files? For example, Dart code for a given page might be transpiled into one JS file containing the dependency functionality (that many pages can link to) and one small JS file defining the behaviour particular to the page?
Sounds like you should rethink your website as a single-page app. Then you get one payload that can serve any of your pages, but is still tree-shaken down to exactly what you need across all of them.
Related
I taught myself PHP, so I don't know many of the advantages and disadvantages of different programming styles. Recently, I have been looking at large PHP projects like webERP, WordPress, Drupal, etc., and I have noticed they all have a main PHP page that is very large (1000+ lines of code) performing many different functions, whereas my projects' pages all seem to be very specific in function and are usually less than 1000 lines. What is the reasoning behind the large page, and are there any advantages over smaller, more specific pages?
It's partly about style and partly about readability and relationships. Ideally everything in a single file is related (e.g. a class and its related helper functions), and unrelated items belong in another file.
Obviously, if you are writing something to be included by others, shipping a single file can have its advantages, such as a condensed version of jQuery.
My web site (on Linux servers) needs to support multiple languages.
What is the best practice to have/store multiple languages versions of the same site?
Some approaches I can think of:
store in DB
different view file for each language
gettext
hard-coded strings in PHP files (as in phpBB)
With web sites, you really have several categories of content to consider for localization:
The article-type content elements that you would in many cases create, edit and publish in a CMS.
The smaller content blocks that are common to every page (or a sub-group of pages), such as tagline, blurb, text around a contact form, but also imported content such as a news ticker or ads and affiliate links. Some of these may only appear for one language (for example, if you don't offer some services in some regions, or don't have, say, language-appropriate imported content for a particular language: it can be better to remove an element rather than offering English to people who may not speak it).
The purely functional elements, like "Click here to comment", "More...", high-level navigation, etc., which are sometimes part of your template. Some of these may be inside images.
For 1., the main decision is whether or not to use a CMS. If yes, you absolutely need to choose one that supports multiple languages. I'm not up to date with recent developments in PHP CMSes, but several of the Django CMS apps (Django-CMS-2, FeinCMS) support multi-language content. Don't forget that date stamps, for example, need to be localized too (or you can get around this by choosing ISO dates, though that may not always be possible). If you don't use a CMS, and everything is in your HTML files, then gettext is the way to go, and keep the .mo files (and your offline .po files) in folders by language.
For 2. if you have a CMS with good multi-lingual support, get as much as possible inside the CMS. The reason is that these bits do change, and you want to edit your template as little as possible. If you write code yourself, think of ways of exporting all in-CMS strings per language, to hand them to translators. Otherwise, again, gettext. The main issue is that these elements may require hard-coding language-selection code (if $language = X display content1 ...)
For 3., if it's in your template, use gettext. For images, the per-language folders will come in handy, and for heaven's sake choose images whose generation can be automated, or you (or your graphic artist) will go mad editing hundreds of custom images with strings in languages you don't understand.
For both 2. and 3., abstracting the language selection may help with selecting the appropriate blocks or content directory (where localized images or .mo files are kept).
What you definitely want to avoid is keeping a pile of HTML files with extensive text content in them that would be a nightmare to maintain.
EDIT: Everything about gettext, .po and .mo files is in the GNU gettext manual (more than you ever wanted to know) or a slightly dated but friendlier tutorial. For PHP, there are the PHP gettext functions, and also the Zend Locale documentation.
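To make the gettext workflow concrete, here is a minimal sketch of the per-language .mo lookup described above. It uses Python's standard-library gettext module purely for illustration (PHP's gettext functions follow the same model); the domain name "site" and the locale folder layout are assumptions, not anything prescribed above.

import gettext

# Assumed folder layout, one compiled .mo file per language:
#   locale/de/LC_MESSAGES/site.mo
#   locale/fr/LC_MESSAGES/site.mo
translation = gettext.translation(
    "site",              # domain -> looks for site.mo
    localedir="locale",
    languages=["de"],    # pick the visitor's language here
    fallback=True,       # fall back to the original strings if no .mo exists
)
_ = translation.gettext

print(_("Click here to comment"))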
I recommend using Zend_Translate's Gettext adapter, which parses .mo files. It's very efficient and supports caching. Your calls would look like:
echo $translation->_("Hello World");
which would find the locale-specific translation for the specified string.
Check out i18n support for PHP: http://php-flp.sourceforge.net/getting_started_english.htm
Using the Grails YUI plugin, I've noticed that my GUI tags are replaced with some JavaScript code that is inserted in the HTML page.
Does this behavior contradict the Yahoo rule of making JavaScript and CSS external?
In other words, how do I separate the script code from the HTML page in order to allow external JavaScript script caching?
Should I use the Grails UI performance plugin for that matter? Is there another way to do it?
Everything in software design is a trade-off.
It depends on whether the benefit in performance outweighs the importance of having well-segregated and maintainable code.
In your case, I wouldn't mind having some extra JavaScript code automatically added to dramatically improve the performance.
Complete code and UI separation always comes at a price. More levels of abstraction and intermediate code often translate into slower performance but better maintainability.
Sometimes the only way to reach maximum efficiency is to throw away all those abstractions and write code optimized for your platform, minimizing the number of functions and function calls, trying to do as much work as possible in one loop instead of two more meaningful loops, and so on (which tends to be characterized as ugly code).
Well, this is one of the features of the UI Performance Plugin, amongst other things:
The UI Performance Plugin addresses some of the 14 rules from Steve Souders and the Yahoo performance team.
[...]
Features
minifies and gzips .js and .css files
configures .js, .css, and image files (including favicon.ico) for caching by renaming with an increasing build number and setting a far-future expires header
[...]
So I'd use it indeed.
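To make the second feature above concrete, here is a rough sketch in Python of the fingerprint-and-cache idea; this is not the plugin's implementation, just an illustration of renaming assets per build so they can then be served with a far-future expires header. The directory names and build number are placeholders.

import shutil
from pathlib import Path

BUILD = 42  # placeholder: an increasing build number, e.g. supplied by CI

def fingerprint_assets(static_dir: str, out_dir: str) -> dict:
    """Copy each asset to a name embedding the build number and return a
    mapping that templates can use to rewrite their URLs."""
    mapping = {}
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for asset in Path(static_dir).glob("*.*"):
        versioned = f"{asset.stem}-{BUILD}{asset.suffix}"  # app.js -> app-42.js
        shutil.copy(asset, out / versioned)
        mapping[asset.name] = versioned
    return mapping

# Because every build produces new file names, the server can safely send a
# far-future header for everything under out_dir, e.g.
#   Cache-Control: public, max-age=31536000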
I have a question about parsing HTML pages, specifically forums.
I want to parse a forum or thread containing posts that match certain criteria. I haven't defined the algorithm yet, since I have only parsed structured text formats before.
A use case might be to copy and paste each thread into the program by hand, or to supply a URL like
http://www.forums.com/forum/showthread.php?t=46875&page=3 and let the program parse the pages.
Given all this, I would like to know:
Is it possible to parse a forum thread on an HTML page?
What would be the best/fastest/easiest language for doing this?
If I prefer Java, what tools/libraries do I need for this?
Is there anything else I should consider?
1 / Yes.
2 / Use a compact language like Python or Ruby for prototyping.
For Python there is a neat library for HTML/XML parsing called Beautiful Soup (see the sketch after this list).
For Ruby, you could try: Nokogiri or Hpricot.
3 / A Java tool to consider: htmlparser.
4 / If you are interested only in some particular text or some special classes, a regular expression might be sufficient. But as soon as you want to dig deeper into the structure of the content, you'll need some kind of model to hold your data, and hence a parser which, in the best case, can cope with the inconsistencies of real-world HTML.
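As a quick illustration of point 2, here is a minimal Beautiful Soup sketch that fetches one thread page and pulls out the posts. The URL is the one from the question; the CSS class names ("post", "author", "post-body") are placeholders, since every forum uses its own markup and you will need to inspect the actual page first.

# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

url = "http://www.forums.com/forum/showthread.php?t=46875&page=3"
soup = BeautifulSoup(requests.get(url).text, "html.parser")

# "post", "author" and "post-body" are placeholder class names.
for post in soup.find_all("div", class_="post"):
    author = post.find(class_="author")
    body = post.find(class_="post-body")
    if author and body:
        print(author.get_text(strip=True), "->", body.get_text(strip=True)[:80])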
You might want to look into some sort of HTML parsing library rather than using regular expressions to do this. There are some really good HTML parsers for Ruby and Python, and a quick Google search shows there to be a number of parsers for Java as well. The benefit of these libraries is that you don't have to handle every edge case with regular expressions and they handle malformed HTML (both of which can be impossible with regexes, depending on what you want to do), and they also give you a much nicer way of dealing with the data (for example, Beautiful Soup lets you grab all elements which belong to a specific class, or use some other CSS selector to limit which page elements you want to deal with).
Personally, I would, at least for the beginning, start in Ruby or Python, as the libraries are well known and there is a lot of info about using them for this purpose. Also, I find it easier to quickly prototype these kinds of things in Ruby or Python than on the JVM. You could even later bring that code onto the JVM with JRuby or Jython, if it becomes necessary.
1 / Yes.
2 / Regular expressions, in any flavor.
3 / Probably the ones with regex support.
4 / There are tools out there that will do this for you.
Can anyone (maybe an XSL-fan?) help me find any advantages with handling presentation of data on a web-page with XSL over ASP.NET MVC?
The two alternatives are:
ASP.NET (MVC/WebForms) with XSL
Getting the data from the database and transforming it to XML, which is then displayed on the different pages with XSL templates.
ASP.NET MVC
Getting the data from the database as C# objects (or LINQ to SQL/EF objects) and displaying it with inline code on MVC pages.
The main benefit of XSL has been consistent display of data on many different pages, like WebControls. So, correct me if I'm wrong, ASP.NET MVC can be used the same way, but with strongly typed objects. Please help me see if there are any benefits to XSL.
The main benefits I can see of employing XSLT to transform your data and display it to the user are the following:
The data is already in an XML format
The data follows a well defined schema (this makes using tools like XMLSpy much easier).
The data needs to be transformed into a number of different output formats, e.g. PDF, WMP and HTML
If this is to be the only output for your data, and it is not in XML format, then XSLT might not be the best solution.
Likewise, if user interaction is required (such as editing the data), then you will end up employing back-end code anyway to handle updates, so it might prove one technology too far...
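For anyone who hasn't seen the first alternative in action, the transform step itself is small. Here is a sketch using Python's lxml purely for illustration (the mechanics are the same with .NET's XslCompiledTransform); the document structure and element names are made up.

from lxml import etree

# A made-up XML payload and a stylesheet that turns it into an HTML fragment.
xml = etree.XML("<products><product name='Widget' price='9.99'/></products>")
xslt = etree.XML("""
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/products">
    <ul>
      <xsl:for-each select="product">
        <li><xsl:value-of select="@name"/>: <xsl:value-of select="@price"/></li>
      </xsl:for-each>
    </ul>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(str(transform(xml)))  # <ul><li>Widget: 9.99</li></ul>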
I've always found two main issues when working with XML transformations:
Firstly, they tend to be quite slow: the whole XML file must be parsed and validated before you can do anything with it. Being XML, it's also excessively verbose, and therefore larger than it needs to be.
Secondly, the way transformations work is a bit of a pain to code. Custom tools like XMLSpy help, but it's still a different model from what most developers are used to.
At the moment MVC is very quick and looking very promising, but does suffer from the traditional web-development blight of <% and %> bee-stings all over your code. Using XML transformations avoids that, but is much harder to read and maintain.
I've used that technique in the past, and there are applications where we use it at my current place of employment. (I will admit I am not totally a fan of it, but I'll play devil's advocate.) Really, that is one of the main advantages, and pushing this idea can be kind of neat: you're able to dynamically create the XSL on the fly and change the look and feel of the page on a whim. Is it possible to do this through the other methods? Yes, but it's really easy to build a program to modify an XML/XSL document on the fly.
If you think of it as using XSL to transform one XML document into another and displaying it as HTML (which is really what you're doing), you're opening up your system to allow other programs to access the data on the page via XML. You can do this through the other methods, but using an XSL transformation forces it to output XML every time.
I would tread lightly with creating a system this way. You'll find a lot of pitfalls you aren't expecting, and if you don't know XSL really, really well, there is going to be a learning curve as well.
Check this out if you want to use XSLT and ASP.NET MVC:
http://www.bleevo.com/2009/06/aspnet-mvc-xslt-iviewengine/
Jafar Husain offers a few advantages in his proposal for Pretty XSL, primarily caching of the stylesheet to speed up page loads and reduce the size of your data. Steve Sanderson proposed a slightly different approach using JavaScript as the controller here.
Another, similar approach would be to use XForms, though the best support for it is through a JavaScript library.
If you're only going to display data from the DB, XSL templates may be a convenient solution, but if you're going to handle user interaction... hmm, I don't think it'll be maintainable at all.