Erlang template engine: sgte, google-ctemplate, or erlydtl?

I am planning to add a template engine to my Erlang project, and the most important thing is performance. Currently I have a lot of Velocity templates in Java, and I want to migrate those templates to Erlang.
I googled it and found things like:
http://www.ivan.fomichev.name/2008/05/erlang-template-engine-prototype.html
erlydtl
google-ctemplate
sgte
A pure Erlang implementation would be best, but if a C/C++ based template engine such as google-ctemplate performs better, I would use it via an Erlang linked-in driver.
I have no experience with this yet, so any suggestions would be great.
thanks

My personal favourite is erlydtl. It compiles the template to an Erlang module, so there's no filesystem access or parsing time consumed when you call 'render'.
I think rebar has erlydtl support these days, so getting your templates compiled is a lot less hassle than it used to be. Just name them *.dtl and they'll get built when you run rebar compile.
It should also be fairly competitive speed-wise, as it's in-process (skipping the IPC cost of a port program), compiled (and could be compiled to native code if you wanted), and it generates iolists, which are pretty efficient.
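For a rough idea of the workflow, here is a minimal sketch; the module name, template path, and the name variable are made up for illustration, and the exact compile call differs between erlydtl versions (older releases use erlydtl:compile/2 instead of erlydtl:compile_file/2):

-module(tpl_demo).
-export([render_greeting/0]).

%% Hypothetical example: assumes a file templates/greeting.dtl containing
%% something like "Hello {{ name }}!".
render_greeting() ->
    %% Compile the template into the module greeting_dtl. rebar does this
    %% automatically at build time for *.dtl files, so in practice you
    %% usually only call the generated module's render/1.
    erlydtl:compile_file("templates/greeting.dtl", greeting_dtl),
    %% render/1 returns {ok, IoList}; no filesystem access or parsing here.
    {ok, IoList} = greeting_dtl:render([{name, <<"World">>}]),
    io:format("~s~n", [IoList]).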

Related

How to use n-tier application development

I am a .NET desktop app developer. I use C# and WPF, and I have been using SQL Server as my database.
Now I want to learn Electron and Node.js, and I would like to code in HTML, CSS and JavaScript. Since everything mentioned above is open-source/free, I will change my database as well and use MySQL.
In .NET we used n-tier application development. I know that n-tier development is not specific to .NET, so I think it is possible with Electron, Node.js, HTML, CSS and JavaScript, using Atom as the development tool.
In .net my projects were structured as:
MySolution
|--Entities
| |--Student.cs
| |--Teacher.cs
|--Repositories
| |--RepositoryBase.cs
| |--StudentRepository.cs
| |--TeacherRepository.cs
|--WPFApp
| |--Window.xaml
| |--App.xaml
| |--App.config
The structure above is just a small demo. In reality we used WCF services and much more, and we usually had around 20 projects for a single desktop application.
I would like to do the same with Electron, Node.js, HTML, CSS, JavaScript and, if possible, jQuery. Can anyone guide me on how to use n-tier application development in the kind of app mentioned above? If possible, could anybody post a small working demo?
I've probably known about the concept for 10 years, but had never heard it referred to as "n-tier" and had to look it up. The most common multi-tiered pattern in the Node.js world is "MVC". I'm guessing you're used to that pattern yourself, or the MVVM pattern that I've seen mentioned in .NET circles.
Anyway, I only wanted to make that distinction in hopes of improving your search results; you will probably find better hits with searches for "MVC" than for "n-tier" in places like npmjs.org.
There are several MVC frameworks available, and they should be drop-in ready for Electron development. Backbone is rather popular, and the first that comes to mind, but there are many, many others.
Analogous to the multi-project structures you're probably used to in .NET, Node.js development is also typically subdivided into multiple "modules". Assuming you want to offer your project as open source, you would build it as multiple npm modules and publish each to npm, then use npm install xxxxx in your main project to bring them all in.
If you are not planning to publish your modules as open source, you can also look into npmjs.org's private module service or, like us, host your own using a solution such as "Sinopia".
Migrating to Node.js can be a bit overwhelming and there will be a lot of information to swallow. If I could offer two tips that have been invaluable in my own journey, I would say:
Conform to Node.js and its community, do not try to coerce it to conform to you.
Always try to avoid writing code. Just about anything generic that you can think to write has already been written and is available on npmjs.org: utility libs, frameworks, etc. It sucks having to learn someone else's code, but it pays dividends, especially when the open-source editions are well supported and/or have a largish community.
Also, to go one step further on #1: you will probably find that NoSQL (especially MongoDB) is often preferred over MySQL in Node.js circles. It's another mind-bender for those of us who grew up on SQL, but you should at least carefully consider it.
Best of luck,

What are some great but little known libraries for Lua?

A common statement about Lua is that it doesn't come with batteries included, meaning that it lacks a lot of extra libraries.
I think there are a lot of Lua libraries out there and more are being developed all the time, but it is likely people don't know about many of them since the Lua community in general is very pragmatic about getting work done and doesn't waste a lot of time with self promotion.
So what are some great Lua libraries that more people ought to know about?
Shameless self-promotion plug: http://www.tecgraf.puc-rio.br/~lhf/ftp/lua/
I hope you find something that's useful there.
My personal favorites are:
LuaSocket, a socket library that enables internet access from Lua
The Kepler suite, a set of libraries for web application development in Lua
LuaSQL and LuaSQLite for toying with DB stuff.
All this apart (or not, as a matter of fact), I highly recommend murgaLua as a batteries-included-but-not-bloated Lua distribution. It's cross-platform and packs in (a non-exhaustive list):
a binding to FLTK for developing GUI applications
LuaSQLite for SQL stuff
LuaSocket
Basic encryption with slncrypt (Blowfish, SHA-1, ...)
Decent RNG
And since the last beta release even a binding to FANN
Audio via ProteAudio
FFI via alien
...
And this whole beast fits into a measly 782 kB executable.
I do not think there is a lack of "self-promotion". Lua is one of the best "glue" languages out there (if not the best), so a lot of the code written for Lua is application-specific.
For example, I've written a pretty extensive (networking) utility library for Lua and a pretty decent IDE, but it's product-specific and won't be released for general use.
http://www.intellipool.se/idedoc/

What's the best way to distribute Lua and libraries?

I'm looking at moving a program that currently embeds a Python interpreter to use Lua. With Python it's fairly easy to use modulefinder, compileall, and zipfile to make a nice tidy zip containing all the external libraries used.
Does Lua have the ability to bundle up its libraries like that, or is there some better best practice for distributing programs that embed Lua?
As is typical with Lua, there's no one standard and a lot of people roll their own.
There's an effort to standardize on a package-management system called LuaRocks, but I'm not sure how much momentum is behind it or how mature it is. (In 2008 it was not quite ready for prime time, but things may have changed.)
I myself am very low-tech: if I want to distribute something, I just turn my Lua sources into C files and link them into the binary. Finding and converting all the modules is a bit tedious for me, but it is quite the easiest thing for my users; they don't even need to know that Lua is involved. I've posted a copy of my lua2c script at Pastebin. I have the option of compiling the Lua to bytecode first, but I generally don't because the results are not portable and the Lua compiler is so fast anyway.
It would be nice to have something more automated. I think it would probably take several days to write and debug a good tool.
People on the Lua mailing list will surely know more.
If it is pure Lua, you might also consider using squish.
It's a tool that packs all Lua source files into a single file, with options to gzip/minify it.

What is your experience with Nitrogen on Erlang?

I've been checking out the Nitrogen Project which is supposed to be the most mature web development framework for Erlang.
Erlang, as a language, is extremely impressive. However, with regard to Nitrogen, what I am not too keen on is using Erlang's rather uncommon syntax (unless you're fluent in Prolog) to build UIs.
What is your experience with it as opposed to other mainstream web frameworks such as Django or Rails?
I've done very little with Nitrogen so far, but I've been monitoring the mailing list for months, so I think I have something useful to say about it.
To your concern about the syntax of Erlang and the Nitrogen framework, I'd respond that that sounds like a pure case of unfamiliarity, rather than unsuitability. Objectively, HTML is not a beautiful language, and it has plenty of quirks. You're used to this now, so it doesn't seem so bad. Give Nitrogen/Erlang a chance and you may find that you get used to it soon enough, too.
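For a feel of what that syntax looks like in practice, here is a rough Nitrogen-style sketch from memory; the module name, element attributes, and the include path are illustrative and vary between Nitrogen versions:

-module(hello).
-compile(export_all).
-include_lib("nitrogen_core/include/wf.hrl").

%% A page is just an Erlang module; the body is a list of element records.
body() ->
    [
        #label { text="Name: " },
        #textbox { id=name },
        #button { text="Greet", postback=greet },
        #panel { id=greeting }
    ].

%% Postbacks land here; wf:q/1 reads the textbox value submitted by the client.
event(greet) ->
    wf:update(greeting, "Hello, " ++ wf:q(name) ++ "!").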
To your question about comparison to other languages and frameworks, I'd say the biggest difference is that with Nitrogen, the entire web site is being served directly by the Erlang runtime. Ruby on Rails has such a mode, but it's intended only for testing. Many other frameworks don't even offer the option of running everything within a single long-running process.
Running the entire web application and its underlying infrastructure within a single long-running process has significant implications on how the site runs:
With Apache, each child gets killed off every N connections, where N is 500 or so, and you can't say whether a given child will always handle all of a given client's requests. Because HTTP is stateless but web apps almost always require some client state, an Apache child must rebuild its view of client state as part of handling a new connection. By default, this means going back to disk for persistent data stored about that client. There are alternatives like memcached, but these aren't built into the core of a LAMP-type stack. With Erlang, nothing is killed off periodically, and Erlang offers standard facilities like Mnesia which provide disk-backed in-memory DBs (see the small Mnesia sketch after these points).
Incidentally, if you're familiar with nginx, it's built on the same principles as Erlang, and it's fast for the same reason. The main difference between nginx and an Erlang instance running a web server is that nginx isn't a programming environment, so it still has to delegate a lot of processing to outside code. That means it shares the same IPC and persistent state problems as Apache.
Because the runtime stays up continuously and is a fully-functional programming environment, you can probably build more parts of your system in Erlang than with a lashed-together LAMP type stack. This magnifies the above benefits. The various parts of your system can coordinate via message passing and Mnesia instead of heavyweight IPC and MySQL, and all the pieces stay up and running continually, leading to less time-consuming state reconstruction.
A dozen or so Apache children all accessing the persistent client state data store is a lock-based hairball. The frameworks all handle locking and such for you transparently, but what they can't hide is the time it takes to do all this correctly.
Erlang is an impure functional language, which implies but does not require data purity; it is also built with multiprocessing in mind, going clear down to the core of the runtime design. These two facts mean you're less likely to spend time waiting on locks in an Erlang based server than one naively built on one of the other frameworks. It is certainly possible to optimize away lock delays in the other systems, but is that really what you want to be doing? Do you want to be on the thousandth team that has to learn how to optimize its web stack after the service becomes popular, or would you rather leave it all up to the tooling so you can spend your time doing something no one else has done yet?
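To make the Mnesia point above concrete, here is a minimal sketch of a disk-backed in-memory table; the table name, record, and helper functions are invented for illustration:

-module(session_store).
-export([init/0, store/2, lookup/1]).

-record(session, {id, data}).

%% Create the schema and a disc_copies table: held in RAM for fast access,
%% but also written to disk so it survives node restarts.
init() ->
    mnesia:create_schema([node()]),
    mnesia:start(),
    mnesia:create_table(session,
                        [{attributes, record_info(fields, session)},
                         {disc_copies, [node()]}]).

%% Reads and writes run as transactions inside the Erlang node;
%% no external database or IPC is involved.
store(Id, Data) ->
    mnesia:transaction(fun() -> mnesia:write(#session{id = Id, data = Data}) end).

lookup(Id) ->
    mnesia:transaction(fun() -> mnesia:read(session, Id) end).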
I, too, was once concerned about clunky Erlang syntax. I've built a couple of tools to alleviate its annoyances for everyday web programming, and perhaps you will find one or both of them helpful:
ErlyDTL is an Erlang implementation of the Django Template Language; it's not available in Nitrogen, but it is available in other frameworks, such as Zotonic, Erlang Web, BeepBeep, and Chicago Boss
Chicago Boss is a full-stack Erlang framework that does a lot of code generation so that you can access data fields with function calls instead of Erlang's rather verbose record syntax (e.g. Person:name() instead of Person#person.name)
Note that Nitrogen does not include a database layer, so it's not really comparable to Rails or Django. For a comprehensive comparison of the database-driven frameworks, check out my answer to this StackOverflow question:
https://stackoverflow.com/questions/1822518/current-state-of-erlang-web-development-frameworks-template-languages/2898271#2898271
I would check out Webmachine if I were you. It is quite simple, fast, and leaves the interface up to you.
Erlang Web should also be considered mature. It is an MVC framework, whereas Nitrogen is more event based. It's a matter of preference.
I haven't used the other tools mentioned here except Webmachine, which I think is a wonderful tool, but it is not a web framework like the others. It is an HTTP processor, and is ideal for building RESTful interfaces.
I would also suggest you give the Erlang syntax a chance. Erlang is one of my favourite languages to use.

Why do so many insist on dragging the JVM into new applications?

For example, I'm running into developers and architects who are scared to death of Rails apps, but love the idea of writing new Grails apps.
From what I've seen, there is a LOT of resource overhead that goes into using the JVM to support languages such as Groovy, JRuby and Jython instead of straight Ruby or Python.
Ruby and Python can both be interpreted on just about any OS, so I don't see any "write once run anywhere" advantage... why bring the hulking JVM along with you?
Java is a much, much more mature platform, with a lot of existing class libraries that could be "dropped in" and used, than, say, Ruby or Python (or even Perl, for that matter). So for people who like using existing code, rather than writing everything themselves, Java is a huge win.
For example, recently I've been looking for something like JAXB for Python or Ruby. In the end, I ended up using JRuby simply because I couldn't find any mature, widely used XML-binding libraries.
The huge advantage of writing code (in any language) for the JVM is that it's usually very easy to tap into the enormous wealth of mature Java libraries out there, if necessary.
And I don't know where you got this idea of a "hulking" JVM with a huge resource overhead. The JIT tends to produce code that is quite fast, and the core JVM is anything but huge by today's standards. It does tend to have a huge memory footprint when running, but that's because modern machines have a lot of RAM and the GC works best when it has a lot of RAM to play with. If desired, the GC can be fine-tuned to hell and back to be more conservative.
As someone else put it: "The best thing about Groovy is that I don't have to use Java. The second best thing about Groovy is that I can use Java".
An assumption that seems to be built into the question is that new projects are greenfield projects. Many organizations have made a huge investment in Java over the last decade+ and require any new project to work within the existing (internal) code ecosystem. As pointed out, there's a huge bonus in all the publicly available Java libraries (whether free/OSS or commercial), but the need to work with existing code and even as a component within an existing system is at least as important (if not more so) to large organizations.
A lot also comes down to the maturity and capability of the platform, which is to say the JVM and everything that comes with it (the entire Java ecosystem). A few examples off the top of my head:
You can plug a remote debugger into a running JVM and get all kinds of information about a running application that is simply impossible with Python, Ruby, etc. Going a step further, there's JMX, a standard way to write code so that objects can be monitored and even tweaked in a live application. Take a look at JConsole and see if you don't drool just a little (despite the ugliness of the interface).
Going even further in this direction, there's OSGi, a standard for writing highly modular code that can be deployed, started, stopped, and even upgraded in a live application. With OSGi you break a large application into many smaller "bundles" which can then be maintained (deployed, started/stopped, upgraded) separately. This is a really big deal in large applications, or any applications that need to remain running at all times.
The platform has very good support for asynchronous, reliable messaging. You get JMS as a baseline, and many excellent and powerful libraries built on it for doing complicated things with very little code (cf. Apache Camel, ServiceMix, Mule, and many others). This is another feature that's extremely valuable in larger applications or those which must run within a larger code universe.
The JVM has real (OS-level) threading, while Python et al. are very limited in this regard (notoriously so). (That being said, shared state concurrency -- threading -- is the wrong approach; cf. Erlang, Alice, Mozart/Oz, etc.)
There are numerous JVM choices beyond the standard Sun implementations, like JRockit, IBM's JVM, etc. This is a developing area with other languages -- Python has Jython, IronPython, even PyPy and Stackless; Ruby has JRuby, Rubinius, and others -- but as good as these are, they can't match the maturity found in the various JVM offerings.
All that being said, I really don't like Java the language and avoid it as much as possible. These days with all the excellent alternative languages for the JVM I don't have to. Groovy gets my vote for its accessibility and tight integration with the platform (and even the language), and because of Grails, which I sometimes like to call "Rails for grownups". I like other JVM languages better, particularly Clojure and Scala, but these aren't as accessible to the average programmer. Scala is popping up a lot lately, though, especially thanks to its high profile use at Twitter, so there's hope for interesting and truly excellent languages making it in larger environments. But that's another topic.
why bring the hulking JVM along with you?
The JVM isn't bloated, nor is it slow. On the contrary, it's a lean, fast, deeply optimized VM. Unfortunately, it's optimized for static OOP languages.
Still, good compilers targeting the JVM do create well-performing programs. I don't know about JRuby, but Jython's goal is to be all-around faster than regular CPython, and they're getting close (it's already faster in several important use cases).
Remember that a good JIT (like those for the JVM) can apply some optimizations unavailable to static C compilers, so getting faster code from it isn't a pipe dream. Of course, a VM optimized for your language should be faster than a 'not-really-generic' VM like the JVM; but there's the maturity issue: the JVM has had a lot of work put into it, while the JITs for Ruby and Python aren't anywhere near as far along.
Unfortunately, there doesn't seem to be any better generic bytecode VM. Microsoft's CLI suffers from limitations similar to the JVM's (IronPython is much slower and heavier than Jython). The best candidate seems to be LLVM. Does anybody know why there aren't more dynamic languages on LLVM? I've seen a couple of Scheme compilers, but they seem to have several problems.
Groovy is NOT an interpreted language; it is a dynamic language. The Groovy compiler produces JVM bytecode that runs inside the JVM just like any other Java class. In this sense, Groovy is just like Java, simply adding syntax to the Java language that is meaningful only to developers and not to the JVM.
Developer productivity and the ease and flexibility of the syntax make Groovy attractive to the Java ecosystem; Ruby or Python would be just as attractive if they produced Java bytecode (see Jython).
Java developers are not really scared of Ruby; as a matter of fact, many quickly embrace Groovy or Jython, both of which are close to Ruby and Python. What they don't care for is leaving such an amazing platform (Java) for a less performant, less scalable, and even less used language such as Ruby (for all its merits).
The big knock on RoR is that it isn't scalable and is hard to deploy. By using the Java platform, you can leverage your existing infrastructure.
grails war
Produces a WAR file that is easily deployed on GlassFish, JBoss, etc.
Ruby and Python can both be interpreted on just about any OS, so I don't see any "write once run anywhere" advantage... why bring the hulking JVM along with you?
Mostly because you want to take advantage of the HUGE existing ecosystem of Java libraries, APIs and products, which dwarfs anything available for Ruby or Python, especially in the enterprise domain.
Also, keep in mind that JRuby and Jython are faster in a lot of benchmarks than the regular C implementations of the languages, especially Ruby (even Ruby 1.9).
Having multiple languages targeting the same virtual machine has a lot of benefits, such as leveraging a common infrastructure, code reuse, shared APIs, the ability to use whatever language is conceptually best for you, or for a specific problem domain, etc.
The same thing happens in the .NET space, with multiple languages targeting the CLR. The Parrot VM project (vaporware) also aims at the same thing, and it's a stated goal of the LLVM project too.
The reason is HotSpot.
It is an engineering tour de force.
The other reason not many have mentioned is the existing infrastructure built around the JVM: if you already have a server running Java stuff, why not use it instead of bringing in yet another platform (like Rails)?
I've encountered this and also been baffled by it, and here's my theory.
Enterprise software is full of Java programmers. Like programmers of all stripes, many Java programmers are convinced that their language is the fastest, the most flexible and the easiest to use--they're not too familiar with other languages but are convinced that those who practice them must be savages and barbarians, because any enlightened person would, of course, use Java.
These people have built vast, complicated Java infrastructures: Rube Goldberg machines of frameworks and auto-generated code, full of byzantine inheritance structures and very, very large XML files.
So, when someone comes along and says "Hey! Let's use a C-based interpreted language! It's fast and has neat libraries and is much quicker for scripting and prototyping!", the Java guy first goes "I have to run a makefile to configure this? QUELLE HORREUR!" Then the reality of having to deploy and host this on servers that are running dated OSes and dated versions of Tomcat and nothing else starts to set in.
"Hey, I know! There's a Java version of this interpreted language! It may break down in the fast lane on the bridge in rush hour, and it sometimes catches fire, but I can get Tomcat to run it. I don't have to dirty my hands with learning non-Java stuff, and I can shoehorn it into the existing infrastructure! Win!"
So, is this the "right" reason for choosing a Java implementation of a scripting language? Probably not. It depends on your definition of "right". But I suspect it's the reason they're chosen more often than snobs like me would like to believe.
