I'm currently reading Real-World Functional Programming, and it briefly mentions infix operators as being one of the main benefits of custom operators. Are there any standards for when to use or not use custom operators in F#? I'm looking for answers equivalent to this.
For reference, here is the quote to which @JohnPalmer is referring, from here:
3.8 Operator Definitions
Avoid defining custom symbolic operators in F#-facing library designs.
Custom operators are essential in some situations and are highly useful notational devices within a large body of implementation code. For new users of a library, named functions are often easier to use. In addition, custom symbolic operators can be hard to document, and users find it more difficult to look up help on operators, due to existing limitations in IDEs and search engines.

As a result, it is generally best to publish your functionality as named functions and members.
Custom infix operators are a nice feature in some situations, but when you use them, you have to be very careful to keep your code readable - so the recommendation from the F# design guidelines applies most of the time. If I were writing Real-World Functional Programming again, I would be a bit less enthusiastic about them, because they really should be used carefully :-).
That said, there are some F# libraries that make good use of custom operators, and sometimes they work quite nicely. I think FParsec (a parser combinator library) is one case - though maybe they have too many of them. Another example is an XML DSL which uses @=.
In general, when you're writing an ordinary F# library, you probably do not want to expose custom operators. However, when you're writing a domain-specific language, custom operators might be useful.
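To make the trade-off concrete, here is a minimal sketch of my own (not from the book or the guidelines) contrasting a custom operator with the named function the guidelines would prefer you expose:

    // A custom infix operator that returns the first option if it is Some,
    // otherwise the second. The operator choice is illustrative only.
    let (<|>) a b =
        match a with
        | Some _ -> a
        | None   -> b

    // The same functionality as a named function - easier to discover,
    // document, and search for, per the design guidelines.
    let orElse first second =
        match first with
        | Some _ -> first
        | None   -> second

    let x = None <|> Some 42          // Some 42
    let y = orElse (Some 1) (Some 2)  // Some 1

The operator reads nicely once you know it, but a newcomer can search for "orElse" and cannot easily search for "<|>".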
Related
I've done some research on Lua programming, but I'm still confused about which paradigms it supports.
In some walkthroughs, I've found that Lua is not made for object-oriented programming. But other people say it can work for OOP too. So I'm looking into which programming paradigms it works best with.
Lua is a "do what you want to" programming language. It doesn't pick paradigms; it is a bag of useful features that give you the freedom to use whatever paradigm that you need. It doesn't have functional language features, but it does feature functions as first-class objects and proper lexical scoping. So you can use it functionally if you so desire. It doesn't have "classes" or other such prototypes, but it does feature ways to encapsulate data and special syntax for calling a function with a "this" object. So you can use it to build objects.
Lua does not dictate what you do with it; that's up to you. It provides low-level tools that allow you to easily build whatever paradigm you wish.
Lua is an imperative language, so it is not ideal for functional programming.
Left to itself, Lua is a procedural language. However, given the simplicity of its data structures (it has just one: the table), it's very easy to add a "layer" on top of it and make it an object-oriented language. The most basic inheritance rule can be achieved in as little as 10 lines of code. There are several libraries out there that provide a more refined experience, though. My library, middleclass, is 140 LOC in total.
Another great way of using Lua is as a scripting language. It's small, fast, uses only standard C, and its standard library is tiny. On the other hand, it doesn't come with "batteries included" like Java does.
Finally, I find it very useful as a data notation language; you can express raw data in a format very similar to JSON.
In general, I think Lua feels very close to JavaScript.
I'm currently developing a general-purpose agent-based programming language (its syntax will be somewhat inspired by Java, and we are also using objects in this language).
Since the beginning of the project we were unsure whether to use ANTLR or Xtext. At that time we found out that Xtext implemented a subset of ANTLR's features, so we decided to use ANTLR for our language, losing the possibility of getting a full-fledged Eclipse editor for free (such a nice feature provided by Xtext).
However, to the best of my knowledge, this summer the Xtext project took a big step forward. Quoting from the link:
What are the limitations of Xtext?
Sven: You can implement almost any kind of programming language or DSL with Xtext. There is one exception, that is if you need to use so-called 'Semantic Predicates', which is a rather complicated thing I don't think is worth being explained here. Very few languages really need this concept. However, the prominent example is C/C++. We want to look into that topic for the next release.
And that is also reinforced in the Xtext documentation:
What is Xtext? No matter if you want to create a small textual domain-specific language (DSL) or you want to implement a full-blown general purpose programming language. With Xtext you can create your very own languages in a snap. Also if you already have an existing language but it lacks decent tool support, you can use Xtext to create a sophisticated Eclipse-based development environment providing editing experience known from modern Java IDEs in a surprisingly short amount of time. We call Xtext a language development framework.
If Xtext has got rid of its past limitations, why is it still not possible to find a complex Xtext grammar for the best-known programming languages (Java, C#, etc.)?
On the ANTLR website you can find tons of such grammar examples; for Xtext, instead, the only sample I was able to find is the one reported in the documentation. So maybe Xtext is still not mature enough to be used for implementing a general-purpose programming language? I'm a bit worried about this... I would not want to start rewriting the grammar in Xtext only to discover that it was not suited for the job.
I think nobody has implemented Java or C++ because it is a lot of work (even with Xtext) and the existing tools and compilers are excellent.
However, you could have a look at Xbase and Xtend; Xbase is the expression language we ship with Xtext. It is built with Xtext and is quite a good proof of what you can build with Xtext. We have done that in about 4 person-months.
I did a couple of screencasts on Xtend:
http://blog.efftinge.de/2011/03/xtend-screencast-part-1-basics.html
http://blog.efftinge.de/2011/03/xtend-screencast-part-2-switch.html
http://blog.efftinge.de/2011/03/xtend-screencast-part-3-rich-strings-ie.html
Note that you can simply embed Xbase expressions into your language.
I can't speak for what Xtext is or does well.
I can speak to the problem of developing robust tools for processing real languages, based on our experience with the DMS Software Reengineering Toolkit, which we imagine as a language-manipulation framework.
First, parsing of real languages usually involves something messy in lexing and/or parsing, due to the historical ways these languages have evolved. Java is pretty clean. C# has context-dependent keywords and a rudimentary preprocessor sort of like C's. C has a full blown preprocessor. C++ is famously "hard to parse" due to ambiguities in the grammar and shenanigans with template syntax. COBOL is fairly ugly, doesn't have any reference grammars, and comes in a variety of dialects. PHP will turn you to stone if you look at it because it is so poorly defined. (DMS has parsers for all of these, used in anger on real applications).
Yet you can parse all of these with most of the available parsing technologies if you try hard enough, usually by abusing the lexer or the parser to achieve your goals (how the GNU guys abused Bison to parse C++ by tangling lexical analysis with symbol table lookup is a nice ugly case in point). But it takes a lot of effort to get the language details right, and the reference manuals are only close approximations of the truth with respect to what the compilers really accept.
If Xtext has a decent parsing engine, one can likely do this with Xtext. A brief perusal of the Xtext site suggests that the lexers and parsers are fairly decent. I didn't see anything about 'Semantic Predicates'; we have them in DMS, and they are lifesavers in some of the really dark corners of parsing. Even using really good parsing technology (we use GLR parsers), it would be very hard to parse COBOL data declarations (extracting their nesting structure during the parse) without them.
You have an interesting problem in that your language isn't well defined yet. That will make your initial parsers somewhat messy, and you'll revise them a lot. Here's where strong parsing technology helps you: if you can revise your grammar easily you can focus on what you want your language to look like, rather than focusing on fighting the lexer and parser. The fact that you can change your language definition means in fact that if Xtext has some limitations, you can probably bend your language syntax to match without huge amounts of pain. ANTLR does have the proven ability to parse a language pretty much as you imagine it, modulo the usual amount of parser hacking.
What is never discussed is what else is needed to process a language for real. The first thing you need to be able to do is construct ASTs, which ANTLR and YACC will help you do; I presume Xtext does also. You also need symbol tables, control- and data-flow analysis (both local and global), and machinery to transform your language into something else (presumably something more executable). You will find just building symbol tables surprisingly hard; C++ has several hundred pages of "how to look up an identifier", and Java generics are a lot tougher to get right than you might expect. You might also want to prettyprint the AST back to source code, if you want to offer refactorings. (EDIT: Here both ANTLR and Xtext offer what amounts to text-template-driven code generation.)
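To make that "other machinery" concrete, here is a toy F# sketch of my own (illustrative only; it bears no relation to how DMS or Xtext actually represent programs) of an AST and the simplest possible symbol table:

    // A toy AST for a tiny expression language.
    type Expr =
        | Num of int
        | Var of string
        | Add of Expr * Expr
        | Let of string * Expr * Expr   // let x = e1 in e2

    // The "symbol table" here is just a map from names to values;
    // real ones must track scopes, types, overloads, namespaces, etc.
    let rec eval (env: Map<string, int>) expr =
        match expr with
        | Num n          -> n
        | Var x          -> env.[x]
        | Add (a, b)     -> eval env a + eval env b
        | Let (x, e1, e2) -> eval (env.Add(x, eval env e1)) e2

    // eval Map.empty (Let ("x", Num 2, Add (Var "x", Num 3))) = 5

Everything beyond this toy - nested scopes, forward references, type resolution - is where the real effort goes.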
Yet these are complex mechanisms that take as much time as, if not more than, building the parser. The reason DMS exists isn't because it can parse (we view this as just the ante in a poker game), but because all of this other stuff is very hard and we wanted to amortize the cost of doing it all (DMS has, we think, excellent support for all of these mechanisms, but YMMV).
On reading the Xtext overview, it sounds like they have some support for symbol tables, but it is unclear what assumptions are behind it (e.g., for C++ you have to support multiple inheritance and namespaces).
If you have already started down the ANTLR road and have something running, I'd be tempted to stay the course; I doubt Xtext will offer you a lot of additional help. If you really, really want Xtext's editor, then you can probably switch at the price of restructuring your grammar (this is a pretty typical price to pay when changing parsing paradigms). Expect most of your work to appear after you get the parser right, in an ad hoc way. I doubt you will find Xtext or ANTLR much different here.
I guess the simplest answer to your question is: many general-purpose languages can be implemented using Xtext. But since there is no general answer to which parser capabilities a general-purpose language needs, there is no general answer to your question.
However, I've got a few pointers:
With Xtext 2.0 (released this summer), Xtext supports syntactic predicates. This was one of the most requested features for handling ambiguous syntax without enabling ANTLR's backtracking.
You might want to look at the brand-new languages Xbase and Xtend, which are (judging by their capabilities) general-purpose and which were developed using Xtext. Sven has some nice screencasts on his blog: http://blog.efftinge.de/
Regarding your question why we don't see Xtext-grammars for Java, C++, etc.:
With Xtext, a language is more than just a grammar, so having a grammar that describes a language's syntax is a good starting point but usually not an artifact valuable enough for shipping. The reason is that with an Xtext grammar you also define the structure of the AST (Abstract Syntax Tree; in fact an Ecore model), including true cross-references. Since this model is the main internal API of your language, people usually spend a lot of thought designing it. Furthermore, to resolve cross-references (aka linking) you need to implement scoping (as it is called in Xtext). Without a proper implementation of scoping, you either cannot have true cross-references in your model or you'll get many linking errors.
I guess my point is that creating a grammar + designing the AST model + implementing scoping is just a little more effort than taking a grammar from some language zoo and translating it to Xtext's syntax.
I know it might sound strange, but I would like to know one thing about this new world that Microsoft Visual F# is entering.
There are many applications of this language that I am going to learn about: parsing, functional programming, structured programming... But what about artificial intelligence?
Are there any applications for Fuzzy Logic? Is F# a good language to be used for Fuzzy Logic applications?
At university we are studying Prolog and similar languages. Prolog is able to express complex queries in very plain and short expressions (by taking advantage of predicates and facts). Is F# able to do this?
Thank you in advance.
Fuzzy Logic: F# doesn't provide any types for implementing fuzzy-logic calculations out of the box, but it is certainly possible to use F# in this domain. The succinctness of F# and the ability to define custom overloaded operators should make code based on fuzzy logic quite nice (see the sketch after the links below). I did a quick search and discovered a few articles implementing fuzzy logic in F#:
Fuzzy Logic in F#, Example 1
FuzzyAdvisor - A Simple Fuzzy Logic Expert System in F#
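As a rough illustration of why custom operators help in this domain, here is a minimal fuzzy-logic sketch of my own (not taken from the articles above); all names and operator choices are made up:

    // Truth values are floats in [0.0, 1.0]; AND is min, OR is max.
    let (&&.) a b : float = min a b   // fuzzy AND
    let (||.) a b : float = max a b   // fuzzy OR
    let notF a : float = 1.0 - a      // fuzzy NOT

    // Triangular membership function: how strongly x belongs to a set
    // centred at c with half-width w.
    let triangular c w x = max 0.0 (1.0 - abs (x - c) / w)

    let warm  = triangular 22.0 8.0    // "warm" temperatures (deg C)
    let humid = triangular 70.0 20.0   // "humid" (% relative humidity)

    // Degree to which 25 deg C at 75 % humidity is "uncomfortable":
    let uncomfortable = warm 25.0 &&. humid 75.0
    // = min 0.625 0.75 = 0.625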
Prolog is a bit of a different question. The power (and also the weakness) of Prolog comes from the fact that it has backtracking built directly into the language. This makes it very nice for implementing search algorithms based on backtracking, but it is also a limitation.
F# doesn't have any direct support for backtracking, but it is quite easy to write algorithms based on backtracking using recursion (which is the main control flow mechanism in both F# and Prolog).
Also, it is possible to implement a domain-specific language for logic programming in F#. This means that you'd implement something like Prolog within F# and then write your search algorithms using this mini-language (possibly using other F# features as needed). You can find more information about similar problems in this question.
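As a small illustration of the point about recursion and sequence expressions, here is a Prolog-style family-tree query written as an F# list comprehension; the facts are made up, and a real backtracking engine would of course be more involved:

    // Facts, Prolog-style: parent(tom, bob). parent(bob, ann). ...
    let parents = [ "tom", "bob"; "bob", "ann"; "bob", "pat" ]

    // grandparent(G, C) :- parent(G, P), parent(P, C).
    // The nested 'for' enumerates all candidate bindings, which plays
    // the role of Prolog's backtracking for this finite database.
    let grandparents =
        [ for (g, p) in parents do
            for (p', c) in parents do
                if p = p' then yield g, c ]
    // [("tom", "ann"); ("tom", "pat")]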
F# is a general-purpose language with some nice language features, such as computation expressions (monads) and quotations. You can assume that it has about the same power as C#.
It is not like Matlab or R, where a lot of pre-implemented libraries are built in. If you want to implement a fuzzy-logic library or other AI algorithms from scratch, F# is a very good language for you, as its language features make life easier.
But if you just want to use a fuzzy-logic library, then using other languages or specialized systems will be more appropriate, because F#, or .NET in general, does not have very good libraries in this area.
I recently began to read some F#-related literature, e.g. "Real-World Functional Programming" and "Expert F#". At the beginning it's easy, because I have some background in Haskell and know C#. But when it comes to "language-oriented programming" I just don't get it. I read some explanations, and it's like reading an academic paper that gets more abstract and strange with every sentence.
Does anybody have an easy example for that kind of stuff and how it compares to existing paradigms? It's not just academic fantasy, is it? ;)
Thanks,
wishi
Language-oriented programming (LOP) can be used to describe any of the following.
Creating an external language (DSL)
This is perhaps the most common use of LOP, and it applies where you have a specific domain - such as UPS shipping packages via transit types through routes, etc. Rather than trying to encode all of these domain-specific entities inside program code, you create a separate programming language for just that domain, so you can encode your problem in a separate, external language.
Creating an internal language
Sometimes you want your program code to look less like 'code' and map more closely to the problem domain. That is, you want the code to 'read more naturally'. A fluent interface is an example of this: Fluent Interface. Also, F# has Active Patterns, which support this quite well.
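For example, here is a small Active Patterns sketch (a hypothetical ordering domain, invented for illustration) that lets call sites read in domain terms rather than raw comparisons:

    // Classify an order amount once, behind domain-flavoured pattern names.
    let (|Small|Medium|Large|) (amount: decimal) =
        if amount < 100m then Small
        elif amount < 1000m then Medium
        else Large

    // Call sites now read like the domain language:
    let shippingCost order =
        match order with
        | Small  -> 9.99m
        | Medium -> 4.99m
        | Large  -> 0m    // free shipping on large orders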
I wrote a blog post on LOP a while back that provides some code examples.
F# has a few mechanisms for doing programming in a style one might call "language-oriented".
First, the syntax niceties (function calls don't need parentheses, you can define your own infix operators, ...) mean that many user-defined libraries have the appearance of embedded DSLs.
Second, the F# "quotations" mechanism can enable you to quote code and then run it with an alternative semantics/evaluation engine.
Third, F# "computation expressions" (aka workflows, monads, ...) also provide a way to give certain blocks of code an alternative semantics.
All of these kinda fall into the EDSL category.
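As a quick sketch of the third mechanism, here is a minimal computation expression, written from scratch for illustration, that gives a block of code short-circuit-on-None semantics:

    // A builder that interprets let! as Option.bind.
    type OptionBuilder() =
        member this.Bind(x, f) = Option.bind f x
        member this.Return(x) = Some x

    let opt = OptionBuilder()

    let tryDivide a b = if b = 0 then None else Some (a / b)

    // The whole block evaluates to None as soon as any step is None.
    let result =
        opt {
            let! x = tryDivide 10 2   // Some 5
            let! y = tryDivide x 0    // None, so the block stops here
            return x + y
        }
    // result = None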
In Object Oriented Programming, you try to model a problem using Objects. You can then connect those Objects together to perform functions...and in the end solve the original problem.
In Language Oriented Programming, rather than use an existing Object Oriented or Functional Programming Language, you design a new Domain Specific Language that is best suited to efficiently solve your problem.
The term "language-oriented programming" may be overloaded, in that it might have different meanings to different people.
But in terms of how I've used it, it means that you create a DSL (http://en.wikipedia.org/wiki/Domain_Specific_Language) before you start to solve your problem.
Once your DSL is created you would then write your program in terms of the DSL.
The idea is that your DSL is more suited to expressing the problem than a general-purpose language would be.
Some examples would be makefile syntax or the Ruby on Rails ActiveRecord class.
I haven't directly used language oriented programming in real-world situations (creating an actual language), but it is useful to think about and helps design better domain-driven objects.
In a sense, any real-world development in Lisp or Scheme can be considered "language-oriented," since you are developing the "language" of your application and its abstract tree as you code along. Cucumber is another real-world example I've heard about.
Please note that there are some problems with this approach (and any domain-driven approach) in real-world development. One major problem that I've dealt with before is the mismatch between the logic that makes sense in the domain and the logic that makes sense in software. Domain (business) logic can be extremely convoluted and senseless - and it causes domain models to break down.
An easy example of a domain-specific language, mentioned here, is SQL. Also: UNIX shell scripts.
Of course, if you are doing a lot of basic ops and have a lot of overlap with the underlying language, it is probably overengineering.
From day 1 of my programming career, I started with object-oriented programming. However, I'm interested in learning other paradigms (something which, as I've said here on SO a number of times, is a good thing, but which I haven't had the time to do). I think I'm not only ready, but also have the time, so I'll be starting functional programming with F#.
However, I'm not sure how to structure, much less design, applications. I'm used to the one-class-per-file and class-noun/function-verb ideas in OO programming. How do you design and structure functional applications?
Read the SICP.
Also, there is a PDF Version available.
You might want to check out a recent blog entry of mine: How does functional programming affect the structure of your code?
At a high level, an OO design methodology is still quite useful for structuring an F# program, but you'll find this breaking down (more exceptions to the rule) as you get down to lower levels.

At a physical level, "one class per file" will not work in all cases, as mutually recursive types need to be defined in the same file (type Class1 = ... and Class2 = ...), and a bit of your code may reside in "free" functions not bound to a particular class (this is what F# "module"s are good for).

The file-ordering constraints in F# will also force you to think critically about the dependencies among the types in your program. This is a double-edged sword: it may take more work/thought to untangle high-level dependencies, but it will yield programs that are organized in a way that always makes them approachable, as the most primitive entities always come first and you can always read a program from 'top to bottom', with new things introduced one by one, rather than just start looking at a directory full of files of code and not know 'where to start'.
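To make the physical-structure points concrete, here is a small sketch (the types are invented) showing the 'type ... and ...' form and free functions grouped in a module:

    // Mutually recursive types must be declared together, in one file.
    type Employee = { Name: string; Department: Department }
    and Department = { Title: string; Manager: Employee option }

    // "Free" functions that belong to no class live in a module.
    module Staffing =
        let isManager (e: Employee) =
            match e.Department.Manager with
            | Some m -> m.Name = e.Name
            | None   -> false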
How to Design Programs is all about this (at tiresome length, using Scheme instead of F#, but the principles carry over). Briefly, your code mirrors your datatypes; this idea goes back to old-fashioned "structured programming", only functional programming is more explicit about it, and with fancier datatypes.
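A tiny F# sketch of that idea, with an invented datatype: the function's shape follows the type's shape, one match case per constructor.

    // The datatype has two constructors, so the function has two cases.
    type Tree =
        | Leaf of int
        | Node of Tree * Tree

    let rec sum tree =
        match tree with
        | Leaf n      -> n
        | Node (l, r) -> sum l + sum r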
Given that modern functional languages (i.e., not Lisps) by default use early-bound polymorphic functions (efficiently), and that object orientation is just a particular way of arranging to have polymorphic functions, it's not really very different, if you know how to design properly encapsulated classes.
Lisps use late binding to achieve a similar effect. To be honest, there's not much difference, except that you don't explicitly declare the structure of types.
If you've programmed extensively with C++ template functions, then you probably have an idea already.
In any case, the answer is small "classes", and instead of modifying internal state, you return a new version with different state.
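In F# this typically means records and copy-and-update expressions, as in this small sketch (the types are invented):

    type Account = { Owner: string; Balance: decimal }

    // No mutation: return a new record sharing everything but Balance.
    let deposit amount account =
        { account with Balance = account.Balance + amount }

    let a  = { Owner = "ann"; Balance = 100m }
    let a' = deposit 50m a   // a is unchanged; a'.Balance = 150m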
F# provides the conventional OO approaches for large-scale structured programming (e.g. interfaces) and does not attempt to provide the experimental approaches pioneered in languages like OCaml (e.g. functors).
Consequently, the large-scale structuring of F# programs is essentially the same as that of C# programs.
Functional programming is a different paradigm for sure. Perhaps the easiest way to wrap your head around it is to insist that the design be laid out using a flow chart. Each function is distinct - no inheritance, no polymorphism. Data is passed from function to function to perform deletions, updates, and insertions, and to create new data.
On structuring functional programs:
While OO languages structure the code with classes, functional languages structure it with modules. Objects contain state and methods, modules contain data types and functions. In both cases the structural units group data types together with related behavior. Both paradigms have tools for building and enforcing abstraction barriers.
I would highly recommend picking a functional programming language you are comfortable with (F#, OCaml, Haskell, or Scheme) and taking a long look at how its standard library is structured.
Compare, for example, the OCaml Stack module with System.Collections.Generic.Stack from .NET or a similar collection in Java.
It is all about pure functions and how to compose them to build larger abstractions. This is actually a hard problem, for which a robust mathematical background helps. Luckily, there are several patterns with deep formal and practical research available. In Functional and Reactive Domain Modeling, Debasish Ghosh explores this topic further and puts together several practical scenarios applying pure functional patterns:
Functional and Reactive Domain Modeling teaches you how to think of the domain model in terms of pure functions and how to compose them to build larger abstractions. You will start with the basics of functional programming and gradually progress to the advanced concepts and patterns that you need to know to implement complex domain models. The book demonstrates how advanced FP patterns like algebraic data types, typeclass-based design, and isolation of side effects can make your model compose for readability and verifiability.