HLint-like tool for F# code?

HLint is a command-line static analysis tool for Haskell that even suggests an appropriately refactored version of the code. Does anyone know of similar command-line tools for linting F# code?

Short answer:
No, there is no such tool yet.
Long answer:
Let's discuss how to build it then.
I did some background research which might be useful.
References
There are a few lint tools in functional languages which can be used as sources of inspiration. However, they tend to go in different directions.
HLint is an advanced tool and its refactoring capability is amazing. Refactoring suggestions are trickier in F# because (1) F# code might have side effects, so equational reasoning is unsound, and (2) when suggesting point-free transformations, the value restriction rules out some otherwise good suggestions. If we accept false positives, it becomes a bit easier.
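For example, the value restriction rejects exactly the kind of eta-reduction an HLint-style tool routinely suggests (a small sketch; the name mapId is made up for illustration):

    // Fine: an explicit parameter keeps the binding a syntactic function,
    // so it generalizes to 'a list -> 'a list.
    let mapId xs = List.map id xs

    // The point-free version a linter might naively suggest is rejected:
    // let mapId = List.map id
    // error FS0030: Value restriction - the binding would need a generic type,
    // but the right-hand side is not a syntactic function.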
In Scala's world, you have Wart Remover and Scala Style. The former focuses on common functional programming mistakes in Scala. The latter has its focus on human errors and inconsistencies (e.g. naming, convention, etc.). I guess Wart Remover is more relevant to F#, since F# is a functional-first programming language. However, a style checker tool is useful on a big code base with multiple developers.
The most relevant lint tool for F# is probably OCaml's style checker, Mascot. It has a big and extensible rule set. Many of these rules are applicable to F# with minor adaptation.
Resources (and the lack thereof)
What we have:
The F# compiler is on GitHub. The relevant component, the F# compiler service, has a NuGet package, so bootstrapping is easy (a minimal parsing sketch follows at the end of this answer). The F# compiler source code is also a good resource, since the compiler's warnings are very good and informative.
There is recent work using the F# compiler, e.g. language bindings, refactoring, code formatting, etc., so we have experience to build on.
We have other lint tools to learn from.
What we don't have:
Good documentation on recommended styles and practices in F# is missing. The design guidelines have been useful, but they aren't complete enough.
Building a lint tool is time-consuming and difficult. Even with a simple, single-purpose tool like Fantomas, it takes a lot of time to process F#'s ASTs correctly.
To sum up, if we define the right scope, creating a simple yet useful lint tool for F# is within reach.
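As a rough sketch of a starting point, the F# compiler service can hand us the untyped AST that lint rules would walk. The namespaces and signatures below follow the older FSharp.Compiler.Service tutorials and have shifted between releases, so treat this as an assumption-laden outline rather than a verified build:

    // Sketch only: API names follow older FSharp.Compiler.Service tutorials
    // and may differ in the version you install.
    open Microsoft.FSharp.Compiler.SourceCodeServices

    let checker = FSharpChecker.Create()

    // Parse a script and return the untyped AST, if parsing produced one.
    let getUntypedTree (file: string) (source: string) =
        let projectOptions =
            checker.GetProjectOptionsFromScript(file, source)
            |> Async.RunSynchronously
        let parseResults =
            checker.ParseFileInProject(file, source, projectOptions)
            |> Async.RunSynchronously
        parseResults.ParseTree   // walk this tree to implement lint rules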
Updates
There is an actively developed linter for F# available at https://github.com/duckmatt/FSharpLint. It seems that my analysis is not too far off :).

Related

What risks exist if I work in a C# shop and attempt to write F# just to rely on ILSpy for conversion?

What risks are involved if I work in a C# shop and I attempt to write a feature in F# and then rely on ILSpy to translate the F# source code to a C# representation?
I would very strongly recommend against doing this.
F# code that has been decompiled into C# tends to be extremely verbose and unreadable. It will be nearly impossible for anyone who doesn't have a copy of the original F# code to understand or maintain it.
Functional code gives you opportunities for code reuse that you wouldn't have in an OO language. The C# code produced by decompiling probably wouldn't offer (m)any avenues of reuse beyond the boundaries of your decompiled F#.
What's idiomatic in F# sometimes isn't in C#; that's particularly true after an intermediate stage of decompilation. The code would likely not pass a review process.
Units of measure and inline functions with static type constraints are both features of the F# compiler rather than something provided by .NET. You might gain some advantage from them by using the decompiled C# directly, but any modifications made to the C# source wouldn't be checked for, e.g., dimensional correctness.
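To illustrate the kind of checking that would be lost, here is a small sketch using units of measure; the F# compiler verifies dimensional consistency at compile time, while the decompiled C# sees only plain doubles:

    [<Measure>] type m      // metres
    [<Measure>] type s      // seconds

    let distance = 100.0<m>
    let time = 9.58<s>
    let speed = distance / time          // inferred as float<m/s>

    // let nonsense = distance + time    // compile-time error in F#:
    //                                    // the units 'm' and 's' do not match.
    // After decompilation to C# these are all just 'double', so nothing
    // stops a later edit from adding metres to seconds.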
I would also second Tomas' suggestion of having a read through this article: http://fsharpforfunandprofit.com/posts/low-risk-ways-to-use-fsharp-at-work/
I would suggest, however, that it could be worth having a conversation with your team/manager(s) about the possibility of introducing F# at your workplace.
My personal experience of using F# commercially is that development time often tends to be shorter (sometimes substantially) compared to the same project done in C# and it's usually easier to verify and test the result. These are advantages that are very appealing commercially.

What's the easiest way to build an F# compiler that runs on the JVM and generates Java bytecode?

The current F# Compiler is written in F#, is open source and runs on .Net and Mono, allowing it to execute on many platforms including Windows, Mac and Linux. F#'s Code Quotations mechanism has been used to compile F# to JavaScript in projects like WebSharper, Pit and FunScript. There also appears to be some interest in running F# code on the JVM.
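For context, a quotation reifies F# code as a tree that such a translator can traverse; a minimal illustration using the standard Patterns module (printed output shown in a comment):

    open Microsoft.FSharp.Quotations.Patterns

    // <@@ ... @@> builds an untyped quotation: a tree describing the code,
    // rather than the result of running it.
    let expr = <@@ 1 + 2 @@>

    match expr with
    | Call (None, methodInfo, args) ->
        printfn "static call to %s with %d argument(s)" methodInfo.Name args.Length
    | _ ->
        printfn "some other node"
    // prints: static call to op_Addition with 2 argument(s)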
I believe a version of the OCaml compiler was originally used to bootstrap the F# compiler.
If someone wanted to build an F# compiler that runs on the JVM would it be easier to:
Modify the existing F# compiler to emit Java bytecode and then compile the F# compiler with it?
Use a JVM-based ML compiler like Yeti to bootstrap a minimal F# compiler on the JVM?
Re-write the F# compiler from scratch in Java as the fjord project appears to be attempting?
Something else?
Another option that should probably be considered is to convert .NET CLR bytecode into JVM bytecode, like http://www.ikvm.net does in the other direction (JVM to CLR bytecode), although this approach has been considered and dismissed by the fjord project's owner.
Getting buy-in from the top with option 1), having the F# compiler team support pluggable backends that could emit Java bytecode, sounds in theory like it would produce the most polished solution.
But if you look at other languages that have been ported to different platforms, this is rarely how it happens; most of the time it's been a rewrite from scratch. That is also likely because the original language team has no interest in supporting alternative platforms themselves, and because the original host implementation might not be able to support multiple backends, or is already too slow for that to be a feasible option to start with.
My hunch is that the best route is a combination of rewriting from scratch and sharing as much code and automation as possible with the original implementation. For example, if the test suites could be reused for both implementations, it would take a lot of the burden off the JVM port and go a long way toward ensuring language parity.
If I really had to do this, I would probably start with the #1 approach - add a JVM backend to the existing compiler. But I would also try to argue for a different target VM.
Quotations are not very relevant - as an author of WebSharper I can assure you that while quotations can give you a nice F#-like language to program with, they are restrictive, and not optimized. I imagine that for potential JVM F# users the bar would be a lot higher - full language compatibility and comparable performance. This is very hard.
Take tail calls, for example. In WebSharper we apply heuristics to optimize some local tail calls into loops in JavaScript, but that is not enough: you cannot in general rely on TCO, as you can in ordinary F# libraries. This is OK for WebSharper, as our users do not expect to have full F#, but it will not be OK for a JVM F# port. I believe most JVM implementations do not do TCO, so it will have to be implemented with some indirection, introducing a performance hit.
The bytecode recompilation approach mentioned by @mythz sounds very attractive as it allows more than just porting F#; ideally it allows porting more .NET software to the JVM. I have worked quite a bit with .NET bytecode analysis on an internal WebSharper 3.0 project; we are looking at the option of compiling .NET bytecode instead of F# quotations to JavaScript. But there are huge challenges there:
A lot of code in the BCL is opaque (native), and you cannot decompile it.
The generics model is fairly complicated. I have implemented a JavaScript runtime that models class and method generics, instantiation, type generation, and basic reflection with some precision and reasonable performance. This was difficult enough in dynamic JavaScript with closures, and it seems quite difficult to do in a performant way on the JVM - but maybe I just do not see a simple solution.
Value types create significant complications in the bytecode. I have yet to figure this one out for WebSharper 3.0. They cannot be ignored either, as they are used extensively by many libraries you would want ported.
Similarly, basic reflection is used in many real-world .NET libraries, and it is a nightmare to cross-compile: lots of native code, plus proper support for generics and value types.
Also, the bytecode approach does not remove the question of how to implement tail calls. AFAIK, Scala does not implement general tail calls. They certainly have the talent and the funding to do that; the fact that they do not tells me a lot about how practical TCO is on the JVM. For our .NET-to-JavaScript port I will probably go a similar route: no TCO guarantees unless you specifically ask for trampolining, which will work but will cost you an order of magnitude (or two) in performance.
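To make the trampolining point a bit more concrete, here is a small hypothetical F# sketch (the Step type and run function are invented names): a directly tail-recursive loop, which the .NET backend runs in constant stack space, next to the kind of explicit trampoline a backend without TCO has to fall back on, paying for the extra allocations and indirection.

    // Tail-recursive as written; the F# compiler turns this into a loop or
    // a .NET tail call, so it runs in constant stack space.
    let rec count acc n =
        if n = 0 then acc else count (acc + 1) (n - 1)

    // A hand-rolled trampoline of the kind a backend without TCO might emit.
    type Step<'a> =
        | Done of 'a
        | More of (unit -> Step<'a>)

    let run step =
        let mutable current = step
        let mutable result = None
        while result.IsNone do
            match current with
            | Done v -> result <- Some v
            | More k -> current <- k ()
        result.Value

    let countTrampolined n =
        let rec go acc n =
            if n = 0 then Done acc else More (fun () -> go (acc + 1) (n - 1))
        run (go 0 n)

    // count 0 1000000 and countTrampolined 1000000 both return 1000000,
    // but the trampolined version allocates a closure per iteration.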
There is a project that compiles OCaml to the JVM, OCaml-Java: it's pretty complete and in particular can compile the OCaml compiler's sources (which are written in OCaml). I'm not sure which aspects of the F# language you're interested in, but if you're mainly looking to get a mature, strictly typed functional language onto the JVM, that may be a good option.
I suspect any approach would be a lot of work, but I think your first suggestion is the only one that would avoid introducing lots of additional incompatibilities and bugs. The compiler's pretty complex and there are a lot of corner cases around overload resolution, etc. (and the spec probably has gaps too), so it seems very unlikely that a new implementation would have consistently compatible semantics.
Modify the existing F# compiler to emit Java bytecode and then compile the F# compiler with it?
Use a JVM-based ML compiler like Yeti to bootstrap a minimal F# compiler on the JVM?
Porting the compiler shouldn't be that hard if it is written in F#.
I'd probably go the first way, because this is the only way one could hope to keep the new compiler in sync with the .NET F# compiler.
Re-write the F# compiler from scratch in Java as the fjord project appears to be attempting?
This is certainly the least elegant approach, IMHO.
Something else?
When the compiler is done, you'll have 90% of the work left to do.
For example, I don't know much F#, but I assume it is easy to use any .NET library out there.
That means, the basic problem is to port the .NET ecosystem, somehow.
I was looking for something along similar lines, though it was more like an F#-to-Akka translator/compiler. As far as F# -> JVM is concerned, I came across two not-quite-production-ready options:
1. F# -> Fjord -> JVM
2. F# -> Funscript -> Vert.X -> JVM

Can Xtext be used for parsing general purpose programming languages?

I'm currently developing a general-purpose agent-based programming language (its syntax will be somewhat inspired by Java, and we are also using objects in this language).
Since the beginning of the project we were unsure whether to use ANTLR or Xtext. At the time we found that Xtext implemented a subset of ANTLR's features, so we decided to use ANTLR for our language, giving up the full-fledged Eclipse editor that Xtext provides for free (such a nice feature).
However, to the best of my knowledge, this summer the Xtext project took a big step forward. Quoting from the link:
What are the limitations of Xtext?
Sven: You can implement almost any kind of programming language or DSL
with Xtext. There is one exception, that is if you need to use so
called 'Semantic Predicates' which is a rather complicated thing I
don't think is worth being explained here. Very few languages really
need this concept. However the prominent example is C/C++. We want to
look into that topic for the next release.
And that is also reinforced in the Xtext documentation:
What is Xtext? No matter if you want to create a small textual domain-specific language (DSL) or you want to implement a full-blown
general purpose programming language. With Xtext you can create your
very own languages in a snap. Also if you already have an existing
language but it lacks decent tool support, you can use Xtext to create
a sophisticated Eclipse-based development environment providing
editing experience known from modern Java IDEs in a surprisingly short
amount of time. We call Xtext a language development framework.
If Xtext has got rid of its past limitations why is it still not possible to find a complex Xtext grammar for the best known programming languages (Java, C#, etc.)?
On the ANTLR website you can find tons of such grammar examples; as far as Xtext is concerned, the only sample I was able to find is the one in the documentation. So maybe Xtext is still not mature enough to be used for implementing a general-purpose programming language? I'm a bit worried about this... I would not want to start rewriting the grammar in Xtext only to discover that it is not suited for the job.
I think nobody implemented Java or C++ because it is a lot of work (even with Xtext) and the existing tools and compilers are excellent.
However, you could have a look at Xbase and Xtend, which is the expression language we ship with Xtext. It is built with Xtext and is quite a good proof of what you can build with Xtext. We did that in about 4 person-months.
I did a couple of screencasts on Xtend:
http://blog.efftinge.de/2011/03/xtend-screencast-part-1-basics.html
http://blog.efftinge.de/2011/03/xtend-screencast-part-2-switch.html
http://blog.efftinge.de/2011/03/xtend-screencast-part-3-rich-strings-ie.html
Note that you can simply embed Xbase expressions into your language.
I can't speak for what Xtext is or does well.
I can speak to the problem of developing robust tools for processing real languages, based on our experience with the DMS Software Reengineering Toolkit, which we imagine is a language manipulation framework.
First, parsing of real languages usually involves something messy in lexing and/or parsing, due to the historical ways these languages have evolved. Java is pretty clean. C# has context-dependent keywords and a rudimentary preprocessor sort of like C's. C has a full blown preprocessor. C++ is famously "hard to parse" due to ambiguities in the grammar and shenanigans with template syntax. COBOL is fairly ugly, doesn't have any reference grammars, and comes in a variety of dialects. PHP will turn you to stone if you look at it because it is so poorly defined. (DMS has parsers for all of these, used in anger on real applications).
Yet you can parse all of these with most of the available parsing technologies if you try hard enough, usually by abusing the lexer or the parser to achieve your goals (how the GNU guys abused Bison to parse C++ by tangling lexical analysis with symbol table lookup is a nice ugly case in point). But it takes a lot of effort to get the language details right, and the reference manuals are only close approximations of the truth with respect to what the compilers really accept.
If Xtext has a decent parsing engine, one can likely do this with Xtext. A brief perusal of the Xtext site suggests that the lexers and parsers are fairly decent. I didn't see anything about "semantic predicates"; we have them in DMS and they are lifesavers in some of the really dark corners of parsing. Even using really good parsing technology (we use GLR parsers), it would be very hard to parse COBOL data declarations (extracting their nesting structure during the parse) without them.
You have an interesting problem in that your language isn't well defined yet. That will make your initial parsers somewhat messy, and you'll revise them a lot. Here's where strong parsing technology helps you: if you can revise your grammar easily you can focus on what you want your language to look like, rather than focusing on fighting the lexer and parser. The fact that you can change your language definition means in fact that if Xtext has some limitations, you can probably bend your language syntax to match without huge amounts of pain. ANTLR does have the proven ability to parse a language pretty much as you imagine it, modulo the usual amount of parser hacking.
What is never discussed is what else is needed to process a language for real. The first thing you need to be able to do is construct ASTs, which ANTLR and YACC will help you do; I presume Xtext does also. You also need symbol tables, control and data flow analysis (both local and global), and machinery to transform your language into something else (presumably something more executable). You will find just doing symbol tables surprisingly hard: C++ has several hundred pages of "how to look up an identifier", and Java generics are a lot tougher to get right than you might expect. You might also want to prettyprint the AST back to source code, if you want to offer refactorings. (EDIT: Here both ANTLR and Xtext offer what amounts to text-template driven code generation.)
Yet these are complex mechanisms that take as much time, if not more than building the parser. The reason DMS exists isn't because it can parse (we view this just as the ante in a poker game), but because all of this other stuff is very hard and we wanted to amortize the cost of doing it all (DMS has, we think, excellent support for all of these mechanisms but YMMV).
On reading the Xtext overview, it sounds like they have some support for symbol tables, but it is unclear what kind of assumptions are behind it (e.g., for C++ you have to support multiple inheritance and namespaces).
If you are already started down the ANTLR road and have something running, I'd be tempted to stay the course; I doubt if Xtext will offer you a lot of additional help. If you really really want Xtext's editor, then you can probably switch at the price of restructuring what grammar you have (this is a pretty typical price to pay when changing parsing paradigms). Expect most of your work to appear after you get the parser right, in an ad hoc way. I doubt you will find Xtext or ANTLR much different here.
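To make the "parsing is only the ante" point above concrete, here is a tiny hypothetical sketch (written in F# purely for brevity; all names are invented) of the next layer up: an AST, a toy symbol table, and a name-resolution pass. Real scoping rules are, of course, vastly more involved.

    // A miniature expression AST.
    type Expr =
        | Num of int
        | Var of string
        | Let of string * Expr * Expr      // let x = e1 in e2
        | Add of Expr * Expr

    // A toy "symbol table": just the list of names currently in scope.
    // Returns a list of name-resolution errors.
    let rec resolve scope expr =
        match expr with
        | Num _ -> []
        | Var name when List.contains name scope -> []
        | Var name -> [sprintf "unbound identifier '%s'" name]
        | Let (name, bound, body) -> resolve scope bound @ resolve (name :: scope) body
        | Add (a, b) -> resolve scope a @ resolve scope b

    // resolve [] (Let ("x", Num 1, Add (Var "x", Var "y")))
    //   returns ["unbound identifier 'y'"]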
I guess the simplest answer to your question is: many general-purpose languages can be implemented using Xtext. But since there is no general answer to which parser capabilities a general-purpose language needs, there is no general answer to your question.
However, I've got a few pointers:
With Xtext 2.0 (released this summer), Xtext supports syntactic predicates. This is one of the most requested features for handling ambiguous syntax without enabling ANTLR's backtracking.
You might want to look at the brand-new languages Xbase and Xtend, which are (judging by their capabilities) general-purpose and which are developed using Xtext. Sven has some nice screencasts on his blog: http://blog.efftinge.de/
Regarding your question why we don't see Xtext-grammars for Java, C++, etc.:
With Xtext, a language is more than just a grammar, so just having a grammar that describes a language's syntax is a good starting point but usually not an artifact valuable enough for shipping. The reason is that with an Xtext grammar you also define the structure of the AST (Abstract Syntax Tree, an Ecore model in fact), including true cross references. Since this model is the main internal API of your language, people usually put a lot of thought into designing it. Furthermore, to resolve cross references (aka linking), you need to implement scoping (as it is called in Xtext). Without a proper implementation of scoping you either cannot have true cross references in your model, or you'll get many linking errors.
I guess my point is that creating a grammar + designing the AST model + implementing scoping is a bit more effort than taking a grammar from some language zoo and translating it to Xtext's syntax.

Grammar/own-written parser?

I'm doing some small projects which involve having different syntaxes for something; however, sometimes these syntaxes are so simple that using a parser generator might be overkill.
Now, when should I use a hand-made parser, and when should I use a parser generator?
Thanks,
William van Doorn
There is no hard-and-fast answer, other than "use whatever is easiest for the particular situation".
My experience is that parsers tend to get more complicated over their lifetimes, so using a parser generator up front usually pays off. Even if the language doesn't get more complicated, using a generator forces you to create a formal specification of the syntax, which is itself valuable.
The downsides are that other programmers may not know how to use the generator, so it makes it difficult for others to help out, and it makes your project dependent on that generator.
It's worth coding the parser by hand if, and only if, you're super-keen to have it be extremely fast even on a machine of very modest speed. For example, in this article on the history of Turbo Pascal from before it got its name, you can see how and why the prototype impressed the small (then Danish) firm "Borland" to hire the prototype's author (Anders Hejlsberg), fully develop the compiler, and launch it as its main product, and I quote...:
with no great expectations I hit the
compile key - AND THEN I WAS
COMPLETELY FLOORED! My test program,
that took minutes to compile and link
using Digital Research’s Pascal MT+,
was compiled and running before I
could blink an eye! That was a great
WOW moment!
Turbo Pascal's amazing compile speed -- coming first and foremost from a carefully hand-coded and highly tuned recursive descent parser written in assembly language -- allowed it to use a very different strategy from most compilers: no separate compilation pass generating object files and libraries, and then a linker to put them together; rather, Turbo Pascal 1.0 was a single-pass compiler that turned source code directly into a single executable binary.
I remember just the same amazing experience on the tiny personal computers of that era (when a Z80, 64K of RAM, and two floppies was a lot ;-) -- Turbo Pascal, with its amazing parser and the IDE and everything else, fit comfortably in memory together with a substantial program in both source and compiled form -- no floppies were needed, which meant many orders of magnitude of difference in program turnaround time.
If Hejlsberg had stuck to what was already the traditional wisdom at the time -- always use parser generators -- Turbo Pascal would probably never have emerged as a commercial product, and definitely not achieved the dominance in the Pascal world it enjoyed for years.
Of course, on a typical PC of today, such extreme parsing speed would not be needed for most compilers. Possible exceptions include compilers that must run seamlessly as part of an "interpreter-like" environment (the simple compilers for languages such as Perl and Python are typically hand-coded, to substantial extents, for that reason -- that was an implementation choice that made them viable in the '90s, although today it's not clear it's still needed), or compilers that run on very limited hardware resources, such as smartphones or low-end netbooks.
In the vast majority of cases in which you'll be writing a compiler, none of these performance considerations probably apply, and you'll be happier with a parser generator.
Your question title suggests that using a grammar is optional. It really isn't - even if I was going to implement a tiny language, I'd sketch out a grammar on a single sheet of paper.
As for when to use parser generators, this is really personal preference. Many people believe in hand-writing recursive descent parsers, rather than using the table-driven approach, for example. The important thing is to be comfortable in understanding the capabilities of the generator.
And don't think that using parser generators is somehow the more professional, or even the easier, approach. Bjarne Stroustrup, when writing the first C++ compiler, intended to use recursive descent but was talked out of it by some keen colleagues at Bell Labs, much to his eventual chagrin. See section 3.3.2 of The Design and Evolution of C++ for more details.
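To make the hand-written recursive-descent option mentioned above concrete, here is a small hypothetical sketch in F# for a toy arithmetic grammar (tokenizing is omitted; it parses a ready-made token list and evaluates as it goes):

    type Token =
        | TNum of float
        | TPlus | TMinus | TStar | TSlash
        | TLParen | TRParen

    // expr   ::= term   (('+' | '-') term)*
    // term   ::= factor (('*' | '/') factor)*
    // factor ::= number | '(' expr ')'
    let rec parseExpr tokens =
        let first, rest = parseTerm tokens
        let rec loop acc toks =
            match toks with
            | TPlus :: tl  -> let v, tl' = parseTerm tl in loop (acc + v) tl'
            | TMinus :: tl -> let v, tl' = parseTerm tl in loop (acc - v) tl'
            | _ -> acc, toks
        loop first rest

    and parseTerm tokens =
        let first, rest = parseFactor tokens
        let rec loop acc toks =
            match toks with
            | TStar :: tl  -> let v, tl' = parseFactor tl in loop (acc * v) tl'
            | TSlash :: tl -> let v, tl' = parseFactor tl in loop (acc / v) tl'
            | _ -> acc, toks
        loop first rest

    and parseFactor tokens =
        match tokens with
        | TNum n :: tl -> n, tl
        | TLParen :: tl ->
            let v, rest = parseExpr tl
            (match rest with
             | TRParen :: tl' -> v, tl'
             | _ -> failwith "expected ')'")
        | _ -> failwith "expected a number or '('"

    // Example: 1 + 2 * 3
    // parseExpr [TNum 1.0; TPlus; TNum 2.0; TStar; TNum 3.0]  returns (7.0, [])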

If you already know LISP, why would you also want to learn F#?

What is the added value for learning F# when you are already familiar with LISP?
Static typing (with type inference)
Algebraic data types
Pattern matching
Extensible pattern matching with active patterns.
Currying (with a nice syntax)
Monadic programming, called 'workflows', provides a nice way to do asynchronous programming.
A lot of these are relatively recent developments in the programming language world. This is something you'll see in F# that you won't in Lisp, especially Common Lisp, because the F# standard is still under development. As a result, you'll find there is quite a bit to learn. Of course things like ADTs, pattern matching, monads and currying can be built as a library in Lisp, but it's nicer to learn how to use them in a language where they are conveniently built in.
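As a quick taste of a few of those features, here is a small self-contained sketch (all names are invented for illustration):

    type Shape =                          // algebraic data type
        | Circle of float
        | Rect of float * float

    let area shape =                      // pattern matching; types inferred
        match shape with
        | Circle r -> System.Math.PI * r * r
        | Rect (w, h) -> w * h

    let (|Even|Odd|) n =                  // extensible (active) pattern
        if n % 2 = 0 then Even else Odd

    let describe = function
        | Even -> "even"
        | Odd  -> "odd"

    let slowAdd x y =                     // an asynchronous 'workflow'
        async {
            do! Async.Sleep 100
            return x + y
        }

    // area (Rect (2.0, 3.0)) = 6.0;  describe 4 = "even"
    // slowAdd 1 2 |> Async.RunSynchronously returns 3 after ~100 ms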
The biggest advantage of learning F# for real-world use is its integration with .NET.
Comparing Lisp directly to F# isn't really fair, because at the end of the day with enough time you could write the same app in either language.
However, you should learn F# for the same reasons that a C# or Java developer should learn it - because it allows functional programming on the .NET platform. I'm not 100% familiar with Lisp, but I assume it has some of the same problems as OCaml in that there isn't stellar library support. How do you do Database access in Lisp? What about high-performance graphics?
If you want to learn more about 'Why .NET', check out this SO question.
If you knew F# and Lisp, you'd find this a rather strange question to ask.
As others have pointed out, Lisp is dynamically typed. More importantly, the unique feature of Lisp is that it's homoiconic: Lisp code is a fundamental Lisp data type (a list). The macro system takes advantage of that by letting you write code which executes at compile-time and modifies other code.
F# has nothing like this - it's a statically typed language that borrows a lot of ideas from ML and Haskell and runs on .NET.
What you are asking is akin to "Why do I need to learn to use a spoon if I know how to use a fork?"
Given that LISP is dynamically typed and F# is statically typed, I find such comparisons strange.
If I were switching from Lisp to F#, it would be solely because I had a task on my hands that hugely benefitted from some .NET-only library.
But I don't, so I'm not.
Money. F# code is already more valuable than Lisp code and this gap will widen very rapidly as F# sees widespread adoption.
In other words, you have a much better chance of earning a stable income using F# than using Lisp.
Cheers,
Jon Harrop.
F# is a very different language compared to most Lisp dialects. So F# gives you a very different angle of programming - an angle that you won't learn from Lisp. Most Lisp dialects are best used for incremental, interactive development of symbolic software. At the same time most Lisp dialects are not Functional Programming Languages, but more like multi-paradigm languages - with different dialects placing different weight on supporting FPL features (free of side effects, immutable data structures, algebraic data types, ...). Thus most Lisp dialects either lack static typing or don't put much emphasis on it.
So, if you know some Lisp dialect, then learning F# can make a lot of sense. Just don't think that much of your Lisp knowledge applies to F#, since F# is a very different language. Just as an imperative programmer used to C or Java needs to unlearn some ideas when learning Lisp, one also needs to unlearn Lisp habits (no types, side effects, macros, ...) when using F#. F# is also driven by Microsoft and takes advantage of the .NET framework.
F# has the benefit that .NET development (in general) is very widely adopted, easily available, and more mass market.
If you want to code F#, you can get Visual Studio, which many developers will already have...as opposed to getting the LISP environment up and running.
Additionally, existing .NET developers are much more likely to look at F# than LISP, if that means anything to you.
(This is coming from a .NET developer who coded, and loved, LISP, while in college).
I'm not sure that you would. If you find F# interesting, that would be a reason. If your work requires it, that would be a reason. If you think it would make you more productive or bring you added value over your current knowledge, that would be a reason.
But if you don't find F# interesting, your work doesn't require it and you don't think it would make you more productive or bring you added value, then why would you?
If, on the other hand, the question is what F# gives you that Lisp doesn't, then type inference, pattern matching and integration with the rest of the .NET framework should be considered.
I know this thread is old, but since I stumbled on it I just wanted to comment on my reasons. I am learning F# simply for professional opportunities, since .NET carries a lot of weight in a category of companies that dominate my field. The functional paradigm has been growing in use among more quantitatively and data-oriented companies, and I'd like to be one of the early adopters of this trend. Currently there doesn't exist a strong functional language that fully and safely integrates with the .NET library. I actually attempted to use some .NET from Lisp code and it's really a pain, because the FFI only supports C primitives while .NET interoperability requires an 'interface' construct; even though I know how to do this in C, it's really a huge pain. It would be really, really good if Lisp went the extra mile in its next standard and required a C++ class (including virtual functions with vtables) and a C#-style interface type in its FFI. Maybe even throw in a Java-style interface type too. This would allow complete interoperability with the .NET library and make Lisp a strong contender as a large-scale language. With that said, coming from a Lisp background made learning F# rather easy. And I like how F# has gone the extra mile to provide the types you commonly see in quantitative work. I believe F# was created with mathematical work in mind, and that in itself has value over Lisp.
One way to look at this (the original question) is to match up the language (and associated tools and platforms) to the immediate task. If the task requires an overwhelming percentage of .NET code, and it would require less shoe-horning in one language than another to meet the task head-on, then take the path of least resistance (F#). If you don't need .NET capabilities, and you're comfortable working with LISP and there's no arm-bending to move away from it, keep using it.
Not really much different from comparing a hammer with a wrench. Pick the tool that fits the job most effectively. Trying to pick a tool that's objectively "best" is nonsense. And in any case, in 20 years, all of the currently "hot" languages might be outdated anyway.
