Why does it seem like the order of the source-file arguments matters for F#? It doesn't matter for C# (which uses the same compilation model). When I try this:
// main.fs
module Main
let main = Printer.print_repeatedly 5 "hello, world"
// printer.fs
module Printer
let print_repeatedly n str = for x in 1..n do printfn "%s" str
and then run the compiler (both Microsoft's and Mono's) with main.fs preceding printer.fs, I get an error:
main.fs(4,12): error FS0039: The namespace or module 'Printer' is not defined
If I put printer.fs before main.fs on the command line, it's fine. Is there a reason the compiler requires this for F#?
In F#, the order of files passed to the compiler absolutely does matter: the F# compiler reads programs strictly left-to-right and top-to-bottom. Values and types may only refer to those defined before them, unless you explicitly mark a mutually recursive relationship with rec and and.
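For example, within a single file a mutually recursive pair must be marked explicitly (an illustrative sketch, not from the question):
let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)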
If you come from C#, this may seem like a limitation at first, but in practice it turns out to be a remarkably effective way to enforce code organization and to prevent tangled recursive references.
Note that forward references are a fairly modern feature of programming languages and compilers (you don't get them for "free"). If you have ever worked with early C++ compilers, you may remember needing forward declarations; having the compiler do this work costs memory, which has not always been so abundant.
While looking at the poor runtime performance of some PureScript code I wrote, I noticed that the generated JavaScript contains dictionary passing for overloaded definitions, e.g. for monad transformers. However, all my exported functions are monomorphic, so the compiler should have no trouble specializing these overloaded occurrences.
If this were Haskell compiled with GHC, I'd check optimization flags and things like making sure nothing is marked NOINLINE and that unfoldings are properly exposed for specialisable definitions. What are the equivalent techniques for PureScript?
PureScript v0.15.3 has an optimization for "common subexpression elimination for expressions created by the compiler in the process of creating and using typeclass dictionaries." Maybe that will speed up your code.
I'm building a toy language and I want it to have pattern matching. I could build the whole thing myself (though I don't know how), but since I'm implementing it in F#, I wonder whether I can defer pattern matching to F# itself.
So, I have an interpreter and my custom syntax. Given an AST, is it possible to use F# to perform the pattern matching for me?
Another way to look at this: is it possible to use F# pattern matching from C# and other .NET languages?
For example, if the program is (in invented syntax almost like F#):
case x with
( 1 , 2 , Three):
print("Found 1, 2, or 3!")
else var1:
print("%d" % var1)
Is it possible to do
matched, error = F#MagicHere.Match(EvaluateMyAST)
I'm not exactly sure if I understand your question correctly, but I suspect you could use F# Compiler Service to do what you need. Basically, the compiler service lets you call some of the tasks that the compiler performs (and it is a normal .NET library).
The first step would be to turn your AST into valid F# code - I guess you could find a systematic way of doing this (but it requires some thinking). Then you can use F.C.S to:
Type-check the expression, which gives you warnings for overlapping cases and missing cases (e.g. when you have an incomplete pattern match).
It gives you the AST of the pattern matching and I believe you can even get a decision tree that you can interpret to evaluate the pattern matching.
Alternatively, you could use F.C.S to compile the code and run it (provided that you can translate your DSL to F# code)
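For instance, the invented syntax from the question could be translated into F# source roughly like this (a sketch only; the exact mapping depends on the semantics of your DSL, and I'm assuming "( 1 , 2 , Three)" means "x is 1, 2, or 3"):
match x with
| 1 | 2 | 3 -> printfn "Found 1, 2, or 3!"
| var1 -> printfn "%d" var1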
Reading the sources of the Array2D module, I stumbled upon this interesting construct in the implementation of many core functions, for example:
[<CompiledName("Get")>]
let get (array: 'T[,]) (n:int) (m:int) = (# "ldelem.multi 2 !0" type ('T) array n m : 'T #)
I can only assume that this is the syntax for inlining CIL, used here obviously for performance. However, when I tried to use this syntax in my own program, I got a warning:
warning FS0042: This construct is deprecated: it is only for use in the F# library
What exactly is this? Is there any detailed documentation?
I think that this has 2 purposes:
These functions compile down to exactly one CIL instruction, which has to be encoded somewhere, and encoding it at the source seems best.
It allows for some extra trickery with defining polymorphic Add functions in a high performance way which is hard with the F# type system.
You can actually use this yourself, but you have to specify the (undocumented) --compiling-fslib flag and the --standalone flag when compiling your code.
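For comparison, outside FSharp.Core the same element access is normally written with ordinary indexing rather than embedded IL (a sketch, not the library source):
let get (array: 'T[,]) (n: int) (m: int) : 'T = array.[n, m]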
I've found some details in usenet archives: http://osdir.com/ml/lang.fsharp.general/2008-01/msg00009.html
Embedded IL in F# code. Is this feature officially supported?
Not really. The 99.9% purpose of this feature is for operations defined in FSharp.Core.dll (called fslib.dll in 1.9.2.9 and before).
Historically it has been useful to allow end-users to embed IL in order to access .NET IL functionality not accessible by F# library or language constructs using their own embedded IL. The need for this is becoming much more rare, indeed almost non-existent, now that the F# library has matured a bit more. We expect this to continue to be the case. It's even possible that we will make this a library-only feature in the "product" version of F#, though we have not yet made a final decision in this regard.
This was a message from Don Syme, dated January of 2008.
I'm following a language called 'elm', which is an attempt to bring a Haskell-esque syntax and FRP to JavaScript. There has been some discussion there about implementing the forward pipeline operator from F#, but the language designer has concerns about the increased cost (I assume in compilation time or compiler implementation complexity) compared to the more standard (in other FP languages, at least) reverse pipeline operator, which elm already implements. Can anyone speak to this? [Feel free to post directly to that thread as well, or I will paste back the best answers if no one else does.]
https://groups.google.com/forum/?fromgroups=#!topic/elm-discuss/Kt0MbDyRpO4
Thanks!
In the discussion you reference, I see Evan poses two challenges:
Show me some F# project that uses it
Find some credible F# programmer talking about why it is a good idea and what costs come with it (blog post or something).
I'd answer as follows:
The forward-pipe idiom is very common in F# programming, both for stylistic (we like it) and practical (it helps type inference) reasons. Just about any F# project you'll find uses it frequently. Certainly all of my open-source projects use it (Unquote, FsEye, NL). No doubt you'll find the same across the F# projects on GitHub, including the F# compiler source itself.
Brian, a developer on the F# compiler team at Microsoft, blogged about Pipelining in F# back in 2008; it is a still very interesting and relevant post which relates F# pipes to POSIX pipes. In my own estimation, there is very little cost to implementing a pipe operator. In the F# compiler this is certainly true in every sense (it's a one-line, inline function definition).
The pipeline operator is actually incredibly simple - here is the standard definition:
let inline (|>) a b = b a
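A typical use chains several transformations without nesting calls or naming intermediate values (the particular functions here are just illustrative):
[1 .. 10]
|> List.filter (fun n -> n % 2 = 0)
|> List.map (fun n -> n * n)
|> List.sum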
Also, the . operator discussed in the thread is the reverse pipe operator in F# (<|) which enables you to eliminate some brackets.
I don't think adding pipeline operators would have a significant impact on complexity.
In addition to the excellent answers already given here, I'd like to add a couple more points.
Firstly, one of the reasons the pipeline operator is so common in F# is that it helps to work around a shortcoming in the way type inference is currently done. Specifically, if you apply an aggregate operation to a collection using a lambda function that accesses a member (OO-style), type inference will typically fail. For example:
Seq.map (fun z -> z.Real) zs
This fails because F# does not yet know the type of z when it encounters the property Real, so it refuses to compile this code. The idiomatic fix is to use the pipeline operator:
zs |> Seq.map (fun z -> z.Real)
This is strictly uglier (IMO) but it works.
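For a self-contained version of the snippets above (assuming System.Numerics.Complex as the element type, which the original snippets leave unspecified):
open System.Numerics
let zs = [ Complex(1.0, 2.0); Complex(3.0, 4.0) ]
// Does not compile: the type of z is unknown when .Real is checked
// let reals = Seq.map (fun z -> z.Real) zs
// Compiles: the pipe lets inference see the type of zs first
let reals = zs |> Seq.map (fun z -> z.Real)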
Secondly, the F# pipe operator is nice to a point but you cannot currently get the inferred type of an intermediate result. For example:
x
|> h
|> g
|> f
If there is a type error at f then the programmer will want to know the type of the value being fed into f in case the problem was actually with h or g but this is not currently possible in Visual Studio. Ironically, this was easy in OCaml with the Tuareg mode for Emacs because you could get the inferred type of any subexpression, not just an identifier.
In Erlang, you are encouraged not to match patterns that you do not actually handle. For example:
case (anint rem 10) of
    1 -> {ok, 10};
    9 -> {ok, 25}
end;
is a style that is encouraged, with any other value producing a badmatch error. This is consistent with the "let it crash" philosophy in Erlang.
On the other hand, F# would issue an "incomplete pattern matches" warning on the equivalent F# code.
The question: why wouldn't F# drop the warning and effectively augment every pattern match with a clause equivalent to
| _ -> failwith "badmatch"
and use the "let it crash" philosophy?
Edit: Two interesting answers so far: either to avoid bugs that are likely when not handling all cases of an algebraic datatype, or because of the .NET platform. One way to find out which is to check OCaml. So, what is the default behaviour in OCaml?
Edit: To clear up a misunderstanding among .NET people who have no background in Erlang: the point of the Erlang philosophy is not to produce bad code that always crashes. "Let it crash" means letting some other process fix the error. Instead of writing the function so that it can handle all possible cases, let the caller (for example) handle the bad cases, which are thrown automatically. For those with a Java background, it is like the difference between a language with checked exceptions, where a function must declare every exception it can possibly raise, and a language in which functions may raise exceptions that are not explicitly declared.
F# (and other languages with pattern matching, like Haskell and O'Caml) does implicitly add a case that throws an exception.
In my opinion the most valuable reason for having complete pattern matches and paying attention to the warning, is that it makes it easy to refactor by extending your datatype, because the compiler will then warn you about code you haven't yet updated with the new case.
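For instance (an illustrative sketch, not from the question), suppose a shape type later grows a new case:
type Shape =
    | Circle of float
    | Square of float

let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Square s -> s * s
// If a Triangle case is later added to Shape, the compiler warns at this
// match (and every other incomplete one) until the new case is handled.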
On the other hand, sometimes there genuinely are cases that should be left out, and then it's annoying to have to put in a catch-all case with what is often a poor error message. So it's a trade-off.
In answer to your edit, this is also a warning by default in O'Caml (and in Haskell with -Wall).
In most cases, particularly with algebraic datatypes, forgetting a case is likely to be an accident and not an intentional decision to ignore a case. In strongly typed functional languages, I think that most functions will be total, and should therefore handle every case. Even for partial functions, it's often ideal to throw a specific exception rather than to use a generic pattern matching failure (e.g. List.head throws an ArgumentException when given an empty list).
Thus, I think that it generally makes sense for the compiler to warn the developer. If you don't like this behavior, you can either add a catch-all pattern which itself throws an exception, or turn off or ignore the warning.
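A minimal sketch of the explicit catch-all approach, echoing the Erlang example from the question (the mapping of values is illustrative):
let classify anint =
    match anint % 10 with
    | 1 -> Ok 10
    | 9 -> Ok 25
    | r -> failwithf "badmatch: %d" r  // explicit, specific failure instead of a silent gap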
why wouldn't F# remove the warning
Interesting that you would ask this. Silently injecting sources of run-time error is absolutely against the philosophy behind F# and its relatives. It is considered to be a grotesque abomination. This family of languages are all about static checking, to the extent that the type system was fundamentally designed to facilitate exactly these kinds of static checks.
This stark difference in philosophy is precisely why F# and Erlang are so rarely compared and contrasted. "Never the twain shall meet" as they say.
So, what is the default behaviour in OCaml?
Same as F#: exhaustiveness and redundancy of pattern matches is checked at compile time and a warning is issued if a match is found to be suspect. Idiomatic style is also the same: you are expected to write your code such that these warnings do not appear.
This behaviour has nothing to do with .NET and, in fact, this functionality (from OCaml) was only implemented properly in F# quite recently.
For example, if you use a pattern in a let binding to extract the first element of a list because you know the list will always have at least one element:
let x::_ = myList
In this family of languages, that is almost always indicative of a design flaw. The correct solution is to represent your non-empty list using a type that makes it impossible to represent the empty list. Static type checking then proves that your list cannot be empty and, therefore, guarantees that this source of run-time errors has been completely eliminated from your code.
For example, you can represent a non-empty list as a tuple containing the head and the tail list. Your pattern match then becomes:
let x, _ = myList
This is exhaustive so the compiler is happy and does not issue a warning. This code cannot go wrong at run-time.
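One way to encode such a non-empty list in F# (a sketch; the names are illustrative):
// A non-empty list represented as its head plus a (possibly empty) tail list.
type NonEmptyList<'T> = 'T * 'T list

// Both functions are total: the patterns are exhaustive, so no warning is possible.
let head ((x, _): NonEmptyList<'T>) = x
let toList ((x, rest): NonEmptyList<'T>) = x :: rest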
I became an advocate of this technique back in 2004, when I refactored about 1kLOC of commercial OCaml code that had been a major source of run-time errors in an application (even though they were explicit in the form of catch-all match cases that raised exceptions). My refactoring removed all of the sources of run-time errors from most of the code. The reliability of the entire application improved enormously. Moreover, we had wasted weeks hunting those bugs with a debugger, whereas my refactoring was completed within two days. So this technique really does pay dividends in the real world.
Erlang cannot check pattern matches for exhaustiveness because of its dynamic typing, unless you put a catch-all in every match, which is just silly. OCaml, on the other hand, can. OCaml also tries to push every issue that can be caught at compile time to compile time.
OCaml by default does warn you about incomplete matches. You can disable it by adding "p" to the "-w" flag. The idea with these warnings is that more often than not (at least in my experience) they are an indication of programmer error. Especially when all your patterns are really complex like Node (Node (Leaf 4) x) (Node y (Node Empty _)), it is easy to miss a case. When the programmer is sure that it cannot go wrong, explicitly adding a | _ -> assert false case is an unambiguous way to indicate that.
GHC turns these warnings off by default, but you can enable them with -fwarn-incomplete-patterns.