Why does F# seq start with a small letter?

I am new to F# and I noticed that all the other collection types start with a capital letter except for sequences. Is this because seq is actually merely an alias for IEnumerable and not a genuinely new F# type like the others?
Thank you.

In F#, a sequence can be expressed via the built-in computation expression named "seq". Computation expressions are by convention lower case. You can define your own computation expressions with upper-case names, but the convention is lower case.
seq - when used as a computation expression - is not an alias for IEnumerable, but a seq expression will create an IEnumerable.
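A minimal sketch of that (names here are illustrative):

open System.Collections.Generic

// The seq builder produces an IEnumerable<int>; the type abbreviation
// seq<int> is itself just IEnumerable<int> in FSharp.Core.
let evens : seq<int> = seq { for i in 0 .. 2 .. 10 -> i }
let asEnumerable : IEnumerable<int> = evens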
But there are more meanings of seq as clarified in the comments.

Related

F# "reverse" exhausting match on discriminated unions?

Take the example of writing a parser-unparser pair for a DU.
The unparser function of course uses a match expression, where we can count on the static exhaustiveness check if new cases are added later.
Now, for the parsing side, the obvious route is (with e.g. FParsec) to use a choice parser with a list of the DU case parsers. This works, but has no exhaustiveness checking.
My best solution for this is to create a DU_Case -> parser function that uses a match expression, then use runtime reflection to get all DU cases, call the function with each of them, and pass the resulting list to the choice combinator. This works and does have static exhaustiveness checking, BUT it gets pretty clunky fast, especially if the cases have varying fields.
In other words, is there a good pattern other than using reflection for ensuring exhaustiveness when the discriminated union is the output and not the input?
Edit: an example of the reflection-based solution
By getting clunky, I mean that for every case that has fields, I also have to create the field values dynamically, instead of the simple Array.zeroCreate(0) that is always valid for field-less cases. But what goes in the else branch? There needs to be code to dynamically create the correct number of field values, with their correct (possibly complex) constructors and so on. Not impossible with reflection, but clunky.
open Microsoft.FSharp.Reflection

FSharpType.GetUnionCases duType
|> Seq.map (fun duCase ->
    if duCase.GetFields().Length = 0 then
        FSharpValue.MakeUnion(duCase, Array.zeroCreate 0)
    else
        (*...*) )
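For context, a hedged sketch of the approach described above, using a hypothetical Shape union and assuming FParsec is referenced; the match inside caseToParser is what gets the static exhaustiveness check:

open FParsec
open Microsoft.FSharp.Reflection

// Hypothetical DU used only for illustration
type Shape =
    | Circle of float
    | Square of float

// Exhaustive match: adding a new Shape case produces a compiler warning here.
let caseToParser (sample: Shape) : Parser<Shape, unit> =
    match sample with
    | Circle _ -> pstring "circle " >>. pfloat |>> Circle
    | Square _ -> pstring "square " >>. pfloat |>> Square

// Reflection builds one sample value per case (the clunky part when cases have
// fields, since dummy field values must be constructed), then feeds each sample
// to the exhaustively checked caseToParser and combines the results with choice.
let shapeParser : Parser<Shape, unit> =
    FSharpType.GetUnionCases typeof<Shape>
    |> Array.map (fun c ->
        let dummyFields = c.GetFields() |> Array.map (fun _ -> box 0.0)
        FSharpValue.MakeUnion(c, dummyFields) :?> Shape)
    |> Array.map caseToParser
    |> choice

The dummy field values are exactly the clunky part: here every field happens to be a float, but cases with richer fields need correspondingly more construction code.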

How many ternary operators are used in computer languages?

I was curious whether any ternary operators are used in programming languages other than the ?: operator, and could only find two on Wikipedia.
Is that the only one in common use? Are there any others?
Element update
Another useful class of ternary operator, especially in functional languages, is the "element update" operation. For example, OCaml expressions have three kinds of update syntax:
a.b<-c means the record a where field b has value c
a.(b)<-c means the array a where index b has value c
a.[b]<-c means the string a where index b has value c
Note that these are not "update" in the sense of "assignment" or "modification"; the original object is unchanged, and a new object is yielded that has the stated properties. Consequently, these operations cannot be regarded as a simple composition of two binary operators.
Similarly, the Isabelle theorem prover has:
a(|b := c|) meaning the record a where field b has value c
Array slice
Yet another sort of ternary operator is array slice, for example in Python we have:
a[b:c] meaning an array whose first element is a[b] and last element is a[c-1]
In fact, Python has a quaternary form of slice:
a[b:c:d] meaning an array whose elements are a[b + n*d] where n ranges from 0 to the largest value such that b + n*d < c
Bash/ksh variable substitution
Although quite obscure, bash has several forms of variable expansion (apparently borrowed from ksh) that are ternary:
${var:pos:len} is a maximum of len characters from $var, starting at pos
${var/Pattern/Replacement} is $var except the first substring within it that matches Pattern is replaced with Replacement
${var//Pattern/Replacement} is the same except all matches are replaced
${var/#Pattern/Replacement} is like the first case except Pattern has to match a prefix of $var
${var/%Pattern/Replacement} is like the previous except for matching a suffix
These are borderline in my opinion, being close to ordinary functions that happen to accept three arguments, written in the sometimes baroque style of shell syntax. But, I include them as they are entirely made of non-letter symbols.
Congruence modulo
In mathematics, an important ternary relation is congruence modulo:
a ≡ b (mod c) is true iff a and b both belong to the same equivalence class modulo c
I'm not aware of any programming language that has this, but programming languages often borrow mathematical notation, so it's possible it exists in an obscure language. (Of course, most programming languages have mod as a binary operator, allowing the above to be expressed as (a mod c) == (b mod c).) Furthermore, unlike the bash variable substitution syntax, if this were introduced in some language, it would not be specific to that language since it is established notation elsewhere, making it more similar to ?: in ubiquity.
Excluded
There are some categories of operator I've chosen to exclude from the category of "ternary" operator:
Operations like function application (a(b,c)) that could apply to any number of operands.
Specific named functions (e.g., f(a,b,c)) that accept three arguments, as there are too many of them for any to be interesting in this context.
Operations like SUM (Σ) or let that function as a binding introduction of a new variable, since IMO a ternary operator ought to act on three already-existing things.
One-letter operators in languages like sed that happen to accept three arguments, as these really are like the named function case, and the language just has a very terse naming convention.
Well, it's not a ternary operator per se, but I do think the three-way comparison operator is highly underrated.
The ternary operator is appropriate when a computation has to take place even if I cannot use the result inside an if/else statement or switch statement. Consequently, 0 or some DEFAULT VALUE is produced as the default when I attempt the computation.
The if/else or switch statements require me to enumerate every case that can occur, and are only helpful if discriminating between a numeric value and a branching choice can help. In some cases it is clear why a condition test won't help, simply because I am either too early or too late to do anything about the condition, even though some other function is unable to test for it.
The ternary operator forces a computation to produce a value that can pass through to the code that follows it. Other kinds of condition tests resort to using, for instance, the && and || operators, which can't guarantee such a pass-through.

What is a "primary expression"?

In the PowerShell Language Specification, the grammar has a term called "Primary Expression".
primary-expression:
value
member-access
element-access
invocation-expression
post-increment-expression
post-decrement-expression
Semantically, what is a primary expression intended to describe?
As I understand it there are two parts to this:
Formal grammars break up things like expressions so operator precedence is implicit.
Eg. if the grammar had
expression:
value
expression * expression
expression + expression
…
There would need to be a separate mechanism to define * as having higher precedence than +. This
becomes significant when using tools to directly transform the grammar into a tokeniser/parser.1
There are so many different specific rules (consider the number of operators in PowerShell) that using fewer rules would be harder to understand because each rule would be so long.
1 I suspect this is not the case with PowerShell because it is highly context sensitive (expression vs command mode, and then consider calling non-inbuilt executables). But such grammars across languages tend to have a lot in common, so style can also be carried over (don't re-invent the wheel).
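To make the first point concrete: precedence is usually encoded implicitly by layering the rules, roughly like this (a sketch of the common pattern, not PowerShell's actual grammar):

expression:
    additive-expression
additive-expression:
    multiplicative-expression
    additive-expression + multiplicative-expression
multiplicative-expression:
    primary-expression
    multiplicative-expression * primary-expression

Here primary-expression sits at the bottom of the chain, and the layering above it is what makes * bind tighter than +.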
My understanding of it is that an expression is an arrangement of commands/arguments and/or operators/operands that, taken together, will execute and effect an action or produce a result (even if it's $null) that can be assigned to a variable or sent down the pipeline. The term "primary expression" is used to differentiate the whole expression from any sub-expressions $() that may be contained within it.
My very limited experience says that primary refers to expressions (or sub-expressions) that can't be parsed in more than one way, regardless of precedence or associativity. They have enough syntactic anchors that they are really non-negotiable--ID '(' ID ')' for example, can only match that. The parens make it guaranteed, and there are no operators to allow for any decision on precedence.
In matching an expression tree, it's common to have a set of these sub-expressions under a "primary_expr" rule. Any sub-expressions can be matched any way you like because they're all absolutely determined by syntax. Everything after that will have to be ordered carefully in the grammar to guarantee correct precedence and associativity.
There's some support for this interpretation here.
think $( lots of statements ) would be like a complex expression
and in powershell most things can be an expression like
$a = if ($y) { 1 } else { 2}
so primary expression is likely the lowest common denominator of classic expressions
a single thing that returns a value
whether an explicit value, calling something.. getting a variable, or a property from a variable $x.y or the result of increment operation $z++
but even a simple math "expression" wouldn't match this: 4 + 2 + (3 / 4); that would be more of a complex expression.
So I was thinking of the WHY behind it, and at first it was mentioned that it could be to help determine command/expression mode, but with investigation that wasn't it. Then I thought maybe it was what could be passed in command mode as an argument explicitly, but that's not it, because if you pass $x++ as a parameter to a cmdlet you get a string.
I think it's probably just the lowest common denominator of expressions, a building block, so the next question is what other expressions does the grammar contain, and where is this one used?

Why are functions in OCaml/F# not recursive by default?

Why is it that functions in F# and OCaml (and possibly other languages) are not by default recursive?
In other words, why did the language designers decide it was a good idea to explicitly make you type rec in a declaration like:
let rec foo ... = ...
and not give the function recursive capability by default? Why the need for an explicit rec construct?
The French and British descendants of the original ML made different choices and their choices have been inherited through the decades to the modern variants. So this is just legacy but it does affect idioms in these languages.
Functions are not recursive by default in the French CAML family of languages (including OCaml). This choice makes it easy to supersede function (and variable) definitions using let in those languages, because you can refer to the previous definition inside the body of a new definition. F# inherited this syntax from OCaml.
For example, superseding the function p when computing the Shannon entropy of a sequence in OCaml:
let shannon fold p =
  let p x = p x *. log(p x) /. log 2.0 in
  let p t x = t +. p x in
  -. fold p 0.0
Note how the argument p to the higher-order shannon function is superseded by another p in the first line of the body, and then by another p in the second line of the body.
Conversely, the British SML branch of the ML family of languages took the other choice, and SML's fun-bound functions are recursive by default. When most function definitions do not need access to previous bindings of their function name, this results in simpler code. However, superseded functions are made to use different names (f1, f2 etc.), which pollutes the scope and makes it possible to accidentally invoke the wrong "version" of a function. And there is now a discrepancy between implicitly-recursive fun-bound functions and non-recursive val-bound functions.
Haskell makes it possible to infer the dependencies between definitions by restricting them to be pure. This makes toy samples look simpler but comes at a grave cost elsewhere.
Note that the answers given by Ganesh and Eddie are red herrings. They explained why groups of functions cannot be placed inside a giant let rec ... and ... because it affects when type variables get generalized. This has nothing to do with rec being default in SML but not OCaml.
One crucial reason for the explicit use of rec is to do with Hindley-Milner type inference, which underlies all statically typed functional programming languages (albeit changed and extended in various ways).
If you have a definition let f x = x, you'd expect it to have type 'a -> 'a and to be applicable on different 'a types at different points. But equally, if you write let g x = (x + 1) + ..., you'd expect x to be treated as an int in the rest of the body of g.
The way that Hindley-Milner inference deals with this distinction is through an explicit generalisation step. At certain points when processing your program, the type system stops and says "ok, the types of these definitions will be generalised at this point, so that when someone uses them, any free type variables in their type will be freshly instantiated, and thus won't interfere with any other uses of this definition."
It turns out that the sensible place to do this generalisation is after checking a mutually recursive set of functions. Any earlier, and you'll generalise too much, leading to situations where types could actually collide. Any later, and you'll generalise too little, making definitions that can't be used with multiple type instantiations.
So, given that the type checker needs to know about which sets of definitions are mutually recursive, what can it do? One possibility is to simply do a dependency analysis on all the definitions in a scope, and reorder them into the smallest possible groups. Haskell actually does this, but in languages like F# (and OCaml and SML) which have unrestricted side-effects, this is a bad idea because it might reorder the side-effects too. So instead it asks the user to explicitly mark which definitions are mutually recursive, and thus by extension where generalisation should occur.
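In F#, that explicit marking is the let rec ... and ... form; a small illustrative example:

// Mutually recursive definitions are grouped explicitly with 'rec ... and';
// the group is also the unit at which types get generalised.
let rec isEven n = if n = 0 then true else isOdd (n - 1)
and isOdd n = if n = 0 then false else isEven (n - 1)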
There are two key reasons this is a good idea:
First, if definitions are recursive by default then you can't refer to a previous binding of a value with the same name. This is often a useful idiom when you are doing something like extending an existing module.
Second, recursive values, and especially sets of mutually recursive values, are much harder to reason about than definitions that proceed in order, each new definition building on top of what has already been defined. It is nice when reading such code to have the guarantee that, except for definitions explicitly marked as recursive, new definitions can only refer to previous definitions.
Some guesses:
let is not only used to bind functions, but also other regular values. Most forms of values are not allowed to be recursive. Certain forms of recursive values are allowed (e.g. functions, lazy expressions, etc.), so it needs an explicit syntax to indicate this.
It might be easier to optimize non-recursive functions
The closure created when you create a recursive function needs to include an entry that points to the function itself (so the function can recursively call itself), which makes recursive closures more complicated than non-recursive closures. So it might be nice to be able to create simpler non-recursive closures when you don't need recursion
It allows you to define a function in terms of a previously-defined function or value of the same name; although I think this is bad practice
Extra safety? Makes sure that you are doing what you intended. e.g. If you don't intend it to be recursive but you accidentally used a name inside the function with the same name as the function itself, it will most likely complain (unless the name has been defined before)
The let construct is similar to the let construct in Lisp and Scheme, which are non-recursive. There is a separate letrec construct in Scheme for recursive lets.
Given this:
let f x = ... and g y = ...;;
Compare:
let f a = f (g a)
With this:
let rec f a = f (g a)
The former redefines f to apply the previously defined f to the result of applying g to a. The latter redefines f to loop forever applying g to a, which is usually not what you want in ML variants.
That said, it's a language designer style thing. Just go with it.
A big part of it is that it gives the programmer more control over the complexity of their local scopes. The spectrum of let, let* and let rec offer an increasing level of both power and cost. let* and let rec are in essence nested versions of the simple let, so using either one is more expensive. This grading allows you to micromanage the optimization of your program as you can choose which level of let you need for the task at hand. If you don't need recursion or the ability to refer to previous bindings, then you can fall back on a simple let to save a bit of performance.
It's similar to the graded equality predicates in Scheme. (i.e. eq?, eqv? and equal?)

Explaining pattern matching vs switch

I have been trying to explain the difference between switch statements and pattern matching (F#) to a couple of people, but I haven't really been able to explain it well... most of the time they just look at me and say "so why don't you just use if..then..else".
How would you explain it to them?
EDIT! Thanks everyone for the great answers, I really wish I could mark multiple right answers.
Having formerly been one of "those people", I don't know that there's a succinct way to sum up why pattern-matching is such tasty goodness. It's experiential.
Back when I had just glanced at pattern-matching and thought it was a glorified switch statement, I think that I didn't have experience programming with algebraic data types (tuples and discriminated unions) and didn't quite see that pattern matching was both a control construct and a binding construct. Now that I've been programming with F#, I finally "get it". Pattern-matching's coolness is due to a confluence of features found in functional programming languages, and so it's non-trivial for the outsider-looking-in to appreciate.
I tried to sum up one aspect of why pattern-matching is useful in the second of a short two-part blog series on language and API design; check out part one and part two.
Patterns give you a small language to describe the structure of the values you want to match. The structure can be arbitrarily deep and you can bind variables to parts of the structured value.
This allows you to write things extremely succinctly. You can illustrate this with a small example, such as a derivative function for a simple type of mathematical expressions:
type expr =
    | Int of int
    | Var of string
    | Add of expr * expr
    | Mul of expr * expr;;

let rec d(f, x) =
    match f with
    | Var y when x = y -> Int 1
    | Int _ | Var _ -> Int 0
    | Add(f, g) -> Add(d(f, x), d(g, x))
    | Mul(f, g) -> Add(Mul(f, d(g, x)), Mul(g, d(f, x)));;
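For instance, d(Mul(Var "x", Var "x"), "x") evaluates to Add(Mul(Var "x", Int 1), Mul(Var "x", Int 1)), the symbolic form of the derivative of x*x.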
Additionally, because pattern matching is a static construct for static types, the compiler can (i) verify that you covered all cases (ii) detect redundant branches that can never match any value (iii) provide a very efficient implementation (with jumps etc.).
Excerpt from this blog article:
Pattern matching has several advantages over switch statements and method dispatch:
Pattern matches can act upon ints, floats, strings and other types as well as objects.
Pattern matches can act upon several different values simultaneously: parallel pattern matching. Method dispatch and switch are limited to a single value, e.g. "this".
Patterns can be nested, allowing dispatch over trees of arbitrary depth. Method dispatch and switch are limited to the non-nested case.
Or-patterns allow subpatterns to be shared. Method dispatch only allows sharing when methods are from classes that happen to share a base class. Otherwise you must manually factor out the commonality into a separate function (giving it a name) and then manually insert calls from all appropriate places to your unnecessary function.
Pattern matching provides redundancy checking which catches errors.
Nested and/or parallel pattern matches are optimized for you by the F# compiler. The OO equivalent must be written by hand and constantly reoptimized by hand during development, which is prohibitively tedious and error prone, so production-quality OO code tends to be extremely slow in comparison.
Active patterns allow you to inject custom dispatch semantics.
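To illustrate the last point, a minimal sketch of an active pattern (the names are illustrative):

// A complete active pattern classifying integers by parity
let (|Even|Odd|) n = if n % 2 = 0 then Even else Odd

let describe n =
    match n with
    | Even -> "even"
    | Odd -> "odd"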
Off the top of my head:
The compiler can tell if you haven't covered all possibilities in your matches
You can use a match as an assignment
If you have a discriminated union, each match can have a different 'type'
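For example, since a match is itself an expression, its result can be bound directly (a small illustrative sketch):

let x = -3
let label =
    match x with
    | 0 -> "zero"
    | n when n > 0 -> "positive"
    | _ -> "negative"
// label = "negative"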
Tuples have "," and Variants have Ctor args .. these are constructors, they create things.
Patterns are destructors, they rip them apart.
They're dual concepts.
To put this more forcefully: the notion of a tuple or variant cannot be described merely by its constructor: the destructor is required or the value you made is useless. It is these dual descriptions which define a value.
Generally we think of constructors as data, and destructors as control flow. Variant destructors are alternate branches (one of many), tuple destructors are parallel threads (all of many).
The parallelism is evident in operations like
(f * g) . (h * k) = (f . h * g . k)
if you think of control flowing through a function, tuples provide a way to split up a calculation into parallel threads of control.
Looked at this way, expressions are ways to compose tuples and variants to make complicated data structures (think of an AST).
And pattern matches are ways to compose the destructors (again, think of an AST).
Switch is the two front wheels.
Pattern-matching is the entire car.
Pattern matches in OCaml, in addition to being more expressive in the ways already described above, also give some very important static guarantees. The compiler will prove for you that the case-analysis embodied by your pattern-match statement is:
exhaustive (no cases are missed)
non-redundant (no cases that can never be hit because they are pre-empted by a previous case)
sound (no patterns that are impossible given the datatype in question)
This is a really big deal. It's helpful when you're writing the program for the first time, and enormously useful when your program is evolving. Used properly, match-statements make it easier to change the types in your code reliably, because the type system points you at the broken match statements, which are a decent indicator of where you have code that needs to be fixed.
If-Else (or switch) statements are about choosing different ways to process a value (input) depending on properties of the value at hand.
Pattern matching is about defining how to process a value given its structure (also note that single-case pattern matches make sense).
Thus pattern matching is more about deconstructing values than making choices; this makes it a very convenient mechanism for defining (recursive) functions on inductive structures (recursive union types), which explains why it is so abundantly used in languages like OCaml etc.
PS: You might know the pattern-match and If-Else "patterns" from their ad-hoc use in math;
"if x has property A then y else z" (If-Else)
"some term in p1..pn where .... is the prime decomposition of x.." ((single case) pattern match)
Perhaps you could draw an analogy with strings and regular expressions? You describe what you are looking for, and let the compiler figure out how for itself. It makes your code much simpler and clearer.
As an aside: I find that the most useful thing about pattern matching is that it encourages good habits. I deal with the corner cases first, and it's easy to check that I've covered every case.

Resources