Make Fish in F#

The Kleisli composition operator >=>, also known as the "fish" in Haskell circles, comes in handy in many situations where specialized functions need to be composed. It works much like the >> operator, but instead of composing plain functions 'a -> 'b, it composes functions whose return values carry extra structure, best expressed as 'a -> m<'b>, where m is a monad-like type or some property of the function's return value.
Evidence of this practice in the wider F# community can be found e.g. in Scott Wlaschin's Railway oriented programming (part 2) as composition of functions returning the Result<'TSuccess,'TFailure> type.
Reasoning that where there's a bind, there must also be a fish, I try to parametrize the canonical Kleisli operator's definition let (>=>) f g a = f a >>= g with the bind function itself:
let mkFish bind f g a = bind g (f a)
This works wonderfully with the caveat that generally one shouldn't unleash special operators on user-facing code. I can compose functions returning options...
module Option =
    let (>=>) f = mkFish Option.bind f

open Option

let odd i = if i % 2 = 0 then None else Some i
let small i = if abs i > 10 then None else Some i

[0; -1; 9; -99] |> List.choose (odd >=> small)
// val it : int list = [-1; 9]
... or I can apply a function to the two topmost values of a stack and push the result back, without having to reference the data structure I'm operating on explicitly:
module Stack =
    let (>=>) f = mkFish (<||) f

open Stack

type 'a Stack = Stack of 'a list

let pop = function
    | Stack [] -> failwith "Empty Stack"
    | Stack (x :: xs) -> x, Stack xs

let push x (Stack xs) = Stack (x :: xs)

let apply2 f =
    pop >=> fun x ->
    pop >=> fun y ->
    push (f x y)
But what bothers me is that the signature val mkFish : bind:('a -> 'b -> 'c) -> f:('d -> 'b) -> g:'a -> a:'d -> 'c makes no sense. Type variables are in confusing order, it's overly general ('a should be a function), and I'm not seeing a natural way to annotate it.
How can I abstract here in the absence of formal functors and monads, not having to define the Kleisli operator explicitly for each type?

You can't do it in a natural way without Higher Kinds.
The signature of fish should be something like:
let (>=>) (f:'T -> #Monad<'U>) (g:'U -> #Monad<'V>) (x:'T) : #Monad<'V> = bind (f x) g
which is unrepresentable in the current .NET type system, but you can replace #Monad with your specific monad, e.g. Async, and use its corresponding bind function in the implementation.
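For illustration, here's a minimal sketch of that specialization for Async; the module name AsyncFish and its bind helper are my own, not part of any library:

module AsyncFish =
    // bind for Async, implemented with the builder's Bind member
    let bind (f: 'a -> Async<'b>) (m: Async<'a>) : Async<'b> = async.Bind(m, f)
    let (>=>) f g x = bind g (f x)

// Hypothetical usage: compose two Async-returning functions.
let lengthAsync (s: string) = async { return s.Length }
let doubleAsync (n: int) = async { return n * 2 }
let composed = AsyncFish.(>=>) lengthAsync doubleAsync
// Async.RunSynchronously (composed "hello") evaluates to 10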
Having said that, if you really want to use a generic fish operator you can use F#+, which already has it defined using static constraints. If you look at the 5th code sample here you will see it in action over different types.
Of course you can also define your own, but there is a lot to code in order to make it behave properly in the most common scenarios. You can grab the code from the library, or if you want I can write a small (but limited) code sample.
The generic fish is defined in this line.
In general, I think you really feel the lack of generic functions when using operators because, as you discovered, you need to open and close modules. With ordinary functions you can simply prefix them with the module name; you can do that with operators as well (something like Option.(>=>)), but then it defeats the whole purpose of using an operator, since it no longer reads like one.
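To make that concrete, here's a quick sketch of the qualified form, assuming the Option module with (>=>) from the question is in scope:

let oddThenSmall = Option.(>=>) odd small
[0; -1; 9; -99] |> List.choose oddThenSmall
// val it : int list = [-1; 9]

It works, but the composition no longer reads as a chain of fish.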

Related

F#: What to call a combination of map and fold, or of map and reduce?

A simple example, inspired by this question:
module SimpleExample =
    let fooFold projection folder state source =
        source |> List.map projection |> List.fold folder state
    // val fooFold :
    //   projection:('a -> 'b) ->
    //     folder:('c -> 'b -> 'c) -> state:'c -> source:'a list -> 'c

    let fooReduce projection reducer source =
        source |> List.map projection |> List.reduce reducer
    // val fooReduce :
    //   projection:('a -> 'b) -> reducer:('b -> 'b -> 'b) -> source:'a list -> 'b

    let game = [0, 5; 10, 15]

    let minX, maxX = fooReduce fst min game, fooReduce fst max game
    let minY, maxY = fooReduce snd min game, fooReduce snd max game
What would be a natural name for the functions fooFold and fooReduce in this example? Alas, mapFold and mapReduce are already taken.
mapFold is part of the F# library and does a fold operation over the input to return a tuple of 'result list * 'state, similar to scan, but without the initial state and the need to provide the tuple as part of the state yourself. Its signature is:
val mapFold : ('State -> 'T -> 'Result * 'State) -> 'State -> 'T list
-> 'Result list * 'State
Since the projection can easily be integrated into the folder, the fooFold function is only included for illustration purposes.
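For reference, here's a quick illustration of my own (evaluated in FSI) showing the built-in List.mapFold and the result shape that makes it surprising:

let doubledAndSum =
    [1; 2; 3] |> List.mapFold (fun acc x -> x * 2, acc + x) 0
// val doubledAndSum : int list * int = ([2; 4; 6], 6)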
And MapReduce:
MapReduce is an algorithm for processing huge datasets on certain
kinds of distributable problems using a large number of nodes
Now for a more complex example, where the fold/reduce is not applied directly to the input, but to groupings formed by selecting a key.
The example has been borrowed from a Python library, where it is called - perhaps misleadingly - reduceby.
module ComplexExample =
    let fooFold keySelection folder state source =
        source |> Seq.groupBy keySelection
               |> Seq.map (fun (k, xs) ->
                   k, Seq.fold folder state xs)
    // val fooFold :
    //   keySelection:('a -> 'b) ->
    //     folder:('c -> 'a -> 'c) -> state:'c -> source:seq<'a> -> seq<'b * 'c>
    //       when 'b : equality

    let fooReduce keySelection projection reducer source =
        source |> Seq.groupBy keySelection
               |> Seq.map (fun (k, xs) ->
                   k, xs |> Seq.map projection |> Seq.reduce reducer)
    // val fooReduce :
    //   keySelection:('a -> 'b) ->
    //     projection:('a -> 'c) ->
    //       reducer:('c -> 'c -> 'c) -> source:seq<'a> -> seq<'b * 'c>
    //         when 'b : equality

    type Project = { name : string; state : string; cost : decimal }

    let projects =
        [ { name = "build roads"; state = "CA"; cost = 1000000M }
          { name = "fight crime"; state = "IL"; cost = 100000M }
          { name = "help farmers"; state = "IL"; cost = 2000000M }
          { name = "help farmers"; state = "CA"; cost = 200000M } ]

    fooFold (fun x -> x.state) (fun acc x -> acc + x.cost) 0M projects
    // val it : seq<string * decimal> = seq [("CA", 1200000M); ("IL", 2100000M)]

    fooReduce (fun x -> x.state) (fun x -> x.cost) (+) projects
    // val it : seq<string * decimal> = seq [("CA", 1200000M); ("IL", 2100000M)]
What would be the natural name for the functions fooFold and fooReduce here?
I'd probably call the first two mapAndFold and mapAndReduce (though I agree that mapFold and mapReduce would be good names if they were not already taken). Alternatively, I'd go with mapThenFold (etc.), which is perhaps more explicit, but it reads a bit cumbersome.
For the more complex ones, reduceBy and foldBy sound good. The issue is that this would not work if you also wanted a version of those functions that do not do the mapping operation. If you wanted that, you'd probably need mapAndFoldBy and mapAndReduceBy (as well as just foldBy and reduceBy). This gets a bit ugly, but I'm afraid that's the best you can do.
More generally, the issue when comparing names with Python is that Python effectively lets one name cover several variants, whereas F# let-bound functions cannot be overloaded. You therefore need a unique name for each function that would otherwise be an overload, which means coming up with a consistent naming scheme that does not make the names unbearably long.
(I experienced this when coming up with names for the functions in the Deedle library, which is somewhat inspired by Pandas. You can see for example the aggregation functions in Deedle for an example - there is a pattern in the naming to deal with the fact that each function needs a unique name.)
I have a different opinion than Thomas.
First, I think that not having overloads is a good thing, and giving every operation a unique name is also good. I would even say that giving long names to rarely used functions is especially important and should not be avoided.
Writing longer names is rarely a problem, since as programmers we usually work in an IDE with auto-completion. Reading and understanding is different: knowing what a function does because of a long, descriptive name beats a short one.
A long, descriptive function name becomes more important the less often a function is used. It helps with reading and understanding the code. A short, less descriptive name that is rarely used causes confusion, and the confusion would only increase if it were merely an overload of another function name.
Yes, naming things can be hard; that's exactly why it's important and shouldn't be avoided.
As to what you describe: I would have named them mapFold and mapReduce, as those describe exactly what they do.
There is already a mapFold in F#, and in my opinion the F# devs got it wrong with either the naming, the arguments, or the output of that function. Either way, they got it wrong.
I would have expected mapFold to do a map and then a fold. Actually it does, but it also returns the intermediate list created along the way, something I would not expect it to return. I would also expect it to take two functions instead of one.
As for Thomas's suggestion of naming it mapAndFold or mapThenFold: I would expect different behaviour from those two functions. mapThenFold says exactly what it does: map and then fold. I don't think the then is essential, which is also why I would name it mapFold or mapReduce; writing it that way already suggests a then. But mapAndFold or mapAndReduce says nothing about the order of execution; it just says it does two things, or somehow returns this AND that.
With that in mind, I would say the F# library should either have named its mapFold mapAndFold, or changed the return value to just the fold result (and taken two function arguments instead of one). But it is how it is now; we cannot change it anymore.
As for mapReduce, I think you are a little bit mistaken. The MapReduce algorithm is named that way because it just does map and then reduce, and that's it.
But functional programming, with its stateless and more declarative operations, sometimes has additional benefits. Technically a map is less powerful than a for/fold, because it only describes how values are changed, independent of the order or the position in the list. But because of this limitation you can run it in parallel, even on a big computer cluster, and that's all the MapReduce algorithm you cite does.
That doesn't mean a mapReduce must always run on a big cluster or in parallel, though. In my opinion you could just name it mapReduce and that's fine; everybody will know what it does, and nobody expects it to suddenly run on a cluster.
In general I think the mapFold that F# provides is silly; here are four examples of how I think it should have been provided:
let double x = x * 2
let add x y = x + y
mapFold double add 0 [1..10] // 110
mapAndFold double add 0 [1..10] // [2;4;6;8;10;12;14;16;18;20] * 110
mapReduce double add [1..10] // Some (110)
mapAndReduce double add [1..10] // Some ([2;4;6;8;10;12;14;16;18;20] * 110)
Well, mapFold doesn't work that way, so you have the following options:
1. Implement mapReduce the way you have it and ignore the inconsistency with mapFold.
2. Provide both mapAndReduce and mapReduce.
3. Make your mapReduce return the same shape as the default implementation of mapFold does, and provide mapThenReduce.
4. Like (3), but also add mapThenFold.
Option 4 is the most compatible with the expectations set by what already exists in F#. But that doesn't mean you must do it that way.
In my opinion I would just:
- implement mapReduce so it returns the result of map and then reduce,
- not bother with a mapAndReduce version that returns both the list and the result,
- provide a mapThenFold that expects two function arguments and returns only the fold result (a small sketch of that follows below).
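Here is a minimal sketch of such a mapThenFold, assuming the semantics described above (my own illustration, traversing the list only once):

let mapThenFold mapper folder state xs =
    // fold once, mapping each element on the way
    List.fold (fun acc x -> folder acc (mapper x)) state xs

// mapThenFold double add 0 [1..10] evaluates to 110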
As a general note: implementing mapReduce just by calling map and then reduce is somewhat pointless. I would expect a more low-level implementation that does both things in a single traversal of the data structure; otherwise I can just call map and then reduce myself.
So an implementation should look like:
let mapReduce mapper reducer xs =
    let rec loop state xs =
        match xs with
        | []      -> state
        | x :: xs -> loop (reducer state (mapper x)) xs
    match xs with
    | []      -> ValueNone
    | [x]     -> ValueSome (mapper x)
    | x :: xs -> ValueSome (loop (mapper x) xs)
let double x = x * 2
let add x y = x + y
let some110 = mapReduce double add [1..10]

What is the typical definition/meaning of this F# operator <*>

Being relatively new to functional programming, I am still unfamiliar with all the standard operators. The fact that their definitions can be arbitrary in many languages, and that those definitions aren't available in nearby source code, if at all, makes reading functional code unnecessarily challenging.
Presently, I don't know what <*> means as it occurs in the WebSharper.UI.Next documentation.
It would be good if there was a place that listed all the conventional definition for the various operators of the various functional languages.
I agree with you, it would be good to have a place where all implicit conventions for operators used in F# are listed.
The <*> operator comes from Haskell; it's the operator for Applicative Functors. Its general signature is Applicative'<('A -> 'B)> -> Applicative'<'A> -> Applicative'<'B>, which is an illegal signature in .NET, as higher kinds are not supported.
Anyway, nothing stops you from defining the operator for a specific Applicative Functor; here's the typical definition for option types:
let (<*>) f x =
    match (f, x) with
    | Some f, Some x -> Some (f x)
    | _ -> None
Here the type is inferred as:
val ( <*> ) : f:('a -> 'b) option -> x:'a option -> 'b option
which is equivalent to:
val ( <*> ) : f: option<('a -> 'b)> -> x: option<'a> -> option<'b>
The intuitive explanation is that it takes a function wrapped in a context and an argument for that function wrapped in the same kind of context, and applies the function to the argument inside the context.
In our example for option types, it can be used to apply a function to the result of an operation which may return None:
let tryParse x =
    match System.Int32.TryParse x with
    | (true, v) -> Some v
    | _ -> None

Some ((+) 10) <*> tryParse "100"
You can take advantage of currying and write:
Some (+) <*> tryParse "100" <*> Some 10
Which represents something like:
(+) (System.Int32.Parse "100") 10
but without throwing exceptions. That's why it is also said that Applicatives are used to model side-effects, especially in pure functional languages like Haskell. Here's another sample of option applicatives.
But for different types it has different uses; for lists it may be used to zip them, as shown in this post.
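As a rough sketch of that list usage (my own illustration, not taken from the linked post): a zip-style <*> pairs functions with arguments element-wise, in contrast to Haskell's default list instance, which applies every function to every element.

// Zip-style apply for lists; shadows the option-specific (<*>) above if both are in scope.
let (<*>) fs xs = List.map2 (fun f x -> f x) fs xs

[(+) 1; (*) 10] <*> [10; 20]
// val it : int list = [11; 200]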
In F# it's not defined in the core library, because the .NET type system does not make it possible to define it in a generic way. However, it is possible using overloads and static member constraints, as in FsControl; otherwise you have to select the different instances by hand by opening specific modules, which is the approach used in FSharpx.
Just discovered elsewhere in the documentation on another subject...
let ( <*> ) f x = View.Apply f x
where the type of View.Apply, and therefore of ( <*> ), is:
View<('A -> 'B)> -> View<'A> -> View<'B>
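A hedged sketch of how that might be used (my own illustration; it assumes open WebSharper.UI.Next and that View.Const is available in the version at hand):

let ( <*> ) f x = View.Apply f x

// Lift an ordinary two-argument function over two reactive views.
let sumView (va : View<int>) (vb : View<int>) : View<int> =
    View.Const (+) <*> va <*> vb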

constrain generic type to inherit a generic type in f#

let mapTuple f (a,b) = (f a, f b)
I'm trying to create a function that applies a function f to both items in a tuple and returns the result as a tuple. F# type inference says that mapTuple returns a 'b*'b tuple. It also assumes that a and b are of the same type.
I want to be able to pass two different types as parameters. You would think that wouldn't work because they both have to be passed as parameters to f. So I thought if they inherited from the same base class, it might work.
Here is a less generic function for what I am trying to accomplish.
let mapTuple (f:Map<_,_> -> Map<'a,'b>) (a:Map<int,double>,b:Map<double, int>) = (f a, f b)
However, it gives a type mismatch error.
How do I do it? Is what I am trying to accomplish even possible in F#?
Gustavo is mostly right; what you're asking for requires higher-rank types. However:
1. .NET (and by extension F#) does support (an encoding of) higher-rank types.
2. Even in Haskell, which supports a "nice" way of expressing such types (once you've enabled the right extension), they wouldn't be inferred for your example.
Digging into point 2 may be valuable: given map f a b = (f a, f b), why doesn't Haskell infer a more general type than map :: (t1 -> t) -> t1 -> t1 -> (t, t)? The reason is that once you include higher-rank types, it's not typically possible to infer a single "most general" type for a given expression. Indeed, there are many possible higher-rank signatures for map given its simple definition above:
map :: (forall t. t -> t) -> x -> y -> (x, y)
map :: (forall t. t -> z) -> x -> y -> (z, z)
map :: (forall t. t -> [t]) -> x -> y -> ([x], [y])
(plus infinitely many more). But note that these are all incompatible with each other (none is more general than another). Given the first one you can call map id 1 'c', given the second one you can call map (\_ -> 1) 1 'c', and given the third one you can call map (\x -> [x]) 1 'c', but those arguments are only valid with each of those types, not with the other ones.
So even in Haskell you need to specify the particular polymorphic signature you want to use - this may be a bit of a surprise if you're coming from a more dynamic language. In Haskell, this is relatively clean (the syntax is what I've used above). However, in F# you'll have to jump through an additional hoop: there's no clean syntax for a "forall" type, so you'll have to create an additional nominal type instead. For example, to encode the first type above in F# I'd write something like this:
type Mapping = abstract Apply : 'a -> 'a
let map (m:Mapping) (a, b) = m.Apply a, m.Apply b
let x, y = map { new Mapping with member this.Apply x = x } (1, "test")
Note that in contrast to Gustavo's suggestion, you can define the first argument to map as an expression (rather than forcing it to be a member of some separate type). On the other hand, there's clearly a lot more boilerplate than would be ideal...
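For comparison, the third Haskell signature above (forall t. t -> [t]) can be encoded the same way; this is my own sketch along the same lines, not part of the original answer:

type ListMapping = abstract Apply : 'a -> 'a list
let mapList (m : ListMapping) (a, b) = m.Apply a, m.Apply b
let xs, ys = mapList { new ListMapping with member this.Apply x = [x] } (1, "test")
// xs = [1] : int list, ys = ["test"] : string list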
This problem has to do with rank-n types, which are supported in Haskell (through extensions) but not in the .NET type system.
One way I found to work around this limitation is to pass a type with a single method instead of a function, and then define an inline map function with static constraints. For example, let's suppose I have some generic functions, toString and toOption, and I want to be able to map them over a tuple of different types:
type ToString = ToString with static member inline ($) (ToString, x) = string x
type ToOption = ToOption with static member ($) (ToOption, x) = Some x
let inline mapTuple f (x, y) = (f $ x, f $ y)
let tuple1 = mapTuple ToString (true, 42)
let tuple2 = mapTuple ToOption (true, 42)
// val tuple1 : string * string = ("True", "42")
// val tuple2 : bool option * int option = (Some true, Some 42)
ToString returns the same type (string) while operating on arbitrary input types; ToOption returns two generic options of different types.
By using a binary operator, type inference creates the static constraints for you. I use $ because in Haskell it means apply, so a nice detail is that for Haskellers f $ x already reads as "apply x to f".
At the risk of stating the obvious, a good enough solution might be to have a mapTuple that takes two functions instead of one:
let mapTuple fa fb (a, b) = (fa a, fb b)
If your original f is generic, passing it as fa and fb will give you two concrete instantiations of the function with the types you're looking for. At worst, you just need to pass the same function twice when a and b are of the same type.
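For instance, a generic function such as Some can be passed twice and instantiated at two different types (a small illustrative example of my own):

let pair = mapTuple Some Some (1, "one")
// val pair : int option * string option = (Some 1, Some "one")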

Using functions before they are declared

For the sake of using literate programming (i.e. cweb) in F#, I need to be able to forward declare functions (i.e. use them before defining them). I came up with two ways, both of them unpleasing. Can you think of something better (easier to use for the programmer)?
Nice, but doesn't work with polymorphic functions
// This can be ugly
let declare<'a> = ref Unchecked.defaultof<'a>
// This has to be beautiful
let add = declare<float -> float>
let ``function I want to explain that depends on add`` nums = nums |> Seq.map !add
add := fun x -> x + 1.
Ugly, but works with everything
// This can be ugly
type Literate() =
    static member Declare<'a, 'b> (ref : obj ref) (x : 'a) : 'b =
        unbox <| (unbox<obj -> obj> !ref) (box x)
    static member Define<'a, 'b> (func : 'a -> 'b) (ref : obj ref) (f : 'a -> 'b) =
        ref := box (unbox<'a> >> f >> box)

// This has to be beautiful
let rec id (x : 'a) : 'a = Literate.Declare idImpl x
and idImpl = ref null

let f () = id 100 + id 200

Literate.Define id idImpl (fun x -> x)
I used a tool that follows the same ideas as literate programming when creating www.tryjoinads.org. A document is simply Markdown with code snippets that get turned into F# source code that you can run, and the snippets have to be in the correct order. (In some literate programming tools, the documentation is written in comments, but the idea is the same.)
Now, I think that making your code more complicated so that you can write it in a literate programming style (and document it) introduces a lot of accidental complexity and defeats the main purpose of literate programming.
So, if I wanted to solve this problem, I would extend my literate programming tool with some annotation that specifies the order of code blocks needed to make the script work (and a simple pre-processing tool can re-order them when generating the F# input). You can take a look at my build script for TryJoinads, which would be fairly easy to extend to do this.
The tool I used for TryJoinads already provides some meta-tags that can be used to hide code blocks from the output, so you can write something like:
## This is my page heading
[hide]
// This function will be hidden from the generated HTML
// but it is visible to the F# compiler
let add a b = a + b
Here is the description for the other function:
let functionThatUsesAdd x = add x x
And later on I can repeat `add` with more comments (and I can add an annotation
`module` to avoid conflicts with the previous declaration):
[module=Demo]
let add a b =
    // Add a and b
    a + b
This also isn't perfect, because you have to duplicate functions, but at least your generated blog post or HTML documentation will not be obscured by things that do not matter. But of course, adding some meta-command like module or hide to specify order of blocks wouldn't be too hard and it would be a clean solution.
In summary, I think you just need a better literate programming tool, not different F# code or a different F# language.
Perhaps I'm missing something, but why aren't you going all the way and 'doing it properly'?
Using the function first:
<<test.fs>>=
<<add>>
let inc = add 1
Declaring the function afterwards:
<<add>>=
let add a b = a + b
Since functions are first-class objects in F#, you can pass them around instead -- which presents a much nicer (and still immutable) solution than forward references.
let dependentFunction f nums = nums |> Seq.map f
let ``function I want to explain that depends on add`` nums =
    dependentFunction (fun x -> x + 1.) nums
Also, in most cases you should be able to use currying (partial function application) to simplify the code further but the type inference for seq<'T> is a little strange in F# because it's usually used as a flexible type (similar to covariance in C# 4.0). To illustrate:
// This doesn't work because of type inference on seq<'T>,
// but it should work with most other types.
let ``function I want to explain that depends on add`` =
    dependentFunction (fun x -> x + 1.)
Finally, a good rule of thumb for using ref or mutable in F# is that if you're only going to assign the value once (to initialize it), there's probably a cleaner, more functional way to write that code (passing the value as a function parameter (as above) and lazy are two such approaches). Obviously there are exceptions to this rule, but even then they should be used very sparingly.
As I said, this is wrong (and you should publish a blog article, or an FPish post, about why you're doing this), but here is my take:
let ``function I want to explain that depends on add`` (add : float -> float) nums =
    nums |> Seq.map add

let add = (+) 1.

let ``function I want to explain that depends on add`` =
    ``function I want to explain that depends on add`` add

How do I define y-combinator without "let rec"?

In almost all examples, a y-combinator in ML-type languages is written like this:
let rec y f x = f (y f) x
let factorial = y (fun f -> function 0 -> 1 | n -> n * f(n - 1))
This works as expected, but it feels like cheating to define the y-combinator using let rec ....
I want to define this combinator without using recursion, using the standard definition:
Y = λf·(λx·f (x x)) (λx·f (x x))
A direct translation is as follows:
let y = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
However, F# complains that it can't figure out the types:
let y = fun f -> (fun x -> f (x x)) (fun x -> f (x x));;
--------------------------------^
C:\Users\Juliet\AppData\Local\Temp\stdin(6,33): error FS0001: Type mismatch. Expecting a
'a
but given a
'a -> 'b
The resulting type would be infinite when unifying ''a' and ''a -> 'b'
How do I write the y-combinator in F# without using let rec ...?
As the compiler points out, there is no type that can be assigned to x so that the expression (x x) is well-typed (this isn't strictly true; you can explicitly type x as obj->_ - see my last paragraph). You can work around this issue by declaring a recursive type so that a very similar expression will work:
type 'a Rec = Rec of ('a Rec -> 'a)
Now the Y-combinator can be written as:
let y f =
    let f' (Rec x as rx) = f (x rx)
    f' (Rec f')
Unfortunately, you'll find that this isn't very useful because F# is a strict language,
so any function that you try to define using this combinator will cause a stack overflow.
Instead, you need to use the applicative-order version of the Y-combinator (\f.(\x.f(\y.(x x)y))(\x.f(\y.(x x)y))):
let y f =
    let f' (Rec x as rx) = f (fun y -> x rx y)
    f' (Rec f')
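With this version, the factorial from the question works as expected (a quick check of my own):

let factorial = y (fun f -> function 0 -> 1 | n -> n * f (n - 1))
// factorial 5 evaluates to 120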
Another option would be to use explicit laziness to define the normal-order Y-combinator:
type 'a Rec = Rec of ('a Rec -> 'a Lazy)
let y f =
    let f' (Rec x as rx) = lazy f (x rx)
    (f' (Rec f')).Value
This has the disadvantage that recursive function definitions now need an explicit force of the lazy value (using the Value property):
let factorial = y (fun f -> function | 0 -> 1 | n -> n * (f.Value (n - 1)))
However, it has the advantage that you can define non-function recursive values, just as you could in a lazy language:
let ones = y (fun ones -> LazyList.consf 1 (fun () -> ones.Value))
As a final alternative, you can try to better approximate the untyped lambda calculus by using boxing and downcasting. This would give you (again using the applicative-order version of the Y-combinator):
let y f =
    let f' (x:obj -> _) = f (fun y -> x x y)
    f' (fun x -> f' (x :?> _))
This has the obvious disadvantage that it will cause unneeded boxing and unboxing, but at least this is entirely internal to the implementation and will never actually lead to failure at runtime.
I would say it's impossible, and asked why, I would handwave and invoke the fact that simply typed lambda calculus has the normalization property. In short, all terms of the simply typed lambda calculus terminate (consequently Y can not be defined in the simply typed lambda calculus).
F#'s type system is not exactly the type system of simply typed lambda calculus, but it's close enough. F# without let rec comes really close to the simply typed lambda calculus -- and, to reiterate, in that language you cannot define a term that does not terminate, and that excludes defining Y too.
In other words, in F#, "let rec" needs to be a language primitive at the very least because even if you were able to define it from the other primitives, you would not be able to type this definition. Having it as a primitive allows you, among other things, to give a special type to that primitive.
EDIT: kvb shows in his answer that type definitions (one of the features absent from the simply typed lambda calculus but present in let-rec-less F#) allow you to get some sort of recursion. Very clever.
The case and let (rec) constructs in ML derivatives are what make them Turing complete; I believe they're based on System F rather than the simply typed lambda calculus, but the point is the same.
System F cannot assign a type to any fixed-point combinator; if it could, it would not be strongly normalizing.
Strongly normalizing means that every expression has exactly one normal form, where a normal form is an expression that cannot be reduced any further. This differs from the untyped calculus, where every expression has at most one normal form and may have none at all.
If a typed lambda calculus could construct a fixed-point operator in whatever way, it would be possible for an expression to have no normal form.
Another famous result, the Halting Problem, implies that strongly normalizing languages are not Turing complete: it is impossible to decide (as opposed to prove) for a Turing-complete language which of its programs will halt on which input. If a language is strongly normalizing, halting is decidable, because everything halts; our algorithm to decide this is the program true;.
To overcome this, ML derivatives extend System F with case and let (rec). Functions can thus refer to themselves in their own definitions, which in effect means these languages are no longer pure lambda calculi: it's no longer possible to rely on anonymous functions alone for all computable functions. They can thus enter infinite loops again and regain their Turing completeness.
Short answer: You can't.
Long answer:
The simply typed lambda calculus is strongly normalizing. This means it's not Turing equivalent. The reason for this basically boils down to the fact that a Y combinator must either be primitive or defined recursively (as you've found). It simply cannot be expressed in System F (or simpler typed calculi). There's no way around this (it's been proven, after all). The Y combinator you can implement works exactly the way you want, though.
I would suggest you try Scheme if you want a real Church-style Y combinator. Use the applicative version given above, as other versions won't work unless you explicitly add laziness or use a lazy Scheme interpreter. (Scheme technically isn't completely untyped, but it's dynamically typed, which is good enough for this.)
See this for the proof of strong normalization:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.127.1794
After thinking some more, I'm pretty sure that adding a primitive Y combinator that behaves exactly the way the letrec defined one does makes System F Turing complete. All you need to do to simulate a Turing machine then is implement the tape as an integer (interpreted in binary) and a shift (to position the head).
Simply define a function taking its own type as a record, like in Swift (there it's a struct) :)
Here, Y (uppercase) is semantically defined as a function that can be called with its own type. In F# terms, it is defined as a record containing a function named call, so to call a y defined with this type, you actually have to call y.call :)
type Y = { call: Y -> (int -> int) }

let fibonacci n =
    let makeF f: int -> int =
        fun x ->
            if x = 0 then 0 else if x = 1 then 1 else f(x - 1) + f(x - 2)
    let y = { call = fun y -> fun x -> (makeF (y.call y)) x }
    (y.call y) n
It's not supremely elegant to read but it doesn't resort to recursion for defining a y combinator that is supposed to provide recursion all by itself ^^
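As a quick sanity check of my own, evaluated in FSI:

fibonacci 10
// val it : int = 55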
