Hey guys, I'm trying to get cozy with functional programming (particularly with F#), and I've hit a wall when it comes to building tail-recursive functions. I'm pretty good at turning basic recursion (where the function calls itself once per invocation) into tail recursion, but I now have a slightly more complicated situation.
In my case, the function must accept a single list as a parameter. When the function is called, I have to remove the first element from the list and then recur on the remainder of the list; then I need to apply the first element which I removed, in some way, to the result of the recursion. Next, I remove the second element and do the same thing (note: when I say "remove the second element", I mean from the original list, so the list passed to the recursive call still includes the first element). I do the same for the third, fourth, etc. elements of the list.
Is there a way to convert the above situation into a tail-recursive function? Maybe nested tail-recursive functions??? Thank you for any answers.
Okay, so here's my basic code. This particular one is a permutation generator (I'm not too concerned with the permutation part, though - it's the recursion I'd like to focus on):
let rec permutationsOther str =
    match str with
    | value :: [] ->
        [[value]]
    | _ ->
        let list =
            str
            |> List.map (fun a ->                                // this applies the "remove" part for every element a
                let lst = List.filter (fun b -> b <> a) str      // this part removes element a from the list
                let permutedLst = permutationsOther lst          // recursive call
                consToAll a permutedLst)                         // consToAll is my own function which conses a onto every list in permutedLst
        List.reduce (fun acc elem -> elem @ acc) list            // flatten the list of lists produced by map into a single list
I hope this is clear enough - I'll be happy to provide clarifications if needed.
By the way, I have found a way to rewrite this particular function so that it only uses a single recursion, but it was more of a fluke than an informed decision. Still, it has encouraged me to think that there may be a general method of turning multiple recursion into single recursion, though I have not yet found it.
Conversion to CPS should do the trick:
NOTE 1: The source of the sample was typed directly in the browser, so it may contain errors :(. But I hope it demonstrates the general idea.
NOTE 2: The consToAll function should be converted to CPS too: consToAll: 'T -> 'T list list -> ('T list list -> 'R) -> 'R (a possible sketch is shown after the code below).
let remove x l = List.filter ((<>) x) l // from original post: should duplicates also be removed ???
let permute l =
    let rec loop k l =
        match l with
        | [] -> k []
        | [value] -> k [[value]]
        | _ -> filter l [] l (fun r -> r |> List.reduce (fun acc elem -> elem @ acc) |> k)
    and filter l acc orig fk =
        match l with
        | [] -> fk acc
        | x :: xs ->
            remove x orig
            |> loop (fun res ->
                consToAll x res (fun rs -> filter xs (rs :: acc) orig fk))
    loop id l
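For reference, here is one possible way consToAll itself could be written in CPS, matching the signature from NOTE 2 - a minimal sketch added for illustration, not code from the original post:
// consToAll in CPS: conses x onto every list in lss, then passes the result to the continuation k
let rec consToAll x lss k =
    match lss with
    | [] -> k []
    | ls :: rest ->
        // handle the rest first, then prepend (x :: ls) to the accumulated result inside the continuation
        consToAll x rest (fun acc -> k ((x :: ls) :: acc))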
Related
I am trying to convert the following normal-recursive code to tail-recursive in F#, but I am failing miserably.
let rec insert elem lst =
    match lst with
    | [] -> [elem]
    | hd :: tl ->
        if hd > elem then
            elem :: lst
        else
            hd :: (insert elem tl)
let lst1 = []
let lst2 = [1;2;3;5]
printfn "\nInserting 4 in an empty list: %A" (insert 4 lst1)
printfn "\nInserting 4 in a sorted list: %A" (insert 4 lst2)
Can you guys help? Unfortunately I am a beginner in F#. Also, can anyone point me to a good tutorial to understand tail recursion?
The point of tail recursion is the following: the last operation before returning from a function is a call to itself; this is called a tail call, and it is where tail recursion gets its name from (the recursive call is in the last, i.e. tail, position).
Your function is not tail recursive because at least one of its branches has an operation after the recursive call (the list cons operator).
The usual way of converting a recursive function into a tail-recursive function is to add an argument to accumulate intermediate results (the accumulator). When it comes to lists, and when you realize that the only elementary list operation is prepending an element, this also means that after you are through with processing your list, it will be reversed, and thus the resulting accumulator will usually have to be reversed again.
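As a minimal illustration of that pattern (my own sketch, not part of the original answer), here is a tail-recursive map that accumulates results in reverse and reverses them once at the end:
let mapTail f lst =
    let rec loop acc = function
        | [] -> List.rev acc                // items were prepended, so reverse once at the end
        | x :: xs -> loop (f x :: acc) xs   // tail call: nothing left to do after it
    loop [] lst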
With all these points in mind, and given that we do not want to change the function's public interface by adding a parameter that is superfluous from the caller's point of view, we move the real work to an internal subfunction. This particular function is slightly more complicated because after the element has been inserted, there is nothing else to do but concatenate the two partial lists again, one of which is now in reverse order while the other is not. We create a second internal function to handle that part, and so the whole function looks as follows:
let insert elm lst =
    let rec iter acc = function
        | [] -> List.rev (elm :: acc)
        | (h :: t) as ls ->
            if h > elm then finish (elm :: ls) acc
            else iter (h :: acc) t
    and finish acc = function
        | [] -> acc
        | h :: t -> finish (h :: acc) t
    iter [] lst
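For example, checking it against the inputs from the question:
insert 4 []            // [4]
insert 4 [1; 2; 3; 5]  // [1; 2; 3; 4; 5]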
For further studying, Scott Wlaschin's F# for Fun and Profit is a great resource, and tail recursion is handled in a larger chapter about recursive types and more: https://fsharpforfunandprofit.com/posts/recursive-types-and-folds
I was trying to write a generic mapFoldWhile function, which is just mapFold but requires the state to be an option and stops as soon as it encounters a None state.
I don't want to use mapFold because it will transform the entire list, but I want it to stop as soon as an invalid state (i.e. None) is found.
This was my first attempt:
let mapFoldWhile (f : 'State option -> 'T -> 'Result * 'State option) (state : 'State option) (list : 'T list) =
    let rec mapRec f state list results =
        match list with
        | [] -> (List.rev results, state)
        | item :: tail ->
            let (result, newState) = f state item
            match newState with
            | Some x -> mapRec f newState tail (result :: results)
            | None -> ([], None)
    mapRec f state list []
The List.rev irked me, since the point of the exercise was to exit early, and constructing a whole new list just to reverse it ought to make things even slower.
So I looked up what F#'s very own map does, which was:
let map f list = Microsoft.FSharp.Primitives.Basics.List.map f list
The ominous Microsoft.FSharp.Primitives.Basics.List.map can be found here and looks like this:
let map f x =
    match x with
    | [] -> []
    | [h] -> [f h]
    | (h :: t) ->
        let cons = freshConsNoTail (f h)
        mapToFreshConsTail cons f t
        cons
The consNoTail stuff is also in this file:
// optimized mutation-based implementation. This code is only valid in fslib, where mutation of private
// tail cons cells is permitted in carefully written library code.
let inline setFreshConsTail cons t = cons.(::).1 <- t
let inline freshConsNoTail h = h :: (# "ldnull" : 'T list #)
So I guess it turns out that F#'s immutable lists are actually mutable because performance? I'm a bit worried about this, having used the prepend-then-reverse list approach as I thought it was the "way to go" in F#.
I'm not very experienced with F# or functional programming in general, so maybe (probably) the whole idea of creating a new mapFoldWhile function is the wrong thing to do, but then what am I to do instead?
I often find myself in situations where I need to "exit early" because a collection item is "invalid" and I know that I don't have to look at the rest. I'm using List.pick or Seq.takeWhile in some cases, but in other instances I need to do more (mapFold).
Is there an efficient solution to this kind of problem (mapFoldWhile in particular and "exit early" in general) with functional programming concepts, or do I have to switch to an imperative solution / use a System.Collections.Generic.List?
In most cases, using List.rev is a perfectly sufficient solution.
You are right that the F# core library uses mutation and other dirty hacks to squeeze some more performance out of the F# list operations, but I think the micro-optimizations done there are not a particularly good example to follow. F# list functions are used almost everywhere, so it might be a good trade-off there, but I would not follow it in most situations.
Running your function with the following:
let l = [ 1 .. 1000000 ]
#time
mapFoldWhile (fun s v -> 0, s) (Some 1) l
I get ~240ms on the second line when I run the function without changes. When I just drop List.rev (so that it returns the data in the other order), I get around ~190ms. If you are really calling the function frequently enough that this matters, then you'd have to use mutation (actually, your own mutable list type), but I think that is rarely worth it.
For general "exit early" problems, you can often write the code as a composition of Seq.scan and Seq.takeWhile. For example, say you want to sum numbers from a sequence until you reach 1000. You can write:
input
|> Seq.scan (fun sum v -> v + sum) 0
|> Seq.takeWhile (fun sum -> sum < 1000)
Using Seq.scan generates a sequence of partial sums over the whole input, but since this is generated lazily, Seq.takeWhile stops the computation as soon as the exit condition is reached.
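For instance, with a small made-up input, the composition stops as soon as the running sum crosses the limit:
let input = [300; 400; 500; 100]

input
|> Seq.scan (fun sum v -> v + sum) 0
|> Seq.takeWhile (fun sum -> sum < 1000)
|> List.ofSeq
// [0; 300; 700] - the 1200 sum fails the predicate, and the trailing 100 is never read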
I'm trying to process a sequence of items whereby the process step relies on some additional cumulative state from the prior items (ordering isn't important).
Essentially:
I have a Seq<'A>
I have a (Type * int) list referred to as the skip list
I have a process step 'A -> (Type * int) list -> 'B option
This takes the current skip list
The method in question is essentially:
Seq<'A> -> (Type * int) list -> (Type * int) list
So we take a bunch of input items and an initial skip list and produce a final skip list.
I've basically got the following so far:
sourceItems
|> Seq.map (fun srcItem -> (srcItem, outerSkip))
|> Seq.unfold (fun elem ->
    match elem with
    | SeqEmpty -> None
    | SeqCons((srcItem, skip), tail) ->
        match process(srcItem, skip) with
        | Some targetItem -> Some((Some targetItem, skip), tail)
        | None -> Some((None, skip), tail |> Seq.map (fun (i, skp) -> (i, (srcItem.GetType(), liD srcItem) :: skp))))
With SeqEmpty and SeqCons being active patterns:
let (|SeqEmpty|SeqCons|) (xs: 'a seq) =
    if Seq.isEmpty xs then SeqEmpty
    else SeqCons(Seq.head xs, Seq.skip 1 xs)
My process so far basically just starts off with the items and adds the initial skip to each, unfolds and maps the remaining seq to have the same item but with the new skip list.
I have a number of problems with this:
It's ugly and confusing as all hell
I'm sure it's less than performant
Ideally I'd like to avoid the need to map the items to include the initial skip list in the first place, but then I'm not sure how I'd get that into the unfold aside from mapping it into just the first element in the sequence.
Possible Alternative Solution
Based on a different approach taken in List processing with intermediate state (Mark's answer)
I've been able to use:
items
|> Seq.fold (fun (skip) srcItem ->
    match process(srcItem, skip) with
    | None -> (srcItem.GetType(), liD srcItem) :: skip
    | Some tgtItem ->
        skip
    ) outerSkip
Which, aside from all the extra handling needed when there is an item, appears to actually do the trick!
This is significantly simpler than the unfold approach, but I'm a little unclear on exactly how it's working.
I'm assuming that fun (skip) srcItem -> ... is essentially creating a function expecting an additional parameter which, through the magic of something (partial application?), I'm able to provide to fold using outerSkip - is this right?
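For reference, that is essentially how fold is meant to be used: the folder function takes the accumulated state as its first argument and the current item as its second, and the initial state is passed as a separate argument to fold itself, so no special trickery is involved. A trivial sketch:
// Seq.fold : ('State -> 'T -> 'State) -> 'State -> seq<'T> -> 'State
let total = [1; 2; 3] |> Seq.fold (fun acc item -> acc + item) 0   // 6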
I ended up adopting the fold strategy as mentioned in the question.
The final code is:
let result =
    items
    |> Seq.fold (fun (skip, targetItems) srcItem ->
        match process(srcItem, skip) with
        | None -> ((srcItem.GetType(), getId srcItem) :: skip, targetItems)
        | Some tgtItem -> (skip, tgtItem :: targetItems)) (outerSkip, [])
With result being a tuple of type (Type * int) list * obj list, which is exactly what I wanted.
I'm then able to take action on the target items, and to just return the final skip list.
This is a huge improvement over the unfold method I was using previously.
I have a list of lists
LL = [[1;2;3];[4;5;6];[7;8;9]]
And I would like it to look like this
LSimple= [1;2;3;4;5;6;7;8;9]
That's as simple as I can ask it, but maybe rewording helps. How can I process this list of lists and create a simple list from it?
List.concat LL
Will do what you want. The X.concat family of functions concatenates any sequence of collections of type X into a single X, where X may be List, Array, Seq, or even String (with a given separator).
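A few quick illustrations of that family:
List.concat [[1; 2; 3]; [4; 5; 6]; [7; 8; 9]]   // [1; 2; 3; 4; 5; 6; 7; 8; 9]
Array.concat [ [|1; 2|]; [|3|] ]                // [|1; 2; 3|]
String.concat ", " ["a"; "b"; "c"]              // "a, b, c"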
Depending on the implementation of List.concat, it could be highly inefficient: if it is implemented with the naive approach, it will use the naive way of appending at every step.
For the simple case, assume that we have two lists xs and ys that we wish to append together.
Then the naive implementation would be
let rec append =
    function
    | [], ys -> ys
    | xs, [] -> xs
    | x :: xs, y :: ys -> append (x :: append (xs, [y]), ys)
This is the most inefficient implementation there is; another implementation simply replaces the return value of the last case with
x::append(xs,y::ys)
Both produce the same result, but the second runs in O(n) time with a small constant k (roughly k*n), while the first re-traverses the whole first list for every element of the second and is therefore considerably more expensive.
The most efficient way, with roughly 2*k*n running time, is given below, where n is the number of elements and k is the cost of consing a single element onto a list.
let collapse lst =
    let rec help acc =
        function
        | [] ->                        // conversion point
            List.rev acc
        | [] :: xss ->                 // skip empty inner lists
            help acc xss
        | (x :: []) :: xss ->          // singleton case
            help (x :: acc) xss
        | (x :: xs) :: xss ->          // general case
            help (x :: acc) (xs :: xss)
    help [] lst                        // return
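A quick check with the list of lists from the question:
collapse [[1; 2; 3]; [4; 5; 6]; [7; 8; 9]]   // [1; 2; 3; 4; 5; 6; 7; 8; 9]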
In general, don't trust libraries blindly when it comes to efficiency, and be aware that the people who write them are just humans like you, so there can be bugs.
Possible Duplicate:
Linked list partition function and reversed results
Actually I don't care about the input type or the output type; any of seq, array, or list will do (it doesn't have to be generic). Currently my code takes a list as input and (list * list) as output:
let takeWhile predicator list =
    let rec takeWhileRec newList remain =
        match remain with
        | [] -> (newList |> List.rev, remain)
        | x :: xs ->
            if predicator x then
                takeWhileRec (x :: newList) xs
            else
                (newList |> List.rev, remain)
    takeWhileRec [] list
However, there is a pitfall. As far as I can see, List.rev is O(n^2), which would likely dominate the overall speed? I think it is even slower than the ugly solution: Seq.takeWhile, then count, and then take the tail n times... which is still O(n).
(If there is a C# List, then I would use that without having to reverse it...)
A side question: what's the difference between Array.ofList and List.toArray, or more generally, between A.ofB and B.ofA across List, Seq, and Array?
Is seq myList identical to List.toSeq myList?
Another side question: does nested Seq.append have the same complexity as Seq.concat?
e.g.
Seq.append (Seq.append (Seq.append a b) c) d // looks awful
Seq.concat [a;b;c;d]
1) The relevant implementation of List.rev is in local.fs in the compiler source - it is:
// optimized mutation-based implementation. This code is only valid in fslib, where mutation of private
// tail cons cells is permitted in carefully written library code.
let rec revAcc xs acc =
    match xs with
    | [] -> acc
    | h :: t -> revAcc t (h :: acc)

let rev xs =
    match xs with
    | [] -> xs
    | [_] -> xs
    | h1 :: h2 :: t -> revAcc t [h2; h1]
The comment does seem odd, as there is no obvious mutation here. Note that this is in fact O(n), not O(n^2).
2) As pad said, there is no difference - I prefer to use the to... form, as I think
A
|> List.map ...
|> List.toArray
looks nicer than
A
|> List.map ...
|> Array.ofList
but that is just me.
3)
Append (compiler source):
[<CompiledName("Append")>]
let append (source1: seq<'T>) (source2: seq<'T>) =
    checkNonNull "source1" source1
    checkNonNull "source2" source2
    fromGenerator(fun () -> Generator.bindG (toGenerator source1) (fun () -> toGenerator source2))
Note that for each append we get an extra generator that has to be walked through. In comparison, the concat implementation will have just one extra function rather than n, so using concat is probably better.
To answer your questions:
1) The time complexity of List.rev is O(n), and the worst-case complexity of takeWhile is also O(n), so using List.rev doesn't increase the complexity of the function. Using a ResizeArray could help you avoid List.rev, but you have to tolerate a bit of mutation.
let takeWhile predicate list =
    let rec loop (acc: ResizeArray<_>) rest =
        match rest with
        | x :: xs when predicate x -> acc.Add(x); loop acc xs
        | _ -> (acc |> Seq.toList, rest)
    loop (ResizeArray()) list
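For example, with a small hypothetical input:
takeWhile (fun x -> x < 3) [1; 2; 5; 1]   // ([1; 2], [5; 1])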
2) There is no difference. Array.ofList and List.toArray use the same function internally (see here and here).
3) I think Seq.concat has the same complexity as a chain of Seq.append. In the context of List and Array, concat is more efficient than append because you have more information up front and can pre-allocate space for the output.
How about this:
let takeWhile pred =
    let cont = ref true
    List.partition (pred >> fun r -> !cont && (cont := r; r))
It uses a single library function, List.partition, which is efficiently implemented.
Hope this is what you meant :)
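A quick check with a hypothetical input, plus one caveat worth noting: because the ref cell is created when takeWhile is applied to the predicate alone, a partially applied takeWhile pred should not be reused across calls (the flag stays false after a run that hits a failing element), and unlike a true takeWhile it still visits every element of the list, albeit cheaply once the flag is false.
takeWhile (fun x -> x < 3) [1; 2; 5; 1]   // ([1; 2], [5; 1])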