Edit: I discovered a partial answer to my own question in the process of writing this, but I think it can easily be improved upon so I will post it anyway. Maybe there's a better solution out there?
I am looking for an easy way to define recursive functions in a let form without resorting to letfn. This is probably an unreasonable request, but the reason I am looking for this technique is that I have a mix of data and recursive functions that depend on each other in a way that requires a lot of nested let and letfn statements.
I wanted to write the recursive functions that generate lazy sequences like this (using the Fibonacci sequence as an example):
(let [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
  (take 10 fibs))
But it seems that in Clojure fibs cannot use its own symbol during binding. The obvious way around it is using letfn:
(letfn [(fibo [] (lazy-cat [0 1] (map + (fibo) (rest (fibo)))))]
  (take 10 (fibo)))
But as I said earlier this leads to a lot of cumbersome nesting and alternating let and letfn.
To do this without letfn and using just let, I started by writing something that uses what I think is the U-combinator (just heard of the concept today):
(let [fibs (fn [fi] (lazy-cat [0 1] (map + (fi fi) (rest (fi fi)))))]
  (take 10 (fibs fibs)))
But how do I get rid of the redundancy of (fi fi)?
It was at this point that I discovered the answer to my own question, after an hour of struggling and incrementally adding bits to the combinator Q.
(let [Q (fn [r] ((fn [f] (f f)) (fn [y] (r (fn [] (y y))))))
      fibs (Q (fn [fi] (lazy-cat [0 1] (map + (fi) (rest (fi))))))]
  (take 10 fibs))
What is this Q combinator called that I am using to define a recursive sequence? It looks like the Y combinator with no argument x. Is it the same?
(defn Y [r]
  ((fn [f] (f f))
   (fn [y] (r (fn [x] ((y y) x))))))
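For comparison, here is a rough sketch of the two combinators in Python (my illustration, not from the original post; the examples are arbitrary). The Q above is essentially the applicative-order Y specialised to zero-argument self-references (thunks), which is why the x argument disappears:

def Y(r):
    # applicative-order Y: r receives a one-argument self-reference
    return (lambda f: f(f))(lambda y: r(lambda x: y(y)(x)))

def Q(r):
    # same construction, but the self-reference is a thunk taking no arguments
    return (lambda f: f(f))(lambda y: r(lambda: y(y)))

fact = Y(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
print(fact(10))        # 3628800

countdown = Q(lambda self: lambda n: [] if n == 0 else [n] + self()(n - 1))
print(countdown(3))    # [3, 2, 1]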
Is there another function in clojure.core or clojure.contrib that provides the functionality of Y or Q? I can't imagine what I just did was idiomatic...
letrec
I have written a letrec macro for Clojure recently, here's a Gist of it. It acts like Scheme's letrec (if you happen to know that), meaning that it's a cross between let and letfn: you can bind a set of names to mutually recursive values, without the need for those values to be functions (lazy sequences are ok too), as long as it is possible to evaluate the head of each item without referring to the others (that's Haskell -- or perhaps type-theoretic -- parlance; "head" here might stand e.g. for the lazy sequence object itself, with -- crucially! -- no forcing involved).
You can use it to write things like
(letrec [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
  fibs)
which is normally only possible at top level. See the Gist for more examples.
As pointed out in the question text, the above could be replaced with
(letfn [(fibs [] (lazy-cat [0 1] (map + (fibs) (rest (fibs)))))]
  (fibs))
for the same result in exponential time; the letrec version has linear complexity (as does a top-level (def fibs (lazy-cat [0 1] (map + fibs (rest fibs)))) form).
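As a rough analogy for the complexity claim (my sketch, in Python, not the letrec implementation): the def/letrec version behaves like the memoised function below, where each element is computed once and shared, while the nullary letfn version redoes shared work, like the naive recursion:

from functools import lru_cache

def fib_naive(n):
    # recomputes shared subproblems, like the nullary (fibs) version: exponential
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_shared(n):
    # results are cached and shared, like the self-referential lazy seq: linear
    return n if n < 2 else fib_shared(n - 1) + fib_shared(n - 2)

print([fib_shared(i) for i in range(10)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]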
iterate
Self-recursive seqs can often be constructed with iterate -- namely when a fixed range of look-behind suffices to compute any given element. See clojure.contrib.lazy-seqs for an example of how to compute fibs with iterate.
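The idea, sketched here in Python rather than Clojure (my own illustration, not the clojure.contrib code): carry the fixed look-behind as explicit state in the iterated value and project each element out of it:

from itertools import islice

def iterate(f, x):
    # infinite generator: x, f(x), f(f(x)), ...
    while True:
        yield x
        x = f(x)

# the state is the pair (fib(n), fib(n+1)); each step moves to (fib(n+1), fib(n) + fib(n+1))
fibs = (a for a, b in iterate(lambda p: (p[1], p[0] + p[1]), (0, 1)))
print(list(islice(fibs, 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]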
clojure.contrib.seq
c.c.seq provides an interesting function called rec-seq, enabling things like
(take 10 (cseq/rec-seq fibs (map + fibs (rest fibs))))
It has the limitation of only allowing one to construct a single self-recursive sequence, but it might be possible to lift from its source some implementation ideas enabling more diverse scenarios. If a single self-recursive sequence not defined at top level is what you're after, this has to be the idiomatic solution.
combinators
As for combinators such as those displayed in the question text, it is important to note that they are hampered by the lack of TCO (tail call optimisation) on the JVM (and thus in Clojure, which elects to use the JVM's calling conventions directly for top performance).
top level
There's also the option of putting the mutually recursive "things" at top level, possibly in their own namespace. This doesn't work so great if those "things" need to be parameterised somehow, but namespaces can be created dynamically if need be (see clojure.contrib.with-ns for implementation ideas).
final comments
I'll readily admit that the letrec thing is far from idiomatic Clojure and I'd avoid using it in production code if anything else would do (and since there's always the top level option...). However, it is (IMO!) nice to play with and it appears to work well enough. I'm personally interested in finding out how much can be accomplished without letrec and to what degree a letrec macro makes things easier / cleaner... I haven't formed an opinion on that yet. So, here it is. Once again, for the single self-recursive seq case, iterate or contrib might be the best way to go.
fn takes an optional name argument, with that name bound to the function in its body. Using this feature, you could write fibs as:
(def fibs ((fn generator [a b] (lazy-seq (cons a (generator b (+ a b))))) 0 1))
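A rough Python analogue of the same trick (my sketch, not from the answer): a locally named recursive generator plays the role of the fn's optional name:

from itertools import islice

def fibs():
    def generator(a, b):
        # the local name lets the function refer to itself, like the named fn
        yield a
        yield from generator(b, a + b)
    return generator(0, 1)

print(list(islice(fibs(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]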
Related
On Wikipedia it says that using call/cc you can implement the amb operator for nondeterministic choice. My question is: how would you implement the amb operator in a language where the only support for continuations is writing in continuation-passing style, as in Erlang?
If you can encode the constraints for what constitutes a successful solution or choice as guards, list comprehensions can be used to generate solutions. For example, the list comprehension documentation shows an example of solving Pythagorean triples, which is a problem frequently solved using amb (see for example exercise 4.35 of SICP, 2nd edition). Here's the more efficient solution, pyth1/1, shown on the list comprehensions page:
pyth1(N) ->
    [ {A,B,C} ||
        A <- lists:seq(1,N-2),
        B <- lists:seq(A+1,N-1),
        C <- lists:seq(B+1,N),
        A+B+C =< N,
        A*A+B*B == C*C
    ].
One important aspect of amb is efficiently searching the solution space, which is done here by generating possible values for A, B, and C with lists:seq/2 and then constraining and testing those values with guards. Note that the page also shows a less efficient solution named pyth/1 where A, B, and C are all generated identically using lists:seq(1,N); that approach generates all permutations but is slower than pyth1/1 (for example, on my machine, pyth(50) is 5-6x slower than pyth1(50)).
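For comparison, here is the same generate-and-test shape sketched in Python (my illustration, not part of the answer): generate candidates from nested ranges and filter them with the constraints, just as pyth1/1 does with guards:

def pyth1(n):
    return [(a, b, c)
            for a in range(1, n - 1)          # A <- lists:seq(1, N-2)
            for b in range(a + 1, n)          # B <- lists:seq(A+1, N-1)
            for c in range(b + 1, n + 1)      # C <- lists:seq(B+1, N)
            if a + b + c <= n
            if a*a + b*b == c*c]

print(pyth1(30))  # [(3, 4, 5), (5, 12, 13), (6, 8, 10)]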
If your constraints can't be expressed as guards, you can use pattern matching and try/catch to deal with failing solutions. For example, here's the same algorithm in pyth/1 rewritten as regular functions triples/1 and the recursive triples/5:
-module(pyth).
-export([triples/1]).
triples(N) ->
    triples(1,1,1,N,[]).

triples(N,N,N,N,Acc) ->
    lists:reverse(Acc);
triples(N,N,C,N,Acc) ->
    triples(1,1,C+1,N,Acc);
triples(N,B,C,N,Acc) ->
    triples(1,B+1,C,N,Acc);
triples(A,B,C,N,Acc) ->
    NewAcc = try
                 true = A+B+C =< N,
                 true = A*A+B*B == C*C,
                 [{A,B,C}|Acc]
             catch
                 error:{badmatch,false} ->
                     Acc
             end,
    triples(A+1,B,C,N,NewAcc).
We're using pattern matching for two purposes:
In the function heads, to control values of A, B and C with respect to N and to know when we're finished
In the body of the final clause of triples/5, to assert that conditions A+B+C =< N and A*A+B*B == C*C match true
If both conditions match true in the final clause of triples/5, we insert the solution into our accumulator list, but if either fails to match, we catch the badmatch error and keep the original accumulator value.
Calling triples/1 yields the same result as the list comprehension approaches used in pyth/1 and pyth1/1, but it's also half the speed of pyth/1. Even so, with this approach any constraint could be encoded as a normal function and tested for success within the try/catch expression.
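Finally, since the question asks specifically about getting amb from nothing but continuation-passing style, here is a small sketch of that idea (mine, in Python rather than Erlang): each amb call receives a failure continuation, and backtracking is simply calling it:

def amb(choices, k, fail):
    # try each choice in turn; k(choice, retry) continues, calling retry() backtracks here
    def try_from(i):
        if i == len(choices):
            return fail()
        return k(choices[i], lambda: try_from(i + 1))
    return try_from(0)

# find all (x, y) with x < y and x + y == 7, for x, y in 1..6
solutions = []
amb(range(1, 7), lambda x, fail_x:
    amb(range(x + 1, 7), lambda y, fail_y:
        (solutions.append((x, y)) or fail_y()) if x + y == 7 else fail_y(),
        fail_x),
    lambda: None)
print(solutions)  # [(1, 6), (2, 5), (3, 4)]

Here the guards from the comprehension become ordinary tests that either record a solution or call the failure continuation to backtrack.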
(define make (lambda (x) (lambda (y) (cons x (list y)))))
(let ((x 7)
      (p (make 4)))
  (cons x (p 0)))
I'm new to Scheme and functional programming, so I am a bit clunky at walking through programs, but I get that with deep binding this program would return (7 4 0). Makes sense. What would this program do using shallow binding? This may sound dumb, but is the p in the line with cons a redefinition? In that case, would we return (7 0)?
Basically, I understand the concept of deep v. shallow binding, but I feel like I'm jumbling it up when looking at Scheme because I'm not crazy familiar with it.
Deep or shallow binding is an implementation technique and cannot be observed from inside the program. The difference for the programmer is between lexical and dynamic scoping rules, but both can be implemented with either of the two techniques (i.e. one notion has got nothing to do with the other).
Deep or shallow refers to the choice of stack frame to hold a given outer scoped variable's binding. In deep binding there is a chain of frames to be accessed until the correct frame is entered holding the record for the variable; in shallow binding all bindings are present in one, shallow environment. See also "rerooting" (which only makes sense in the context of shallow binding implementation of lexical scoping).
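A rough sketch of the difference (mine, in Python; both halves are stripped down to the lookup strategy only):

# Deep binding: the environment is a chain of frames; lookup walks outward
# until it finds a frame that binds the name.
def deep_lookup(name, frame):
    while frame is not None:
        bindings, parent = frame
        if name in bindings:
            return bindings[name]
        frame = parent
    raise NameError(name)

outer = ({'x': 7}, None)
inner = ({'y': 0}, outer)        # chain: inner -> outer
print(deep_lookup('x', inner))   # 7, found after following one link

# Shallow binding: one flat table of current values; binding a variable saves the
# old entry and overwrites it, so lookup is a single table access with no chain.
table = {'x': 7}
table['y'] = 0                   # enter a scope binding y = 0 (saving any old entry)
print(table['x'], table['y'])    # 7 0
del table['y']                   # leave the scope: restore the previous state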
To your specific question: under lexical scoping rules your code would return (7 4 0), and under dynamic scoping (7 7 0), because the call ((lambda (y) (list x y)) 0) is done inside the dynamic scope of the x=7 binding (as a side note, (cons x (list y)) is the same as (list x y)):
x = 7
p = (lambda (y) (list x y)) ; x=4 is unused, in p=(make 4)
(cons 7 (p 0)) == (list 7 7 0) ; 'x' in this line and in lambda body for p
; both refer to same binding that is
; in effect, i.e. x=7
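For comparison, the same program in Python, which is lexically scoped like Scheme (my sketch): the closure keeps its own x = 4, so we get the (7 4 0) answer:

def make(x):
    return lambda y: [x, y]   # the returned function closes over this x

x = 7
p = make(4)
print([x] + p(0))  # [7, 4, 0]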
NB same terms (deep/shallow binding) are used in other language(s) now with completely different meaning (they do have something to do with the scoping rules there), which I don't care to fully understand. This answer is given in the context of Scheme.
Reference: Shallow Binding in LISP 1.5 by Baker, Henry G. Jr., 1977.
See this wikipedia article for a discussion on scoping (it mentions lexical/dynamic scoping and deep/shallow binding) bearing in mind that Scheme is lexically scoped. Will Ness' answer provides additional information.
For now, let's see step-by-step what's happening in this snippet of code:
; a variable called x is defined and assigned the value 7
(let ((x 7)
      ; make is called and returns a procedure p; inside it, the x variable has the value 4
      (p (make 4)))
  ; 7 is prepended to the result of calling p with y = 0
  (cons x (p 0)))
=> '(7 4 0)
Notice that in the second line a closure is created in the lambda returned by make, and the variable x inside will be assigned the value 4. This x has nothing to do with the outer x, because Scheme is lexically scoped.
The last line is not a redefinition, as mentioned in the previous paragraph the x inside make is different from the x defined in the let expression.
I'm seeing rather surprising behavior from ctx-solver-simplify, where the order of parameters to z3.And() seems to matter, using the latest version of Z3 from the master branch of https://git01.codeplex.com/z3 (89c1785b):
#!/usr/bin/python
import z3
a, b = z3.Bools('a b')
p = z3.Not(a)
q = z3.Or(a, b)
for e in z3.And(p, q), z3.And(q, p):
    print e, '->', z3.Tactic('ctx-solver-simplify')(e)
produces:
And(Not(a), Or(a, b)) -> [[Not(a), Or(a, b)]]
And(Or(a, b), Not(a)) -> [[b, Not(a)]]
Is this a bug in Z3?
No, this is not a bug. The tactic ctx-solver-simplify is very expensive and inherently asymmetric. That is, the order in which the sub-formulas are visited affects the final result. This tactic is implemented in the file src/smt/tactic/ctx_solver_simplify_tactic.cpp. The code is quite readable. Note that it uses an "SMT" solver (m_solver) and makes several calls to m_solver.check() while traversing the input formula. Each one of these calls can be quite expensive. For particular problem domains, we can implement a version of this tactic that is even more expensive and avoids the asymmetry described in your question.
EDIT:
You may also consider the tactic ctx-simplify; it is cheaper than ctx-solver-simplify, but it is symmetric. The tactic ctx-simplify essentially applies rules such as:
A \/ F[A] ==> A \/ F[false]
x != c \/ F[x] ==> F[c]
Where F[A] is a formula that may contain A. It is cheaper than ctx-solver-simplify because it does not invoke an SMT solver while traversing the formula. Here is an example using this tactic (also available online):
from z3 import *  # the original snippet assumes the z3py names are already in scope

a, b = Bools('a b')
p = Not(a)
q = Or(a, b)
c = Bool('c')
t = Then('simplify', 'propagate-values', 'ctx-simplify')
for e in Or(c, And(p, q)), Or(c, And(q, p)):
    print e, '->', t(e)
Regarding human-readability, this was never a goal when implementing any tactic. Please tell us if the tactic above is not good enough for your purposes.
I think it will be better to write a custom tactic because there are other tradeoffs when you essentially ask for canonicity. Z3 does not have any tactics that convert formulas into a canonical form. So if you want something that always produces the same answer for formulas that are ground equivalent, you will have to write a new normalizer that ensures this.
The ctx_solver_simplify_tactic furthermore makes some tradeoffs when simplifying formulas: it avoids simplifying the same sub-formula multiple times. If it did, it could increase the size of the resulting formula significantly (exponentially).
What is "monadic reflection"?
How can I use it in an F# program?
Is the meaning of the term "reflection" here the same as in .NET reflection?
Monadic reflection is essentially a grammar for describing layered monads or monad layering. In Haskell, describing also means constructing monads. It is a higher-level system, so the code looks functional, but the result is monad composition - meaning that without actual monads (which are non-functional) there's nothing real or runnable at the end of the day. Filinski did it originally to try to bring a kind of monad emulation to Scheme, but much more to explore theoretical aspects of monads.
Correction from the comment: F# has a monad equivalent named "Computation Expressions".
Filinski's paper at POPL 2010 - no code but a lot of theory, and of course his original paper from 1994 - Representing Monads. Plus one that has some code: Monad Transformers and Modular Interpreters (1995)
Oh and for people who like code - Filinski's code is on-line. I'll list just one - go one step up and see another 7 and readme. Also just a bit of F# code which claims to be inspired by Filinski
I read through the first Google hit, some slides:
http://www.cs.ioc.ee/mpc-amast06/msfp/filinski-slides.pdf
From this, it looks like
This is not the same as .NET reflection. The name seems to refer to turning data into code (and vice-versa, with reification).
The code uses standard pure-functional operations, so implementation should be easy in F#. (once you understand it)
I have no idea if this would be useful for implementing an immutable cache for a recursive function. It looks like you can define mutable operations and convert them to equivalent immutable operations automatically? I don't really understand the slides.
Oleg Kiselyov also has an article, but I didn't even try to read it. There's also a paper from Jonathan Sobel (et al). Hit number 5 is this question, so I stopped looking after that.
As the links in the previous answers describe, monadic reflection is a concept that bridges call/cc-style and Church-style programming. To describe these two concepts some more:
F# computation expressions (= monads) are created with a custom builder type.
Don Syme has a good blog post about this. If I write code to use a builder and use syntax like:
attempt { let! n1 = f inp1
          let! n2 = failIfBig inp2
          let sum = n1 + n2
          return sum }
the syntax is translated to a call/cc ("call with current continuation") style program:
attempt.Delay(fun () ->
    attempt.Bind(f inp1, (fun n1 ->
        attempt.Bind(failIfBig inp2, (fun n2 ->
            attempt.Let(n1 + n2, (fun sum ->
                attempt.Return(sum))))))))
The last parameter is the next-command-to-be-executed until the end.
(Scheme-style programming.)
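Here is a rough sketch of that same desugaring in Python (mine, not F#; bind, f and fail_if_big are made-up stand-ins for the builder's Bind and the f / failIfBig above): each Bind passes the rest of the computation in as a continuation, and None plays the role of failure:

def bind(value, next_step):
    # None models failure; otherwise feed the value to the continuation
    return None if value is None else next_step(value)

def f(x):
    return x + 1

def fail_if_big(x):
    return None if x > 100 else x

def attempt(inp1, inp2):
    return bind(f(inp1), lambda n1:
           bind(fail_if_big(inp2), lambda n2:
           n1 + n2))

print(attempt(1, 10))    # 12
print(attempt(1, 1000))  # None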
F# is based on OCaml.
F# has partial function application, but it is also strongly typed and has a value restriction.
OCaml, on the other hand, doesn't have the value restriction.
OCaml can be used in a Church style of programming, where combinator functions are used to construct any other functions (or programs):
// S K I combinators:
let I x = x
let K x y = x
let S x y z = x z (y z)
//examples:
let seven = S (K) (K) 7
let doubleI = I I //Won't work in F#
// y-combinator to make recursion
let Y = S (K (S I I)) (S (S (K S) K) (K (S I I)))
Church numerals are a way to represent numbers with pure functions.
let zero f x = x
//same as: let zero = fun f -> fun x -> x
let succ n f x = f (n f x)
let one = succ zero
let two = succ (succ zero)
let add n1 n2 f x = n1 f (n2 f x)
let multiply n1 n2 f = n2(n1(f))
let exp n1 n2 = n2(n1)
Here, zero is a function that takes two functions as parameters: f is applied zero times, so this represents the number zero, and x is used for function combination in other calculations (like add). The succ function is like plusOne, so one = zero |> plusOne.
To execute the functions, the last function will call the other functions with the last parameter (x) as null.
(Haskell-style programming.)
In F#, the value restriction makes this hard. Church numerals can be made with the C# 4.0 dynamic keyword (which uses .NET reflection inside). I think there are workarounds to do that in F# as well.
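To make the "supply real functions for f and x to run the numeral" point concrete, here is a small sketch in Python (mine; Python has no value restriction, and I use 0 rather than null as the starting value):

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

to_int = lambda n: n(lambda k: k + 1)(0)   # run the numeral on a real successor and start value

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
print(to_int(mul(two)(three)))  # 6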
I've read through all the documentation, and most of the source of LFE. All the presentations emphasize basic lisp in traditional lisp roles - General Problem Solving, Hello world and syntax emulating macros.
Does anyone know how LFE handles messaging primitives? To ask a more precise question: how would you express this Erlang?
A = 2,
Pid = spawn(fun()->
    receive
        B when is_integer(B) -> io:format("Added: ~p~n",[A+B]);
        _ -> nan
    end
end),
Pid ! 5.
And then, you know, it mumbles something about having added up some numbers and the answer being 7.
I'm not an LFE user, but there is a user guide in the source tree. From reading it I would guess it is something like this:
(let ((A 2))
  (let ((Pid (spawn (lambda ()
                      (receive
                        (B (when (is_integer B))
                          (: io format "Added: ~p~n" (list (+ A B))))
                        (_ nan))))))
    (! Pid 5)))
But I'm very likely to have made a mistake since I haven't even evaluated it in LFE.
Some questions of mine:
Is there a LET* form or is it behaving like one already?
Are guards called the more lispy is-integer and not is_integer as I wrote?
There is a serious lack of examples in the LFE release, all contributions are welcome.
Christian's suggestion is correct. My only comment is that there is no need to have capitalized variable names, it is not wrong, but not necessary.
The LFE let is a "real" let in which the variable bindings are visible first in the body. You can use patterns in let. There is also a let* form (macro actually) which binds sequentially.
No, I have so far kept all the Erlang core function names just as they are in vanilla Erlang. It is definitely more lispy to use - instead of _ in names, but what do you do with all the other function names and atoms in OTP? One suggestion is to automatically map - in LFE symbols to _ in the resultant atoms, and back again going the other way, of course. This would probably work, but would it lead to confusion?
I could then have a behaviour module looking like:
(defmodule foo
  (export (init 1) (handle-call 2) (handle-cast 2) (handle-info 2) ...)
  (behaviour gen-server))
(defun handle-call ...)
(defun handle-cast ...)
etc ...
But I am very ambivalent about it.