Scheme and Shallow Binding

(define make (lambda (x) (lambda (y) (cons x (list y)))))

(let ((x 7)
      (p (make 4)))
  (cons x (p 0)))
I'm new to Scheme and functional programming, so I am a bit clunky with walking through programs, but I get that if I used deep binding this program would return (7 4 0). Makes sense. What would this program do using shallow binding? I get this may sound dumb, but is the p in the line with cons a redefinition? In that case, would we return (7 0)?
Basically, I understand the concept of deep v. shallow binding, but I feel like I'm jumbling it up when looking at Scheme because I'm not crazy familiar with it.

Deep or shallow binding is an implementation technique and cannot be observed from inside the program. The difference for the programmer is between lexical and dynamic scoping rules, but both can be implemented with either of the two techniques (i.e., one notion has got nothing to do with the other).
Deep or shallow refers to the choice of stack frame used to hold a given outer-scoped variable's binding. In deep binding there is a chain of frames to be traversed until the correct frame holding the record for the variable is reached; in shallow binding all bindings are present in one shallow environment. See also "rerooting" (which only makes sense in the context of a shallow-binding implementation of lexical scoping).
To your specific question: under lexical scoping rules your code would return (7 4 0), and under dynamic scoping, (7 7 0), because the call ((lambda (y) (list x y)) 0) is done inside the dynamic scope of the x=7 binding (as a side note, (cons x (list y)) is the same as (list x y)):
x = 7
p = (lambda (y) (list x y))    ; the x=4 in p=(make 4) is unused
(cons 7 (p 0)) == (list 7 7 0) ; 'x' here and in the lambda body of p
                               ; both refer to the same binding currently
                               ; in effect, i.e. x=7
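For intuition, here is the same structure transcribed into Python (a sketch; Python closures are lexical like Scheme's, so it reproduces the (7 4 0) result):
def make(x):
    def inner(y):
        return [x, y]   # x is captured lexically from make's call frame
    return inner

x = 7
p = make(4)
print([x] + p(0))       # [7, 4, 0]: p sees the x=4 it closed over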
NB the same terms (deep/shallow binding) are used in other language(s) now with a completely different meaning (they do have something to do with the scoping rules there), which I don't care to fully understand. This answer is given in the context of Scheme.
Reference: Shallow Binding in LISP 1.5 by Baker, Henry G. Jr., 1977.

See this Wikipedia article for a discussion of scoping (it mentions lexical/dynamic scoping and deep/shallow binding), bearing in mind that Scheme is lexically scoped. Will Ness' answer provides additional information.
For now, let's see step-by-step what's happening in this snippet of code:
; a variable called x is bound to the value 7
(let ((x 7)
      ; make is called and returns a procedure p; inside it, x has the value 4
      (p (make 4)))
  ; 7 is prepended to the result of calling p with y = 0
  (cons x (p 0)))
=> '(7 4 0)
Notice that in the second line a closure is created by the lambda returned by make, and the x inside it is bound to the value 4. This x has nothing to do with the outer x, because Scheme is lexically scoped.
The last line is not a redefinition; as mentioned in the previous paragraph, the x inside make is different from the x defined in the let expression.
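To make the dynamic-scoping outcome (7 7 0) concrete, here is a toy simulation in Python (an illustration only; the dyn dict plays the role of the single shallow environment that every lookup consults at call time):
dyn = {}   # one global "shallow" environment: name -> current binding

def make(x_val):
    # under dynamic scoping the 4 passed here is never consulted:
    # the body looks x up in the dynamic environment at call time
    def inner(y):
        return [dyn['x'], y]
    return inner

dyn['x'] = 7
p = make(4)
print([dyn['x']] + p(0))   # [7, 7, 0]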

Related

In Z3, I cannot understand the result of quantifier elimination of Exists y. Forall x. (x>=2) => ((y>1) /\ (y<=x))

In Z3-Py, I am performing quantifier elimination (QE) over the following formulae:
Exists y. Forall x. (x>=2) => ((y>1) /\ (y<=x))
Forall x. Exists y. (x>=2) => ((y>1) /\ (y<=x)),
where both x and y are Integers. I did QE in the following way:
x, y = Ints('x, y')
t = Tactic("qe")
negS0= (x >= 2)
s1 = (y > 1)
s2 = (y <= x)
#EA
ea = Goal()
ea.add(Exists([y],Implies(negS0, (ForAll([x], And(s1,s2))))))
ea_qe = t(ea)
print(ea_qe)
#AE
ae = Goal()
ae.add(ForAll([x],Implies(negS0, (Exists([y], And(s1,s2))))))
ae_qe = t(ae)
print(ae_qe)
The QE result for ae is as expected: [[]] (i.e., True). However, as for ea, QE outputs: [[Not(x, >= 2)]], which is a result that I do not know how to interpret, since (1) it has not really performed QE (note that the resulting formula still contains x, and indeed does not contain y, which is the outermost quantified variable), and (2) I do not understand the meaning of the comma in x, >=. I cannot get the model either:
phi = Exists([y],Implies(negS0, (ForAll([x], And(s1,s2)))))
s_def = Solver()
s_def.add(phi)
print(s_def.model())
This results in the error Z3Exception: model is not available.
I think the point is as follows: since the formula is an implication with antecedent (x>=2), there are two ways to satisfy it: by making the antecedent false or by satisfying the consequent. In the second case the model would be y=2. But in the first case the result of QE would be True, so we cannot get a single model (as happens with a universal model):
phi = ForAll([x],Implies(negS0, (Exists([y], And(s1,s2)))))
s_def = Solver()
s_def.add(phi)
print(s_def.model())
In any case, I cannot 'philosophically' understand the meaning of a QE of x where x is part of the (quantifier-eliminated) answer.
Any help?
There are two separate issues here; I'll address them separately.
The mysterious comma
This is a common gotcha. You declared:
x, y = Ints('x, y')
That is, you gave x the name "x," and y the name "y". Note the comma after the x in the name. This should be
x, y = Ints('x y')
I guess you can see the difference: when you do the first, the name you gave to the variable x is "x,"; i.e., the comma is part of the name. Simply skip the comma on the right-hand side, which isn't what you intended anyhow, and the results will start being more meaningful. To be fair, this is a common mistake, and I wish the z3 developers ignored commas and other punctuation in the string you give; but that's just not the case. They simply break at whitespace.
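A quick way to see this in z3py (a minimal sketch):
from z3 import Ints

a, b = Ints('x, y')   # wrong: a's name is literally "x," (comma included)
print(a)              # prints: x,

x, y = Ints('x y')    # right: names are "x" and "y"
print(x, y)           # prints: x y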
Quantification
This is another common gotcha. When you write:
ea.add(Exists([y],Implies(negS0, (ForAll([x], And(s1,s2))))))
the x that exists in negS0 is not quantified over by your ForAll, since it's not in the scope. Perhaps you meant:
ea.add(Exists([y],ForAll([x], Implies(negS0, And(s1,s2)))))
It's hard to guess what you were trying to do, but I hope the above makes it clear that the x wasn't quantified. Also, remember that a top-level exists quantifier in a formula is more or less irrelevant: it's equivalent to a top-level declaration for all practical purposes.
Once you make this fix, I think things will become more clear. If not, please ask further clarifying questions. (As a separate question on Stack-overflow; as edits to existing questions only complicate the matters.)
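Putting the two fixes together, a sketch of what the corrected QE call might look like (assuming the intended reading was Exists y. Forall x. ...):
from z3 import *

x, y = Ints('x y')   # note: no comma inside the names string
t = Tactic('qe')

ea = Goal()
ea.add(Exists([y], ForAll([x], Implies(x >= 2, And(y > 1, y <= x)))))
print(t(ea))   # y = 2 is a witness, so the expected output is [[]], i.e. True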

forall usage in SMT

How does the forall statement work in SMT? I could not find information about its usage. Can you please explain it simply? There is an example from
https://rise4fun.com/Z3/Po5.
(declare-fun f (Int) Int)
(declare-fun g (Int) Int)
(declare-const a Int)
(declare-const b Int)
(declare-const c Int)
(assert (forall ((x Int))
(! (= (f (g x)) x)
:pattern ((g x)))))
(assert (= (g a) c))
(assert (= (g b) c))
(assert (not (= a b)))
(check-sat)
For general information on quantifiers (and everything else SMTLib) see the official SMTLib document:
http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
Quoting from Section 3.6.1:
Exists and forall quantifiers. These binders correspond to the usual universal and existential quantifiers of first-order logic, except that each variable they quantify is also associated with a sort. Both binders have a non-empty list of variables, which abbreviates a sequential nesting of quantifiers. Specifically, a formula of the form

(forall ((x1 σ1) (x2 σ2) · · · (xn σn)) ϕ)    (3.1)

has the same semantics as the formula

(forall ((x1 σ1)) (forall ((x2 σ2)) (· · · (forall ((xn σn)) ϕ) · · · )))    (3.2)

Note that the variables in the list ((x1 σ1) (x2 σ2) · · · (xn σn)) of (3.1) are not required to be pairwise disjoint. However, because of the nested quantifier semantics, earlier occurrences of the same variable in the list are shadowed by the last occurrence, making those earlier occurrences useless. The same argument applies to the exists binder.
If you have a quantified assertion, the solver has to find a satisfying instance that makes that formula true. For a forall quantifier, this means it has to find a model in which the assertion is true for all assignments to the quantified variables of the relevant sorts. Likewise, for exists, the model needs to exhibit a particular value satisfying the assertion.
Top-level exists quantifiers are usually left out in SMTLib: by skolemization, declaring a top-level variable fills that need, and it also has the advantage of showing up automatically in models. (That is, any top-level declared variable is automatically existentially quantified.)
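As a small z3py illustration of that point (a sketch): both assertions below are satisfiable, but only the top-level declaration shows up in the model:
from z3 import *

x = Int('x')
s1 = Solver()
s1.add(Exists([x], x > 5))    # bound x: sat, but the model carries no binding for it
print(s1.check())

s2 = Solver()
s2.add(x > 5)                 # top-level x: implicitly existential (skolemized)
print(s2.check(), s2.model()) # sat, and the model assigns x (e.g. x = 6)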
Using forall will typically make the logic semi-decidable. So you're likely to get unknown as an answer if you use quantifiers, unless some heuristic can find a satisfying assignment. Similarly, while the syntax allows for nested quantifiers, most solvers will have a very hard time dealing with them. Patterns can help, but they remain hard to use to this day. To sum up: if you use quantifiers, SMT solvers are no longer decision procedures: they may or may not terminate.
If you are using the Python interface for z3, also take a look at: https://ericpony.github.io/z3py-tutorial/advanced-examples.htm. It does contain some quantification examples that can clarify things for you. (Even if you don't use the Python interface, I heartily recommend going over that page to see what the capabilities are. They more or less translate to SMTLib directly.)
Hope that gets you started. Stack Overflow works best if you ask specific questions, so feel free to ask for clarification on actual code as you need.
Semantically, a quantifier forall x: T . e(x) is equivalent to e(x_1) && e(x_2) && ..., where the x_i range over all the values of type T. If T has infinitely many (or a statically unknown number of) values, then it is intuitively clear that an SMT solver cannot simply turn a quantifier into the equivalent conjunction.
The classical approach in this case is patterns (also called triggers), pioneered by Simplify and available in Z3 and others. The idea is rather simple: users annotate a quantifier with a syntactic pattern that serves as a heuristic for when (and how) to instantiate the quantifier.
Here is an example (in pseudo-code):
assume forall x :: {foo(x)} foo(x) ==> false
Here, {foo(x)} is the pattern, indicating to the SMT solver that the quantifier should be instantiated whenever the solver gets a ground term foo(something). For example:
assume forall x :: {foo(x)} foo(x) ==> 0 < x
assume foo(y)
assert 0 < y
Since the ground term foo(y) matches the trigger foo(x) when the quantified variable x is instantiated with y, the solver will instantiate the quantifier accordingly and learn 0 < y.
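The same trigger can be expressed through the patterns argument in z3py (a sketch; foo is a hypothetical uninterpreted predicate here):
from z3 import *

x, y = Ints('x y')
foo = Function('foo', IntSort(), BoolSort())

s = Solver()
# quantifier with an explicit pattern: instantiate on ground terms foo(...)
s.add(ForAll([x], Implies(foo(x), 0 < x), patterns=[foo(x)]))
s.add(foo(y))
s.add(Not(0 < y))
print(s.check())   # unsat: instantiating with y yields 0 < y, contradicting Not(0 < y)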
Patterns and quantifier triggering are difficult, though. Consider this example:
assume forall x :: {foo(x)} (foo(x) || bar(x)) ==> 0 < y
assume bar(y)
assert 0 < y
Here, the quantifier won't be instantiated because the ground term bar(y) does not match the chosen pattern.
The previous example shows that patterns can cause incompleteness. However, they can also cause termination problems. Consider this example:
assume forall x :: {f(x)} !f(x) || f(f(x))
assert f(y)
The pattern now admits a matching loop, which can cause nontermination. Ground term f(y) allows to instantiate the quantifier, which yields the ground term f(f(y)). Unfortunately, f(f(y)) matches the trigger (instantiate x with f(y)), which yields f(f(f(y))) ...
Patterns are dreaded by many and indeed tricky to get right. On the other hand, working out a triggering strategy (given a set of quantifiers, find patterns that allow the right instantiations, but ideally not more than those) ultimately "only" requires logical reasoning and discipline.
Good starting points are:
* https://rise4fun.com/Z3/tutorial/, section "Quantifiers"
* http://moskal.me/smt/e-matching.pdf
* https://dl.acm.org/citation.cfm?id=1670416
* http://viper.ethz.ch/tutorial/, section "Quantifiers"
Z3 also offers Model-Based Quantifier Instantiation (MBQI), an approach to quantifiers that doesn't use patterns. As far as I know, it is unfortunately also much less well documented, but the Z3 tutorial has a short section on MBQI as well.
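If you want to experiment, MBQI can be toggled through solver parameters in z3py (a sketch; whether a quantified goal is actually solved still depends on the problem):
from z3 import *

x = Int('x')
f = Function('f', IntSort(), IntSort())

s = Solver()
s.set(mbqi=True)              # enable model-based quantifier instantiation
s.add(ForAll([x], f(x) >= x))
print(s.check())              # sat here: MBQI can find e.g. f = identity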

Shallow and Deep Binding

I was trying to understand the concept of dynamic/static scope with deep and shallow binding. Below is the code-
(define x 0)
(define y 0)
(define (f z) (display (+ z y)))
(define (g f) (let ((y 10)) (f x)))
(define (h) (let ((x 100)) (g f)))
(h)
I understand that with dynamic scoping the binding from the calling function is used by the called function. So using dynamic binding I should get the answer 110. Using static scoping I would get the answer 0. But I got these results without considering shallow or deep binding. What are shallow and deep binding, and how would they change the result?
There's an example in these lecture notes (6. Names, Scopes, and Bindings) that explains the concepts, though I don't like their pseudo-code:
thres:integer

function older(p:person):boolean
    return p.age>thres

procedure show(p:person, c:function)
    thres:integer
    thres:=20
    if c(p)
        write(p)

procedure main(p)
    thres:=35
    show(p, older)
As best I can tell, this would be the following in Scheme (with some, I hope, more descriptive names):
(define cutoff 0) ; a

(define (above-cutoff? person)
  (> (age person) cutoff))

(define (display-if person predicate)
  (let ((cutoff 20)) ; b
    (if (predicate person)
        (display person))))

(define (main person)
  (let ((cutoff 35)) ; c
    (display-if person above-cutoff?)))
With lexical scoping the cutoff in above-cutoff? always refers to binding a.
With dynamic scoping as it's implemented in Common Lisp (and most actual languages with dynamic scoping, I think), the value of cutoff in above-cutoff?, when used as the predicate in display-if, will refer to binding b, since that's the most recent one on the stack at that point. This is shallow binding.
So the remaining option is deep binding, and it has the effect of having the value of cutoff within above-cutoff? refer to binding c.
Now let's take a look at your example:
(define x 0)
(define y 0)
(define (f z) (display (+ z y)))
(define (g f) (let ((y 10)) (f x)))
(define (h) (let ((x 100)) (g f)))
(h)
I'm going to add some newlines so that commenting is easier, and use a comment to mark each binding of each of the variables that gets bound more than once.
(define x 0) ; x0
(define y 0) ; y0

(define (f z) ; f0
  (display (+ z y)))

(define (g f) ; f1
  (let ((y 10)) ; y1
    (f x)))

(define (h)
  (let ((x 100)) ; x1
    (g f)))
Note the f0 and f1 there. These are important, because under deep binding, when a function is passed as an argument, its current dynamic environment is captured along with it. That matters here because f is passed as a parameter to g within h. So, let's cover all the cases:
With lexical scoping, the result is 0. I think this is the simplest case.
With dynamic scoping and shallow binding the answer is 110. (The value of z is 100, and the value of y is 10.) That's the answer that you already know how to get.
Finally, with dynamic scoping and deep binding, you get 100. Within h, you pass f as a parameter, and the current environment is captured to give us a function (lambda (z) (display (+ z 0))), which we'll call ff for the sake of convenience. Once you're in g, the call to the local variable f is actually a call to ff, which is called with the current value of x (from x1, 100), so you're printing (+ 100 0), which is 100.
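To make the three cases tangible, here is a toy Python simulation (an illustration only: stack is an explicit dynamic environment, and the deep flag decides whether f captures that environment at the moment it is passed):
stack = [{'x': 0, 'y': 0}]   # global frame: x0, y0

def lookup(name, frames):
    # dynamic lookup: the most recently pushed binding wins
    for frame in reversed(frames):
        if name in frame:
            return frame[name]
    raise NameError(name)

def f(z, env):
    return z + lookup('y', env)

def g(fn):
    stack.append({'y': 10})                # y1
    result = fn(lookup('x', stack))
    stack.pop()
    return result

def h(deep):
    stack.append({'x': 100})               # x1
    env = list(stack) if deep else stack   # deep: snapshot now; shallow: live stack
    result = g(lambda z: f(z, env))
    stack.pop()
    return result

print(h(deep=False))   # 110: shallow -- y is looked up on the live stack, finding y1=10
print(h(deep=True))    # 100: deep -- y is found in the snapshot taken in h, i.e. y0=0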
Comments
As I said, I think deep binding is sort of unusual, and I don't know whether many languages actually implement it. You could think of it as taking the function, checking whether it has any free variables, and then filling them in with values from the current dynamic environment. I don't think this actually gets used much in practice, and that's probably why you've received some comments asking about these terms. I do see that it could be useful in some circumstances, though. For instance, in Common Lisp, which has both lexical and dynamic (called 'special') variables, many of the system configuration parameters are dynamic. This means that you can do things like this to print in base 16 (since *print-base* is a dynamic variable):
(let ((*print-base* 16))
  (print value))
But if you wanted to return a function that would print things in base 16, you can't do:
(let ((*print-base* 16))
  (lambda (value)
    (print value)))
because someone could take that function, let's call it print16, and do:
(let ((*print-base* 10))
  (print16 value))
and the value would be printed in base 10. Deep binding would avoid that issue. That said, you can also avoid it with shallow binding; you just return
(lambda (value)
  (let ((*print-base* 16))
    (print value)))
instead.
All that said, I think this discussion gets kind of strange when it talks about "passing functions as arguments". It's strange because in most languages an expression is evaluated to produce a value. A variable is one type of expression, and the result of evaluating a variable is the value of that variable. I emphasize "the" there, because that's how it is: a variable has a single value at any given time. This presentation of deep and shallow binding makes it seem that a variable has a different value depending on where it is evaluated. That seems pretty strange.

What I think would make much more sense is if the discussion were about what you get back when you evaluate a lambda expression. Then you could ask: "what will the values of the free variables in the lambda expression be?" The answer, under shallow binding, is "whatever the dynamic values of those variables are when the function is called later." The answer, under deep binding, is "whatever the dynamic values of those variables are when the lambda expression is evaluated."
Then we wouldn't have to consider "functions being passed as arguments" at all. That whole framing is bizarre, because what happens when you pass a function as a parameter (capturing its dynamic environment) and whatever you pass it to then passes it somewhere else? Is the dynamic environment supposed to get re-bound?
Related Questions and Answers
Dynamic Scoping - Deep Binding vs Shallow Binding
Shallow & Deep Binding - What would this program print?
Dynamic/Static scope with Deep/Shallow binding (exercises) (The answer to this question mentions that "Dynamic scope with deep binding is much trickier, since few widely-deployed languages support it.")

Partial application and closures

I was asked what's the relationship between partial function application and closures.
I would say there isn't any, unless I'm missing the point.
Let's say I'm writing in python and I have a very simple function MySum defined as follows:
MySum = lambda x, y : x + y;
Now I'm fixing one parameter to obtain a function with smaller arity which returns the same value that MySum would return if I called it with the same parameters (partial application):
MyPartialSum = lambda x : MySum(x, 0);
I could do the very same thing in C:
int MySum(int x, int y) { return x + y; }
int MyPartialSum(int x) { return MySum(x, 0); }
So, the dumb question is: what's the difference? Why would I need closures for partial application? These pieces of code are equivalent; I don't see the connection between closures and partial application.
Partial function application is about fixing some arguments of a given function to yield another function with fewer arguments, like
sum = lambda x, y: x + y
inc = lambda x: sum(x, 1)
Notice that 'inc' is 'sum' partially applied, without capturing anything from the context (i.e., without the closure you mentioned).
But, such hand-written (usually anonymous) functions are kind of tedious. One can use a function factory, which returns an inner function. The inner function can be parameterized by capturing some variable from its context, like
# sum = lambda x, y: x + y
def makePartialSumF(n):
    def partialSumF(x):
        return sum(x, n)
    return partialSumF

inc = makePartialSumF(1)
plusTwo = makePartialSumF(2)
Here the factory makePartialSumF is invoked twice. Each call results in a partialSumF function (capturing a different value as n). Using a closure makes the implementation of partial application convenient, so you can say partial application can be implemented by means of closures. Of course, closures can do many other things! (As a side note, Python does not have proper closures.)
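For completeness: Python's standard library packages this factory pattern as functools.partial, which builds the same kind of argument-capturing callable (a small sketch; my_sum mirrors the sum above):
from functools import partial

def my_sum(x, y):
    return x + y

inc = partial(my_sum, 1)      # fixes x = 1; the bound argument is stored on the partial object
plusTwo = partial(my_sum, 2)
print(inc(41), plusTwo(40))   # 42 42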
Currying is about turning a function of N arguments into a unary function which returns a unary function... for example we have a function which takes three arguments and returns a value:
sum = lambda x, y, z: x + y + z
The curried version is
curriedSum = lambda x: lambda y: lambda z: x + y + z
I bet you wouldn't write Python code like that. IMO the motivation for currying is mostly of theoretical interest (a framework for expressing computations using only unary functions: every function is unary!). The practical byproduct is that, in languages where functions are curried, some partial applications (when you 'fix' arguments from the left) are as trivial as supplying arguments to the curried function. (But not all partial applications are like that. Example: given f(x,y,z) = x+2*y+3*z, binding y to a constant yields a function of two variables.) So you can say currying is a technique which, in practice and as a byproduct, makes many useful partial applications trivial, but that's not the point of currying.
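For the 'bind the middle argument' case just mentioned, keyword arguments make the partial application easy in Python (a sketch using the f from the example):
from functools import partial

def f(x, y, z):
    return x + 2*y + 3*z

g = partial(f, y=5)   # fix the middle argument; x and z remain
print(g(1, z=2))      # 1 + 2*5 + 3*2 = 17 (z must now be passed by keyword)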
Partial application is a technique whereby you take an existing function and a subset of its arguments, and produce a new function that accepts the remaining arguments.
In other words, if you have function F(a, b), a function that applies partial application of a would look like B(fn, a) where F(a, b) = B(F, a)(b).
In your example you're simply creating new functions, rather than applying partial application to the existing one.
Here's an example in python:
def curry_first(fn, arg):
    def foo(*args):
        return fn(arg, *args)
    return foo
This creates a closure over the supplied function and argument. A new function is returned that calls the first function with the new argument signature. The closure is important: it allows fn access to arg. Now you can do this sort of thing:
add = lambda x, y : x + y;
add2 = curry_first(add, 2)
add2(4) # returns 6
I've usually heard this referred to as currying.
Simply, the result of a partial application is normally implemented as a closure.
Closures are not required functionality in a language. I'm experimenting with a homemade language, lambdatalk, in which lambdas don't create closures but accept partial application. For instance, this is how the set [cons, car, cdr] could be defined in SCHEME:
(def cons (lambda (x y) (lambda (m) (m x y))))
(def car (lambda (z) (z (lambda (p q) p))))
(def cdr (lambda (z) (z (lambda (p q) q))))
(car (cons 12 34)) -> 12
(cdr (cons 12 34)) -> 34
and in lambdatalk:
{def cons {lambda {:x :y :m} {:m :x :y}}}
{def car {lambda {:z} {:z {lambda {:x :y} :x}}}}
{def cdr {lambda {:z} {:z {lambda {:x :y} :y}}}}
{car {cons 12 34}} -> 12
{cdr {cons 12 34}} -> 34
In SCHEME the outer lambda saves x and y in a closure that the inner lambda can access given m. In lambdatalk the lambda saves :x and :y and returns a new lambda waiting for :m. So even if closures (and lexical scope) are useful functionality, they are not a necessity. Without any free variables, outside of any lexical scope, functions are true black boxes without any side effects, in total independence, following a true functional paradigm. Don't you think so?
For me, using MyPartialSum that way makes sure that you only depend on one function to sum the numbers (MySum), and that will make debugging a lot easier if things go wrong, because you would not have to worry about the logic of your code in two different parts of your program.
If in the future you decide to change the logic of MySum (say, for example, to make it return x+y+1), then you will not have to worry about MyPartialSum, because it calls MySum.
Even if it seems stupid, having code written that way just simplifies the handling of dependencies between functions. I am sure you will notice that later in your studies.

How to create a lazy-seq generating, anonymous recursive function in Clojure?

Edit: I discovered a partial answer to my own question in the process of writing this, but I think it can easily be improved upon so I will post it anyway. Maybe there's a better solution out there?
I am looking for an easy way to define recursive functions in a let form without resorting to letfn. This is probably an unreasonable request, but the reason I am looking for this technique is that I have a mix of data and recursive functions that depend on each other in a way that requires a lot of nested let and letfn statements.
I wanted to write the recursive functions that generate lazy sequences like this (using the Fibonacci sequence as an example):
(let [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
(take 10 fibs))
But it seems that in Clojure fibs cannot use its own symbol during binding. The obvious way around it is using letfn:
(letfn [(fibo [] (lazy-cat [0 1] (map + (fibo) (rest (fibo)))))]
(take 10 (fibo)))
But as I said earlier this leads to a lot of cumbersome nesting and alternating let and letfn.
To do this without letfn and using just let, I started by writing something that uses what I think is the U-combinator (just heard of the concept today):
(let [fibs (fn [fi] (lazy-cat [0 1] (map + (fi fi) (rest (fi fi)))))]
(take 10 (fibs fibs)))
But how do I get rid of the redundancy of (fi fi)?
It was at this point that I discovered the answer to my own question, after an hour of struggling and incrementally adding bits to the combinator Q.
(let [Q (fn [r] ((fn [f] (f f)) (fn [y] (r (fn [] (y y))))))
fibs (Q (fn [fi] (lazy-cat [0 1] (map + (fi) (rest (fi))))))]
(take 10 fibs))
What is this Q combinator called that I am using to define a recursive sequence? It looks like the Y combinator with no argument x. Is it the same?
(defn Y [r]
((fn [f] (f f))
(fn [y] (r (fn [x] ((y y) x))))))
Is there another function in clojure.core or clojure.contrib that provides the functionality of Y or Q? I can't imagine what I just did was idiomatic...
letrec
I have written a letrec macro for Clojure recently, here's a Gist of it. It acts like Scheme's letrec (if you happen to know that), meaning that it's a cross between let and letfn: you can bind a set of names to mutually recursive values, without the need for those values to be functions (lazy sequences are ok too), as long as it is possible to evaluate the head of each item without referring to the others (that's Haskell -- or perhaps type-theoretic -- parlance; "head" here might stand e.g. for the lazy sequence object itself, with -- crucially! -- no forcing involved).
You can use it to write things like
(letrec [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
fibs)
which is normally only possible at top level. See the Gist for more examples.
As pointed out in the question text, the above could be replaced with
(letfn [(fibs [] (lazy-cat [0 1] (map + (fibs) (rest (fibs)))))]
(fibs))
for the same result in exponential time; the letrec version has linear complexity (as does a top-level (def fibs (lazy-cat [0 1] (map + fibs (rest fibs)))) form).
iterate
Self-recursive seqs can often be constructed with iterate -- namely when a fixed range of look-behind suffices to compute any given element. See clojure.contrib.lazy-seqs for an example of how to compute fibs with iterate.
clojure.contrib.seq
c.c.seq provides an interesting function called rec-seq, enabling things like
(take 10 (cseq/rec-seq fibs (map + fibs (rest fibs))))
It has the limitation of only allowing one to construct a single self-recursive sequence, but it might be possible to lift some implementation ideas from its source, enabling more diverse scenarios. If a single self-recursive sequence not defined at top level is what you're after, this has to be the idiomatic solution.
combinators
As for combinators such as those displayed in the question text, it is important to note that they are hampered by the lack of TCO (tail call optimisation) on the JVM (and thus in Clojure, which elects to use the JVM's calling conventions directly for top performance).
top level
There's also the option of putting the mutually recursive "things" at top level, possibly in their own namespace. This doesn't work so great if those "things" need to be parameterised somehow, but namespaces can be created dynamically if need be (see clojure.contrib.with-ns for implementation ideas).
final comments
I'll readily admit that the letrec thing is far from idiomatic Clojure and I'd avoid using it in production code if anything else would do (and since there's always the top level option...). However, it is (IMO!) nice to play with and it appears to work well enough. I'm personally interested in finding out how much can be accomplished without letrec and to what degree a letrec macro makes things easier / cleaner... I haven't formed an opinion on that yet. So, here it is. Once again, for the single self-recursive seq case, iterate or contrib might be the best way to go.
fn takes an optional name argument, with that name bound to the function within its body. Using this feature, you could write fibs as:
(def fibs ((fn generator [a b] (lazy-seq (cons a (generator b (+ a b))))) 0 1))
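For comparison, the same self-naming trick in Python, where a named generator function can recurse on itself much like Clojure's named fn (a sketch):
from itertools import islice

def generator(a, b):
    yield a
    yield from generator(b, a + b)   # the name is visible inside the body

fibs = generator(0, 1)
print(list(islice(fibs, 10)))        # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]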
