Partial application and closures

I was asked what the relationship is between partial function application and closures.
I would say there isn't any, unless I'm missing the point.
Let's say I'm writing in python and I have a very simple function MySum defined as follows:
MySum = lambda x, y: x + y
Now I fix one parameter to obtain a function of smaller arity, one that returns the same value MySum would return when called with the same arguments (partial application):
MyPartialSum = lambda x: MySum(x, 0)
I could do the very same thing in C:
int MySum(int x, int y) { return x + y; }
int MyPartialSum(int x) { return MySum(x, 0); }
So, the dumb question is: what's the difference? Why should I need closures for partial application? These snippets are equivalent, and I don't see the connection between closures and partial application.

Partial function application is about fixing some arguments of a given function to yield another function with fewer arguments, like
sum = lambda x, y: x + y
inc = lambda x: sum(x, 1)
Notice that inc is sum partially applied, without capturing anything from the context (no closure involved, which is the connection you asked about).
But such hand-written (usually anonymous) functions are kind of tedious. One can instead use a function factory, which returns an inner function. The inner function can be parameterized by capturing a variable from its context, like
# sum = lambda x, y: x + y
def makePartialSumF(n):
    def partialSumF(x):
        return sum(x, n)
    return partialSumF

inc = makePartialSumF(1)
plusTwo = makePartialSumF(2)
Here the factory makePartialSumF is invoked twice. Each call results in a partialSumF function (each capturing a different value as n). Using a closure makes the implementation of partial application convenient. So you can say partial application can be implemented by means of closures. Of course, closures can do many other things! (As a side note, before Python 3 introduced the nonlocal statement, an inner function could read but not rebind a captured variable.)
Currying is about turning a function of N arguments into a unary function which returns a unary function... for example we have a function which takes three arguments and returns a value:
sum = lambda x, y, z: x + y + z
The curried version is
curriedSum = lambda x: lambda y: lambda z: x + y + z
I bet you wouldn't write Python code like that. IMO the motivation for currying is mostly of theoretical interest. (A framework for expressing computations using only unary functions: every function is unary!) The practical byproduct is that, in languages where functions are curried, some partial applications (those that fix arguments from the left) are as trivial as supplying arguments to the curried function. (But not all partial applications are like that. Example: given f(x, y, z) = x + 2*y + 3*z, binding y to a constant yields a function of two variables.) So you can say currying is a technique which, in practice and as a byproduct, makes many useful partial applications trivial, but that's not the point of currying.
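For instance (a quick Python sketch of that last point; the names are mine):

f = lambda x, y, z: x + 2*y + 3*z

# Fixing arguments from the left falls out of currying for free:
curriedF = lambda x: lambda y: lambda z: x + 2*y + 3*z
gx = curriedF(1)               # x is fixed; y and z remain

# But fixing the middle argument still needs an explicit wrapper:
h = lambda x, z: f(x, 5, z)    # binds y = 5, leaving a function of x and z
print(h(1, 1))                 # 1 + 2*5 + 3*1 = 14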

Partial application is a technique whereby you take an existing function and a subset of its arguments, and produce a new function that accepts the remaining arguments.
In other words, if you have a function F(a, b), a helper that partially applies the first argument would look like B(fn, a), where F(a, b) = B(F, a)(b).
In your example you're simply creating new functions, rather than applying partial application to the existing one.
Here's an example in python:
def curry_first(fn, arg):
    def foo(*args):
        return fn(arg, *args)
    return foo
This creates a closure over the supplied function and argument. A new function is returned that calls the first function with the new argument signature. The closure is important: it gives the returned function access to fn and arg. Now you can do this sort of thing:
add = lambda x, y: x + y
add2 = curry_first(add, 2)
add2(4) # returns 6
I've usually heard this referred to as currying, though strictly speaking it's partial application.
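For what it's worth, Python ships this in its standard library as functools.partial, which fixes leading arguments without a hand-written wrapper:

from functools import partial

add = lambda x, y: x + y
add2 = partial(add, 2)   # fix the first argument to 2
print(add2(4))           # 6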

Simply put: the result of a partial application is normally implemented as a closure.
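A minimal Python sketch of that (partial_first is a hypothetical helper, not a library function): the fixed argument survives in the returned function's closure cells.

def partial_first(f, a):
    def g(*rest):
        return f(a, *rest)   # f and a are free variables here, captured by the closure
    return g

inc = partial_first(lambda x, y: x + y, 1)
print(inc(41))                                            # 42
print([cell.cell_contents for cell in inc.__closure__])   # the captured a and f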

Closures are not a required feature in a language. I'm experimenting with a homemade language, lambdatalk, in which lambdas don't create closures but do accept partial application. For instance, this is how the set [cons, car, cdr] could be defined in SCHEME:
(def cons (lambda (x y) (lambda (m) (m x y))))
(def car (lambda (z) (z (lambda (p q) p))))
(def cdr (lambda (z) (z (lambda (p q) q))))
(car (cons 12 34)) -> 12
(cdr (cons 12 34)) -> 34
and in lambdatalk:
{def cons {lambda {:x :y :m} {:m :x :y}}}
{def car {lambda {:z} {:z {lambda {:x :y} :x}}}}
{def cdr {lambda {:z} {:z {lambda {:x :y} :y}}}}
{car {cons 12 34}} -> 12
{cdr {cons 12 34}} -> 34
In SCHEME the outer lambda saves x and y in a closure that the inner lambda can access once it is given m. In lambdatalk the lambda saves :x and :y and returns a new lambda waiting for :m. So, even if closures (and lexical scope) are useful features, they are not a necessity. Without any free variables, outside any lexical scope, functions are true black boxes without any side effects, in total independence, following a true functional paradigm. Don't you think so?
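For comparison, the closure-based SCHEME version above transliterates directly into Python, where the returned lambda does rely on a closure to remember x and y (a sketch):

def cons(x, y):
    return lambda m: m(x, y)   # the closure stores x and y

def car(z):
    return z(lambda p, q: p)

def cdr(z):
    return z(lambda p, q: q)

print(car(cons(12, 34)))   # 12
print(cdr(cons(12, 34)))   # 34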

For me, using MyPartialSum that way ensures that you depend on only one function (MySum) to sum the numbers, and that will make debugging a lot easier if things go wrong, because you won't have the same logic living in two different parts of your code.
If in the future you decide to change the logic of MySum (say, make it return x + y + 1), then you will not have to worry about MyPartialSum, because it calls MySum.
Even if it seems stupid, having code written that way just simplifies the handling of dependencies between functions. I am sure you will notice that later in your studies.

Related

Constructing a function that declares some other function inside its body?

Sometimes I need to use a declaration of a function inside another function. For example, I made the following in Mathematica:
I.e., there is a function f, and when I compute f[Cos], it declares h as Cos[x]. Observe that I can't do the same by computing f[x], because it does not recognize x as a function, although I can circumvent that by using Mathematica's notation for pure functions: with f[#&] it works perfectly fine.
I noticed I can do the same with Maxima:
However, I don't know how to do the f[#&] trick in Maxima, i.e., tell Maxima that "x" is a function, just as I did in Mathematica. Is there a way to do the same in Maxima? Also, if I try to compile:
without asking it to compute g(x), f won't work the same way it worked before. It's not clear to me why this happens.
:= always defines a global function, even if it's within another function or block. As it stands, when you call f twice, the second definition of g clobbers the first one -- you can't have two different g functions.
I think what you want is an unnamed function, lambda([x], ...). The tricky part is that the body of lambda is not evaluated, so when you write lambda([x], y(x)), the value of y doesn't appear in the result. There are a few different ways to achieve that; I'll describe one way using subst.
subst('y = something, lambda([x], y(x))) constructs an unnamed function and then pastes something into it, replacing y. The result is lambda([x], something(x)) which I think is what you want.
To put this in the framework you outlined,
(%i3) f(x, y) := block ([g: subst ('y = y, lambda ([x], y(x)))], [g, g(x)]);
(%o3) f(x, y) := block([g : subst('y = y, lambda([x], y(x)))], [g, g(x)])
(%i4) f(1/2, cos);
(%o4) [lambda([x], cos(x)), cos(1/2)]

Shallow and Deep Binding

I was trying to understand the concept of dynamic/static scope with deep and shallow binding. Below is the code-
(define x 0)
(define y 0)
(define (f z) (display (+ z y)))
(define (g f) (let ((y 10)) (f x)))
(define (h) (let ((x 100)) (g f)))
(h)
I understand that under dynamic scoping the called function uses the bindings of its caller. So using dynamic binding I should get the answer 110. Using static scoping I would get the answer 0. But I got these results without considering shallow or deep binding. What are shallow and deep binding, and how would they change the result?
There's an example in these lecture notes (6. Names, Scopes, and Bindings) that explains the concepts, though I don't like their pseudo-code:
thres:integer

function older(p:person):boolean
    return p.age>thres

procedure show(p:person, c:function)
    thres:integer
    thres:=20
    if c(p)
        write(p)

procedure main(p)
    thres:=35
    show(p, older)
As best I can tell, this would be the following in Scheme (with some, I hope, more descriptive names):
(define cutoff 0) ; a

(define (above-cutoff? person)
  (> (age person) cutoff))

(define (display-if person predicate)
  (let ((cutoff 20)) ; b
    (if (predicate person)
        (display person))))

(define (main person)
  (let ((cutoff 35)) ; c
    (display-if person above-cutoff?)))
With lexical scoping the cutoff in above-cutoff? always refers to binding a.
With dynamic scoping as it's implemented in Common Lisp (and most actual languages with dynamic scoping, I think), the value of cutoff in above-cutoff?, when used as the predicate in display-if, will refer to binding b, since that's the most recent one on the stack in that case. This is shallow binding.
So the remaining option is deep binding, and it has the effect of having the value of cutoff within above-cutoff? refer to binding c.
Now let's take a look at your example:
(define x 0)
(define y 0)
(define (f z) (display (+ z y)))
(define (g f) (let ((y 10)) (f x)))
(define (h) (let ((x 100)) (g f)))
(h)
I'm going to add some newlines so that commenting is easier, and use a comment to mark each binding of each of the variables that gets bound more than once.
(define x 0) ; x0
(define y 0) ; y0

(define (f z) ; f0
  (display (+ z y)))

(define (g f) ; f1
  (let ((y 10)) ; y1
    (f x)))

(define (h)
  (let ((x 100)) ; x1
    (g f)))
Note the f0 and f1 there. These are important because, under deep binding, a function passed as an argument is bound together with the dynamic environment that is current at the moment it is passed. That matters here because f is passed as a parameter to g within h. So, let's cover all the cases:
With lexical scoping, the result is 0. I think this is the simplest case.
With dynamic scoping and shallow binding the answer is 110. (The value of z is 100, and the value of y is 10.) That's the answer that you already know how to get.
Finally, with dynamic scoping and deep binding, you get 100. Within h, you pass f as a parameter, and the current dynamic environment is captured to give us a function (lambda (z) (display (+ z 0))), which we'll call ff for the sake of convenience. Once you're in g, the call through the local variable f is actually a call to ff, which is called with the current value of x (from x1, i.e. 100), so you're printing (+ 100 0), which is 100.
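If it helps to experiment with this, here is a rough Python simulation of the three dynamic cases (a sketch under my own naming: DynEnv, deep_bind and friends are made up for illustration, not any standard API). Free variables of f are looked up in an explicit stack of dynamic bindings:

# A stack of (name, value) pairs stands in for the dynamic environment.
class DynEnv:
    def __init__(self):
        self.stack = []
    def push(self, name, value):
        self.stack.append((name, value))
    def pop(self):
        self.stack.pop()
    def lookup(self, name):
        for n, v in reversed(self.stack):  # most recent binding wins
            if n == name:
                return v
        raise NameError(name)

env = DynEnv()
env.push('x', 0)  # x0
env.push('y', 0)  # y0

def f(z):  # f's body reads y dynamically
    return z + env.lookup('y')

def g(fn):
    env.push('y', 10)  # y1
    try:
        return fn(env.lookup('x'))
    finally:
        env.pop()

def h(fn):
    env.push('x', 100)  # x1
    try:
        return g(fn)
    finally:
        env.pop()

# Shallow binding: f's free y resolves when f is finally called, seeing y1.
print(h(f))  # 110

# Deep binding: snapshot the dynamic environment where f is passed along.
def deep_bind(fn):
    snapshot = list(env.stack)
    def bound(*args):
        saved, env.stack = env.stack, snapshot
        try:
            return fn(*args)
        finally:
            env.stack = saved
    return bound

def h_deep():
    env.push('x', 100)  # x1
    try:
        return g(deep_bind(f))  # capture the environment where f is passed
    finally:
        env.pop()

print(h_deep())  # 100: z is still 100, but y resolves to y0 = 0 in the snapshot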
Comments
As I said, I think deep binding is sort of unusual, and I don't know whether many languages actually implement it. You could think of it as taking the function, checking whether it has any free variables, and then filling them in with values from the current dynamic environment. I don't think this actually gets used much in practice, and that's probably why you've received some comments asking about these terms. I do see that it could be useful in some circumstances, though. For instance, Common Lisp, which has both lexical and dynamic (called 'special') variables, makes many of its system configuration parameters dynamic. This means that you can do things like this to print in base 16 (since *print-radix* is a dynamic variable):
(let ((*print-radix* 16))
  (print value))
But if you wanted to return a function that would print things in base 16, you can't do:
(let ((*print-radix* 16))
  (lambda (value)
    (print value)))
because someone could take that function, let's call it print16, and do:
(let ((*print-radix* 10))
  (print16 value))
and the value would be printed in base 10. Deep binding would avoid that issue. That said, you can also avoid it with shallow binding; you just return
(lambda (value)
  (let ((*print-radix* 16))
    (print value)))
instead.
All that said, I think this discussion gets kind of strange when it talks about "passing functions as arguments". It's strange because in most languages, an expression is evaluated to produce a value. A variable is one kind of expression, and the result of evaluating a variable is the value of that variable. I emphasize "the" there, because that's how it is: a variable has a single value at any given time. This presentation of deep and shallow binding instead gives a variable a different value depending on where it is evaluated. That seems pretty strange. What I think would make much more sense is if the discussion were about what you get back when you evaluate a lambda expression. Then you could ask "what will the values of the free variables in the lambda expression be?" The answer, under shallow binding, is "whatever the dynamic values of those variables are when the function is called later." The answer, under deep binding, is "whatever the dynamic values of those variables are when the lambda expression is evaluated."
Then we wouldn't have to consider "functions being passed as arguments" at all. That whole framing is bizarre, because what happens when you pass a function as a parameter (capturing its dynamic environment) and whatever you're passing it to then passes it somewhere else? Is the dynamic environment supposed to get re-bound?
Related Questions and Answers
Dynamic Scoping - Deep Binding vs Shallow Binding
Shallow & Deep Binding - What would this program print?
Dynamic/Static scope with Deep/Shallow binding (exercises) (The answer to this question mentions that "Dynamic scope with deep binding is much trickier, since few widely-deployed languages support it.")

Is there an existing function to apply a function to each member of a tuple?

I want to apply a function to both members of a homogeneous tuple, resulting in another tuple. Following on from my previous question, I defined an operator that seemed to make sense to me:
let (||>>) (a,b) f = f a, f b
However, again, I feel like this might be a common use case, but I couldn't find it in the standard library. Does it exist?
I don't think there is any standard library function that does this.
My personal preference would be to avoid too many custom operators (they make code shorter, but they make it harder to read for people who have not seen the definitions before). Applying a function to both elements of a tuple is logically close to the map operation on lists (which applies a function to all elements of a list), so I would probably define Tuple2.map:
module Tuple2 =
    let map f (a, b) = (f a, f b)
Then you can use the function quite nicely with pipelining:
let nums = (1, 2)
nums |> Tuple2.map (fun x -> x + 1)
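For comparison, the same helper is equally short in Python (a sketch; tuple2_map is my own name):

def tuple2_map(f, pair):
    a, b = pair
    return (f(a), f(b))

print(tuple2_map(lambda x: x + 1, (1, 2)))   # (2, 3)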

Scheme and Shallow Binding

(define make
  (lambda (x)
    (lambda (y)
      (cons x (list y)))))

(let ((x 7)
      (p (make 4)))
  (cons x (p 0)))
I'm new to Scheme and functional programming, so I am a bit clunky at walking through programs, but I get that with deep binding this program would return (7 4 0). Makes sense. What would this program do using shallow binding? This may sound dumb, but is the p in the line with cons a redefinition? In that case, would we return (7 0)?
Basically, I understand the concept of deep vs. shallow binding, but I feel like I'm jumbling it up when looking at Scheme because I'm not crazy familiar with it.
Deep or shallow binding is an implementation technique and cannot be observed from inside the program. The difference for the programmer is between lexical and dynamic scoping rules, but both can be implemented with either of the two techniques (i.e., the one notion has got nothing to do with the other).
Deep or shallow refers to the choice of stack frame to hold a given outer scoped variable's binding. In deep binding there is a chain of frames to be accessed until the correct frame is entered holding the record for the variable; in shallow binding all bindings are present in one, shallow environment. See also "rerooting" (which only makes sense in the context of shallow binding implementation of lexical scoping).
To your specific question: under lexical scoping rules your code would return (7 4 0), and under dynamic scoping (7 7 0), because the call ((lambda (y) (list x y)) 0) is done inside the dynamic scope of the x=7 binding (as a side note, (cons x (list y)) is the same as (list x y)):
x = 7
p = (lambda (y) (list x y))    ; the x=4 from (make 4) goes unused
(cons 7 (p 0)) == (list 7 7 0) ; 'x' in this line and in the lambda body for p
                               ; both refer to the same binding that is
                               ; in effect, i.e. x=7
NB the same terms (deep/shallow binding) are used in other language(s) nowadays with a completely different meaning (there they do have something to do with the scoping rules), which I don't care to fully understand. This answer is given in the context of Scheme.
Reference: Henry G. Baker, Jr., "Shallow Binding in LISP 1.5", 1977.
See this Wikipedia article for a discussion of scoping (it mentions lexical/dynamic scoping and deep/shallow binding), bearing in mind that Scheme is lexically scoped. Will Ness's answer above provides additional information.
For now, let's see step-by-step what's happening in this snippet of code:
; a variable called x is defined and assigned the value 7
(let ((x 7)
      ; make is called and returns a procedure p; inside it, x has the value 4
      (p (make 4)))
  ; 7 is appended at the head of the result of calling p with y = 0
  (cons x (p 0)))

=> '(7 4 0)
Notice that in the second binding of the let a closure is created by the lambda returned by make, and the x inside it is bound to the value 4. This x has nothing to do with the outer x, because Scheme is lexically scoped.
The last line is not a redefinition; as mentioned in the previous paragraph, the x inside make is different from the x defined in the let expression.
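If it helps to see the same lexical capture outside Scheme, here is a rough Python analogue (a sketch; a list stands in for the cons cell):

def make(x):
    return lambda y: [x, y]   # the inner lambda closes over *this* x

x = 7
p = make(4)
print([x] + p(0))   # [7, 4, 0], mirroring (7 4 0)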

What is currying in F#? [duplicate]

I'm reading the free F# Wikibook here:
http://en.wikibooks.org/wiki/F_Sharp_Programming
There's a section explaining what partial functions are. It says that using F# you can partially apply a function, but I just can't understand what's going on. Consider the following code snippet that is used as an example:
#light
open System
let addTwoNumbers x y = x + y
let add5ToNumber = addTwoNumbers 5
Console.WriteLine(add5ToNumber 6)
The output is 11. But I'm not following. My function add5ToNumber doesn't ask for a parameter, so why can I invoke it and give it one?
I really like learning about F# these days, baby steps!
Basically, every function in F# has one parameter and returns one value. That value can be of type unit, designated by (), which is similar in concept to void in some other languages.
When you have a function that appears to have more than one parameter, F# treats it as several functions, each with one parameter, that are then "curried" to come up with the result you want. So, in your example, you have:
let addTwoNumbers x y = x + y
That is really two different functions. One takes x and creates a new function that will add the value of x to the value of the new function's parameter. The new function takes the parameter y and returns an integer result.
So, addTwoNumbers 5 6 would indeed return 11. But, addTwoNumbers 5 is also syntactically valid and would return a function that adds 5 to its parameter. That is why add5ToNumber 6 is valid.
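If it helps, here is a rough Python analogue of what F# is doing for you automatically (hand-curried; a sketch, not F# semantics):

def addTwoNumbers(x):
    def addX(y):          # addX closes over x
        return x + y
    return addX

add5ToNumber = addTwoNumbers(5)
print(add5ToNumber(6))    # 11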
Currying is something like this:
addTwoNumbers is a function that takes a number and returns a function that takes a number and returns a number.
So addTwoNumbers 5 is in fact a function that takes a number and returns a number, which is how currying works. Since you assign addTwoNumbers 5 to add5ToNumber, that makes add5ToNumber a function that takes a number and returns a number.
I don't know what type signatures look like in F#, but in Haskell the type signature of a function makes this clear:
addTwoNumbers :: (Num a) => a -> a -> a
On the other hand, if you wrote addTwoNumbers to take a two-tuple,
addTwoNumbers :: (Num a) => (a, a) -> a
then it would be a function that takes a two-tuple and returns a number, so add5ToNumber could not be created the way you have it.
Just to add to the other answers: underneath the hood, a closure is returned when you partially apply the function. This is roughly the class the compiler generates:
[Serializable]
internal class addToFive#12 : FSharpFunc<int, int>
{
    // Fields
    [DebuggerBrowsable(DebuggerBrowsableState.Never), CompilerGenerated, DebuggerNonUserCode]
    public int x;

    // Methods
    internal addToFive#12(int x)
    {
        this.x = x;
    }

    public override int Invoke(int y)
    {
        return Lexer.add(this.x, y);
    }
}
This is known as eta-expansion: in a functional language,
let f a = g a
is equivalent to
let f = g
This makes mathematical sense: if the two functions are equal for every input, then they're equal.
In your example, g is addTwoNumbers 5 and the code you wrote is entirely equivalent to:
let add5ToNumber y = addTwoNumbers 5 y
There are a few situations where they are different:
In some situations, the type system may not recognize y as universally quantified if you omit it.
If addTwoNumbers 5 (with one argument only) has a side effect (such as printing 5 to the console), then the eta-expanded version would print 5 every time it's called, while the eta-reduced version would print it once, when it's defined (see the sketch after this list). This may also have performance consequences, if addTwoNumbers 5 involved heavy calculations that need only be done once.
Eta-reduction is not very friendly to labels and optional arguments (but they don't exist in F#, so that's fine).
And, of course, unless your new function name is extremely readable, providing the names of the omitted arguments is always a great help for the reader.
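A minimal Python sketch of the side-effect point above (make_adder and its print are my stand-ins for a function with an effect or a heavy computation):

def make_adder(x):
    print("computing")           # stands in for a side effect / heavy work
    return lambda y: x + y

# Eta-reduced: the effect happens once, at definition time.
f_reduced = make_adder(5)        # prints "computing" now
f_reduced(1); f_reduced(2)       # no further printing

# Eta-expanded: the effect happens again on every call.
f_expanded = lambda y: make_adder(5)(y)
f_expanded(1); f_expanded(2)     # prints "computing" twice more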
addTwoNumbers accepts 2 arguments (x and y).
add5ToNumber is assigned the output of calling addTwoNumbers with only 1 argument, which results in another function that "saves" the first argument (x = 5) and accepts one other argument (y).
When you pass 6 into add5ToNumber, it's passing the saved x (5) and the given y (6) into addTwoNumbers, resulting in 11.
