Sometimes I need to declare a function inside another function. For example, I made the following in Mathematica:
I.e., there is a function f, and when I compute f[Cos] it declares h as Cos[x]. Observe that I can't do the same by computing f[x], because Mathematica does not recognize x as a function, although I can circumvent that by using Mathematica's notation for a pure function: with f[#&] it works perfectly fine.
I noticed I can do the same with Maxima:
Although I don't know how to do the f[#&] trick in Maxima, i.e., tell Maxima that "x" is a function, just as I did in Mathematica. Is there a way to do the same in Maxima? Also, if I try to compile:
Without asking it to compute g(x), f won't work the way it did before. It's not clear to me why this happens.
:= always defines a global function, even if it's within another function or block. As it stands, when you call f twice, the second definition of g clobbers the first one -- you can't have two different g functions.
I think what you want is an unnamed function, lambda([x], ...). The tricky part is that the body of lambda is not evaluated, so when you write lambda([x], y(x)), the value of y doesn't appear in the result. There are a few different ways to achieve that; I'll describe one way using subst.
subst('y = something, lambda([x], y(x))) constructs an unnamed function and then pastes something into it, replacing y. The result is lambda([x], something(x)) which I think is what you want.
To put this in the framework you outlined,
(%i3) f(x, y) := block ([g: subst ('y = y, lambda ([x], y(x)))], [g, g(x)]);
(%o3) f(x, y) := block([g : subst('y = y, lambda([x], y(x)))], [g, g(x)])
(%i4) f(1/2, cos);
(%o4) [lambda([x], cos(x)), cos(1/2)]
Here is an example from "Programming Erlang" (2nd edition):
count_chars([], Result) ->
    Result;
count_chars([C|Substring], #{C := N}=Result) ->
    count_chars(Substring, Result#{C := N+1});
count_chars([C|Substring], Result) ->
    count_chars(Substring, Result#{C => 1}).
...which mercilessly yields the following error:
variable 'C' is unbound
So I am kind of stuck here; in my view, variable 'C' is bound, namely it must be the head of the string (just a linked list of chars, right?). Yet Erlang disagrees with me, breaking an example from the (probably outdated?) book I'm reading right now.
So what's wrong? What's the right way to pattern-match in this particular example?
P.S. A screenshot from the book shows slightly different syntax, which also doesn't work for me.
P.P.S. I am using the latest version of Erlang I've managed to download from the official site.
C must be bound before the expression #{C := N}=Result is evaluated.
You assume that C is bound because the first parameter, [C|Substring], was evaluated before #{C := N}=Result. In fact, that is not the case: there is no real assignment until the whole head evaluation succeeds and the function enters its body.
Writing count_chars([C|Substring], #{C := N}=Result) -> is exactly the same as count_chars([C1|Substring], #{C2 := N}=Result) when C1 =:= C2 ->
During head evaluation, each parameter is stored in a different element (a place in the heap) to check whether all the parameters match the head definition. In your case the compiler would have to store the value C in one element, say x1, and the key C in another element, say x2, and then verify that x1 and x2 are equal. The second operation is not possible without a deep modification of the compiler's behavior.
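As a loose analogy for readers more familiar with Python, the structural pattern matching added in Python 3.10 has a similar restriction: a bare name in a pattern always binds and never compares, so repeating a name such as case [c, c, *_] is rejected by the compiler, and an explicit guard plays the role of C1 =:= C2:

def first_two_equal(items):
    # 'case [c, c, *_]' would be a SyntaxError (multiple bindings of 'c');
    # the guard does the equality test, like 'when C1 =:= C2' in Erlang.
    match items:
        case [c, d, *_] if c == d:
            return True
        case _:
            return False

print(first_two_equal([7, 7, 1]))  # True
print(first_two_equal([7, 8, 1]))  # False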
Returning to Erlang: I wrote a small example to show how the head evaluation works, and compiled it with the 'S' option to see the result of the compilation:
test([K|_],K,M) ->     % to see how the test of parameter equality is done
    #{K := V} = M,     % to verify that this pattern works when K is bound
    V.
The assembly result is:
{function, test, 3, 33}.
{label,32}.
{line,[{location,"mod.erl",64}]}.
{func_info,{atom,mod},{atom,test},3}.
{label,33}.
{test,is_nonempty_list,{f,32},[{x,0}]}. % the whole list is assigned to x0
{get_hd,{x,0},{x,3}}. % the list's head is assigned to x3
{test,is_eq_exact,{f,32},[{x,1},{x,3}]}. % the second parameter K is assigned to x1, verify if x1 =:= x3
{test,is_map,{f,34},[{x,2}]}. % the third parameter M is assigned to x2; check that it is a map, and if not, go to label 34
{get_map_elements,{f,34},{x,2},{list,[{x,3},{x,0}]}}. % get the value associated with key x3 in map x2 and store it in x0; if there is no such key, go to label 34
return.
{label,34}. % throw a badmatch error
{line,[{location,"mod.erl",65}]}.
{badmatch,{x,2}}.
Now, to code your function you can simply write:
count_chars([], Result) ->
    Result;
count_chars([C|Substring], Result) ->
    N = maps:get(C, Result, 0) + 1,
    count_chars(Substring, Result#{C => N}).
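For comparison, here is the same counting idiom sketched in Python, where dict.get with a default value plays the role of maps:get(C, Result, 0):

def count_chars(s):
    result = {}
    for c in s:
        # like N = maps:get(C, Result, 0) + 1, Result#{C => N}
        result[c] = result.get(c, 0) + 1
    return result

print(count_chars("hello"))  # {'h': 1, 'e': 1, 'l': 2, 'o': 1}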
I have already done some searches, and this question is a duplicate of another post. I am posting this just for future reference.
Is it possible to define SUMPRODUCT without explicitly using variable names x, y?
Original Function:
let SUMPRODUCT x y = List.map2 (*) x y |> List.sum
SUMPRODUCT [1;4] [3;25] // Result: 103
I was hoping to do this:
// CONTAINS ERROR!
let SUMPRODUCT = (List.map2 (*)) >> List.sum
// CONTAINS ERROR!
But F# comes back with an error.
I have already found the solution on another post, but if you have any suggestions please let me know. Thank you.
Function composition only works when the input function takes a single argument. However, in your example, the result of List.map2 (*) is a function that takes two separate arguments and so it cannot be easily composed with List.sum using >>.
There are various ways to work around this if you really want, but I would not do that. I think >> is nice in a few rare cases where it fits nicely, but trying to over-use it leads to unreadable mess.
In some functional languages, the core library defines helpers for turning a function of two arguments into a function that takes a tuple, and vice versa.
let uncurry f (x, y) = f x y
let curry f x y = f (x, y)
You could use those two to define your sumProduct like this:
let sumProduct = curry ((uncurry (List.map2 (*))) >> List.sum)
Now it is point-free and understanding it is a fun mental challenge, but for all practical purposes, nobody will be able to understand the code and it is also longer than your original explicit version:
let sumProduct x y = List.map2 (*) x y |> List.sum
According to this post:
What am I missing: is function composition with multiple arguments possible?
Sometimes "pointed" style code is better than "pointfree" style code, and there is no good way to unify the type difference of the original function to what I hope to achieve.
With Maxima, it is possible to replace an unknown with a value using the at() statement.
But this uses a list for the substitution, and the solve() statement doesn't return one in the right form.
Code:
(%i1) g(x):=x^2+a;
(%o1) g(x) := x^2 + a
(%i2) g(x),solve(x=3),a=2;
(%o2) 11
I managed to compute a result using commas, but I can't create a function to do so:
(%i3) f(y) := g(x),solve(x=3),a=y;
(%o3) f(y) := g(x)
(%i4) f(2);
(%o4) x^2 + a
Is there a statement for which the commas act the way they act when typed directly on the input line?
Edit:
Actually, it is possible to use at() with solve() to create the function f(), as solve() just returns a list of solutions. So the code would be:
(%i5) f(y) := at(at(g(x), solve(x=3)[1]), a=y);
(%o5) f(y) := at(at(g(x), solve(x = 3)[1]), a = y)
(%i6) f(2);
(%o6) 11
Notice the [1] after solve(x=3) in (%i5). It selects the first item (solution) of the list.
I'm not sure what you are trying to accomplish -- probably it would be best if you would back up a couple of steps and describe the larger problem you are trying to solve here.
My best guess as to what you want is that you are trying to use the result of 'solve' to find a value to substitute into some expression. If so, you can achieve it like this: f(eq) := map (lambda ([e], subst (e, g(x))), solve (eq, x)); where eq is an equation to solve for x, with each solution then substituted into g(x). Note that 'solve' can return multiple solutions, which is why I use 'map' to apply the substitution to each one. Here is an example:
(%i7) f(eq) := map (lambda ([e], subst (e, g(x))), solve (eq, x));
(%o7) f(eq) := map(lambda([e], subst(e, g(x))), solve(eq, x))
(%i8) solve (x^2 + 2*x + 2);
(%o8) [x = - %i - 1, x = %i - 1]
(%i9) f (x^2 + 2*x + 2);
(%o9) [g(- %i - 1), g(%i - 1)]
Of course you can define 'g' in whatever way is appropriate.
The answer to your specific question (which I believe is not actually very relevant, but anyway) is to use 'block' to group together expressions to be evaluated, e.g. f(x) := block (...);
Perhaps I'm answering the wrong question. Maybe what you want is ev(foo, bar, baz) -- ev is the function that is actually called when you write foo, bar, baz at the console input prompt. So the function would be written f(y) := ev (g(x), solve(x=3), a=y).
However, bear in mind that there are several different kinds of functionality built into ev, so it is hard to understand (see the documentation for ev). Instead, consider using subst which is much simpler.
I have two formulas, F1 and F2. The two formulas share most variables, except for some 'temporary' (I call them 'free') variables with different names, which are there for various reasons.
Now I want to prove F1 == F2, but Z3's prove() method always takes all the variables into account. How can I tell prove() to ignore those 'free' variables and focus only on the list of variables I really care about?
I mean: with all the same inputs for the variables on my list, if, at output time, F1 and F2 agree on the values of all those variables (regardless of the values of the 'free' variables), then I consider them equivalent.
I believe this problem has been solved in other research before, but I don't know where to look for the information.
Thanks so much.
We can use existential quantifiers to capture 'temporary'/'free' variables.
For example, in the following example, the formulas F and G are not equivalent.
from z3 import *

x, y, z, w = Ints('x y z w')
F = And(x >= y, y >= z)
G = And(x > z - 1, w < z)
prove(F == G)
The script will produce the counterexample [z = 0, y = -1, x = 0, w = -1].
If we consider y and w as 'temporary' variables, we may try to prove:
prove(Exists([y], F) == Exists([w], G))
Now, Z3 will return proved. Z3 is essentially showing that for all x and z, there is a y that makes F true if and only if there is a w that makes G true.
Here is the full example as a self-contained script (a minimal sketch using the z3py Python API):
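from z3 import Ints, And, Exists, prove

x, y, z, w = Ints('x y z w')

F = And(x >= y, y >= z)    # y is the 'temporary' variable in F
G = And(x > z - 1, w < z)  # w is the 'temporary' variable in G

# Not equivalent as stated: Z3 reports a counterexample such as
# [z = 0, y = -1, x = 0, w = -1].
prove(F == G)

# Equivalent once the temporary variables are hidden by quantifiers.
prove(Exists([y], F) == Exists([w], G))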
Remark: when we add quantifiers, we are making the problem much harder for Z3. It may return unknown for problems containing quantifiers.
Apparently, I cannot comment, so I have to add another answer. The process of "disregarding" certain variables is typically called "projection" or "forgetting". I am not familiar with it in contexts going beyond propositional logic, but if direct existential quantification is possible (which Leo described), it is conceptually the simplest way to do it.
I was asked what's the relationship between partial function application and closures.
I would say there isn't any, unless I'm missing the point.
Let's say I'm writing in python and I have a very simple function MySum defined as follows:
MySum = lambda x, y : x + y;
Now I'm fixing one parameter to obtain a function with smaller arity which returns the same value that MySum would return if I called it with the same parameters (partial application):
MyPartialSum = lambda x : MySum(x, 0);
I could do the very same thing in C:
int MySum(int x, int y) { return x + y; }
int MyPartialSum(int x) { return MySum(x, 0); }
So, the dumb question is: what's the difference? Why would I need closures for partial application? These pieces of code are equivalent, so I don't see the connection between closures and partial application.
Partial function application is about fixing some arguments of a given function to yield another function with fewer arguments, like
sum = lambda x, y: x + y
inc = lambda x: sum(x, 1)
Notice that 'inc' is 'sum' partially applied, without capturing anything from the context (no closure involved, which is what you asked about).
But such hand-written (usually anonymous) functions are kind of tedious. One can instead use a function factory, which returns an inner function. The inner function can be parameterized by capturing some variable from its context, like this:
# sum = lambda x, y: x + y
def makePartialSumF(n):
    def partialSumF(x):
        return sum(x, n)
    return partialSumF

inc = makePartialSumF(1)
plusTwo = makePartialSumF(2)
Here the factory makePartialSumF is invoked twice. Each call results in a partialSumF function (capturing a different value as n). Using a closure makes the implementation of partial application convenient. So you can say partial application can be implemented by means of closures. Of course, closures can do many other things! (As a side note, Python does not have proper closures.)
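As an aside, Python's standard library already packages this pattern as functools.partial, so the factory doesn't have to be written by hand (a minimal sketch, with my_sum standing in for the sum lambda above):

from functools import partial

def my_sum(x, y):
    return x + y

inc = partial(my_sum, 1)      # fixes x = 1, so inc(y) == my_sum(1, y)
plusTwo = partial(my_sum, 2)  # fixes x = 2

print(inc(10))      # 11
print(plusTwo(10))  # 12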
Currying is about turning a function of N arguments into a unary function which returns a unary function... for example we have a function which takes three arguments and returns a value:
sum = lambda x, y, z: x + y + z
The curried version is
curriedSum = lambda x: lambda y: lambda z: x + y + z
I bet you wouldn't write Python code like that. IMO the motivation for currying is mostly theoretical interest (a framework for expressing computations using only unary functions: every function is unary!). The practical byproduct is that, in languages where functions are curried, some partial applications (those that 'fix' arguments from the left) are as trivial as supplying arguments to a curried function. (But not all partial applications are like that. Example: given f(x,y,z) = x+2*y+3*z, binding y to a constant yields a function of two variables.) So you can say currying is a technique which, in practice and as a byproduct, makes many useful partial function applications trivial, but that's not the point of currying.
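For instance, applying the curried version one argument at a time shows how left-to-right partial application falls out for free:

curriedSum = lambda x: lambda y: lambda z: x + y + z

print(curriedSum(1)(2)(3))    # 6: supply one argument at a time

plusThree = curriedSum(1)(2)  # a left-to-right partial application, for free
print(plusThree(10))          # 13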
Partial application is a technique whereby you take an existing function and a subset of its arguments, and produce a new function that accepts the remaining arguments.
In other words, if you have a function F(a, b), a function that partially applies a would look like B(fn, a), where F(a, b) = B(F, a)(b).
In your example you're simply creating new functions, rather than applying partial application to the existing one.
Here's an example in python:
def curry_first(fn, arg):
    def foo(*args):
        return fn(arg, *args)
    return foo
This creates a closure over the supplied function and argument. A new function is returned that calls the first function with a new argument signature. The closure is important - it allows fn to access arg. Now you can do this sort of thing:
add = lambda x, y : x + y;
add2 = curry_first(add, 2)
add2(4) # returns 6
I've usually heard this referred to as currying (though strictly speaking it is partial application; see the other answers for the distinction).
Simply, the result of a partial application is normally implemented as a closure.
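To make that concrete, here is a minimal Python sketch (partial_first is just an illustrative helper name): the function returned by a partial application carries the fixed argument in its closure, which CPython exposes through the __closure__ attribute:

def partial_first(f, x):
    def g(y):
        return f(x, y)  # g closes over both f and x
    return g

add = lambda a, b: a + b
inc = partial_first(add, 1)

print(inc(41))  # 42
# The captured values live in closure cells on the returned function:
print([cell.cell_contents for cell in inc.__closure__])  # the captured f and x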
Closures are not a required feature in a language. I'm experimenting with a homemade language, lambdatalk, in which lambdas don't create closures but do accept partial application. For instance, this is how the set [cons, car, cdr] could be defined in SCHEME:
(def cons (lambda (x y) (lambda (m) (m x y))))
(def car (lambda (z) (z (lambda (p q) p))))
(def cdr (lambda (z) (z (lambda (p q) q))))
(car (cons 12 34)) -> 12
(cdr (cons 12 34)) -> 34
and in lambdatalk:
{def cons {lambda {:x :y :m} {:m :x :y}}}
{def car {lambda {:z} {:z {lambda {:x :y} :x}}}}
{def cdr {lambda {:z} {:z {lambda {:x :y} :y}}}}
{car {cons 12 34}} -> 12
{cdr {cons 12 34}} -> 34
In SCHEME, the outer lambda saves x and y in a closure that the inner lambda can access when given m. In lambdatalk, the lambda saves :x and :y and returns a new lambda waiting for :m. So, even if closures (and lexical scope) are useful features, they are not a necessity. Without any free variables, outside of any lexical scope, functions are true black boxes without any side effects, in total independence, following a true functional paradigm. Don't you think so?
For me, using partialSum that way makes sure that you depend on only one function to sum the numbers (MySum), and that will make debugging a lot easier if things go wrong, because you won't have to worry about the logic of your code in two different places.
If in the future you decide to change the logic of MySum (say, for example, to make it return x+y+1), then you will not have to worry about MyPartialSum, because it calls MySum.
Even if it seems stupid, having code written that way just simplifies the handling of dependencies between functions. I am sure you will notice that later in your studies.