I'd like to pre-store a bunch of function calls in a data structure and later evaluate/execute them from within another function.
This works as planned for functions defined at namespace level with defn (even though the function definition comes after my creation of the data structure), but it will not work for functions defined with (let [name (fn ...)]) or letfn inside another function.
Here's my small self-contained example:
(def todoA '(funcA))
(def todoB '(funcB))
(def todoC '(funcC))
(def todoD '(funcD)) ; unused
(defn funcA [] (println "hello funcA!"))
(declare funcB funcC)
(defn runit []
  (let [funcB (fn [] (println "hello funcB"))]
    (letfn [(funcC [] (println "hello funcC!"))]
      (funcA)      ; OK
      (eval todoA) ; OK
      (funcB)      ; OK
      (eval todoB) ; "Unable to resolve symbol: funcB in this context" at line 2
      (funcC)      ; OK
      (eval todoC) ; "Unable to resolve symbol: funcC in this context" at line 3
      )))
In case you're wondering about my test setup: to see the result of those 6 statements, I comment/uncomment specific OK/failing lines and then call (runit) from the REPL.
Is there a simple fix I could undertake to get eval'd quoted calls to functions to work for functions defined inside another function?
Update:
This (based on danlei's suggestion) does work. Let's see if I can get this method working in "real life!"
(def todoB '(funcB))
(declare funcB)
(defn runit []
  (binding [funcB (fn [] (println "hello funcB"))]
    (funcB)
    (eval todoB)))
Update:
This code is going into my solution for a Constraint Satisfaction Problem - I want to find out who owns the zebra! I'm fairly new to Clojure and especially functional programming, and this has made the exercise quite challenging. I'm falling into a lot of pits but I'm OK with that as it's part of the learning experience.
I used to specify the constraints as a bunch of simple vectors, like this:
[:con-eq :spain :dog]
[:abs-pos :norway 1]
[:con-eq :kools :yellow]
[:next-to :chesterfields :fox]
where the first of each vector would specify the kind of constraint. But that led me to an awkward implementation of a dispatch mechanism for those rules, so I decided to encode them as (quoted) function calls instead:
'(coloc :japan :parliament) ; 10
'(coloc :coffee :green) ; 12
'(next-to :chesterfield :fox) ; 5
so I can dispatch the constraining rule with a simple eval. This seems a lot more elegant and "lisp-y." However, each of these functions needs to access my domain data (named vars), and this data keeps changing as the program runs. I didn't want to blemish my rules by introducing an extra argument, so I wanted vars to be available to the eval'd functions via dynamic scoping.
I've now learned that dynamic scoping can be done using binding, but it also needs a declare.
Do you mean something like this?
(def foo '(bar))
(declare bar)
(binding [bar (fn [] (println "hello bar"))]
  (eval foo))
If yes, your problem reduces to this:
(let [foo 1]
  (eval 'foo))
This won't work, because eval does not evaluate in the lexical environment. You can get around that using vars:
(declare foo)
(binding [foo 1]
  (eval 'foo))
As far as that is concerned, Clojure seems to have similar semantics to CL, cf. the CLHS:
Evaluates form in the current dynamic environment and the null lexical environment.
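(One caveat for later Clojure versions: since Clojure 1.3, binding only works on vars explicitly marked dynamic, so the declare above would become:)
(declare ^:dynamic foo)
(binding [foo 1]
  (eval 'foo))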
I think you're solving the wrong problem. In functional languages, functions are values and can be stored anywhere any other value can be stored, e.g. in a map. You shouldn't be manipulating namespaces or evaling anything - this isn't Perl.
Try something like this, and use assoc to change the map locally:
user=> (def fnmap {:funcA (fn [x] (inc x)), :funcB (fn [x] (* x 2))})
#'user/fnmap
user=> ((:funcA fnmap) 10)
11
user=> ((:funcB fnmap) 10)
20
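Applied to the constraints from the question, dispatching a rule is then just a map lookup plus apply. A rough sketch (run-constraints and its state argument are invented for illustration):
(def constraints [[:coloc :japan :parliament]
                  [:next-to :chesterfield :fox]])

(defn run-constraints [fnmap state]
  ;; each rule names a key in fnmap -- no eval, no namespace tricks
  (every? (fn [[k & args]] (apply (fnmap k) state args)) constraints))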
Related
I wanted to add some more symbols to my Erlang library. For example, in a matrix library, A ** B could mean matrix multiplication.
I could not find any help on how to do this.
Also, does anyone know how to apply functions like + - or % using erlang:apply()?
You can use any atom as a function name. If the atom contains special symbols, you have to use its quoted form '**':
-module(operator).
-export(['**'/2]).
'**'(A, B) ->
    {'**', A, B}.
There is no syntactic sugar for using such operators, though. All the default operators are functions defined in the erlang module and can be accessed like this:
1> operator:'**'(a, b).
{'**',a,b}
2> F0 = fun operator:'**'/2.
#Fun<operator.**.2>
3> F0(c, d).
{'**',c,d}
4> F1 = fun erlang:'+'/2.
#Fun<erlang.+.2>
5> F1(1, 2).
3
6> F2 = fun erlang:'rem'/2.
#Fun<erlang.rem.2>
7> F2(5, 3).
2
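The erlang:apply part of the question works with the same quoted atoms (continuing the shell session above):
8> apply(erlang, '+', [1, 2]).
3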
If you really, really want this, you can use a parse transform, but your code has to be syntactically correct before the transform runs. So, for example, your parse transform could rewrite A *_* B into exp(A, B), because A *_* B will be parsed as something like (A * _) * B. However, you will not be able to transform A ** B, because the parser rejects it before your transform ever sees it.
Also, using parse transforms for something so frivolous is a really bad idea.
I was asked what's the relationship between partial function application and closures.
I would say there isn't any, unless I'm missing the point.
Let's say I'm writing in Python and I have a very simple function MySum defined as follows:
MySum = lambda x, y : x + y;
Now I'm fixing one parameter to obtain a function with smaller arity which returns the same value that MySum would return if I called it with the same parameters (partial application):
MyPartialSum = lambda x : MySum(x, 0);
I could do the very same thing in C:
int MySum(int x, int y) { return x + y; }
int MyPartialSum(int x) { return MySum(x, 0); }
So, the dumb question is: what's the difference? Why would I need closures for partial application? These two pieces of code are equivalent, and I don't see the connection between closures and partial application.
Partial function application is about fixing some arguments of a given function to yield another function with fewer arguments, like
sum = lambda x, y: x + y
inc = lambda x: sum(x, 1)
Notice that inc is sum partially applied, without capturing anything from the context (i.e., no closure is involved, which is what you were asking about).
But, such hand-written (usually anonymous) functions are kind of tedious. One can use a function factory, which returns an inner function. The inner function can be parameterized by capturing some variable from its context, like
# sum = lambda x, y: x + y
def makePartialSumF(n):
    def partialSumF(x):
        return sum(x, n)
    return partialSumF

inc = makePartialSumF(1)
plusTwo = makePartialSumF(2)
Here the factory makePartialSumF is invoked twice. Each call results in a partialSumF function (capturing a different value as n). Using closures makes the implementation of partial application convenient. So you can say partial application can be implemented by means of closures. Of course, closures can do many other things! (As a side note, Python closures are read-only: an inner function cannot rebind a variable from its enclosing scope without Python 3's nonlocal.)
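Incidentally, the standard library packages this exact pattern as functools.partial (a callable object rather than a literal closure, but equivalent in use):
from functools import partial

sum = lambda x, y: x + y
inc = partial(sum, 1)      # fixes the first argument of sum
plusTwo = partial(sum, 2)
print(inc(5), plusTwo(5))  # 6 7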
Currying is about turning a function of N arguments into a unary function which returns a unary function... for example we have a function which takes three arguments and returns a value:
sum = lambda x, y, z: x + y + z
The curried version is
curriedSum = lambda x: lambda y: lambda z: x + y + z
I bet you wouldn't write Python code like that. IMO the motivation for currying is mostly theoretical interest (a framework for expressing computations using only unary functions: every function is unary!). The practical byproduct is that, in languages where functions are curried, some partial applications (those that fix arguments from the left) are as trivial as supplying arguments to the curried function. (But not all partial applications are like that. Example: given f(x, y, z) = x + 2*y + 3*z, binding y to a constant to yield a function of two variables.) So you can say currying is a technique which, in practice and as a byproduct, makes many useful partial applications trivial, but that's not the point of currying.
Partial application is a technique whereby you take an existing function and a subset of its arguments, and produce a new function that accepts the remaining arguments.
In other words, if you have a function F(a, b), a function that performs partial application of a would look like B(fn, a), where F(a, b) = B(F, a)(b).
In your example you're simply creating new functions, rather than applying partial application to the existing one.
Here's an example in python:
def curry_first(fn, arg):
    def foo(*args):
        return fn(arg, *args)
    return foo
This creates a closure over the supplied function and argument. A new function is returned that calls the original function with the new argument signature. The closure is important - it gives foo access to fn and arg. Now you can do this sort of thing:
add = lambda x, y : x + y;
add2 = curry_first(add, 2)
add2(4) # returns 6
I've usually heard this referred to as currying, although strictly speaking it is partial application.
Simply put, the result of a partial application is normally implemented as a closure.
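You can see that directly in Python: a hand-rolled partial application returns a function whose __closure__ holds cells for the captured variables (partially_apply is just an illustrative name):
def partially_apply(f, x):
    def g(y):
        return f(x, y)   # f and x are captured in g's closure
    return g

inc = partially_apply(lambda a, b: a + b, 1)
print(inc(41))          # 42
print(inc.__closure__)  # a tuple of two cells: the captured f and x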
Closures are not a required feature in a language. I'm experimenting with a homemade language, lambdatalk, in which lambdas don't create closures but do accept partial application. For instance, this is how the set [cons, car, cdr] could be defined in SCHEME:
(def cons (lambda (x y) (lambda (m) (m x y))))
(def car (lambda (z) (z (lambda (p q) p))))
(def cdr (lambda (z) (z (lambda (p q) q))))
(car (cons 12 34)) -> 12
(cdr (cons 12 34)) -> 34
and in lambdatalk:
{def cons {lambda {:x :y :m} {:m :x :y}}}
{def car {lambda {:z} {:z {lambda {:x :y} :x}}}}
{def cdr {lambda {:z} {:z {lambda {:x :y} :y}}}}
{car {cons 12 34}} -> 12
{cdr {cons 12 34}} -> 34
In SCHEME, the outer lambda saves x and y in a closure that the inner lambda can access given m. In lambdatalk, the lambda saves :x and :y and returns a new lambda waiting for :m. So, even if closures (and lexical scope) are useful functionalities, they are not a necessity. Without any free variables, out of any lexical scope, functions are true black boxes without any side effect, in total independence, following a true functional paradigm. Don't you think so?
For me, using MyPartialSum that way makes sure that you depend on only one function to sum the numbers (MySum), and that will make debugging a lot easier if things go wrong, because you would not have to worry about the logic of your code in two different places.
If in the future you decide to change the logic of MySum (say, for example, to make it return x + y + 1), then you will not have to worry about MyPartialSum, because it calls MySum.
Even if it seems stupid, writing code that way simplifies the handling of dependencies between functions. I am sure you will notice that later in your studies.
Edit: I discovered a partial answer to my own question in the process of writing this, but I think it can easily be improved upon so I will post it anyway. Maybe there's a better solution out there?
I am looking for an easy way to define recursive functions in a let form without resorting to letfn. This is probably an unreasonable request, but the reason I am looking for this technique is that I have a mix of data and recursive functions that depend on each other in a way that requires a lot of nested let and letfn statements.
I wanted to write the recursive functions that generate lazy sequences like this (using the Fibonacci sequence as an example):
(let [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
  (take 10 fibs))
But it seems that in Clojure fibs cannot use its own symbol during binding. The obvious way around this is to use letfn:
(letfn [(fibo [] (lazy-cat [0 1] (map + (fibo) (rest (fibo)))))]
  (take 10 (fibo)))
But as I said earlier this leads to a lot of cumbersome nesting and alternating let and letfn.
To do this without letfn and using just let, I started by writing something that uses what I think is the U-combinator (just heard of the concept today):
(let [fibs (fn [fi] (lazy-cat [0 1] (map + (fi fi) (rest (fi fi)))))]
  (take 10 (fibs fibs)))
But how do I get rid of the redundancy of (fi fi)?
It was at this point that I discovered the answer to my own question, after an hour of struggling and incrementally adding bits to the combinator Q.
(let [Q (fn [r] ((fn [f] (f f)) (fn [y] (r (fn [] (y y))))))
      fibs (Q (fn [fi] (lazy-cat [0 1] (map + (fi) (rest (fi))))))]
  (take 10 fibs))
What is this Q combinator called that I am using to define a recursive sequence? It looks like the Y combinator with no arguments x. Is it the same?
(defn Y [r]
  ((fn [f] (f f))
   (fn [y] (r (fn [x] ((y y) x))))))
Is there another function in clojure.core or clojure.contrib that provides the functionality of Y or Q? I can't imagine what I just did was idiomatic...
letrec
I have written a letrec macro for Clojure recently, here's a Gist of it. It acts like Scheme's letrec (if you happen to know that), meaning that it's a cross between let and letfn: you can bind a set of names to mutually recursive values, without the need for those values to be functions (lazy sequences are ok too), as long as it is possible to evaluate the head of each item without referring to the others (that's Haskell -- or perhaps type-theoretic -- parlance; "head" here might stand e.g. for the lazy sequence object itself, with -- crucially! -- no forcing involved).
You can use it to write things like
(letrec [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
  fibs)
which is normally only possible at top level. See the Gist for more examples.
As pointed out in the question text, the above could be replaced with
(letfn [(fibs [] (lazy-cat [0 1] (map + (fibs) (rest (fibs)))))]
  (fibs))
for the same result in exponential time; the letrec version has linear complexity (as does a top-level (def fibs (lazy-cat [0 1] (map + fibs (rest fibs)))) form).
iterate
Self-recursive seqs can often be constructed with iterate -- namely when a fixed range of look-behind suffices to compute any given element. See clojure.contrib.lazy-seqs for an example of how to compute fibs with iterate.
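For instance, a standard iterate formulation keeps the two-element look-behind window as a pair (this is along the lines of the contrib version):
(def fibs (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))
(take 10 fibs) ; => (0 1 1 2 3 5 8 13 21 34)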
clojure.contrib.seq
c.c.seq provides an interesting function called rec-seq, enabling things like
(take 10 (cseq/rec-seq fibs (map + fibs (rest fibs))))
It has the limitation of only allowing one to construct a single self-recursive sequence, but it might be possible to lift some implementation ideas from its source to enable more diverse scenarios. If a single self-recursive sequence not defined at top level is what you're after, this has to be the idiomatic solution.
combinators
As for combinators such as those displayed in the question text, it is important to note that they are hampered by the lack of TCO (tail call optimisation) on the JVM (and thus in Clojure, which elects to use the JVM's calling conventions directly for top performance).
top level
There's also the option of putting the mutually recursive "things" at top level, possibly in their own namespace. This doesn't work so great if those "things" need to be parameterised somehow, but namespaces can be created dynamically if need be (see clojure.contrib.with-ns for implementation ideas).
final comments
I'll readily admit that the letrec thing is far from idiomatic Clojure and I'd avoid using it in production code if anything else would do (and since there's always the top level option...). However, it is (IMO!) nice to play with and it appears to work well enough. I'm personally interested in finding out how much can be accomplished without letrec and to what degree a letrec macro makes things easier / cleaner... I haven't formed an opinion on that yet. So, here it is. Once again, for the single self-recursive seq case, iterate or contrib might be the best way to go.
fn takes an optional name argument with that name bound to the function in its body. Using this feature, you could write fibs as:
(def fibs ((fn generator [a b] (lazy-seq (cons a (generator b (+ a b))))) 0 1))
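Since generator conses exactly one element per step, each element is computed once, so this runs in linear time; for example:
(take 10 fibs) ; => (0 1 1 2 3 5 8 13 21 34)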
I'm writing a lexer for a small language in Alex with Haskell.
The language is specified to have pythonesque significant indentation, with an INDENT token or a DEDENT token emitted whenever the indentation level changes.
In a traditional imperative language like C, you'd keep a global in the lexer and update it with the indentation level at each line.
This doesn't work in Alex/Haskell because I can't store any global data anywhere with Haskell, and I can't put all my lexing rules inside any monad or anything.
So, how can I do this? Is it even possible? Or will I have to write my own lexer and avoid using Alex?
Note that in other whitespace-sensitive languages -- like Haskell -- the layout handling is indeed done in the lexer. GHC in fact implements layout handling in Alex. Here's the source:
https://github.com/ghc/ghc/blob/master/compiler/GHC/Parser/Lexer.x
There are some serious errors in your question that lead you astray, as jrockway points out. "I can't store any global data anywhere with Haskell" is on the wrong track. Firstly, you can have global state; secondly, you should not be using global state here, since Alex fully supports state transitions in rules in a safe manner.
Look at the AlexState structure that Alex provides, letting you thread state through your lexer. Then, look at how the state is used in GHC's layout implementation to implement indent/unindent of the layout rules. (Search for "-- Layout processing" in GHC's lexer to see how the state is pushed and popped).
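If you use Alex's "monadUserState" wrapper, this state threading comes built in: you define an AlexUserState type and alexInitUserState, and the generated lexer provides alexGetUserState / alexSetUserState. A minimal sketch of keeping an indentation stack that way (the field name and helper function are invented for illustration):
data AlexUserState = AlexUserState { indentStack :: [Int] }

alexInitUserState :: AlexUserState
alexInitUserState = AlexUserState { indentStack = [0] }

-- push the current line's indentation; a real lexer would compare it
-- against the top of the stack and emit INDENT/DEDENT tokens accordingly
pushIndent :: Int -> Alex ()
pushIndent n = do
  st <- alexGetUserState
  alexSetUserState st { indentStack = n : indentStack st }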
I can't store any global data anywhere with Haskell
This is not true; in most cases something like the State monad is sufficient, but there is also the ST monad.
You don't need global state for this task, however. Writing a parser consists of two parts: lexical analysis and syntax analysis. The lexical analysis just turns a stream of characters into a stream of meaningful tokens. The syntax analysis turns tokens into an AST; this is where you should deal with indentation.
As you interpret the indentation, you call a handler function whenever the indentation level changes -- when it increases (nesting), you call the handler again (perhaps with one argument incremented, if you want to track the indentation level); when the level decreases, you simply return the relevant AST portion from the function.
(As an aside, using a global variable for this is something that would not occur to me in an imperative language either -- if anything, it's an instance variable. The State monad is very similar conceptually to this.)
Finally, I think the phrase "I can't put all my lexing rules inside any monad" indicates some sort of misunderstanding of monads. If I needed to parse and keep global state, my code would look like:
data AST = ...
type Step = State Int AST

parseFunction :: Stream -> Step
parseFunction s = do
    level <- get
    ...
    if anotherFunction then put (level + 1) >> parseFunction ...
                       else parseWhatever
    ...
    return node

parse :: Stream -> Step
parse s = do
    if looksLikeFunction then parseFunction ...

main = runState parse 0 -- initial nesting of 0
Instead of combining function applications with (.) or ($), you combine them with (>>=) or (>>). Other than that, the algorithm is the same. (There is no "monad" to be "inside".)
Finally, you might like applicative functors:
eval :: Environment -> Node -> Evaluated
eval e (Constant x) = Evaluated x
eval e (Variable x) = Evaluated (lookup e x)
eval e (Function f x y) = (f <$> (`eval` x) <*> (`eval` y)) e
(or
eval e (Function f x y) = ((`eval` f) <*> (`eval` x) <*> (`eval` y)) e
if you have something like "funcall"... but I digress.)
There is plenty of literature on parsing with applicative functors, monads, and arrows; all of which have the potential to solve your problem. Read up on those and see what you get.
I wrote the following function:
let str2lst str =
    let rec f s acc =
        match s with
        | "" -> acc
        | _ -> f (s.Substring 1) (s.[0]::acc)
    f str []
How can I know if the F# compiler turned it into a loop? Is there a way to find out without using Reflector (I have no experience with Reflector and I don't know C#)?
Edit: Also, is it possible to write a tail-recursive function without using an inner function, or is an inner function necessary for the loop to reside in?
Also, is there a function in the F# standard library to run a given function a number of times, each time giving it the last output as input? Let's say I have a string: I want to run a function over the string, then run it again over the resulting string, and so on...
Unfortunately there is no trivial way.
It is not too hard to read the source code and use the types and determine whether something is a tail call by inspection (is it 'the last thing', and not in a 'try' block), but people second-guess themselves and make mistakes. There's no simple automated way (other than e.g. inspecting the generated code).
Of course, you can just try your function on a large piece of test data and see if it blows up or not.
The F# compiler will generate .tail IL instructions for all tail calls (unless the compiler flag to turn them off is used - useful when you want to keep stack frames for debugging), with the exception that directly tail-recursive functions will be optimized into loops. (EDIT: I think nowadays the F# compiler also fails to emit .tail in cases where it can prove there are no recursive loops through this call site; this is an optimization, given that the .tail opcode is a little slower on many platforms.)
'tailcall' is a reserved keyword, with the idea that a future version of F# may allow you to write e.g.
tailcall func args
and then get a warning/error if it's not a tail call.
Only functions that are not naturally tail-recursive (and thus need an extra accumulator parameter) will 'force' you into the 'inner function' idiom.
Here's a code sample of what you asked:
let rec nTimes n f x =
    if n = 0 then
        x
    else
        nTimes (n-1) f (f x)
let r = nTimes 3 (fun s -> s ^ " is a rose") "A rose"
printfn "%s" r
I like the rule of thumb Paul Graham formulates in On Lisp: if there is work left to do, e.g. manipulating the recursive call output, then the call is not tail recursive.
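A classic illustration of that rule (in F#, to match the question): in the first version below, work (the multiply) remains after the recursive call returns, so it is not a tail call; the accumulator version moves that work into the arguments.
// NOT tail recursive: n * ... happens after fact (n - 1) returns
let rec fact n =
    if n = 0 then 1 else n * fact (n - 1)

// Tail recursive: the recursive call is the last thing evaluated
let rec factAcc acc n =
    if n = 0 then acc else factAcc (acc * n) (n - 1)

printfn "%d" (factAcc 1 5) // 120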