I've read through all the documentation, and most of the source of LFE. All the presentations emphasize basic Lisp in traditional Lisp roles: general problem solving, hello world, and syntax-emulating macros.
Does anyone know how LFE handles messaging primitives? To ask a more precise question: how would you express this Erlang:
A = 2,
Pid = spawn(fun() ->
          receive
              B when is_integer(B) -> io:format("Added: ~p~n", [A+B]);
              _ -> nan
          end
      end),
Pid ! 5.
And then, you know, it mumbles something about having added up some numbers and the answer being 7.
I'm not an LFE user, but there is a user guide in the source tree. From reading it I would guess it is something like this:
(let ((A 2))
  (let ((Pid (spawn (lambda ()
                      (receive
                        (B (when (is_integer B))
                           (: io format "Added: ~p~n" (list (+ A B))))
                        (_ nan))))))
    (! Pid 5)))
But I'm very likely to have made a mistake since I haven't even evaluated it in LFE.
Some questions of mine:
Is there a LET* form or is it behaving like one already?
Are guards called the more lispy is-integer and not is_integer as I wrote?
There is a serious lack of examples in the LFE release; all contributions are welcome.
Christian's suggestion is correct. My only comment is that there is no need to have capitalized variable names; it is not wrong, just not necessary.
The LFE let is a "real" let, in which the variable bindings only become visible in the body. You can use patterns in let. There is also a let* form (actually a macro) which binds sequentially.
No, I have so far kept all the Erlang core function names just as they are in vanilla Erlang. It is definitely more lispy to use - instead of _ in names, but what do you do with all the other function names and atoms in OTP? One suggestion is to automatically map - in LFE symbols to _ in the resultant atoms, and back again going the other way, of course. This would probably work, but would it lead to confusion?
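A minimal sketch of what that mapping might look like, written in plain Erlang (purely hypothetical - this is not something LFE actually does):

%% hypothetical helper: replace every - with _ in an atom's name,
%% e.g. dash_to_underscore('handle-call') gives handle_call
dash_to_underscore(Name) when is_atom(Name) ->
    list_to_atom([case C of $- -> $_; _ -> C end || C <- atom_to_list(Name)]).

The reverse direction would map _ back to -, which is where the potential for confusion comes in, since both characters are legal in atom names.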
I could then have a behaviour module looking like:
(defmodule foo
  (export (init 1) (handle-call 2) (handle-cast 2) (handle-info 2) ...)
  (behaviour gen-server))

(defun handle-call ...)
(defun handle-cast ...)
etc ...
But I am very ambivalent about it.
I'm completely new to Erlang. As an exercise to learn the language, I'm trying to implement the function sublist using tail recursion and without using reverse. Here's the function that I took from http://learnyousomeerlang.com/recursion:
tail_sublist(L, N) -> reverse(tail_sublist(L, N, [])).
tail_sublist(_, 0, SubList) -> SubList;
tail_sublist([], _, SubList) -> SubList;
tail_sublist([H|T], N, SubList) when N > 0 ->
    tail_sublist(T, N-1, [H|SubList]).
It seems the use of reverse in Erlang is very frequent.
In Mozart/Oz, it's very easy to create such a function using unbound variables:
proc {Sublist Xs N R}
   if N > 0 then
      case Xs
      of nil then
         R = nil
      [] X|Xr then
         Unbound
      in
         R = X|Unbound
         {Sublist Xr N-1 Unbound}
      end
   else
      R = nil
   end
end
Is it possible to create similar code in Erlang? If not, why?
Edit:
I want to clarify something about the question. The function in Oz doesn't use any auxiliary function (no append, no reverse, nothing external or BIF). It's also built using tail recursion.
When I ask if it's possible to create something similar in Erlang, I'm asking if it's possible to implement a function or set of functions in Erlang using tail recursion, iterating over the initial list only once.
At this point, after reading your comments and answers, I'm doubtful that it can be done, because Erlang doesn't seem to support unbound variables. It seems that all variables need to be assigned a value.
Short Version
No, you can't write similar code in Erlang. The reason is that Erlang variables are single-assignment variables.
Unbound variables are simply not allowed in Erlang.
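To see what single assignment means in practice, a shell session goes roughly like this (= is a pattern match, so matching the already-bound value succeeds, while a different value fails):

1> A = 1.   % first binding of A
1
2> A = 2.   % A is already bound to 1, so this match fails
** exception error: no match of right hand side value 2
3> A = 1.   % matching the value A is already bound to is fine
1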
Long Version
I can't imagine a tail-recursive function similar to the one you present above, due to differences at the paradigm level between the two languages you are trying to compare.
But nevertheless, it also depends on what you mean by similar code.
So, correct me if I am wrong, the following
R = X|Unbound
{Sublist Xr N-1 Unbound}
Means that the binding (R = X|Unbound) will not be completed until the recursive call returns the value of Unbound.
This to me looks a lot like the following:
sublist(_, 0) -> [];
sublist([], _) -> [];
sublist([H|T], N)
  when is_integer(N) ->
    NewTail = sublist(T, N-1),
    [H|NewTail].
%% or
%% sublist([H|T], N)
%%   when is_integer(N) -> [H|sublist(T, N-1)].
But this code isn't tail recursive.
Here's a version that uses appends along the way instead of a reverse at the end.
subl(L, N) -> subl(L, N, []).

subl(_, 0, Accumulator) ->
    Accumulator;
subl([], _, Accumulator) ->
    Accumulator;
subl([H|T], N, Accumulator) ->
    subl(T, N-1, Accumulator ++ [H]).
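Assuming the clauses above are compiled into a module (mylists here is just a placeholder name), a quick check in the shell should look roughly like this:

%% mylists is a made-up module name holding the subl/2 and subl/3 clauses above
1> mylists:subl([a, b, c, d, e], 3).
[a,b,c]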
I would not say that "the use of reverse in Erlang is very frequent". I would say that the use of reverse is very common in toy problems in functional languages where lists are a significant data type.
I'm not sure how close to your Oz code you're trying to get with your "is it possible to create similar code in Erlang? If not, why?" They are two different languages and have made many different syntax choices.
This question is about some syntax a colleague came across today. Though we understand how it works, we don't understand why it is allowed (what is its use?).
Look at this snippet:
fun() -> ok end().
Without the last pair of parentheses this will produce something like:
#Fun<erl_eval.20.82930912>
But with them, the function is evaluated producing:
ok
My question is: why is that syntax allowed in Erlang? Why would I want to create a function just to call it immediately, instead of just writing out its contents? Is there any practical use for it?
The only thing we could think of was introducing local variables inside the fun's body (but that looks ugly and unclear to me).
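For instance, I imagine something along these lines, where do_something/0 is just a made-up helper:

Result = fun() ->
             Tmp = do_something(),  % do_something/0 is a placeholder; Tmp stays local to the fun
             {Tmp, Tmp}
         end(),
%% Tmp is not visible out here; only Result is.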
Please note that this other syntax is not allowed in Erlang, even though it follows the same concept as the former:
fun() -> fun() -> ok end end()().
(It would mean: a function A that returns a function B. And I'm evaluating A (thus producing B) and then evaluating B to get 'ok').
The syntax you mentioned is a natural outcome of Erlang's being functional.
In Erlang, functions are values (stored as closures).
The value of fun() -> ok end is a function, which takes nothing and returns ok. When we put parentheses after it, we are calling that function. Another way to demonstrate this is:
> F = fun() -> ok end.
#Fun<erl_eval.20.80484245>
> F().
ok
The functions in the second example of yours need to be grouped properly in order for the parser to make sense of them.
As for your question -- "why this syntax is allowed", I'd have to say it's a natural outcome of functions being values in Erlang. This ability enables the functional style of programming. Here is an example:
> lists:map(fun(X) -> X * 2 end, [1,2,3]).
[2,4,6]
The above code is in essence this:
> [fun(X) -> X * 2 end(1), fun(X) -> X * 2 end(2), fun(X) -> X * 2 end(3)].
[2,4,6]
A "natural outcome" is just a natural outcome, it really doesn't have to be of any practical use. So, you will probably never see code like (fun() -> fun() -> ok end end())(). being used:)
You typically wont't have much use for the syntax fun() -> ok end (). But it can be useful to do something like (find_right_fun()) (), which is basically the same thing - an expression that evaluates to a function.
Note that the Erlang parser requires you to specify the precedence using () to sort out the meaning of ()(), i.e. your second example should be (fun() -> fun() -> ok end end()) ().
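Evaluating that properly parenthesised form in a shell should give what you expected from your second example - the outer fun is called, returning the inner fun, which is then called in turn. Roughly:

1> (fun() -> fun() -> ok end end()) ().
ok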
Edit: I discovered a partial answer to my own question in the process of writing this, but I think it can easily be improved upon so I will post it anyway. Maybe there's a better solution out there?
I am looking for an easy way to define recursive functions in a let form without resorting to letfn. This is probably an unreasonable request, but the reason I am looking for this technique is that I have a mix of data and recursive functions that depend on each other in a way that requires a lot of nested let and letfn forms.
I wanted to write the recursive functions that generate lazy sequences like this (using the Fibonacci sequence as an example):
(let [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
  (take 10 fibs))
But it seems that in Clojure fibs cannot use its own symbol during binding. The obvious way around it is using letfn:
(letfn [(fibo [] (lazy-cat [0 1] (map + (fibo) (rest (fibo)))))]
  (take 10 (fibo)))
But as I said earlier this leads to a lot of cumbersome nesting and alternating let and letfn.
To do this without letfn and using just let, I started by writing something that uses what I think is the U-combinator (just heard of the concept today):
(let [fibs (fn [fi] (lazy-cat [0 1] (map + (fi fi) (rest (fi fi)))))]
  (take 10 (fibs fibs)))
But how do I get rid of the redundancy of (fi fi)?
It was at this point that I discovered the answer to my own question, after an hour of struggling and incrementally adding bits to the combinator Q.
(let [Q (fn [r] ((fn [f] (f f)) (fn [y] (r (fn [] (y y))))))
      fibs (Q (fn [fi] (lazy-cat [0 1] (map + (fi) (rest (fi))))))]
  (take 10 fibs))
What is this Q combinator called that I am using to define a recursive sequence? It looks like the Y combinator with the argument x removed. Is it the same?
(defn Y [r]
  ((fn [f] (f f))
   (fn [y] (r (fn [x] ((y y) x))))))
Is there another function in clojure.core or clojure.contrib that provides the functionality of Y or Q? I can't imagine what I just did was idiomatic...
letrec
I have written a letrec macro for Clojure recently, here's a Gist of it. It acts like Scheme's letrec (if you happen to know that), meaning that it's a cross between let and letfn: you can bind a set of names to mutually recursive values, without the need for those values to be functions (lazy sequences are ok too), as long as it is possible to evaluate the head of each item without referring to the others (that's Haskell -- or perhaps type-theoretic -- parlance; "head" here might stand e.g. for the lazy sequence object itself, with -- crucially! -- no forcing involved).
You can use it to write things like
(letrec [fibs (lazy-cat [0 1] (map + fibs (rest fibs)))]
  fibs)
which is normally only possible at top level. See the Gist for more examples.
As pointed out in the question text, the above could be replaced with
(letfn [(fibs [] (lazy-cat [0 1] (map + (fibs) (rest (fibs)))))]
  (fibs))
for the same result in exponential time; the letrec version has linear complexity (as does a top-level (def fibs (lazy-cat [0 1] (map + fibs (rest fibs)))) form).
iterate
Self-recursive seqs can often be constructed with iterate -- namely when a fixed range of look-behind suffices to compute any given element. See clojure.contrib.lazy-seqs for an example of how to compute fibs with iterate.
clojure.contrib.seq
c.c.seq provides an interesting function called rec-seq, enabling things like
(take 10 (cseq/rec-seq fibs (map + fibs (rest fibs))))
It has the limitation of only allowing one to construct a single self-recursive sequence, but it might be possible to lift from its source some implementation ideas enabling more diverse scenarios. If a single self-recursive sequence not defined at top level is what you're after, this has to be the idiomatic solution.
combinators
As for combinators such as those displayed in the question text, it is important to note that they are hampered by the lack of TCO (tail call optimisation) on the JVM (and thus in Clojure, which elects to use the JVM's calling conventions directly for top performance).
top level
There's also the option of putting the mutually recursive "things" at top level, possibly in their own namespace. This doesn't work so great if those "things" need to be parameterised somehow, but namespaces can be created dynamically if need be (see clojure.contrib.with-ns for implementation ideas).
final comments
I'll readily admit that the letrec thing is far from idiomatic Clojure and I'd avoid using it in production code if anything else would do (and since there's always the top level option...). However, it is (IMO!) nice to play with and it appears to work well enough. I'm personally interested in finding out how much can be accomplished without letrec and to what degree a letrec macro makes things easier / cleaner... I haven't formed an opinion on that yet. So, here it is. Once again, for the single self-recursive seq case, iterate or contrib might be the best way to go.
fn takes an optional name argument with that name bound to the function in its body. Using this feature, you could write fibs as:
(def fibs ((fn generator [a b] (lazy-seq (cons a (generator b (+ a b))))) 0 1))
I'm writing a lexer for a small language in Alex with Haskell.
The language is specified to have pythonesque significant indentation, with an INDENT token or a DEDENT token emitted whenever the indentation level changes.
In a traditional imperative language like C, you'd keep a global in the lexer and update it with the indentation level at each line.
This doesn't work in Alex/Haskell because I can't store any global data anywhere with Haskell, and I can't put all my lexing rules inside any monad or anything.
So, how can I do this? Is it even possible? Or will I have to write my own lexer and avoid using Alex?
Note that in other whitespace-sensitive languages -- like Haskell -- the layout handling is indeed done in the lexer. GHC in fact implements layout handling in Alex. Here's the source:
https://github.com/ghc/ghc/blob/master/compiler/GHC/Parser/Lexer.x
There are some serious errors in your question that lead you astray, as jrockway points out. "I can't store any global data anywhere with Haskell" is on the wrong track. Firstly, you can have global state; secondly, you should not be using global state here, since Alex fully supports state transitions in rules in a safe manner.
Look at the AlexState structure that Alex provides, letting you thread state through your lexer. Then, look at how the state is used in GHC's layout implementation to implement indent/unindent of the layout rules. (Search for "-- Layout processing" in GHC's lexer to see how the state is pushed and popped).
I can't store any global data anywhere with Haskell
This is not true; in most cases something like the State monad is sufficient, but there is also the ST monad.
You don't need global state for this task, however. Writing a parser consists of two parts: lexical analysis and syntax analysis. The lexical analysis just turns a stream of characters into a stream of meaningful tokens. The syntax analysis turns tokens into an AST; this is where you should deal with indentation.
As you are interpreting the indentation, you will call a handler function as the indentation level changes -- when it increases (nesting), you call your handler function (perhaps with one arg incremented, if you want to track the indentation level); when the level decreases, you simply return the relevant AST portion from the function.
(As an aside, using a global variable for this is something that would not occur to me in an imperative language either -- if anything, it's an instance variable. The State monad is very similar conceptually to this.)
Finally, I think the phrase "I can't put all my lexing rules inside any monad" indicates some sort of misunderstanding of monads. If I needed to parse and keep global state, my code would look like:
data AST = ...
type Step = State Int AST

parseFunction :: Stream -> Step
parseFunction s = do
    level <- get
    ...
    if anotherFunction then put (level + 1) >> parseFunction ...
                       else parseWhatever
    ...
    return node

parse :: Stream -> Step
parse s = do
    if looksLikeFunction then parseFunction ...

main = runState parse 0 -- initial nesting of 0
Instead of combining function applications with (.) or ($), you combine them with (>>=) or (>>). Other than that, the algorithm is the same. (There is no "monad" to be "inside".)
Finally, you might like applicative functors:
eval :: Environment -> Node -> Evaluated
eval e (Constant x) = Evaluated x
eval e (Variable x) = Evaluated (lookup e x)
eval e (Function f x y) = (f <$> (`eval` x) <*> (`eval` y)) e
(or
eval e (Function f x y) = ((`eval` f) <*> (`eval` x) <*> (`eval` y)) e
if you have something like "funcall"... but I digress.)
There is plenty of literature on parsing with applicative functors, monads, and arrows, all of which have the potential to solve your problem. Read up on those and see what you get.
I wrote the following function:
let str2lst str =
    let rec f s acc =
        match s with
        | "" -> acc
        | _ -> f (s.Substring 1) (s.[0]::acc)
    f str []
How can I know if the F# compiler turned it into a loop? Is there a way to find out without using Reflector (I have no experience with Reflector and I don't know C#)?
Edit: Also, is it possible to write a tail-recursive function without using an inner function, or is an inner function necessary for the loop to reside in?
Also, is there a function in the F# std lib to run a given function a number of times, each time giving it the last output as input? Let's say I have a string: I want to run a function over the string, then run it again over the resulting string, and so on...
Unfortunately there is no trivial way.
It is not too hard to read the source code and use the types and determine whether something is a tail call by inspection (is it 'the last thing', and not in a 'try' block), but people second-guess themselves and make mistakes. There's no simple automated way (other than e.g. inspecting the generated code).
Of course, you can just try your function on a large piece of test data and see if it blows up or not.
The F# compiler will generate .tail IL instructions for all tail calls (unless the compiler flag to turn them off is used - useful when you want to keep stack frames for debugging), with the exception that directly tail-recursive functions will be optimized into loops. (EDIT: I think nowadays the F# compiler also fails to emit .tail in cases where it can prove there are no recursive loops through this call site; this is an optimization, given that the .tail opcode is a little slower on many platforms.)
'tailcall' is a reserved keyword, with the idea that a future version of F# may allow you to write e.g.
tailcall func args
and then get a warning/error if it's not a tail call.
Only functions that are not naturally tail-recursive (and thus need an extra accumulator parameter) will 'force' you into the 'inner function' idiom.
Here's a code sample of what you asked:
let rec nTimes n f x =
    if n = 0 then
        x
    else
        nTimes (n-1) f (f x)

let r = nTimes 3 (fun s -> s ^ " is a rose") "A rose"
printfn "%s" r
I like the rule of thumb Paul Graham formulates in On Lisp: if there is work left to do, e.g. manipulating the recursive call output, then the call is not tail recursive.