Erlang function with number for parameter - erlang

Hi, I'm learning Erlang via Learn You Some Erlang by Fred Hebert.
I've come across some code that I'm confused about:
sword(1) -> throw(slice);
sword(2) -> erlang:error(cut_arm);
sword(3) -> exit(cut_leg);
sword(4) -> throw(punch);
sword(5) -> exit(cross_bridge).
talk() -> "blah blah".
black_knight(Attack) when is_function(Attack, 0) ->
    try Attack() of
        _ -> "None shall pass."
    catch
        throw:slice -> "It is but a scratch.";
        error:cut_arm -> "I've had worse.";
        exit:cut_leg -> "Come on you pansy!";
        _:_ -> "Just a flesh wound."
    end.
So here's the confusion: I don't understand the sword(#) functions. Why is there a number as the parameter? The is_function guard actually checks whether these functions are of arity 0, and apparently all the sword(#) functions are of arity 0.
Also, the way to pass a sword(#) function to the black_knight function is different compared to the talk function.
Here's how the book passes a sword function and the talk function.
exceptions:black_knight(fun exceptions:talk/0).
vs
exceptions:black_knight(fun() -> exceptions:sword(1) end).
For the talk function we just pass the function, whereas for sword(1) we have to wrap it in an anonymous function. I don't get it.
So the questions are:
Why is passing these sword(#) functions different from passing the talk function?
Why does sword(#) have a number as a parameter?
Why does sword(#) have arity 0 when it seems like it has an arity of 1 (I'm counting the number as a parameter)?
The chapter of the book I'm at.
Thank you for your time.

If you look at the guard statement for the black_knight function, is_function(Attack, 0), it will only match the definition if the function passed in takes 0 parameters. Since talk takes 0 parameters, it can be passed in directly. sword takes one parameter, so you need to wrap it in an anonymous function that takes 0 parameters before you can pass it in.
The number in the definition of each clause is an example of pattern matching. If you call sword with 1 as the argument, you will execute the code in the clause sword(1) ->. If you pass in 2 as the argument, you will execute the clause sword(2) ->. See this section in Learn You Some Erlang for a more complete description.
sword does have an arity of 1, so you were counting parameters correctly.
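A quick way to see this in the Erlang shell (assuming the module is compiled as exceptions; the prompt numbers are just illustrative):
1> is_function(fun exceptions:talk/0, 0).
true
2> is_function(fun exceptions:sword/1, 0).
false
3> is_function(fun() -> exceptions:sword(1) end, 0).
true
The wrapper fun takes no arguments itself, so it satisfies the is_function(Attack, 0) guard even though it calls sword/1 internally.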

The purpose of the sword function is to show off different kinds of errors that can be thrown. It accepts a parameter so it can have more than one clause. Fred probably chose integers because they are short, but that doesn't really matter.
The sword function really has an arity of one.
The black_knight/1 function is supposed to show you how to catch the different error classes that exist in Erlang. It does this by calling the zero-arity function that is passed into it and providing a different response for different errors it might throw.
sword/1 is passed into black_knight/1 using an anonymous function because black_knight/1 only accepts functions of arity zero.
The anonymous function that is created by
fun () -> sword(1) end
is a function of arity zero that calls sword/1 with one argument.
talk/0 can be passed directly because it already is a zero arity function.
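For completeness, a sketch of what the book's calls then produce, based on the clauses above (shell prompts are illustrative):
1> exceptions:black_knight(fun exceptions:talk/0).
"None shall pass."
2> exceptions:black_knight(fun() -> exceptions:sword(1) end).
"It is but a scratch."
3> exceptions:black_knight(fun() -> exceptions:sword(4) end).
"Just a flesh wound."
talk/0 returns normally, so the try...of branch matches; sword(1) throws slice, which is caught by throw:slice; sword(4) throws punch, which no specific clause matches, so the catch-all _:_ answers instead.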

Related

understand mutable variable closure

In this example:
let generateStamp =
    let mutable count = 0
    (fun () -> count <- count + 1; count)
generateStamp ()
generateStamp ()
// prints 1 and 2
Isn't mutable count allocated for every generateStamp() call? If that happens, you would have to store the result of that call in order to be able to increment the counter, but that does not seem to be the correct assumption here.
No. You've defined generateStamp to be a value, so it's only evaluated once, binding the name generateStamp to a lambda of type (unit -> int). Every invocation of that lambda then shares the same count variable.
If you had defined it as a plain function instead, every invocation would allocate its own count, like this:
let generateStamp () =
    let mutable count = 0
    count <- count + 1; count
generateStamp () // 1
generateStamp () // also 1
Note that the output is only different because generateStamp is impure. For pure functions (i.e. no side-effects), the output of the two versions would always be the same (although one might be faster than the other).
Also, note that the type signature is slightly different for the two versions. The first one is (unit -> int), while the second one is unit -> int. (I'm not sure if this is anything more than a quirk of the F# compiler, but it's interesting to be aware of nonetheless.)
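If it helps to see the "evaluated once" behaviour directly, here is a small variant of the snippet from the question (the printfn is my addition, assuming the original compiles as shown):
let generateStamp =
    printfn "initialising"                 // runs exactly once, when the binding is evaluated
    let mutable count = 0
    fun () -> count <- count + 1; count
generateStamp ()   // returns 1; nothing further is printed
generateStamp ()   // returns 2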
No. An amateur (in this regard) tries to explain:
You are not calling a function named generateStamp. generateStamp has a value which is a function, expressed as a lambda. generateStamp is calculated once before you call the function it has. Part of that calculation is to set count to zero. So that happens only once.
The expression generateStamp () means that you first retrieve a function with the expression generateStamp, however that comes about - meaning it could be a declared function or a computed value, as in this case. Then you supply an argument to make it execute.
The expression generateStamp () looks just like a normal function call, unlike in C# where you have to deal with this sort of thing in a much more cumbersome way. When we talk about functions being first-class members of the language, this is part of what it is about.

I don't understand this map tuple key compilation error, in F#

Here is a function:
let newPositions : PositionData list =
    positions
    |> List.filter (fun x ->
        let key = (x.Instrument, x.Side)
        match brain.Positions.TryGetValue key with
        | false, _ ->
            // if we don't know the position, it's new
            true
        | true, p when x.UpdateTime > p.UpdateTime ->
            // it's newer than the version we have, it's new
            true
        | _ ->
            false
    )
It compiles as expected.
Let's focus on two lines:
let key = (x.Instrument, x.Side)
match brain.Positions.TryGetValue key with
brain.Positions is a Map<Instrument * Side, PositionData> type
if I modify the second line to:
match brain.Positions.TryGetValue (x.Instrument, x.Side) with
then the code will not compile, with error:
[FS0001] This expression was expected to have type
'Instrument * Side'
but here has type
'Instrument'
but:
match brain.Positions.TryGetValue ((x.Instrument, x.Side)) with
will compile...
Why is that?
This is due to method call syntax.
TryGetValue is not a function, but a method. A very different thing, and a much worse thing in general. And subject to some special syntactic rules.
This method, you see, actually has two parameters, not one. The first parameter is a key, as you expect. And the second parameter is what's known in C# as an out parameter - i.e. kind of a second return value. The way it was originally meant to be called in C# is something like this:
Dictionary<int, string> map = ...
string val;
if (map.TryGetValue(42, out val)) { ... }
The "regular" return value of TryGetValue is a boolean signifying whether the key was even found. And the "extra" return value, denoted here out val, is the value corresponding to the key.
This is, of course, extremely awkward, but it did not stop the early .NET libraries from using this pattern very widely. So F# has special syntactic sugar for this pattern: if you pass just one parameter, then the result becomes a tuple consisting of the "actual" return value and the out parameter. Which is what you're matching against in your code.
But of course, F# cannot prevent you from using the method exactly as designed, so you're free to pass two parameters as well - the first one being the key and the second one being a byref cell (which is F# equivalent of out).
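For illustration, a sketch of what that explicit two-argument form could look like inside the filter, where x is in scope (the mutable p is a name I'm introducing, not part of the original code):
let mutable p = Unchecked.defaultof<PositionData>
if brain.Positions.TryGetValue((x.Instrument, x.Side), &p)
then printfn "found: %A" p    // p now holds the stored PositionData for that key
else printfn "not found"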
And here is where this clashes with the method call syntax. You see, in .NET all methods are uncurried, meaning their arguments are all effectively tupled. So when you call a method, you're passing a tuple.
And this is what happens in this case: as soon as you add parentheses, the compiler interprets that as an attempt to call a .NET method with tupled arguments:
brain.Positions.TryGetValue (x.Instrument, x.Side)
                             ^^^^^^^^^^^^  ^^^^^^
                             first arg     second arg
And in this case it expects the first argument to be of type Instrument * Side, but you're clearly passing just an Instrument. Which is exactly what the error message tells you: "expected to have type 'Instrument * Side' but here has type 'Instrument'".
But when you add a second pair of parens, the meaning changes: now the outer parens are interpreted as "method call syntax", and the inner parens are interpreted as "denoting a tuple". So now the compiler interprets the whole thing as just a single argument, and all works as before.
Incidentally, the following will also work:
brain.Positions.TryGetValue <| (x.Instrument, x.Side)
This works because now it's no longer a "method call" syntax, because the parens do not immediately follow the method name.
But a much better solution is, as always, do not use methods, use functions instead!
In this particular example, instead of .TryGetValue, use Map.tryFind. It's the same thing, but in proper function form. Not a method. A function.
brain.Positions |> Map.tryFind (x.Instrument, x.Side)
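Applied to the code from the question, the filter might then look like this (a sketch with the same logic, just matching on the option that Map.tryFind returns):
let newPositions : PositionData list =
    positions
    |> List.filter (fun x ->
        match brain.Positions |> Map.tryFind (x.Instrument, x.Side) with
        | None -> true                                      // unknown position, it's new
        | Some p when x.UpdateTime > p.UpdateTime -> true   // newer than the version we have
        | _ -> false)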
Q: But why does this confusing method even exist?
Compatibility. As always with awkward and nonsensical things, the answer is: compatibility.
The standard .NET library has this interface System.Collections.Generic.IDictionary, and it's on that interface that the TryGetValue method is defined. And every dictionary-like type, including Map, is generally expected to implement that interface. So here you go.
In future, please consider the Stack Overflow guidelines provided under How to create a Minimal, Reproducible Example. Well, minimal and reproducible the code in your question is, but it shall also be complete...
…Complete – Provide all parts someone else needs to reproduce your problem in the question itself
That being said, when given the following definitions, your code will compile:
type Instrument() = class end
type Side() = class end
type PositionData = { Instrument : Instrument; Side : Side; }
    with member __.UpdateTime = 0
module brain =
    let Positions = dict [(Instrument(), Side()), { Instrument = Instrument(); Side = Side() }]
let positions = []
Now, why is that? Technically, it is because of the mechanism described in the F# 4.1 Language Specification under §14.4 Method Application Resolution, 4. c., 2nd bullet point:
If all formal parameters in the suffix are “out” arguments with byref type, remove the suffix from UnnamedFormalArgs and call it ImplicitlyReturnedFormalArgs.
This is supported by the signature of the method call in question:
System.Collections.Generic.IDictionary.TryGetValue(key: Instrument * Side, value: byref<PositionData>)
Here, if the second argument is not provided, the compiler does the implicit conversion to a tuple return type as described in §14.4 5. g.
You are obviously familiar with this behaviour, but maybe not with the fact that if you specify two arguments, the compiler will see the second of them as the explicit byref "out" argument, and complains accordingly with its next error message:
Error 2 This expression was expected to have type
PositionData ref
but here has type
Side
This misunderstanding changes the return type of the method call from bool * PositionData to bool, which consequently elicits a third error:
Error 3 This expression was expected to have type
bool
but here has type
'a * 'b
In short, your self-discovered workaround with double parentheses is indeed the way to tell the compiler: No, I am giving you only one argument (a tuple), so that you can implicitly convert the byref "out" argument to a tuple return type.

When to use parentheses when calling a function in F#?

I'm learning about F# and I understand you don't need to use parentheses when calling a function.
Ex
let addOne arg1 =
    arg1 + 1
addOne 1
vs
this.GetType()
Why do I have to use parentheses on the second function?
There is a bit of a mismatch between working with .NET libraries and working with F# libraries when it comes to parameters, but you can generally see () not as parentheses, but as a special value of type unit that means "no useful information".
This means that when you say:
addOne 1
You are calling addOne with a value - number 1 - as a parameter. Now, when you apply the same reading to the second example:
this.GetType()
You can read this as calling this.GetType with a value - the special () unit value as a parameter. If you wanted to be consistent, you could write this with space too:
this.GetType ()
In practice, most people will omit the space when calling .NET libraries. When you do not write the space, F# also supports method chaining so you can write e.g. foo().bar().
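A tiny self-contained sketch of the "() is just a value" reading (the names here are mine, not from any library):
let hello () = printfn "hello"   // a function whose one parameter has type unit
hello ()                         // apply it to the unit value
let u = ()                       // unit can be bound to a name like any other value
hello u                          // exactly the same call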
Many F# functions taking multiple parameters will use the "curried" form, which means that the parameters need to be separated by spaces. For example:
let add a b = a + b
let mul a b = a * b
add 10 (mul 20 3)
Here, you need parentheses around the second expression, so that the compiler knows how to parse the code. This is in contrast with typical .NET methods, which take parameters as a tuple. F# tuples are written as (10, "hello") and so you can see a method call as an ordinary call accepting a tuple:
some.Operation (10, "Hello")
Again, typically you wouldn't write the space here, because you know this is actually a .NET method call, rather than "passing tuple to a function", but conceptually, you can think of it in both ways.
This is the summary - there are a few corner cases where method calls do not really behave like tuples (e.g. when it comes to named parameters), but this way of thinking about it should give you an idea about how things work.

How to refactor a function using "ignore"

When should I use "ignore" instead of "()"?
I attempted to write the following:
let log = fun data medium -> ()
I then received the following message:
Lint: 'fun _ -> ()' might be able to be refactored into 'ignore'.
So I updated the declaration to the following:
let log = fun data medium -> ignore
Is there any guidance on why I might use one over the other?
My gut tells me that I should use ignore when executing an actual expression.
In this case though, I'm declaring a high-order function.
Are my assumptions accurate?
The linter message that you got here is a bit confusing. The ignore function is just a function that takes anything and returns unit:
let ignore = fun x -> ()
Your log function is a bit similar to ignore, but it takes two parameters:
let log = fun data medium -> ()
In F#, this is actually a function that returns another function (currying). You can write this more explicitly by saying:
let log = fun data -> fun medium -> ()
Now, you can see that a part of your function is actually the same thing as ignore. You can write:
let log = fun data -> ignore
This means the same thing as your original function and this is what the linter is suggesting. I would not write the code in this way, because it is less obvious what the code does (it actually takes two arguments) - I guess the linter is looking just for the simple pattern, ignoring the fact that sometimes the refactoring is not all that useful.
Never, at least not in the way shown in the question.
Substituting between ignore and () is not meaningful, as they are different concepts:
ignore is a generic function with one argument and unit return. Its type is 'T -> unit.
() is the only valid value of type unit. It is not a function at all.
Therefore, it's not valid to do the refactor shown in the question. The first version of log takes two curried arguments, while the second version takes three.
What Lint is trying to suggest isn't quite clear. ignore is a function with one argument; it's not obvious how (or why) it should be used to refactor a method that takes two curried arguments. fun _ _ -> () would be an okay and quite readable way to ignore two arguments.
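To make the arity point concrete, here is a sketch contrasting the two definitions (log3 is a name I'm introducing for the refactored version):
let log = fun _ _ -> ()                  // two curried arguments, returns unit
let log3 = fun data medium -> ignore     // two arguments, returns a function, so three in total
log "message" "console"                  // ()
log3 "message" "console" "extra"         // only after a third argument do we get () back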

Why does return/redo evaluate result functions in the calling context, but block results are not evaluated?

Last night I learned about the /redo option for when you return from a function. It lets you return another function, which is then invoked at the calling site and reinvokes the evaluator from the same position:
>> foo: func [a] [(print a) (return/redo (func [b] [print b + 10]))]
>> foo "Hello" 10
Hello
20
Even though foo is a function that only takes one argument, it now acts like a function that took two arguments. Something like that would otherwise require the caller to know you were returning a function, and that caller would have to manually use the do evaluator on it.
Thus without return/redo, you'd get:
>> foo: func [a] [(print a) (return (func [b] [print b + 10]))]
>> foo "Hello" 10
Hello
== 10
foo consumed its one parameter and returned a function by value (which was not invoked, thus the interpreter moved on). Then the expression evaluated to 10. If return/redo did not exist you'd have had to write:
>> do foo "Hello" 10
Hello
20
This keeps the caller from having to know (or care) if you've chosen to return a function to execute. And is cool because you can do things like tail call optimization, or writing a wrapper for the return functionality itself. Here's a variant of return that prints a message but still exits the function and provides the result:
>> myreturn: func [] [(print "Leaving...") (return/redo :return)]
>> foo: func [num] [myreturn num + 10]
>> foo 10
Leaving...
== 20
But functions aren't the only thing that have behavior in do. So if this is a general pattern for "removing the need for a DO at the callsite", then why doesn't this print anything?
>> test: func [] [return/redo [print "test"]]
>> test
== [print "test"]
It just returned the block by value, like a normal return would have. Shouldn't it have printed out "test"? That's what do would...uh, do with it:
>> do [print "test"]
test
The short answer is because it is generally unnecessary to evaluate a block at the call point, because blocks in Rebol don't take parameters so it mostly doesn't matter where they are evaluated. However, that "mostly" may need some explanation...
It comes down to two interesting features of Rebol: static binding, and how do of a function works.
Static Binding and Scopes
Rebol doesn't have scoped word bindings, it has static direct word bindings. Sometimes it seems like we have lexical scope, but we really fake that by updating the static bindings each time we're building a new "scoped" code block. We can also rebind words manually whenever we want.
What that means for us in this case though, is that once a block exists, its bindings and values are static - they're not affected by where the block is physically located, or where it is being evaluated.
However, and this is where it gets tricky, function contexts are weird. While the bindings of words bound to a function context are static, the set of values assigned to those words are dynamically scoped. It's a side effect of how code is evaluated in Rebol: What are language statements in other languages are functions in Rebol, so a call to if, for instance, actually passes a block of data to the if function which if then passes to do. That means that while a function is running, do has to look up the values of its words from the call frame of the most recent call to the function that hasn't returned yet.
This does mean that if you call a function and return a block of code with words bound to its context, evaluating that block will fail after the function returns. However, if your function calls itself and that call returns a block of code with its words bound to it, evaluating that block before your function returns will make it look up those words in the call frame of the current call of your function.
This is the same for whether you do or return/redo, and affects inner functions as well. Let me demonstrate:
Function returning code that is evaluated after the function returns, referencing a function word:
>> a: 10 do do has [a] [a: 20 [a]]
** Script error: a word is not bound to a context
** Where: do
** Near: do do has [a] [a: 20 [a]]
Same, but with return/redo and the code in a function:
>> a: 10 do has [a] [a: 20 return/redo does [a]]
** Script error: a word is not bound to a context
** Where: function!
** Near: [a: 20 return/redo does [a]]
Code do version, but inside an outer call to the same function:
>> do f: function [x] [a: 10 either zero? x [do f 1] [a: 20 [a]]] 0
== 10
Same, but with return/redo and the code in a function:
>> do f: function [x] [a: 10 either zero? x [f 1] [a: 20 return/redo does [a]]] 0
== 10
So in short, with blocks there is usually no advantage to doing the block elsewhere than where it is defined, and if you want to it is easier to use another call to do instead. Self-calling recursive functions that need to return code to be executed in outer calls of the same function are an exceedingly rare code pattern that I have never seen used in Rebol code at all.
It could be possible to change return/redo so it would handle blocks as well, but it probably isn't worth the increased overhead to return/redo to add a feature that is only useful in rare circumstances and already has a better way to do it.
However, that brings up an interesting point: If you don't need return/redo for blocks because do does the same job, doesn't the same apply to functions? Why do we need return/redo at all?
How DO of a Function Works
Basically, we have return/redo because it uses exactly the same code that we use to implement do of a function. You might not realize it, but do of a function is really unusual.
In most programming languages that can call a function value, you have to pass the parameters to the function as a complete set, sort of how R3's apply function works. Regular Rebol function calling causes some unknown-ahead-of-time number of additional evaluations to happen for its arguments using unknown-ahead-of-time evaluation rules. The evaluator figures out these evaluation rules at runtime and just passes the results of the evaluation to the function. The function itself doesn't handle the evaluation of its parameters, or even necessarily know how those parameters were evaluated.
However, when you do a function value explicitly, that means passing the function value to a call to another function, a regular function named do, and then that magically causes the evaluation of additional parameters that weren't even passed to the do function at all.
Well it's not magic, it's return/redo. The way do of a function works is that it returns a reference to the function in a regular shortcut-return value, with a flag in the shortcut-return value that tells the interpreter that called do to evaluate the returned function as if it were called right there in the code. This is basically what is called a trampoline.
Here's where we get to another interesting feature of Rebol: The ability to shortcut-return values from a function is built into the evaluator, but it doesn't actually use the return function to do it. All of the functions you see from Rebol code are wrappers around the internal stuff, even return and do. The return function we call just generates one of those shortcut-return values and returns it; the evaluator does the rest.
So in this case, what really happened is that all along we had code that did what return/redo does internally, but Carl decided to add an option to our return function to set that flag, even though the internal code doesn't need return to do so because the internal code calls the internal function. And then he didn't tell anyone that he was making the option externally available, or why, or what it did (I guess you can't mention everything; who has the time?). I have the suspicion, based on conversations with Carl and some bugs we've been fixing, that R2 handled do of a function differently, in a way that would have made return/redo impossible.
That does mean that the handling of return/redo is pretty thoroughly oriented towards function evaluation, since that is its entire reason for existing at all. Adding any overhead to it would add overhead to do of a function, and we use that a lot. Probably not worth extending it to blocks, given how little we'd gain and how rarely we'd get any benefit at all.
For return/redo of a function though, it seems to be getting more and more useful the more we think about it. In the last day we've come up with all sorts of tricks that this enables. Trampolines are useful.
While the question originally asked why return/redo did not evaluate blocks, there were also formulations like: "is cool because you can do things like tail call optimization", "[can write] a wrapper for the return functionality", "it seems to be getting more and more useful the more we think about it".
I do not think these are true. My first example demonstrates a case where return/redo can really be used, an example being in the "area of expertise" of return/redo, so to speak. It is a variadic sum function called sumn:
use [result collect process] [
    collect: func [:value [any-type!]] [
        unless value? 'value [return process result]
        append/only result :value
        return/redo :collect
    ]
    process: func [block [block!] /local result] [
        result: 0
        foreach value reduce block [result: result + value]
        result
    ]
    sumn: func [] [
        result: copy []
        return/redo :collect
    ]
]
This is the usage example:
>> sumn 1 * 2 2 * 3 4
== 12
Variadic functions taking an "unlimited number" of arguments are not as useful in Rebol as they may look at first sight. For example, if we wanted to use the sumn function in a small script, we would have to wrap it into a paren to indicate where it should stop collecting arguments:
result: (sumn 1 * 2 2 * 3 4)
print result
This is not any better than using a more standard (non-variadic) alternative called e.g. block-sum and taking just one argument, a block. The usage would be like
result: block-sum [1 * 2 2 * 3 4]
print result
Of course, if the function can somehow detect what its last argument is without needing an enclosing paren, we really gain something. In this case we could use the #[unset!] value as the sumn stopping argument, but that does not save typing either:
result: sumn 1 * 2 2 * 3 4 #[unset!]
print result
Seeing the example of a return wrapper I would say that return/redo is not well suited for return wrappers, return wrappers being outside of its area of expertise. To demonstrate that, here is a return wrapper written in Rebol 2 that actually is outside of return/redo's area of expertise:
myreturn: func [
    {my RETURN wrapper returning the string "indefinite" instead of #[unset!]}
    ; the [throw] attribute makes this function a RETURN wrapper in R2:
    [throw]
    value [any-type!] {the value to return}
] [
    either value? 'value [return :value] [return "indefinite"]
]
Testing in R2:
>> do does [return #[unset!]]
>> do does [myreturn #[unset!]]
== "indefinite"
>> do does [return 1]
== 1
>> do does [myreturn 1]
== 1
>> do does [return 2 3]
== 2
>> do does [myreturn 2 3]
== 2
Also, I do not think it is true that return/redo helps with tail call optimizations. There are examples of how tail calls can be implemented without using return/redo at the www.rebol.org site. As said, return/redo was tailor-made to support the implementation of variadic functions and it is not flexible enough for other purposes as far as argument passing is concerned.
