Why does return/redo evaluate result functions in the calling context, but not block results?

Last night I learned about the /redo option for when you return from a function. It lets you return another function, which is then invoked at the calling site, reinvoking the evaluator from the same position:
>> foo: func [a] [(print a) (return/redo (func [b] [print b + 10]))]
>> foo "Hello" 10
Hello
20
Even though foo is a function that only takes one argument, it now acts like a function that took two arguments. Something like that would otherwise require the caller to know you were returning a function, and that caller would have to manually use the do evaluator on it.
Thus without return/redo, you'd get:
>> foo: func [a] [(print a) (return (func [b] [print b + 10]))]
>> foo "Hello" 10
Hello
== 10
foo consumed its one parameter and returned a function by value (which was not invoked, thus the interpreter moved on). Then the expression evaluated to 10. If return/redo did not exist you'd have had to write:
>> do foo "Hello" 10
Hello
20
This keeps the caller from having to know (or care) whether you've chosen to return a function to execute. It's also cool because you can do things like tail call optimization, or write a wrapper for the return functionality itself. Here's a variant of return that prints a message but still exits the function and provides the result:
>> myreturn: func [] [(print "Leaving...") (return/redo :return)]
>> foo: func [num] [myreturn num + 10]
>> foo 10
Leaving...
== 20
But functions aren't the only things that have behavior in do. So if this is a general pattern for "removing the need for a DO at the callsite", then why doesn't this print anything?
>> test: func [] [return/redo [print "test"]]
>> test
== [print "test"]
It just returned the block by value, like a normal return would have. Shouldn't it have printed out "test"? That's what do would...uh, do with it:
>> do [print "test"]
test

The short answer is that it is generally unnecessary to evaluate a block at the call point: blocks in Rebol don't take parameters, so it mostly doesn't matter where they are evaluated. However, that "mostly" may need some explanation...
It comes down to two interesting features of Rebol: static binding, and how do of a function works.
Static Binding and Scopes
Rebol doesn't have scoped word bindings; it has static direct word bindings. Sometimes it seems like we have lexical scope, but we really fake that by updating the static bindings each time we build a new "scoped" code block. We can also rebind words manually whenever we want.
What that means for us in this case though, is that once a block exists, its bindings and values are static - they're not affected by where the block is physically located, or where it is being evaluated.
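For instance, here is a quick sketch of my own (not from the original post): a block keeps the bindings it was created with, even when it is evaluated inside a function that has its own x:
>> x: 10
>> b: [x + 1]
>> f: func [x] [do b]    ; the block was made outside f, so its x is not f's x
>> f 1000
== 11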
However, and this is where it gets tricky, function contexts are weird. While the bindings of words bound to a function context are static, the set of values assigned to those words is dynamically scoped. It's a side effect of how code is evaluated in Rebol: what would be language statements in other languages are functions in Rebol, so a call to if, for instance, actually passes a block of data to the if function, which if then passes to do. That means that while a function is running, do has to look up the values of its words from the call frame of the most recent call to the function that hasn't returned yet.
This does mean that if you call a function and return a block of code with words bound to its context, evaluating that block will fail after the function returns. However, if your function calls itself and that call returns a block of code with its words bound to it, evaluating that block before your function returns will make it look up those words in the call frame of the current call of your function.
This is the same for whether you do or return/redo, and affects inner functions as well. Let me demonstrate:
Function returning code that is evaluated after the function returns, referencing a function word:
>> a: 10 do do has [a] [a: 20 [a]]
** Script error: a word is not bound to a context
** Where: do
** Near: do do has [a] [a: 20 [a]]
Same, but with return/redo and the code in a function:
>> a: 10 do has [a] [a: 20 return/redo does [a]]
** Script error: a word is not bound to a context
** Where: function!
** Near: [a: 20 return/redo does [a]]
The do version again, but run inside an outer call to the same function:
>> do f: function [x] [a: 10 either zero? x [do f 1] [a: 20 [a]]] 0
== 10
Same, but with return/redo and the code in a function:
>> do f: function [x] [a: 10 either zero? x [f 1] [a: 20 return/redo does [a]]] 0
== 10
So in short, with blocks there is usually no advantage to evaluating the block anywhere other than where it is defined, and if you want to, it is easier to use another call to do instead. Self-calling recursive functions that need to return code to be executed in outer calls of the same function are an exceedingly rare code pattern that I have never seen used in Rebol code at all.
It could be possible to change return/redo so it would handle blocks as well, but it probably isn't worth the increased overhead to return/redo to add a feature that is only useful in rare circumstances and already has a better way to do it.
However, that brings up an interesting point: If you don't need return/redo for blocks because do does the same job, doesn't the same apply to functions? Why do we need return/redo at all?
How DO of a Function Works
Basically, we have return/redo because it uses exactly the same code that we use to implement do of a function. You might not realize it, but do of a function is really unusual.
In most programming languages that can call a function value, you have to pass the parameters to the function as a complete set, sort of how R3's apply function works. Regular Rebol function calling causes some unknown-ahead-of-time number of additional evaluations to happen for its arguments using unknown-ahead-of-time evaluation rules. The evaluator figures out these evaluation rules at runtime and just passes the results of the evaluation to the function. The function itself doesn't handle the evaluation of its parameters, or even necessarily know how those parameters were evaluated.
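To make the contrast concrete, here is a small illustration of my own (not part of the original answer): apply hands the function a finished set of arguments, while a normal call leaves it to the evaluator to work out what the arguments are:
>> apply :add [1 2]    ; the arguments are supplied as one complete set
== 3
>> add 1 2 * 3         ; normal calling: the evaluator decides that 2 * 3 is the second argument
== 7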
However, when you do a function value explicitly, that means passing the function value to a call to another function, a regular function named do, and then that magically causes the evaluation of additional parameters that weren't even passed to the do function at all.
Well it's not magic, it's return/redo. The way do of a function works is that it returns a reference to the function in a regular shortcut-return value, with a flag in the shortcut-return value that tells the interpreter that called do to evaluate the returned function as if it were called right there in the code. This is basically what is called a trampoline.
Here's where we get to another interesting feature of Rebol: The ability to shortcut-return values from a function is built into the evaluator, but it doesn't actually use the return function to do it. All of the functions you see from Rebol code are wrappers around the internal stuff, even return and do. The return function we call just generates one of those shortcut-return values and returns it; the evaluator does the rest.
So in this case, what really happened is that all along we had code that did what return/redo does internally, but Carl decided to add an option to our return function to set that flag, even though the internal code doesn't need return to do so because the internal code calls the internal function. And then he didn't tell anyone that he was making the option externally available, or why, or what it did (I guess you can't mention everything; who has the time?). I have the suspicion, based on conversations with Carl and some bugs we've been fixing, that R2 handled do of a function differently, in a way that would have made return/redo impossible.
That does mean that the handling of return/redo is pretty thoroughly oriented towards function evaluation, since that is its entire reason for existing at all. Adding any overhead to it would add overhead to do of a function, and we use that a lot. Probably not worth extending it to blocks, given how little we'd gain and how rarely we'd get any benefit at all.
For return/redo of a function though, it seems to be getting more and more useful the more we think about it. In the last day we've come up with all sorts of tricks that this enables. Trampolines are useful.

While the question originally asked why return/redo did not evaluate blocks, there were also formulations like: "is cool because you can do things like tail call optimization", "[can write] a wrapper for the return functionality", "it seems to be getting more and more useful the more we think about it".
I do not think these are true. My first example demonstrates a case where return/redo can really be used, an example being in the "area of expertise" of return/redo, so to speak. It is a variadic sum function called sumn:
use [result collect process] [
    collect: func [:value [any-type!]] [
        unless value? 'value [return process result]
        append/only result :value
        return/redo :collect
    ]
    process: func [block [block!] /local result] [
        result: 0
        foreach value reduce block [result: result + value]
        result
    ]
    sumn: func [] [
        result: copy []
        return/redo :collect
    ]
]
This is the usage example:
>> sumn 1 * 2 2 * 3 4
== 12
Variadic functions taking an "unlimited number" of arguments are not as useful in Rebol as they may look at first sight. For example, if we wanted to use the sumn function in a small script, we would have to wrap it in a paren to indicate where it should stop collecting arguments:
result: (sumn 1 * 2 2 * 3 4)
print result
This is not any better than using a more standard (non-variadic) alternative called, e.g., block-sum, taking just one argument, a block. The usage would look like this:
result: block-sum [1 * 2 2 * 3 4]
print result
Of course, if the function can somehow detect its last argument without needing an enclosing paren, we really gain something. In this case we could use the #[unset!] value as the sumn stopping argument, but that does not save typing either:
result: sumn 1 * 2 2 * 3 4 #[unset!]
print result
Seeing the example of a return wrapper, I would say that return/redo is not well suited for return wrappers; they are outside of its area of expertise. To demonstrate that, here is a return wrapper written in Rebol 2 that really is outside of return/redo's area of expertise:
myreturn: func [
    {my RETURN wrapper returning the string "indefinite" instead of #[unset!]}
    ; the [throw] attribute makes this function a RETURN wrapper in R2:
    [throw]
    value [any-type!] {the value to return}
] [
    either value? 'value [return :value] [return "indefinite"]
]
Testing in R2:
>> do does [return #[unset!]]
>> do does [myreturn #[unset!]]
== "indefinite"
>> do does [return 1]
== 1
>> do does [myreturn 1]
== 1
>> do does [return 2 3]
== 2
>> do does [myreturn 2 3]
== 2
Also, I do not think it is true that return/redo helps with tail call optimizations. There are examples of how tail calls can be implemented without using return/redo at the www.rebol.org site. As said, return/redo was tailor-made to support the implementation of variadic functions, and it is not flexible enough for other purposes as far as argument passing is concerned.

Related

What is "object = {...}" in lua good for?

I recently read about Lua and addons for the game "World of Warcraft". Since the interface language for addons is Lua and I want to learn a new language, I thought this was a good idea.
But there is one thing I can't figure out. In almost every addon there is a line at the top which looks to me like a constructor that creates an object whose members I can access. This line goes something like this:
object = {...}
I know that if a function returns several values (which is IMHO one huge plus for Lua) and I don't want to store them separately in several variables, I can just write
myArray = {SomeFunction()}
where myArray is now a table that contains the values and I can access the values by indexing it (myArray[4]). Since the elements are not explicitly typed, because only the values themselves hold their type, this is fine for Lua. I also know that "..." can be used as a parameter array in a function for the case that the function does not know how many parameters it gets when called (like String[] args in Java). But what in god's name is this "curly bracket - dot, dot, dot - curly bracket" used for???
You've already said all there is to it in your question:
{...} is really just a combination of the two behaviors you described: It creates a table containing all the arguments, so
function foo(a, b, ...)
    return {...}
end
foo(1, 2, 3, 4, 5) --> {3, 4, 5}
Basically, ... is just a normal expression, just like a function call that returns multiple values. The following two expressions work in the exact same way:
local a, b, c = ...
local d, e, f = some_function()
Keep in mind though that this has some performance implications, so maybe don't use it in a function that gets called like 1000 times a second ;)
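If all you need is the count of the varargs or a single element, Lua's select can read them without building a table at all, which sidesteps that allocation. A small sketch of my own (not from the answer above):
local function count_args(...)
    return select("#", ...)            -- number of arguments, including nils
end

local function third_arg(...)
    return (select(3, ...))            -- select(3, ...) yields arguments 3..n; the parentheses keep only the first
end

print(count_args(1, nil, 3))           --> 3
print(third_arg("a", "b", "c", "d"))   --> c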
EDIT:
Note that this really doesn't apply just to "functions". Functions are actually more of a syntax feature than anything else. Under the hood, Lua only knows of chunks, which are what both functions and .lua files get turned into. So, if you run a Lua script, the entire script gets turned into a chunk and is therefore no different than a function.
In terms of code, the difference is that with a function you can specify names for its arguments outside of its code, whereas with a file you're already at the outermost level of code; there's no "outside" a file.
Luckily, all Lua files, when they're loaded as a chunk, are automatically variadic, meaning they get the ... to access their argument list.
When you call a file like lua script.lua foo bar, inside script.lua, ... will actually contain the two arguments "foo" and "bar", so that's also a convenient way to access arguments when using Lua for standalone scripts.
In your example, it's actually quite similar. Most likely, somewhere else your script gets loaded with load(), which returns a function that you can call—and, you guessed it, pass arguments to.
Imagine the following situation:
function foo(a, b)
    print(b)
    print(a)
end
foo('hello', 'world')
This is almost equivalent to
function foo(...)
    local a, b = ...
    print(b)
    print(a)
end
foo('hello', 'world')
Which is 100% equivalent (except maybe in performance) to
-- Note that [[ string ]] is just a convenient syntax for multiline "strings"
foo = load([[
    local a, b = ...
    print(b)
    print(a)
]])
foo('hello', 'world')
According to the Lua 5.1 Reference Manual, {...} means the arguments passed to the program. In your case those are probably the arguments passed from the game to the addon.
You can see references to this in this question and this thread.
Put the following text at the start of the file:
local args = {...}
for __, arg in ipairs(args) do
    print(arg)
end
And it reveals that:
args[1] is the name of the addon
args[2] is a (empty) table passed by reference to all files in the same addon
Information inserted to args[2] is therefore available to different files.
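As a sketch of what that sharing looks like (file names invented for illustration; both files would be listed in the addon's .toc file, and the game passes the same two values to each of them):
-- core.lua
local addonName, shared = ...
function shared.greet()
    print("hello from " .. addonName)
end

-- ui.lua, loaded after core.lua
local addonName, shared = ...
shared.greet()    -- works because both files received the same table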

Strange behavior caused by debug.getinfo(1, "n").name

I learned how to get the function name inside a function by using debug.getinfo(1, "n").name.
Using this feature, I found some strange behavior in Lua.
Here's my code:
function myFunc()
    local name = debug.getinfo(1, "n").name
    return name
end

function foo()
    return myFunc()
end

function boo()
    local name = myFunc()
    return name
end
print(foo())
print(boo())
Result:
nil
myFunc
As you can see, the functions foo() and boo() call the same function myFunc(), but they return different results.
If I replace debug.getinfo(1, "n").name with some other string, they return the same results as expected, but I don't understand the unexpected behavior caused by using debug.getinfo().
Is it possible to correct myFunc() function so calling both foo() and boo() functions return the same result?
Expected result:
myFunc
myFunc
In Lua, any return statement of the form return <expression_yielding_a_function>(...) is a "tail call". Tail calls essentially don't exist in the call stack, so they take up no additional space or resources. The function you call effectively gets erased from the debug information.
Is it possible to correct myFunc() function so calling both foo() and boo() functions return the same result?
Um... yes, but before I tell you how, allow me to try to convince you not to do this.
As previously mentioned, tail calls are part of the Lua language. The removal of tail calls from the stack is not an "optimization" any more than it is an "optimization" for a for loop to exit when you use break. It is a part of Lua's grammar, and Lua programmers have just as much a right to expect a tail call to be a tail call as they have the right to expect break to exit loops.
Lua, as a language, specifically states that this:
local function recursive(...)
    -- some terminating condition
    return recursive(modified_args)
end
will never, ever, run out of stack space. It will be just as stack space efficient as performing a loop. This is a part of the Lua language, just as much a part of it as the behavior of for and while.
If a user wants to call your function via a tail call, that is their right as the user of a language that makes tail calls a thing. Denying users of a language the right to use the features of that language is rude.
So don't do it.
Furthermore, your code suggests that you are attempting to rely on functions having names. That you're doing something significant and meaningful with those names.
Well, Lua is not Python; Lua functions do not have to have names, period. As such, you should not write code that meaningfully relies upon the name of a function. For debugging or logging purposes, fine. But you should not break user expectations just for debugging and logging. So if the user made a tail call, just accept that's what the user wanted and that your debugging/logging will suffer slightly.
OK, so, do we agree that you shouldn't do this? That Lua users have the right to tail calls, and you don't have the right to deny them? That Lua functions are not named and you shouldn't write code that requires them to maintain a name? OK?
What follows is terrible code that you should never use! (in Lua 5.3):
function bypass_tail_call(Func)
    local function tail_call_bypass(...)
        local rets = table.pack(Func(...))
        return table.unpack(rets, 1, rets.n)
    end
    return tail_call_bypass
end
Then, simply replace your real function with the return of the bypass:
function myFunc()
    local name = debug.getinfo(1, "n").name
    return name
end
myFunc = bypass_tail_call(myFunc)
Note that the bypass function has to build an array to hold the return values, then unpack them into the final return statement. This obviously requires additional memory allocations that don't have to happen in regular code.
So there's another reason not to do this.
You can run your code through luac -l -p
...
function <stdin:6,8> (4 instructions at 0x555f561592a0)
0 params, 2 slots, 1 upvalue, 0 locals, 1 constant, 0 functions
1 [7] GETTABUP 0 0 -1 ; _ENV "myFunc"
2 [7] TAILCALL 0 1 0
3 [7] RETURN 0 0
4 [8] RETURN 0 1
function <stdin:10,13> (4 instructions at 0x555f561593b0)
0 params, 2 slots, 1 upvalue, 1 local, 1 constant, 0 functions
1 [11] GETTABUP 0 0 -1 ; _ENV "myFunc"
2 [11] CALL 0 1 2
3 [12] RETURN 0 2
4 [13] RETURN 0 1
Those are the two functions that are of interest to us: foo and boo.
As you can see, when boo calls myFunc, it's just a normal CALL, so nothing interesting there.
foo, however, does something called a tail call. That is, the return value of foo is the return value of myFunc.
What makes this kind of call special is that there is no need for the program to jump back into foo; once foo calls myFunc it can just hand over the keys and say "You know what to do"; myFunc then returns its results directly to where foo was called. This has two advantages:
The stack frame of foo can be cleaned up before myFunc is called
once myFunc returns, it doesn't need two jumps to return to the main thread; only one
Both of those are insignificant in examples like yours, but once you have a chain of lots and lots of tail calls, it becomes significant.
The downside of this is that, once the stack of foo gets cleaned up, Lua also forgets all the debugging information associated with it; it only remembers that myFunc was called as a tail call, but not from where.
An interesting side note is that boo is almost a tail call as well. If Lua didn't have multiple return values, it'd be exactly identical to foo, and a smarter compiler like LuaJIT might compile it to a tail call. PUC Lua won't, though, since it needs a literal return some_function() to recognize the tail call.
The difference is that boo only returns the first value returned by myFunc, and while in your example, there will only ever be one, the interpreter can't make that assumption (LuaJIT might make that assumption during JIT compilation, but that's beyond my understanding)
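A quick sketch of mine (not from the answer) showing that difference between forwarding every result and keeping only the first:
local function multi() return 1, 2, 3 end

local function like_foo()
    return multi()       -- tail call: all three results are forwarded
end

local function like_boo()
    local v = multi()    -- plain call: extra results are discarded
    return v             -- only the first value is returned
end

print(like_foo())   --> 1   2   3
print(like_boo())   --> 1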
Also note that, technically, the term tail call just describes a function A directly returning the return value of another function B.
It often gets used interchangeably with tail call optimization, which is what the compiler does when it re-uses the stack frame and turns the function call into a jump.
Strictly speaking, C (for example) has tail calls, but it has no tail call optimization, meaning something like
int recursive(n) { return recursive(n+1); }
is valid C code, but will eventually cause a stack overflow, while in Lua
local function recursive(n) return recursive(n+1) end
will just run forever. Both are tail calls, but only the second gets optimized.
EDIT: As always with C, some compilers may, on their own, implement tail call optimization, so don't go around telling everyone that "C never ever does it"; it's just not a required part of the language, while in Lua it's actually defined in the language specification, so it's not Lua until it has TCO.
This is a result of tail call optimisation, which Lua does.
In this case, Lua translates the function call into a "goto" statement, and does not use any extra stack frame to perform the tail call.
You can add a traceback statement to check it:
function myFunc()
    local name = debug.getinfo(1, "n").name
    print(debug.traceback("Stack trace"))
    return name
end
Tail call optimisation happens in Lua when you return with a function call:
-- Optimized
function good1()
    return test()
end

-- Optimized
function good2()
    return test(foo(), bar(5 + baz()))
end

-- Not optimised
function bad1()
    return test() + 1
end

-- Not optimised
function bad2()
    return test()[2] + foo()
end
You can refer to the following links for more information:
- Programming in Lua - 6.3: Proper Tail Calls
- What is tail call optimisation? - Stack Overflow

When to use parentheses when calling a function in f#?

I'm learning about F# and I understand you don't need to use parentheses when calling a function.
For example:
let addOne arg1 =
    arg1 + 1
addOne 1
vs
this.GetType()
Why do I have to use parentheses on the second function?
There is a bit of a mismatch between working with .NET libraries and working with F# libraries when it comes to parameters, but you can generally see () not as parentheses, but as a special value of type unit that means "no useful information".
This means that when you say:
addOne 1
You are calling addOne with a value - number 1 - as a parameter. Now, when you apply the same reading to the second example:
this.GetType()
You can read this as calling this.GetType with a value - the special () unit value as a parameter. If you wanted to be consistent, you could write this with space too:
this.GetType ()
In practice, most people will omit the space when calling .NET libraries. When you do not write the space, F# also supports method chaining so you can write e.g. foo().bar().
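So a method call like this.GetType() is really just applying a function to the unit value. You can define and call your own unit-taking function the same way; a small sketch of mine, not from the answer:
// a function whose only parameter is the unit value ()
let sayHello () = printfn "Hello"

sayHello ()    // with the space: passing the value () as the argument
sayHello()     // the same call written .NET-style, without the space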
Many F# functions taking multiple parameters will use the "curried" form, which means that the parameters need to be separated by spaces. For example:
let add a b = a + b
let mul a b = a * b
add 10 (mul 20 3)
Here, you need parentheses around the second expression, so that the compiler knows how to parse the code. This is in contrast with typical .NET methods, which take parameters as a tuple. F# tuples are written as (10, "hello") and so you can see a method call as an ordinary call accepting a tuple:
some.Operation (10, "Hello")
Again, typically you wouldn't write the space here, because you know this is actually a .NET method call, rather than "passing tuple to a function", but conceptually, you can think of it in both ways.
This is the summary - there are a few corner cases where method calls do not really behave like tuples (e.g. when it comes to named parameters), but this way of thinking about it should give you an idea about how things work.

.fsx script ignoring a function call when I add a parameter to it

Alright, so I'm a happy fsx-script programmer, because I love how I can have the compiler shout at me when I make mistakes before they show up at runtime.
However, I've found a case which really bothers me, because I was expecting that after doing some refactoring (i.e. adding an argument to a function) I would be warned by the compiler about all the places where I need to put the new argument. But not only did this not happen, fsharpi ran my script and ignored the function call completely!! :(
How can I expect to refactor my scripts if this happens?
Here is my code:
open System

let Foo (bar: string) =
    Console.WriteLine("I received " + bar)

Foo("hey")
It works.
Now, later, I decide to add a second argument to the function (but I forget to add the argument to all the calls to it):
let Foo (bar: string) (baz: bool) =
    Console.WriteLine("I received " + bar)

Foo("hey")
The result of this is that, instead of the compiler telling me that I'm missing an argument, fsharpi runs the script and ignores the call to Foo! Why?
PS: I know the difference between currying and tuples, so I know Foo("hey") becomes a function (instead of a function call) because of partial application. But I want to understand better why the compiler is not expecting a function evaluation here, instead of seeing a function and ignoring it. Can I enable a warningAsError somehow? I would like to avoid resorting to tuples in order to work around this problem.
The fsharpi (or fsi if you're on Windows) interpreter makes no distinction between running a script and typing code at the interactive prompt (or, most often, submitting code from your editor via a select-and-hit-Alt-Enter keyboard shortcut).
Therefore, if you got what you're asking for -- fsharpi issuing a warning whenever a script line has a return value that isn't () -- it would ruin the value of fsharpi for the most common use case, which is people using an interactive fsharpi session to test their code, and rapidly iterate through non-working prototypes to get to one that works correctly. This is one of F#'s great strengths, and giving you what you're asking for would eliminate that strength. It is therefore never going to happen.
BUT... that doesn't mean that you're sunk. If you have functions that return unit, and you want fsharpi to give you a compile-time error when you refactor them to take more arguments, you can do it this way. Replace all occurrences of:
Foo("hey")
with:
() = Foo("hey")
As long as the function Foo has only one argument (and returns unit), this will evaluate to true; the true value will be happily ignored by fsharpi, and your script will run. However, if you then change Foo to take two arguments, so that Foo("hey") now returns a function, the () = Foo("hey") line will no longer compile, and you'll get an error like:
error FS0001: This expression was expected to have type
unit
but here has type
'a -> unit
So if you want fsharpi to refuse to compile your script when you refactor a function, go through and change your calls to () = myfunc arg1 arg2. For functions that don't return unit, make the value you're testing against a value of that function's return type. For example, given this function:
let f x = x * 2
You could do
0 = f 5
This will be false, of course, but it will compile. But if you refactor f:
let f x y = x * 2 + y
Now the line 0 = f 5 will not compile, but will give you the error message:
error FS0001: This expression was expected to have type
int
but here has type
int -> int
To summarize: you won't ever get the feature you're looking for, because it would harm the language. But with a bit of work, you can do something that fits your needs.
Or in other words, as the famous philosopher Mick Jagger once put it:
You can't always get what you want. But if you try, sometimes you might find you get what you need.

Lua: lua_resume and lua_yield argument purposes

What is the purpose of passing arguments to lua_resume and lua_yield?
I understand that on the first call to lua_resume the arguments are passed to the Lua function that is being resumed. This makes sense. However, I'd expect all subsequent calls to lua_resume to "update" the arguments in the coroutine's function, and that is not the case.
What is the purpose of passing arguments to lua_resume for lua_yield to return? Can the Lua function running under the coroutine have access to the arguments passed by lua_resume?
What Nicol said. You can still preserve the values from the first resume call if you want:
do
    local firstcall
    function willyield(a)
        firstcall = a
        while a do
            print(a, firstcall)
            a = coroutine.yield()
        end
    end
end
local coro = coroutine.create(willyield)
coroutine.resume(coro, 1)
coroutine.resume(coro, 10)
coroutine.resume(coro, 100)
coroutine.resume(coro)
will print
1 1
10 1
100 1
Lua cannot magically give the original arguments new values. They might not even be on the stack anymore, depending on optimizations. Furthermore, there's no indication where the code was when it yielded, so it may not be able to see those arguments anymore. For example, if the coroutine called a function, that new function can't see the arguments passed into the old one.
coroutine.yield() returns the arguments passed to the resume call that continues the coroutine, so that the site of the yield call can handle parameters as it so desires. It allows the code doing the resuming to communicate with the specific code doing the yielding. yield() passes its arguments as return values from resume, and resume passes its arguments as return values to yield. This sets up a pathway of communication.
You can't do that in any other way. Certainly not by modifying arguments that may not be visible from the yield site. It's simple, elegant, and makes sense.
Also, it's considered exceedingly rude to go poking at someone's values. Especially a function already in operation. Remember: arguments are just local variables filled with values. The user shouldn't expect the contents of those variables to change unless it changes them itself. They're local variables, after all. They can only be changed locally; hence the name.
A simple example:
co = coroutine.create(function (a, b)
    print("First args: ", a, b)
    coroutine.yield(a + 10, b + 10)
    print("Second args: ", a, b)
    coroutine.yield(a + 10, b + 10)
end)
print(coroutine.resume(co, 1, 2))
print(coroutine.resume(co, 3, 4))
Prints:
First args: 1 2
true 11 12
Second args: 1 2
true 11 12
Showing that the original values for the args a and b did not change.
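The answers above stay at the Lua level, but the C API follows the same flow: the values you push before lua_resume arrive either as the coroutine body's arguments (on the first resume) or as coroutine.yield's return values (on later resumes), and whatever the coroutine yields is left on its stack for the resumer to read. Here is a rough, untested sketch of mine against the Lua 5.3 API (lua_resume's signature differs slightly in 5.1 and 5.4):
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

int main(void) {
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* Coroutine body: the first resume's arguments arrive via ...,
       later resumes arrive as coroutine.yield's return value. */
    const char *body =
        "local a = ...\n"
        "while true do\n"
        "  a = coroutine.yield(a + 1)\n"
        "end\n";

    lua_State *co = lua_newthread(L);   /* new coroutine, pushed onto L's stack */
    luaL_loadstring(co, body);          /* push the coroutine's main function onto co's stack */

    lua_pushinteger(co, 10);            /* argument for the coroutine body */
    lua_resume(co, L, 1);               /* starts the body; it yields a + 1 = 11 */
    printf("%lld\n", (long long) lua_tointeger(co, -1));
    lua_pop(co, 1);                     /* clear the yielded value before resuming again */

    lua_pushinteger(co, 100);           /* becomes coroutine.yield's return value inside the body */
    lua_resume(co, L, 1);               /* the body continues; it yields 101 */
    printf("%lld\n", (long long) lua_tointeger(co, -1));

    lua_close(L);
    return 0;
}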
