I am trying to understand how Reactor's Flux.merge and switchIfEmpty work together in the code below, as I am a bit confused by the results I am seeing, no doubt because I have not fully grasped Reactor.
My question is: if the call to wOneRepository... returns an empty list, or the call to wTwoRepository... returns an empty list, will the switchIfEmpty code get executed? Or will it only get executed if both calls return an empty list?
Flux<Widget> f1 = wOneRepository.findWidgets(email).onErrorResume(error -> Flux.empty());
Flux<Widget> f2 = wTwoRepository.findWidgets(email).onErrorResume(error -> Flux.empty());
return Flux.merge(f1, f2)
    .switchIfEmpty(Flux.error(new IllegalArgumentException("Widgets were not found")));
Thank you
switchIfEmpty() will only be called if the upstream Flux completes without emitting anything, and that will only happen if both f1 and f2 complete without emitting anything. So, if both findWidgets calls fail, or both return empty Flux instances, or some combination of those, then switchIfEmpty will be called. If either f1 or f2 emits a Widget, then that Widget will be emitted from the merge operator, which means switchIfEmpty will not be called.
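To see this concretely, here is a minimal, self-contained sketch (using Flux.just/Flux.empty stand-ins for the repository calls above, so the names and values are illustrative):

import reactor.core.publisher.Flux;

public class MergeDemo {
    public static void main(String[] args) {
        // One source emits and the other is empty: merge emits "a",
        // so switchIfEmpty is NOT triggered.
        Flux.merge(Flux.just("a"), Flux.<String>empty())
            .switchIfEmpty(Flux.error(new IllegalArgumentException("not found")))
            .subscribe(System.out::println); // prints "a"

        // Both sources are empty: merge completes without emitting anything,
        // so switchIfEmpty IS triggered and the error is signaled.
        Flux.merge(Flux.<String>empty(), Flux.<String>empty())
            .switchIfEmpty(Flux.error(new IllegalArgumentException("not found")))
            .subscribe(
                System.out::println,
                err -> System.out.println("error: " + err.getMessage())); // prints "error: not found"
    }
}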
I have the following computation expression builder:
type ExprBuilder() =
    member this.Return(x) =
        Some x

let expr = new ExprBuilder()
I understand the purpose of the methods Return, Zero and Combine, but I don't understand what the difference is between the expressions shown below:
let a = expr {
    printfn "Hello"
    return 1
} // result is Some 1
let c = expr {
    return 1
    printfn "Hello"
} // does not compile; the Combine method is required
I also don't understand why, in the first case, the Zero method is not required for the printfn statement.
In the first expression, you perform some computation that results in value 1, and that's it. You don't need a Zero in it, because Zero is only needed for return-less expressions (that's why it's called "zero" - it's what is there when nothing is there), and your expression does have a return.
To specifically answer your question, Zero is not required "for the printfn statement", because not every line within the expression gets transformed. When compiling computation expressions, the compiler breaks them up at "special" points, such as let!, do!, return, etc., leaving all the rest of the code between those points intact. In this case, your printfn call just becomes part of the code that executes before the return.
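Concretely, since this builder defines neither Delay nor Run, the first expression translates to plain sequencing followed by a single Return call, roughly like this (a schematic rendering, not actual compiler output):

let a =
    printfn "Hello"   // ordinary code, executed as-is
    expr.Return 1     // 'return 1' becomes a call to the builder's Return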
In the second expression, you perform two computations: the first one results in the value 1, and the second one results in Zero (which is implicitly assumed when an expression lacks a return). But the whole computation expression can't have two return values; it must have one. So in order to bring the results of the two computations together (one might say, combine them), you need the Combine method.
Besides Combine and Zero, you'd also need to implement Delay for this to work. Multipart (i.e. "combined") computations are also wrapped in Delay in order to allow the builder to defer evaluation and possibly drop some parts within the Combine implementation.
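For reference, here is a minimal sketch of the builder with Zero, Combine and Delay added so that the second expression compiles. The member signatures follow the standard computation-expression conventions, but the combining logic (keep the first Some) is just one plausible choice:

type ExprBuilder() =
    member this.Return(x) = Some x
    // Zero stands in for a part that has no return
    member this.Zero() = None
    // Delay wraps each part so Combine can decide what to evaluate;
    // this sketch simply evaluates eagerly
    member this.Delay(f) = f ()
    // Combine merges the first part's result with the rest;
    // here we keep the first Some we see
    member this.Combine(a, b) =
        match a with
        | Some _ -> a
        | None -> b

let expr = ExprBuilder()

let c = expr {
    return 1
    printfn "Hello"
} // now compiles; evaluates to Some 1 (and, with this eager Delay, still prints "Hello")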
I recommend reading through this introduction by Scott Wlaschin, specifically part 3 about Delay and Run.
Last night I learned about the /redo option for when you return from a function. It lets you return another function, which is then invoked at the calling site, re-invoking the evaluator from the same position.
>> foo: func [a] [(print a) (return/redo (func [b] [print b + 10]))]
>> foo "Hello" 10
Hello
20
Even though foo is a function that only takes one argument, it now acts like a function that took two arguments. Something like that would otherwise require the caller to know you were returning a function, and that caller would have to manually use the do evaluator on it.
Thus without return/redo, you'd get:
>> foo: func [a] [(print a) (return (func [b] [print b + 10]))]
>> foo "Hello" 10
Hello
== 10
foo consumed its one parameter and returned a function by value (which was not invoked, thus the interpreter moved on). Then the expression evaluated to 10. If return/redo did not exist you'd have had to write:
>> do foo "Hello" 10
Hello
20
This keeps the caller from having to know (or care) whether you've chosen to return a function to execute. And it is cool because you can do things like tail call optimization, or write a wrapper for the return functionality itself. Here's a variant of return that prints a message but still exits the function and provides the result:
>> myreturn: func [] [(print "Leaving...") (return/redo :return)]
>> foo: func [num] [myreturn num + 10]
>> foo 10
Leaving...
== 20
But functions aren't the only things that have behavior in do. So if this is a general pattern for "removing the need for a DO at the callsite", then why doesn't this print anything?
>> test: func [] [return/redo [print "test"]]
>> test
== [print "test"]
It just returned the block by value, like a normal return would have. Shouldn't it have printed out "test"? That's what do would...uh, do with it:
>> do [print "test"]
test
The short answer is that it is generally unnecessary to evaluate a block at the call point, because blocks in Rebol don't take parameters, so it mostly doesn't matter where they are evaluated. However, that "mostly" may need some explanation...
It comes down to two interesting features of Rebol: static binding, and how do of a function works.
Static Binding and Scopes
Rebol doesn't have scoped word bindings; it has static direct word bindings. Sometimes it seems like we have lexical scope, but we really fake that by updating the static bindings each time we build a new "scoped" code block. We can also rebind words manually whenever we want.
What that means for us in this case though, is that once a block exists, its bindings and values are static - they're not affected by where the block is physically located, or where it is being evaluated.
However, and this is where it gets tricky, function contexts are weird. While the bindings of words bound to a function context are static, the set of values assigned to those words is dynamically scoped. It's a side effect of how code is evaluated in Rebol: what would be language statements in other languages are functions in Rebol, so a call to if, for instance, actually passes a block of data to the if function, which if then passes to do. That means that while a function is running, do has to look up the values of its words in the call frame of the most recent call to the function that hasn't returned yet.
This does mean that if you call a function and return a block of code with words bound to its context, evaluating that block will fail after the function returns. However, if your function calls itself and that call returns a block of code with its words bound to it, evaluating that block before your function returns will make it look up those words in the call frame of the current call of your function.
This is the same for whether you do or return/redo, and affects inner functions as well. Let me demonstrate:
Function returning code that is evaluated after the function returns, referencing a function word:
>> a: 10 do do has [a] [a: 20 [a]]
** Script error: a word is not bound to a context
** Where: do
** Near: do do has [a] [a: 20 [a]]
Same, but with return/redo and the code in a function:
>> a: 10 do has [a] [a: 20 return/redo does [a]]
** Script error: a word is not bound to a context
** Where: function!
** Near: [a: 20 return/redo does [a]]
The do version, but inside an outer call to the same function:
>> do f: function [x] [a: 10 either zero? x [do f 1] [a: 20 [a]]] 0
== 10
Same, but with return/redo and the code in a function:
>> do f: function [x] [a: 10 either zero? x [f 1] [a: 20 return/redo does [a]]] 0
== 10
So in short, with blocks there is usually no advantage to doing the block anywhere other than where it is defined, and if you want to, it is easier to use another call to do instead. Self-calling recursive functions that need to return code to be executed in outer calls of the same function are an exceedingly rare code pattern that I have never seen used in Rebol code at all.
It would be possible to change return/redo so it handled blocks as well, but it probably isn't worth adding overhead to return/redo for a feature that is only useful in rare circumstances and that already has a better alternative.
However, that brings up an interesting point: If you don't need return/redo for blocks because do does the same job, doesn't the same apply to functions? Why do we need return/redo at all?
How DO of a Function Works
Basically, we have return/redo because it uses exactly the same code that we use to implement do of a function. You might not realize it, but do of a function is really unusual.
In most programming languages that can call a function value, you have to pass the parameters to the function as a complete set, sort of like how R3's apply function works. Regular Rebol function calling causes some number of additional evaluations, unknown ahead of time, to happen for its arguments, using evaluation rules that are also unknown ahead of time. The evaluator figures out these evaluation rules at runtime and just passes the results of the evaluation to the function. The function itself doesn't handle the evaluation of its parameters, or even necessarily know how those parameters were evaluated.
However, when you do a function value explicitly, that means passing the function value to a call to another function, a regular function named do, and then that magically causes the evaluation of additional parameters that weren't even passed to the do function at all.
Well, it's not magic - it's return/redo. The way do of a function works is that it returns a reference to the function in a regular shortcut-return value, with a flag in the shortcut-return value that tells the interpreter that called do to evaluate the returned function as if it were called right there in the code. This is basically what is called a trampoline.
Here's where we get to another interesting feature of Rebol: The ability to shortcut-return values from a function is built into the evaluator, but it doesn't actually use the return function to do it. All of the functions you see from Rebol code are wrappers around the internal stuff, even return and do. The return function we call just generates one of those shortcut-return values and returns it; the evaluator does the rest.
So in this case, what really happened is that all along we had code that did what return/redo does internally, but Carl decided to add an option to our return function to set that flag, even though the internal code doesn't need return to do so because the internal code calls the internal function. And then he didn't tell anyone that he was making the option externally available, or why, or what it did (I guess you can't mention everything; who has the time?). I have the suspicion, based on conversations with Carl and some bugs we've been fixing, that R2 handled do of a function differently, in a way that would have made return/redo impossible.
That does mean that the handling of return/redo is pretty thoroughly oriented towards function evaluation, since that is its entire reason for existing at all. Adding any overhead to it would add overhead to do of a function, and we use that a lot. Probably not worth extending it to blocks, given how little we'd gain and how rarely we'd get any benefit at all.
For return/redo of a function though, it seems to be getting more and more useful the more we think about it. In the last day we've come up with all sorts of tricks that this enables. Trampolines are useful.
While the question originally asked why return/redo did not evaluate blocks, there were also claims like: "[it] is cool because you can do things like tail call optimization", "[you can write] a wrapper for the return functionality", and "it seems to be getting more and more useful the more we think about it".
I do not think these are true. My first example demonstrates a case where return/redo really can be used, an example in the "area of expertise" of return/redo, so to speak. It is a variadic sum function called sumn:
use [result collect process] [
    collect: func [:value [any-type!]] [
        unless value? 'value [return process result]
        append/only result :value
        return/redo :collect
    ]
    process: func [block [block!] /local result] [
        result: 0
        foreach value reduce block [result: result + value]
        result
    ]
    sumn: func [] [
        result: copy []
        return/redo :collect
    ]
]
This is the usage example:
>> sumn 1 * 2 2 * 3 4
== 12
Variadic functions taking an "unlimited number" of arguments are not as useful in Rebol as they may look at first sight. For example, if we wanted to use the sumn function in a small script, we would have to wrap it in a paren to indicate where it should stop collecting arguments:
result: (sumn 1 * 2 2 * 3 4)
print result
This is not any better than using a more standard (non-variadic) alternative, called e.g. block-sum, taking just one argument, a block. The usage would look like:
result: block-sum [1 * 2 2 * 3 4]
print result
Of course, if the function could somehow detect its last argument without needing an enclosing paren, we would really gain something. In this case we could use the #[unset!] value as the stopping argument for sumn, but that does not spare typing either:
result: sumn 1 * 2 2 * 3 4 #[unset!]
print result
Seeing the example of a return wrapper, I would say that return/redo is not well suited for return wrappers; they are outside its area of expertise. To demonstrate that, here is a return wrapper written in Rebol 2:
myreturn: func [
    {my RETURN wrapper returning the string "indefinite" instead of #[unset!]}
    ; the [throw] attribute makes this function a RETURN wrapper in R2:
    [throw]
    value [any-type!] {the value to return}
] [
    either value? 'value [return :value] [return "indefinite"]
]
Testing in R2:
>> do does [return #[unset!]]
>> do does [myreturn #[unset!]]
== "indefinite"
>> do does [return 1]
== 1
>> do does [myreturn 1]
== 1
>> do does [return 2 3]
== 2
>> do does [myreturn 2 3]
== 2
Also, I do not think it is true that return/redo helps with tail call optimization. There are examples of how tail calls can be implemented without using return/redo at the www.rebol.org site. As said, return/redo was tailor-made to support the implementation of variadic functions, and it is not flexible enough for other purposes as far as argument passing is concerned.
I have an algorithm which uses a bunch of different functions, or steps, during its work. I would like to run the algorithm with different possible functions bound to those steps. In essence, I want to prepare some sets of values (which specific function should be bound to which specific step), run my algorithm with every one of those sets, and capture the results of every run alongside the input set.
Something like that:
(binding [step1 f1
          step2 f2]
  (do-my-job))

(binding [step1 f11
          step2 f22]
  (do-my-job))
but with dynamic binding expressions.
What are my options?
So you are trying to do something like a parameter sweep?
I can't see why you need dynamic binding. Your algorithm is defined in terms of first-class function calls. Just pass the functions in as parameters to your algorithm.
To try all the values, just generate the combinations of the values and run map over the resulting list. You will get all the results from this, as sketched below.
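A minimal sketch of that idea (do-my-job, the step functions, and the candidate maps below are all placeholders):

;; hypothetical algorithm that takes its steps as plain arguments
(defn do-my-job [step1 step2]
  (step2 (step1 42)))

;; candidate implementations for each step, keyed by a label
(def step1-candidates {:inc inc, :dec dec})
(def step2-candidates {:double #(* 2 %), :add100 #(+ 100 %)})

;; run the algorithm for every combination, keeping inputs and results
(for [[n1 s1] step1-candidates
      [n2 s2] step2-candidates]
  {:steps [n1 n2]
   :result (do-my-job s1 s2)})
;; => ({:steps [:inc :double], :result 86} {:steps [:inc :add100], :result 143} ...)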
Because binding is a macro, you will need to write a macro that generates the dynamic binding forms.
Well, it seems I have it working the following way:
(def conditions [[`step1 f1 `step2 f2] [`step1 f11 `step2 f22]])
(map #(eval `(binding ~% body)) conditions)
So, I have tested this and as far as I can see, it all just works.
In the example below, I create a var, then rebind this var to a function. As you can see, the call to the function happens outside the lexical scope of the binding form, so we have dynamic binding here.
(def ^{:dynamic true} *bnd-fn* nil)
(defn fn1 []
  (println "fn1"))

(defn fn2 []
  (println "fn2"))

(defn callfn []
  (*bnd-fn*))

;; crashes with an NPE: *bnd-fn* is still nil
(callfn)

;; prints fn1
(binding [*bnd-fn* fn1]
  (callfn))

;; prints fn2
(binding [*bnd-fn* fn2]
  (callfn))
I've been using a similar approach for a library of my own (Clojure-owl, if you are interested!), although in my case the thing I wish to dynamically rebind is a Java object.
In that case, while I have allowed dynamic rebinding, in most cases I don't use it; I just use a different Java object for different namespaces. This works nicely for me.
In reply to your comment: if you want to have a single binding form, this is easy to achieve. Add the following code.
(defn dobinding [f]
  (binding [*bnd-fn* f]
    (callfn)))

(dorun
 (map dobinding [fn1 fn2]))
The function dobinding runs all of the others. Then I evaluate it over the list with a single map (wrapped in dorun, or you get an unrealized lazy sequence). This runs callfn once per function in the list. Obviously, you will need to pass a list of lists in. You should be able to parallelize the whole lot if you choose.
This is a lot easier than trying to splice in the whole vector. The value in a binding form is evaluated, so it can be anything you like.
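For completeness, here is a sketch of the "list of lists" version mentioned above. It uses with-bindings*, which takes a map of vars to values and calls a thunk, so no eval or generated binding forms are needed (the step vars and functions are illustrative):

(def ^:dynamic *step1* nil)
(def ^:dynamic *step2* nil)

(defn do-my-job []
  (*step2* (*step1* 42)))

;; each condition maps dynamic vars to the functions to bind
(def conditions
  [{#'*step1* inc, #'*step2* #(* 2 %)}
   {#'*step1* dec, #'*step2* #(+ 100 %)}])

;; with-bindings* installs one condition's bindings, then runs the thunk
(mapv #(with-bindings* % do-my-job) conditions)
;; => [86 141]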
I have a number of events that are merged into one observable that executes some commands. If a command succeeds, some result takes place. In addition, the command should be logged.
In terms of code, this looks like:
let mevts =
    modifyingevents
    |> Observable.filter exec_action
    |> Observable.add (fun action -> self.OutlineEdited <- true)
where the function exec_action results in some side effect, such as editing a treeview. If this succeeds, the property OutlineEdited is set to true.
I was hoping to follow this with something like
mevts |> Observable.scan (fun log action -> action::log) []
but it turns out that Observable.filter is executed once for each subscribed observer, meaning that the side effect will be repeated.
Can you please suggest another way to achieve the same result without having the exec_action executed twice? I am hoping to avoid having to use a mutable variable if possible.
This example illustrates nicely the difference between the IObservable<'T> type (used in this example via the Observable module) and the F# type IEvent<'T> (with the corresponding functions in the Event module).
When you use observables, every subscriber creates a new chain of operations (so side-effects are executed once for every subscriber). If you use events then the state is shared and side-effects are executed just once (regardless of the number of subscribers). On the other hand, the events do not get garbage collected when you remove all subscribers from an event.
So, if you do not need the events to be removed when all subscribers are removed, you should get the behaviour you want just by using Event instead of Observable:
modifyingevents
|> Event.filter exec_action
|> Event.scan (fun log action -> action::log) []
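To see the difference concretely, here is a small self-contained sketch using a plain Event<int> source and a side-effecting filter (all names and values are illustrative):

// Observable pipeline: each subscription builds its own chain,
// so the (side-effecting) filter runs once per subscriber.
let src1 = Event<int>()
let obs =
    src1.Publish
    |> Observable.filter (fun x -> printfn "observable filter saw %d" x; x > 0)
obs |> Observable.add (fun x -> printfn "obs subscriber A got %d" x)
obs |> Observable.add (fun x -> printfn "obs subscriber B got %d" x)
src1.Trigger 1   // "observable filter saw 1" prints twice

// Event pipeline: the chain is shared among subscribers,
// so the filter runs once regardless of their number.
let src2 = Event<int>()
let evt =
    src2.Publish
    |> Event.filter (fun x -> printfn "event filter saw %d" x; x > 0)
evt |> Event.add (fun x -> printfn "evt subscriber A got %d" x)
evt |> Event.add (fun x -> printfn "evt subscriber B got %d" x)
src2.Trigger 1   // "event filter saw 1" prints once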
I realize the following function calls are all the same, but I do not understand why.
val list = List(List(1), List(2, 3), List(4, 5, 6))
list.map(_.length) // res0 = List(1,2,3) result of 1st call
list map(_.length) // res1 = List(1,2,3) result of 2nd call
list map (_.length) // res2 = List(1,2,3) result of 3rd call
I can understand the 1st call, which is just a regular method call, because map is a member function of class List.
But I cannot understand the 2nd and 3rd calls. For example, in the 3rd call, how can the Scala compiler know that (_.length) is the parameter of map? How can the compiler know that map is a member function of list?
The only difference between variants 2 and 3 is the blank in front of the parenthesis. This can only be a delimiter: list a and lista are of course different, but an opening paren starts a new token, and you can put a blank (or two, or three) in front of it - or none. I don't see how you could expect a difference here.
In Java, there is no difference between
System.out.println ("foo");
// and
System.out.println("foo");
either.
This is the operator notation. The reason it works is the same reason why 2 + 2 works.
The space is used to distinguish between words -- listmap(_.length) would make the compiler look for listmap. But if you write list++list, it will work too, as will list ++ list.
So, once you are using operator notation, the space is necessary to separate the words, but otherwise it may be present or not.
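A minimal, self-contained illustration of the point (any method taking one argument can be written in this infix/operator notation):

object InfixDemo extends App {
  val list = List(List(1), List(2, 3), List(4, 5, 6))

  // all of these parse as list.map(_.length)
  println(list.map(_.length))  // List(1, 2, 3)
  println(list map (_.length)) // List(1, 2, 3)

  // the same mechanism is why 2 + 2 works: + is a method on Int
  println((2).+(2))            // 4
  println(list ++ list)        // concatenation, via the ++ method
}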