Android LiveData order of observation

I have a LiveData and, say, n Observers of it. One of the Observers, call it o, updates global state, upon which the other n-1 Observers depend.
Is there any way to guarantee that o is notified first? Is it enough that o is added first?
I've looked through the relevant docs and see nothing mentioned explicitly.

Theoretically, there is no documented guarantee that I'm aware of.
Practically, as long as all of your observers are added with the same LifecycleOwner, they should be invoked in registration order, because LiveData internally iterates forward over its observers (see the source code) when dispatching updates.
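For illustration, here is a minimal Kotlin sketch that leans on this registration-order behavior (the Int payload and the globalState name are purely illustrative, and Kotlin's SAM conversion for Observer is assumed): the state-updating observer o is registered before any observer that reads the state.

import androidx.lifecycle.LifecycleOwner
import androidx.lifecycle.LiveData

fun wireObservers(liveData: LiveData<Int>, owner: LifecycleOwner) {
    var globalState = 0

    // Observer `o`: registered first, so (with a single LifecycleOwner)
    // it is notified first and can update the shared state.
    liveData.observe(owner) { value ->
        globalState = value * 2
    }

    // Registered afterwards, so it reads the state `o` has already written.
    liveData.observe(owner) { value ->
        println("value=$value, globalState=$globalState")
    }
}

Since the ordering is an implementation detail rather than a documented contract, the more robust design is a single observer that updates the state and then explicitly triggers the dependent work.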

Related

Can I assume the running order of LeafSystem's CalcOutput functions?

I am working on a LeafSystem like this:
from pydrake.systems.framework import BasicVector, LeafSystem

class ExampleLeafSystem(LeafSystem):
    def __init__(self, plant):
        super().__init__()
        self._plant = plant
        self._plant_context = plant.CreateDefaultContext()
        self.DeclareVectorInputPort("q_v", BasicVector(6))
        self.DeclareVectorOutputPort("tau", BasicVector(3), self.TauCalcOutput)
        self.DeclareVectorOutputPort("xe", BasicVector(3), self.xeCalcOutput)

    def TauCalcOutput(self, context, output):
        q_v = self.get_input_port(0).Eval(context)
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
        # Do some calculation with self._plant and self._plant_context
        # to get the output.
        output.SetFromVector(tau)

    def xeCalcOutput(self, context, output):
        q_v = self.get_input_port(0).Eval(context)
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
        # Do some calculation with self._plant and self._plant_context
        # to get the output.
        output.SetFromVector(xe)
In the two methods TauCalcOutput and xeCalcOutput here, I first need to update the MultibodyPlant's state information and then do the calculation to compute the output. However, since I do not know the order in which the two CalcOutput methods are called, for both of them to use the newest state information I have to write
q_v = self.get_input_port(0).Eval(context)
self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
in both methods, which seems a bit unnecessary. If I could assume the order in which the CalcOutput methods run, for example that TauCalcOutput always runs first, then I could keep
q_v = self.get_input_port(0).Eval(context)
self._plant.SetPositionsAndVelocities(self._plant_context, q_v)
in the TauCalcOutput method alone, without worrying that the xeCalcOutput method will use MultibodyPlant state information that is one step stale.
So my question is: is there a specific order in which the CalcOutput methods are called?
No. The contract is that output ports can be called in any order, at any time, and are only called when they are evaluated (e.g. by a downstream system). The order in which they are called depends on the other systems in the Diagram; they might not all get called during a single simulation step (e.g. if one is consumed by a discrete-time system with time step 0.1 and another by one with time step 0.2), and they may not be called at all if they are not connected. Users can even call get_output_port().Eval(context) manually.
For a general approach to avoiding duplicate computation in output ports, you should store the result of the shared computation in the Context, either as state or as a "cache entry". See DeclareCacheEntry for more details.
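A rough sketch of the cache-entry route for the system in the question (hedged: the ValueProducer and AbstractValue usage below reflects one reading of the pydrake bindings, so check the DeclareCacheEntry documentation for the exact signatures; the elided tau/xe math is the question's placeholder):

from pydrake.common.value import AbstractValue
from pydrake.systems.framework import BasicVector, LeafSystem, ValueProducer

class ExampleLeafSystem(LeafSystem):
    def __init__(self, plant):
        super().__init__()
        self._plant = plant
        self._plant_context = plant.CreateDefaultContext()
        self.DeclareVectorInputPort("q_v", BasicVector(6))
        # The shared work lives in one cache entry. Both output ports
        # Eval() it, so the kinematics update runs at most once per
        # change of the relevant context (the default prerequisites
        # conservatively invalidate it whenever any source changes).
        self._kinematics = self.DeclareCacheEntry(
            description="update plant context from q_v",
            value_producer=ValueProducer(
                allocate=lambda: AbstractValue.Make(0.0),
                calc=self._update_kinematics))
        self.DeclareVectorOutputPort("tau", BasicVector(3), self.TauCalcOutput)
        self.DeclareVectorOutputPort("xe", BasicVector(3), self.xeCalcOutput)

    def _update_kinematics(self, context, _value):
        # Side effect only: refresh the owned plant context. Storing the
        # computed quantities in the cache value itself would be cleaner.
        q_v = self.get_input_port(0).Eval(context)
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)

    def TauCalcOutput(self, context, output):
        self._kinematics.Eval(context)  # ensure kinematics are up to date
        # ... compute tau from self._plant / self._plant_context ...
        output.SetFromVector(tau)

    def xeCalcOutput(self, context, output):
        self._kinematics.Eval(context)  # shares the same cached update
        # ... compute xe from self._plant / self._plant_context ...
        output.SetFromVector(xe)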
For this workflow specifically, perhaps the simplest solution is to check whether the positions and velocities are already set to the same value, just to avoid repeating the kinematics evaluation, for instance as done here:
https://github.com/RobotLocomotion/drake/blob/6e6e37ffa677362245773f13c0628f0042b47414/multibody/inverse_kinematics/kinematic_constraint_utilities.cc#L47-L54
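In code, that check could be a small helper shared by both calc methods (a sketch; MultibodyPlant.GetPositionsAndVelocities returns the values currently stored in the context):

import numpy as np

def _update_plant_context_if_needed(self, context):
    q_v = self.get_input_port(0).Eval(context)
    # Skip the update when the plant context already holds exactly these
    # positions and velocities, so the second CalcOutput of a step does
    # no redundant kinematics work.
    if not np.array_equal(
            q_v, self._plant.GetPositionsAndVelocities(self._plant_context)):
        self._plant.SetPositionsAndVelocities(self._plant_context, q_v)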

The difference between Mono.just(1) and Flux.just(1)

I wonder, is there any difference in behavior/guarantees between the MonoJust and FluxJust created with exactly one argument?
From the source code of Reactor Core 3.3.7 I can see that the former uses Operators#ScalarSubscription as its subscription object, while the latter uses its private WeakScalarSubscription.
The only difference between these two is that ScalarSubscription has a volatile int once field (a counter) that is checked on each method call, ensuring that onComplete() is called exactly once. WeakScalarSubscription uses a non-volatile boolean terminado flag for the same purpose, but without the exactly-once guarantee for the onComplete() call.
Using volatile in Java has its price, which is paid e.g. when one creates a lot of these objects (with Mono.just(1) or Flux.just(1)) in highly concurrent client code. (As we do in our project, inside a flatMap that runs in parallel on a dedicated thread pool.)
There's no class javadoc for MonoJust, so I wonder if my assumptions are correct: that the only difference is that FluxJust may send the completion signal more than once in some circumstances — and that's it? Or are there other subtle differences?
I think the biggest difference is in how you use Flux and Mono. A Mono emits at most one item or an error and then completes, whereas a Flux can emit many elements before its error or completion signal.
The just() factories wrap an already-available element (or several, via the vararg variant on Flux) and emit it immediately. I can easily imagine cases where a Flux holding only one element is returned.
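From a subscriber's point of view the two behave identically for a single value, as this quick sketch shows (Reactor 3.x; the printed labels are illustrative):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class JustDemo {
    public static void main(String[] args) {
        // Each emits its single value and then completes exactly once here.
        Mono.just(1).subscribe(
                v -> System.out.println("mono: " + v),
                Throwable::printStackTrace,
                () -> System.out.println("mono completed"));

        Flux.just(1).subscribe(
                v -> System.out.println("flux: " + v),
                Throwable::printStackTrace,
                () -> System.out.println("flux completed"));
    }
}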

Swift "timer" variable

I need a thread-safe Int variable in Swift that can be incremented or decremented from different threads. Moreover, this variable must be decremented by 1 every second and must notify (via a block or selector) when it reaches zero.
What is the best way to implement this?
Regarding that, the Apple Swift blog suggests you needn't worry:
One of the primary reasons to choose value types over reference types is the ability to more easily reason about your code. If you always get a unique, copied instance, you can trust that no other part of your app is changing the data under the covers. This is especially helpful in multi-threaded environments where a different thread could alter your data out from under you. This can create nasty bugs that are extremely hard to debug.
https://developer.apple.com/swift/blog/?id=10
Int is a value type. So, you can use it in several threads without worries.
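A minimal sketch of the copy semantics the quote describes (illustrative only; in a command-line program you would need to keep the process alive long enough for the closure to run):

import Dispatch

var counter = 10
let snapshot = counter          // Int is a value type: an independent copy
DispatchQueue.global().async {
    print(snapshot)             // always prints 10, whatever happens below
}
counter = 0                     // cannot affect the copy being read

The safety comes from each thread working on its own copy; the original variable itself is never touched by the reader.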

Does the compiler optimize (close) identical FieldByName calls?

In some code I'm maintaining, I see two different methods used in TClientDataSet.OnCalcFields event handlers:
with DataSet do
begin
  // 1. Call FieldByName twice
  if AMinDate > FieldByName(SPlanAllocatieFromDate).AsDateTime then
    AMinDate := FieldByName(SPlanAllocatieFromDate).AsDateTime;
  // 2. Put the retrieved FieldByName value in a temp var
  lEmpID := FieldByName(SPlanAllocatieEmpID).AsInteger;
  if lEmpID <> 0 then
    lTSAllocatedEmpIDs.Add(IntToStr(lEmpID));
end;
Will the compiler (Delphi XE2, Win32 app) optimize method 1 into the equivalent of method 2, with a hidden temp var? The two FieldByName calls are quite close; you could even say nested.
If not, I should rewrite method 1, because OnCalcFields executes often.
BTW, I know about Fields[] versus FieldByName(), and about using a temp TField var when running an EOF loop; those are not the issue here.
No version of the Delphi compiler does anything like this.
Such optimizations would require the compiler to be able to prove that the two calls to FieldByName would always give the same result, and there is currently no provision for flagging a method as being deterministic.
Note that it is quite possible in theory (if unlikely in reality) for the two calls not to give the same result, e.g. if a different thread deletes a field from the collection between the first and second call. Generally, the compiler neither knows nor cares, at the call site, what a particular method call actually does.
Does the compiler optimize (close) identical FieldByName calls?
No it does not.
The compiler does not look inside function calls to see what is within. It therefore has no way to prove that the value returned by successive calls to a function would be the same. Likewise it has no way to prove that the function has no side-effects. These are the two prerequisites for the optimisation under consideration.
You will need to perform the optimisation yourself, by explicitly adding and using a local variable to store the value returned by a single call to FieldByName.
Beyond the consideration of performance, I would argue that the use of a local variable to hold the field is semantically much better. This makes it clear to the reader that all actions are performed on the same field. That reason alone would be enough to persuade me to make the change you describe. Don't repeat yourself.
And while we are in code review mode, you might care to reconsider the use of with.
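For instance, a sketch of the handler body rewritten with explicit locals and without with (the identifiers follow the question):

var
  lFromDate: TDateTime;
  lEmpID: Integer;
begin
  // One FieldByName lookup per field, each stored in a local.
  lFromDate := DataSet.FieldByName(SPlanAllocatieFromDate).AsDateTime;
  if AMinDate > lFromDate then
    AMinDate := lFromDate;

  lEmpID := DataSet.FieldByName(SPlanAllocatieEmpID).AsInteger;
  if lEmpID <> 0 then
    lTSAllocatedEmpIDs.Add(IntToStr(lEmpID));
end;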

What is the difference between "raw" dirty operations, and dirty operations within mnesia:async_transaction

What is the difference between a series of mnesia:dirty_* commands executed within a function passed to mnesia:async_dirty/1 and those very same operations executed "raw"?
I.e., is there any difference between doing:
mnesia:dirty_write({table, Rec1}),
mnesia:dirty_write({table, Rec1}),
mnesia:dirty_write({table, Rec1})
and
F = fun() ->
        mnesia:dirty_write({table, Rec1}),
        mnesia:dirty_write({table, Rec1}),
        mnesia:dirty_write({table, Rec1})
    end,
mnesia:async_dirty(F)
Thanks
Let's first quote the User's Guide on the async_dirty context:
By passing the same "fun" as argument to the function mnesia:async_dirty(Fun [, Args]) it will be performed in dirty context. The function calls will be mapped to the corresponding dirty functions. This will still involve logging, replication and subscriptions but there will be no locking, local transaction storage or commit protocols involved. Checkpoint retainers will be updated but will be updated "dirty". Thus, they will be updated asynchronously. The functions will wait for the operation to be performed on one node but not the others. If the table resides locally no waiting will occur.
The two options you provided will be executed the same way. However, when you execute the dirty functions outside of a fun, as in option one, each one is a separate call into mnesia. With async_dirty, the three calls are bundled, and mnesia only waits until all three have completed on the local node before returning. The behaviour of the two may therefore differ in a multi-node mnesia cluster. Do some tests :)
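One such test, as a rough sketch: time both shapes with timer:tc/1 on your own cluster ({table, Rec} is the question's placeholder record):

time_both(Rec) ->
    %% Three separate calls into mnesia.
    {RawUs, _} = timer:tc(fun() ->
        mnesia:dirty_write({table, Rec}),
        mnesia:dirty_write({table, Rec}),
        mnesia:dirty_write({table, Rec})
    end),
    %% The same three writes bundled into one async_dirty activity.
    {BundledUs, _} = timer:tc(fun() ->
        mnesia:async_dirty(fun() ->
            mnesia:dirty_write({table, Rec}),
            mnesia:dirty_write({table, Rec}),
            mnesia:dirty_write({table, Rec})
        end)
    end),
    {raw_us, RawUs, bundled_us, BundledUs}.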
