Is there any clear difference between event and state - statechart

This question is a bit philosophical, much like asking whether data and code are the same thing.
Is there any clear difference between event (signal) and state?
Example:
A car is passing along the road. When the car honks its horn, a man (man_A) crossing the road stops suddenly. The horn is the signal; "man_A stops suddenly" is the state of man_A.
Another man (man_B) was crossing the road at the same time and place.
Let's say man_B is deaf, so he can't hear the horn. But noticing that "man_A stopped suddenly" would be a signal for him, and he would stop suddenly as if he had heard the horn.
So I would say: "A state could be a signal for another process. A signal puts a process into another state. That's why they are exactly the same thing."
Am I wrong, is there a clear difference between them?
A signal is a state change; we may define any signal in terms of two states.

They are very, very different:
The same event may cause transitions to different states, depending on the current state:
In SCXML you can have <parallel> states defining orthogonal regions. In this case a single event may trigger multiple simultaneous transitions to different states.
Further, due to the possible presence of cond="…" attributes, a transition to another state may or may not occur when triggered by an event. So now we have an event that might not change state.
Further, it is possible to have a transition with no event attribute, causing a state change as soon as some data model value or script result becomes true. So now we have a state change that can take place with no triggering event.
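To make the first points concrete outside of SCXML, here is a minimal Java sketch (all names are illustrative, not taken from any library): the same event leads to different target states depending on the current state, and a guard condition can make an event produce no transition at all.

```java
enum State { IDLE, RUNNING, PAUSED }
enum Event { START_STOP, TICK }

final class Machine {
    State state = State.IDLE;
    int ticks = 0;

    void handle(Event event) {
        switch (event) {
            case START_STOP:
                // Same event, different target state depending on the current state.
                state = (state == State.RUNNING) ? State.PAUSED : State.RUNNING;
                break;
            case TICK:
                ticks++;
                // Guarded transition (like cond="..."): the event may leave the state unchanged.
                if (state == State.RUNNING && ticks >= 10) {
                    state = State.IDLE;
                }
                break;
        }
    }
}
```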

Well, a state is not a signal, because a signal occurs at a specific point in time.
A state change is the result of a signal and can itself be seen as a signal, but it is not the state itself. The state continues to exist after the signal is long gone.
How would the initial state be a signal, for example?

Related

Combine session and tumbling window: time windows that are aligned to the first event for each key

I read about Flink's window assigners here: https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html#window-assigners , but I can't find any solution to my problem.
As part of my project I need a windowing scheme where the timer starts with the first element for a key, and the window is closed and ready for processing X minutes later. For example:
The first keyA arrives at (hh:mm:ss) 00:00:02; I want all keyA elements to be windowed until 00:01:02, and then the 1-minute timer should start again only when the next keyA arrives as input.
Is it possible to do something like that in Flink? Is there a workaround?
I hope I made it clear enough.
Implementing keyed windows that are aligned with the first event, rather than with the epoch, is quite difficult, in general, which I believe is why this isn't supported by Flink's window API. The problem is that with an out-of-order stream using event time processing, as earlier events arrive you may need to revise your notion of when the window began, and when it should end. For example, if the first keyA arrives at 00:00:02, but then some time later an event with keyA arrives with a timestamp of 00:00:01, now suddenly the window should end at 00:01:01, rather than 00:01:02. And if the out-of-orderness is large compared to the window length, handling this becomes quite complex -- imagine, for example, that the event from 00:00:01 arrives 2 minutes after the event from 00:00:02.
Rather than trying to implement this with the window API, I would use a KeyedProcessFunction. If you only need to support processing time windows, then these concerns about out-of-orderness do not apply, and the solution can be fairly simple. It suffices to keep one object in keyed state, which might be a list holding all of the events in the window, or a counter or other aggregator, depending on what you're trying to accomplish.
When an event arrives, if the state (for this key) is null, then there is no open window for this key. Initialize the state (i.e., create a new, empty list, or set the counter to zero), and create a Timer to fire at the appropriate time. Then regardless of whether the state had been null, add the incoming event to the state (i.e., append it to the list, or increment the counter).
When the timer fires, emit the window's result and reset the state to null.
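A minimal sketch of that processing-time approach, using a counter as the per-key state; the class name, element type, window length, and use of a counter are illustrative assumptions, so substitute whatever aggregation you actually need:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Emits (key, count) one minute after the first element seen for that key.
public class FirstElementAlignedWindow
        extends KeyedProcessFunction<String, String, Tuple2<String, Long>> {

    private static final long WINDOW_MS = 60_000L; // the "X minutes" from the question

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Long.class));
    }

    @Override
    public void processElement(String element, Context ctx,
                               Collector<Tuple2<String, Long>> out) throws Exception {
        Long current = count.value();
        if (current == null) {
            // First element for this key: open the window and start the timer.
            current = 0L;
            ctx.timerService().registerProcessingTimeTimer(
                    ctx.timerService().currentProcessingTime() + WINDOW_MS);
        }
        count.update(current + 1);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx,
                        Collector<Tuple2<String, Long>> out) throws Exception {
        // Window is over: emit the result and reset, so the next element reopens it.
        out.collect(Tuple2.of(ctx.getCurrentKey(), count.value()));
        count.clear();
    }
}
```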
If, on the other hand, you want to do this with event time windows, first sort the stream and then use the same approach. Note that you won't be able to handle late events, so plan your watermarking accordingly (reducing the likelihood of late events to a manageable level), or go for a more complex implementation.

Gradually transitioning FSM

It is given that a particular object can have two states at any point in time and that the object transitions from one state to another gradually.
I am modeling the state-graph to capture all the possible states the object can have and their corresponding transitions.
Now, when modeling the execution of such a graph, a strict FSM requires that an object be in one and only one state at a time. With a dual-state object as described above, I want to understand whether there exists something similar to an FSM (in academia) that addresses this kind of use case.

Serializing Flux in Reactor

Is it possible to serialize a Reactor Flux? For example, my Flux is in some state and is currently processing some event, and suddenly the service is terminated. The current state of the Flux would be saved to a database or to a file, and then on restart of the application I would just take all the Fluxes from that file/table and subscribe to them to resume processing from the last state. Is this possible in Reactor?
No, this is not possible. A Flux is not serializable; it is closer to a chain of functions. It doesn't necessarily have state[1] but describes what to do with the input provided by an initial generating Flux...
So in order to "restart" a Flux, you'd have to actually create a new one that gets fed the remaining input the original one would have received upon service termination.
Thus it would be more up to the source of your data to save the last emitted state and allow restarting a new Flux sequence from there.
[1] Although, depending on what operators you chained in, you could have it impact some external state. In that case things will get more complicated, as you'll have to also persist that state.
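As an illustration of that approach, here is a minimal sketch; EventStore and CheckpointStore are hypothetical application-side abstractions, not Reactor APIs. The Flux is rebuilt from a persisted position rather than deserialized.

```java
import reactor.core.publisher.Flux;

// Hypothetical application-side abstractions; Reactor provides nothing like this.
interface CheckpointStore {
    long lastProcessedId();
    void save(long id);
}

interface EventStore {
    Flux<Event> readAfter(long id);
    record Event(long id, String payload) {}
}

class ResumableConsumer {
    void start(EventStore events, CheckpointStore checkpoints) {
        // Build a brand-new Flux from the persisted checkpoint; only the source
        // position is persisted, never the Flux itself.
        Flux.defer(() -> events.readAfter(checkpoints.lastProcessedId()))
            .doOnNext(event -> {
                handle(event);                 // application-specific processing
                checkpoints.save(event.id());  // record progress after each event
            })
            .subscribe();
    }

    private void handle(EventStore.Event event) {
        System.out.println(event.payload());
    }
}
```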

Remove duplicates across window triggers/firings

Let's say I have an unbounded PCollection of sentences keyed by userid, and I want a constantly updated value for whether the user is annoying. We can calculate whether a user is annoying by passing all of the sentences they've ever said into the function isAnnoying(). Forever.
I set the window to global with a trigger afterElement(1), accumulatingFiredPanes(), do GroupByKey, then have a ParDo that emits userid,isAnnoying
That works forever, keeps accumulating the state for each user, etc. Except it turns out that the vast majority of the time a new sentence does not change whether a user isAnnoying, so most of the times the window fires and emits a userid,isAnnoying tuple it's a redundant update and the I/O was unnecessary. How do I catch these duplicate updates and drop them while still getting an update every time a sentence comes in that does change the isAnnoying value?
Today there is no way to directly express "output only when the combined result has changed".
One approach that you may be able to apply to reduce data volume, depending on your pipeline: Use .discardingFiredPanes() and then follow the GroupByKey with an immediate filter that drops any zero values, where "zero" means the identity element of your CombineFn. I'm using the fact that associativity requirements of Combine mean you must be able to independently calculate the incremental "annoying-ness" of a sentence without reference to the history.
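Sketched with the Beam Java SDK, a pipeline of that shape might look roughly like the following; annoyingnessDelta and the use of a plain per-key sum are stand-ins for whatever incremental aggregation your real CombineFn performs, and 0 plays the role of the identity ("zero") value being filtered out:

```java
import org.apache.beam.sdk.transforms.Filter;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.transforms.Sum;
import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.GlobalWindows;
import org.apache.beam.sdk.transforms.windowing.Repeatedly;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;
import org.joda.time.Duration;

public class AnnoyingUpdates {

    // Hypothetical: how much a single sentence adds to a user's annoying-ness.
    static long annoyingnessDelta(String sentence) {
        return sentence.contains("!!!") ? 1L : 0L;
    }

    // sentences: (userid, sentence) pairs from the unbounded source.
    static PCollection<KV<String, Long>> updates(PCollection<KV<String, String>> sentences) {
        return sentences
            .apply(Window.<KV<String, String>>into(new GlobalWindows())
                .triggering(Repeatedly.forever(AfterPane.elementCountAtLeast(1)))
                .discardingFiredPanes()
                .withAllowedLateness(Duration.ZERO))
            // Score each sentence independently (the associativity requirement above).
            .apply(MapElements.into(
                    TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptors.longs()))
                .via((KV<String, String> kv) ->
                    KV.of(kv.getKey(), annoyingnessDelta(kv.getValue()))))
            // Combine per key; with discarding panes each firing holds only the delta.
            .apply(Sum.longsPerKey())
            // Drop panes equal to the combiner's identity ("zero") value.
            .apply(Filter.by((KV<String, Long> kv) -> kv.getValue() != 0L));
    }
}
```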
When BEAM-23 (cross-bundle mutable per-key-and-window state for ParDo) is implemented, you will be able to manually maintain the state and implement this sort of "only send output when the result changes" logic yourself.
However, I think this scenario likely deserves explicit consideration in the model. It blends the concepts embodied today by triggers and the accumulation mode.

Game Center turn timeouts

This might seem like a pretty obvious question, but I've been combing through Apple's documentation and can't seem to find a straight answer.
What actually happens when a turn times out - that is to say, the time interval passed as the turnTimeout parameter to endTurnWithNextParticipants:turnTimeout:matchData:completionHandler: has passed? Logic dictates that either there would be a callback similar to handleTurnEventForMatch:didBecomeActive: to explicitly handle no move being made, or the next player in the nextParticipants array would receive a turn notification.
Unfortunately, although Apple are quite happy to describe how turnTimeout limits how long a player has to act (and to tell you that it's up to your game to decide how to handle this), there's no information about what methods are called or what data is supplied, and I'm seeing some very odd behavior - namely the player who passed is getting a handleTurnEvent notification with the same match data as the turn they just timed out on. Anyone have any advice?
Apple's documentation on what it does:
If the next player to act does not take their turn in the specified interval, the next player in the array receives a notification to act. This process continues until a player takes a turn or the last player in the list is notified.
In the case of a 2-player match, at least during testing, nothing actually happens. If P1 plays a turn, the list of next participants looks like [ P2, P1 ]. If P2 times out, P1 should get notified since it is the last one in the list; however, P1 just went, so Apple must consider "end of the list" to be when you get back to the person who last played, not when you actually run out of people. This prevents people from playing two turns in a row, although it is not what I would expect to happen based on the documentation. I have not tested this in a 3+ player game.

Resources