In ActionScript, if multiple instances are listening for the same event, which executes first?

I have various instances of various classes all listening for ENTER_FRAME. I need to know what order they will execute in.

By default, event listeners are executed based on the order in which they were added:
Event listeners with the same priority are executed in the order that they were added, so the earlier a listener is added, the sooner it will be executed.
But you can control this order with the priority parameter:
When you call addEventListener(), you can set the priority for that event listener by passing an integer value as the priority parameter. The default value is 0, but you can set it to negative or positive integer values. The higher the number, the sooner that event listener will be executed.
Ref: http://help.adobe.com/en_US/ActionScript/3.0_ProgrammingAS3/WS5b3ccc516d4fbf351e63e3d118a9b90204-7e54.html
Beware that with broadcast events, i.e. ones dispatched to multiple objects, the priority parameter does NOT affect the event order across multiple objects, only the order among listeners attached to the same object; across objects, execution order is still controlled by the order in which the listeners were added.
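To make those two rules concrete, here is a minimal sketch. This is not the Flash API, just a hypothetical mini-dispatcher (written in Java for illustration) that reproduces the documented ordering: higher priority fires first, and equal priorities fire in the order they were added.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class MiniDispatcher {
    // Each listener remembers its priority and its insertion order.
    private record Entry(int priority, long seq, Runnable listener) {}

    private final List<Entry> entries = new ArrayList<>();
    private long nextSeq = 0;

    void addEventListener(Runnable listener, int priority) {
        entries.add(new Entry(priority, nextSeq++, listener));
    }

    void dispatchEvent() {
        // Higher priority first; ties broken by insertion order.
        entries.stream()
            .sorted(Comparator.comparingInt(Entry::priority).reversed()
                              .thenComparingLong(Entry::seq))
            .forEach(e -> e.listener().run());
    }

    public static void main(String[] args) {
        MiniDispatcher d = new MiniDispatcher();
        d.addEventListener(() -> System.out.println("added first, priority 0"), 0);
        d.addEventListener(() -> System.out.println("added second, priority 0"), 0);
        d.addEventListener(() -> System.out.println("added last, priority 10"), 10);
        d.dispatchEvent(); // priority 10 fires first, then the two 0s in add order
    }
}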

Related

Run multiple instances of Observable sequentially in a dynamic way | RxSwift

I have a function that writes/modifies the data and returns an Observable to track the progress of the task.
func modifyChunk(data: SomeData) -> Observable<Double>
Now, in my use case, the user may perform any number (N) of operations at runtime, possibly modifying the same data over and over. I'm looking for a way to hold/postpone each new Observable from modifyChunk(data:) until the previously running Observable has finished, and once that happens, immediately kick-start the new Observable, and so on.
I figured out that I may need a queue that takes Observable tasks at runtime and executes them in FIFO order. But I'm not able to figure out how to achieve that using RxSwift.
This sounds like a job for concat, which will subscribe to an observable and wait until it completes before subscribing to the next one. You haven't provided enough information, but if the number of observables is static, then you can use the Observable.concat operator. If the number of observables is dynamic, then concatMap is the solution you need.
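For the dynamic case, a minimal sketch of such a queue, written with RxJava purely for illustration (RxSwift has the same PublishSubject and concatMap operators; the WriteQueue and enqueue names here are hypothetical):

import io.reactivex.rxjava3.core.Observable;
import io.reactivex.rxjava3.subjects.PublishSubject;

// Hypothetical wrapper: a FIFO queue of write tasks.
class WriteQueue {
    // Task Observables enqueued at runtime, each emitting progress values.
    private final PublishSubject<Observable<Double>> tasks = PublishSubject.create();

    WriteQueue() {
        // concatMap subscribes to each inner Observable only after the
        // previous one has completed, giving FIFO execution.
        tasks.concatMap(task -> task)
             .subscribe(progress -> System.out.println("progress: " + progress));
    }

    // Call whenever the user triggers a modification,
    // e.g. queue.enqueue(modifyChunk(data)).
    void enqueue(Observable<Double> task) {
        tasks.onNext(task);
    }
}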

Firebase: Does it cost to attach and detach a SnapshotListener many times when there are no changes within 30 minutes?

I want to be able to stop the snapshot listener when I don't need to listen for changes, so that later I can get all the updates at once. Is it possible to temporarily halt the snapshot listener? I think you have to remove it explicitly and reinitialize everything according to the doc. So I can explicitly call remove and reinitialize the snapshot listener to get changes, but is there a price penalty for doing this? I know reading cached values within 30 minutes doesn't cost anything, so does this mean it wouldn't cost anything to attach and detach the snapshot listener?
If a document has no changes, and I attach the snapshot listener over and over, say 50 times in 30 minutes, would that cost me anything?
Firebase: Does it cost to attach and detach a SnapshotListener many times when there are no changes within 30 minutes?
No, the listener will not fire if there are no changes in your database. However, if you detach the listener and you attach it again, there is a cost of one document read, even if the query returns no results. According to Frank van Puffelen's comment, here's the reason why it works that way:
The server needs to perform a query on your behalf, so if there are no individual documents being read on the server, it charges you for the query (actually reading the index) to prevent constant detach/reattaches (which use up server resources).
And it really makes sense.
I think you have to remove it explicitly and reinitialize everything according to the doc.
Yes, that's correct. When the listener is no longer needed, simply detach it.
So I can explicitly call remove and reinitialize the snapshot listener to get changes, but is there a price penalty for doing this?
You'll always be billed for the number of documents that you read. For instance, if your query returns two documents, you'll be billed for two document reads. If you get no results, you'll only be billed for a single read operation.
I know reading cached values within 30 minutes doesn't cost anything, so does this mean it wouldn't cost anything to attach and detach the snapshot listener?
If a document has no changes, and I attach a snapshot listener over and over, say 50 times in 30 minutes, would that cost me anything?
If you detach the listener and attach a new one 50 times, and there are no changes in the database, it will cost you 50 document reads.
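As a minimal sketch of the attach/detach cycle, assuming the Android Java SDK (the collection, document, and class names are hypothetical), keep the ListenerRegistration returned by addSnapshotListener so the listener can be detached later:

import com.google.firebase.firestore.FirebaseFirestore;
import com.google.firebase.firestore.ListenerRegistration;

class ScoreWatcher {
    private ListenerRegistration registration;

    void start() {
        FirebaseFirestore db = FirebaseFirestore.getInstance();
        // Attach the listener and keep the registration handle.
        registration = db.collection("scores").document("user42")
                .addSnapshotListener((snapshot, error) -> {
                    if (error != null || snapshot == null) {
                        return;
                    }
                    System.out.println("Current data: " + snapshot.getData());
                });
    }

    void stop() {
        // Detach when updates are no longer needed. Each later re-attach
        // costs at least one document read, even if nothing has changed.
        if (registration != null) {
            registration.remove();
            registration = null;
        }
    }
}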

Handling the Device Delete event in a rule chain to reduce the total device count at the customer level

I am using the total count of devices as a "server attribute" at the customer entity level, which is in turn used by dashboard widgets such as doughnut charts. Hence, to get the total count information, I have put a rule chain in place that handles the Device Addition / Assignment event to increment the "totalDeviceCount" attribute at the customer level. But when a device is deleted / unassigned, I am unable to get access to the customer entity using the "Enrichment" node, as the relation has already been removed by the time this event triggers. This leaves me with the challenge of maintaining the right count information for the widgets.
Has anyone come across a similar requirement? How should this scenario be handled?
What you could do is count your devices periodically, instead of tracking each individual addition/removal.
You can achieve this with the Aggregate Latest node, where you indicate a period (say, every minute), the entities or devices you want to count, and the variable name to save the result under.
This node outputs a POST_TELEMETRY_REQUEST. If you are OK with that, just route that node to Save Timeseries. If you want an attribute instead, route it through a Script Transformation node and change the msgType to POST_ATTRIBUTE_REQUEST.

How to dedupe across overlapping sliding windows in Apache Beam / Dataflow

I have the following requirement:
read events from a pub sub topic
take a window of duration 30 mins and period 1 minute
in that window, if 3 events for a given id all match some predicate, then I need to raise an event in a different pub sub topic
The event should be raised as soon as the 3rd event comes in for the grouping id, as this is for detecting fraudulent behaviour. In one pane there may be many ids that have 3 matching events, so I may need to emit multiple events per pane.
I am able to write a function which consumes a PCollection, does the necessary grouping, logic and filtering, and emits events according to my business logic.
Questions:
The output PCollection contains duplicates due to the overlapping sliding windows. I understand this is the expected behaviour of sliding windows, but how can I avoid it whilst staying in the same Dataflow pipeline? I realise I could dedupe in an external system, but that just adds complexity to my system.
I also need to write some sort of trigger that fires each and every time my condition is reached in a window.
Is Dataflow suitable for this type of real-time detection scenario?
Many thanks
You can rewindow the output PCollection into the global window (using the regular Window.into()) and dedupe using a GroupByKey.
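A minimal sketch of that suggestion, assuming the alerts are keyed by the grouping id (the KV<String, String> element type and all names here are hypothetical): rewindow into the global window, then keep only the first alert per id. The bookkeeping below uses a stateful DoFn, which is one way to realise the GroupByKey-style dedupe without buffering or delaying alerts:

import org.apache.beam.sdk.coders.BooleanCoder;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.windowing.GlobalWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

class DedupAlerts {
    // Emits each keyed alert once, dropping the duplicates caused by
    // the overlapping sliding windows upstream.
    static class TakeFirstPerKey extends DoFn<KV<String, String>, KV<String, String>> {
        // One boolean per id; state is scoped per key and per window.
        @StateId("seen")
        private final StateSpec<ValueState<Boolean>> seenSpec =
                StateSpecs.value(BooleanCoder.of());

        @ProcessElement
        public void process(ProcessContext c,
                            @StateId("seen") ValueState<Boolean> seen) {
            if (seen.read() == null) { // first alert for this id
                seen.write(true);
                c.output(c.element()); // emit immediately, no buffering
            } // later duplicates for the same id are dropped
        }
    }

    static PCollection<KV<String, String>> dedupe(PCollection<KV<String, String>> alerts) {
        return alerts
            // Collapse the sliding windows into the global window so the
            // "seen" state above is shared across all of them.
            .apply(Window.<KV<String, String>>into(new GlobalWindows()))
            .apply(ParDo.of(new TakeFirstPerKey()));
    }
}

In a long-running pipeline you would typically also set a cleanup timer so the per-id state does not grow without bound.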
It sounds like you're already returning the events of interest as a PCollection. In order to "do something for each event", all you need is a ParDo.of(whatever action you want) applied to this collection. Triggers do something else: they control what happens when a new value V arrives for a particular key K in a GroupByKey<K, V>: whether to drop the value, or buffer it, or to pass the buffered KV<K, Iterable<V>> for downstream processing.
Yes :)

Queue management in Rails

I am planning to have something like this for a website that is on Ruby on Rails. A user comes and enters a bunch of names in a text field, and a queue gets created from all the names. From there the website keeps asking for more details about each name in the queue until the queue is finished.
Is there any queue management gem available in Ruby, or do I have to just create an array and keep incrementing an index in a session variable to emulate queue behaviour?
The easiest thing is probably to use the push and shift methods of Ruby arrays.
push sticks things on the end of the array; shift removes and returns the first element.
As you receive data about each of the names, you could construct a second list of the names (a done array). Or, if you're not concerned about that and just want to save and move on with them, just store the array in the session (assuming it's not going to be massive) and move on.
If your array is massive, consider storing the names to be added in temporary rows in a table, then removing them when necessary. If this is the route you take, be sure to have a regularly running cleanup routine that removes entries that were never filled out.
References
http://apidock.com/ruby/Array/push
http://apidock.com/ruby/Array/shift
Try modeling a Queue with ActiveRecord
Queue.has_many :tasks
attributes: name, id, timestamps
Task.belongs_to :queue
attributes: name, id, position, timestamps, completed
Use timestamps to set initial position. Once a task is completed, set position to [highest position]+1 (assuming the lower the position number, the higher up on the queue). Completed tasks will sink to the bottom of the queue and a new task will rise to the top.
Hope this helps!
