Make loop execute at random intervals with SpriteKit - iOS

I'm trying to make a program that adds multiple nodes at random intervals of 0 to 3 seconds. Can you please explain why I need runAction or SKAction in the first place, and why I can't put the random function I made inside this block? Also, is there a way to convert the loop into a while loop so that I can break out of it more easily?
This is what I have right now:
let wait = Double(random(min: 0.0, max: 3.0))
runAction(SKAction.repeatActionForever(
    SKAction.sequence([
        SKAction.runBlock(addNode),
        SKAction.waitForDuration(wait)
    ])
))
I tried this, but it doesn't seem to work:
var wait = Double(random(min: 0.0, max: 3.0))
var x = true
while x == true
{
    addNode()
    SKAction.waitForDuration(wait)
    wait = Double(random(min: 0.0, max: 3.0))
}

waitForDuration takes a range parameter that varies the wait by plus or minus half of the range you specify. So if you specify a range of 2, the actual wait will be anywhere from 1 second less to 1 second more than the specified duration.
E.g. duration 5 seconds, range 2.
Results:
Waits 4
Waits 5.5
Waits 4.47
Waits 4.93
Waits 5.99
To answer the specific question:
SKAction.waitForDuration(1.5, withRange: 3.0)
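Putting that together with the code from the question, the whole loop collapses into a single repeating action. A sketch using the same Swift 2-era SpriteKit API as above (addNode is the poster's own function):
runAction(SKAction.repeatActionForever(
    SKAction.sequence([
        SKAction.runBlock(addNode),
        // waits 1.5s +/- 1.5s, i.e. a fresh random interval of 0 to 3 seconds each cycle
        SKAction.waitForDuration(1.5, withRange: 3.0)
    ])
))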

Related

Rx SerialDispatchQueueScheduler doesn't seem to make the code run in serial sequence

I have a problem with an Observable<Data?> function that is called so many times, and so fast, that one call doesn't complete before the next one starts. This makes sense and is fine in most cases. But in this case it becomes very problematic because the function in question uses a counter.
func sendMessage(input: MessageToSend) -> Observable<Data?> {
    input.counter = self.counter
    print("-- 1", input.counter)
    let transformMessage = transform(message: input)
    self.counter += 1
    print("-- 2", input.counter)
    return transformMessage
}
Obviously I need input.counter to increase by 1 every time the function is called, but unfortunately that isn't what's happening. Because of the async nature of Rx, sendMessage() runs and assigns 0 to input.counter, but before it has had the chance to increment self.counter, sendMessage() runs again and also assigns 0 to input.counter. Then the first call reaches self.counter += 1, and immediately after, the second call reaches it as well, so the counter is now at 2. When the third call is made, input.counter therefore gets 2 as its value.
The prints will look like this:
-- 1 0
-- 1 0
-- 2 0
-- 2 0
-- 1 2
-- 2 2
My initial idea on how to fix this was to force the calls onto a serial queue. So instead of calling it like this:
disposeble = tsiHandler?.sendMessage(message: message).subscribe()
I called it like this:
disposeble = tsiHandler?.sendMessage(message: message)
    .subscribeOn(SerialDispatchQueueScheduler(internalSerialQueueName: "serial"))
    .subscribe()
In my world this would force it to become serial, which would make all calls to sendMessage() wait for other calls to complete. But for some reason it doesn't work; I get the exact same result in the prints. Am I misunderstanding how SerialDispatchQueueScheduler works?
As it is written, each call to sendMessage is being placed on a different scheduler, so even if the schedulers are serial, that wouldn't accomplish what you are trying to do. Instead of creating a new scheduler for each call, have them all use the same scheduler.
Even so, this code looks fishy, and your comments about it imply a fundamental misunderstanding of Rx. For example, it is not inherently async; it merely sets up callback chains...
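A minimal sketch of the shared-scheduler fix, reusing the names from the question (the only change is that the scheduler is created once, not per call):
// Create the scheduler once, e.g. as a property, instead of inside each call.
let serialScheduler = SerialDispatchQueueScheduler(internalSerialQueueName: "serial")

// Every subscription now queues onto the same serial scheduler.
disposeble = tsiHandler?.sendMessage(message: message)
    .subscribeOn(serialScheduler)
    .subscribe()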

How to hold a value for a period of time

New to Lua scripting, so hopefully I'm using the right terminology to describe this; please forgive me if I'm not, I'll get better over time.
I'm writing code for a game using Lua. And I have a function that sets a value. Now I need to "hold" this value for a set period of time after which it gets set to another value.
I'm trying to replicate this behavior:
Imagine a car-start key.
0 = off
1 = ignition on
2 = ignition on + starter -- this fires to get the engine to ignite. In real-life, this "start" position 2 is HELD while the starter does its thing, then once the engine ignites, you'd let go and it springs back to = 1 to keep the ignition on while the engine keeps running.
Now imagine instead a start button that canNOT be pushed in and held in position 2 (like the key can) for the time required for the starter to ignite the engine. Instead, once touched, it should cause the starter to run for a set period of time in position 2, and when it senses a certain minimum RPM, the starter stops, goes back to position 1, and leaves the engine running.
Similarly, I have a button that when "touched" fires the starter, but that starter event, as expected, goes from 0 to 2 and back to 1 in a blink. If I set it to be 2 then it fires forever.
I need to hold its phase 2 position in place for 15 seconds, then it can go back to 1.
What I have:
A button that has an animated state. (0 off, 1, 2).
If 0 then 1, if 1 then 2 using phase==2 as the spring step from 2 back to 1
At position == 2, StarterKey = 2
No syntax issues.
The StarterKey variable is triggered to be =2 at position 2 -- BUT not long enough for the engine to ignite. I need StarterKey=2 to stay as a 2 for 15 seconds. Or, do I need to time the entire if phase==2 stage to be held for longer? Not sure.
What should I be looking at please? Here's a relevant snip..
function EngineStart()
    if phase == 0 then
        if ButtonState == 0 then
            ButtonState = 1
            StarterKey = 2 -- I tried adding *duration = 15* to this event but that does not work
        end
    end
end
I also have the subsequent elseif phase == 2 part for when the button is in position == 2 so it springs back to 1 -- that all works fine.
How do I, or what do I need to use, to introduce time for my events and functions? I don't want to measure time, but set event time.
Thanks!
Solved this by watching a Lua tutorial video online (which taught me how), plus discovering that the sim has a timer function, which I then used to set the number of seconds ("time") that the starter key-press stayed in its 2 position; that fired it long enough for the igniters to do their work (what to use).
QED :)) HA!
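In outline, that kind of timed hold looks like the sketch below. The timer API is sim-specific and isn't named above, so get_time() is a hypothetical stand-in for whatever clock function the sim exposes:
local starter_engaged_at = nil  -- time when the starter key went to position 2

function EngineStart()
    if phase == 0 and ButtonState == 0 then
        ButtonState = 1
        StarterKey = 2
        starter_engaged_at = get_time()  -- hypothetical sim clock, in seconds
    end
end

-- called by the sim every frame/tick
function OnTick()
    -- hold StarterKey at 2 for 15 seconds, then spring back to 1
    if StarterKey == 2 and starter_engaged_at ~= nil then
        if get_time() - starter_engaged_at >= 15 then
            StarterKey = 1
            starter_engaged_at = nil
        end
    end
end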

Apache Beam - Sliding Windows Only Emit Earliest Active Window

I'm trying to use Apache Beam (via Scio) to run a continuous aggregation of the last 3 days of data (processing time) from a streaming source and output results from the earliest, active window every 5 minutes. Earliest meaning the window with the earliest start time, active meaning that the end of the window hasn't yet passed. Essentially I'm trying to get a 'rolling' aggregation by dropping the non-overlapping period between sliding windows.
A visualization of what I'm trying to accomplish with an example sliding window of size 3 days and period 1 day:
early firing - ^    no firing - x

                               ** stop firing from this window once time passes this point
          ^ ^ ^ ^ ^ ^ ^ ^      |
          | | | | | | | |      |      ** stop firing from this window once time passes this point
w1:       +====================+ ^ ^ ^|
          x x x x x x x          | | ||
w2:              +====================+ ^ ^ ^
                 x x x x x x x          | | |
w3:                     +====================+
I've tried using sliding windows (size=3 days, period=5 min), but they produce a new window for every 3 days/5 min combination in the future and are emitting early results for every window. I tried using trigger = AfterWatermark.pastEndOfWindow(), but I need early results when the job first starts. I've tried comparing the pane data (isLast, timestamp, etc.) between windows but they seem identical.
My most recent attempt, which seems somewhat of a hack, included attaching window information to each key in a DoFn, re-windowing into a fixed window, and attempting to group and reduce to the oldest window from the attached data, but the final reduceByKey doesn't seem to output anything.
DoFn to attach window information
// ValueType is just a case class I'm using for objects
type DoFnT = DoFn[KV[String, ValueType], KV[String, (ValueType, Instant)]]

class Test extends DoFnT {
  // Window.toString looks like the following:
  // [2020-05-16T23:57:00.000Z..2020-05-17T00:02:00.000Z)
  def parseWindow(window: String): Instant = {
    Instant.parse(
      window
        .stripPrefix("[")
        .stripSuffix(")")
        .split("\\.\\.")(1))
  }

  @ProcessElement
  def process(
      context: DoFnT#ProcessContext,
      window: BoundedWindow): Unit = {
    context.output(
      KV.of(
        context.element().getKey,
        (context.element().getValue, parseWindow(window.toString))
      )
    )
  }
}
sc
  .pubsubSubscription(...)
  .keyBy(_.key)
  .withSlidingWindows(
    size = Duration.standardDays(3),
    period = Duration.standardMinutes(5),
    options = WindowOptions(
      accumulationMode = DISCARDING_FIRED_PANES,
      allowedLateness = Duration.ZERO,
      trigger = Repeatedly.forever(
        AfterWatermark.pastEndOfWindow()
          .withEarlyFirings(
            AfterProcessingTime
              .pastFirstElementInPane()
              .plusDelayOf(Duration.standardMinutes(1))))))
  .reduceByKey(ValueType.combineFunction())
  .applyPerKeyDoFn(new Test())
  .withFixedWindows(
    duration = Duration.standardMinutes(5),
    options = WindowOptions(
      accumulationMode = DISCARDING_FIRED_PANES,
      trigger = AfterWatermark.pastEndOfWindow(),
      allowedLateness = Duration.ZERO))
  .reduceByKey((x, y) => if (x._2.isBefore(y._2)) x else y)
  .saveAsCustomOutput(
    TextIO.write()...
  )
Any suggestions?
First, regarding processing time: If you want to window according to processing time, you should set your event time to the processing time. This is perfectly fine - it means that the event you are processing is the event of ingesting the record, not the event that the record represents.
Now you can use sliding windows off-the-shelf to get the aggregation you want, grouped the way you want.
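In Scio that can be done right after the read, for example with the SCollection timestampBy method (a sketch, not the poster's code; Instant here is Joda time):
sc
  .pubsubSubscription(...)
  // stamp each element with its processing time, so that event-time
  // sliding windows behave like processing-time windows
  .timestampBy(_ => Instant.now())
  .keyBy(_.key)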
But you are correct that it is a bit of a headache to trigger the way you want. Triggers are not easily expressive enough to say "output the last 3 day aggregation but only begin when the window is 5 minutes from over" and even less able to express "for the first 3 day period from pipeline startup, output the whole time".
I believe a stateful ParDo(DoFn) will be your best choice. State is partitioned per key and window. Since you want to have interactions across 3 day aggregations you will need to run your DoFn in the global window and manage the partitioning of the aggregations yourself. You tagged your question google-cloud-dataflow and Dataflow does not support MapState so you will need to use a ValueState that holds a map of the active 3 day aggregations, starting new aggregations as needed and removing old ones when they are done. Separately, you can easily track the aggregation from which you want to periodically output, and have a timer callback that periodically emits the active aggregation. Something like the following pseudo-Java; you can translate to Scala and insert your own types:
DoFn<> {
  @StateId("activePeriod") StateSpec<ValueState<Period>> activePeriod = StateSpecs.value();
  @StateId("accumulators") StateSpec<ValueState<Map<Period, Accumulator>>> accumulators = StateSpecs.value();

  @TimerId("nextPeriod") TimerSpec nextPeriod = TimerSpecs.timer(TimeDomain.EVENT_TIME);
  @TimerId("output") TimerSpec outputTimer = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      @Element element,
      @TimerId("nextPeriod") Timer nextPeriod,
      @TimerId("output") Timer output,
      @StateId("activePeriod") ValueState<Period> activePeriod,
      @StateId("accumulators") ValueState<Map<Period, Accumulator>> accumulators) {
    // Set nextPeriod if it isn't already running
    // Set output if it isn't already running
    // Set activePeriod if it isn't already set
    // Add the element to the appropriate accumulator
  }

  @OnTimer("nextPeriod")
  public void onNextPeriod(
      @TimerId("nextPeriod") Timer nextPeriod,
      @StateId("activePeriod") ValueState<Period> activePeriod) {
    // Set activePeriod to the next one
    // Clear the period we will never read again
    // Reset the timer (there's a one-time change in this logic after the first window; add a flag for this)
  }

  @OnTimer("output")
  public void onOutput(
      @TimerId("output") Timer output,
      @StateId("activePeriod") ValueState<Period> activePeriod,
      @StateId("accumulators") ValueState<Map<Period, Accumulator>> accumulators) {
    // Output the current accumulator for the active period
    // Reset the timer
  }
}
I do have some reservations about this, because the outputs we are working so hard to suppress are not comparable to the outputs that are "replacing" them. I would be interested in learning more about the use case. It is possible there is a more straightforward way to express the result you are interested in.

Applying multiple GroupByKey transforms in a DataFlow job causing windows to be applied multiple times

We have a DataFlow job that is subscribed to a PubSub stream of events. We have applied sliding windows of 1 hour with a 10 minute period. In our code, we perform a Count.perElement to get the counts for each element and we then want to run this through a Top.of to get the top N elements.
At a high level:
1) Read from pubSub IO
2) Window.into(SlidingWindows.of(windowSize).every(period)) // windowSize = 1 hour, period = 10 mins
3) Count.perElement
4) Top.of(n, comparisonFunction)
What we're seeing is that the window is being applied twice so data seems to be watermarked 1 hour 40 mins (instead of 50 mins) behind current time. When we dig into the job graph on the Dataflow console, we see that there are two groupByKey operations being performed on the data:
1) As part of Count.perElement. Watermark on the data from this step onwards is 50 minutes behind current time which is expected.
2) As part of the Top.of (in the Combine.PerKey). The watermark on this seems to be another 50 minutes behind the current time. Thus, data in steps below this is watermarked 1 hour 40 mins behind.
This ultimately manifests in some downstream graphs being 50 minutes late.
Thus it seems like every time a GroupByKey is applied, windowing kicks in afresh.
Is this expected behavior? Is there any way we can make the windowing only apply to the Count.perElement and turn it off after that?
Our code is something on the lines of:
final int top = 50;
final Duration windowSize = standardMinutes(60);
final Duration windowPeriod = standardMinutes(10);
final SlidingWindows window = SlidingWindows.of(windowSize).every(windowPeriod);

options.setWorkerMachineType("n1-standard-16");
options.setWorkerDiskType("compute.googleapis.com/projects//zones//diskTypes/pd-ssd");
options.setJobName(applicationName);
options.setStreaming(true);
options.setRunner(DataflowPipelineRunner.class);

final Pipeline pipeline = Pipeline.create(options);

// Get events
final String eventTopic =
    "projects/" + options.getProject() + "/topics/eventLog";
final PCollection<String> events = pipeline
    .apply(PubsubIO.Read.topic(eventTopic));

// Create toplist
final PCollection<List<KV<String, Long>>> topList = events
    .apply(Window.into(window))
    .apply(Count.perElement()) // as eventIds are repeated
    // get top n to get top events
    .apply(Top.of(top, orderByValue()).withoutDefaults());
Windowing is not applied each time there is a GroupByKey. The lag you were seeing was likely the result of two issues, both of which should be resolved.
The first was that data that was buffered for later windows at the first group by key was preventing the watermark from advancing, which meant that the earlier windows were getting held up at the second group by key. This has been fixed in the latest versions of the SDK.
The second was that the sliding windows were causing the amount of data to increase significantly. A new optimization has been added which uses the combine (you mentioned Count and Top) to reduce the amount of data.

Do a predefined loop consisting of 4 variables 100 times

I am pretty new at SPSS macros, but I think I need one.
I have 400 variables and I want to run this loop 100 times. My variables are ordered consecutively. So first I want to do this loop for variables 1 to 4, then for variables 5 to 8, then for variables 9 to 12, and so on.
vector TEQ5DBv=T0EQ5DNL to T4EQ5DNL.
loop #index = 1 to 4.
+ IF( MISSING(TEQ5DBv(#index+1))) TEQ5DBv(#index+1) = TEQ5DBv(#index) .
end loop.
EXECUTE.
Below is an example of what it appears to me you are trying to do. Note I replaced your use of the looping and index with a do repeat command. To me it is just more clear what you are doing by making two lists in the do repeat command as opposed to calling lead indexes in your loop.
*making data.
DATA LIST FIXED /X1 to X4 1-4.
BEGIN DATA
1111
0101
1 0
END DATA.
*I make new variables, so you don't overwrite your original variables.
vector X_rec (4,F1.0).
do repeat X_rec = X_rec1 to X_rec4 / X = X1 to X4.
compute X_rec = X.
end repeat.
execute.
do repeat X_later = X_rec2 to X_rec4 / X_early = X1 to X3.
if missing(X_later) = 1 X_later = X_early.
end repeat.
execute.
A few notes on this. Previously your code was overwriting your initial variables; in this code I create a set of new variables named X_rec1 to X_rec4, and then set those values to the same as the original set of variables (X1 to X4). The second do repeat command fills in the recoded variables when a missing value occurs, using the previous variable.
One big difference between this and your prior code: if you ran your code repeatedly, it would continue to fill in the missing data, whereas my code will not. If you want to continue to fill in the missing data, replace X_early = X1 to X3 in the code above with X_early = X_rec1 to X_rec3 and run the code at least 3 times, as shown below. (Of course, if a case has all missing data for the four variables, it will all still be missing.)
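That variant (filling from the recoded variables instead of the originals) looks like this:
do repeat X_later = X_rec2 to X_rec4 / X_early = X_rec1 to X_rec3.
if missing(X_later) = 1 X_later = X_early.
end repeat.
execute.
Below is a macro to simplify calling this repeated code.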
SET MPRINT ON.
DEFINE !missing_update (list = !TOKENS(1)).
!LET !list_rec = !CONCAT(!list,"_rec")
!LET !list_rec1 = !CONCAT(!list_rec,"1")
!LET !list_rec2 = !CONCAT(!list_rec,"2")
!LET !list_rec4 = !CONCAT(!list_rec,"4")
!LET !list_1 = !CONCAT(!list,"1")
!LET !list_3 = !CONCAT(!list,"3")
!LET !list_4 = !CONCAT(!list,"4")
vector !list_rec (4,F1.0).
do repeat UpdatedVar = !list_rec1 to !list_rec4 / OldVar = !list_1 to !list_4.
compute UpdatedVar = OldVar.
end repeat.
execute.
do repeat UpdatedVar = !list_rec2 to !list_rec4 / OldVar = !list_1 to !list_3.
if missing(UpdatedVar) = 1 UpdatedVar = OldVar.
end repeat.
execute.
!ENDDEFINE.
*dropping recoded variables I made before.
match files file = *
/drop X_rec1 to X_rec4.
execute.
!missing_update list = X.
I suspect there is a way to loop through all of the variables in the dataset without having to call the macro repeatedly for each set, but I'm not sure how to do it (it may not be possible within DEFINE, and you may have to resort to writing a Python program that generates the calls, as sketched below). Worst case you just have to call the macro above for each of the 100 sets!
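For example, if the 100 sets of variables followed a predictable naming scheme, a short Python script could write the macro calls out as SPSS syntax (the SET prefix pattern below is made up for illustration):
# Write one !missing_update call per set of four variables to a syntax file.
# Assumes, hypothetically, that the sets share prefixes like SET1_, SET2_, ...
with open("missing_update_calls.sps", "w") as syntax:
    for i in range(1, 101):
        syntax.write("!missing_update list = SET{}_.\n".format(i))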
Your loop syntax is incorrect because when #index reaches 4, your code says that you want to do an operation on TEQ5DBv(5). So you will definitely get an error.
I don't know what exactly you want to do, but a nested loop might help you to achieve your goal.
Here is an example:
* Creating some Data.
DATA LIST FIXED /v1 to v12 1-12.
BEGIN DATA
1234 9012
2 4 6 8 1 2
1 3 5 7 9 1
12 56 90
456 012
END DATA.
* Vectorset of variables
VECTOR vv = v1 TO v12.
LOOP #i = 1 TO 12 BY 4.
  LOOP #j = 0 TO 2. /* inner loop runs only up to "2" so you won't exceed your inner block.
    IF (MISSING(vv(#i+#j+1))) vv(#i+#j+1) = vv(#i+#j).
  END LOOP.
END LOOP.
EXECUTE.
