I am having difficulty getting GCD to fire my dispatches at the correct time. They miss by huge margins and I don't understand why.
I send 3 dispatches to my queue for execution: one at 5 minutes in the future (300 seconds), one at 10 minutes (600 seconds), and another at 20 minutes (1200 seconds).
All three fire and the audio file plays in all three dispatches, but not at their designated times.
The first one fires after 10 minutes,
the second one after 16 minutes,
and the last one at 21 minutes.
These fire times vary a bit in testing, but they NEVER fire when they are supposed to.
If you know why, it would be greatly appreciated. Also:
I've tried this with my iPhone and iPad (same result).
At the top of my view controller I have this bit of code:
var myAudio = AVAudioPlayer()
var timer: DispatchSourceTimer!
Then, after clicking a button, I send 3 dispatches like the one below, but with different times in each. I then lock the device, which RESIGNS the app, and wait for the short 16-second audio file to play.
I also have the key Application_does_not_run_in_background set to true in my Info.plist file, so my app does not go to the background when the user locks the screen.
First I create my queue:
let Myqueue = DispatchQueue(label: "Myqueue")
Then here is the dispatch to Myqueue - the same thing 3 times, with different intervals in each:
Myqueue.asyncAfter(wallDeadline: DispatchWallTime.now() + 900.0, qos: .userInitiated, flags: []) {
    print(Date())
    self.myAudio.prepareToPlay()
    self.myAudio.play()
}
I am running Xcode 9, compiling for iOS 8 and up.
I spent many hours of testing trying to figure out why this doesn't work.
I've found many answers here in the past - hope someone can help.
Cheers
I have two streams, stream A and stream B. Both streams contain the same type of event, which has an ID and a timestamp. For now, all I want the Flink job to do is join the events that have the same ID inside a window of 1 minute. The watermark is assigned per event.
sourceA = initialSourceA.map(parseToEvent)
sourceB = initialSourceB.map(parseToEvent)
streamA = sourceA
.assignTimestampsAndWatermarks(CustomWatermarkStrategy())
.keyBy(Event.Key)
streamB = sourceB
.assignTimestampsAndWatermarks(CustomWatermarkStrategy())
.keyBy(Event.Key)
streamA
.join(streamB)
.where(Event.Key)
.equalTo(Event.Key)
.window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.MINUTES)))
.apply(giveMePairOfEvents)
.print()
Inside my test I try to send the following:
sourceA.send(Event(ID_1, 0 seconds))
sourceB.send(Event(ID_1, 0 seconds))
//to increase the watermark
sourceA.send(Event(ID_1, 62 seconds))
sourceB.send(Event(ID_1, 62 seconds))
For parallelism = 1, I can see the events from time 0 getting joined together.
However, for parallelism = 2 the print does not display anything getting joined. To figure out the problem, I tried printing the events after the keyBy of each stream, and I can see they are all running on the same instance. Placing the print after the watermarking shows, for obvious reasons, that the events are on different instances at that point.
This leads me to believe that I am somehow doing something incorrectly when it comes to watermarking, since for a parallelism higher than 1 it doesn't increase the watermark. So here are a couple of questions I asked myself:
Is it possible that each event has a separate watermark generator and I have to increase them specifically?
Do I run keyBy first and then watermark, so that my events from each stream use the same watermark generator?
Sending another set of events as follows:
sourceA.send(Event(ID_1, 0 seconds))
sourceB.send(Event(ID_1, 0 seconds))
//to increase the watermark
sourceA.send(Event(ID_1, 62 seconds))
sourceB.send(Event(ID_1, 62 seconds))
sourceA.send(Event(ID_1, 122 seconds))
sourceB.send(Event(ID_1, 122 seconds))
This ended up emitting the joined first events. Further inspection showed that the third set of events used the same watermark generator that the second one didn't use, and I'm not clear on why that happens. How can I assign and increase watermarks correctly when using a join function in Flink?
EDIT 1:
The custom watermark generator:
class CustomWaterMarkGenerator(
    private val maxOutOfOrderness: Long,
    private var currentMaxTimeStamp: Long = 0
) : WatermarkGenerator<Event> {
    override fun onEvent(event: Event, eventTimestamp: Long, output: WatermarkOutput) {
        currentMaxTimeStamp = currentMaxTimeStamp.coerceAtLeast(eventTimestamp)
        output.emitWatermark(Watermark(currentMaxTimeStamp - maxOutOfOrderness - 1))
    }

    override fun onPeriodicEmit(output: WatermarkOutput?) {
        // punctuated watermarks only: nothing to emit periodically
    }
}
The watermark strategy:
class CustomWatermarkStrategy : WatermarkStrategy<Event> {
    override fun createWatermarkGenerator(context: WatermarkGeneratorSupplier.Context?): WatermarkGenerator<Event> {
        return CustomWaterMarkGenerator(0)
    }

    override fun createTimestampAssigner(context: TimestampAssignerSupplier.Context?): TimestampAssigner<Event> {
        return TimestampAssigner { event: Event, _: Long ->
            event.timestamp
        }
    }
}
Custom source:
The source function is currently an RSocket connection to a mock stream where I can send events through mockStream.send(event). The first thing I do with the events is parse them using a map function (from string into my event type), and then I assign my watermarks etc.
Each parallel instance of the watermark generator will operate independently, based solely on the events it observes. Doing the watermarking immediately after the sources makes sense (although even better, in general, is to do watermarking directly in the sources).
An operator with multiple input channels (such as the keyed windowed join in your application) sets its current watermark to the minimum of the watermarks it has received from its active input channels. This has the effect that any idle source instances will cause the watermarks to stall in downstream tasks -- unless those sources explicitly mark themselves as idle. (And FLINK-18934 meant that prior to Flink 1.14 idleness propagation didn't work correctly with joins.) An idle source is a likely suspect in your situation.
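If an idle source is indeed the cause, one hedged sketch of a fix is to wrap the question's strategy with withIdleness (a default method on WatermarkStrategy since Flink 1.11; the 30-second timeout here is an arbitrary choice for illustration):
// uses java.time.Duration; a source instance that sees no events for 30 seconds
// is marked idle, so the join's watermark can advance without waiting for it
streamA = sourceA
    .assignTimestampsAndWatermarks(CustomWatermarkStrategy().withIdleness(Duration.ofSeconds(30)))
    .keyBy(Event.Key)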
One strategy for debugging this sort of problem is to bring up the Flink WebUI and observe the behavior of the current watermark in all of the tasks.
To get more help, please share the rest of the application, or at least the custom source and watermark strategy.
New to Lua scripting, so hopefully I'm using the right terminology to describe this. Please forgive me if I'm not; I'll get better over time.
I'm writing code for a game using Lua, and I have a function that sets a value. Now I need to "hold" this value for a set period of time, after which it gets set to another value.
I'm trying to replicate this behavior:
Imagine a car-start key.
0 = off
1 = ignition on
2 = ignition on + starter -- this fires to get the engine to ignite. In real life, this "start" position 2 is HELD while the starter does its thing; once the engine ignites, you let go and the key springs back to 1 to keep the ignition on while the engine keeps running.
Now imagine instead a start button that canNOT be pushed in and held in position 2 (like the key) for the time required for the starter to ignite the engine. Instead, once touched, the starter should run for a set period of time in position 2; when it senses a certain RPM, the starter stops, goes back to position 1, and leaves the engine running.
Similarly, I have a button that when "touched" fires the starter, but that starter event, as expected, goes from 0 to 2 and back to 1 in a blink. If I set it to 2 then it fires forever.
I need to hold its phase 2 position in place for 15 seconds; then it can go back to 1.
What I have:
A button that has an animated state (0 off, 1, 2).
If 0 then 1; if 1 then 2, using phase == 2 as the spring step from 2 back to 1.
At position == 2, StarterKey = 2
No syntax issues.
The StarterKey variable is triggered to be 2 at position 2 -- BUT not long enough for the engine to ignite. I need StarterKey = 2 to stay at 2 for 15 seconds. Or do I need to time the entire if phase == 2 stage to be held for longer? Not sure.
What should I be looking at, please? Here's a relevant snip:
function EngineStart()
    if phase == 0 then
        if ButtonState == 0 then
            ButtonState = 1
            StarterKey = 2 -- I tried adding *duration = 15* to this event but that does not work
        end
    end
end
I also have the subsequent elseif phase == 2 part for when the button is in position == 2 so it springs back to 1 -- that all works fine.
How do I introduce time into my events and functions, and what do I need to use? I don't want to measure time, but to set event time.
Thanks!
Solved this by watching a Lua tutorial video online (taught me how), plus discovering that the sim has a timer function (what to use). I used it to set the number of seconds ("time") that the starter key-press stayed in its 2 position, which fired it long enough for the igniters to do their work.
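For future readers, a minimal sketch of the shape this takes (run_after_time is a hypothetical stand-in for the sim's actual timer function):
function EngineStart()
    if phase == 0 and ButtonState == 0 then
        ButtonState = 1
        StarterKey = 2                      -- engage the starter
        run_after_time(ReleaseStarter, 15)  -- hypothetical: call back in 15 seconds
    end
end

function ReleaseStarter()
    StarterKey = 1                          -- spring back to "ignition on"
end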
QED :)) HA!
I'm taking a PCollection of sessions and trying to get the average session duration per channel/connection. My early triggers fire for each window produced: with 60-minute windows sliding every 1 minute, an early trigger fires 60 times. Looking at the timestamps on the outputs, there's a window every minute for 60 minutes into the future. I'd like the trigger to fire once, for the most recent window, so that every 10 seconds I have an average of session durations for the last 60 minutes.
I've used sliding windows before and had the results I expected. By mixing sliding and session windows, I'm somehow causing this.
Let me paint you a picture of my pipeline:
First, I'm creating sessions based on active users:
.apply("Add Window Sessions",
Window.<KV<String, String>> into(Sessions.withGapDuration(Duration.standardMinutes(60)))
.withOnTimeBehavior(Window.OnTimeBehavior.FIRE_ALWAYS)
.triggering(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10))))
.withAllowedLateness(Duration.ZERO)
.discardingFiredPanes()
)
.apply("Group Sessions", Latest.perKey())
Steps after this create a session object, compute session duration, etc. This ends with a PCollection<Session>.
I create a KV<connection, duration> from the PCollection<Session>.
Then I apply the sliding window and then the mean.
.apply("Apply Rolling Minute Window",
Window. < KV < String, Integer >> into(
SlidingWindows
.of(Duration.standardMinutes(60))
.every(Duration.standardMinutes(1)))
.triggering(
Repeatedly.forever(
AfterWatermark.pastEndOfWindow()
.withEarlyFirings(AfterProcessingTime
.pastFirstElementInPane()
.plusDelayOf(Duration.standardSeconds(10)))
)
)
.withAllowedLateness(Duration.standardMinutes(1))
.discardingFiredPanes()
)
.apply("Get Average", Mean.perKey())
It's at this point that I'm seeing issues. What I'd like to see is a single output per key with the average duration. What I'm actually seeing is 60 outputs for the same key, one for each minute of the next 60 minutes.
With this log in a DoFn, with c being the ProcessContext:
LOG.info(c.pane().getTiming() + " " + c.timestamp());
I get this output 60 times, with timestamps 60 minutes into the future:
EARLY 2017-12-17T20:41:59.999Z
EARLY 2017-12-17T20:43:59.999Z
EARLY 2017-12-17T20:56:59.999Z
(cont)
The log was printed at Dec 17, 2017 19:35:19.
The number of outputs is always window size / slide duration. So if I did 60-minute windows every 5 minutes, I would get 12 outputs.
I think I've made sense of this.
Sliding windows create a new window with the .every() function, and setting early firings applies to each window, so getting multiple firings makes sense.
In order to fit my use case and only output the "current" window, I'm checking c.pane().isFirst() == true before outputting results (as sketched below) and adjusting the .every() to control the frequency.
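A rough sketch of that filter (the step name is illustrative, and the element type assumes the KV<String, Double> output of Mean.perKey()):
.apply("Keep First Pane Only", ParDo.of(new DoFn<KV<String, Double>, KV<String, Double>>() {
    @ProcessElement
    public void processElement(ProcessContext c) {
        // only emit the first pane fired for each window, i.e. the "current" one
        if (c.pane().isFirst()) {
            c.output(c.element());
        }
    }
}))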
In my program I need two tasks to run simultaneously in the background. To do that I have used a concurrent queue, as below:
let concurrentQueue = DispatchQueue(label: "concurrentQueue", qos: .utility, attributes: .concurrent)

concurrentQueue.async {
    for i in 0 ..< 10 {
        print(i)
    }
}

concurrentQueue.async {
    for i in (0 ..< 10).reversed() {
        print(i)
    }
}
Here I need the output to be like this:
0
9
1
8
2
7
3
6
4
5
5
4
6
3
7
2
8
1
9
0
But that's not what I get.
I referred to the tutorial below to gain some basic knowledge about concurrent queues in Swift 3:
https://www.appcoda.com/grand-central-dispatch/
Can someone tell me what is wrong with my code? Or is this the result I should expect? Is there any other way to get this done? Any help would be highly appreciated.
There is nothing wrong with your code sample. That is the correct syntax for submitting two tasks to a concurrent queue.
The problem is the expectation that you'd necessarily see them run concurrently. There are two issues that could affect this:
The first dispatched task can run so quickly that it just happens to finish before the second task gets going. If you slow them down a bit, you'll see your concurrent behavior:
let concurrentQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".concurrentQueue", qos: .utility, attributes: .concurrent)
concurrentQueue.async {
for i in 0 ..< 10 {
print("forward: ", i)
Thread.sleep(forTimeInterval: 0.1)
}
}
concurrentQueue.async {
for i in (0 ..< 10).reversed() {
print("reversed:", i)
Thread.sleep(forTimeInterval: 0.1)
}
}
You'd never sleep in production code, but for pedagogical purposes, it can better illustrate the issue.
You can also omit the sleep calls, and just increase the numbers dramatically (e.g. 1_000 or 10_000), and you might start to see concurrent processing taking place.
Your device could be resource constrained, preventing it from running the code concurrently. Devices have a limited number of CPU cores to run concurrent tasks. Just because you submitted the tasks to concurrent queue, it doesn't mean the device is capable of running the two tasks at the same time. It depends upon the hardware and what else is running on that device.
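As a quick sanity check, you can ask Foundation how many cores are currently available for concurrent work:
// prints the number of processor cores available to schedule work on
print(ProcessInfo.processInfo.activeProcessorCount)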
By the way, note that you might see different behavior on the simulator (which is using your Mac's CPU, which could be running many other tasks) than on a device. You might want to make sure to test this behavior on an actual device, if you're not already.
Also note that you say you "need" the output to alternate print statements between the two queues. While the screen snapshots from your tutorial suggest that this should be expected, you have absolutely no assurances that this will be the case.
If you really need them to alternate back and forth, you have to add some mechanism to coordinate them. You can use semaphores (which I'm reluctant to suggest simply because they're such a common source of problems, especially for new developers) or operation queues with dependencies, as sketched below.
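For example, here is a minimal sketch of the operation-queue approach (the pairwise chaining scheme is just one way to force the alternation, and the names are illustrative):
import Foundation

// Chain dependencies between pairs of block operations so the forward and
// reversed prints strictly alternate: 0, 9, 1, 8, 2, 7, ...
let operationQueue = OperationQueue()
var previous: Operation? = nil

for i in 0 ..< 10 {
    let forward = BlockOperation { print(i) }      // 0, 1, 2, ...
    let backward = BlockOperation { print(9 - i) } // 9, 8, 7, ...
    backward.addDependency(forward)                // backward waits for forward
    if let previous = previous {
        forward.addDependency(previous)            // chain onto the prior pair
    }
    previous = backward
    operationQueue.addOperations([forward, backward], waitUntilFinished: false)
}

operationQueue.waitUntilAllOperationsAreFinished()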
Maybe you could try using semaphores.
let semaphore1 = DispatchSemaphore(value: 1) // the forward loop gets the first turn
let semaphore2 = DispatchSemaphore(value: 0)

concurrentQueue.async {
    for i in 0 ..< 10 {
        semaphore1.wait()    // wait for our turn
        print(i)
        semaphore2.signal()  // hand the turn to the reversed loop
    }
}

concurrentQueue.async {
    for i in (0 ..< 10).reversed() {
        semaphore2.wait()    // wait for the forward loop to print
        print(i)
        semaphore1.signal()  // hand the turn back to the forward loop
    }
}
I have a class which contains two methods as per the example in Mastering Swift by Jon Hoffman. The class is as below:
class DoCalculation {
    func doCalc() {
        let x = 100
        let y = x * x
        _ = y / x
    }

    func performCalculation(_ iterations: Int, tag: String) {
        let start = CFAbsoluteTimeGetCurrent()
        for _ in 0 ..< iterations {
            self.doCalc()
        }
        let end = CFAbsoluteTimeGetCurrent()
        print("time for \(tag): \(end - start)")
    }
}
Now, in the viewDidLoad() of the ViewController from the single view template, I create an instance of the above class and then create a concurrent queue. I then add blocks executing the performCalculation(_:tag:) method to the queue.
cqueue.async {
    print("Starting async1")
    calculation.performCalculation(10000000, tag: "async1")
}

cqueue.async {
    print("Starting async2")
    calculation.performCalculation(1000, tag: "async2")
}

cqueue.async {
    print("Starting async3")
    calculation.performCalculation(100000, tag: "async3")
}
Every time I run the application on the simulator, I get random output for the start statements. Example outputs are below:
Example 1:
Starting async1
Starting async3
Starting async2
time for async2: 4.1961669921875e-05
time for async3: 0.00238299369812012
time for async1: 0.117094993591309
Example 2:
Starting async3
Starting async2
Starting async1
time for async2: 2.80141830444336e-05
time for async3: 0.00216799974441528
time for async1: 0.114436984062195
Example 3:
Starting async1
Starting async3
Starting async2
time for async2: 1.60336494445801e-05
time for async3: 0.00220298767089844
time for async1: 0.129496037960052
I don't understand why the blocks don't start in FIFO order. Can somebody please explain what I am missing here?
I know they will be executed concurrently, but it's stated that a concurrent queue will respect FIFO for starting the execution of tasks, while not guaranteeing which one completes first. So at least the starting statements should have appeared in order:
Starting async1
Starting async2
Starting async3
with the completion statements random:
time for async2: 4.1961669921875e-05
time for async3: 0.00238299369812012
time for async1: 0.117094993591309
A concurrent queue runs the jobs you submit to it concurrently. That's what it's for.
If you want a queue that runs jobs in FIFO order, you want a serial queue.
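For example (a minimal sketch; the label is illustrative):
let serialQueue = DispatchQueue(label: "com.example.serial") // serial by default

serialQueue.async { print("job 1") } // starts, runs, and finishes first
serialQueue.async { print("job 2") } // always runs after "job 1" completes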
I see what you're saying about the docs claiming that the jobs will be submitted in FIFO order, but your test doesn't really establish the order in which they're run. If the concurrent queue has 2 threads available but only one processor to run those threads on, it might swap out one of the threads before it gets a chance to print, run the other job for a while, and then go back to running the first job. There's no guarantee that a job runs to the end before getting swapped out.
I don't think a print statement gives you reliable information about the order in which the jobs are started.
cqueue is a concurrent queue, which dispatches your blocks of work to three different threads (it actually depends on thread availability) at almost the same time, but you cannot control the time at which each thread completes its work.
If you want to perform tasks serially on a background queue, you are much better off using a serial queue.
let serialQueue = DispatchQueue(label: "serialQueue")
A serial queue will start the next task in the queue only when the previous task has completed.
"I don't understand why the blocks don't start in FIFO order" How do you know they don't? They do start in FIFO order!
The problem is that you have no way to test that. The notion of testing it is, in fact, incoherent. The soonest you can test anything is the first line of each block — and by that time, it is perfectly legal for another line of code from another block to execute, because these blocks are asynchronous. That is what asynchronous means.
So, they start in FIFO order, but there is no guarantee about the order in which, given multiple asynchronous blocks, their first lines will be executed.
With a concurrent queue, you are effectively specifying that they can run at the same time. So while they're added in FIFO manner, you have a race condition between these various worker threads, and thus you have no assurance as to which will hit its respective print statement first.
So this raises the question: why do you care which order they hit their respective print statements? If order is really important, you shouldn't be using a concurrent queue. Or, the other way of saying that: if you want to use a concurrent queue, write code that isn't dependent upon the order in which the blocks run.
You asked:
Would you suggest some way to get the info when a Task is dequeued from the queue so that I can log it to get the FIFO order.
If you're asking how to enjoy FIFO starting of the tasks on a concurrent queue in a real-world app, the answer is "you don't", because of the aforementioned race condition. When using concurrent queues, never write code that is strictly dependent upon the FIFO behavior.
If you're asking how to verify this empirically for purely theoretical purposes, just do something that ties up the CPUs and frees them up one by one:
import Foundation
import QuartzCore
import os.log

// utility function to spin for a certain amount of time
func spin(for seconds: TimeInterval, message: String) {
    let start = CACurrentMediaTime()
    while CACurrentMediaTime() - start < seconds { }
    os_log("%@", message)
}

// my concurrent queue (the label is illustrative)
let queue = DispatchQueue(label: "com.example.fifo-demo", attributes: .concurrent)

// just something to occupy the CPUs, with varying
// lengths of time; don’t worry about these re FIFO behavior
for i in 0 ..< 20 {
    queue.async {
        spin(for: 2 + Double(i) / 2, message: "\(i)")
    }
}

// Now, add three tasks on the concurrent queue, demonstrating FIFO
queue.async {
    os_log(" 1 start")
    spin(for: 2, message: " 1 stop")
}

queue.async {
    os_log(" 2 start")
    spin(for: 2, message: " 2 stop")
}

queue.async {
    os_log(" 3 start")
    spin(for: 2, message: " 3 stop")
}
You'll be able to see those last three tasks are run in FIFO order.
The other approach, if you want to confirm precisely what GCD is doing, is to refer to the libdispatch source code. It's admittedly pretty dense code, so it's not exactly obvious, but it's something you can dig into if you're feeling ambitious.