Waiting period between rebounds and total time until an error is logged - flowground

I understood that with rebounds the processing can be repeated a maximum of 10 times before an error is actually logged.
But how long is the waiting period between the individual rebounds, and how long is the total time between the first and the last attempt?

The waiting time between rebound attempts doubles with every iteration, up to a maximum of 10 minutes.
The wait time for the first iteration is 15 seconds.
The number of iterations is limited to 20.
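Taken together, those three numbers pin down the worst-case schedule. Here is a small sketch (C++ purely for illustration) that sums the doubling waits; it assumes one wait period before each of the 20 iterations, which is an assumption rather than something stated above.

#include <algorithm>
#include <cstdio>

int main() {
    const int firstWaitSec = 15;       // wait before the first retry
    const int maxWaitSec   = 10 * 60;  // doubling is capped at 10 minutes
    const int iterations   = 20;       // rebound iterations are limited to 20

    int waitSec = firstWaitSec;
    long totalSec = 0;
    for (int i = 1; i <= iterations; ++i) {
        totalSec += waitSec;
        std::printf("retry %2d after %4d s (cumulative %5ld s)\n", i, waitSec, totalSec);
        waitSec = std::min(waitSec * 2, maxWaitSec);
    }
    std::printf("total wait across all iterations: ~%.1f hours\n", totalSec / 3600.0);
}

Under that assumption the waits are 15, 30, 60, 120, 240 and 480 seconds, then 600 seconds for every remaining iteration, i.e. roughly 2 hours 36 minutes between the first and the last attempt.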

Related

How can I calculate rate (events per minute) in Google Sheets?

I am working on creating a spreadsheet template for a video observation tool that my organization will use. Specifically, we will watch ~20-minute long videos, and record the rate (occurrences per minute) of certain behaviors within subsections of the video. For example, "in the clip from 2:06 to 4:30, the speaker asked the audience an average of 2.5 questions per minute."
I think it would be easiest for users to denote individual clips by providing start and end times (e.g. Start: 22:40 End: 23:02). Users should be able to input a count of certain occurrences, and then the spreadsheet will divide that number by the time elapsed and calculate a rate per minute. That is to say, if the speaker asked 8 questions between the timestamps 22:40 and 24:20, the spreadsheet should return a value of 8/(1.67 minutes) = 4.8 questions per minute.
I'm having trouble figuring out a way to enter time values in Google Sheets without it treating them as actual times in a 24-hour day. For example, 22:40 shouldn't refer to 00:22:40am nor to 10:40pm; I just mean 22 minutes and 40 seconds. I guess in theory, I would need it to treat the End Time as x-many minutes (or fractions of a minute) after a given Start Time, so it would need to calculate the total number of seconds elapsed between two mm:ss values and divide that sum by 60 to get the time elapsed in minutes. Then, I could simply divide the count of occurrences (e.g. 8 questions) by that number (1.67 minutes), and get my answer.
Does anyone have any tips about how this could be done? Thank you so much for your help!!
Current State:
Start Time: 22:40
End Time: 24:20
Questions Asked: 8
When I enter =8/(End Time - Start Time), I get 0:00 for some reason. I want it to return 4.8.
Format those durations as Format > Number > Duration. Enter durations complete with elapsed hours, minutes and seconds, as in 0:22:40 and 0:24:20.
You can then calculate events per minute like this:
=E2 / 24 / 60 / (T2 - S2)
...where E2 is the total number of events, S2 is the start moment, and T2 is the end moment.
Format the formula cell as Format > Number > Number.
See this answer for an explanation of how date and time values work in spreadsheets.
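The division by 24 and 60 works because Sheets stores durations as fractions of a day, so T2 - S2 is the elapsed time in days and multiplying by 24 × 60 converts it to minutes. Here is a quick sketch of the same arithmetic outside the spreadsheet (the values are the ones from the example in the question):

#include <cstdio>

int main() {
    // Durations are stored as fractions of a day: 0:22:40 and 0:24:20.
    const double start  = (22.0 * 60 + 40) / 86400.0;  // S2
    const double end    = (24.0 * 60 + 20) / 86400.0;  // T2
    const double events = 8.0;                          // E2

    const double elapsedMinutes = (end - start) * 24 * 60;  // about 1.667 minutes
    const double perMinute = events / elapsedMinutes;        // same as =E2/24/60/(T2-S2)

    std::printf("elapsed: %.3f minutes, rate: %.1f events per minute\n",
                elapsedMinutes, perMinute);                  // 1.667 minutes, 4.8 per minute
}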

A Time Series with a strange behaviour

I hope everyone is doing well.
I am working on a time series project to predict, for each hour, the waiting time (idle time) of a zone.
The idle time of a zone at a given hour is the average idle time of the vehicles that start to wait at that hour in that zone, and the idle time of a vehicle is the amount of time the vehicle has to wait in that zone before being booked. For example, if at 16h00 we predict a value of 90 minutes for zone A, it means a vehicle that starts to wait in this zone between 16h00 and 17h00 will wait 90 minutes to be booked.
For our idle time (our ground truth) at a given hour B, we have to wait 2 days (48 hours) to establish the complete ground truth value for hour B, since we have to wait up to two days for vehicles that start to wait at B and are not yet booked. So each time we want to make a prediction, the last 48 points are unstable. For example, if we want to make a prediction at time n, the ground truth of n-1 is partial and incomplete, and we have to wait 48 - 1 = 47 hours to establish the final value of the waiting time at n-1.
To sum up the problem: the recent past data at prediction time is still changing rather than fixed.
(The original post included an image illustrating the setup described above.)
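To make the "unstable points" explicit, here is a small sketch of the maturation window described above (the variable names and the current hour index are made up for illustration):

#include <cstdio>

int main() {
    const int maturationHours = 48;  // a label is final once 48 hours have passed
    const int n = 500;               // hypothetical current hour index (prediction time)

    // The label for hour h is final only when n - h >= maturationHours;
    // anything more recent can still grow as late bookings come in.
    for (int h = n - 1; h > n - maturationHours; --h) {
        int hoursUntilFinal = maturationHours - (n - h);
        std::printf("hour %d: provisional, final in %d more hour(s)\n", h, hoursUntilFinal);
    }
    // e.g. the label for hour n-1 settles only after another 47 hours, matching 48 - 1 = 47.
}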
My questions are:
1. Is this kind of behaviour known in the time series field? If so, does it have a specific name?
2. How can stable and unstable points be mixed in order to make accurate predictions?
Any suggestions? Thank you in advance. :)

How much computing time does the kernel need

I wrote a program for an LED display. The program allows the refresh rate to be set via a web configuration. To meet the refresh rate, I measure the processing time of one loop iteration; at the end I calculate the delay and wait until the next loop.
E.g. a refresh rate of 5 Hz means 200 milliseconds for one loop; 50 milliseconds of computing time results in a 150 millisecond delay.
The ratio of processing time (50 milliseconds) to total time (200 milliseconds) indicates the processor load of my program. But to find the optimal setting, I need the actual total processor load, not only that of my program. Since I don't know the real processor load inside delay() (in which WiFi etc. is handled), I don't really know the overall load. In other words, I don't know how much time the system spends on system tasks during the delay(150).
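For context, this is roughly the timing scheme described above, written as a minimal sketch assuming the Arduino framework (millis()/delay()); it only shows how the remaining delay is computed, not how much of that delay the system itself consumes.

// Minimal sketch of the refresh-rate loop described above (Arduino framework assumed).
const unsigned long refreshHz = 5;                 // set via web configuration in the real program
const unsigned long periodMs  = 1000 / refreshHz;  // 200 ms per frame at 5 Hz

void setup() {}

void loop() {
    unsigned long start = millis();

    // ... drive the LED display here ...

    unsigned long workMs = millis() - start;       // e.g. ~50 ms of processing
    if (workMs < periodMs) {
        // Remaining budget, e.g. 150 ms. How much of this delay() the system
        // (WiFi stack etc.) actually uses for its own tasks is the open question.
        delay(periodMs - workMs);
    }
}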
Is there a way to find out how much of a delay is actually used for system tasks before the processor truly waits?
In other words, I'm looking for a way to get the kernel time within a certain time frame.
Cheers Gabriel

How to write bosun alerts which handle low traffic volumes

If you are writing a bosun alert that is based on a percentage error rate for requests handled by your system, how do you write it in such a way that it handles periods of low traffic?
For example:
If I have an alert that looks back over the last 5 minutes and works out the error rate for requests,
$errorRate = $numberErr/$numberReq, and then triggers an alarm if the error rate exceeds a predefined threshold, crit = $errorRate > 0.05, this can work quite well as long as every 5-minute period has a sufficiently large number of requests ($numberReq).
If the number of requests in a 5-minute period was 10,000, then 501 errors would be required to trigger an alarm. However, if the number of requests in a 5-minute period was only 100, then just 6 errors would be enough to trigger it.
How can I write an alert that handles periods where the number of requests is so low that a small number of errors equates to a large error rate? I had considered a sliding window of time, rather than a fixed 5-minute period, where the window would increase in size until the number of requests was high enough to give some confidence in the alarm, e.g. increase the time period until the number of requests is 10,000.
I can't find a way to achieve this in bosun, and I don't want to commit to a larger period of time for my alerts because the traffic rate varies so much. A longer period during peak traffic could result in an actual error causing a much larger impact.
I generally pair any percentage-based and/or history-based alerts with a static threshold.
For example: crit = $numberErr > 100 && $errorRate > 0.05. That way the percentage part doesn't matter unless the number of errors has also crossed some threshold, because otherwise the entire statement won't be true.
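To see what that buys you at the two traffic levels from the question, here is a small illustration of the combined condition (plain C++, not bosun syntax):

#include <cstdio>

// Combined condition from the answer: alert only when both the absolute
// error count and the error rate are over their thresholds.
bool shouldAlert(long numberErr, long numberReq) {
    const long   minErrors = 100;   // static floor
    const double maxRate   = 0.05;  // 5% error rate
    double errorRate = numberReq > 0 ? static_cast<double>(numberErr) / numberReq : 0.0;
    return numberErr > minErrors && errorRate > maxRate;
}

int main() {
    std::printf("%d\n", shouldAlert(501, 10000));  // 1: high traffic, genuine problem
    std::printf("%d\n", shouldAlert(6, 100));      // 0: low traffic, rate alone would have fired
}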

How does Google Analytics calculate requests/hits frequency?

I want to use Google Analytics to track my iOS application hits.
I've read the Google Analytics Collection Limits and Quotas article. It says:
Each property starts with 60 hits that are replenished at a rate of 1 hit every 2 seconds. Applies to All hits except for ecommerce (item or transaction)
It is not quite clear to me what "1 hit every 2 seconds" means.
Here is what I think:
1 hit every 2 seconds = 0.5 hits per second
frequency (hits per second) = number of hits / time interval (seconds)
So my question is:
What time interval does Google Analytics use to calculate hits frequency?
Is it the time elapsed since the session start? Or the time for the current day? Or is it calculated every 2 seconds?
I believe this rate limiting happens on the client (via the SDK) and not on the server. Server side limits exist, but they apply equally to all clients (so not iOS-specific).
The 60 hits + 1 per 2 seconds rule means that when you instantiate the tracker object in your app, it starts out with a 60 hit quota, and it adds 1 additional hit every 2 seconds.
As an example, if you instantiated the tracker, and the user didn't do anything for 10 seconds, you'd have 65 hits left in your quota. If the user then performed 10 actions within the next 10 seconds, you'd be back to 60 hits left in your quota. Does that make sense?
So to answer your ultimate question: it's not about a time interval, it's about when the clock starts, and that happens when the tracker object is created on the client.
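One way to picture that description is a simple token bucket: start with 60 tokens, gain one every 2 seconds, spend one per hit. The sketch below just replays the example numbers from this answer; it is an illustration, not the SDK's actual implementation, and whether the quota has an upper ceiling is not stated here.

#include <cstdio>

int main() {
    double quota = 60.0;              // each property starts with 60 hits
    const double refillPerSec = 0.5;  // replenished at 1 hit every 2 seconds

    quota += 10 * refillPerSec;       // 10 s of inactivity -> 65 hits left
    std::printf("after 10 s idle: %.0f hits\n", quota);

    quota += 10 * refillPerSec;       // the next 10 s also refill 5 hits...
    quota -= 10;                      // ...while 10 tracked actions spend 10
    std::printf("after 10 hits in the next 10 s: %.0f hits\n", quota);  // back to 60
}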
