what's the maxGranularity/minGranularity? - media-source

I'd like to know the definition of maxGranularity/minGranularity in MSE.
When I googled it, I found https://yt-dash-mse-test.commondatastorage.googleapis.com/unit-tests/2017.html?test_type=progressive-test&timestamp=1505692969966 which looks like Google's MSE test page.
Could somebody guide me about 'granularity'?
If there's a keyword for that, please tell me.
Thanks:)

This has more to do with the HTMLMediaElement (<video>) than it does MSE.
The media element will have a timeupdate event, which is regularly fired as playback occurs. It allows you to update a text field or something with the current time offset of the media.
This isn't an event that fires every frame... it typically fires 3-5 times a second, and it can fire less often if the tab isn't active or the browser so chooses.
Granularity just indicates roughly how frequently that event is going to fire while the video runs at different speeds. The test makes sure the event is still firing even if the playback rate is very low or very high.


What would be a good way to handle "snoozed" functions?

I've tried a few solutions so far but I'm not sure if there is a "right" way. I'm working on a todo list with a snooze function. Each task has a "wake from snooze" time when I need to run a function to move it into a different list.
I've tried looping through each task and moving it when viewWillAppear and/or viewDidAppear runs. This generally works, but it seems to misfire occasionally, and it doesn't run while the app is open but the user isn't switching views.
I've tried using a Timer to move items at their specific wake times, which only works while the app is running.
I've tried moving the items in the background using performFetchWithCompletionHandler and checking every hour.
Anything else I should try to make this run more smoothly? Is it a combination of all of these? If that's the case, how would I make sure that I'm not trying to move items twice at the same time?
In general the question seems to be how to ensure that something happens at a certain date-time.
Work out what the target date-time is and write it down somewhere stable (a file, user defaults, whatever) along with what has to happen then.
Watch the clock. A timer that fires every minute will probably be good enough; just look at the clock and see if that time has passed. If it has, do the thing and erase the info saying that this thing needs to happen at this date-time.
If the app goes into the background, stop the timer.
When the app comes to the foreground again, check your date-time.
If the time has passed, do the thing and erase the info saying that this thing needs to happen at this date-time.
If not, start the timer again and keep watching the clock.
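A minimal sketch of that approach in Swift, assuming a hypothetical SnoozedTask model, a "snoozedTasks" key in UserDefaults, and a moveToActiveList hook (all names are illustrative, not from the question):

```swift
import Foundation

// Hypothetical task model: an id plus the "wake from snooze" time.
struct SnoozedTask: Codable {
    let id: UUID
    let wakeDate: Date
}

final class SnoozeManager {
    private var timer: Timer?
    private let defaults = UserDefaults.standard
    private let storageKey = "snoozedTasks"   // assumed key name

    // Write the wake time down somewhere stable so it survives restarts.
    func snooze(_ task: SnoozedTask) {
        var tasks = load()
        tasks.append(task)
        save(tasks)
    }

    // Call when the app comes to the foreground: check immediately,
    // then keep watching the clock once a minute.
    func startWatching() {
        processDueTasks()
        timer = Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { [weak self] _ in
            self?.processDueTasks()
        }
    }

    // Call when the app goes into the background.
    func stopWatching() {
        timer?.invalidate()
        timer = nil
    }

    // Move everything whose wake time has passed, and erase it from
    // storage in the same pass so it can never be moved twice.
    private func processDueTasks() {
        let now = Date()
        var tasks = load()
        let due = tasks.filter { $0.wakeDate <= now }
        guard !due.isEmpty else { return }
        tasks.removeAll { $0.wakeDate <= now }
        save(tasks)
        due.forEach { moveToActiveList($0) }
    }

    private func moveToActiveList(_ task: SnoozedTask) {
        // Placeholder: hand the task back to your "awake" list here.
    }

    private func load() -> [SnoozedTask] {
        guard let data = defaults.data(forKey: storageKey),
              let tasks = try? JSONDecoder().decode([SnoozedTask].self, from: data)
        else { return [] }
        return tasks
    }

    private func save(_ tasks: [SnoozedTask]) {
        if let data = try? JSONEncoder().encode(tasks) {
            defaults.set(data, forKey: storageKey)
        }
    }
}
```

Because a due task is erased from storage in the same pass that moves it, the foreground check and the repeating timer can't both move the same item.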

Dataflow Sliding Window vs Global window with trigger usecase?

I am designing a basket abandoning system for an Ecommerce company. The system will send a message to a user based on the below rules:
There is no interaction by the user on the site for 30 minutes.
Has added more than $50 worth of products to the basket.
Has not yet completed a transaction.
I use Google Cloud Dataflow to process the data and decide if a message should be sent. I have a couple of options below:
Use a Sliding window with a duration of 30 minutes.
A global window with a time based trigger with a delay of 30 minutes.
I think a sliding window might work here. But my question is, can there be a solution based on using a global window with a processing-time-based trigger and a delay for this use case?
As far as I understand triggers, based on the Apache Beam documentation:
Triggers allow Beam to emit early results, before a given window is closed. For example, emitting after a certain amount of time elapses, or after a certain number of elements arrives.
Triggers allow processing late data by triggering after the event time watermark passes the end of the window.
So, for my use case and as per the above trigger concepts, I don't think a trigger can fire after a set delay for each and every user (the documentation above mentions emitting after a certain number of elements arrives, but I'm not sure whether that number could be 1). Can you confirm?
Both option 1 (sliding windows) and option 2 (a global window) are incorrect.
Sliding windows are not correct because, assuming there is one key per user, a message will be sent 30 minutes after the user first started browsing, even if they are still browsing.
A global window is not correct because it will cause messages to be sent every 30 minutes to all users, regardless of where they are in their current session.
Even fixed windows would be incorrect in this case, because, assuming there is one key per user, a message would be sent every 30 minutes.
The correct answer would be to use a session window with a gap duration of 30 minutes.
This is correct because it will send a message per user after that user has been inactive for 30 minutes.
I think that a sliding window is the correct approach for what you described, and I don't think you can solve this with trigger + delay. If event-time sliding windowing makes sense from your business logic perspective, try to use it first; that's what it's for.
My understanding is that while you can use a trigger to produce early results, it is not guaranteed to fire at specific (server/processing) time or with exact number of elements (received so far for the window). The trigger condition enables/unblocks the runner to emit the window contents but it doesn't force it to do so.
In the case of event time this makes sense, as it doesn't matter when the event arrives or when the trigger fires, because if the element has a timestamp within a window, then it will be assigned to the correct window no matter when it arrives. And when the trigger fires for the window, the element is guaranteed to be in that window if it has arrived.
With processing time you can't do this. If event arrives late, it will be accounted for at that time, and will be emitted next time the trigger fires, basically. And because the trigger doesn't guarantee the exact moment it fires you can potentially end up with unexpected data belonging to unexpected emitted panes. It is useful to get the early results in general but I am not sure if you can reason about windowing based on that.
Also, a trigger delay only adds a delay to the firing (e.g. if it was supposed to fire at 12:00pm, now it will fire at 12:05pm), but it doesn't allow you to reliably stagger multiple trigger firings so that they go off at specific intervals.
You can look at the design doc for triggers here: https://s.apache.org/beam-triggers , and possibly lateness doc may be relevant as well: https://s.apache.org/beam-lateness
Other docs can be found here, if you are interested: https://beam.apache.org/contribute/design-documents/ .
Update:
Rui pointed out that this use case can be more complicated and is probably not easily solvable with sliding windows. It may be worth looking into session windows, or manual logic on top of keys + state + timers.
I found the state[1] and timer[2] docs for Apache Beam; they should be able to handle this specific use case without using a processing-time trigger in a global window.
Assume the incoming data are events of user actions, and each event (action) can be keyed by user_id.
The nice property of state and timers is that they are maintained per key and window. So you can accumulate state for each user_id; in this case the state is the amount of money in the cart. A timer can be set the first time the amount in the cart exceeds $50, and reset whenever the user performs another shopping action within 30 minutes of processing time.
Assume transaction completion is also a user_id-keyed event. When a transaction-completion event is seen, the timer can be deleted[3].
Update:
This idea lives entirely in the processing-time domain, so it will produce false-alarm messages depending on how much lateness there is in the system. The question, then, is how to move this idea to the event-time domain so that we get fewer false alarms. One possibility is an event-time-based timer[4]; I am not yet clear on what an event-time-based timer means.
[1] https://beam.apache.org/blog/2017/02/13/stateful-processing.html
[2] https://docs.google.com/document/d/1zf9TxIOsZf_fz86TGaiAQqdNI5OO7Sc6qFsxZlBAMiA/edit#
[3] https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/state/Timers.java#L45
[4] https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/state/TimeDomain.java#L33

iOS touchesMoved delay between first and second calls

I have a problem in my iOS app.
I checked the timestamps in touchesMoved with the clock() function and used the difference between the current and previous values. The difference between the 1st and 2nd events is greater than the others.
Do you have any ideas?
iOS is not a hard-realtime OS. There is generally no expectation that touch events will be delivered with regular frequency/uniform periods. You should not structure your app to depend on that. Events are delivered via the main thread, which could be delayed by other processing (drawing, etc.)
EDIT: If you're seeing huge differences in periods between the 1st and 2nd touches (relative to the periods between subsequent touch events), the first step would be to run the app under Instruments' Time Profiler template to see if some work that you're doing on the main thread in response to the first touch is causing the delay. If the delay is of your own doing, then fixing that is the first order of business.
Beyond that, you could try using various signal processing approaches to re-sample your data into uniform periodic data, but the problem is that any of those algorithms won't be able to deliver the first re-quantized point until at LEAST the 2nd (and more likely the 3rd or even later) event comes in, so if the principal problem is "The 2nd event doesn't come as soon as you'd like", then this won't help you.
Another good test: Try making an empty single-view sample app that does nothing but log the events as they come in -- if you see the same "odd" delay between 1st and 2nd touches in that trivial sample app, then that's just "how it is," and you can stop wasting time trying to change the OS behavior and move on to some different approach/solution.
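To make that last test concrete, here is a hedged Swift sketch of a view that does nothing but log the interval between successive touch events, using the touch's own timestamp rather than clock():

```swift
import UIKit

// Drop this into an otherwise empty single-view app and watch the console.
final class TouchLoggingView: UIView {
    private var lastTimestamp: TimeInterval?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        lastTimestamp = touches.first?.timestamp
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let timestamp = touches.first?.timestamp else { return }
        if let last = lastTimestamp {
            // Interval (in seconds) since the previous touch event.
            print("delta: \(timestamp - last)")
        }
        lastTimestamp = timestamp
    }
}
```

If the gap between the 1st and 2nd deltas is just as large here, it's simply OS behavior; if it isn't, the extra delay is coming from your own main-thread work.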

What is the best method of synchronizing audio across iOS devices with WiFi?

Basically, for my team's app, we need to be able to synchronize music across multiple iOS devices. The first way we did this was by having the music on all the devices already and just sending a play command to all the devices. Some would get it later than others, so that method did not work. There was an idea mentioned to calculate the latency between all the devices and send the commands at the appropriate times based on the latency.
The second way proposed would be to stream the music. If we were to implement streaming, how should we go about doing it? Should we use Audio Units, OpenAL, etc.? Also, if we were streaming, how would we make sure that each device's stream stays in sync?
Basically, the audio has to be in sync so that the person hearing it cannot differentiate between the devices. A few milliseconds off should not be a problem (unless the listener has super-human hearing).
You'd be amazed at how good the human ear is at spotting audio anomalies...
Sync the time of day
Effectively you're trying to meet a real-time requirement with a whole load of very variable things in the way (WiFi, etc.). I strongly suspect the only way you're going to get close to doing this is to issue a 'play' instruction that includes a particular time to start playing. Of course, that relies on all the clocks being accurately set.
NTP
I don't know how iPhones get their time of day. If they use (or could use) NTP then you'll be getting close. NTP is designed to convey accurate time of day information over a network despite variable network delays. I've had a quick look and it seems that most NTP clients for iOS are the simple ones, not the full NTP that measures and tunes out network delays, clock drifts, etc.
GPS
Alternatively GPS is also a very good source of time information. Again I don't know if iPhones can or do use GPS for setting their clock but if it could be done then that would likely be pretty good. On Solaris (and I think Linux too) the 1 pulse per second that most GPS chips generate from the GPS signal can be used to condition the internal OS clock, making it very accurate indeed (sub microsecond accuracy).
I fear that iPhones don't do either of these things natively; both involve using a fair bit of electricity, so I wouldn't be surprised if they did something else less sophisticated.
Cell Time Service
Some Cell networks provide a time service too, but I don't think it's designed for accurate time setting. Also it tends not to be available everywhere. You often find it at major airports so that recent arrivals get their phones set to something close to local time.
Play at time X
So if one of those could be used to ensure that all the iPhones are set to exactly the same time of day then all you have to do is write your software to start playing at a specific time. That will probably involve polling the clock in a very tight loop waiting for it to tick over; most OSes don't provide a means of sleeping until a specific time. They do at least allow for sleeping for a period of time, which can be used to sleep until close to the appointed time. You'd then start polling the clock until the right time is reached.
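A minimal sketch of that sleep-then-poll loop in Swift, assuming the device clocks have already been synchronized; startPlayback is a placeholder for whatever actually starts your audio:

```swift
import Foundation

// Sleep for most of the remaining time, then spin on the clock for the
// last few milliseconds so playback starts as close to `target` as possible.
func play(at target: Date, startPlayback: () -> Void) {
    let remaining = target.timeIntervalSinceNow
    if remaining > 0.05 {
        Thread.sleep(forTimeInterval: remaining - 0.05)
    }
    while Date() < target {
        // busy-wait until the appointed time ticks over
    }
    startPlayback()
}
```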
Delay Measurement and Standard Deviation
Your first method is doomed, I think. You might be able to measure average delays and so forth, but that doesn't mean every message has exactly the same latency. The standard deviation in the latency will tell you what you can expect to achieve, and I don't think it's going to be particularly small. If that's the case, the message has got to include a timestamp.
NTP can work because it's only interested in the average delay measured over a period of time (hours sometimes), whereas you're interested in instantaneous delay.
Streaming with RTP
Your second method may work if you can time sync the devices as discussed above. The RTP protocol was designed for use in these circumstances; it doesn't help with achieving sync, but it does help a lot with the streaming. It tells you whereabouts in the stream any one piece of received data fits, allowing you to play it at the right time.
Clock Drift
Another problem to deal with is how long you're playing for. If it's a long time then you may discover that the 44kHz (or whatever) audio clock rate on each device isn't quite the same. So, whilst you might find a way of starting to play all at the same time, the separate devices will then start diverging ever so slightly. Over a long period of time they may be noticeably out.
Bluetooth
It might be possible to do something with Bluetooth. It has many weird and wonderful profiles, and it might be that one of them would serve to send an accurate 'start now' message.
Audio Trigger
You might also use sound as a means of conveying a start signal. One device can play a particular sound whilst your software in the others is listening with the mic. When a particular feature is detected in the sound, that's the time for everyone to start playing. Sort of a computerised "1, 2, a 1 2 3 4".
Camera Flash
Should be easy to spot in software...
I think your first way would work if you expand it a little bit. Assuming all the clocks on the devices are in sync, you could include a timestamp in your play command. Then each device would calculate the time between the timestamp and when it received the command. You would then play the music, offset by that time difference.
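A hedged sketch of that expanded first method, assuming the clocks are in sync and the track is already loaded into an AVAudioPlayer (the command format here is illustrative):

```swift
import AVFoundation
import Foundation

// The sender includes the moment it issued the "play" command; the receiver
// measures how late the command arrived and starts playback that far into
// the track, so all devices line up on the same point in the audio.
func handlePlayCommand(sentAt commandTimestamp: Date, player: AVAudioPlayer) {
    let latency = Date().timeIntervalSince(commandTimestamp)
    player.currentTime = max(0, latency)   // skip the audio missed in transit
    player.play()
}
```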

timing events in an audioQueue

I have created an iOS 5/iOS 6 app with a display that responds to changes in the musical pitch performed by the user. It uses the record function in the sample SpeakHere code but does not actually save a file because it is designed to respond in real time.
I would now like to extend this app to respond simultaneously to the pitch itself and the duration that the same pitch is sustained (for example, changing the color when the same pitch is held steadily for a minimum period of time). I have been reading about NSTimer and NSDate functions, which seem straightforward, as well as AudioTimeStamp functions, which are apparently C based and which I find very confusing. Based on other posts, it seems like NSTimer and NSDate checks might cause the display's real-time response to an actual musical performance to lag. How about dispatchAfter? Could I expect the block to execute at the scheduled time?
My question is, what approach is most likely to yield the desired result of allowing me to measure duration of a particular pitch in the AudioQueue and update my display continuously in real time? Do I need to be saving to a file for this to work?
I am self-taught and have only been programming for a few months, so no matter what I will have to do a lot of learning of APIs/C language features that are new to me. I'm hoping someone can point me in a fruitful direction. Thanks!
You're definitely getting into pretty advanced stuff here. Here are a few thoughts:
Your audio processing seems to be the most intensive operation. Because this processing needs to be continuous, you're probably going to have to do this processing in another thread. By processing, I mean examining the audio to determine pitch.
Once you've identified the pitch, you should store the time at which it began.
Then, in the main thread, set up an NSTimer that repeats continuously, and in the NSTimer's fire method, subtract the pitch's start date from the current date to get the elapsed time, as an NSTimeInterval.
Send the NSTimeInterval to your display logic so that you can update the color on screen.
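A rough Swift sketch of those steps; the pitch tolerance, timer interval, and updateDisplay hook are illustrative assumptions, and an iOS 5/6 app would use the Objective-C equivalents of the same idea:

```swift
import Foundation

final class PitchDurationTracker {
    private var currentPitch: Double?
    private var pitchStartDate: Date?
    private var timer: Timer?

    // Called from the audio-processing thread whenever a pitch is measured.
    // (Real code should synchronize access, since these properties are also
    // read from the main thread.)
    func pitchDetected(_ pitch: Double) {
        if let current = currentPitch, abs(current - pitch) < 1.0 {
            return // same pitch still held; keep the original start date
        }
        currentPitch = pitch
        pitchStartDate = Date()
    }

    // Called once from the main thread to start the repeating display updates.
    func startUpdating() {
        timer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { [weak self] _ in
            guard let self = self, let start = self.pitchStartDate else { return }
            let elapsed: TimeInterval = Date().timeIntervalSince(start)
            self.updateDisplay(heldFor: elapsed)
        }
    }

    private func updateDisplay(heldFor elapsed: TimeInterval) {
        // Placeholder: change the on-screen color once `elapsed`
        // exceeds your minimum hold duration.
    }
}
```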
Some things to check out:
Beginner's tutorial on multi-threading and Grand Central Dispatch on iOS
NSTimer
Using NSTimers
Hope that helps you out!
