How to get AVPlayer buffer time needed before playback starts - iOS

I am trying to measure how long an AVPlayerItem buffered before it began playback.
A trivial solution would be to save a timestamp when buffering begins and compare it against a second timestamp taken either when playbackLikelyToKeepUp first becomes true, or when a boundary registered via addBoundaryTimeObserverForTimes with a 1 ms offset fires.
Since this is something I want to use in a production environment (to track performance metrics), I would rather not start a bunch of timers that could degrade app performance.
Is there a way to achieve this by using KVO or some other method?
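One low-overhead possibility is plain KVO on isPlaybackLikelyToKeepUp: record a timestamp when buffering starts and another when the flag flips to true, with no repeating timers involved. A minimal sketch (BufferTimeObserver and its workings are illustrative, not a framework API):

```swift
import AVFoundation
import QuartzCore

/// Illustrative sketch: measure buffer time with a single KVO observation
/// instead of polling timers. Call `observe(_:)` at the moment playback
/// is requested.
final class BufferTimeObserver: NSObject {
    private var observation: NSKeyValueObservation?
    private var bufferingStart: CFTimeInterval?

    func observe(_ item: AVPlayerItem) {
        bufferingStart = CACurrentMediaTime()  // buffering begins now
        observation = item.observe(\.isPlaybackLikelyToKeepUp, options: [.new]) { [weak self] _, change in
            guard change.newValue == true, let start = self?.bufferingStart else { return }
            let bufferTime = CACurrentMediaTime() - start
            print("Buffered for \(bufferTime) s before playback")
            self?.bufferingStart = nil  // report only the first transition
        }
    }
}
```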

Related

AudioKit metronome synced with time-pitched audio loops

I'm making an app that plays synced audio loops with a metronome. For example, I might have 3 files like this:
bass_60bpm.m4a
drums_60bpm.m4a
guitar_60bpm.m4a
And a metronome sound tick.m4a, which I play with AKSamplerMetronome.
I need to play them back at arbitrary tempos, so I use AKTimePitch on the AKAudioFiles (so when playing at 90 bpm, I'd play bass_60bpm.m4a at 1.5x).
This almost works, but after 3-5 loops the metronome drifts out of sync with the audio loops. I think I understand why that happens (audio_sample_length * floating_point_number is not exactly equivalent to AKSamplerMetronome's tempo calculations), but I don't know how to fix it.
What I suspect I need to do is manually reimplement some or all of AKSamplerMetronome, playing the metronome ticks based on AKTimePitch's output, but I can't piece together enough information from the API, docs, and examples to make it happen.
An alternate approach might be to use AKSequencer instead of AKSamplerMetronome. The MIDI output of the sequencer's track can be sent to an AKCallbackInstrument, and the sequencer's events can then drive a callback function that triggers both the time-stretched samples and the metronome ticks (as a bonus, you can also trigger synchronized UI events from there). This guarantees that they stay in sync.
Apple's MusicSequence, which is what AKSequencer uses under the hood, is a little flaky with its timing immediately after you call play, but it's pretty solid after that. If you start the sequencer just before its looping point (i.e., if you have a 1-bar loop, start it one sixteenth note before the end of the first bar), then you can get past that flakiness before the actual loop starts.
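For what it's worth, a rough sketch of that wiring against the AudioKit 4 API (in later AudioKit releases the sequencer is called AKAppleSequencer and the callback signature differs; playTick() and restartLoops() are placeholders for your own playback code):

```swift
import AudioKit

// Placeholders for your own playback code.
func playTick() { /* trigger the metronome sample */ }
func restartLoops() { /* retrigger the time-stretched loops */ }

let sequencer = AKSequencer()
let callbackInst = AKCallbackInstrument()

// Raw MIDI bytes arrive here; 0x9 in the high nibble is a note-on.
callbackInst.callback = { status, noteNumber, velocity in
    guard status >> 4 == 0x9, velocity > 0 else { return }
    if noteNumber == 61 { restartLoops() }  // bar line: restart the loops
    else { playTick() }                     // every beat: metronome tick
}

let track = sequencer.newTrack()
track?.setMIDIOutput(callbackInst.midiIn)

// One bar of quarter-note ticks (note 60), plus note 61 on beat 0
// so the loops retrigger exactly on the bar line.
track?.add(noteNumber: 61, velocity: 100,
           position: AKDuration(beats: 0), duration: AKDuration(beats: 0.1))
for beat in 0..<4 {
    track?.add(noteNumber: 60, velocity: 100,
               position: AKDuration(beats: Double(beat)),
               duration: AKDuration(beats: 0.1))
}

sequencer.setTempo(90)                     // target tempo; loops play at 1.5x
sequencer.setLength(AKDuration(beats: 4))
sequencer.enableLooping()
sequencer.play()
```

Because both the ticks and the loop restarts are driven by the same sequencer clock, there is nothing to drift apart.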

iOS 8: Real Time Sound Processing and Sound Pitching - OpenAL or another framework

I'm trying to build an app which plays a sequence of tones in a loop.
Currently I use OpenAL, and my experience with that framework has been positive, since it also lets me pitch-shift the sound.
Here's the scenario:
load a short sound (3 seconds) from a CAF file
play that sound in a loop and apply a pitch shift as well.
This works well, provided that the tick rate isn't too high - I mean a time of more than 10 milliseconds per tone.
However, my NSTimer (which drives the sound sequence) should be configurable - and as soon as the tick rate increases (less than 10 ms per tone), the sound is no longer played back correctly - some tones are even dropped in an apparently random way.
It seems that real-time sound processing becomes an issue.
I'm still a novice in iOS programming, but I believe that Apple imposes a limit concerning time consumption and/or semaphores.
Now my questions:
OpenAL is written in C - so far, I haven't understood the whole code and philosophy behind that framework. Is there a possibility to resolve the problem mentioned above by making some modifications - I mean setting flags/values or overriding certain methods?
If not, do you know another iOS sound framework more appropriate for this kind of real-time sound processing?
Many thanks in advance!
I know that this is a quite unusual and difficult problem - maybe someone here has resolved a similar one? Just to emphasize: pitch shifting must be guaranteed!
It is not immediately clear from the explanation precisely what you're trying to achieve. Some code is expected.
However, your use of NSTimer to sequence audio playback is clearly problematic. It is intended neither as a reliable nor as a high-resolution timer.
NSTimer delivers events through a run-loop queue - probably your application's main queue - where they contend with user interface events.
As the main thread is not a real-time thread, it may not even be scheduled to run for some time.
There may be quantisation effects on the delay you requested, meaning your events effectively round to zero clock ticks and get scheduled immediately.
Periodic timers have deleterious effects on battery life; iOS and OS X both take steps to reduce their impact through timer coalescing.
The clock you should be using for sequencing events is the playback sample clock, which is available in the render handler of whatever framework you use. Besides being reliable, this is also efficient, since the render handler runs periodically anyway, in a real-time thread.
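As an illustration of sequencing off the sample clock, here is a minimal sketch using AVAudioSourceNode. That class postdates the iOS 8 era of the question, but the same pattern applies to a RemoteIO render callback, which was the period-appropriate route: a frame counter advances inside the render block, and a short burst is written whenever it crosses a tick boundary.

```swift
import AVFoundation

let engine = AVAudioEngine()
let format = engine.outputNode.outputFormat(forBus: 0)
let samplesPerTick = Int(format.sampleRate * 0.010)  // one tone every 10 ms
var sampleCounter = 0

// All timing is derived from the sample counter, so tick spacing is exact
// regardless of how the render thread is scheduled.
let source = AVAudioSourceNode(format: format) { _, _, frameCount, audioBufferList -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        let phase = sampleCounter % samplesPerTick
        // A short decaying sine burst at the start of each tick period.
        let value: Float = phase < 128
            ? sinf(Float(phase) * 0.5) * (1.0 - Float(phase) / 128.0)
            : 0.0
        for buffer in buffers {
            buffer.mData?.assumingMemoryBound(to: Float.self)[frame] = value
        }
        sampleCounter += 1
    }
    return noErr
}

engine.attach(source)
engine.connect(source, to: engine.mainMixerNode, format: format)
try? engine.start()
```

Pitch shifting then becomes a matter of changing what is written per tick (e.g. varying the burst's phase increment), still inside the render block.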

Precisely scheduling sound in iOS 7

I'm working on an iOS 7-only app that needs to display a clock, complete with a ticking sound. I've used an NSTimer with a 1 s interval, and I use AVAudioPlayer to play the tick sound every second.
Unfortunately, there's something slightly off with the timing. I've measured that the timer is off by between 2 and 22 thousandths of a second, which you wouldn't think would matter a great deal, but the lag creates a nail-biting tension... kind of like a heart flutter :-)
I've looked around a bit, but it sounds like using Audio Queue Services is the only way to go... and I really don't fancy delving into the depths of that particular framework again.
My question: Is there some other way of getting precisely scheduled sound events in iOS 7, and failing that, is there a decent wrapper framework for Audio Queue Services available somewhere? Or better still, is there a way of more precisely scheduling NSTimers?
Using any of NSTimer, libdispatch, or spawning a thread that sleeps for the tick duration relies on the underlying thread getting scheduled in time. The kernel provides no guarantee of this, and it is not surprising that you observe timing jitter; the latency you report looks reasonable.
NSTimer running on the main thread is likely to perform worst of these, as you are also contending against other events delivered through it.
I think your options here are to use Audio Queue Services, to schedule the events with AVAudioPlayer from a real-time thread, or to render the audio yourself to a RemoteIO unit.
I don't think AVPlayer provides any particular guarantees about timing either.
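On the AVAudioPlayer route, one documented way to tighten things up is play(atTime:), which anchors the start of playback to the audio device clock rather than to when the timer happens to fire. A rough sketch (the 1 s spacing follows the question; the driving timer only needs to be approximately on time):

```swift
import AVFoundation

// Sketch: schedule each tick against AVAudioPlayer's device clock, so
// NSTimer jitter no longer shifts the audible event.
final class TickScheduler {
    private let player: AVAudioPlayer
    private var nextTickTime: TimeInterval

    init(tickURL: URL) throws {
        player = try AVAudioPlayer(contentsOf: tickURL)
        player.prepareToPlay()
        // First tick slightly in the future on the device clock.
        nextTickTime = player.deviceCurrentTime + 0.1
    }

    /// Call from a coarse repeating timer, any time before the next tick
    /// is due; the hardware clock handles the precise start.
    func scheduleNextTick() {
        player.play(atTime: nextTickTime)
        nextTickTime += 1.0  // advance on the device clock, not wall time
    }
}
```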

timing events in an audioQueue

I have created an iOS 5/iOS 6 app with a display that responds to changes in the musical pitch performed by the user. It uses the record function in the sample SpeakHere code but does not actually save a file because it is designed to respond in real time.
I would now like to extend this app to respond simultaneously to the pitch itself and to the duration for which the same pitch is sustained (for example, changing the color when the same pitch is held steadily for a minimum period of time). I have been reading about NSTimer and NSDate functions, which seem straightforward, as well as AudioTimeStamp functions, which are apparently C-based and which I find very confusing. Based on other posts, it seems like NSTimer and NSDate checks might cause the display's real-time response to an actual musical performance to lag. How about dispatch_after? Could I expect the block to execute at the scheduled time?
My question is, what approach is most likely to yield the desired result of allowing me to measure duration of a particular pitch in the AudioQueue and update my display continuously in real time? Do I need to be saving to a file for this to work?
I am self-taught and have only been programming for a few months, so no matter what I will have to do a lot of learning of APIs/C language features that are new to me. I'm hoping someone can point me in a fruitful direction. Thanks!
You're definitely getting into pretty advanced stuff here. Here are a few thoughts:
Your audio processing seems to be the most intensive operation. Because it needs to be continuous, you're probably going to have to do it in another thread. By processing, I mean examining the audio to determine pitch.
Once you've identified the pitch, you should store the time for which it began.
Then, in the main thread, set up an NSTimer that repeats continuously, and in the NSTimer's fire method, subtract the pitch's start date from the current date to get the elapsed time as an NSTimeInterval.
Send the NSTimeInterval to your display logic so that you can update the color on screen.
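Put together, the shape of that design might look something like the sketch below. The names and the same-pitch tolerance are invented for illustration; updateDisplay(elapsed:) stands in for your display logic.

```swift
import Foundation

final class PitchDurationTracker {
    private var currentPitch: Double?
    private var pitchStart: Date?

    /// Called from the audio-processing thread whenever a pitch is detected.
    func pitchDetected(_ pitch: Double) {
        DispatchQueue.main.async {
            // Treat pitches within ~3% (about half a semitone) as the same.
            if let current = self.currentPitch, abs(pitch - current) / current < 0.03 {
                return  // same pitch still sustained; keep the start date
            }
            self.currentPitch = pitch  // new pitch: restart the clock
            self.pitchStart = Date()
        }
    }

    /// Fired by a repeating Timer on the main thread.
    func timerFired() {
        guard let start = pitchStart else { return }
        let sustained: TimeInterval = Date().timeIntervalSince(start)
        updateDisplay(elapsed: sustained)  // e.g. change color past a threshold
    }

    private func updateDisplay(elapsed: TimeInterval) {
        // Update the on-screen color here.
    }
}
```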
Some things to check out:
Beginner's tutorial on multi-threading and Grand Central Dispatch on iOS
NSTimer
Using NSTimers
Hope that helps you out!

Stopping a YouTube video once it reaches a point

It looks like the YouTube API does not have a way to stop a video once it reaches a certain point. It has a way to start a video at a certain point, but not to stop it at one. I'm wondering if there's a workaround for this, or maybe I overlooked something?
You could poll the elapsed time repeatedly with player.getCurrentTime() and, when it reaches the point you want, call player.stopVideo(). If that's a little too busy, you could use a timer and only start polling after a certain amount of time has elapsed.
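A minimal sketch of that polling loop (VideoPlayer here is a hypothetical wrapper exposing the two player calls used above; on a web page those calls live on the YouTube JavaScript player object):

```swift
import Foundation

// Hypothetical wrapper mirroring the two YouTube player calls used above.
protocol VideoPlayer {
    func getCurrentTime() -> TimeInterval
    func stopVideo()
}

/// Polls the player a few times per second and stops it at `stopPoint`.
func stopPlayer(_ player: VideoPlayer, at stopPoint: TimeInterval) {
    _ = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { timer in
        if player.getCurrentTime() >= stopPoint {
            player.stopVideo()
            timer.invalidate()  // stop polling once playback is stopped
        }
    }
}
```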
