iOS AudioQueue doesn't play when packets are enqueued late

I have an app that enqueues packets to an AudioQueue, and it works perfectly. The problem arises when there is a delay in the network and I can't serve packets to the AudioQueue in time.
The application keeps running and the enqueue call doesn't return any error, but the AudioQueue discards the packets (so I get no sound) because they are too old.
Can I force the AudioQueue to play those audio packets, or at least find out that packets are being discarded? If I knew, I could do a pause/play cycle to restart the queue (not a great solution, but I don't have anything better).
Because the delay can be very large, I can't simply use a big buffer; that would reduce the problem but not solve it.
Thank you very much.

You're on the right track. You can handle network delays by detecting in your callback procedure when you have reached the end of your network buffer, then pausing the AudioQueue. Later, restart the queue once enough packets have been buffered. Your callback code would look something like this:
if playerState.packetsRead == playerState.packetsWritten {
    playerState.isPlaying = false
    AudioQueuePause(aq)
}
And in your network code
if playerState.packetsWritten >= playerState.packetsRead + kNumPacketsToBuffer {
    if !playerState.isPlaying {
        playerState.isPlaying = true
        for buffer in playerState.buffers {
            audioCallback(playerState, aq: playerState.queue, buffer: buffer)
        }
        AudioQueueStart(playerState.queue, nil)
    }
}
You would also need to update your code so that playerState.packetsWritten is incremented every time you receive a packet from the network, and playerState.packetsRead every time you enqueue a packet to the audio queue. The optimal value for kNumPacketsToBuffer depends on the codec and network conditions; I would recommend starting with 256 for AAC and adjusting up or down based on performance.

Related

How to reset a IXAudio2SourceVoice's 'SamplesPlayed' counter after flushing source buffers?

IXAudio2SourceVoice has a GetState function which returns an XAUDIO2_VOICE_STATE structure. This structure has a SamplesPlayed member, which is:
Total number of samples processed by this voice since it last started, or since the last audio stream ended (as marked with the XAUDIO2_END_OF_STREAM flag).
What I want to be able to do is stop the source voice, flush all its buffers, and then reset the SamplesPlayed counter to zero. Neither calling Stop nor FlushSourceBuffers will by themselves reset SamplesPlayed. And while flagging the last buffer with XAUDIO2_END_OF_STREAM does correctly reset SamplesPlayed back to zero, this seemingly only works if that last buffer is played to completion; if the buffer is flushed, SamplesPlayed does not get reset. I have also tried calling Discontinuity both before and after stopping/flushing, with no effect.
My current workaround is, after stopping and flushing the source voice, to submit a tiny 1-sample silent buffer with the XAUDIO2_END_OF_STREAM flag set and then let the source voice play to process that buffer and thus reset SamplesPlayed to zero. This works fine-ish for my use case, but it seems pretty hacky/clumsy. Is there a better solution?
Looking at the XAudio2 source, there's no exposed way to do that in the API other than letting a packet play with XAUDIO2_END_OF_STREAM.
Calling Discontinuity sets the end-of-stream flag on the currently playing buffer, or, if nothing is playing, on the next queued buffer. You need to call Discontinuity and then let the voice play to completion before you recycle it.

Implementing Callback for AuAudioBuffer in AVAudioEngine

I recently watched the WWDC 2014 session "AVAudioEngine in Practice", and I have a question about the concept explained there of using AVAudioBuffers with a node tap installed on the input node.
The speaker mentioned that it's possible to notify the app module using a callback.
My question is: instead of waiting for the callback until the buffer is full, is it possible to notify the app module after a certain amount of time in milliseconds? Once the AVAudioEngine is started, can I configure/register a callback on this buffer for every 100 milliseconds of recording, so that the app module gets notified to process the buffer every 100 ms?
Has anyone tried this before? Let me know your suggestions on how to implement this. It would be great if you could point out some resources for this logic.
Thanks for your support in advance.
-Suresh
Sadly, the promising bufferSize argument of installTapOnBus, which in principle should let you choose a buffer size corresponding to 100 ms:
input.installTapOnBus(bus, bufferSize: 512, format: input.inputFormatForBus(bus)) { (buffer, time) -> Void in
    print("duration: \(buffer.frameLength, buffer.format.sampleRate) -> \(Double(buffer.frameLength)/buffer.format.sampleRate)s")
}
is free to be ignored; the documentation notes:
the requested size of the incoming buffers. The implementation may choose another size.
and what you actually get is:
duration: (16537, 44100.0) -> 0.374988662131519s
So for more control over your input buffer size/duration, I suggest you use Core Audio's Remote I/O audio unit.

Is it possible to get audio from an ICY stream with percentage and seek function

I'm trying to play audio from an ICY stream. I can play it with AVPlayer and some good open-source libraries, but I'm not able to control the stream. I have no idea how to get the percentage played or how to seek to a specific time in the stream. Is that possible? Is there a good library that can help me?
Currently I'm using AFSoundManager, but I always receive negative numbers for the percentage, and I get an invalid time when trying to seek the stream to a specified time.
This is the code I'm using:
AFSoundManager.sharedManager().startStreamingRemoteAudioFromURL("http://www.abstractpath.com/files/audiosamples/sample.mp3") { (percentage, elapsedTime, timeRemaining, error, poppi) in
    if error == nil {
        //This block will be fired when the audio progress increases in 1%
        if elapsedTime > 0 {
            println(elapsedTime)
            self.slider.value = Float(elapsedTime*1000)
        }
    } else {
        //Handle the error
        println(error)
    }
}
I am, of course, able to get the elapsedTime, but not the percentage or the remaining time; for those I always get negative numbers.
This code works perfectly with remote or local audio files, but not with the stream.
This isn't possible.
These streams are live. There is nothing to seek to because what you haven't heard hasn't happened yet. Even streams that playback music end-to-end are still "live" in the sense that the audio you haven't received hasn't been encoded yet. (Small codec and transit buffers aside, of course.)

What causes ExtAudioFileRead to make ioData->mBuffers[0].mDataByteSize negative?

The problem occurs when I frequently stop and start audio playback and seek back and forth a lot in an AAC audio file through an ExtAudioFileRef object. In a few cases, ExtAudioFileRead shows this strange behaviour:
Sometimes it assigns these numbers to the mDataByteSize of the only AudioBuffer of the AudioBufferList:
-51604480
-51227648
-51350528
-51440640
-51240960
In hex, these numbers have the pattern 0xFC....00.
The code:
status = ExtAudioFileRead(_file, &numberFramesRead, ioData);
printf("s=%li d=%p d.nb=%li, d.b.d=%p, d.b.dbs=%li, d.b.nc=%li\n", status, ioData, ioData->mNumberBuffers, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize, ioData->mBuffers[0].mNumberChannels);
Output:
s=0 d=0x16668bd0 d.nb=1, d.b.d=0x30de000, d.b.dbs=1024, d.b.nc=2 // good (usual)
s=0 d=0x16668bd0 d.nb=1, d.b.d=0x30de000, d.b.dbs=-51240960, d.b.nc=2 // misbehaving
The problem occurs on an iPhone 4S on iOS 7. I could not reproduce the problem in the Simulator.
The problem occurs when concurrently calling ExtAudioFileRead() and ExtAudioFileSeek() for the same ExtAudioFileRef from two different threads/queues.
The read function was called directly from the AURenderCallback, so it was executed on AudioUnit's real-time thread while the seek was done on my own serial queue.
I've modified the code of the render callback to also dispatch_sync() to the same serial queue to which the seek gets dispatched. That solved the problem.

iOS: Playing PCM buffers from a stream

I'm receiving a series of UDP packets from a socket containing encoded PCM buffers. After decoding them, I'm left with an int16 * audio buffer, which I'd like to immediately play back.
The intended logic goes something like this:
init(){
    initTrack(track, output, channels, sample_rate, ...);
}
onReceiveBufferFromSocket(NSData data){
    //Decode the buffer
    int16 * buf = handle_data(data);
    //Play data
    write_to_track(track, buf, length_of_buf, etc);
}
I'm not sure about everything that has to do with playing back the buffers though. On Android, I'm able to achieve this by creating an AudioTrack object, setting it up by specifying a sample rate, a format, channels, etc... and then just calling the "write" method with the buffer (like I wish I could in my pseudo-code above) but on iOS I'm coming up short.
I tried using the Audio File Stream Services, but I'm guessing I'm doing something wrong since no sound ever comes out and I feel like those functions by themselves don't actually do any playback. I also attempted to understand the Audio Queue Services (which I think might be close to what I want), however I was unable to find any simple code samples for its usage.
Any help would be greatly appreciated, especially in the form of example code.
You need to use some type of buffer to hold your incoming UDP data. This is an easy and good circular buffer that I have used.
Then to play back data from the buffer, you can use Audio Unit framework. Here is a good example project.
Note: The first link also shows you how to playback using Audio Unit.
You could use Audio Queue Services as well; just make sure you're doing some kind of packet re-ordering. If you're using FFmpeg to decode the streams, there is an option for this.
Otherwise, audio queues are easy to set up.
https://github.com/mooncatventures-group/iFrameExtractor/blob/master/Classes/AudioController.m
You could also use Audio Units, though they're a bit more complicated.
