How to play audio using raw data in iOS?

I have been working on audio capture and playback, and I am trying to play audio from a buffer. I will get the buffer as a character pointer; I have to read from that buffer and play whatever is present in it. I came across AudioQueue, but I am not sure it is the right approach for my task. Has anyone done this before? Please suggest some ideas on where to start.

Yes, Audio Queues are fine for playback and recording of LPCM audio data. You have a lot to learn about audio signals and the Core Audio APIs before all of this will make sense (an audio-signal and AudioQueue crash course is far too big for one SO answer).
Start with some AudioQueue examples and tutorials. Reserve a good amount of time.
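To give a feel for the shape of an AudioQueue playback path, here is a hedged Swift sketch: the 44.1 kHz, 16-bit mono format is an assumption, fillFromMyBuffer is a placeholder for however you read from your character-pointer buffer, and error handling is omitted.

import AudioToolbox
import Foundation

// Placeholder for however you read your incoming raw data: copy up to maxBytes
// into dest and return the number of bytes actually copied.
func fillFromMyBuffer(_ dest: UnsafeMutableRawPointer, _ maxBytes: Int) -> Int {
    return 0
}

// Describe the raw data; adjust these values to match what you actually receive.
var format = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

// The queue pulls: this callback fires whenever a buffer needs refilling.
let outputCallback: AudioQueueOutputCallback = { _, queue, buffer in
    let capacity = Int(buffer.pointee.mAudioDataBytesCapacity)
    let bytesCopied = fillFromMyBuffer(buffer.pointee.mAudioData, capacity)
    if bytesCopied == 0 {
        // Nothing available yet: enqueue silence so the queue keeps running.
        memset(buffer.pointee.mAudioData, 0, capacity)
        buffer.pointee.mAudioDataByteSize = UInt32(capacity)
    } else {
        buffer.pointee.mAudioDataByteSize = UInt32(bytesCopied)
    }
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewOutput(&format, outputCallback, nil, nil, nil, 0, &queue)

// Allocate and prime a few buffers, then start playback.
for _ in 0..<3 {
    var buffer: AudioQueueBufferRef?
    AudioQueueAllocateBuffer(queue!, 8192, &buffer)
    outputCallback(nil, queue!, buffer!)
}
AudioQueueStart(queue!, nil)

The key idea is that the queue pulls from you: whenever a buffer drains, the callback fires and you copy in whatever data you have, or silence if nothing has arrived.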

Related

Audio waveform visualization in Swift

This is a two-part question:
Using AVAudioRecorder, is it possible to have a waveform respond to incoming audio in real time, similar to what happens when you activate Siri on the iPhone? Perhaps using averagePowerForChannel?
Also, is there a way to gather the audio samples of a recording in order to render a waveform?
I know Novocaine exists, but I was hoping not to use a framework.
Does not seem possible using AVAudioRecorder by itself.
An alternative would be to use AVCaptureSession with an AVCaptureAudioDataOutput, which provides access to the raw audio buffers from which the waveform can be read.
Most of the processing would be done in the delegate:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
You would probably need to implement some sort of throttling to only process every Nth sample so that your visualiser code doesn't interfere with the audio.
AVCaptureSession is far more rudimentary than AVAudioRecorder; for example, it does not provide any recording facilities by itself, so if you also wanted to record the audio you would need to use an AVAssetWriter to save the samples.
This SO question shows how to access the sample buffers. It uses AVAssetReader to load a file, but the delegate is exactly the same as would be used for realtime processing:
Reading audio samples via AVAssetReader
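As a rough illustration of that capture path, here is a hedged Swift sketch; the LevelMeter class name is mine, it assumes the delivered samples are 16-bit signed LPCM, and it reduces each buffer to an RMS level rather than rendering a full waveform.

import AVFoundation

// A capture pipeline that reduces each audio buffer to a single RMS level.
final class LevelMeter: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let sampleQueue = DispatchQueue(label: "level.meter.samples")

    func start() throws {
        guard let mic = AVCaptureDevice.default(for: .audio) else { return }
        let input = try AVCaptureDeviceInput(device: mic)
        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: sampleQueue)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) { session.addOutput(output) }
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
        var totalLength = 0
        var dataPointer: UnsafeMutablePointer<Int8>?
        guard CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &totalLength,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let bytes = dataPointer else { return }

        // Assumes 16-bit signed LPCM samples; compute a rough RMS for this buffer.
        let sampleCount = totalLength / MemoryLayout<Int16>.size
        var sumOfSquares: Float = 0
        bytes.withMemoryRebound(to: Int16.self, capacity: sampleCount) { samples in
            for i in 0..<sampleCount {
                let s = Float(samples[i]) / Float(Int16.max)
                sumOfSquares += s * s
            }
        }
        let rms = (sumOfSquares / Float(max(sampleCount, 1))).squareRoot()
        // Hand `rms` to the waveform view on the main queue (throttled as needed).
        print("rms level:", rms)
    }
}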

How to obtain audio chunks for analysis in core audio or AVFoundation

I need to analyse chunks of audio data of (approximately) 1 second with a sample rate of 8kHz. Although the audio will be recorded in real time, it will only be used for detecting specific events. Hence, there are no strict latency requirements. What would be the best framework to use in this case?
I already started learning Core Audio and worked through the book Learning Core Audio. With the minimal amount of Swift documentation available on the internet, I was able to set up an AUGraph on iOS to record audio with the Remote I/O audio unit and to get access to the raw samples in the output render callback. Unfortunately, I got stuck on creating 1-second chunks of audio samples to perform the analysis. Could a custom AudioBufferList be used for this? Or could a large ring buffer be implemented on the Remote I/O audio unit (as is required in the case of a HAL audio unit)?
I also tried adopting AVFoundation with AVAssetReader to obtain the audio chunks. Although I was able to obtain samples of a recorded audio signal, I did not succeed in creating a 1-second buffer (and I don't even know whether that would be possible in real time). Would AVFoundation be a good choice in this situation anyhow?
I would appreciate any advice on this.
A main problem for me is that I am trying to use Swift, but there is not much example code available and even less documentation. I feel it might be better to switch to Objective-C for audio programming and stop trying to do everything in Swift. I am curious whether that would be a better time investment?
For analyzing 1-second windows of audio samples, the simplest solution would be to use the Audio Queue API with a lock-free ring buffer (say around 2 seconds long) to record samples. You can use a repeating NSTimer task to poll how full the buffer is, and emit 1-second chunks to a processing task when they become available.
Core Audio and the Remote I/O audio unit are for when you need much shorter data windows, with latency requirements on the order of a few milliseconds.
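A hedged Swift sketch of that poll-and-emit idea follows; ChunkCollector is a made-up name, it assumes 8 kHz mono Int16 samples, and it uses a serial dispatch queue instead of a true lock-free ring for brevity.

import Foundation

// Accumulates incoming samples and hands out fixed 1-second chunks
// (8 kHz mono assumed, so 8000 Int16 samples per chunk). A serial queue
// stands in here for the lock-free ring described above.
final class ChunkCollector {
    private let samplesPerChunk = 8000
    private var samples: [Int16] = []
    private let syncQueue = DispatchQueue(label: "chunk.collector")

    // Called from the recording callback with each batch of new samples.
    func append(_ newSamples: [Int16]) {
        syncQueue.async { self.samples.append(contentsOf: newSamples) }
    }

    // Polled periodically; returns a full 1-second chunk when one is available.
    func nextChunk() -> [Int16]? {
        return syncQueue.sync { () -> [Int16]? in
            guard samples.count >= samplesPerChunk else { return nil }
            let chunk = Array(samples.prefix(samplesPerChunk))
            samples.removeFirst(samplesPerChunk)
            return chunk
        }
    }
}

// Poll a few times per second and hand finished chunks to the analysis code.
let collector = ChunkCollector()
let pollTimer = Timer.scheduledTimer(withTimeInterval: 0.25, repeats: true) { _ in
    if let chunk = collector.nextChunk() {
        // analyze(chunk) would be your event-detection routine.
        print("got a 1-second chunk of \(chunk.count) samples")
    }
}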
Core Audio is a C API.
Objective-C is an extension of C. I find that Objective-C is much nicer for working with Core Audio than Swift.
I created a cross-platform C lockless ring buffer. There is sample code that demonstrates setting up the ring, setting up the mic, playing audio, and reading from and writing to the ring.
The ring records the last N seconds that you specify; old data is overwritten by new data. So you might specify that you want the latest 3 seconds recorded. The sample I show plays a sine wave while recording through the microphone; every 7 seconds, it grabs the last 2 seconds of recorded audio.
Here is the complete sample code on GitHub.

Getting raw PCM audio buffer from XAudio2 when playing a compressed file

Is it possible to access the raw PCM audio data that is being played when using XAudio2 to play a file?
I've been searching for several ways to access a decoded version of audio files being played in SL4/Windows Phone, without success.
According to this post, someone had success writing a custom XAPO that just grabs samples and is enabled on a submix voice: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/05593fad-dfd8-4c77-983b-8c84cd4a324b/xaudio2-saving-output-custom-xapos-slow-down-audio-play-backwards
Please note that if you just want to do this for audio processing, this approach is not optimal, because you are limited to the speed of audio playback.

Play decoded raw audio data on iPhone

I am developing a streaming application for iOS. I am receiving all the audio packets correctly and decoding them with ffmpeg, but now I am totally confused about how to play them on the iPhone. All the code I have found so far plays audio from a file, but in my case I have to play the audio packets as they arrive from the server, in the order they come. I don't want to save the packets to a file, so any code that helps me solve this problem is appreciated.
Thank you.
You'll need to use Audio Services to do this: either an Audio Queue or an Audio Unit. An Audio Queue is better for streaming-type applications.
The classic AudioQueue sample is Apple's SpeakHere.
Matt Gallagher also has some superb tutorials with sample code for streaming.
See Streaming MP3/AAC audio again.
If you want to go the AudioUnit route, see Using RemoteIO audio unit.
By basing your code on Matt Gallagher's sample, possibly also using SpeakHere, you should be able to play your decoded packets. See my other answers for how to play using a buffer rather than from a file.
Don't forget that this is quite advanced stuff. You'll need to be comfortable with buffers, pointers, etc. Make sure you understand frames, packets, etc. as well. Some pain in getting your audio out there is to be expected.
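To make the buffer-rather-than-file idea concrete, here is a hedged Swift sketch of the piece that sits between the decoder and the Audio Queue; PacketFIFO is a made-up name, and the queue's AudioStreamBasicDescription must be set up to match whatever sample format your ffmpeg codec context actually outputs (for example, interleaved signed 16-bit PCM).

import Foundation

// A minimal thread-safe FIFO of decoded PCM bytes: the network/decoder thread
// pushes each packet as ffmpeg produces it, and the playback callback pops.
final class PacketFIFO {
    private var pending = Data()
    private let lock = NSLock()

    // Producer side: append the raw PCM bytes of one decoded packet.
    func push(_ packet: Data) {
        lock.lock(); defer { lock.unlock() }
        pending.append(packet)
    }

    // Consumer side: copy up to maxBytes into dest, returning the byte count.
    func pop(into dest: UnsafeMutableRawPointer, maxBytes: Int) -> Int {
        lock.lock(); defer { lock.unlock() }
        let count = min(maxBytes, pending.count)
        guard count > 0 else { return 0 }
        pending.copyBytes(to: dest.assumingMemoryBound(to: UInt8.self), count: count)
        pending.removeFirst(count)
        return count
    }
}

An AudioQueue output callback like the one sketched under the first question above would then call pop(into:maxBytes:) to fill each queue buffer, enqueuing silence whenever the network falls behind.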

Redirecting playback output of an AVPlayer item

What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved) and send them to an audio effect class that takes in a block of samples, and I want to be able to do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item and send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear it from there. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but the Apple documentation specifies that the AVAssetReader class is not intended for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or on whether I am taking the right approach?
The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to pull blocks of samples yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the inputParameters of your player item's audioMix, and you'll be given samples in blocks which are easy to then feed into an effect unit.
Another benefit of this is that it will work with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as items from the user's iPod library. Additionally, you get all of the functionality the AVPlayer provides for free, such as tolerance of audio interruptions, which you would otherwise have to implement by hand if you went with an AVAssetReader solution.
To set up a tap you have to provide some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found in this tutorial.
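For orientation, here is a hedged Swift sketch of that setup; makeTappedPlayer is my own name, and the process callback simply pulls the source audio through untouched, which is the point where your effect class would operate on the samples in place.

import AVFoundation
import MediaToolbox

// Build an AVPlayer whose audio passes through an MTAudioProcessingTap.
func makeTappedPlayer(url: URL) -> AVPlayer? {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .audio).first else { return nil }

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: { _, clientInfo, tapStorageOut in tapStorageOut.pointee = clientInfo },
        finalize: { _ in },
        prepare: { _, _, _ in },
        unprepare: { _ in },
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the player's decoded samples into bufferListInOut...
            let status = MTAudioProcessingTapGetSourceAudio(
                tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut)
            guard status == noErr else { return }
            // ...then operate on the samples in place here (your effect class).
        })

    var unmanagedTap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                     kMTAudioProcessingTapCreationFlag_PostEffects,
                                     &unmanagedTap) == noErr,
          let tap = unmanagedTap?.takeRetainedValue() else { return nil }

    let inputParams = AVMutableAudioMixInputParameters(track: track)
    inputParams.audioTapProcessor = tap

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [inputParams]

    let item = AVPlayerItem(asset: asset)
    item.audioMix = audioMix
    return AVPlayer(playerItem: item)
}

Note that the audio mix, and therefore the tap, is applied to the AVPlayerItem rather than to the asset itself.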
There's a new MTAudioProcessingTap object in iOS 6 and OS X 10.8. Check out the Session 517 video from WWDC 2012; it demonstrates exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for realtime usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for random amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
