Audio waveform visualization in Swift - iOS

This is a two part question:
Using AVAudioRecorder, is it possible to have a waveform respond to the incoming audio in real time, similar to what happens when you activate Siri on the iPhone? Perhaps using averagePowerForChannel?
Also, is there a way to gather the audio samples of a recording to render a waveform?
I know Novocaine exists, but I was hoping not to use a third-party framework.

This does not seem possible with AVAudioRecorder by itself.
An alternative is to use AVCaptureSession with an AVCaptureAudioDataOutput, which provides access to the raw audio buffers, from which the waveform can be read.
Most of the processing would be done in the delegate:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
You would probably need to implement some sort of throttling to only process every Nth sample so that your visualiser code doesn't interfere with the audio.
AVCaptureSession is far more rudimentary than AVAudioRecorder - it does not provide any recording facilities by itself, for example - so if you also want to record the audio you will need an AVAssetWriter to save the samples.
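For illustration, here is a minimal Swift sketch of that approach, assuming signed 16-bit integer LPCM samples (real code should inspect the buffer's format description first); the AudioLevelMonitor class name and the onLevel callback are made up for the example, and you will also need microphone permission (NSMicrophoneUsageDescription):

import AVFoundation

final class AudioLevelMonitor: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureAudioDataOutput()
    private let queue = DispatchQueue(label: "audio.capture.queue")

    // Called with a 0...1 level value on the capture queue.
    var onLevel: ((Float) -> Void)?

    func start() throws {
        guard let device = AVCaptureDevice.default(for: .audio) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        if session.canAddInput(input) { session.addInput(input) }

        output.setSampleBufferDelegate(self, queue: queue)
        if session.canAddOutput(output) { session.addOutput(output) }

        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Get a pointer to the raw audio bytes in the sample buffer.
        guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return }
        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>?
        guard CMBlockBufferGetDataPointer(blockBuffer,
                                          atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &length,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let pointer = dataPointer else { return }

        // Assumes signed 16-bit samples; check CMSampleBufferGetFormatDescription in real code.
        let sampleCount = length / MemoryLayout<Int16>.size
        let samples = UnsafeRawPointer(pointer).bindMemory(to: Int16.self, capacity: sampleCount)

        // Root-mean-square level as a simple stand-in for averagePowerForChannel.
        var sum: Float = 0
        for i in 0..<sampleCount {
            let s = Float(samples[i]) / Float(Int16.max)
            sum += s * s
        }
        onLevel?(sqrt(sum / Float(max(sampleCount, 1))))
    }
}

Feed onLevel into your waveform view (hopping to the main queue), and throttle as described above if drawing every buffer is too expensive.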
This SO question shows how to access the sample buffers. It uses AVAssetReader to load a file, but the delegate is exactly the same as would be used for realtime processing:
Reading audio samples via AVAssetReader

Related

Can I use AVAudioRecorder for sending a stream of recordings to a server?

I am trying to write a program for live-streaming audio to a server. Does AVAudioRecorder have streaming functionality, or should I use another framework? I would prefer to use Apple's built-in frameworks.
I have used AVCaptureSession for streaming audio, with an AVCaptureDevice (audio) as the input and an AVCaptureAudioDataOutput as the output, which in turn calls the AVCaptureAudioDataOutputSampleBufferDelegate and delivers the data as a buffered stream.
AVFoundation Cameras and Media Capture
According to this document, you have to initialize an AVAudioRecorder with a file path, which means that if you want to do live streaming, you have to either wait for the current recording to finish or initialize a new AVAudioRecorder with another path.
I would recommend creating multiple AVAudioRecorder instances and running one per chunk of audio. (You can also split by time, but make sure your buffer is large enough to hold them all.)
Then upload the previous chunks and start a new instance to keep the recording going.
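As a rough sketch of that chunked-recorder idea (the five-second chunk length, the file naming, and the upload(fileAt:) stub are all made up for illustration; you would also need to configure AVAudioSession and request microphone permission first):

import AVFoundation

final class ChunkedRecorder: NSObject, AVAudioRecorderDelegate {
    private var recorder: AVAudioRecorder?
    private var chunkIndex = 0
    private let chunkDuration: TimeInterval = 5   // seconds per chunk, tune to taste

    private let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]

    func start() throws {
        try startNextChunk()
    }

    private func startNextChunk() throws {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("chunk-\(chunkIndex).m4a")
        chunkIndex += 1

        let recorder = try AVAudioRecorder(url: url, settings: settings)
        recorder.delegate = self
        recorder.record(forDuration: chunkDuration)   // stops itself after chunkDuration
        self.recorder = recorder
    }

    func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
        // Upload the finished chunk and immediately start the next one.
        upload(fileAt: recorder.url)
        try? startNextChunk()
    }

    private func upload(fileAt url: URL) {
        // Placeholder: replace with your own networking code.
    }
}

Note that there can be a small gap between chunks while the next recorder starts, so keep the chunks large enough that this is acceptable for your use case.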

Why does no AVPlayerItemAudioOutput exist in AVFoundation?

AVPlayerItemVideoOutput is a subclass of AVPlayerItemOutput in AVFoundation; with it I can get the visual data as pixel buffers (through copyPixelBufferForItemTime:) and process them.
However, there is no corresponding AVPlayerItemAudioOutput. How can I process the audio data?
Do I have to use the AVAssetReader class to get this?
This is a great question. -[AVPlayerItem addOutput:] mentions audio, but there is nothing to be found on it in AVPlayerItemOutput.h (unless you're meant to get audio via the AVPlayerItemLegibleOutput class - I'm only half joking: as a class that vends CMSampleBuffers, I think a hypothetical AVPlayerItemAudioOutput would look a lot like it).
So I don't know where AVPlayerItemAudioOutput is, but yes you can use AVAssetReader to get at audio data.
However if you're already using an AVPlayer, your most painless path would be using MTAudioProcessingTap to play the role of the hypothetical AVPlayerItemAudioOutput.
You can add a tap to the inputParameters of your AVPlayer's currentItem's audioMix to receive (and even modify) the audio of your chosen audio tracks.
It's probably easier to read some example code than it is to parse what I just wrote.
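In that spirit, here is a bare-bones Swift sketch of attaching a tap; the pass-through process callback and the attachTap(to:audioTrack:) helper are just for illustration:

import AVFoundation
import MediaToolbox

// Assumes playerItem wraps an asset whose audio track you have already fetched.
func attachTap(to playerItem: AVPlayerItem, audioTrack: AVAssetTrack) {
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil,
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio into bufferListInOut; read or modify the samples here.
            MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                               flagsOut, nil, numberFramesOut)
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects, &tap)
    guard status == noErr, let tap = tap else { return }

    let inputParams = AVMutableAudioMixInputParameters(track: audioTrack)
    inputParams.audioTapProcessor = tap.takeRetainedValue()

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [inputParams]
    playerItem.audioMix = audioMix
}

Inside the process callback you receive an AudioBufferList of PCM samples, which is essentially what a hypothetical AVPlayerItemAudioOutput would have handed you.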

How to play Audio using raw data in iOS?

I have been working on audio capture and playback, and I am trying to play audio from a buffer. I will get the buffer as a character pointer; I have to read from the buffer and play whatever is present in it. I came across AudioQueue, but I am not sure it is the right tool for this task. Has anyone done this before? Please suggest some ideas on where to start.
Yes, AudioQueues are fine for playback and recording of LPCM audio data. You have a lot to learn about audio signals and the Core Audio APIs before you will understand how this all works (an audio-signal and AudioQueue crash course is way too big for one SO answer).
Start with some AudioQueue examples and tutorials. Reserve a good amount of time.
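To give a flavour of the API, here is a stripped-down sketch of AudioQueue playback for 16-bit mono LPCM at 44.1 kHz; the callback only writes silence where real code would copy bytes from your incoming buffer, and error handling is omitted:

import AudioToolbox

// 16-bit, mono, 44.1 kHz linear PCM.
var format = AudioStreamBasicDescription(
    mSampleRate: 44_100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)

// The queue calls this whenever it needs more audio: fill the buffer and re-enqueue it.
let outputCallback: AudioQueueOutputCallback = { _, queue, buffer in
    memset(buffer.pointee.mAudioData, 0, Int(buffer.pointee.mAudioDataBytesCapacity))
    buffer.pointee.mAudioDataByteSize = buffer.pointee.mAudioDataBytesCapacity
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var queue: AudioQueueRef?
AudioQueueNewOutput(&format, outputCallback, nil, nil, nil, 0, &queue)

if let queue = queue {
    // Prime a few buffers so the queue has data before playback starts.
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 8_192, &buffer)
        if let buffer = buffer { outputCallback(nil, queue, buffer) }
    }
    AudioQueueStart(queue, nil)
}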

RTP iPhone camera - How to read an AVAssetWriter file while it's being written?

I'm trying to stream RTSP/RTP iPhone camera capture to a Wowza server.
Apple's API does not allow direct access to the H.264-encoded frames; it only lets you write them into a '.mov' container file.
Either way, I cannot access that file's content until AVAssetWriter has finished writing, which prevents me from streaming the live camera capture.
I've tried accessing the file's content in real time through a named pipe, but with no success - AVAssetWriter will not write to an existing file.
Does anyone know how to do it?
Thanks!
Edit: Starting with iOS 8, the hardware encoder and decoder are exposed through public APIs (Video Toolbox).
You can use an AVCaptureVideoDataOutput to process/stream each frame while the camera is running, and an AVAssetWriter to write the video file at the same time (appending each frame from the video data output queue).
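Here is a rough Swift sketch of that combination; the video settings are arbitrary, the streaming step is only marked by a comment, and audio handling plus finishing the file are omitted:

import AVFoundation

final class FrameRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let writerInput: AVAssetWriterInput
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ])
        super.init()
        writerInput.expectsMediaDataInRealTime = true
        writer.add(writerInput)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)

        if !sessionStarted {
            guard writer.startWriting() else { return }
            writer.startSession(atSourceTime: timestamp)
            sessionStarted = true
        }

        // 1. Hand the frame to your packetiser / streaming code here.
        // 2. Append the same frame to the file being written.
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }
}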
See also
Simultaneous AVCaptureVideoDataOutput and AVCaptureMovieFileOutput
and Can use AVCaptureVideoDataOutput and AVCaptureMovieFileOutput at the same time?
The only solution I've found working so far is capturing without sound; then the file is written to the location you've defined.
Otherwise it's probably written to a temp location you can't reach.
Here is Apple's example for capturing video: AVCam
You'll need to remove sound channels.
If anyone has a better way, please publish it here.

Redirecting playback output of an AVPlayerItem

What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved) and send them to an audio effect class that takes in a block of samples, and I want to be able to do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item to my effect class and, from there, send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear them. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but Apple's documentation specifies that the AVAssetReader class is not intended for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or whether I am taking the right approach?
The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to pull the samples yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the inputParameters of your AVPlayerItem's audioMix, and you'll be given samples in blocks which are then easy to throw into an effect unit.
Another benefit from this is that it will work with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as the user's iPod library. Additionally, you get all of the functionality like tolerance of audio interruptions that the AVPlayer provides for free, which you would otherwise have to implement by hand if you went with an AVAssetReader solution.
To set up a tap you have to register some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found in this tutorial.
There's a new MTAudioProcessingTap object in iOS 6 and OS X 10.8. Check out the Session 517 video from WWDC 2012 - they demonstrate exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for realtime usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for random amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
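For reference, a small sketch of that producer pattern, where the consume closure stands in for pushing into your circular buffer (error handling and the RemoteIO side are omitted):

import AVFoundation

func readAudio(from asset: AVAsset, consume: @escaping (CMSampleBuffer) -> Void) throws {
    guard let track = asset.tracks(withMediaType: .audio).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        AVFormatIDKey: kAudioFormatLinearPCM   // decode to LPCM so the samples are directly usable
    ])
    reader.add(output)
    guard reader.startReading() else { return }

    DispatchQueue.global(qos: .userInitiated).async {
        // copyNextSampleBuffer can block, which is exactly why this runs off the audio thread.
        while reader.status == .reading, let buffer = output.copyNextSampleBuffer() {
            consume(buffer)
        }
    }
}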
