I'm developing an iOS application and I'm quite new to iOS development. So far I have implemented an H.264 decoder for a network stream using VideoToolbox, which was quite hard.
Now I need to play an audio stream that comes from the network, with no file involved, just a raw AAC stream read directly from the socket. This stream comes from the output of an ffmpeg instance.
The problem is that I don't know how to start with this; there seems to be little information about this topic. I have already tried AVAudioPlayer but got just silence. I think I first need to decompress the packets from the stream, just like with the H.264 decoder.
I have also been trying AVAudioEngine and AVAudioPlayerNode with no success, same as with AVAudioPlayer. Can someone provide me some guidance? Maybe AudioToolbox? AudioQueue?
Thank you very much for the help :)
Edit:
I'm playing around with AVAudioCompressedBuffer and getting no errors using AVAudioEngine and AVAudioNode. But I don't know what this output means:
inBuffer: <AVAudioCompressedBuffer#0x6040004039f0: 0/1024 bytes>
Does this mean that the buffer is empty? I have been trying to feed this buffer in several ways, but it always shows something like 0/1024. I think I'm not doing this right:
compressedBuffer.mutableAudioBufferList.pointee = audioBufferList
Any idea?
Thank you!
Edit 2:
I'm editing to reflect my code for decompressing the buffer. Maybe someone can point me in the right direction.
Note: the packet that is ingested by this function is actually passed without the ADTS header (9 bytes), but I have also tried passing it with the header.
func decodeCompressedPacket(packet: Data) -> AVAudioPCMBuffer {
    var packetCopy = packet
    var streamDescription: AudioStreamBasicDescription = AudioStreamBasicDescription.init(
        mSampleRate: 44100, mFormatID: kAudioFormatMPEG4AAC, mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
        mBytesPerPacket: 0, mFramesPerPacket: 1024, mBytesPerFrame: 0,
        mChannelsPerFrame: 1, mBitsPerChannel: 0, mReserved: 0)
    let audioFormat = AVAudioFormat.init(streamDescription: &streamDescription)
    let compressedBuffer = AVAudioCompressedBuffer.init(format: audioFormat!, packetCapacity: 1, maximumPacketSize: 1024)
    print("packetCopy count: \(packetCopy.count)")

    var audioBuffer: AudioBuffer = AudioBuffer.init(mNumberChannels: 1, mDataByteSize: UInt32(packetCopy.count), mData: &packetCopy)
    var audioBufferList: AudioBufferList = AudioBufferList.init(mNumberBuffers: 1, mBuffers: audioBuffer)
    var mNumberBuffers = 1
    var packetSize = packetCopy.count
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers, &audioBuffer, MemoryLayout<AudioBuffer>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers.mDataByteSize, &packetSize, MemoryLayout<Int>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mNumberBuffers, &mNumberBuffers, MemoryLayout<UInt32>.size)
    // compressedBuffer.mutableAudioBufferList.pointee = audioBufferList
    var bufferPointer = compressedBuffer.data

    for byte in packetCopy {
        memset(compressedBuffer.mutableAudioBufferList[0].mBuffers.mData, Int32(byte), MemoryLayout<UInt8>.size)
    }

    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mNumberChannels)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mDataByteSize)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mData)")

    var uncompressedBuffer = uncompress(inBuffer: compressedBuffer)
    print("uncompressedBuffer: \(uncompressedBuffer)")
    return uncompressedBuffer
}
So you are right in thinking you will (most likely) need to decompress the packets received from the stream. The idea is to get them to raw PCM format so that they can be sent directly to the audio output. This way you could also apply any DSP / audio manipulation you want to the audio stream.
As you mentioned, you will probably need to look in the AudioQueue direction, and the Apple docs provide a good example of streaming audio in real time, although it is in Obj-C (in this case I think it may be a good idea to carry this out in Obj-C). This is probably the best place to get started (interfacing the Obj-C with Swift is super simple).
Looking at it again in Swift, there is the class AVAudioCompressedBuffer, which seems to handle AAC for your case (you would not need to decode the AAC yourself if you get this to work); however, there is no direct method for setting the buffer, as it is intended to be just a storage container, I believe. Here's a working example of someone using AVAudioCompressedBuffer along with an AVAudioFile (maybe you could buffer everything into files on background threads? I think it would be too much IO overhead).
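If you want to try that route in Swift, here's a rough, untested sketch of what filling an AVAudioCompressedBuffer by hand and then decoding it with an AVAudioConverter might look like. The function name and parameters are mine, it assumes each Data value holds exactly one raw AAC packet (no ADTS header), and setting byteLength requires iOS 11+:

import AVFoundation

// Sketch only: copy one raw AAC packet into an AVAudioCompressedBuffer, then decode it to PCM.
func decodePacket(_ packet: Data, aacFormat: AVAudioFormat, pcmFormat: AVAudioFormat,
                  converter: AVAudioConverter) -> AVAudioPCMBuffer? {
    let compressed = AVAudioCompressedBuffer(format: aacFormat,
                                             packetCapacity: 1,
                                             maximumPacketSize: packet.count)
    // Fill the buffer's backing storage and describe the single packet we put there.
    packet.withUnsafeBytes { (src: UnsafeRawBufferPointer) in
        compressed.data.copyMemory(from: src.baseAddress!, byteCount: packet.count)
    }
    compressed.packetCount = 1
    compressed.byteLength = UInt32(packet.count)   // iOS 11+
    compressed.packetDescriptions?[0] = AudioStreamPacketDescription(
        mStartOffset: 0, mVariableFramesInPacket: 0, mDataByteSize: UInt32(packet.count))

    // Decode: hand the compressed buffer to the converter exactly once.
    guard let pcm = AVAudioPCMBuffer(pcmFormat: pcmFormat, frameCapacity: 1024) else { return nil }
    var delivered = false
    var error: NSError?
    let status = converter.convert(to: pcm, error: &error) { _, outStatus in
        if delivered {
            outStatus.pointee = .noDataNow
            return nil
        }
        delivered = true
        outStatus.pointee = .haveData
        return compressed
    }
    return (status != .error && error == nil) ? pcm : nil
}

The converter would be created once up front, e.g. AVAudioConverter(from: aacFormat, to: pcmFormat), and the resulting PCM buffers could then be scheduled on an AVAudioPlayerNode.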
However, if you tackle this in Obj-C there is a post on how to set the AVAudioPCMBuffer (maybe it works with AVAudioCompressedBuffer?) directly through memset (kind of disgusting, but at the same time lovely, as an embedded programmer myself).
// make a silent stereo buffer
AVAudioChannelLayout *chLayout = [[AVAudioChannelLayout alloc] initWithLayoutTag:kAudioChannelLayoutTag_Stereo];
AVAudioFormat *chFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                           sampleRate:44100.0
                                                          interleaved:NO
                                                        channelLayout:chLayout];
AVAudioPCMBuffer *thePCMBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:chFormat frameCapacity:1024];
thePCMBuffer.frameLength = thePCMBuffer.frameCapacity;
for (AVAudioChannelCount ch = 0; ch < chFormat.channelCount; ++ch) {
    memset(thePCMBuffer.floatChannelData[ch], 0, thePCMBuffer.frameLength * chFormat.streamDescription->mBytesPerFrame);
}
I know this is a lot to take in, and it in no way seems like a simple solution, but I think the Obj-C AudioQueue technique would be my first stop!
Hope this helps!
Related
Is it possible to create a buffer concept similar to AudioQueue Services in the AVRecorder framework? In my application, I need to capture the audio buffer and send it over the Internet. The server connection part is done, but I want to know if there is a way to record the voice continuously in the foreground and pass the audio, buffer by buffer, to the server in the background using Swift.
Comments are appreciated.
AVAudioRecorder records to a file, so you can't easily use it to stream audio data out of your app. AVAudioEngine, on the other hand, can call you back as it captures audio buffers:
var engine = AVAudioEngine()

func startCapturingBuffers() {
    let input = engine.inputNode!
    let bus = 0
    input.installTapOnBus(bus, bufferSize: 512, format: input.inputFormatForBus(bus)) { (buffer, time) -> Void in
        // buffer.floatChannelData contains audio data
    }
    try! engine.start()
}
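To push those captured buffers to your server, one rough sketch (untested, in current Swift syntax; sendToServer(_:) is a hypothetical stand-in for your existing connection code) is to copy each buffer into a Data value and hand it off on a background queue:

import AVFoundation

let sendQueue = DispatchQueue(label: "audio.upload")

func startStreaming(engine: AVAudioEngine, sendToServer: @escaping (Data) -> Void) throws {
    let input = engine.inputNode
    let bus = 0
    input.installTap(onBus: bus, bufferSize: 512, format: input.inputFormat(forBus: bus)) { buffer, _ in
        // Copy the samples out of the render callback before doing any slow work.
        guard let channel = buffer.floatChannelData?[0] else { return }
        let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
        let data = Data(bytes: channel, count: byteCount)
        sendQueue.async { sendToServer(data) }   // upload in the background
    }
    try engine.start()
}

This assumes mono float samples from the input node; for stereo you would need to interleave or send each channel's data separately.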
I have a simple audio file in .wav format (the audio file is cut perfectly to loop). I've tried different methods to loop it. My first attempt was simply using AVPlayer and an NSNotification to detect when the audio item ended, then seeking to time zero and playing again. However, there was clearly a gap.
I've been looking at different solutions online and found people using AVQueuePlayer to do the switching:
Looping AVPlayer seamlessly
However, when implemented, this still produces a gap.
Here's my current notification code:
weak var weakSelf = self
NSNotificationCenter.defaultCenter().addObserverForName(AVPlayerItemDidPlayToEndTimeNotification, object: nil, queue: nil, usingBlock: {(note: NSNotification) -> Void in
    if weakSelf?.currentQueuePlayer.currentItem == weakSelf?.currentAudioItemOne {
        weakSelf?.currentQueuePlayer.insertItem((weakSelf?.currentAudioItemTwo)!, afterItem: nil)
        weakSelf?.currentAudioItemTwo.seekToTime(kCMTimeZero)
    } else {
        weakSelf?.currentQueuePlayer.insertItem((weakSelf?.currentAudioItemOne)!, afterItem: nil)
        weakSelf?.currentAudioItemOne.seekToTime(kCMTimeZero)
    }
})
Here's my code to set up the current QueuePlayer.
let audioPlayerItem = AVPlayerItem(URL: url)
currentAudioItemOne = audioPlayerItem
currentAudioItemTwo = audioPlayerItem
currentQueuePlayer = AVQueuePlayer()
currentQueuePlayer.insertItem(currentAudioItemOne, afterItem: nil)
currentQueuePlayer.play()
I've been working on this problem for several days now. Any leads or new things to try would be appreciated. The only thing I haven't tried so far is lower-quality audio files. These .wav files are all over 1 MB, which had me suspecting that the file size could be affecting the seamless looping.
EDIT:
Using AVPlayerLooper to create the 'Treadmill' effect:
let url = URL(fileURLWithPath: path)
let audioPlayerItem = AVPlayerItem(url: url)
currentAudioItemOne = audioPlayerItem
currentQueuePlayer = AVQueuePlayer()
currentAudioPlayerLayer = AVPlayerLayer(player: currentQueuePlayer)
currentAudioLooper = AVPlayerLooper(player: currentQueuePlayer, templateItem: currentAudioItemOne)
currentQueuePlayer.play()
EDIT 2:
afinfo on one of my wav files:
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer
no channel layout.
estimated duration: 11.302336 sec
audio bytes: 1993732
audio packets: 498433
bit rate: 1411200 bits per second
packet size upper bound: 4
maximum packet size: 4
audio data file offset: 44
not optimized
source bit depth: I16
----
You are inserting the item too late in your current solution. You need to queue up more than one item initially, so there's always a primed AVPlayerItem ready to go.
This is the AVQueuePlayer "treadmill pattern", as described in this WWDC 2016 session. If you're targeting iOS 10, you can use the new AVPlayerLooper class, which does it for you (also described in the same link). Apple has also provided a sample project with examples of both strategies.
Lower-level solutions include queuing the audio buffers up in an AVAudioEngine instance, using an AudioQueue, or mashing the buffers together yourself with an AudioUnit.
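For illustration, here's a minimal sketch of the treadmill idea in Swift 3-style syntax (untested; url is assumed to be the file URL of the looping asset): keep the queue topped up with a spare item and append a fresh one whenever an item finishes, instead of re-seeking a single item.

import AVFoundation

// Start with two copies of the item so the player always has the next one primed.
let queuePlayer = AVQueuePlayer(items: [AVPlayerItem(url: url), AVPlayerItem(url: url)])

// Keep the returned token around if you later need to remove the observer.
let token = NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                                   object: nil,
                                                   queue: .main) { _ in
    // The finished item is removed from the queue automatically; append a replacement.
    queuePlayer.insert(AVPlayerItem(url: url), after: nil)
}
queuePlayer.play()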
I have an Android project that reads a short[] array with PCM data from the microphone buffer for live analysis. I need to convert this functionality to iOS Swift. In Android it is very simple and looks like this:
import android.media.AudioFormat;
import android.media.AudioRecord;
...
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, someSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, AudioRecord.getMinBufferSize(...));
recorder.startRecording();
later I read the buffer with
recorder.read(data, offset, length); //data is short[]
(That's what I'm looking for.)
Documentation: https://developer.android.com/reference/android/media/AudioRecord.html
I'm very new to Swift and iOS. I've read a lot of documentation about AudioToolkit, ...Core and whatever. All I found were C++/Obj-C and bridging-header solutions. That's much too advanced and outdated for me.
For now I can record PCM data to a CAF file with AVFoundation:
settings = [
    AVLinearPCMBitDepthKey: 16 as NSNumber,
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVLinearPCMIsBigEndianKey: 0 as NSNumber,
    AVLinearPCMIsFloatKey: 0 as NSNumber,
    AVSampleRateKey: 12000.0,
    AVNumberOfChannelsKey: 1 as NSNumber,
]
...
recorder = try AVAudioRecorder(URL: someURL, settings: settings)
recorder.delegate = self
recorder.record()
But that's not what I'm looking for (or is it?). Is there an elegant way to achieve the Android read functionality described above? I need to get a sample array from the microphone buffer. Or do I need to do the reading on the recorded CAF file?
Thanks a lot! Please help me with easy explanations or code examples. iOS terminology is not mine yet ;-)
If you don't mind floating point samples and 48kHz, you can quickly get audio data from the microphone like so:
let engine = AVAudioEngine() // instance variable
func setup() {
    let input = engine.inputNode!
    let bus = 0
    input.installTapOnBus(bus, bufferSize: 512, format: input.inputFormatForBus(bus)) { (buffer, time) -> Void in
        let samples = buffer.floatChannelData[0]
        // audio callback, samples in samples[0]...samples[buffer.frameLength-1]
    }
    try! engine.start()
}
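If you specifically need 16-bit integer samples like Android's short[], one option is to convert the float samples yourself inside the tap. A rough sketch in current Swift syntax (untested, so treat the details as an assumption rather than a recipe):

import AVFoundation

let captureEngine = AVAudioEngine()

func startCapture() throws {
    let input = captureEngine.inputNode
    let bus = 0
    // The tap delivers the hardware format (typically Float32 at 44.1 or 48 kHz).
    input.installTap(onBus: bus, bufferSize: 512, format: input.inputFormat(forBus: bus)) { buffer, _ in
        guard let floats = buffer.floatChannelData?[0] else { return }
        let frameCount = Int(buffer.frameLength)
        var samples = [Int16](repeating: 0, count: frameCount)
        for i in 0..<frameCount {
            // Clamp to [-1, 1] and scale to the Int16 range.
            samples[i] = Int16(max(-1.0, min(1.0, floats[i])) * Float(Int16.max))
        }
        // `samples` now plays the role of Android's short[] for this buffer.
    }
    try captureEngine.start()
}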
Is it possible to set the audio format for an AUGraphAddRenderNotify callback? If not, is it possible just to see what the format is at init time?
I have a very simple AUGraph which plays audio from a kAudioUnitSubType_AudioFilePlayer into a kAudioUnitSubType_RemoteIO. I'm doing some live processing on the audio, so I've added an AUGraphAddRenderNotify callback to the graph to do it there. This all works fine, but when I initialise the graph I need to set up a couple of buffers and some other data for my processing, and I need to know what format will be supplied in the callback. (On some devices it's interleaved, on others it's not; this is fine, I just need to know.)
Here's the setup:
NewAUGraph(&audioUnitGraph);
AUNode playerNode;
AUNode outputNode;
AudioComponentDescription playerDescription = {
    .componentType = kAudioUnitType_Generator,
    .componentSubType = kAudioUnitSubType_AudioFilePlayer,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};

AudioComponentDescription outputDescription = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(audioUnitGraph, &playerDescription, &playerNode);
AUGraphAddNode(audioUnitGraph, &outputDescription, &outputNode);
AUGraphOpen(audioUnitGraph);
AUGraphNodeInfo(audioUnitGraph, playerNode, NULL, &playerAudioUnit);
AUGraphNodeInfo(audioUnitGraph, outputNode, NULL, &outputAudioUnit);
// Tried adding all manner of AudioUnitSetProperty() calls here to set the AU formats
AUGraphConnectNodeInput(audioUnitGraph, playerNode, 0, outputNode, 0);
AUGraphAddRenderNotify(audioUnitGraph, render, (__bridge void *)self);
AUGraphInitialize(audioUnitGraph);
// Some time later...
// - Set up audio file in the file player
// - Start the graph with AUGraphStart()
I can understand that altering the formats used by the two audio units may not have any effect on the format "seen" at the point where the AUGraph renders into its callback (as this is downstream of them), but surely there is a way to know at init time what that format will be?
I'm currently working on a VOIP project for iOS.
I use AudioUnits to get data from the mic and play sounds.
My main app is written in C# (Xamarin) and uses a C++ library for faster audio and codec processing.
To test the input/output result, I'm currently testing recording & playback on the same device:
- store the mic audio data in a buffer in the recordingCallback
- play the data from the buffer in the playbackCallback
That works as expected; the voice quality is good.
I need to save the incoming audio data from the mic to a raw PCM file.
I have done that, but the resulting file only contains some short "beep" signals.
So my question is:
What audio settings do I need so that I can hear my voice (real audio signals) in the resulting raw PCM file instead of short beep sounds?
Does anyone have an idea of what could be wrong, or what I have to do so that I can replay the resulting PCM file correctly?
My current format settings are (C# code):
int framesPerPacket = 1;
int channelsPerFrame = 1;
int bitsPerChannel = 16;
int bytesPerFrame = bitsPerChannel / 8 * channelsPerFrame;
int bytesPerPacket = bytesPerFrame * framesPerPacket;
AudioStreamBasicDescription audioFormat = new AudioStreamBasicDescription ()
{
    SampleRate = 8000,
    Format = AudioFormatType.LinearPCM,
    FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked | AudioFormatFlags.LinearPCMIsAlignedHigh,
    BitsPerChannel = bitsPerChannel,
    ChannelsPerFrame = channelsPerFrame,
    BytesPerFrame = bytesPerFrame,
    FramesPerPacket = framesPerPacket,
    BytesPerPacket = bytesPerPacket,
    Reserved = 0
};
Additional C# settings (here in short without error checking):
AVAudioSession session = AVAudioSession.SharedInstance();
NSError error = null;
session.SetCategory(AVAudioSession.CategoryPlayAndRecord, out error);
session.SetPreferredIOBufferDuration(Config.packetLength, out error);
session.SetPreferredSampleRate(Format.samplingRate,out error);
session.SetActive(true,out error);
My current recording callback in short (only for PCM file saving) (C++ code):
OSStatus NotSoAmazingAudioEngine::recordingCallback(void *inRefCon,
                                                    AudioUnitRenderActionFlags *ioActionFlags,
                                                    const AudioTimeStamp *inTimeStamp,
                                                    UInt32 inBusNumber,
                                                    UInt32 inNumberFrames,
                                                    AudioBufferList *ioData)
{
    std::pair<BufferData*, int> bufferInfo = _sendBuffer.getNextEmptyBufferList();

    AudioBufferList* bufferList = new AudioBufferList();
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mData = NULL;

    OSStatus status = AudioUnitRender(_instance->_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, bufferList);

    if (_instance->checkStatus(status))
    {
        if (fout != NULL)   // fout is a "FILE*"
        {
            fwrite(bufferList->mBuffers[0].mData, sizeof(short), bufferList->mBuffers[0].mDataByteSize / sizeof(short), fout);
        }
    }

    delete bufferList;
    return noErr;
}
Background info on why I need a raw PCM file:
To compress the audio data I'd like to use the Opus codec.
With the codec I have the problem that there is a tiny "tick" at the end of each frame:
With a frame size of 60 ms I can barely hear them; at 20 ms it's annoying; at 10 ms frame sizes my own voice can't be heard because of the ticking (for the VOIP application I'm trying to get 10 ms frames).
I don't encode & decode in the callback functions (I encode/decode the data in the functions which I use to transfer audio data from the "micbuffer" to the "playbuffer").
And every time the playbackCallback wants to play some data, there is a frame in my buffer.
I have also ruled out my Opus encoding/decoding functions as the error source, because if I read PCM data from a raw PCM file, encode & decode it afterwards, and save it to a new raw PCM file, the ticking does not appear (if I play the result file with "Softe Audio Tools", the output audio is OK).
To find out what causes the ticking, I'd like to save the raw PCM data from the mic to a file to investigate the issue further.
I found the solution myself:
My PCM player expected 44100 Hz stereo, but my file contained only 8000 Hz mono audio, so my saved file was played back about 10x too fast.