How to convert CMSampleBuffer to Data in Swift?

I need to convert a CMSampleBuffer to Data. I am using a third-party framework for an audio-related task. That framework gives me streaming (i.e. real-time) audio as a CMSampleBuffer object.
Like this:
func didAudioStreaming(audioSample: CMSampleBuffer!) {
    // Here I need to convert this to Data,
    // because I am using the gRPC framework for audio recognition.
}
Please provide the steps to convert a CMSampleBuffer to Data.
FYI, here is the format description of the incoming buffer:
let formatDesc:CMFormatDescription? = CMSampleBufferGetFormatDescription(audioSample)
<CMAudioFormatDescription 0x17010d890 [0x1b453ebb8]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 16000.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
FormatList Array: {(null)}
}
extensions: {(null)}
}

Try the code below to convert a CMSampleBuffer to NSData (note that this snippet assumes the sample buffer contains an image buffer; see the EDIT below for audio):
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
let height = CVPixelBufferGetHeight(imageBuffer!)
let src_buff = CVPixelBufferGetBaseAddress(imageBuffer!)
let data = NSData(bytes: src_buff, length: bytesPerRow * height)
CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
EDIT: For audio buffers, use the code below:
var audioBufferList = AudioBufferList()
var data = Data()
var blockBuffer : CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, nil, &audioBufferList, MemoryLayout<AudioBufferList>.size, nil, nil, 0, &blockBuffer)
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for audioBuffer in buffers {
let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
data.append(frame!, count: Int(audioBuffer.mDataByteSize))
}
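On Swift 4.2 and later the same call takes argument labels; a minimal sketch of the equivalent call (labels as they appear in recent SDKs, so treat this as an approximation rather than part of the original answer):
var audioBufferList = AudioBufferList()
var blockBuffer: CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    bufferListSizeNeededOut: nil,
    bufferListOut: &audioBufferList,
    bufferListSize: MemoryLayout<AudioBufferList>.size,
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: 0,
    blockBufferOut: &blockBuffer)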

Using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer requires calling CFRelease(blockBuffer) at some point (at least in Objective-C; in Swift the returned CF object is memory-managed for you), because the buffer is retained; if it is never released, the pool of buffers will eventually run dry and no new CMSampleBuffers will be generated.
I'd suggest getting the data directly using the following:
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t lengthAtOffset;
size_t totalLength;
char *data;
CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset, &totalLength, &data);
NSData *audioData = [NSData dataWithBytes:data length:totalLength];
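If you prefer to stay in Swift, roughly the same data-buffer approach might look like this (a sketch in the Swift 3/4 style used elsewhere in this thread; it assumes the block buffer is contiguous, which is normally the case for LPCM capture output):
import CoreMedia
import Foundation

func audioData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }
    var lengthAtOffset = 0
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<Int8>?
    // Ask the block buffer for a pointer to its (contiguous) bytes.
    let status = CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset, &totalLength, &dataPointer)
    guard status == kCMBlockBufferNoErr, let pointer = dataPointer else { return nil }
    // Copy the bytes into a Data value.
    return Data(bytes: pointer, count: totalLength)
}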

Related

Get microphone input using Audio Queue in Swift 3

I am developing an app that records voice via the built-in microphone and sends it to a server live. So I need to get the byte stream from the microphone while recording.
After googling and stack-overflowing for quite a while, I think I figured out how it should work, but it does not. I think using Audio Queues might be the way to go.
Here is what I tried so far:
func test() {
func callback(_ a :UnsafeMutableRawPointer?, _ b : AudioQueueRef, _ c :AudioQueueBufferRef, _ d :UnsafePointer<AudioTimeStamp>, _ e :UInt32, _ f :UnsafePointer<AudioStreamPacketDescription>?) {
print("test")
}
var inputQueue: AudioQueueRef? = nil
var aqData = AQRecorderState(
mDataFormat: AudioStreamBasicDescription(
mSampleRate: 16000,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: 0,
mBytesPerPacket: 2,
mFramesPerPacket: 1, // Must be set to 1 for uncompressed formats
mBytesPerFrame: 2,
mChannelsPerFrame: 1, // Mono recording
mBitsPerChannel: 2 * 8, // 2 Bytes
mReserved: 0), // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
mQueue: inputQueue!,
mBuffers: [AudioQueueBufferRef](),
bufferByteSize: 32,
mCurrentPacket: 0,
mIsRunning: true)
var error = AudioQueueNewInput(&aqData.mDataFormat,
callback,
nil,
nil,
nil,
0,
&inputQueue)
AudioQueueStart(inputQueue!, nil)
}
It compiles and the app starts, but as soon as I call test() I get an exception:
fatal error: unexpectedly found nil while unwrapping an Optional value
The exception is caused by
mQueue: inputQueue!
I understand why this happens (inputQueue has no value) but I don't know how to initialise inputQueue correctly. The problem is that Audio Queues are very poorly documented for Swift users and I didn't find any working example on the internet.
Can anybody tell me what I am doing wrong?
Use AudioQueueNewInput(...) (or the output variant) to initialize your audio queue before using it:
let sampleRate = 16000
let numChannels = 2
var inFormat = AudioStreamBasicDescription(
mSampleRate: Double(sampleRate),
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
mBytesPerPacket: UInt32(numChannels * MemoryLayout<UInt32>.size),
mFramesPerPacket: 1,
mBytesPerFrame: UInt32(numChannels * MemoryLayout<UInt32>.size),
mChannelsPerFrame: UInt32(numChannels),
mBitsPerChannel: UInt32(8 * (MemoryLayout<UInt32>.size)),
mReserved: UInt32(0))
var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)
var aqData = AQRecorderState(
mDataFormat: inFormat,
mQueue: inQueue!, // inQueue is initialized now and can be unwrapped
mBuffers: [AudioQueueBufferRef](),
bufferByteSize: 32,
mCurrentPacket: 0,
mIsRunning: true)
Find details in Apple's documentation.
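After creating the queue you still need to allocate a few buffers, enqueue them, and start the queue. A rough sketch of that remaining setup (buffer count and size below are placeholder values, not from the original answer):
import AudioToolbox

let numberOfBuffers = 3
let bufferByteSize: UInt32 = 4096

if let queue = inQueue {
    for _ in 0..<numberOfBuffers {
        var bufferRef: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, bufferByteSize, &bufferRef)
        if let buffer = bufferRef {
            // Hand the empty buffer to the queue; the callback receives it back filled.
            AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
        }
    }
    AudioQueueStart(queue, nil)
}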
This code from our project works fine:
AudioBuffer * buff;
AudioQueueRef queue;
AudioStreamBasicDescription fmt = { 0 };
static void HandleInputBuffer (
void *aqData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc
) {
}
- (void) initialize {
thisClass = self;
__block struct AQRecorderState aqData;
NSError * error;
fmt.mFormatID = kAudioFormatLinearPCM;
fmt.mSampleRate = 44100.0;
fmt.mChannelsPerFrame = 1;
fmt.mBitsPerChannel = 16;
fmt.mFramesPerPacket = 1;
fmt.mBytesPerFrame = sizeof (SInt16);
fmt.mBytesPerPacket = sizeof (SInt16);
fmt.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
OSStatus status = AudioQueueNewInput ( // 1
&fmt, // 2
HandleInputBuffer, // 3
&aqData, // 4
NULL, // 5
kCFRunLoopCommonModes, // 6
0, // 7
&queue // 8
);
AudioQueueBufferRef buffers[kNumberBuffers];
UInt32 bufferByteSize = kSamplesSize;
for (int i = 0; i < kNumberBuffers; ++i) { // 1
OSStatus allocateStatus;
allocateStatus = AudioQueueAllocateBuffer ( // 2
queue, // 3
bufferByteSize, // 4
&buffers[i] // 5
);
OSStatus enqueStatus;
NSLog(#"allocateStatus = %d" , allocateStatus);
enqueStatus = AudioQueueEnqueueBuffer ( // 6
queue, // 7
buffers[i], // 8
0, // 9
NULL // 10
);
NSLog(#"enqueStatus = %d" , enqueStatus);
}
AudioQueueStart ( // 3
queue, // 4
NULL // 5
);
}

Duplicate / Copy CVPixelBufferRef with CVPixelBufferCreate

I need to create a copy of a CVPixelBufferRef in order to be able to manipulate the original pixel buffer in a bit-wise fashion using the values from the copy. I cannot seem to achieve this with CVPixelBufferCreate, or with CVPixelBufferCreateWithBytes.
According to this question, it could possibly also be done with memcpy(). However, there is no explanation of how this would be achieved, or of which Core Video library calls would be needed.
This seems to work:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Get pixel buffer info
const int kBytesPerPixel = 4;
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int bufferWidth = (int)CVPixelBufferGetWidth(pixelBuffer);
int bufferHeight = (int)CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
uint8_t *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
// Copy the pixel buffer
CVPixelBufferRef pixelBufferCopy = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, kCVPixelFormatType_32BGRA, NULL, &pixelBufferCopy);
CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
uint8_t *copyBaseAddress = CVPixelBufferGetBaseAddress(pixelBufferCopy);
memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
// Do what needs to be done with the 2 pixel buffers, then unlock them
CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
ooOlly's code was not working for me with YUV pixel buffers in all cases (a green line at the bottom and a SIGTRAP in memcpy), so the following works in Swift for YUV pixel buffers from the camera:
var copyOut: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer), CVPixelBufferGetPixelFormatType(pixelBuffer), nil, &copyOut)
let copy = copyOut!
CVPixelBufferLockBaseAddress(copy, [])
CVPixelBufferLockBaseAddress(pixelBuffer, [])
let ydestPlane = CVPixelBufferGetBaseAddressOfPlane(copy, 0)
let ysrcPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
memcpy(ydestPlane, ysrcPlane, CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0))
let uvdestPlane = CVPixelBufferGetBaseAddressOfPlane(copy, 1)
let uvsrcPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
memcpy(uvdestPlane, uvsrcPlane, CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1) * CVPixelBufferGetHeightOfPlane(pixelBuffer, 1))
CVPixelBufferUnlockBaseAddress(copy, [])
CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
Better error handling than the force unwrap is strongly suggested, of course.
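For example, the force unwrap could be replaced with a guard that returns nil on failure (only a sketch; the plane copy is the same as above):
import CoreVideo

func copyPixelBuffer(_ pixelBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    var copyOut: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     CVPixelBufferGetWidth(pixelBuffer),
                                     CVPixelBufferGetHeight(pixelBuffer),
                                     CVPixelBufferGetPixelFormatType(pixelBuffer),
                                     nil,
                                     &copyOut)
    guard status == kCVReturnSuccess, let copy = copyOut else { return nil }
    // Lock both buffers, memcpy the Y and UV planes exactly as above, then unlock.
    return copy
}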
Maxi Mus's code only deals with RGB/BGR buffers, so for YUV buffers the code below should work.
// Copy the pixel buffer
CVPixelBufferRef pixelBufferCopy = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, bufferWidth, bufferHeight, pixelFormat, NULL, &pixelBufferCopy);
CVPixelBufferLockBaseAddress(pixelBufferCopy, 0);
//BGR
// uint8_t *copyBaseAddress = CVPixelBufferGetBaseAddress(pixelBufferCopy);
// memcpy(copyBaseAddress, baseAddress, bufferHeight * bytesPerRow);
uint8_t *yDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, 0);
//YUV
uint8_t *yPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
memcpy(yDestPlane, yPlane, bufferWidth * bufferHeight);
uint8_t *uvDestPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBufferCopy, 1);
uint8_t *uvPlane = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
memcpy(uvDestPlane, uvPlane, bufferWidth * bufferHeight/2);
CVPixelBufferUnlockBaseAddress(pixelBufferCopy, 0);

Create a silent audio CMSampleBufferRef

How do you create a silent audio CMSampleBufferRef in Swift? I am looking to append silent CMSampleBufferRefs to an instance of AVAssetWriterInput.
You don't say what format you want your zeros (integer/floating point, mono/stereo, sample rate), but maybe it doesn't matter. Anyway, here's one way to create a silent, CD-audio-style CMSampleBuffer in Swift.
func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
let bytesPerFrame = UInt32(2 * numChannels)
let blockSize = nFrames*Int(bytesPerFrame)
var block: CMBlockBuffer?
var status = CMBlockBufferCreateWithMemoryBlock(
kCFAllocatorDefault,
nil,
blockSize, // blockLength
nil, // blockAllocator
nil, // customBlockSource
0, // offsetToData
blockSize, // dataLength
0, // flags
&block
)
assert(status == kCMBlockBufferNoErr)
// we seem to get zeros from the above, but I can't find it documented. so... memset:
status = CMBlockBufferFillDataBytes(0, block!, 0, blockSize)
assert(status == kCMBlockBufferNoErr)
var asbd = AudioStreamBasicDescription(
mSampleRate: sampleRate,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
mBytesPerPacket: bytesPerFrame,
mFramesPerPacket: 1,
mBytesPerFrame: bytesPerFrame,
mChannelsPerFrame: numChannels,
mBitsPerChannel: 16,
mReserved: 0
)
var formatDesc: CMAudioFormatDescription?
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, nil, 0, nil, nil, &formatDesc)
assert(status == noErr)
var sampleBuffer: CMSampleBuffer?
// born ready
status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
kCFAllocatorDefault,
block, // dataBuffer
formatDesc!,
nFrames, // numSamples
CMTimeMake(startFrm, Int32(sampleRate)), // sbufPTS
nil, // packetDescriptions
&sampleBuffer
)
assert(status == noErr)
return sampleBuffer
}
Doesn't it make you sorry you asked? Do you really need silent CMSampleBuffers? Can't you insert silence into an AVAssetWriterInput by moving the presentation time stamp forward?
Updated for Xcode 10.3 and Swift 5.0.1.
Don't forget to import CoreMedia.
import Foundation
import CoreMedia
class CMSampleBufferFactory
{
static func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
let bytesPerFrame = UInt32(2 * numChannels)
let blockSize = nFrames*Int(bytesPerFrame)
var block: CMBlockBuffer?
var status = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: nil,
blockLength: blockSize,
blockAllocator: nil,
customBlockSource: nil,
offsetToData: 0,
dataLength: blockSize,
flags: 0,
blockBufferOut: &block
)
assert(status == kCMBlockBufferNoErr)
guard let eBlock = block else { return nil }
// we seem to get zeros from the above, but I can't find it documented. so... memset:
status = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: blockSize)
assert(status == kCMBlockBufferNoErr)
var asbd = AudioStreamBasicDescription(
mSampleRate: sampleRate,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
mBytesPerPacket: bytesPerFrame,
mFramesPerPacket: 1,
mBytesPerFrame: bytesPerFrame,
mChannelsPerFrame: numChannels,
mBitsPerChannel: 16,
mReserved: 0
)
var formatDesc: CMAudioFormatDescription?
status = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault, asbd: &asbd, layoutSize: 0, layout: nil, magicCookieSize: 0, magicCookie: nil, extensions: nil, formatDescriptionOut: &formatDesc)
assert(status == noErr)
var sampleBuffer: CMSampleBuffer?
status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
allocator: kCFAllocatorDefault,
dataBuffer: eBlock,
formatDescription: formatDesc!,
sampleCount: nFrames,
presentationTimeStamp: CMTimeMake(value: startFrm, timescale: Int32(sampleRate)),
packetDescriptions: nil,
sampleBufferOut: &sampleBuffer
)
assert(status == noErr)
return sampleBuffer
}
}
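A hypothetical usage sketch (not part of the original answer): appending half a second of mono 44.1 kHz silence to an AVAssetWriterInput, where writerInput and currentFrame are assumed to exist in your writing pipeline:
import AVFoundation

func appendSilence(to writerInput: AVAssetWriterInput, at currentFrame: Int64) {
    guard writerInput.isReadyForMoreMediaData,
          let silence = CMSampleBufferFactory.createSilentAudio(startFrm: currentFrame,
                                                                nFrames: 22050, // 0.5 s at 44.1 kHz
                                                                sampleRate: 44100,
                                                                numChannels: 1)
    else { return }
    _ = writerInput.append(silence)
}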
You need to create a block buffer using CMBlockBufferCreateWithMemoryBlock().
Fill the block buffer with a bunch of zeros and then pass it into CMAudioSampleBufferCreateWithPacketDescriptions().
Disclaimer: I haven't actually done this in Swift. I attempted it but found myself fighting the compiler at every turn, so I switched to Obj-C. The Core Media framework is a low-level C framework and was a lot easier to use without screwing around with Swift's type system. I know this isn't the answer you're looking for, but hopefully it will point you in the right direction.
Example

AudioBufferList in Swift

I'm trying to convert the following code to Swift:
CMSampleBufferRef sampleBuffer = [assetOutput copyNextSampleBuffer];
CMBlockBufferRef blockBuffer;
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(AudioBufferList), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
for (NSUInteger i = 0; i < audioBufferList.mNumberBuffers; i++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[i];
[audioStream writeData:audioBuffer.mData maxLength:audioBuffer.mDataByteSize];
}
CFRelease(blockBuffer);
CFRelease(sampleBuffer);
I seem to be unable to iterate over the audioBuffer list no matter what I try. Does anyone have an answer?
Code converted to Swift 3:
var sampleBuffer: CMSampleBuffer? = assetOutput.copyNextSampleBuffer()
let audioStream = OutputStream(toMemory: ()) // placeholder; use your real output stream and open it before writing
var blockBuffer: CMBlockBuffer?
var audioBufferList = AudioBufferList()
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer!, nil, &audioBufferList, MemoryLayout<AudioBufferList>.size, nil, nil, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer)
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for audioBuffer in buffers {
let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
audioStream.write(frame!, maxLength: Int(audioBuffer.mDataByteSize))
}
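Alternatively, the loop can use UnsafeMutableAudioBufferListPointer, Core Audio's wrapper that makes an AudioBufferList behave like a Swift collection (a sketch using the same variables as above):
for audioBuffer in UnsafeMutableAudioBufferListPointer(&audioBufferList) {
    if let bytes = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self) {
        audioStream.write(bytes, maxLength: Int(audioBuffer.mDataByteSize))
    }
}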

AudioConverterFillComplexBuffer work with Internet streamed mp3

I am currently streaming mp3 audio over the Internet. I use AudioFileStream to parse the mp3 stream that comes through a CFReadStreamRef, decode the mp3 using AudioConverterFillComplexBuffer, copy the converted PCM data into a ring buffer, and finally play the PCM using RemoteIO.
The problem I am currently facing is that AudioConverterFillComplexBuffer always returns 0 (no error), but the conversion result seems incorrect. In detail, I notice:
A. The UInt32 *ioOutputDataPacketSize keeps the same value I passed in.
B. convertedData.mBuffers[0].mDataByteSize is always set to the size of the output buffer (no matter how big the buffer is).
C. I can only hear clicking noise in the output data.
Below is my procedure for rendering the audio.
The same procedure works for my Audio Queue implementation, so I don't believe I did anything wrong in either the place where I invoke AudioConverterFillComplexBuffer or its callback.
I have been stuck on this issue for a long time. Any help will be highly appreciated.
Open an AudioFileStream.
// create an audio file stream parser
AudioFileTypeID fileTypeHint = kAudioFileMP3Type;
AudioFileStreamOpen(self, MyPropertyListenerProc, MyPacketsProc, fileTypeHint, &audioFileStream);
Handle the parsed data in the callback function ("MyPacketsProc").
void MyPacketsProc(void * inClientData,
UInt32 inNumberBytes,
UInt32 inNumberPackets,
const void * inInputData,
AudioStreamPacketDescription *inPacketDescriptions)
{
@synchronized(self)
{
// Init the audio converter.
if (!audioConverter)
AudioConverterNew(&asbd, &asbd_out, &audioConverter);
struct mp3Data mSettings;
memset(&mSettings, 0, sizeof(mSettings));
UInt32 packetsPerBuffer = 0;
UInt32 outputBufferSize = 1024 * 32; // 32 KB is a good starting point.
UInt32 sizePerPacket = asbd.mBytesPerPacket;
// Calculate the size per buffer.
// Variable Bit Rate Data.
if (sizePerPacket == 0)
{
UInt32 size = sizeof(sizePerPacket);
AudioConverterGetProperty(audioConverter, kAudioConverterPropertyMaximumOutputPacketSize, &size, &sizePerPacket);
if (sizePerPacket > outputBufferSize)
outputBufferSize = sizePerPacket;
packetsPerBuffer = outputBufferSize / sizePerPacket;
}
//CBR
else
packetsPerBuffer = outputBufferSize / sizePerPacket;
// Prepare the input data for the callback.
mSettings.inputBuffer.mDataByteSize = inNumberBytes;
mSettings.inputBuffer.mData = (void *)inInputData;
mSettings.inputBuffer.mNumberChannels = 1;
mSettings.numberPackets = inNumberPackets;
mSettings.packetDescription = inPacketDescriptions;
// Set up our output buffers
UInt8 * outputBuffer = (UInt8*)malloc(sizeof(UInt8) * outputBufferSize);
memset(outputBuffer, 0, outputBufferSize);
// describe output data buffers into which we can receive data.
AudioBufferList convertedData;
convertedData.mNumberBuffers = 1;
convertedData.mBuffers[0].mNumberChannels = 1;
convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
convertedData.mBuffers[0].mData = outputBuffer;
// Convert.
UInt32 ioOutputDataPackets = packetsPerBuffer;
OSStatus result = AudioConverterFillComplexBuffer(audioConverter,
converterComplexInputDataProc,
&mSettings,
&ioOutputDataPackets,
&convertedData,
NULL
);
// Enqueue the output PCM data.
TPCircularBufferProduceBytes(&m_pcmBuffer, convertedData.mBuffers[0].mData, convertedData.mBuffers[0].mDataByteSize);
free(outputBuffer);
}
}
Feed the audio converter from its callback function ("converterComplexInputDataProc").
OSStatus converterComplexInputDataProc(AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription** ioDataPacketDescription,
void* inUserData)
{
struct mp3Data *THIS = (struct mp3Data *)inUserData;
if (THIS->inputBuffer.mDataByteSize > 0)
{
*ioNumberDataPackets = THIS->numberPackets;
ioData->mNumberBuffers = 1;
ioData->mBuffers[0].mDataByteSize = THIS->inputBuffer.mDataByteSize;
ioData->mBuffers[0].mData = THIS->inputBuffer.mData;
ioData->mBuffers[0].mNumberChannels = 1;
if (ioDataPacketDescription)
*ioDataPacketDescription = THIS->packetDescription;
}
else
*ioDataPacketDescription = 0;
return 0;
}
Playback using the RemoteIO component.
The input and output AudioStreamBasicDescription.
Input:
Sample Rate: 16000
Format ID: .mp3
Format Flags: 0
Bytes per Packet: 0
Frames per Packet: 576
Bytes per Frame: 0
Channels per Frame: 1
Bits per Channel: 0
Output:
Sample Rate: 44100
Format ID: lpcm
Format Flags: 3116
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 1
Bits per Channel: 32
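For reference, here are those two descriptions written out as code (corresponding to asbd and asbd_out in the code above; the values are copied from the listing, and reading 3116 as kAudioFormatFlagsAudioUnitCanonical, i.e. signed integer, packed, non-interleaved, 8.24 fixed point, is my interpretation):
import AudioToolbox

var asbdIn = AudioStreamBasicDescription(
    mSampleRate: 16000,
    mFormatID: kAudioFormatMPEGLayer3,
    mFormatFlags: 0,
    mBytesPerPacket: 0,        // variable-size MP3 packets
    mFramesPerPacket: 576,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,
    mReserved: 0)

var asbdOut = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: 3116,        // signed int | packed | non-interleaved | 8.24 fixed point
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 32,
    mReserved: 0)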
