Mix two AUNodes in the Spotify iOS SDK

I'm trying to use the dynamics processor and a set of filters to compress a specific frequency band inside the Spotify SDK method connectOutputBus, but when I mix the nodes in a kAudioUnitSubType_MultiChannelMixer, only the sound of the first node I add comes out.
Note: I actually apply filters to sourceNodeCopy to remove the frequencies that will be compressed on sourceNode, but I omitted them to keep things short.
Here's the code:
override func connectOutputBus(_ sourceOutputBusNumber: UInt32, ofNode sourceNode: AUNode, toInputBus destinationInputBusNumber: UInt32, ofNode destinationNode: AUNode, in graph: AUGraph!) throws {
    let sourceNodeCopy = sourceNode // original node without the harsh frequencies

    // Create a band-pass filter for the harsh frequencies
    var filterDescription = AudioComponentDescription()
    filterDescription.componentType = kAudioUnitType_Effect
    filterDescription.componentSubType = kAudioUnitSubType_BandPassFilter
    filterDescription.componentManufacturer = kAudioUnitManufacturer_Apple
    filterDescription.componentFlags = 0
    filterDescription.componentFlagsMask = 0
    AUGraphAddNode(graph, &filterDescription, &filterNode!)   // add the filter node
    AUGraphNodeInfo(graph, filterNode!, nil, &filterUnit!)    // get the audio unit from the node
    AudioUnitInitialize(filterUnit!)                          // initialize the audio unit
    // Set filter params
    AudioUnitSetParameter(filterUnit!, kBandpassParam_CenterFrequency, kAudioUnitScope_Global, 0, 10038, 0)

    // Create a dynamics processor to compress the frequency band
    var dynamicProcessorDescription = AudioComponentDescription()
    dynamicProcessorDescription.componentType = kAudioUnitType_Effect
    dynamicProcessorDescription.componentSubType = kAudioUnitSubType_DynamicsProcessor
    dynamicProcessorDescription.componentManufacturer = kAudioUnitManufacturer_Apple
    dynamicProcessorDescription.componentFlags = 0
    dynamicProcessorDescription.componentFlagsMask = 0
    // Add the dynamics processor node
    AUGraphAddNode(graph, &dynamicProcessorDescription, &dynamicProcessorNode)
    AUGraphNodeInfo(graph, dynamicProcessorNode, nil, &dynamicProcessorUnit)
    AudioUnitInitialize(dynamicProcessorUnit!)
    // Set compressor params
    AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_Threshold, kAudioUnitScope_Global, 0, -35, 0)
    AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_AttackTime, kAudioUnitScope_Global, 0, 0.02, 0)
    AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_ReleaseTime, kAudioUnitScope_Global, 0, 0.04, 0)
    AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_HeadRoom, kAudioUnitScope_Global, 0, 0, 0)

    // Mixer
    var mixerDescription = AudioComponentDescription()
    mixerDescription.componentType = kAudioUnitType_Mixer
    mixerDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer
    mixerDescription.componentManufacturer = kAudioUnitManufacturer_Apple
    mixerDescription.componentFlags = 0
    mixerDescription.componentFlagsMask = 0
    AUGraphAddNode(graph, &mixerDescription, &audioEffectsController.mixerNode)
    AUGraphNodeInfo(graph, audioEffectsController.mixerNode, nil, &audioEffectsController.mixerUnit)
    AudioUnitInitialize(audioEffectsController.mixerUnit!)
    AudioUnitSetParameter(mixerUnit!, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 0, 1.0, 0)
    AudioUnitSetParameter(mixerUnit!, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, 1.0, 0)

    // Connect the nodes
    AUGraphConnectNodeInput(graph, sourceNode, sourceOutputBusNumber, filterNode, 0)
    AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, dynamicProcessorNode, 0)
    AUGraphConnectNodeInput(graph, sourceNodeCopy, sourceOutputBusNumber, mixerNode, 0)
    AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, mixerNode, 0)

    // Connect the mixer to the output
    AUGraphConnectNodeInput(graph, audioEffectsController.mixerNode, 0, destinationNode, destinationInputBusNumber)
}

In your code you connect the nodes like this:
AUGraphConnectNodeInput(graph, sourceNodeCopy, sourceOutputBusNumber, mixerNode, 0)
AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, mixerNode, 0)
This connects sourceNodeCopy and filterNode to the same input bus (0) of the mixer node, but a mixer input bus can only take one connection, so only one of the two sources is heard.
You should try this instead:
AUGraphConnectNodeInput(graph, sourceNodeCopy, sourceOutputBusNumber, mixerNode, 0)
AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, mixerNode, 1)
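One thing that can also bite here: the multichannel mixer is sometimes left with only one input bus configured, in which case the connection to bus 1 can fail or stay silent. A minimal sketch, assuming mixerUnit is the same unit fetched with AUGraphNodeInfo above, that raises the input bus count and sets a volume on the second bus before the connections are made:

// Sketch only: make sure the mixer exposes two input buses and bus 1 is audible.
var busCount: UInt32 = 2
AudioUnitSetProperty(mixerUnit!,
                     kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input,
                     0,
                     &busCount,
                     UInt32(MemoryLayout<UInt32>.size))
AudioUnitSetParameter(mixerUnit!,
                      kMultiChannelMixerParam_Volume,
                      kAudioUnitScope_Input,
                      1,      // second input bus
                      1.0,
                      0)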

Related

How to use a stereo source in 3D Mixer Unit (kAudioUnitSubType_SpatialMixer)?

There are three audio units:
equalizerUnit (kAudioUnitSubType_NBandEQ),
3DmixerUnit (kAudioUnitSubType_SpatialMixer),
remoteIOUnit (kAudioUnitSubType_RemoteIO).
With an AUGraph and nodes (equalizerNode, 3DmixerNode, remoteNode), they are correctly connected to each other:
equalizerUnit -> 3DmixerUnit -> remoteIOUnit.
One problem: to connect equalizerUnit to 3DmixerUnit, I use a converter unit (kAudioUnitSubType_AUConverter), on whose output I set this AudioStreamBasicDescription:
.mSampleRate = 44100.00,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved,
.mFramesPerPacket = 1,
.mChannelsPerFrame = 1,
.mBytesPerFrame = 2,
.mBitsPerChannel = 16,
.mBytesPerPacket = 2
As a result, I get mono sound from the output scope of the 3DmixerUnit.
How do I get stereo through the 3DmixerUnit?
I would appreciate any help!
P.S. Some additional info:
The main problem is that I need to feed a stereo signal into two mono inputs of the 3DmixerUnit.
Apple's 3D Mixer Audio Unit guide states:
To use a stereo source, you may treat its left and right channels as two independent single-channel sources, and then feed each side of the stereo stream to its own input bus.
https://developer.apple.com/library/ios/qa/qa1695/_index.html
I cannot figure out how to split the stereo output of my equalizerUnit into two independent single-channel sources. How does one do this?
Perhaps this will save someone some time in the future.
canonicalAudioStreamBasicDescription = (AudioStreamBasicDescription) {
.mSampleRate = 44100.00,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked,
.mFramesPerPacket = 1,
.mChannelsPerFrame = 2,
.mBytesPerFrame = 4,
.mBitsPerChannel = 16,
.mBytesPerPacket = 4
};
canonicalAudioStreamBasicDescription3Dmixer = (AudioStreamBasicDescription) {
.mSampleRate = 44100.00,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked,
.mFramesPerPacket = 1,
.mChannelsPerFrame = 1,
.mBytesPerFrame = 2,
.mBitsPerChannel = 16,
.mBytesPerPacket = 2
};
canonicalAudioStreamBasicDescriptionNonInterleaved = (AudioStreamBasicDescription) {
.mSampleRate = 44100.00,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved,
.mFramesPerPacket = 1,
.mChannelsPerFrame = 2,
.mBytesPerFrame = 2,
.mBitsPerChannel = 16,
.mBytesPerPacket = 2
};
convertUnitDescription = (AudioComponentDescription) {
.componentType = kAudioUnitType_FormatConverter,
.componentSubType = kAudioUnitSubType_AUConverter,
.componentFlags = 0,
.componentFlagsMask = 0,
.componentManufacturer = kAudioUnitManufacturer_Apple
};
splittertUnitDescription = (AudioComponentDescription) {
.componentType = kAudioUnitType_FormatConverter,
.componentSubType = kAudioUnitSubType_Splitter,
.componentFlags = 0,
.componentFlagsMask = 0,
.componentManufacturer = kAudioUnitManufacturer_Apple
};
mixerDescription = (AudioComponentDescription){
.componentType = kAudioUnitType_Mixer,
.componentSubType = kAudioUnitSubType_SpatialMixer,
.componentFlags = 0,
.componentFlagsMask = 0,
.componentManufacturer = kAudioUnitManufacturer_Apple
};
AUGraphAddNode(audioGraph, &mixerDescription, &mixerNode);
AUGraphNodeInfo(audioGraph, mixerNode, &mixerDescription, &mixerUnit);
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(maxFramesPerSlice));
UInt32 busCount = 2;
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0, &busCount, sizeof(busCount));
Float64 graphSampleRate = 44100.0;
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Output, 0, &graphSampleRate, sizeof(graphSampleRate));
AudioUnitSetParameter(mixerUnit, kSpatialMixerParam_Distance, kAudioUnitScope_Input, 0, 1.0, 0);
AudioUnitSetParameter(mixerUnit, kSpatialMixerParam_Azimuth, kAudioUnitScope_Input, 0, -90, 0);
AudioUnitSetParameter(mixerUnit, kSpatialMixerParam_Distance, kAudioUnitScope_Input, 1, 1.0, 0);
AudioUnitSetParameter(mixerUnit, kSpatialMixerParam_Azimuth, kAudioUnitScope_Input, 1, 90, 0);
AUNode splitterNode;
AudioUnit splittertUnit;
AUGraphAddNode(audioGraph, &splittertUnitDescription, &splitterNode);
AUGraphNodeInfo(audioGraph, splitterNode, &splittertUnitDescription, &splittertUnit);
AUNode convertNodeFromInterlevantToNonInterleavedLeft;
AudioUnit convertUnitFromInterlevantToNonInterleavedLeft;
AUGraphAddNode(audioGraph, &convertUnitDescription, &convertNodeFromInterlevantToNonInterleavedLeft);
AUGraphNodeInfo(audioGraph, convertNodeFromInterlevantToNonInterleavedLeft, &convertUnitDescription, &convertUnitFromInterlevantToNonInterleavedLeft);
AudioUnitSetProperty(convertUnitFromInterlevantToNonInterleavedLeft, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &srcFormatFromEqualizer, sizeof(srcFormatFromEqualizer));
AudioUnitSetProperty(convertUnitFromInterlevantToNonInterleavedLeft, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &canonicalAudioStreamBasicDescriptionNonInterleaved, sizeof(canonicalAudioStreamBasicDescriptionNonInterleaved));
AudioUnitSetProperty(convertUnitFromInterlevantToNonInterleavedLeft, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(maxFramesPerSlice));
AUNode convertNodeFromInterlevantToNonInterleavedRight;
AudioUnit convertUnitFromInterlevantToNonInterleavedRight;
AUGraphAddNode(audioGraph, &convertUnitDescription, &convertNodeFromInterlevantToNonInterleavedRight);
AUGraphNodeInfo(audioGraph, convertNodeFromInterlevantToNonInterleavedRight, &convertUnitDescription, &convertUnitFromInterlevantToNonInterleavedRight);
AudioUnitSetProperty(convertUnitFromInterlevantToNonInterleavedRight, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &srcFormatFromEqualizer, sizeof(srcFormatFromEqualizer));
AudioUnitSetProperty(convertUnitFromInterlevantToNonInterleavedRight, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &canonicalAudioStreamBasicDescriptionNonInterleaved, sizeof(canonicalAudioStreamBasicDescriptionNonInterleaved));
AudioUnitSetProperty(convertUnitFromInterlevantToNonInterleavedRight, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(maxFramesPerSlice));
AUNode converterNodeFromNonInterleavedToMonoLeftChannel;
AudioUnit converUnitFromNonInterleavedToMonoLeftChannel;
SInt32 left[1] = {0};
UInt32 leftSize = (UInt32)sizeof(left);
AUGraphAddNode(audioGraph, &convertUnitDescription, &converterNodeFromNonInterleavedToMonoLeftChannel);
AUGraphNodeInfo(audioGraph, converterNodeFromNonInterleavedToMonoLeftChannel, &convertUnitDescription, &converUnitFromNonInterleavedToMonoLeftChannel);
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoLeftChannel, kAudioOutputUnitProperty_ChannelMap, kAudioUnitScope_Input, 0, &left, leftSize);
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoLeftChannel, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &canonicalAudioStreamBasicDescriptionNonInterleaved, sizeof(canonicalAudioStreamBasicDescriptionNonInterleaved));
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoLeftChannel, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &canonicalAudioStreamBasicDescription3Dmixer, sizeof(canonicalAudioStreamBasicDescription3Dmixer));
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoLeftChannel, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(maxFramesPerSlice));
AUNode converterNodeFromNonInterleavedToMonoRightChannel;
AudioUnit converUnitFromNonInterleavedToMonoRightChannel;
SInt32 right[1] = {1};
UInt32 rightSize = (UInt32)sizeof(right);
AUGraphAddNode(audioGraph, &convertUnitDescription, &converterNodeFromNonInterleavedToMonoRightChannel);
AUGraphNodeInfo(audioGraph, converterNodeFromNonInterleavedToMonoRightChannel, &convertUnitDescription, &converUnitFromNonInterleavedToMonoRightChannel);
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoRightChannel, kAudioOutputUnitProperty_ChannelMap, kAudioUnitScope_Input, 0, &right, rightSize);
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoRightChannel, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &canonicalAudioStreamBasicDescriptionNonInterleaved, sizeof(canonicalAudioStreamBasicDescriptionNonInterleaved));
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoRightChannel, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &canonicalAudioStreamBasicDescription3Dmixer, sizeof(canonicalAudioStreamBasicDescription3Dmixer));
AudioUnitSetProperty(converUnitFromNonInterleavedToMonoRightChannel, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice, sizeof(maxFramesPerSlice));
AUGraphConnectNodeInput(audioGraph, equalizerNode, 0, splitterNode, 0);
AUGraphConnectNodeInput(audioGraph, splitterNode, 0, convertNodeFromInterlevantToNonInterleavedLeft, 0);
AUGraphConnectNodeInput(audioGraph, splitterNode, 1, convertNodeFromInterlevantToNonInterleavedRight, 0);
AUGraphConnectNodeInput(audioGraph, convertNodeFromInterlevantToNonInterleavedLeft, 0, converterNodeFromNonInterleavedToMonoLeftChannel, 0);
AUGraphConnectNodeInput(audioGraph, convertNodeFromInterlevantToNonInterleavedRight, 0, converterNodeFromNonInterleavedToMonoRightChannel, 0);
AUGraphConnectNodeInput(audioGraph, converterNodeFromNonInterleavedToMonoLeftChannel, 0, mixerNode, 0);
AUGraphConnectNodeInput(audioGraph, converterNodeFromNonInterleavedToMonoRightChannel, 0, mixerNode, 1);
That's all. This is the full, working key part of the code.

Use a kAudioUnitType_FormatConverter AudioUnit to resample linear PCM data from FFmpeg

I'm trying to play audio with an AudioUnit on iOS. Because the default sample rate is 44100 Hz and I have several audio streams with different sample rates, such as 32000 or 48000 Hz, I tried to set preferredSampleRate and preferredIOBufferDuration on the AVAudioSession to match each stream.
But I found it difficult to pick a proper preferredIOBufferDuration for a given preferredSampleRate; it seems the buffer duration has to be chosen to match the sample rate, otherwise there is noise.
So now I'm trying to resample all of the audio streams to the default hardware sample rate (44100 Hz) with an AudioUnit of type kAudioUnitType_FormatConverter.
I use an AUGraph with a FormatConverter unit and a RemoteIO unit to do this. It seems I set kAudioUnitProperty_SampleRate on the output scope successfully (the property read back is indeed 44100), but there is still noise when the input stream is not 44100 Hz, while it sounds normal when the input stream is originally 44100 Hz. In other words, everything behaves exactly as if I had not used the FormatConverter at all and had streamed the data directly to the RemoteIO unit (44100 is OK, everything else is not).
I wonder where my problem is. Is it not resampling at all, or is the output data wrong? Does anyone have experience with the FormatConverter AudioUnit? Any help would be appreciated.
My AUGraph:
AUGraphConnectNodeInput(_processingGraph, converterNode, 0, remoteIONode, 0);
Converter unit (the input format is AV_SAMPLE_FMT_FLTP from FFmpeg):
UInt32 bytesPerFrame = bitsPerChannel / 8;
UInt32 bytesPerPacket = bytesPerFrame * 1;
AudioStreamBasicDescription streamDescription = {
.mSampleRate = spec->sample_rate,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = formatFlags,
.mChannelsPerFrame = spec->channels,
.mFramesPerPacket = 1,
.mBitsPerChannel = bitsPerChannel,
.mBytesPerFrame = bytesPerFrame,
.mBytesPerPacket = bytesPerPacket
};
status = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &streamDescription, sizeof(streamDescription));
if (status != noErr) {
    NSLog(@"AudioUnit: failed to set stream format (%d)", (int)status);
}
/* input callback */
AURenderCallbackStruct renderCallback;
renderCallback.inputProc = performRender;
renderCallback.inputProcRefCon = (__bridge void *)self;
AUGraphSetNodeInputCallback(_processingGraph, converterNode, 0, &renderCallback);
Converter unit output sample rate:
Float64 sampleRate = 44100.0;
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Output, 0, &sampleRate, sizeof(sampleRate));
I also tried:
AudioStreamBasicDescription outStreamDescription = {
.mSampleRate = 44100.0,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = formatFlags,
.mChannelsPerFrame = spec->channels,
.mFramesPerPacket = 1,
.mBitsPerChannel = bitsPerChannel,
.mBytesPerFrame = bytesPerFrame,
.mBytesPerPacket = bytesPerPacket
};
status = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &outStreamDescription, sizeof(outStreamDescription));
but it seemed to make no difference.
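For what it's worth, an AUConverter resamples according to the complete stream formats set on its input and output scopes, so setting kAudioUnitProperty_SampleRate alone on the output scope may not be enough if the rest of the output ASBD still describes the source. A minimal Swift sketch of that idea; converterUnit, the 48000 Hz source rate and the float, non-interleaved layout are illustrative assumptions, not values taken from the question:

// Sketch only: full input ASBD at the source rate, full output ASBD at 44100 Hz.
var inFormat = AudioStreamBasicDescription(
    mSampleRate: 48000,                       // source rate (assumption)
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved,
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 2,
    mBitsPerChannel: 32,
    mReserved: 0)
var outFormat = inFormat
outFormat.mSampleRate = 44100                 // hardware rate
let asbdSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &inFormat, asbdSize)
AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &outFormat, asbdSize)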

Write a character buffer in Bluetooth Low Energy

I am working on an application that supports Bluetooth Low Energy.
I will be communicating with a hardware device that exposes a characteristic. Let's assume that the characteristic is called "TestCharacteristic".
Depending on the scenario, I want to write different data to the same characteristic.
In my iOS app I will have two buttons.
I want to send one 20-byte character buffer when the first button is clicked, and a different 20-byte character buffer when the second button is clicked.
On button 1 click:
BluetoothGattCharacteristic charac = Service.getCharacteristic("TestCharacteristic");
if (charac == null) {
    Log.e(TAG, "char not found!");
    return false;
}
byte[] value = {(byte) 0xEF, 1, 1, 0, 0, 0, (byte) 0xEF, 0, 0, 0, 0, 0, (byte) 0xEF, 1, 2, 0, 0, 0, (byte) 0xEF, 0, 0, 0, 0, 0, (byte) 0xEF, 0, 0, 0, 0, 0};
charac.setValue(value);
boolean status = mBluetoothGatt.writeCharacteristic(charac);
On button 2 click:
BluetoothGattCharacteristic charac = Service.getCharacteristic("TestCharacteristic");
if (charac == null) {
    Log.e(TAG, "char not found!");
    return false;
}
byte[] value = {(byte) 0xDC, 0, 0, 0, 0, (byte) 0xDC, 0, 0, 0, 0, (byte) 0xDC, 0, 0, 0, 0, (byte) 0xDC, 0, 0, 0, 0};
charac.setValue(value);
boolean status = mBluetoothGatt.writeCharacteristic(charac);
Is the above operation possible?
Can I send different values to the same characteristic?
Thanks & Regards,
Phil
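Writing different values to the same characteristic is generally fine, as long as the characteristic is writable and each payload fits within its length limit. Since the question mentions an iOS app but the snippets use the Android BluetoothGatt API, here is a minimal CoreBluetooth sketch of the same idea; the peripheral, the characteristic and the payload bytes are assumptions for illustration only:

import CoreBluetooth

// Sketch only: write two different payloads to the same characteristic.
// The peripheral and characteristic are assumed to be already connected and discovered.
let firstPayload = Data([0xEF, 0x01, 0x01, 0x00, 0x00, 0x00])
let secondPayload = Data([0xDC, 0x00, 0x00, 0x00, 0x00])

func sendFirst(to peripheral: CBPeripheral, characteristic: CBCharacteristic) {
    peripheral.writeValue(firstPayload, for: characteristic, type: .withResponse)
}

func sendSecond(to peripheral: CBPeripheral, characteristic: CBCharacteristic) {
    peripheral.writeValue(secondPayload, for: characteristic, type: .withResponse)
}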

Create a silent audio CMSampleBufferRef

How do you create a silent audio CMSampleBufferRef in Swift? I am looking to append silent CMSampleBufferRefs to an instance of AVAssetWriterInput.
You don't say what format you want your zeros in (integer/floating point, mono/stereo, sample rate), but maybe it doesn't matter. Anyway, here's one way to create a silent CD-audio-style CMSampleBuffer in Swift.
func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
    let bytesPerFrame = UInt32(2 * numChannels)
    let blockSize = nFrames * Int(bytesPerFrame)
    var block: CMBlockBuffer?
    var status = CMBlockBufferCreateWithMemoryBlock(
        kCFAllocatorDefault,
        nil,
        blockSize, // blockLength
        nil, // blockAllocator
        nil, // customBlockSource
        0, // offsetToData
        blockSize, // dataLength
        0, // flags
        &block
    )
    assert(status == kCMBlockBufferNoErr)
    // we seem to get zeros from the above, but I can't find it documented. so... memset:
    status = CMBlockBufferFillDataBytes(0, block!, 0, blockSize)
    assert(status == kCMBlockBufferNoErr)
    var asbd = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
        mBytesPerPacket: bytesPerFrame,
        mFramesPerPacket: 1,
        mBytesPerFrame: bytesPerFrame,
        mChannelsPerFrame: numChannels,
        mBitsPerChannel: 16,
        mReserved: 0
    )
    var formatDesc: CMAudioFormatDescription?
    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, nil, 0, nil, nil, &formatDesc)
    assert(status == noErr)
    var sampleBuffer: CMSampleBuffer?
    // born ready
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
        kCFAllocatorDefault,
        block, // dataBuffer
        formatDesc!,
        nFrames, // numSamples
        CMTimeMake(startFrm, Int32(sampleRate)), // sbufPTS
        nil, // packetDescriptions
        &sampleBuffer
    )
    assert(status == noErr)
    return sampleBuffer
}
Doesn't it make you sorry you asked? Do you really need silent CMSampleBuffers? Can't you insert silence into an AVAssetWriterInput by moving the presentation time stamp forward?
Updated for Xcode 10.3 / Swift 5.0.1.
Don't forget to import CoreMedia.
import Foundation
import CoreMedia
class CMSampleBufferFactory {

    static func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
        let bytesPerFrame = UInt32(2 * numChannels)
        let blockSize = nFrames * Int(bytesPerFrame)
        var block: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil,
            blockLength: blockSize,
            blockAllocator: nil,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: blockSize,
            flags: 0,
            blockBufferOut: &block
        )
        assert(status == kCMBlockBufferNoErr)
        guard var eBlock = block else { return nil }
        // we seem to get zeros from the above, but I can't find it documented. so... memset:
        status = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: blockSize)
        assert(status == kCMBlockBufferNoErr)
        var asbd = AudioStreamBasicDescription(
            mSampleRate: sampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
            mBytesPerPacket: bytesPerFrame,
            mFramesPerPacket: 1,
            mBytesPerFrame: bytesPerFrame,
            mChannelsPerFrame: numChannels,
            mBitsPerChannel: 16,
            mReserved: 0
        )
        var formatDesc: CMAudioFormatDescription?
        status = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault, asbd: &asbd, layoutSize: 0, layout: nil, magicCookieSize: 0, magicCookie: nil, extensions: nil, formatDescriptionOut: &formatDesc)
        assert(status == noErr)
        var sampleBuffer: CMSampleBuffer?
        status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
            allocator: kCFAllocatorDefault,
            dataBuffer: eBlock,
            formatDescription: formatDesc!,
            sampleCount: nFrames,
            presentationTimeStamp: CMTimeMake(value: startFrm, timescale: Int32(sampleRate)),
            packetDescriptions: nil,
            sampleBufferOut: &sampleBuffer
        )
        assert(status == noErr)
        return sampleBuffer
    }
}
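For context, a hypothetical call site could look like this; writerInput, the 44.1 kHz stereo format and the one-second duration are assumptions, not something the question specifies:

import AVFoundation

// Sketch only: append one second of 44.1 kHz stereo silence to an AVAssetWriterInput.
if let silence = CMSampleBufferFactory.createSilentAudio(startFrm: 0,
                                                         nFrames: 44100,
                                                         sampleRate: 44100,
                                                         numChannels: 2),
   writerInput.isReadyForMoreMediaData {
    writerInput.append(silence)
}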
You need to create a block buffer using CMBlockBufferCreateWithMemoryBlock().
Fill the block buffer with a bunch of zeros and then pass it into CMAudioSampleBufferCreateWithPacketDescriptions().
Disclaimer: I haven't actually done this in Swift. I attempted it but found myself fighting the compiler at every turn, so I switched to Obj-C. The Core Media framework is a low-level C framework and was a lot easier to use without fighting Swift's type system. I know this isn't the answer you're looking for, but hopefully it will point you in the right direction.

AudioUnit kAudioUnitSubType_Reverb2 and kAudioUnitType_FormatConverter

I have this AUGraph configuration
AudioUnitGraph 0x2505000:
Member Nodes:
node 1: 'aufx' 'ipeq' 'appl', instance 0x15599530 O
node 2: 'aufx' 'rvb2' 'appl', instance 0x1566ffd0 O
node 3: 'aufc' 'conv' 'appl', instance 0x15676900 O
node 4: 'aumx' 'mcmx' 'appl', instance 0x15676a30 O
node 5: 'aumx' 'mcmx' 'appl', instance 0x15677ac0 O
node 6: 'aumx' 'mcmx' 'appl', instance 0x15678a40 O
node 7: 'auou' 'rioc' 'appl', instance 0x15679a20 O
node 8: 'augn' 'afpl' 'appl', instance 0x1558b710 O
Connections:
node 7 bus 1 => node 5 bus 0 [ 1 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
node 5 bus 0 => node 3 bus 0 [ 2 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
node 3 bus 0 => node 2 bus 0 [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 2 bus 0 => node 6 bus 0 [ 1 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
node 8 bus 0 => node 4 bus 0 [ 1 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
node 4 bus 0 => node 6 bus 1 [ 2 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
node 6 bus 0 => node 1 bus 0 [ 2 ch, 44100 Hz, 'lpcm' (0x00000C2C) 8.24-bit little-endian signed integer, deinterleaved]
node 1 bus 0 => node 7 bus 0 [ 2 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
CurrentState:
mLastUpdateError=0, eventsToProcess=F, isInitialized=F, isRunning=F
and I get this error:
AUGraphInitialize err = -10868
because I have connected a reverb unit between two mixer units, even though I have inserted a converter unit:
OSStatus err = noErr;
UInt32 micBus = 0;
UInt32 filePlayerBus = 1;
//// ionode:1 ----> vfxNode:0 bus 0
err = AUGraphConnectNodeInput(processingGraph, ioNode, 1, vfxNode, 0);
if (err) { NSLog(@"ioNode:1 ---> vfxNode:0 err = %ld", err); }
//// vfxNode:0 ---> convertNode:0
err = AUGraphConnectNodeInput(processingGraph, vfxNode, 0, convertNode, 0);
//// convertNode:0 ---> vfxRevNode:0
err = AUGraphConnectNodeInput(processingGraph, convertNode, 0, vfxRevNode, 0);
//// vfxRevNode:0 ---> mixerNode:0
err = AUGraphConnectNodeInput(processingGraph, vfxRevNode, 0, mixerNode, micBus );
//if (err) { NSLog(@"vfxRevNode:0 ---> mixerNode:0 err = %ld", err); }
//// vfxNode:0 ----> mixerNode:0
//err = AUGraphConnectNodeInput(processingGraph, vfxNode, 0, mixerNode, micBus );
if (err) { NSLog(@"vfxNode:0 ---> mixerNode:0 err = %ld", err); }
//// audioPlayerNode:0 ----> fxNode:0
err = AUGraphConnectNodeInput(processingGraph, audioPlayerNode, 0, fxNode, 0);
if (err) { NSLog(@"audioPlayerNode:0 ---> fxNode:0 err = %ld", err); }
//// fxNode:0 ----> mixerNode:1
err = AUGraphConnectNodeInput(processingGraph, fxNode, 0, mixerNode, filePlayerBus);
if (err) { NSLog(@"fxNode:0 ---> mixerNode:1 err = %ld", err); }
///// mixerNode:0 ----> eqNode:0
err = AUGraphConnectNodeInput(processingGraph, mixerNode, 0, eqNode, 0);
if (err) { NSLog(@"mixerNode:0 ---> eqNode:0 err = %ld", err); }
//// eqNode:0 ----> ioNode:0
err = AUGraphConnectNodeInput(processingGraph, eqNode, 0, ioNode, 0);
if (err) { NSLog(@"eqNode:0 ---> ioNode:0 err = %ld", err); }
Here are the nodes:
////
//// EQ NODE
////
err = AUGraphAddNode(processingGraph, &EQUnitDescription, &eqNode);
if (err) { NSLog(@"eqNode err = %ld", err); }
////
//// REV NODE
////
err = AUGraphAddNode(processingGraph, &ReverbUnitDescription, &vfxRevNode);
if (err) { NSLog(@"vfxRevNode err = %ld", err); }
////
//// FORMAT CONVERTER NODE
////
err = AUGraphAddNode (processingGraph, &convertUnitDescription, &convertNode);
if (err) { NSLog(@"convertNode err = %ld", err); }
////
//// FX NODE
////
err = AUGraphAddNode(processingGraph, &FXUnitDescription, &fxNode);
if (err) { NSLog(@"fxNode err = %ld", err); }
////
//// VFX NODE
////
err = AUGraphAddNode(processingGraph, &VFXUnitDescription, &vfxNode);
if (err) { NSLog(@"vfxNode err = %ld", err); }
///
/// MIXER NODE
///
err = AUGraphAddNode (processingGraph, &MixerUnitDescription, &mixerNode );
if (err) { NSLog(@"mixerNode err = %ld", err); }
///
/// OUTPUT NODE
///
err = AUGraphAddNode(processingGraph, &iOUnitDescription, &ioNode);
if (err) { NSLog(@"outputNode err = %ld", err); }
////
/// PLAYER NODE
///
err = AUGraphAddNode(processingGraph, &playerUnitDescription, &audioPlayerNode);
if (err) { NSLog(@"audioPlayerNode err = %ld", err); }
and the component descriptions:
OSStatus err = noErr;
err = NewAUGraph(&processingGraph);
// OUTPUT unit
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;//kAudioUnitSubType_VoiceProcessingIO;//kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;
// MIXER unit
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;
// PLAYER unit
AudioComponentDescription playerUnitDescription;
playerUnitDescription.componentType = kAudioUnitType_Generator;
playerUnitDescription.componentSubType = kAudioUnitSubType_AudioFilePlayer;
playerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
// EQ unit
AudioComponentDescription EQUnitDescription;
EQUnitDescription.componentType = kAudioUnitType_Effect;
EQUnitDescription.componentSubType = kAudioUnitSubType_AUiPodEQ;
EQUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
EQUnitDescription.componentFlags = 0;
EQUnitDescription.componentFlagsMask = 0;
// Reverb unit
AudioComponentDescription ReverbUnitDescription;
ReverbUnitDescription.componentType = kAudioUnitType_Effect;
ReverbUnitDescription.componentSubType = kAudioUnitSubType_Reverb2;
ReverbUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ReverbUnitDescription.componentFlags = 0;
ReverbUnitDescription.componentFlagsMask = 0;
// Format Converter between VFX and Reverb units
AudioComponentDescription convertUnitDescription;
convertUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
convertUnitDescription.componentType = kAudioUnitType_FormatConverter;
convertUnitDescription.componentSubType = kAudioUnitSubType_AUConverter;
convertUnitDescription.componentFlags = 0;
convertUnitDescription.componentFlagsMask = 0;
// FX unit
AudioComponentDescription FXUnitDescription;
FXUnitDescription.componentType = kAudioUnitType_Mixer;
FXUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
FXUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
FXUnitDescription.componentFlags = 0;
FXUnitDescription.componentFlagsMask = 0;
// VFX unit
AudioComponentDescription VFXUnitDescription;
VFXUnitDescription.componentType = kAudioUnitType_Mixer;
VFXUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
VFXUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
VFXUnitDescription.componentFlags = 0;
VFXUnitDescription.componentFlagsMask = 0;
I set up the converter node so that its input matches the stream format of the upstream node (the mixer unit) and its output matches the stream format of the downstream node (the reverb):
OSStatus err = noErr;
err = AUGraphNodeInfo(processingGraph, convertNode, NULL, &convertUnit);
if (err) { NSLog(@"setupConverterUnit error = %ld", err); }
// set converter input format to vfxunit format
AudioStreamBasicDescription asbd = {0};
size_t bytesPerSample;
bytesPerSample = sizeof(SInt16);
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel = 8 * bytesPerSample;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
asbd.mBytesPerPacket = bytesPerSample * asbd.mFramesPerPacket;
asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mSampleRate = sampleRate;
err = AudioUnitSetProperty(convertUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &asbd, sizeof(asbd));
if (err) { NSLog(@"setupConverterUnit kAudioUnitProperty_StreamFormat error = %ld", err); }
// set converter output format to reverb format
UInt32 streamFormatSize = sizeof(monoStreamFormat);
err = AudioUnitSetProperty(convertUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &monoStreamFormat, streamFormatSize);
if (err) { NSLog(@"setupConverterUnit kAudioUnitProperty_StreamFormat error = %ld", err); }
The reverb unit is configured as follows:
OSStatus err = noErr;
err = AUGraphNodeInfo(processingGraph, vfxRevNode, NULL, &vfxRevUnit);
if (err) { NSLog(@"setupReverbUnit err = %ld", err); }
UInt32 size = sizeof(mReverbPresetArray);
err = AudioUnitGetProperty(vfxRevUnit, kAudioUnitProperty_FactoryPresets, kAudioUnitScope_Global, 0, &mReverbPresetArray, &size);
if (err) { NSLog(@"kAudioUnitProperty_FactoryPresets err = %ld", err); }
printf("setupReverbUnit Preset List:\n");
UInt8 count = CFArrayGetCount(mReverbPresetArray);
for (int i = 0; i < count; ++i) {
AUPreset *aPreset = (AUPreset*)CFArrayGetValueAtIndex(mReverbPresetArray, i);
CFShow(aPreset->presetName);
}
and the input mixer unit as follows:
OSStatus err;
err = AUGraphNodeInfo(processingGraph, vfxNode, NULL, &vfxUnit);
if (err) { NSLog(@"setVFxUnit err = %ld", err); }
UInt32 busCount = 1;
err = AudioUnitSetProperty (
vfxUnit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&busCount,
sizeof (busCount)
);
AudioStreamBasicDescription asbd = {0};
size_t bytesPerSample;
bytesPerSample = sizeof(SInt16);
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel = 8 * bytesPerSample;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;
asbd.mBytesPerPacket = bytesPerSample * asbd.mFramesPerPacket;
asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mSampleRate = sampleRate;
err = AudioUnitSetProperty (
vfxUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&asbd,
sizeof (asbd)
);
So far I have found no way to make it work. I need the reverb unit to be there, since I want it applied to the mic input only, before the signal enters the next mixer units.
Where am I going wrong with the AudioStreamBasicDescription on the converter?
EDIT
What happens is that there is no sound. The audio graph fails to initialize and returns the error
AUGraphInitialize err = -10868
The graph described here can be depicted this way:
the mic input is connected to vfxNode to get the render callback; that node is connected to a mixer on bus 0. A song player node is connected to another mixer, where a second render callback processes the audio during playback; that mixer is connected to bus 1 of the last mixer in the chain. An EQ node connects that last mixer to the output ioNode.
When a converter node is inserted between the fx mixer (vfxNode) and the reverb, there is no sound anymore:
//// vfxNode:0 ---> convertNode:0
err = AUGraphConnectNodeInput(processingGraph, vfxNode, 0, convertNode, 0);
//// convertNode:0 ---> vfxRevNode:0
err = AUGraphConnectNodeInput(processingGraph, convertNode, 0, vfxRevNode, 0);
//// vfxRevNode:0 ---> mixerNode:0
err = AUGraphConnectNodeInput(processingGraph, vfxRevNode, 0, mixerNode, micBus );
//if (err) { NSLog(@"vfxRevNode:0 ---> mixerNode:0 err = %ld", err); }
Without the reverb node, and therefore without the converter node, everything works properly:
//// ionode:1 ----> vfxNode:0 bus 0
err = AUGraphConnectNodeInput(processingGraph, ioNode, 1, vfxNode, 0);
if (err) { NSLog(@"ioNode:1 ---> vfxNode:0 err = %ld", err); }
//// vfxNode:0 ----> mixerNode:0
err = AUGraphConnectNodeInput(processingGraph, vfxNode, 0, mixerNode, micBus );
if (err) { NSLog(@"vfxNode:0 ---> mixerNode:0 err = %ld", err); }
You haven't said what the problem is (error, no sound, etc.), but I'm willing to take a guess. Those effect units prefer to be floating point PCM, and will generally refuse to be connected to anything that isn't floating point (the non-effect units will generally default to ints or fixed point). Are you getting -50 (paramErr) when you try to start the graph?
What I've found I have to do to use effect units is to read the ASBD the effect unit defaults to (on either the input or the output scope) and then set that format throughout the graph (the other units are generally willing to accept floating-point ASBDs). Note that this is going to be about 100 lines of boilerplate where you get the unit from each node, get the ASBD from the effect's input or output scope (I don't think it matters which), and then set that as the format received or produced by every unit in your graph. Good luck.
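A rough Swift sketch of that approach, reusing the names from the question (vfxRevUnit, convertUnit, mixerUnit) and assuming the units were already fetched with AUGraphNodeInfo: read the ASBD the reverb defaults to on its input scope, then push that same format onto the units that connect to it.

// Sketch only: propagate the effect unit's preferred (float) format to its neighbours.
var effectFormat = AudioStreamBasicDescription()
var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
AudioUnitGetProperty(vfxRevUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &effectFormat, &size)
// The converter should produce what the reverb expects...
AudioUnitSetProperty(convertUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &effectFormat, size)
// ...and the next mixer should accept what the reverb produces.
AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Input, 0, &effectFormat, size)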
