How do you create a silent audio CMSampleBufferRef in Swift? I am looking to append silent CMSampleBufferRefs to an instance of AVAssetWriterInput.
You don't say what format you want your zeros in (integer/floating point, mono/stereo, sample rate), but maybe it doesn't matter. Anyway, here's one way to create a silent, CD-audio-style CMSampleBuffer in Swift.
func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
    let bytesPerFrame = UInt32(2 * numChannels)
    let blockSize = nFrames * Int(bytesPerFrame)

    var block: CMBlockBuffer?
    var status = CMBlockBufferCreateWithMemoryBlock(
        kCFAllocatorDefault,
        nil,
        blockSize,  // blockLength
        nil,        // blockAllocator
        nil,        // customBlockSource
        0,          // offsetToData
        blockSize,  // dataLength
        0,          // flags
        &block
    )
    assert(status == kCMBlockBufferNoErr)

    // we seem to get zeros from the above, but I can't find it documented. so... memset:
    status = CMBlockBufferFillDataBytes(0, block!, 0, blockSize)
    assert(status == kCMBlockBufferNoErr)

    var asbd = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
        mBytesPerPacket: bytesPerFrame,
        mFramesPerPacket: 1,
        mBytesPerFrame: bytesPerFrame,
        mChannelsPerFrame: numChannels,
        mBitsPerChannel: 16,
        mReserved: 0
    )

    var formatDesc: CMAudioFormatDescription?
    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, nil, 0, nil, nil, &formatDesc)
    assert(status == noErr)

    var sampleBuffer: CMSampleBuffer?

    // born ready
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
        kCFAllocatorDefault,
        block,        // dataBuffer
        formatDesc!,
        nFrames,      // numSamples
        CMTimeMake(startFrm, Int32(sampleRate)),  // sbufPTS
        nil,          // packetDescriptions
        &sampleBuffer
    )
    assert(status == noErr)

    return sampleBuffer
}
Doesn't it make you sorry you asked? Do you really need silent CMSampleBuffers? Can't you insert silence into an AVAssetWriterInput by moving the presentation time stamp forward?
Updated for Xcode 10.3, Swift 5.0.1.
Don't forget to import CoreMedia.
import Foundation
import CoreMedia
class CMSampleBufferFactory
{
    static func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
        let bytesPerFrame = UInt32(2 * numChannels)
        let blockSize = nFrames * Int(bytesPerFrame)

        var block: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil,
            blockLength: blockSize,
            blockAllocator: nil,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: blockSize,
            flags: 0,
            blockBufferOut: &block
        )
        assert(status == kCMBlockBufferNoErr)

        guard var eBlock = block else { return nil }

        // we seem to get zeros from the above, but I can't find it documented. so... memset:
        status = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: blockSize)
        assert(status == kCMBlockBufferNoErr)

        var asbd = AudioStreamBasicDescription(
            mSampleRate: sampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
            mBytesPerPacket: bytesPerFrame,
            mFramesPerPacket: 1,
            mBytesPerFrame: bytesPerFrame,
            mChannelsPerFrame: numChannels,
            mBitsPerChannel: 16,
            mReserved: 0
        )

        var formatDesc: CMAudioFormatDescription?
        status = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault, asbd: &asbd, layoutSize: 0, layout: nil, magicCookieSize: 0, magicCookie: nil, extensions: nil, formatDescriptionOut: &formatDesc)
        assert(status == noErr)

        var sampleBuffer: CMSampleBuffer?
        status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
            allocator: kCFAllocatorDefault,
            dataBuffer: eBlock,
            formatDescription: formatDesc!,
            sampleCount: nFrames,
            presentationTimeStamp: CMTimeMake(value: startFrm, timescale: Int32(sampleRate)),
            packetDescriptions: nil,
            sampleBufferOut: &sampleBuffer
        )
        assert(status == noErr)

        return sampleBuffer
    }
}
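As a minimal usage sketch, assuming you already have a configured AVAssetWriterInput (called audioInput here) that expects 44.1 kHz, stereo, 16-bit PCM:
import AVFoundation

// Hypothetical helper: append one second of silence starting at a given frame.
func appendOneSecondOfSilence(to audioInput: AVAssetWriterInput, startingAtFrame startFrame: Int64) {
    let sampleRate: Float64 = 44100
    guard let silence = CMSampleBufferFactory.createSilentAudio(startFrm: startFrame,
                                                                nFrames: Int(sampleRate),
                                                                sampleRate: sampleRate,
                                                                numChannels: 2),
          audioInput.isReadyForMoreMediaData else { return }
    audioInput.append(silence)
}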
You need to create a block buffer using CMBlockBufferCreateWithMemoryBlock().
Fill the block buffer with a bunch of zeros and then pass it into CMAudioSampleBufferCreateWithPacketDescriptions().
Disclaimer: I haven't actually done this in Swift. I attempted it but found myself fighting the compiler at every turn, so I switched to Obj-C. The Core Media framework is a low-level C framework and was a lot easier to use without wrestling with Swift's type system. I know this isn't the answer you're looking for, but hopefully it will point you in the right direction.
My app generates .ply files after scanning objects. After the iOS 14 update the color of my 3D models no longer loads correctly. I am also unable to view .ply files in Xcode (they work fine in Preview).
Does anyone know a workaround for this problem?
I tried reading the .ply file contents and displaying the vertices and faces as scene geometry, but it takes too long to load a file.
Apparently creating an MDLAsset throws some Metal warnings and the mesh color does not show up properly.
Here are sample images from the iOS 13 and iOS 14 previews in SceneKit.
I had the same problem. I found it is a SceneKit bug, and my workaround is to read the .ply file with C and create an SCNGeometry instance from the data. Main code:
1. First we need to read vertexCount and faceCount from the .ply file (my file is in ASCII format):
bool readFaceAndVertexCount(char* filePath, int *vertexCount, int *faceCount);
example:
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

bool readFaceAndVertexCount(char *filePath, int *vertexCount, int *faceCount) {
    char data[1024];
    FILE *fp;
    if ((fp = fopen(filePath, "r")) == NULL) {
        printf("error!");
        return false;
    }
    while (!feof(fp)) {
        if (fgets(data, sizeof(data), fp) == NULL) { // read at most sizeof(data) bytes per line
            break;
        }
        data[strcspn(data, "\n")] = '\0'; // strip the trailing newline
        if (strstr(data, "element vertex") != NULL) {
            char *res = strtok(data, " ");
            while (res != NULL) {
                res = strtok(NULL, " ");
                if (res != NULL) {
                    *vertexCount = atoi(res); // last token on the line is the count
                }
            }
        }
        if (strstr(data, "element face") != NULL) {
            char *res = strtok(data, " ");
            while (res != NULL) {
                res = strtok(NULL, " ");
                if (res != NULL) {
                    *faceCount = atoi(res);
                }
            }
        }
        if (*faceCount > 0 && *vertexCount > 0) {
            break;
        }
    }
    fclose(fp);
    return true;
}
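Called from Swift through the bridging header, usage might look like the following hedged sketch (the URL-to-pointer cast mirrors the one used in step 2 below):
var vertexCount: Int32 = 0
var faceCount: Int32 = 0
// Hypothetical call via the bridging header; the counts come back by reference.
if readFaceAndVertexCount(UnsafeMutablePointer<Int8>(mutating: url.path), &vertexCount, &faceCount) {
    print("vertices: \(vertexCount), faces: \(faceCount)")
}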
2. Read the data into arrays.
In the .c file:
// you need to implement this to match your own files
bool readPlyFile(char* filePath, const int vertexCount, int faceCount, float *vertex, float *color, int *element)
In Swift:
var vertex: [Float] = Array.init(repeating: 0, count: Int(vertexCount) * 3)
var color: [Float] = Array.init(repeating: 0, count: Int(vertexCount) * 3)
var face: [Int32] = Array.init(repeating: 0, count: Int(faceCount) * 3)
readPlyFile(UnsafeMutablePointer<Int8>(mutating: url.path),vertexCount,faceCount,&vertex,&color,&face)
3. Create a custom SCNGeometry:
let positionData = NSData.init(bytes: vertex, length: MemoryLayout<Float>.size * vertex.count)
let vertexSource = SCNGeometrySource.init(data: positionData as Data, semantic: .vertex, vectorCount: Int(vertexCount), usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<Float>.size * 3)
let colorData = NSData.init(bytes: color, length: MemoryLayout<Float>.size * color.count)
let colorSource = SCNGeometrySource.init(data: colorData as Data, semantic: .color, vectorCount: Int(vertexCount), usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<Float>.size * 3)
let indexData = NSData(bytes: face, length: MemoryLayout<Int32>.size * face.count)
let element = SCNGeometryElement(data: indexData as Data, primitiveType: SCNGeometryPrimitiveType.triangles, primitiveCount: Int(faceCount), bytesPerIndex: MemoryLayout<Int32>.size)
let geometry = SCNGeometry.init(sources: [vertexSource, colorSource], elements: [element])
let node = SCNNode.init(geometry: geometry)
let scene = SCNScene.init()
node.geometry?.firstMaterial?.cullMode = .back
node.geometry?.firstMaterial?.isDoubleSided = true
scene.rootNode.addChildNode(node)
scnView.scene = scene
It works, and it's faster!
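If you would rather avoid the C bridge entirely, here is a hypothetical all-Swift sketch of the readPlyFile step for a simple ASCII .ply whose vertex lines look like "x y z r g b" (colors 0-255) followed by face lines "3 i0 i1 i2"; adjust it to your own property layout. The resulting arrays can be fed to the same SCNGeometrySource/SCNGeometryElement code as in step 3.
import Foundation

// Hypothetical all-Swift stand-in for readPlyFile(), assuming an ASCII .ply whose
// vertex lines are "x y z r g b" (colors 0-255) and whose face lines are "3 i0 i1 i2".
func readAsciiPly(at url: URL, vertexCount: Int, faceCount: Int) -> (vertex: [Float], color: [Float], face: [Int32])? {
    guard let text = try? String(contentsOf: url, encoding: .utf8) else { return nil }
    let lines = text.components(separatedBy: .newlines)
    guard let headerEnd = lines.firstIndex(of: "end_header") else { return nil }

    var vertex: [Float] = []
    var color: [Float] = []
    var face: [Int32] = []

    for line in lines[(headerEnd + 1)...] where !line.isEmpty {
        let parts = line.split(separator: " ").map { String($0) }
        if vertex.count < vertexCount * 3, parts.count >= 6 {
            vertex += parts[0..<3].compactMap { Float($0) }
            color += parts[3..<6].compactMap { Float($0) }.map { $0 / 255.0 }
        } else if face.count < faceCount * 3, parts.count >= 4 {
            face += parts[1...3].compactMap { Int32($0) }
        }
    }
    return (vertex, color, face)
}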
I need to convert a CMSampleBuffer to Data. I am using a third-party framework for an audio-related task, and that framework gives me streaming (i.e. real-time) audio in a CMSampleBuffer object.
Like this:
func didAudioStreaming(audioSample: CMSampleBuffer!) {
//Here I need to convert this to Data.
//Because I am using the gRPC framework for audio recognition.
}
Please provide the steps to convert the CMSampleBuffer to Data.
FYI
let formatDesc:CMFormatDescription? = CMSampleBufferGetFormatDescription(audioSample)
<CMAudioFormatDescription 0x17010d890 [0x1b453ebb8]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 16000.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
FormatList Array: {(null)}
}
extensions: {(null)}
}
Try the code below to convert a CMSampleBuffer to NSData (this first snippet applies when the sample buffer carries a pixel/image buffer):
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
let height = CVPixelBufferGetHeight(imageBuffer!)
let src_buff = CVPixelBufferGetBaseAddress(imageBuffer!)
let data = NSData(bytes: src_buff, length: bytesPerRow * height)
CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
EDIT:
For an audio sample buffer, use the code below:
var audioBufferList = AudioBufferList()
var data = Data()
var blockBuffer : CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, nil, &audioBufferList, MemoryLayout<AudioBufferList>.size, nil, nil, 0, &blockBuffer)
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for audioBuffer in buffers {
let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
data.append(frame!, count: Int(audioBuffer.mDataByteSize))
}
Using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer requires calling CFRelease(blockBuffer) at some point, because the block buffer is retained; if it is never released, the pool of buffers eventually runs dry and no new CMSampleBuffers are generated.
I'd suggest getting the data directly using the following:
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t lengthAtOffset;
size_t totalLength;
char *data;
CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset, &totalLength, &data);
NSData *audioData = [NSData dataWithBytes:data length:totalLength];
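Since the question is in Swift, here is a rough Swift equivalent of the same idea (a hedged sketch using the renamed Core Media APIs, assuming the block buffer holds contiguous LPCM bytes):
import CoreMedia
import Foundation

// Hedged sketch: copy the sample buffer's block buffer straight into a Data value.
func audioData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }
    let length = CMBlockBufferGetDataLength(blockBuffer)
    guard length > 0 else { return Data() }
    var data = Data(count: length)
    let status = data.withUnsafeMutableBytes { raw -> OSStatus in
        CMBlockBufferCopyDataBytes(blockBuffer, atOffset: 0, dataLength: length, destination: raw.baseAddress!)
    }
    return status == kCMBlockBufferNoErr ? data : nil
}
In Swift the returned CMBlockBuffer is memory-managed for you, so the CFRelease bookkeeping mentioned above is not needed.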
I'm trying to use the dynamics processor and a bunch of filters to compress a specific frequency band inside the Spotify method connectOutputBus, but when I mix the nodes in the kAudioUnitSubType_MultiChannelMixer only the sound of the first added node comes out.
Note: I actually use filters on sourceNodeCopy to remove the frequencies that will be compressed on sourceNode, but to keep things short I omitted them.
Here's the code:
override func connectOutputBus(_ sourceOutputBusNumber: UInt32, ofNode sourceNode: AUNode, toInputBus destinationInputBusNumber: UInt32, ofNode destinationNode: AUNode, in graph: AUGraph!) throws {
let sourceNodeCopy = sourceNode //original node without the harsh freq
//create a filter for the harsh frequencies
var filterDescription = AudioComponentDescription()
filterDescription.componentType = kAudioUnitType_Effect
filterDescription.componentSubType = kAudioUnitSubType_BandPassFilter
filterDescription.componentManufacturer = kAudioUnitManufacturer_Apple
filterDescription.componentFlags = 0
filterDescription.componentFlagsMask = 0
AUGraphAddNode(graph, &filterDescription, &filterNode!) // Add the filter node
AUGraphNodeInfo(graph, filterNode!, nil, &filterUnit!) // Get the Audio Unit from the node
AudioUnitInitialize(filterUnit!) // Initialize the audio unit
// Set filter params
AudioUnitSetParameter(filterUnit!, kBandpassParam_CenterFrequency, kAudioUnitScope_Global, 0, 10038, 0)
//create a processor to compress the frequency
var dynamicProcessorDescription = AudioComponentDescription()
dynamicProcessorDescription.componentType = kAudioUnitType_Effect
dynamicProcessorDescription.componentSubType = kAudioUnitSubType_DynamicsProcessor
dynamicProcessorDescription.componentManufacturer = kAudioUnitManufacturer_Apple
dynamicProcessorDescription.componentFlags = 0
dynamicProcessorDescription.componentFlagsMask = 0
// Add the dynamic processor node
AUGraphAddNode(graph, &dynamicProcessorDescription, &dynamicProcessorNode)
AUGraphNodeInfo(graph, dynamicProcessorNode, nil, &dynamicProcessorUnit)
AudioUnitInitialize(dynamicProcessorUnit!)
// Set compressor params
AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_Threshold, kAudioUnitScope_Global, 0, -35, 0)
AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_AttackTime, kAudioUnitScope_Global, 0, 0.02, 0)
AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_ReleaseTime, kAudioUnitScope_Global, 0, 0.04, 0)
AudioUnitSetParameter(dynamicProcessorUnit!, kDynamicsProcessorParam_HeadRoom, kAudioUnitScope_Global, 0, 0, 0)
//mixer
var mixerDescription = AudioComponentDescription()
mixerDescription.componentType = kAudioUnitType_Mixer
mixerDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer
mixerDescription.componentManufacturer = kAudioUnitManufacturer_Apple
mixerDescription.componentFlags = 0
mixerDescription.componentFlagsMask = 0
AUGraphAddNode(graph, &mixerDescription, &audioEffectsController.mixerNode)
AUGraphNodeInfo(graph, audioEffectsController.mixerNode, nil,
&audioEffectsController.mixerUnit)
AudioUnitInitialize(audioEffectsController.mixerUnit!)
AudioUnitSetParameter(mixerUnit!, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 0, 1.0, 0);
AudioUnitSetParameter(mixerUnit!, kMultiChannelMixerParam_Volume, kAudioUnitScope_Output, 0, 1.0, 0);
//connect the nodes
AUGraphConnectNodeInput(graph, sourceNode, sourceOutputBusNumber, filterNode, 0)
AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, dynamicProcessorNode, 0)
AUGraphConnectNodeInput(graph, sourceNodeCopy, sourceOutputBusNumber, mixerNode, 0)
AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, mixerNode, 0)
//connect the mixer to the output
AUGraphConnectNodeInput(graph, audioEffectsController.mixerNode, 0, destinationNode, destinationInputBusNumber)
}
In your code you connect the nodes like this:
AUGraphConnectNodeInput(graph, sourceNodeCopy, sourceOutputBusNumber, mixerNode, 0)
AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, mixerNode, 0)
If you do this, you will connect sourceNodeCopy and filterNode to the same input bus (0) of the mixer node, but only one of them can be connected...
You should try this instead
AUGraphConnectNodeInput(graph, sourceNodeCopy, sourceOutputBusNumber, mixerNode, 0)
AUGraphConnectNodeInput(graph, filterNode, sourceOutputBusNumber, mixerNode, 1)
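As a side note, you may also want to make sure the mixer exposes two input buses and that the second bus is not muted. A hedged sketch, assuming mixerUnit is the mixer's AudioUnit obtained earlier and that this runs before the graph is initialized:
// Hedged sketch: give the multichannel mixer two input buses and unmute bus 1.
var busCount: UInt32 = 2
AudioUnitSetProperty(mixerUnit!,
                     kAudioUnitProperty_ElementCount,
                     kAudioUnitScope_Input,
                     0,
                     &busCount,
                     UInt32(MemoryLayout<UInt32>.size))
AudioUnitSetParameter(mixerUnit!, kMultiChannelMixerParam_Volume, kAudioUnitScope_Input, 1, 1.0, 0)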
I am developing an app that records voice via the built-in microphone and sends it to a server live. So I need to get the byte stream from the microphone while recording.
After googling and stack-overflowing for quite a while, I think I figured out how it should work, but it does not. I think using Audio Queues might be the way to go.
Here is what I tried so far:
func test() {
func callback(_ a :UnsafeMutableRawPointer?, _ b : AudioQueueRef, _ c :AudioQueueBufferRef, _ d :UnsafePointer<AudioTimeStamp>, _ e :UInt32, _ f :UnsafePointer<AudioStreamPacketDescription>?) {
print("test")
}
var inputQueue: AudioQueueRef? = nil
var aqData = AQRecorderState(
mDataFormat: AudioStreamBasicDescription(
mSampleRate: 16000,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: 0,
mBytesPerPacket: 2,
mFramesPerPacket: 1, // Must be set to 1 for uncompressed formats
mBytesPerFrame: 2,
mChannelsPerFrame: 1, // Mono recording
mBitsPerChannel: 2 * 8, // 2 Bytes
mReserved: 0), // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
mQueue: inputQueue!,
mBuffers: [AudioQueueBufferRef](),
bufferByteSize: 32,
mCurrentPacket: 0,
mIsRunning: true)
var error = AudioQueueNewInput(&aqData.mDataFormat,
callback,
nil,
nil,
nil,
0,
&inputQueue)
AudioQueueStart(inputQueue!, nil)
}
It compiles and the app starts, but as soon as I call test() I get an exception:
fatal error: unexpectedly found nil while unwrapping an Optional value
The exception is caused by
mQueue: inputQueue!
I understand why this happens (inputQueue has no value) but I don't know how to initialise inputQueue correctly. The problem is that Audio Queues are very poorly documented for Swift users and I didn't find any working example on the internet.
Can anybody tell me what I am doing wrong?
Use AudioQueueNewInput(...) (or output) to initialize your audio queue before using it:
let sampleRate = 16000
let numChannels = 2
var inFormat = AudioStreamBasicDescription(
mSampleRate: Double(sampleRate),
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
mBytesPerPacket: UInt32(numChannels * MemoryLayout<UInt32>.size),
mFramesPerPacket: 1,
mBytesPerFrame: UInt32(numChannels * MemoryLayout<UInt32>.size),
mChannelsPerFrame: UInt32(numChannels),
mBitsPerChannel: UInt32(8 * (MemoryLayout<UInt32>.size)),
mReserved: UInt32(0)
)
var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)
var aqData = AQRecorderState(
mDataFormat: inFormat,
mQueue: inQueue!, // inQueue is initialized now and can be unwrapped
mBuffers: [AudioQueueBufferRef](),
bufferByteSize: 32,
mCurrentPacket: 0,
mIsRunning: true)
Find details in Apple's documentation.
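The callback passed to AudioQueueNewInput above is not shown; here is a hedged Swift sketch of the remaining plumbing. It reuses inFormat from the snippet above and repeats the queue creation so it is self-contained; the 4096-byte buffer size is an assumption.
import AudioToolbox
import Foundation

// Hedged sketch: a capture callback that wraps the recorded bytes in Data and
// re-enqueues the buffer, followed by buffer allocation and queue start.
let callback: AudioQueueInputCallback = { _, queue, buffer, _, _, _ in
    let audioBuffer = buffer.pointee
    // Wrap the captured bytes; hand `data` to your networking code (hypothetical).
    let data = Data(bytes: audioBuffer.mAudioData, count: Int(audioBuffer.mAudioDataByteSize))
    _ = data
    // Hand the buffer back to the queue so it can be filled again.
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

let bufferByteSize: UInt32 = 4096  // assumed size; tune for your latency needs
for _ in 0..<3 {
    var buffer: AudioQueueBufferRef? = nil
    AudioQueueAllocateBuffer(inQueue!, bufferByteSize, &buffer)
    if let buffer = buffer {
        AudioQueueEnqueueBuffer(inQueue!, buffer, 0, nil)
    }
}
AudioQueueStart(inQueue!, nil)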
This code from our project works fine:
AudioBuffer * buff;
AudioQueueRef queue;
AudioStreamBasicDescription fmt = { 0 };
static void HandleInputBuffer (
void *aqData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc
) {
}
- (void) initialize {
thisClass = self;
__block struct AQRecorderState aqData;
NSError * error;
fmt.mFormatID = kAudioFormatLinearPCM;
fmt.mSampleRate = 44100.0;
fmt.mChannelsPerFrame = 1;
fmt.mBitsPerChannel = 16;
fmt.mChannelsPerFrame = 1;
fmt.mFramesPerPacket = 1;
fmt.mBytesPerFrame = sizeof (SInt16);
fmt.mBytesPerPacket = sizeof (SInt16);
fmt.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
OSStatus status = AudioQueueNewInput ( // 1
&fmt, // 2
HandleInputBuffer, // 3
&aqData, // 4
NULL, // 5
kCFRunLoopCommonModes, // 6
0, // 7
&queue // 8
);
AudioQueueBufferRef buffers[kNumberBuffers];
UInt32 bufferByteSize = kSamplesSize;
for (int i = 0; i < kNumberBuffers; ++i) { // 1
OSStatus allocateStatus;
allocateStatus = AudioQueueAllocateBuffer ( // 2
queue, // 3
bufferByteSize, // 4
&buffers[i] // 5
);
OSStatus enqueStatus;
NSLog(@"allocateStatus = %d", allocateStatus);
enqueStatus = AudioQueueEnqueueBuffer ( // 6
queue, // 7
buffers[i], // 8
0, // 9
NULL // 10
);
NSLog(@"enqueStatus = %d", enqueStatus);
}
AudioQueueStart ( // 3
queue, // 4
NULL // 5
);
}
I'm trying to convert the following code to Swift:
CMSampleBufferRef sampleBuffer = [assetOutput copyNextSampleBuffer];
CMBlockBufferRef blockBuffer;
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(AudioBufferList), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
for (NSUInteger i = 0; i < audioBufferList.mNumberBuffers; i++) {
AudioBuffer audioBuffer = audioBufferList.mBuffers[i];
[audioStream writeData:audioBuffer.mData maxLength:audioBuffer.mDataByteSize];
}
CFRelease(blockBuffer);
CFRelease(sampleBuffer);
I seem to be unable to iterate over the audioBuffer list no matter what I try. Does anyone have an answer?
Code converted to Swift 3:
var sampleBuffer: CMSampleBuffer? = assetOutput.copyNextSampleBuffer()
let audioStream = OutputStream(toMemory: ()) // placeholder stream; use your own output stream
var blockBuffer: CMBlockBuffer?
var audioBufferList = AudioBufferList()
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer!, nil, &audioBufferList, MemoryLayout<AudioBufferList>.size, nil, nil, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer)
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for audioBuffer in buffers {
let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
audioStream.write(frame!, maxLength: Int(audioBuffer.mDataByteSize))
}