I am developing an app that records voice via the built-in microphone and sends it to a server live. So I need to get the byte stream from the microphone while recording.
After googling and stack-overflowing for quite a while, I think I figured out how it should work, but it does not. I think using Audio Queues might be the way to go.
Here is what I tried so far:
func test() {
    func callback(_ a: UnsafeMutableRawPointer?, _ b: AudioQueueRef, _ c: AudioQueueBufferRef, _ d: UnsafePointer<AudioTimeStamp>, _ e: UInt32, _ f: UnsafePointer<AudioStreamPacketDescription>?) {
        print("test")
    }
    var inputQueue: AudioQueueRef? = nil
    var aqData = AQRecorderState(
        mDataFormat: AudioStreamBasicDescription(
            mSampleRate: 16000,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: 0,
            mBytesPerPacket: 2,
            mFramesPerPacket: 1, // Must be set to 1 for uncompressed formats
            mBytesPerFrame: 2,
            mChannelsPerFrame: 1, // Mono recording
            mBitsPerChannel: 2 * 8, // 2 bytes
            mReserved: 0), // Must be set to 0 according to https://developer.apple.com/reference/coreaudio/audiostreambasicdescription
        mQueue: inputQueue!,
        mBuffers: [AudioQueueBufferRef](),
        bufferByteSize: 32,
        mCurrentPacket: 0,
        mIsRunning: true)
    var error = AudioQueueNewInput(&aqData.mDataFormat,
                                   callback,
                                   nil,
                                   nil,
                                   nil,
                                   0,
                                   &inputQueue)
    AudioQueueStart(inputQueue!, nil)
}
It compiles and the app starts, but as soon as I call test() I get an exception:
fatal error: unexpectedly found nil while unwrapping an Optional value
The exception is caused by
mQueue: inputQueue!
I understand why this happens (inputQueue has no value) but I don't know how to initialise inputQueue correctly. The problem is that Audio Queues are very poorly documented for Swift users and I didn't find any working example on the internet.
Can anybody tell me what I am doing wrong?
Use AudioQueueNewInput(...) (or AudioQueueNewOutput) to initialize your audio queue before using it:
let sampleRate = 16000
let numChannels = 2
var inFormat = AudioStreamBasicDescription(
    mSampleRate: Double(sampleRate),
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
    mBytesPerPacket: UInt32(numChannels * MemoryLayout<UInt32>.size),
    mFramesPerPacket: 1,
    mBytesPerFrame: UInt32(numChannels * MemoryLayout<UInt32>.size),
    mChannelsPerFrame: UInt32(numChannels),
    mBitsPerChannel: UInt32(8 * MemoryLayout<UInt32>.size),
    mReserved: UInt32(0))

var inQueue: AudioQueueRef? = nil
AudioQueueNewInput(&inFormat, callback, nil, nil, nil, 0, &inQueue)

var aqData = AQRecorderState(
    mDataFormat: inFormat,
    mQueue: inQueue!, // inQueue is initialized now and can be unwrapped
    mBuffers: [AudioQueueBufferRef](),
    bufferByteSize: 32,
    mCurrentPacket: 0,
    mIsRunning: true)
Find details in Apple's documentation.
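To connect this back to the original goal (getting the raw bytes while recording), here is a minimal sketch that allocates and enqueues a few buffers, starts the queue, and copies the captured bytes inside the callback. The function name startRecording, the buffer count, and the buffer size are illustrative choices, not part of the answer above; the format matches the 16 kHz mono 16-bit setup from the question.

import AudioToolbox

func startRecording() -> AudioQueueRef? {
    var format = AudioStreamBasicDescription(
        mSampleRate: 16000,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 2,
        mFramesPerPacket: 1,
        mBytesPerFrame: 2,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 16,
        mReserved: 0)

    let callback: AudioQueueInputCallback = { _, inAQ, inBuffer, _, _, _ in
        // Copy the captured bytes; this is where you would hand them to your network layer.
        let byteCount = Int(inBuffer.pointee.mAudioDataByteSize)
        let bytes = Data(bytes: inBuffer.pointee.mAudioData, count: byteCount)
        _ = bytes // send `bytes` to the server here
        // Re-enqueue the buffer so the queue keeps recording.
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, nil)
    }

    var queue: AudioQueueRef?
    guard AudioQueueNewInput(&format, callback, nil, nil, nil, 0, &queue) == noErr,
          let inputQueue = queue else { return nil }

    // Allocate and enqueue a few buffers (~0.1 s each at 2 bytes per frame).
    let bufferByteSize: UInt32 = 3200
    for _ in 0..<3 {
        var buffer: AudioQueueBufferRef?
        if AudioQueueAllocateBuffer(inputQueue, bufferByteSize, &buffer) == noErr,
           let buffer = buffer {
            AudioQueueEnqueueBuffer(inputQueue, buffer, 0, nil)
        }
    }

    AudioQueueStart(inputQueue, nil)
    return inputQueue
}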
This code from our project works fine:
AudioBuffer * buff;
AudioQueueRef queue;
AudioStreamBasicDescription fmt = { 0 };
static void HandleInputBuffer (
void *aqData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp *inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription *inPacketDesc
) {
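    // The captured bytes are available here as inBuffer->mAudioData
    // (inBuffer->mAudioDataByteSize bytes). Hand them to your streaming code,
    // then re-enqueue the buffer so recording continues, e.g.:
    // AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);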
}
- (void) initialize {
    thisClass = self;
    __block struct AQRecorderState aqData;
    NSError * error;

    fmt.mFormatID = kAudioFormatLinearPCM;
    fmt.mSampleRate = 44100.0;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel = 16;
    fmt.mFramesPerPacket = 1;
    fmt.mBytesPerFrame = sizeof (SInt16);
    fmt.mBytesPerPacket = sizeof (SInt16);
    fmt.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;

    OSStatus status = AudioQueueNewInput ( // 1
        &fmt,                  // 2
        HandleInputBuffer,     // 3
        &aqData,               // 4
        NULL,                  // 5
        kCFRunLoopCommonModes, // 6
        0,                     // 7
        &queue                 // 8
    );

    AudioQueueBufferRef buffers[kNumberBuffers];
    UInt32 bufferByteSize = kSamplesSize;

    for (int i = 0; i < kNumberBuffers; ++i) { // 1
        OSStatus allocateStatus;
        allocateStatus = AudioQueueAllocateBuffer ( // 2
            queue,          // 3
            bufferByteSize, // 4
            &buffers[i]     // 5
        );
        OSStatus enqueStatus;
        NSLog(@"allocateStatus = %d", allocateStatus);
        enqueStatus = AudioQueueEnqueueBuffer ( // 6
            queue,      // 7
            buffers[i], // 8
            0,          // 9
            NULL        // 10
        );
        NSLog(@"enqueStatus = %d", enqueStatus);
    }

    AudioQueueStart ( // 3
        queue, // 4
        NULL   // 5
    );
}
Related
My app generates .ply files after scanning objects. Since the iOS 14 update, the color of my 3D models does not load correctly. I am also unable to view .ply files in Xcode (they work fine in Preview).
Does anyone know a workaround for this problem?
I tried reading the .ply file content and displaying the vertices and faces in a scene geometry, but it takes too long to load a file.
Apparently creating an MDLAsset throws some Metal warning and the mesh color does not show up properly.
Here are the sample images from the iOS 13 and iOS 14 previews in SceneKit.
I had the same problem. I found that it is a SceneKit bug, and my workaround is to read the .ply file with C and create an SCNGeometry instance from the data. Main code:
1. First we need to read vertexCount and faceCount from the .ply file (my file is in ASCII format):
bool readFaceAndVertexCount(char* filePath, int *vertexCount, int *faceCount);
example:
bool readFaceAndVertexCount(char* filePath, int *vertexCount, int *faceCount) {
    char data[1024]; // buffer large enough for the fgets() call below
    FILE *fp;
    if ((fp = fopen(filePath, "r")) == NULL) {
        printf("error!");
        return false;
    }
    while (!feof(fp)) {
        fgets(data, sizeof(data), fp);
        unsigned long i = strlen(data);
        data[i - 1] = '\0';
        if (strstr(data, "element vertex") != NULL) {
            char *res = strtok(data, " ");
            while (res != NULL) {
                res = strtok(NULL, " ");
                if (res != NULL) {
                    *vertexCount = atoi(res);
                }
            }
        }
        if (strstr(data, "element face") != NULL) {
            char *res = strtok(data, " ");
            while (res != NULL) {
                res = strtok(NULL, " ");
                if (res != NULL) {
                    *faceCount = atoi(res);
                }
            }
        }
        if (*faceCount > 0 && *vertexCount > 0) {
            break;
        }
    }
    fclose(fp);
    return true;
}
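Calling this from Swift could look roughly like the following, assuming the C prototype above is exposed through the project's bridging header (url is the file URL of the .ply file, as in the later snippets):

var vertexCount: Int32 = 0
var faceCount: Int32 = 0
let ok = url.path.withCString { cPath in
    readFaceAndVertexCount(UnsafeMutablePointer(mutating: cPath), &vertexCount, &faceCount)
}
// vertexCount and faceCount are filled in if `ok` is true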
2. Read the data into arrays.
In C:
// you need to implement this for your own files
bool readPlyFile(char* filePath, const int vertexCount, int faceCount, float *vertex, float *color, int *elment)
In Swift:
var vertex: [Float] = Array.init(repeating: 0, count: Int(vertexCount) * 3)
var color: [Float] = Array.init(repeating: 0, count: Int(vertexCount) * 3)
var face: [Int32] = Array.init(repeating: 0, count: Int(faceCount) * 3)
readPlyFile(UnsafeMutablePointer<Int8>(mutating: url.path),vertexCount,faceCount,&vertex,&color,&face)
3. Create a custom SCNGeometry:
let positionData = NSData.init(bytes: vertex, length: MemoryLayout<Float>.size * vertex.count)
let vertexSource = SCNGeometrySource.init(data: positionData as Data, semantic: .vertex, vectorCount: Int(vertexCount), usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<Float>.size * 3)
let colorData = NSData.init(bytes: color, length: MemoryLayout<Float>.size * color.count)
let colorSource = SCNGeometrySource.init(data: colorData as Data, semantic: .color, vectorCount: Int(vertexCount), usesFloatComponents: true, componentsPerVector: 3, bytesPerComponent: MemoryLayout<Float>.size, dataOffset: 0, dataStride: MemoryLayout<Float>.size * 3)
let indexData = NSData(bytes: face, length: MemoryLayout<Int32>.size * face.count)
let element = SCNGeometryElement(data: indexData as Data, primitiveType: SCNGeometryPrimitiveType.triangles, primitiveCount: Int(faceCount), bytesPerIndex: MemoryLayout<Int32>.size)
let geometry = SCNGeometry(sources: [vertexSource, colorSource], elements: [element])
let node = SCNNode(geometry: geometry)
let scene = SCNScene.init()
node.geometry?.firstMaterial?.cullMode = .back
node.geometry?.firstMaterial?.isDoubleSided = true
scene.rootNode.addChildNode(node)
scnView.scene = scene
It works, and it's faster!
I need to convert a CMSampleBuffer to Data. I am using a third-party framework for an audio-related task. That framework gives me the streaming (i.e. real-time) audio in a CMSampleBuffer object.
Like this:
func didAudioStreaming(audioSample: CMSampleBuffer!) {
    // Here I need to convert this to Data,
    // because I am using the gRPC framework for audio recognition.
}
Please provide the steps to convert the CMSampleBuffer to Data.
FYI
let formatDesc:CMFormatDescription? = CMSampleBufferGetFormatDescription(audioSample)
<CMAudioFormatDescription 0x17010d890 [0x1b453ebb8]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 16000.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
FormatList Array: {(null)}
}
extensions: {(null)}
}
Try the code below to convert a CMSampleBuffer to NSData (note that this first snippet reads the sample buffer's pixel buffer, so it applies to video frames):
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
CVPixelBufferLockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer!)
let height = CVPixelBufferGetHeight(imageBuffer!)
let src_buff = CVPixelBufferGetBaseAddress(imageBuffer!)
let data = NSData(bytes: src_buff, length: bytesPerRow * height)
CVPixelBufferUnlockBaseAddress(imageBuffer!, CVPixelBufferLockFlags(rawValue: 0))
EDIT:
For an audio sample buffer, use the code below:
var audioBufferList = AudioBufferList()
var data = Data()
var blockBuffer : CMBlockBuffer?
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, nil, &audioBufferList, MemoryLayout<AudioBufferList>.size, nil, nil, 0, &blockBuffer)
let buffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for audioBuffer in buffers {
    let frame = audioBuffer.mData?.assumingMemoryBound(to: UInt8.self)
    data.append(frame!, count: Int(audioBuffer.mDataByteSize))
}
Using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer requires calling CFRelease(blockBuffer) at some point, because the buffer is retained; if it is not released, the pool of buffers eventually becomes empty and no new CMSampleBuffers are generated.
I'd suggest getting the data directly, using the following:
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t lengthAtOffset;
size_t totalLength;
char *data;
CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset, &totalLength, &data);
NSData *audioData = [NSData dataWithBytes:data length:totalLength];
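For Swift users, a rough equivalent of that snippet (using the Swift 4.2+ argument labels; the function name here is just for illustration) might be:

import CoreMedia

func data(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }
    var lengthAtOffset = 0
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<Int8>?
    let status = CMBlockBufferGetDataPointer(blockBuffer,
                                             atOffset: 0,
                                             lengthAtOffsetOut: &lengthAtOffset,
                                             totalLengthOut: &totalLength,
                                             dataPointerOut: &dataPointer)
    guard status == kCMBlockBufferNoErr, let pointer = dataPointer else { return nil }
    // Copies the entire contiguous payload of the block buffer.
    return Data(bytes: pointer, count: totalLength)
}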
How do you create a silent audio CMSampleBufferRef in Swift? I am looking to append silent CMSampleBufferRefs to an instance of AVAssetWriterInput.
You don't say what format you want your zeros in (integer/floating point, mono/stereo, sample rate), but maybe it doesn't matter. Anyway, here's one way to create a silent, CD-audio-style CMSampleBuffer in Swift.
func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
    let bytesPerFrame = UInt32(2 * numChannels)
    let blockSize = nFrames * Int(bytesPerFrame)

    var block: CMBlockBuffer?
    var status = CMBlockBufferCreateWithMemoryBlock(
        kCFAllocatorDefault,
        nil,
        blockSize, // blockLength
        nil,       // blockAllocator
        nil,       // customBlockSource
        0,         // offsetToData
        blockSize, // dataLength
        0,         // flags
        &block
    )
    assert(status == kCMBlockBufferNoErr)

    // we seem to get zeros from the above, but I can't find it documented. so... memset:
    status = CMBlockBufferFillDataBytes(0, block!, 0, blockSize)
    assert(status == kCMBlockBufferNoErr)

    var asbd = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
        mBytesPerPacket: bytesPerFrame,
        mFramesPerPacket: 1,
        mBytesPerFrame: bytesPerFrame,
        mChannelsPerFrame: numChannels,
        mBitsPerChannel: 16,
        mReserved: 0
    )

    var formatDesc: CMAudioFormatDescription?
    status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, 0, nil, 0, nil, nil, &formatDesc)
    assert(status == noErr)

    var sampleBuffer: CMSampleBuffer?
    // born ready
    status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
        kCFAllocatorDefault,
        block,       // dataBuffer
        formatDesc!,
        nFrames,     // numSamples
        CMTimeMake(startFrm, Int32(sampleRate)), // sbufPTS
        nil,         // packetDescriptions
        &sampleBuffer
    )
    assert(status == noErr)

    return sampleBuffer
}
Doesn't it make you sorry you asked? Do you really need silent CMSampleBuffers? Can't you insert silence into an AVAssetWriterInput by moving the presentation time stamp forward?
Updated for Xcode 10.3 and Swift 5.0.1.
Don't forget to import CoreMedia.
import Foundation
import CoreMedia
class CMSampleBufferFactory
{
    static func createSilentAudio(startFrm: Int64, nFrames: Int, sampleRate: Float64, numChannels: UInt32) -> CMSampleBuffer? {
        let bytesPerFrame = UInt32(2 * numChannels)
        let blockSize = nFrames * Int(bytesPerFrame)

        var block: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: nil,
            blockLength: blockSize,
            blockAllocator: nil,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: blockSize,
            flags: 0,
            blockBufferOut: &block
        )
        assert(status == kCMBlockBufferNoErr)

        guard var eBlock = block else { return nil }

        // we seem to get zeros from the above, but I can't find it documented. so... memset:
        status = CMBlockBufferFillDataBytes(with: 0, blockBuffer: eBlock, offsetIntoDestination: 0, dataLength: blockSize)
        assert(status == kCMBlockBufferNoErr)

        var asbd = AudioStreamBasicDescription(
            mSampleRate: sampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kLinearPCMFormatFlagIsSignedInteger,
            mBytesPerPacket: bytesPerFrame,
            mFramesPerPacket: 1,
            mBytesPerFrame: bytesPerFrame,
            mChannelsPerFrame: numChannels,
            mBitsPerChannel: 16,
            mReserved: 0
        )

        var formatDesc: CMAudioFormatDescription?
        status = CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault, asbd: &asbd, layoutSize: 0, layout: nil, magicCookieSize: 0, magicCookie: nil, extensions: nil, formatDescriptionOut: &formatDesc)
        assert(status == noErr)

        var sampleBuffer: CMSampleBuffer?
        status = CMAudioSampleBufferCreateReadyWithPacketDescriptions(
            allocator: kCFAllocatorDefault,
            dataBuffer: eBlock,
            formatDescription: formatDesc!,
            sampleCount: nFrames,
            presentationTimeStamp: CMTimeMake(value: startFrm, timescale: Int32(sampleRate)),
            packetDescriptions: nil,
            sampleBufferOut: &sampleBuffer
        )
        assert(status == noErr)

        return sampleBuffer
    }
}
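A hedged usage sketch for the original goal of appending silence to an AVAssetWriterInput; audioInput and the frame bookkeeping are assumptions about your writer setup, not part of the answer:

import AVFoundation
import CoreMedia

func appendSilence(to audioInput: AVAssetWriterInput,
                   startingAt nextFrame: Int64,
                   frames: Int,
                   sampleRate: Float64 = 44100,
                   channels: UInt32 = 2) -> Int64 {
    guard audioInput.isReadyForMoreMediaData,
          let silence = CMSampleBufferFactory.createSilentAudio(startFrm: nextFrame,
                                                                nFrames: frames,
                                                                sampleRate: sampleRate,
                                                                numChannels: channels) else {
        return nextFrame
    }
    audioInput.append(silence)
    // Return the frame position for the next buffer.
    return nextFrame + Int64(frames)
}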
You need to create a block buffer using CMBlockBufferCreateWithMemoryBlock().
Fill the block buffer with a bunch of zeros and then pass it into CMAudioSampleBufferCreateWithPacketDescriptions().
Disclaimer: I haven't actually done this in Swift. I attempted it but found myself fighting the compiler at every turn, so I switched to Objective-C. The Core Media framework is a low-level C framework and was a lot easier to use without screwing around with Swift's type system. I know this isn't the answer you're looking for, but hopefully it will point you in the right direction.
Example
I am trying to add a WAV header on top of raw PCM data to make it playable via AVAudioPlayer, but I couldn't find any solution or source code to do that on iOS using Objective-C/Swift. I found this, but it doesn't have a correct answer.
I did find a piece of code here, which is in C, but it also has some issues: the WAV file generated from that code doesn't play properly.
Below is the code I have written so far.
int NumChannels = AUDIO_CHANNELS_PER_FRAME;
short BitsPerSample = AUDIO_BITS_PER_CHANNEL;
int SamplingRate = AUDIO_SAMPLE_RATE;
int numOfSamples = [[NSData dataWithContentsOfFile:filePath] length];
int ByteRate = NumChannels*BitsPerSample*SamplingRate/8;
short BlockAlign = NumChannels*BitsPerSample/8;
int DataSize = NumChannels*numOfSamples*BitsPerSample/8;
int chunkSize = 16;
int totalSize = 36 + DataSize;
short audioFormat = 1;
if((fout = fopen([wavFilePath cStringUsingEncoding:1], "w")) == NULL)
{
printf("Error opening out file ");
}
fwrite("RIFF", sizeof(char), 4,fout);
fwrite(&totalSize, sizeof(int), 1, fout);
fwrite("WAVE", sizeof(char), 4, fout);
fwrite("fmt ", sizeof(char), 3, fout);
fwrite(&chunkSize, sizeof(int),1,fout);
fwrite(&audioFormat, sizeof(short), 1, fout);
fwrite(&NumChannels, sizeof(short),1,fout);
fwrite(&SamplingRate, sizeof(int), 1, fout);
fwrite(&ByteRate, sizeof(int), 1, fout);
fwrite(&BlockAlign, sizeof(short), 1, fout);
fwrite(&BitsPerSample, sizeof(short), 1, fout);
fwrite("data", sizeof(char), 3, fout);
fwrite(&DataSize, sizeof(int), 1, fout);
The file plays too fast, the sound is distorted, and only the first 10 to 20 seconds (roughly) play. I think the WAV header isn't being generated correctly (because I am able to play the same PCM data/buffer using AudioUnit/AudioQueue). So what am I missing in my code? Any help would be highly appreciated.
Thanks in advance.
OK, I am answering my own question in case it helps someone else. After a few days of tireless trying, I finally got it working. Below is a complete function written in Objective-C and C. It takes as a parameter a file path containing raw PCM data captured directly from the microphone, and returns a file path to a file containing the appropriate WAV header followed by the PCM data. Then you can play that file with AVAudioPlayer or AVPlayer. Here is the code...
- (NSURL *) getAndCreatePlayableFileFromPcmData:(NSString *)filePath
{
    NSString *wavFileName = [[filePath lastPathComponent] stringByDeletingPathExtension];
    NSString *wavFileFullName = [NSString stringWithFormat:@"%@.wav", wavFileName];
    [self createFileWithName:wavFileFullName];

    NSArray *dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *docsDir = [dirPaths objectAtIndex:0];
    NSString *wavFilePath = [docsDir stringByAppendingPathComponent:wavFileFullName];
    NSLog(@"PCM file path : %@", filePath);

    FILE *fout;

    short NumChannels = AUDIO_CHANNELS_PER_FRAME;
    short BitsPerSample = AUDIO_BITS_PER_CHANNEL;
    int SamplingRate = AUDIO_SAMPLE_RATE;
    int numOfSamples = [[NSData dataWithContentsOfFile:filePath] length];

    int ByteRate = NumChannels*BitsPerSample*SamplingRate/8;
    short BlockAlign = NumChannels*BitsPerSample/8;
    int DataSize = NumChannels*numOfSamples*BitsPerSample/8;
    int chunkSize = 16;
    int totalSize = 46 + DataSize;
    short audioFormat = 1;

    if((fout = fopen([wavFilePath cStringUsingEncoding:1], "w")) == NULL)
    {
        printf("Error opening out file ");
    }

    fwrite("RIFF", sizeof(char), 4, fout);
    fwrite(&totalSize, sizeof(int), 1, fout);
    fwrite("WAVE", sizeof(char), 4, fout);
    fwrite("fmt ", sizeof(char), 4, fout);
    fwrite(&chunkSize, sizeof(int), 1, fout);
    fwrite(&audioFormat, sizeof(short), 1, fout);
    fwrite(&NumChannels, sizeof(short), 1, fout);
    fwrite(&SamplingRate, sizeof(int), 1, fout);
    fwrite(&ByteRate, sizeof(int), 1, fout);
    fwrite(&BlockAlign, sizeof(short), 1, fout);
    fwrite(&BitsPerSample, sizeof(short), 1, fout);
    fwrite("data", sizeof(char), 4, fout);
    fwrite(&DataSize, sizeof(int), 1, fout);
    fclose(fout);

    NSMutableData *pamdata = [NSMutableData dataWithContentsOfFile:filePath];
    NSFileHandle *handle;
    handle = [NSFileHandle fileHandleForUpdatingAtPath:wavFilePath];
    [handle seekToEndOfFile];
    [handle writeData:pamdata];
    [handle closeFile];

    return [NSURL URLWithString:wavFilePath];
}
But that function only works with the following audio settings.
// Audio settings.
#define AUDIO_SAMPLE_RATE 8000
#define AUDIO_FRAMES_PER_PACKET 1
#define AUDIO_CHANNELS_PER_FRAME 1
#define AUDIO_BITS_PER_CHANNEL 16
#define AUDIO_BYTES_PER_PACKET 2
#define AUDIO_BYTES_PER_FRAME 2
Very helpful question and answer, thank you very much.
This Swift version is for those in need:
static func createWAV(from pcmFilePath: String, to wavFilePath: String) -> Bool {
    // Make sure that the path does not contain non-ASCII characters
    guard let wavCPath = wavFilePath.cString(using: .ascii),
          let fout = fopen(wavCPath, "w") else { return false }
    guard let pcmData = try? Data(contentsOf: URL(fileURLWithPath: pcmFilePath)) else { return false }

    var numChannels: CShort = 1
    let numChannelsInt: CInt = 1
    var bitsPerSample: CShort = 16
    let bitsPerSampleInt: CInt = 16
    var samplingRate: CInt = 16000
    let numOfSamples = CInt(pcmData.count)

    var byteRate = numChannelsInt * bitsPerSampleInt * samplingRate / 8
    var blockAlign = numChannelsInt * bitsPerSampleInt / 8
    var dataSize = numChannelsInt * numOfSamples * bitsPerSampleInt / 8
    var chunkSize: CInt = 16
    var totalSize = 46 + dataSize
    var audioFormat: CShort = 1

    fwrite("RIFF".cString(using: .ascii)!, MemoryLayout<CChar>.size, 4, fout)
    fwrite(&totalSize, MemoryLayout<CInt>.size, 1, fout)
    fwrite("WAVE".cString(using: .ascii)!, MemoryLayout<CChar>.size, 4, fout)
    fwrite("fmt ".cString(using: .ascii)!, MemoryLayout<CChar>.size, 4, fout)
    fwrite(&chunkSize, MemoryLayout<CInt>.size, 1, fout)
    fwrite(&audioFormat, MemoryLayout<CShort>.size, 1, fout)
    fwrite(&numChannels, MemoryLayout<CShort>.size, 1, fout)
    fwrite(&samplingRate, MemoryLayout<CInt>.size, 1, fout)
    fwrite(&byteRate, MemoryLayout<CInt>.size, 1, fout)
    fwrite(&blockAlign, MemoryLayout<CShort>.size, 1, fout)
    fwrite(&bitsPerSample, MemoryLayout<CShort>.size, 1, fout)
    fwrite("data".cString(using: .ascii)!, MemoryLayout<CChar>.size, 4, fout)
    fwrite(&dataSize, MemoryLayout<CInt>.size, 1, fout)
    fclose(fout)

    guard let handle = FileHandle(forUpdatingAtPath: wavFilePath) else { return false }
    handle.seekToEndOfFile()
    handle.write(pcmData)
    handle.closeFile()
    return true
}
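A hedged usage sketch: the file paths below are placeholders, the call assumes you are inside the type that declares createWAV, and the PCM input must match the constants hard-coded above (mono, 16-bit, 16 kHz):

import AVFoundation

let pcmPath = NSTemporaryDirectory() + "capture.pcm"   // placeholder path
let wavPath = NSTemporaryDirectory() + "capture.wav"   // placeholder path
if createWAV(from: pcmPath, to: wavPath) {
    // AVAudioPlayer can now open the file thanks to the WAV header.
    let player = try? AVAudioPlayer(contentsOf: URL(fileURLWithPath: wavPath))
    player?.play()
}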
Modified from qiz's answer for Swift 5:
func extractSubchunks(data: Data) -> RiffFile? {
    var data = data
    var chunks = [SubChunk]()

    // RIFF stores sizes little-endian: byte 0 is the least significant.
    let position = data.subdata(in: 8..<12)
    let filelengthBytes = data.subdata(in: 4..<8).map { UInt32($0) }
    let filelength: UInt32 = filelengthBytes[0] + (filelengthBytes[1] << 8) + (filelengthBytes[2] << 16) + (filelengthBytes[3] << 24)

    let wave = String(bytes: position, encoding: .utf8) ?? "NoName"
    guard wave == "WAVE" else {
        print("File is \(wave) not WAVE")
        return nil
    }
    data.removeSubrange(0..<12)
    print("Found chunks")

    while data.count != 0 {
        let position = data.subdata(in: 0..<4)
        let lengthBytes = data.subdata(in: 4..<8).map { UInt32($0) }
        let length: UInt32 = lengthBytes[0] + (lengthBytes[1] << 8) + (lengthBytes[2] << 16) + (lengthBytes[3] << 24)
        guard let current = String(bytes: position, encoding: .utf8) else {
            return nil
        }
        data.removeSubrange(0..<8)
        let chunkData = data.subdata(in: 0..<Int(length))
        data.removeSubrange(0..<Int(length))
        let subchunk = SubChunk(name: current, size: Int(length), data: chunkData)
        chunks.append(subchunk)
        print(subchunk.debugDescription)
    }

    let riff = RiffFile(size: Int(filelength), subChunks: chunks)
    return riff
}
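The SubChunk and RiffFile types referenced above come from the original answer and are not shown here; a minimal sketch of what they might look like:

// Minimal sketch of the types the function above assumes; the originals may differ.
struct SubChunk: CustomDebugStringConvertible {
    let name: String
    let size: Int
    let data: Data
    var debugDescription: String { "\(name): \(size) bytes" }
}

struct RiffFile {
    let size: Int
    let subChunks: [SubChunk]
}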
This is a Data extension for Swift that returns another Data, made using the answer from qiz.
extension Data {
    var wavValue: Data? {
        var numChannels: CShort = 1
        let numChannelsInt: CInt = 1
        var bitsPerSample: CShort = 16
        let bitsPerSampleInt: CInt = 16
        var samplingRate: CInt = 44100
        let numOfSamples = CInt(count)

        var byteRate = numChannelsInt * bitsPerSampleInt * samplingRate / 8
        var blockAlign = numChannelsInt * bitsPerSampleInt / 8
        var dataSize = numChannelsInt * numOfSamples * bitsPerSampleInt / 8
        var chunkSize: CInt = 16
        var totalSize = 46 + dataSize
        var audioFormat: CShort = 1

        let wavNSData = NSMutableData()
        wavNSData.append("RIFF".cString(using: .ascii) ?? .init(), length: MemoryLayout<CChar>.size * 4)
        wavNSData.append(&totalSize, length: MemoryLayout<CInt>.size)
        wavNSData.append("WAVE".cString(using: .ascii) ?? .init(), length: MemoryLayout<CChar>.size * 4)
        wavNSData.append("fmt ".cString(using: .ascii) ?? .init(), length: MemoryLayout<CChar>.size * 4)
        wavNSData.append(&chunkSize, length: MemoryLayout<CInt>.size)
        wavNSData.append(&audioFormat, length: MemoryLayout<CShort>.size)
        wavNSData.append(&numChannels, length: MemoryLayout<CShort>.size)
        wavNSData.append(&samplingRate, length: MemoryLayout<CInt>.size)
        wavNSData.append(&byteRate, length: MemoryLayout<CInt>.size)
        wavNSData.append(&blockAlign, length: MemoryLayout<CShort>.size)
        wavNSData.append(&bitsPerSample, length: MemoryLayout<CShort>.size)
        wavNSData.append("data".cString(using: .ascii) ?? .init(), length: MemoryLayout<CChar>.size * 4)
        wavNSData.append(&dataSize, length: MemoryLayout<CInt>.size)
        wavNSData.append(self)

        let wavData = Data(referencing: wavNSData)
        return wavData
    }
}
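Usage is then a one-liner (rawPCMData stands for whatever raw PCM Data you already have):

// `rawPCMData` is a placeholder for the PCM Data captured elsewhere.
let wavData = rawPCMData.wavValue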
I want to read from an AudioFileID and then write what I read to the end of the same file (to make it loop), with this method:
UInt64 outDataSize = 0;
UInt32 thePropSize = sizeof(UInt64);
OSStatus result = AudioFileGetProperty(mBackupRecordFile, kAudioFilePropertyAudioDataByteCount, &thePropSize, &outDataSize);
UInt32 readPoint = outDataSize;
void* theData = malloc(outDataSize);
OSStatus result2 = AudioFileReadBytes(mBackupRecordFile, FALSE, 0, &readPoint, theData);
UInt32 writeBytes = readPoint;
OSStatus result3 = AudioFileWriteBytes(mBackupRecordFile, FALSE, readPoint, &writeBytes, theData);
The problem is that for result3 I get a big number instead of 0, and the file won't grow.
The value of result3 is 1869627199 = kAudioFileOperationNotSupportedError.