I want to record audio from the mic, and I need the audio to be in a specific format. Here's the code I'm trying to run:
AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mBitsPerChannel = 16;
asbd.mBytesPerFrame = 2;
asbd.mBytesPerPacket = 2;
asbd.mChannelsPerFrame = 1;
asbd.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFramesPerPacket = 1;
asbd.mSampleRate = 8000;
self.microphone = [EZMicrophone microphoneWithDelegate:self];
[self.microphone setAudioStreamBasicDescription:asbd];
But the app crashes. Here is the screenshot. How do I fix it?
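(For reference: iOS devices are little-endian, and linear PCM client formats generally need to be native-endian, so the kLinearPCMFormatFlagIsBigEndian flag is one thing worth checking. A native-endian variant of the same format would look like the sketch below; this is an assumption to verify against the screenshot, not a confirmed fix.)
AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mSampleRate       = 8000;
asbd.mFormatID         = kAudioFormatLinearPCM;
// Omitting the big-endian flag gives native little-endian PCM on iOS hardware.
asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
asbd.mBitsPerChannel   = 16;
asbd.mChannelsPerFrame = 1;
asbd.mFramesPerPacket  = 1;
asbd.mBytesPerFrame    = 2;
asbd.mBytesPerPacket   = 2;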
I'm currently recording stereo audio from the microphone of the iPhone and I have to record the data from the callbacks for analysis.
Currently my AudioStreamBasicDescription format is
AudioStreamBasicDescription format;
format.mSampleRate = 0;
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
format.mFramesPerPacket = 1;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 32;
format.mBytesPerPacket = 4;
format.mBytesPerFrame = 4;
and the buffer list I render data into is
inputBufferList->mNumberBuffers = NUMCHANNELS;
for (size_t n = 0; n < NUMCHANNELS; n++) {
    inputBufferList->mBuffers[n].mDataByteSize = inNumberFrames * sizeof(float);
    inputBufferList->mBuffers[n].mNumberChannels = 1;
    inputBufferList->mBuffers[n].mData = malloc(inputBufferList->mBuffers[n].mDataByteSize);
}
When I try to write this data with ExtAudioFileWrite, it returns an error saying the format is wrong. Is there any tutorial on how to write stereo audio using ExtAudioFileWrite?
Edit:
Here is how I'm setting it up
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[NSString alloc] initWithFormat:@"%@/testrecording.wav", documentsDirectory];
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus status;
ExtAudioFileRef cfref;
status = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType,
                                   &format, NULL, kAudioFileFlags_EraseFile,
                                   &cfref);
The status returned by this call indicates an error (1718449215).
1718449215 is kAudioConverterErr_FormatNotSupported ('fmt?'), so I'm guessing that WAVE might not support float LPCM. You could try changing to kAudioFormatFlagIsSignedInteger or switching file format, e.g. kAudioFileM4AType, kAudioFileCAFType, or (maybe?) kAudioFileAIFFType.
Don't forget to update format sizes for the former, and filename extension for the latter.
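For illustration, a 16-bit signed-integer format that a WAV file should accept might look like the sketch below; the sample rate is an assumption, not taken from the code above.
AudioStreamBasicDescription fileFormat = {0};
fileFormat.mSampleRate       = 44100;   // assumed; match your session's sample rate
fileFormat.mFormatID         = kAudioFormatLinearPCM;
// No big-endian flag: native little-endian on iOS, which WAV expects
fileFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
fileFormat.mFramesPerPacket  = 1;
fileFormat.mChannelsPerFrame = 2;
fileFormat.mBitsPerChannel   = 16;
fileFormat.mBytesPerFrame    = 2 * fileFormat.mChannelsPerFrame;  // interleaved: 2 bytes x 2 channels
fileFormat.mBytesPerPacket   = fileFormat.mBytesPerFrame;
Another route is to pass an integer format like this to ExtAudioFileCreateWithURL but keep the original float, non-interleaved ASBD as the client data format (kExtAudioFileProperty_ClientDataFormat), so ExtAudioFile performs the conversion as it writes.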
I'm trying to convert PCM audio from 16 kHz to 8 kHz (just the sample rate, no format change). The flow looks simple, but I keep getting kAudioConverterErr_InvalidInputSize ("insz") from AudioConverterFillComplexBuffer. My input audio is 320 bytes, so the result should be 160 bytes, but I only get 144 bytes in my output buffer. I've been pulling my hair out for the last couple of hours. Is one of my settings wrong?
static AudioConverterRef PCM8kTo16kConverterRef;
- (instancetype)init {
    self = [super init];
    if (self) {
        [self initConverter];
    }
    return self;
}
- (void)initConverter {
    AudioStreamBasicDescription PCM8kDescription = {0};
    PCM8kDescription.mSampleRate = 8000.0;
    PCM8kDescription.mFormatID = kAudioFormatLinearPCM;
    PCM8kDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
    PCM8kDescription.mBitsPerChannel = 8 * sizeof(SInt16);
    PCM8kDescription.mChannelsPerFrame = 1;
    PCM8kDescription.mBytesPerFrame = sizeof(SInt16) * PCM8kDescription.mChannelsPerFrame;
    PCM8kDescription.mFramesPerPacket = 1;
    PCM8kDescription.mBytesPerPacket = PCM8kDescription.mBytesPerFrame * PCM8kDescription.mFramesPerPacket;

    AudioStreamBasicDescription PCM16kDescription = {0};
    PCM16kDescription.mSampleRate = 16000.0;
    PCM16kDescription.mFormatID = kAudioFormatLinearPCM;
    PCM16kDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
    PCM16kDescription.mBitsPerChannel = 8 * sizeof(SInt16);
    PCM16kDescription.mChannelsPerFrame = 1;
    PCM16kDescription.mBytesPerFrame = sizeof(SInt16) * PCM16kDescription.mChannelsPerFrame;
    PCM16kDescription.mFramesPerPacket = 1;
    PCM16kDescription.mBytesPerPacket = PCM16kDescription.mBytesPerFrame * PCM16kDescription.mFramesPerPacket;

    OSStatus status = AudioConverterNew(&PCM16kDescription, &PCM8kDescription, &converterRef);
}
OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
    AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;
    ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;
    return noErr;
}
- (NSData *)testSample:(NSData *)inAudio {
    NSMutableData *ddd = [inAudio mutableCopy];

    AudioBufferList inAudioBufferList = {0};
    inAudioBufferList.mNumberBuffers = 1;
    inAudioBufferList.mBuffers[0].mNumberChannels = 1;
    inAudioBufferList.mBuffers[0].mDataByteSize = (UInt32)[ddd length];
    inAudioBufferList.mBuffers[0].mData = [ddd mutableBytes];

    uint32_t bufferSize = (UInt32)[inAudio length] / 2;
    uint8_t *buffer = (uint8_t *)malloc(bufferSize);
    memset(buffer, 0, bufferSize);

    AudioBufferList outAudioBufferList;
    outAudioBufferList.mNumberBuffers = 1;
    outAudioBufferList.mBuffers[0].mNumberChannels = 1;
    outAudioBufferList.mBuffers[0].mDataByteSize = bufferSize;
    outAudioBufferList.mBuffers[0].mData = buffer;

    UInt32 ioOutputDataPacketSize = bufferSize;
    OSStatus ret = AudioConverterFillComplexBuffer(converterRef, inInputDataProc, &inAudioBufferList, &ioOutputDataPacketSize, &outAudioBufferList, NULL);

    NSData *data = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
    free(buffer);
    return data;
}
There are two problems:
1. Your AudioConverterComplexInputDataProc isn't setting ioNumberDataPackets:
*ioNumberDataPackets = audioBufferList.mBuffers[0].mDataByteSize/2;
2. ioOutputDataPacketSize is supposed to be the output buffer capacity in packets/frames, not bytes, so shouldn't you divide by 2?
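A sketch of both fixes, assuming 16-bit mono input as in the code above (the packet math is the point, not the exact variable names):
// 1) The input proc must report how many packets it is supplying.
OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets,
                         AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription,
                         void *inUserData)
{
    AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;
    ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;
    // 16-bit mono: one packet == one frame == 2 bytes
    *ioNumberDataPackets = audioBufferList.mBuffers[0].mDataByteSize / 2;
    return noErr;
}

// 2) The size passed to AudioConverterFillComplexBuffer is in packets, not bytes.
UInt32 ioOutputDataPacketSize = bufferSize / 2;   // output capacity in frames
OSStatus ret = AudioConverterFillComplexBuffer(converterRef, inInputDataProc, &inAudioBufferList,
                                               &ioOutputDataPacketSize, &outAudioBufferList, NULL);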
I'm trying to build an iOS app, and I need to read a .wav file or the microphone input as a float or int array to feed into existing audio signal processing algorithms written in C. Is there an easy way to do this, like wavread in MATLAB?
void readAudio() {
    NSString *name = @"Test";
    NSString *source = [[NSBundle mainBundle] pathForResource:name ofType:@"caf"];
    const char *cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
    CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
    CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);

    AudioFileID fileID;
    OSStatus err = AudioFileOpenURL(inputFileURL, kAudioFileReadPermission, 0, &fileID);
    CheckError(err, "AudioFileOpenURL");

    ExtAudioFileRef fileRef;
    err = ExtAudioFileOpenURL(inputFileURL, &fileRef);
    CheckError(err, "ExtAudioFileOpenURL");

    AudioStreamBasicDescription clientFormat;
    memset(&clientFormat, 0, sizeof(clientFormat));
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mChannelsPerFrame = 1;
    clientFormat.mBitsPerChannel = 16;
    clientFormat.mBytesPerPacket = clientFormat.mChannelsPerFrame * sizeof(SInt16);
    clientFormat.mBytesPerFrame = clientFormat.mChannelsPerFrame * sizeof(SInt16);
    clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    clientFormat.mSampleRate = 8000;

    err = ExtAudioFileSetProperty(fileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &clientFormat);
    CheckError(err, "ExtAudioFileSetProperty");

    int numSamples = 64;
    UInt32 sizePerPacket = clientFormat.mBytesPerPacket;
    UInt32 packetsPerBuffer = numSamples;
    UInt32 outputBufferSize = packetsPerBuffer * sizePerPacket;
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8 *) * outputBufferSize);

    AudioBufferList convertedData;
    convertedData.mNumberBuffers = 1;
    convertedData.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
    convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
    convertedData.mBuffers[0].mData = outputBuffer;

    UInt32 frameCount = numSamples;
    short *samplesAsCArray, *output = (short *)malloc(sizeof(UInt8 *) * numSamples);

    while (frameCount > 0) {
        err = ExtAudioFileRead(fileRef, &frameCount, &convertedData);
        if (frameCount > 0) {
            uint64_t startTime = mach_absolute_time();
            AudioBuffer audioBuffer = convertedData.mBuffers[0];
            samplesAsCArray = (short *)audioBuffer.mData;
            FIRFilter(samplesAsCArray, audioBuffer.mDataByteSize / (sizeof(short)), output);
            memcpy([iosAudio tempBuffer].mData, output, audioBuffer.mDataByteSize);
            uint64_t duration = mach_absolute_time() - startTime;
            NSLog(@"%f milliseconds", (float)duration / 1e6);
            /*for (int i = 0; i < frameCount; i++) {
                printf("%d\n", output[i]);
            }*/
        }
    }
    free(output);
}
This is how far I've got. I need to be able to save the edited file as linear PCM (CAF, since Apple works better with that), export it as text, and also play the edited audio back in real time.
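For the CAF-saving part, one minimal sketch (untested against the code above; the output file name is an assumption, it reuses the clientFormat, frameCount, and output variables already defined, and error handling is omitted) is to create the file with ExtAudioFileCreateWithURL and push each processed block through ExtAudioFileWrite:
// Create a 16-bit mono CAF using the same clientFormat for both file and client data.
NSString *outPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"processed.caf"];
NSURL *outURL = [NSURL fileURLWithPath:outPath];
ExtAudioFileRef outFile;
ExtAudioFileCreateWithURL((CFURLRef)outURL, kAudioFileCAFType, &clientFormat,
                          NULL, kAudioFileFlags_EraseFile, &outFile);
ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                        sizeof(clientFormat), &clientFormat);

// Inside the read loop, after FIRFilter() has filled 'output':
AudioBufferList writeList;
writeList.mNumberBuffers = 1;
writeList.mBuffers[0].mNumberChannels = 1;
writeList.mBuffers[0].mDataByteSize = frameCount * sizeof(short);
writeList.mBuffers[0].mData = output;
ExtAudioFileWrite(outFile, frameCount, &writeList);

// After the loop:
ExtAudioFileDispose(outFile);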
With EZAudio I want to create a mono AudioBufferList that is as lightweight as possible. In the past I achieved 46 bytes per AudioBuffer, but with a relatively small buffer duration. First things first: if I use the AudioStreamBasicDescription below for both input and output
AudioStreamBasicDescription audioFormat;
audioFormat.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
audioFormat.mBytesPerFrame = sizeof(AudioUnitSampleType);
audioFormat.mBytesPerPacket = sizeof(AudioUnitSampleType);
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFramesPerPacket = 1;
audioFormat.mSampleRate = 44100;
and use TPCircularBuffer as the transport, I get two buffers in the bufferList, each with mDataByteSize of 4096, which is definitely too much. So I tried my previous ASBD:
audioFormat.mSampleRate = 8000.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 8;
audioFormat.mBytesPerPacket = 1;
audioFormat.mBytesPerFrame = 1;
Now mDataByteSize is 128 and there is only one buffer, but TPCircularBuffer can't handle this properly. I figure that's because I want to use only one channel. So for now I've dropped TPCircularBuffer and instead tried encoding and decoding the bytes to NSData, or, just as a test, passing the AudioBufferList straight through, but even with the first AudioStreamBasicDescription the sound is badly distorted.
My current code:
- (void)initMicrophone {
    AudioStreamBasicDescription audioFormat;
    //*
    audioFormat.mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerFrame = sizeof(AudioUnitSampleType);
    audioFormat.mBytesPerPacket = sizeof(AudioUnitSampleType);
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mSampleRate = 44100;
    /*/
    audioFormat.mSampleRate = 8000.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 8;
    audioFormat.mBytesPerPacket = 1;
    audioFormat.mBytesPerFrame = 1;
    //*/

    _microphone = [EZMicrophone microphoneWithDelegate:self withAudioStreamBasicDescription:audioFormat];
    _output = [EZOutput outputWithDataSource:self withAudioStreamBasicDescription:audioFormat];
    [EZAudio circularBuffer:&_cBuffer withSize:128];
}

- (void)startSending {
    [_microphone startFetchingAudio];
    [_output startPlayback];
}

- (void)stopSending {
    [_microphone stopFetchingAudio];
    [_output stopPlayback];
}

- (void)microphone:(EZMicrophone *)microphone
  hasAudioReceived:(float **)buffer
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}

- (void)microphone:(EZMicrophone *)microphone
     hasBufferList:(AudioBufferList *)bufferList
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels {
    //*
    abufferlist = bufferList;
    /*/
    audioBufferData = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
    //*/
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}

- (AudioBufferList *)output:(EZOutput *)output needsBufferListWithFrames:(UInt32)frames withBufferSize:(UInt32 *)bufferSize {
    //*
    return abufferlist;
    /*/
//    int bSize = 128;
//    AudioBuffer audioBuffer;
//    audioBuffer.mNumberChannels = 1;
//    audioBuffer.mDataByteSize = bSize;
//    audioBuffer.mData = malloc(bSize);
////    [audioBufferData getBytes:audioBuffer.mData length:bSize];
//    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);
//
//
//    AudioBufferList *bufferList = [EZAudio audioBufferList];
//    bufferList->mNumberBuffers = 1;
//    bufferList->mBuffers[0] = audioBuffer;
//
//    return bufferList;
    //*/
}
I know that the value of bSize in output:needsBufferListWithFrames:withBufferSize: may need to change.
My main goal is to create mono audio that is as lightweight as possible, encode it to NSData, and decode it back for output. Could you suggest what I'm doing wrong?
I had the same issue. I moved to AVAudioRecorder and set the parameters I needed, and I kept EZAudio (EZMicrophone) for audio visualisation. Here is a link describing how to achieve this:
iOS: Audio Recording File Format
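For what it's worth, a minimal AVAudioRecorder configuration for lightweight 16-bit mono PCM might look like the sketch below; the 8 kHz sample rate and file name are assumptions, and error handling is omitted:
NSURL *url = [NSURL fileURLWithPath:
                 [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.caf"]];
NSDictionary *settings = @{ AVFormatIDKey             : @(kAudioFormatLinearPCM),
                            AVSampleRateKey           : @8000.0,
                            AVNumberOfChannelsKey     : @1,
                            AVLinearPCMBitDepthKey    : @16,
                            AVLinearPCMIsFloatKey     : @NO,
                            AVLinearPCMIsBigEndianKey : @NO };
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
[recorder prepareToRecord];
[recorder record];
// EZMicrophone can keep running alongside this purely for visualisation.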
This problem may be a bit too vast and nebulous for this space, but I'll give it a go.
I have an array of samples that I'm trying to write to a .wav file on iOS, and it is taking up to a minute and a half to do. Here is the loop where the drag is occurring:
for (int i = 0; i < 1430529; i++) // 1430529 is the length of the array of samples
{
    SInt16 sample;
    sample = sample_array[i];
    audioErr = AudioFileWriteBytes(audioFile, false, sampleCount*2, &bytesToWrite, &sample);
    sampleCount++;
}
Any ideas?
EDIT 1
If it helps, this is the code that precedes it:
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

// THIS IS MY BIT TO CONVERT THE LOCATION TO NSSTRING
NSString *filePath = [[NSString alloc] init];
filePath = [NSString stringWithUTF8String:location];

// HERE I WANT TO REMOVE THE FILE NAME FROM THE LOCATION.
NSString *truncatedFilePath = filePath;
truncatedFilePath = [truncatedFilePath stringByReplacingOccurrencesOfString:@"/recordedFile.wav"
                                                              // withString:@"/newFile.caf"];
                                                                 withString:@"/recordedFile.wav"];
NSLog(truncatedFilePath);
NSURL *fileURL = [NSURL fileURLWithPath:truncatedFilePath];

AudioStreamBasicDescription asbd;
memset(&asbd, 0, sizeof(asbd));
asbd.mSampleRate = SAMPLE_RATE;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
asbd.mBitsPerChannel = 16;
asbd.mChannelsPerFrame = 1;
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = 2;
asbd.mBytesPerPacket = 2;

AudioFileID audioFile;
OSStatus audioErr = noErr;
audioErr = AudioFileCreateWithURL((CFURLRef)fileURL, kAudioFileWAVEType, &asbd, kAudioFileFlags_EraseFile, &audioFile);
assert(audioErr == noErr);

long sampleCount = 0;
UInt32 bytesToWrite = 2;
Why do you need the loop? Can't you write all the samples in one go, e.g.
numSamples = 1430529;
bytesToWrite = numSamples * 2;
audioErr = AudioFileWriteBytes(audioFile, false, 0, &bytesToWrite, sample_array);
?
Perhaps the number of bytes you are writing at each call to AudioFileWriteBytes is too small. How large is bytesToWrite?
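If a single call isn't possible (for example, because samples arrive incrementally), a middle ground is to batch the writes; the block size below is an arbitrary assumption:
const UInt32 kSamplesPerBlock = 4096;                  // arbitrary batch size
const long totalSamples = 1430529;
long samplesWritten = 0;
while (samplesWritten < totalSamples) {
    UInt32 samplesThisBlock = (UInt32)MIN(kSamplesPerBlock, totalSamples - samplesWritten);
    UInt32 bytesThisBlock = samplesThisBlock * 2;      // 2 bytes per SInt16 sample
    audioErr = AudioFileWriteBytes(audioFile,
                                   false,
                                   samplesWritten * 2, // byte offset into the file's audio data
                                   &bytesThisBlock,
                                   sample_array + samplesWritten);
    samplesWritten += samplesThisBlock;
}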