I have been trying to create a source client for Icecast on iOS. I am able to connect to the socket using AsyncSocket and to write data to the server. The Icecast configuration is set up for MP3 format, but the MP3 file written to the server is corrupt. I am providing some code snippets below.
Header:
NSString *string = @"SOURCE /sync HTTP/1.0\r\n"
"Authorization: Basic c291cmNlOmhhY2ttZQ==\r\n"
"User-Agent: butt-0.1.12\r\n"
"User-Agent: butt-0.1.12\r\n"
"content-type: audio/mpeg\r\n"
"ice-name: sync's Stream\r\n"
"ice-public: 0\r\n"
"ice-genre: Rock\r\n"
"ice-description: This is my server description\r\n"
"Connection: keep-alive\r\n"
"ice-audio-info: ice-samplerate=44100;ice-bitrate=48;ice-channels=2\r\n\r\n";
NSData *data = [string dataUsingEncoding:NSUTF8StringEncoding];
//sending http request to write the header
NSLog(#"Sending HTTP Request.");
[socket writeData:data withTimeout:-1 tag:1];
//write buffer data to server
[socket writeData:self.dataBuffer withTimeout:-1 tag:1];
For recording I am using AQRecorder, with the following code:
void AQRecorder::MyInputBufferHandler( void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
AQRecorder *aqr = (AQRecorder *)inUserData;
try {
if (inNumPackets > 0) {
// write packets to file
XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
"AudioFileWritePackets failed");
aqr->mRecordPacket += inNumPackets;
NSLog(#"size = %u",(unsigned int)inBuffer->mAudioDataByteSize);
data = [[[NSData alloc]initWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];
server *srv = [[server alloc]init];
srv.dataBuffer=data;
[srv connecting];
}
// if we're not stopping, re-enqueue the buffer so that it gets filled again
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
}
void AQRecorder::StartRecord(CFStringRef inRecordFile)
{
// server *srv=[[server alloc]init];
// [srv connecting];
int i, bufferByteSize;
UInt32 size;
CFURLRef url = nil;
try {
mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);
// // specify the recording format
// SetupAudioFormat(kAudioFormatMPEG4AAC);
// specify the recording format, use hardware AAC if available
// otherwise use IMA4
if(IsAACHardwareEncoderAvailable())
SetupAudioFormat(kAudioFormatMPEG4AAC);
else
SetupAudioFormat(kAudioFormatAppleIMA4);
// create the queue
XThrowIfError(AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */, NULL /* run loop mode */,
0 /* flags */, &mQueue), "AudioQueueNewInput failed");
// get the record format back from the queue's audio converter --
// the file may require a more specific stream description than was necessary to create the encoder.
mRecordPacket = 0;
size = sizeof(mRecordFormat);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
&mRecordFormat, &size), "couldn't get queue's format");
NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent: (NSString*)inRecordFile];
//url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)recordFile, kCFURLPOSIXPathStyle, false);
// create the audio file
OSStatus status = AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile, &mRecordFile);
CFRelease(url);
XThrowIfError(status, "AudioFileCreateWithURL failed");
// copy the cookie first to give the file object as much info as we can about the data going in
// not necessary for pcm, but required for some compressed audio
CopyEncoderCookieToFile();
// allocate and enqueue buffers
bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); // enough bytes for half a second
for (i = 0; i < kNumberRecordBuffers; ++i) {
XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
"AudioQueueAllocateBuffer failed");
XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
"AudioQueueEnqueueBuffer failed");
}
// start the queue
mIsRunning = true;
XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
}
catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
catch (...) {
fprintf(stderr, "An unknown error occurred\n");;
}
}
Do I need to change the format I write to the server?
You're not sending MP3 data, you're sending AAC or M4A data. I don't believe Icecast supports M4A. Are you actually using Icecast or some other server?
For AAC, your Content-Type header is wrong. Try audio/aac, audio/aacp, audio/mp4 or audio/mpeg4-generic.
Also, you only need one User-Agent header, and you should pick something that matches the software you are writing rather than copying someone else's. If the protocol ever needs to be adjusted for your client in the future, that is only possible if you send your own user-agent string.
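As a rough illustration only (not tested against your server; the mount, credentials and ice-* values are copied straight from your snippet, and the Content-Type shown assumes the data you send really is AAC), the corrected header block might look something like this:
#import <Foundation/Foundation.h>
// Sketch of a corrected source header: a single User-Agent identifying your own
// software, and a Content-Type matching the data actually being sent (AAC here).
NSString *header = @"SOURCE /sync HTTP/1.0\r\n"
                    "Authorization: Basic c291cmNlOmhhY2ttZQ==\r\n"
                    "User-Agent: MyOwnSourceClient/0.1\r\n"
                    "Content-Type: audio/aac\r\n"
                    "ice-name: sync's Stream\r\n"
                    "ice-public: 0\r\n"
                    "ice-genre: Rock\r\n"
                    "ice-description: This is my server description\r\n"
                    "ice-audio-info: ice-samplerate=44100;ice-bitrate=48;ice-channels=2\r\n"
                    "\r\n";
// `socket` is assumed to be the same AsyncSocket instance used in the question.
[socket writeData:[header dataUsingEncoding:NSUTF8StringEncoding] withTimeout:-1 tag:1];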
Related
I want to send ARFrame pixel buffer data over the network; I have listed my setup below. With this setup, if I try to send the frames, the app crashes in GStreamer's C code after a few frames, but if I send the camera's AVCaptureVideoDataOutput pixel buffer instead, the stream works fine. I have set the AVCaptureSession's pixel format type to
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
so it replicates the same type ARFrame gives. Please help, I am unable to find any solution. I am sorry if my English is bad or I have missed something out; do ask me for anything that is missing.
My Setup
Get pixelBuffer of ARFrame from didUpdateFrame delegate of ARKit
Encode to h264 using VTCompressionSession
- (void)SendFrames:(CVPixelBufferRef)pixelBuffer :(NSTimeInterval)timeStamp
{
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if(session == NULL)
{
[self initEncoder:width height:height];
}
CMTime presentationTimeStamp = CMTimeMake(0, 1);
OSStatus statusCode = VTCompressionSessionEncodeFrame(session, pixelBuffer, presentationTimeStamp, kCMTimeInvalid, NULL, NULL, NULL);
if (statusCode != noErr) {
// End the session
VTCompressionSessionInvalidate(session);
CFRelease(session);
session = NULL;
return;
}
VTCompressionSessionEndPass(session, NULL, NULL);
}
- (void) initEncoder:(size_t)width height:(size_t)height
{
OSStatus status = VTCompressionSessionCreate(NULL, (int)width, (int)height, kCMVideoCodecType_H264, NULL, NULL, NULL, OutputCallback, NULL, &session);
NSLog(#":VTCompressionSessionCreate %d", (int)status);
if (status != noErr)
{
NSLog(#"Unable to create a H264 session");
return ;
}
VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
VTSessionSetProperty(session, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Baseline_AutoLevel);
VTCompressionSessionPrepareToEncodeFrames(session);
}
Get sampleBuffer from callback, convert it to elementary stream
void OutputCallback(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags,CMSampleBufferRef sampleBuffer)
{
if (status != noErr) {
NSLog(#"Error encoding video, err=%lld", (int64_t)status);
return;
}
if (!CMSampleBufferDataIsReady(sampleBuffer))
{
NSLog(#"didCompressH264 data is not ready ");
return;
}
// In this example we will use a NSMutableData object to store the
// elementary stream.
NSMutableData *elementaryStream = [NSMutableData data];
// This is the start code that we will write to
// the elementary stream before every NAL unit
static const size_t startCodeLength = 4;
static const uint8_t startCode[] = {0x00, 0x00, 0x00, 0x01};
// Write the SPS and PPS NAL units to the elementary stream
CMFormatDescriptionRef description = CMSampleBufferGetFormatDescription(sampleBuffer);
// Find out how many parameter sets there are
size_t numberOfParameterSets;
int AVCCHeaderLength;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
0, NULL, NULL,
&numberOfParameterSets,
&AVCCHeaderLength);
// Write each parameter set to the elementary stream
for (int i = 0; i < numberOfParameterSets; i++) {
const uint8_t *parameterSetPointer;
int NALUnitHeaderLengthOut = 0;
size_t parameterSetLength;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(description,
i,
&parameterSetPointer,
&parameterSetLength,
NULL, &NALUnitHeaderLengthOut);
// Write the parameter set to the elementary stream
[elementaryStream appendBytes:startCode length:startCodeLength];
[elementaryStream appendBytes:parameterSetPointer length:parameterSetLength];
}
// Get a pointer to the raw AVCC NAL unit data in the sample buffer
size_t blockBufferLength;
uint8_t *bufferDataPointer = NULL;
size_t lengthAtOffset = 0;
size_t bufferOffset = 0;
CMBlockBufferGetDataPointer(CMSampleBufferGetDataBuffer(sampleBuffer),
bufferOffset,
&lengthAtOffset,
&blockBufferLength,
(char **)&bufferDataPointer);
// Loop through all the NAL units in the block buffer
// and write them to the elementary stream with
// start codes instead of AVCC length headers
while (bufferOffset < blockBufferLength - AVCCHeaderLength) {
// Read the NAL unit length
uint32_t NALUnitLength = 0;
memcpy(&NALUnitLength, bufferDataPointer + bufferOffset, AVCCHeaderLength);
// Convert the length value from Big-endian to Little-endian
NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
// Write start code to the elementary stream
[elementaryStream appendBytes:startCode length:startCodeLength];
// Write the NAL unit without the AVCC length header to the elementary stream
[elementaryStream appendBytes:bufferDataPointer + bufferOffset + AVCCHeaderLength
length:NALUnitLength];
// Move to the next NAL unit in the block buffer
bufferOffset += AVCCHeaderLength + NALUnitLength;
}
char *bytePtr = (char *)[elementaryStream mutableBytes];
long maxSize = (long)elementaryStream.length;
CMTime presentationtime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
vidplayer_stream(bytePtr, maxSize, (long)presentationtime.value);
}
I'm trying to decode a raw stream of H.264 video data but I can't find a way to create a proper CMSampleBufferRef to feed the decompression session.
- (void)decodeFrameWithNSData:(NSData*)data presentationTime:
(CMTime)presentationTime
{
@autoreleasepool {
CMSampleBufferRef sampleBuffer = NULL;
CMBlockBufferRef blockBuffer = NULL;
VTDecodeInfoFlags infoFlags;
int sourceFrame;
if( dSessionRef == NULL )
[self createDecompressionSession];
CMSampleTimingInfo timingInfo ;
timingInfo.presentationTimeStamp = presentationTime;
timingInfo.duration = CMTimeMake(1,100000000);
timingInfo.decodeTimeStamp = kCMTimeInvalid;
//Creates block buffer from NSData
OSStatus status = CMBlockBufferCreateWithMemoryBlock(CFAllocatorGetDefault(), (void*)data.bytes,data.length*sizeof(char), CFAllocatorGetDefault(), NULL, 0, data.length*sizeof(char), 0, &blockBuffer);
//Creates CMSampleBuffer to feed decompression session
status = CMSampleBufferCreateReady(CFAllocatorGetDefault(), blockBuffer,self.encoderVideoFormat,1,1,&timingInfo, 0, 0, &sampleBuffer);
status = VTDecompressionSessionDecodeFrame(dSessionRef,sampleBuffer, kVTDecodeFrame_1xRealTimePlayback, &sourceFrame,&infoFlags);
if(status != noErr) {
NSLog(#"Decode with data error %d",status);
}
}
}
At the end of the call I'm getting a -12911 error from VTDecompressionSessionDecodeFrame, which translates to kVTVideoDecoderMalfunctionErr. After reading this [post], it pointed me towards creating a VideoFormatDescription using CMVideoFormatDescriptionCreateFromH264ParameterSets. But how can I create a new VideoFormatDescription if I don't have the currentSps or currentPps information? How can I get that information from my raw H.264 stream?
CMFormatDescriptionRef decoderFormatDescription;
const uint8_t* const parameterSetPointers[2] =
{ (const uint8_t*)[currentSps bytes], (const uint8_t*)[currentPps bytes] };
const size_t parameterSetSizes[2] =
{ [currentSps length], [currentPps length] };
status = CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL,
2,
parameterSetPointers,
parameterSetSizes,
4,
&decoderFormatDescription);
Thanks in advance,
Marcos
[post] : Decoding H264 VideoToolkit API fails with Error -8971 in VTDecompressionSessionCreate
You MUST call CMVideoFormatDescriptionCreateFromH264ParameterSets first. The SPS/PPS may be stored/transmitted separately from the video stream, or may come inline.
Note that for VTDecompressionSessionDecodeFrame your NALUs must be preceded with a size, and not a start code.
You can read more here:
Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
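As a rough sketch of the two points above (this is not your code; spsBytes/ppsBytes/naluBytes and their lengths are placeholders you would have to extract from your own stream yourself, and a 4-byte length prefix is assumed):
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>
// Sketch only: spsBytes/ppsBytes are assumed to have been pulled out of the incoming
// Annex B stream already (e.g. by scanning for 0x00 0x00 0x00 0x01 start codes).
static CMVideoFormatDescriptionRef CreateFormatDescription(const uint8_t *spsBytes, size_t spsLength,
                                                           const uint8_t *ppsBytes, size_t ppsLength)
{
    CMVideoFormatDescriptionRef formatDescription = NULL;
    const uint8_t *parameterSetPointers[2] = { spsBytes, ppsBytes };
    const size_t parameterSetSizes[2] = { spsLength, ppsLength };
    OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
        kCFAllocatorDefault,
        2,                      // parameter set count (SPS + PPS)
        parameterSetPointers,
        parameterSetSizes,
        4,                      // NAL unit header (length prefix) size
        &formatDescription);
    return (status == noErr) ? formatDescription : NULL;
}
// Each NALU handed to VTDecompressionSessionDecodeFrame must start with a
// 4-byte big-endian length instead of the Annex B start code.
static NSData *LengthPrefixedNALU(const uint8_t *naluBytes, size_t naluLength)
{
    NSMutableData *avccData = [NSMutableData dataWithCapacity:naluLength + 4];
    uint32_t bigEndianLength = CFSwapInt32HostToBig((uint32_t)naluLength);
    [avccData appendBytes:&bigEndianLength length:sizeof(bigEndianLength)];
    [avccData appendBytes:naluBytes length:naluLength];
    return avccData;
}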
So, basically I want to play some audio files (mp3 and caf mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(#"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
CFRelease (srcFile);
CheckError(result, "Error opening sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain() and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are being fired.
What am I doing wrong? Thanks for any ideas
What you are trying to do here is a basic example of audio playback using Audio Queues. Rather than going through your code in detail to see what's missing (that could take a while), I'd recommend following the steps in this basic sample code, which does exactly what you're doing without the extras that aren't really relevant (for example, why are you setting audio gain?).
Somewhere else you were trying to play audio using Audio Units. Audio Units are more complex than basic Audio Queue playback, and I wouldn't attempt them before being very comfortable with Audio Queues. But you can look at this example project for a basic example of Audio Queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Obj-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend going over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
I have inBuffer->mAudioData converted into NSData and sent over the network. I do this from the Audio Queue callback.
How do I convert this NSData on the other side so that I can either create a .caf sound file or output it directly to the speaker?
Thanks for the help.
Edit 1:
Below is the code I have used on the sender side to send data over a Wi-Fi network:
void AudioInputCallback(void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription * inPacketDescs)
{
RecordState * recordState = (RecordState*)inUserData;
if (!recordState->recording)
{
printf("Not recording, returning\n");
}
// if (inNumberPacketDescriptions == 0 && recordState->dataFormat.mBytesPerPacket != 0)
// {
// inNumberPacketDescriptions = inBuffer->mAudioDataByteSize / recordState->dataFormat.mBytesPerPacket;
// }
printf("Writing buffer %lld\n", recordState->currentPacket);
OSStatus status = AudioFileWritePackets(recordState->audioFile,
false,
inBuffer->mAudioDataByteSize,
inPacketDescs,
recordState->currentPacket,
&inNumberPacketDescriptions,
inBuffer->mAudioData);
NSLog(#"DATA = %#",[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize*NUM_BUFFERS]);
[[NSNotificationCenter defaultCenter] postNotificationName:#"Recording" object:[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize*NUM_BUFFERS]];
if (status == 0)
{
recordState->currentPacket += inNumberPacketDescriptions;
}
AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
}
Here I have converted inBuffer->mAudioData to NSData and then sent it to the output stream.
On the other end, i.e. the receiver's side, I have used the code below:
-(void)audioMessageData:(NSData *)audioData fromUser:(NSString *)userName {
NSLog(#"DATA = %#",audioData);
}
Whenever I get data the above function gets called and I get NSData which I have sent from the sender iPhone. Now I want to play whatever audioData I am receiving.
Thanks for the help.
The easiest way is to configure an Audio Queue or RemoteIO Audio Unit to play data of the same format (raw linear PCM?) and start pulling samples from a lock free circular buffer (playing silence for underflow). Then just copy your received data out of the NSData network packets into the circular buffer.
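Here is a minimal sketch of that idea, assuming the received NSData is raw 16-bit PCM in the same format the player expects; the ring size and function names are placeholders, and a real implementation would need proper memory-ordering guarantees rather than plain volatiles:
#import <Foundation/Foundation.h>
// Very small single-producer / single-consumer ring buffer for 16-bit PCM samples.
#define kRingCapacity (44100 * 4)            // roughly 4 seconds of mono 44.1 kHz audio
static int16_t gRing[kRingCapacity];
static volatile uint64_t gWritePos = 0;      // advanced only by the network thread
static volatile uint64_t gReadPos  = 0;      // advanced only by the audio callback
// Network side: copy the received samples into the ring.
void RingWrite(NSData *packet)
{
    const int16_t *samples = (const int16_t *)packet.bytes;
    NSUInteger count = packet.length / sizeof(int16_t);
    for (NSUInteger i = 0; i < count; i++)
        gRing[(gWritePos + i) % kRingCapacity] = samples[i];
    gWritePos += count;
}
// Audio side: fill the output buffer, padding with silence on underflow.
void RingRead(int16_t *out, NSUInteger count)
{
    uint64_t available = gWritePos - gReadPos;
    for (NSUInteger i = 0; i < count; i++)
        out[i] = (i < available) ? gRing[(gReadPos + i) % kRingCapacity] : 0;
    gReadPos += (available < count) ? (NSUInteger)available : count;
}
Call RingWrite from your audioMessageData:fromUser: handler, and RingRead from the Audio Queue or RemoteIO render callback that feeds the speaker.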
As you can see from the code, within my callback I extract the audio data, place it into NSData, and then send that off to another class to upload it to the server. This all works, meaning the server receives and plays the audio data. HOWEVER, there is a clicking or tapping noise between the buffers. I am hoping someone can show me what is causing that and how it can be fixed.
I have read other related postings, but they all seemed to refer to using only one buffer, with adding more buffers as the fix. I am using 3 buffers and have tried adjusting that number, which did not fix it.
AQRecorder.mm
#include "AQRecorder.h"
RestClient * restClient;
NSData* data;
// ____________________________________________________________________________________
// Determine the size, in bytes, of a buffer necessary to represent the supplied number
// of seconds of audio data.
int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float seconds)
{
int packets, frames, bytes = 0;
try {
frames = (int)ceil(seconds * format->mSampleRate);
if (format->mBytesPerFrame > 0)
bytes = frames * format->mBytesPerFrame;
else {
UInt32 maxPacketSize;
if (format->mBytesPerPacket > 0)
maxPacketSize = format->mBytesPerPacket; // constant packet size
else {
UInt32 propertySize = sizeof(maxPacketSize);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
&propertySize), "couldn't get queue's maximum output packet size");
}
if (format->mFramesPerPacket > 0)
packets = frames / format->mFramesPerPacket;
else
packets = frames; // worst-case scenario: 1 frame in a packet
if (packets == 0) // sanity check
packets = 1;
bytes = packets * maxPacketSize;
}
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
return 0;
}
return bytes;
}
// ____________________________________________________________________________________
// AudioQueue callback function, called when an input buffers has been filled.
void AQRecorder::MyInputBufferHandler( void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
AQRecorder *aqr = (AQRecorder *)inUserData;
try {
if (inNumPackets > 0) {
// write packets to file
// XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
// inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
// "AudioFileWritePackets failed");
aqr->mRecordPacket += inNumPackets;
// int numBytes = inBuffer->mAudioDataByteSize;
// SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData;
//
// for (int i=0; i < numBytes; i++)
// {
// SInt8 currentData = testBuffer[i];
// printf("Current data in testbuffer is %d", currentData);
//
// NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)];
// }
data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];
[restClient uploadAudioData:data url:nil];
}
// if we're not stopping, re-enqueue the buffer so that it gets filled again
if (aqr->IsRunning())
XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
} catch (CAXException e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
}
AQRecorder::AQRecorder()
{
mIsRunning = false;
mRecordPacket = 0;
data = [[NSData alloc]init];
restClient = [[RestClient sharedManager]retain];
}
AQRecorder::~AQRecorder()
{
AudioQueueDispose(mQueue, TRUE);
AudioFileClose(mRecordFile);
if (mFileName){
CFRelease(mFileName);
}
[restClient release];
[data release];
}
// ____________________________________________________________________________________
// Copy a queue's encoder's magic cookie to an audio file.
void AQRecorder::CopyEncoderCookieToFile()
{
UInt32 propertySize;
// get the magic cookie, if any, from the converter
OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize);
// we can get a noErr result and also a propertySize == 0
// -- if the file format does support magic cookies, but this file doesn't have one.
if (err == noErr && propertySize > 0) {
Byte *magicCookie = new Byte[propertySize];
UInt32 magicCookieSize;
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie");
magicCookieSize = propertySize; // the converter lies and tell us the wrong size
// now set the magic cookie on the output file
UInt32 willEatTheCookie = false;
// the converter wants to give us one; will the file take it?
err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie);
if (err == noErr && willEatTheCookie) {
err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie);
XThrowIfError(err, "set audio file's magic cookie");
}
delete[] magicCookie;
}
}
void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
memset(&mRecordFormat, 0, sizeof(mRecordFormat));
UInt32 size = sizeof(mRecordFormat.mSampleRate);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareSampleRate,
&size,
&mRecordFormat.mSampleRate), "couldn't get hardware sample rate");
//override samplearate to 8k from device sample rate
mRecordFormat.mSampleRate = 8000.0;
size = sizeof(mRecordFormat.mChannelsPerFrame);
XThrowIfError(AudioSessionGetProperty( kAudioSessionProperty_CurrentHardwareInputNumberChannels,
&size,
&mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");
// mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mFormatID = inFormatID;
if (inFormatID == kAudioFormatLinearPCM)
{
// if we want pcm, default to signed 16-bit little-endian
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
mRecordFormat.mFramesPerPacket = 1;
}
if (inFormatID == kAudioFormatULaw) {
NSLog(#"is ulaw");
mRecordFormat.mSampleRate = 8000.0;
mRecordFormat.mFormatFlags = 0;
mRecordFormat.mFramesPerPacket = 1;
mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mBitsPerChannel = 8;
mRecordFormat.mBytesPerPacket = 1;
mRecordFormat.mBytesPerFrame = 1;
}
}
NSString * GetDocumentDirectory(void)
{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
return basePath;
}
void AQRecorder::StartRecord(CFStringRef inRecordFile)
{
int i, bufferByteSize;
UInt32 size;
CFURLRef url;
try {
mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);
// specify the recording format
SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/);
// create the queue
XThrowIfError(AudioQueueNewInput(
&mRecordFormat,
MyInputBufferHandler,
this /* userData */,
NULL /* run loop */, NULL /* run loop mode */,
0 /* flags */, &mQueue), "AudioQueueNewInput failed");
// get the record format back from the queue's audio converter --
// the file may require a more specific stream description than was necessary to create the encoder.
mRecordPacket = 0;
size = sizeof(mRecordFormat);
XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
&mRecordFormat, &size), "couldn't get queue's format");
NSString *basePath = GetDocumentDirectory();
NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile];
url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
// create the audio file
XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile,
&mRecordFile), "AudioFileCreateWithURL failed");
CFRelease(url);
// copy the cookie first to give the file object as much info as we can about the data going in
// not necessary for pcm, but required for some compressed audio
CopyEncoderCookieToFile();
// allocate and enqueue buffers
bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds); // enough bytes for half a second
for (i = 0; i < kNumberRecordBuffers; ++i) {
XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
"AudioQueueAllocateBuffer failed");
XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
"AudioQueueEnqueueBuffer failed");
}
// start the queue
mIsRunning = true;
XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
}
catch (CAXException &e) {
char buf[256];
fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
}
catch (...) {
fprintf(stderr, "An unknown error occurred\n");
}
}
void AQRecorder::StopRecord()
{
// end recording
mIsRunning = false;
// XThrowIfError(AudioQueueReset(mQueue), "AudioQueueStop failed");
XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
// a codec may update its cookie at the end of an encoding session, so reapply it to the file now
CopyEncoderCookieToFile();
if (mFileName)
{
CFRelease(mFileName);
mFileName = NULL;
}
AudioQueueDispose(mQueue, true);
AudioFileClose(mRecordFile);
}
I changed my #define kBufferDurationSeconds from .5 to 5.0, and although the clicking is still there it is a lot less noticeable.
Please still post if you have suggestions or an answer, as this is not a fix, merely a workaround that is somewhat better than before.
I also tried appending several buffers' worth of data before sending it to the server. This also seems to have helped.
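For reference, this is roughly how I am doing the accumulation inside AQRecorder.mm (pendingData and kMinUploadBytes are just names I picked for this sketch, not anything from the original sources, so treat it as an outline rather than the exact code):
// Hypothetical accumulation buffer: collect several callbacks' worth of audio
// before handing it to the upload client, instead of one tiny chunk per callback.
static NSMutableData *pendingData = nil;
static const NSUInteger kMinUploadBytes = 32 * 1024;      // threshold, tune to taste
void AppendAndMaybeUpload(const void *bytes, UInt32 byteSize, RestClient *client)
{
    if (pendingData == nil)
        pendingData = [[NSMutableData alloc] init];
    [pendingData appendBytes:bytes length:byteSize];
    if (pendingData.length >= kMinUploadBytes) {
        [client uploadAudioData:pendingData url:nil];     // same call the recorder already uses
        [pendingData release];                            // project uses manual retain/release
        pendingData = [[NSMutableData alloc] init];       // start a fresh buffer
    }
}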