Implementing audio recording and playback on another iDevice - iOS

I want to record audio on one iPhone and stream it to another iPhone.
Is this format correct for recording and streaming?
format->mSampleRate = 44100.00;              // 44.1 kHz
format->mFormatID = kAudioFormatLinearPCM;   // uncompressed linear PCM
format->mFramesPerPacket = 1;
format->mChannelsPerFrame = 1;               // mono
format->mBitsPerChannel = 16;                // 16-bit samples
format->mReserved = 0;
format->mBytesPerPacket = 2;
format->mBytesPerFrame = 2;
format->mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
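For 16-bit mono linear PCM these values are self-consistent. As a quick sanity check (a small sketch I added, not part of the original question), the derived sizes follow directly from the channel count and bit depth:

#include <assert.h>
// Sanity-check the interleaved linear PCM fields filled in above.
UInt32 bytesPerSample = format->mBitsPerChannel / 8;                                   // 16 bits -> 2 bytes
assert(format->mBytesPerFrame  == bytesPerSample * format->mChannelsPerFrame);         // 2 * 1 = 2
assert(format->mBytesPerPacket == format->mBytesPerFrame * format->mFramesPerPacket);  // 2 * 1 = 2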
The call to start recording:
- (void)startRecordingInQueue {
    [self setupAudioFormat:&recordState.dataFormat];
    recordState.currentPacket = 0;

    OSStatus status;
    status = AudioQueueNewInput(&recordState.dataFormat, AudioInputCallback, &recordState, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &recordState.queue);
    status = 0;
    if (status == 0) {
        // Prime recording buffers with empty data
        AudioQueueBufferRef buffer;
        for (int i = 0; i < NUM_BUFFERS; i++) {
            AudioQueueAllocateBuffer(recordState.queue, SAMPLERATE, &recordState.buffers[i]);
            AudioQueueEnqueueBuffer(recordState.queue, recordState.buffers[i], 0, NULL);
        }

        status = AudioFileCreateWithURL(fileURL, kAudioFileAIFFType, &recordState.dataFormat, kAudioFileFlags_EraseFile, &recordState.audioFile);
        NSLog(@"ss %i", status);
        status = 0;
        if (status == 0) {
            recordState.recording = true;
            status = AudioQueueStart(recordState.queue, NULL);
            if (status == 0) {
                NSLog(@"-----------Recording--------------");
                NSLog(@"File URL : %@", fileURL);
                NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"%@", fileURL] relativeToURL:NULL];
                [[NSUserDefaults standardUserDefaults] setURL:url forKey:@"fileUrl"];
                [[NSUserDefaults standardUserDefaults] synchronize];
            }
        }
    }
    if (status != 0) {
        [self stopRecordingInQueue];
    }
}
If this is OK:
How do I get the audio buffer data out of this code so I can send it to the server?
And how do I play that data on the other devices?
void AudioInputCallback(void *inUserData,
                        AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer,
                        const AudioTimeStamp *inStartTime,
                        UInt32 inNumberPacketDescriptions,
                        const AudioStreamPacketDescription *inPacketDescs)
{
    RecordState *recordState = (RecordState *)inUserData;
    if (!recordState->recording)
    {
        printf("Not recording, returning\n");
        return;
    }

    printf("Writing buffer %lld\n", recordState->currentPacket);
    OSStatus status = AudioFileWritePackets(recordState->audioFile,
                                            false,
                                            inBuffer->mAudioDataByteSize,
                                            inPacketDescs,
                                            recordState->currentPacket,
                                            &inNumberPacketDescriptions,
                                            inBuffer->mAudioData);
    if (status == 0)
    {
        recordState->currentPacket += inNumberPacketDescriptions;
        AudioQueueEnqueueBuffer(recordState->queue, inBuffer, 0, NULL);
    }
}
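One way to get the captured audio out for streaming (a hedged sketch, not from the original question) is to copy the bytes out of the queue buffer inside this same callback, right after AudioFileWritePackets succeeds and before the buffer is re-enqueued. The AudioStreamer class and sendAudioChunkToServer: method below are hypothetical placeholders for your own networking code:

// Inside the if (status == 0) branch above, before re-enqueueing the buffer:
// wrap the raw PCM bytes so they can be sent over a socket or HTTP request.
NSData *chunk = [NSData dataWithBytes:inBuffer->mAudioData
                               length:inBuffer->mAudioDataByteSize];
[[AudioStreamer sharedStreamer] sendAudioChunkToServer:chunk]; // hypothetical streamer singleton

On the receiving device, the same linear PCM format can be played back by creating an output queue with AudioQueueNewOutput and enqueueing the received bytes into its buffers as they arrive.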
If anyone has complete code for this kind of project, please link me to the source.
Thanks.

This is one of my code samples; you can try it and change the format.
Here I stop the audio recording and then send the file to the server, which is my requirement. For this I convert the audio file to .wav. After sending it to the server, it also plays back successfully. Try it...
- (IBAction)stop:(id)sender { // On tap of the stop button
    NSString *urlString;
    NSLog(@"Stop Recording");
    if ([audioRecorder isRecording]) // If recording audio
    {
        [audioRecorder stop]; // Stop it
        AVAudioSession *audioSession = [AVAudioSession sharedInstance];
        [audioSession setCategory:AVAudioSessionCategoryPlayback error:nil];
        [audioSession setActive:NO error:nil];

        NSLog(@"%@", audioRecorder.url);
        unsigned long long size = [[NSFileManager defaultManager] attributesOfItemAtPath:[audioRecorder.url path] error:nil].fileSize; // Get the audio file size
        NSLog(@"This is the file size of the recording in bytes: %llu", size); // Audio file size in bytes

        // The file is saved in the Documents directory with the file name "Audio".
        // I fetch that file, rename it to 56e254fa816e8_1519650589.wav, and send it to the server.
        self.sendFileName = [@"56e254fa816e8" stringByAppendingString:@"_"];
        self.sendFileName = [self.sendFileName stringByAppendingString:[@(self.unixTimeStamp) stringValue]];
        self.sendFileName = [self.sendFileName stringByAppendingString:@".wav"]; // Here the name is fully converted to .wav
        NSLog(@"%@", self.sendFileName); // See the file format here

        urlString = [audioRecorder.url.absoluteString stringByReplacingOccurrencesOfString:@"Audio" withString:self.sendFileName];
        self.sendingURL = [NSURL URLWithString:urlString];
        NSLog(@"%@", self.sendingURL); // Complete file URL
    }
}
Try this...
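The answer stops at building self.sendingURL; for completeness, here is a minimal sketch of how the renamed .wav could be uploaded with NSURLSession, assuming the file on disk has actually been renamed to match sendingURL. The endpoint URL is a placeholder, not from the original answer:

// Hypothetical upload of the recorded file; replace the URL with your own endpoint.
NSURL *serverURL = [NSURL URLWithString:@"https://example.com/upload"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:serverURL];
request.HTTPMethod = @"POST";
NSURLSessionUploadTask *task =
    [[NSURLSession sharedSession] uploadTaskWithRequest:request
                                                fromFile:self.sendingURL
                                       completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        if (error) {
            NSLog(@"Upload failed: %@", error.localizedDescription);
        } else {
            NSLog(@"Upload finished");
        }
    }];
[task resume];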

Related

Using AVAssetWriter and AVAssetReader for Audio Recording and Playback

My app uses AVAssetReader to play songs from the iPod library. Now I want to add an audio recording capability.
I recorded audio using AVAssetWriter. I checked the resulting audio file (MPEG-4 AAC format) by playing it back successfully with AVAudioPlayer. My goal is to play back the audio using AVAssetReader. But when I create an AVURLAsset for the file, it has no tracks, and hence AVAssetReader fails (error code -11828, File Format Not Recognized).
What should I do to make AVAsset recognize the file format? Is there some special file format required for AVAsset?
Here is the code for recording:
void setup_ASBD(void *f, double fs, int sel, int numChannels);
static AVAssetWriter *assetWriter = NULL;
static AVAssetWriterInput *assetWriterInput = NULL;
static CMAudioFormatDescriptionRef formatDesc;
AVAssetWriter *newAssetWriter(NSURL *url) {
NSError *outError;
assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeAppleM4A error:&outError];
if(assetWriter == nil) {
NSLog(#"%s: asset=%x, %#\n", __FUNCTION__, (int)assetWriter, outError);
return assetWriter;
}
AudioChannelLayout audioChannelLayout = {
.mChannelLayoutTag = kAudioChannelLayoutTag_Mono,
.mChannelBitmap = 0,
.mNumberChannelDescriptions = 0
};
// Convert the channel layout object to an NSData object.
NSData *channelLayoutAsData = [NSData dataWithBytes:&audioChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
// Get the compression settings for 128 kbps AAC.
NSDictionary *compressionAudioSettings = @{
AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
AVEncoderBitRateKey : [NSNumber numberWithInteger:128000],
AVSampleRateKey : [NSNumber numberWithInteger:44100],
AVChannelLayoutKey : channelLayoutAsData,
AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:1]
};
// Create the asset writer input with the compression settings and specify the media type as audio.
assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:compressionAudioSettings];
assetWriterInput.expectsMediaDataInRealTime = YES;
// Add the input to the writer if possible.
if (assetWriterInput != NULL && [assetWriter canAddInput:assetWriterInput]) {
[assetWriter addInput:assetWriterInput];
}
else {
NSLog(#"%s:assetWriteInput problem: %x\n", __FUNCTION__, (int)assetWriterInput);
return NULL;
}
[assetWriter startWriting];
// Start a sample-writing session.
[assetWriter startSessionAtSourceTime:kCMTimeZero];
if(assetWriter.status != AVAssetWriterStatusWriting) {
NSLog(#"%s: Bad writer status=%d\n", __FUNCTION__, (int)assetWriter.status);
return NULL;
}
AudioStreamBasicDescription ASBD;
setup_ASBD(&ASBD, 44100, 2, 1);
CMAudioFormatDescriptionCreate (NULL, &ASBD, sizeof(audioChannelLayout), &audioChannelLayout, 0, NULL, NULL, &formatDesc);
//CMAudioFormatDescriptionCreate (NULL, &ASBD, 0, NULL, 0, NULL, NULL, &formatDesc);
return assetWriter;
}
static int sampleCnt = 0;
void writeNewSamples(void *buffer, int len) {
if(assetWriterInput == NULL) return;
if([assetWriterInput isReadyForMoreMediaData]) {
OSStatus result;
CMBlockBufferRef blockBuffer = NULL;
result = CMBlockBufferCreateWithMemoryBlock (NULL, buffer, len, NULL, NULL, 0, len, 0, &blockBuffer);
if(result == noErr) {
CMItemCount numSamples = len >> 1;
const CMSampleTimingInfo sampleTiming = {CMTimeMake(1, 44100), CMTimeMake(sampleCnt, 44100), kCMTimeInvalid};
CMItemCount numSampleTimingEntries = 1;
const size_t sampleSize = 2;
CMItemCount numSampleSizeEntries = 1;
CMSampleBufferRef sampleBuffer;
result = CMSampleBufferCreate(NULL, blockBuffer, true, NULL, NULL, formatDesc, numSamples, numSampleTimingEntries, &sampleTiming, numSampleSizeEntries, &sampleSize, &sampleBuffer);
if(result == noErr) {
if([assetWriterInput appendSampleBuffer:sampleBuffer] == YES) sampleCnt += numSamples;
else {
NSLog(#"%s: ERROR\n", __FUNCTION__);
}
printf("sampleCnt = %d\n", sampleCnt);
CFRelease(sampleBuffer);
}
}
}
else {
NSLog(#"%s: AVAssetWriterInput not taking input data: status=%ld\n", __FUNCTION__, assetWriter.status);
}
}
void stopAssetWriter(AVAssetWriter *assetWriter) {
[assetWriterInput markAsFinished];
[assetWriter finishWritingWithCompletionHandler:^{
NSLog(#"%s: Done: %ld: %d samples\n", __FUNCTION__, assetWriter.status, sampleCnt);
sampleCnt = 0;
}];
assetWriterInput = NULL;
}
It turns out that AVAsset expects a "valid" file extension. When the file name does not have one of the common extensions such as *.mp3, *.caf, *.m4a, etc., AVAsset refuses to look at the file header to figure out the media format. AVAudioPlayer, on the other hand, is completely indifferent to the file name and figures out the media format by itself from the file header.
This difference does not appear anywhere in Apple's documentation, and I ended up wasting more than a week on this. Sigh...
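Given that finding, one workaround (a sketch based on the observation above; the helper name is mine) is to rename the finished file so it carries a recognized extension before creating the asset:

#import <AVFoundation/AVFoundation.h>
// Rename the file written by AVAssetWriter so AVURLAsset will inspect its header.
// fileURL is assumed to be the URL the writer was created with (no known extension).
static AVURLAsset *assetForRecordedFile(NSURL *fileURL) {
    NSURL *renamedURL = [fileURL URLByAppendingPathExtension:@"m4a"];
    NSError *moveError = nil;
    [[NSFileManager defaultManager] moveItemAtURL:fileURL toURL:renamedURL error:&moveError];
    if (moveError != nil) {
        return nil;
    }
    return [AVURLAsset URLAssetWithURL:renamedURL options:nil]; // should now expose an audio track
}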

Multiple MIDI sounds with MusicDeviceMIDIEvent

I am creating MIDI sounds with MusicDeviceMIDIEvent and it works perfectly fine; I use some .aupreset files I've created, one for each instrument.
My code is basically Apple's LoadPresetDemo example. All I use is:
- (void)loadPreset:(id)sender instrumentName:(const char *)instrumentName {
NSString *presetURL1 = [NSString stringWithCString:instrumentName encoding:NSUTF8StringEncoding];
NSString *path = [[NSBundle mainBundle] pathForResource:presetURL1 ofType:@"aupreset"];
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath: path];
if (presetURL) {
NSLog(#"Attempting to load preset '%#'\n", [presetURL description]);
}
else {
NSLog(#"COULD NOT GET PRESET PATH!");
}
[self loadSynthFromPresetURL: presetURL];
}
To load my aupreset file, then:
- (void) noteOn:(id)sender midiNumber:(int)midiNumber {
UInt32 noteNum = midiNumber;
UInt32 onVelocity = 127;
UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;
OSStatus result = noErr;
require_noerr (result = MusicDeviceMIDIEvent (self.samplerUnit, noteCommand, noteNum, onVelocity, 0), logTheError);
logTheError:
if (result != noErr) NSLog(@"Unable to start playing the low note. Error code: %d '%.4s'\n", (int)result, (const char *)&result);
}
to play the note of the previously loaded aupreset, and:
- (void) noteOff:(id)sender midiNumber:(int)midiNumber
when I want it to stop.
Now I would like to play one note of each instrument simultaneously. What is the easiest way of doing this?
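No answer is recorded here, but one commonly used arrangement (a sketch under the assumption that each .aupreset gets its own sampler; the two-instrument count, node names, and helper are illustrative) is one AUSampler per instrument feeding a multichannel mixer, so a note-on can be sent to each sampler at the same time:

#import <AudioToolbox/AudioToolbox.h>
// Build a graph with two samplers -> mixer -> Remote I/O. Error checking omitted.
static void buildTwoInstrumentGraph(AudioUnit *outSampler1, AudioUnit *outSampler2)
{
    AUGraph graph;
    AUNode samplerNode1, samplerNode2, mixerNode, ioNode;
    AudioComponentDescription samplerDesc = { .componentType = kAudioUnitType_MusicDevice,
        .componentSubType = kAudioUnitSubType_Sampler, .componentManufacturer = kAudioUnitManufacturer_Apple };
    AudioComponentDescription mixerDesc = { .componentType = kAudioUnitType_Mixer,
        .componentSubType = kAudioUnitSubType_MultiChannelMixer, .componentManufacturer = kAudioUnitManufacturer_Apple };
    AudioComponentDescription ioDesc = { .componentType = kAudioUnitType_Output,
        .componentSubType = kAudioUnitSubType_RemoteIO, .componentManufacturer = kAudioUnitManufacturer_Apple };

    NewAUGraph(&graph);
    AUGraphAddNode(graph, &samplerDesc, &samplerNode1);
    AUGraphAddNode(graph, &samplerDesc, &samplerNode2);
    AUGraphAddNode(graph, &mixerDesc, &mixerNode);
    AUGraphAddNode(graph, &ioDesc, &ioNode);
    AUGraphOpen(graph);

    AUGraphConnectNodeInput(graph, samplerNode1, 0, mixerNode, 0);   // instrument 1 -> mixer bus 0
    AUGraphConnectNodeInput(graph, samplerNode2, 0, mixerNode, 1);   // instrument 2 -> mixer bus 1
    AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);         // mixer -> speaker

    AUGraphNodeInfo(graph, samplerNode1, NULL, outSampler1);
    AUGraphNodeInfo(graph, samplerNode2, NULL, outSampler2);
    AUGraphInitialize(graph);
    AUGraphStart(graph);

    // Load a different .aupreset into each sampler (as in loadSynthFromPresetURL:), then:
    // MusicDeviceMIDIEvent(*outSampler1, 0x90, 60, 127, 0);   // note on, instrument 1
    // MusicDeviceMIDIEvent(*outSampler2, 0x90, 64, 127, 0);   // note on, instrument 2
}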

OpenAL terminates game with NSException on iOS

When I play my sound it says there is something wrong with the code, but I cannot see what it is. The game runs fine until I try to play the sound, at which point it terminates with an uncaught exception of type NSException. It is not the best example of OpenAL, but can anybody see what is wrong with it?
- (id)init
{
self = [super init];
if (self)
{
openALDevice = alcOpenDevice(NULL);
openALContext = alcCreateContext(openALDevice, NULL);
alcMakeContextCurrent(openALContext);
}
return self;
}
-(void)playSound {
NSUInteger sourceID;
alGenSources(1, &sourceID);
NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"flyby" ofType:@"caf"];
NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];
AudioFileID afid;
OSStatus openAudioFileResult = AudioFileOpenURL((__bridge CFURLRef)audioFileURL, kAudioFileReadPermission, 0, &afid);
if (0 != openAudioFileResult)
{
NSLog(#"An error occurred when attempting to open the audio file %#: %ld", audioFilePath, openAudioFileResult);
return;
}
UInt64 audioDataByteCount = 0;
UInt32 propertySize = sizeof(audioDataByteCount);
OSStatus getSizeResult = AudioFileGetProperty(afid, kAudioFilePropertyAudioDataByteCount, &propertySize, &audioDataByteCount);
if (0 != getSizeResult)
{
NSLog(#"An error occurred when attempting to determine the size of audio file %#: %ld", audioFilePath, getSizeResult);
}
UInt32 bytesRead = (UInt32)audioDataByteCount;
void *audioData = malloc(bytesRead);
OSStatus readBytesResult = AudioFileReadBytes(afid, false, 0, &bytesRead, audioData);
if (0 != readBytesResult)
{
NSLog(#"An error occurred when attempting to read data from audio file %#: %ld", audioFilePath, readBytesResult);
}
AudioFileClose(afid);
ALuint outputBuffer;
alGenBuffers(1, &outputBuffer);
alBufferData(outputBuffer, AL_FORMAT_STEREO16, audioData, bytesRead, 44100);
if (audioData)
{
free(audioData);
audioData = NULL;
}
alSourcef(sourceID, AL_PITCH, 1.0f);
alSourcef(sourceID, AL_GAIN, 1.0f);
alSourcei(sourceID, AL_BUFFER, outputBuffer);
alSourcePlay(sourceID);
}
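One thing worth ruling out (a guess on my part, not a confirmed diagnosis): if flyby.caf is not actually in the bundle, pathForResource:ofType: returns nil and fileURLWithPath: raises an NSException; alGenSources also expects an ALuint rather than an NSUInteger. A defensive variant of the start of playSound could look like this:

// Defensive sketch: use OpenAL's ALuint type for the source ID and check the
// resource exists before building the file URL (fileURLWithPath: throws on a nil path).
ALuint sourceID = 0;
alGenSources(1, &sourceID);
NSString *audioFilePath = [[NSBundle mainBundle] pathForResource:@"flyby" ofType:@"caf"];
if (audioFilePath == nil) {
    NSLog(@"flyby.caf is missing from the app bundle");
    return;
}
NSURL *audioFileURL = [NSURL fileURLWithPath:audioFilePath];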

Sound not playing when App in background (prepareToPlay returns NO)

I've got a strange problem using AVAudioPlayer to play sound files (wav files) on an iPhone in the background. I am using the following code:
AVAudioPlayer* audioplayer;
NSError* error;
audioplayer = [[AVAudioPlayer alloc] initWithData:soundfile error:&error];
if (error) {
NSLog(#"an error occured while init audioplayer...");
NSLog(#"%#", [error localizedDescription]);
}
audioplayer.currentTime = 0;
if (![audioplayer prepareToPlay])
NSLog(#"could not preparetoPlay");
audioplayer.volume = 1.0;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:nil];
[[AVAudioSession sharedInstance] setActive: YES error: &error];
if (![audioplayer play])
NSLog(#"could not play sound");
audioplayer.delegate = [myApp sharedInstance];
This works fine while the app is in the foreground. However, when the app moves to the background, [audioplayer prepareToPlay] returns NO.
This happens with AND without "App plays audio" added to the "Required background modes". Is there a way to get a more precise error report from [audioplayer prepareToPlay]? Or do you have any hints about what I am doing wrong or have forgotten?
You need to initialize your audio session before preparing the AVAudioPlayer instance. Ideally, move the audio session calls to your application delegate's didFinishLaunchingWithOptions method.
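A minimal sketch of what that can look like (Objective-C; the category choice mirrors the question code and is not taken verbatim from the answer):

// AppDelegate.m - configure and activate the audio session once, at launch.
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    NSError *sessionError = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
    [session setActive:YES error:&sessionError];
    if (sessionError) {
        NSLog(@"Audio session setup failed: %@", sessionError.localizedDescription);
    }
    return YES;
}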
I'm not entirely sure this can be achieved using AVFoundation alone; you may need to use the AudioUnit framework and create a stream. It should be relatively simple to send the content of the .wav file to the audio buffer.
This is how I've been doing it in Piti Piti Pa. The other benefit is that you can better control the delay in the audio, in order to synchronize audio and video animations (more obvious when using Bluetooth).
Here's the code I'm using to initialize the audio unit:
+(BOOL)_createAudioUnitInstance
{
// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
// Get audio units
OSStatus status = AudioComponentInstanceNew(inputComponent, &_audioUnit);
[self _logStatus:status step:#"instantiate"];
return (status == noErr );
}
+(BOOL)_setupAudioUnitOutput
{
UInt32 flag = 1;
OSStatus status = AudioUnitSetProperty(_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
_outputAudioBus,
&flag,
sizeof(flag));
[self _logStatus:status step:#"set output bus"];
return (status == noErr );
}
+(BOOL)_setupAudioUnitFormat
{
AudioStreamBasicDescription audioFormat = {0};
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 4;
audioFormat.mBytesPerFrame = 4;
OSStatus status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
_outputAudioBus,
&audioFormat,
sizeof(audioFormat));
[self _logStatus:status step:#"set audio format"];
return (status == noErr );
}
+(BOOL)_setupAudioUnitRenderCallback
{
AURenderCallbackStruct audioCallback;
audioCallback.inputProc = playbackCallback;
audioCallback.inputProcRefCon = (__bridge void *)(self);
OSStatus status = AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
_outputAudioBus,
&audioCallback,
sizeof(audioCallback));
[self _logStatus:status step:#"set render callback"];
return (status == noErr);
}
+(BOOL)_initializeAudioUnit
{
OSStatus status = AudioUnitInitialize(_audioUnit);
[self _logStatus:status step:#"initialize"];
return (status == noErr);
}
+(void)start
{
[self clearFeeds];
[self _startAudioUnit];
}
+(void)stop
{
[self _stopAudioUnit];
}
+(BOOL)_startAudioUnit
{
OSStatus status = AudioOutputUnitStart(_audioUnit);
[self _logStatus:status step:#"start"];
return (status == noErr);
}
+(BOOL)_stopAudioUnit
{
OSStatus status = AudioOutputUnitStop(_audioUnit);
[self _logStatus:status step:#"stop"];
return (status == noErr);
}
+(void)_logStatus:(OSStatus)status step:(NSString *)step
{
if( status != noErr )
{
NSLog(#"AudioUnit failed to %#, error: %d", step, (int)status);
}
}
#pragma mark - Mixer
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
@autoreleasepool {
AudioBuffer *audioBuffer = ioData->mBuffers;
_lastPushedFrame = _nextFrame;
[SIOAudioMixer _generateAudioFrames:inNumberFrames into:audioBuffer->mData];
}
return noErr;
}
Now you only need to extract the content of the .wav files (easier if you export them to RAW format) and send it out to the buffer via the callback.
I hope that helps!
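_generateAudioFrames:into: is not shown above; as an illustration of what "send it out to the buffer via the callback" means, a hypothetical fill routine that copies interleaved 16-bit PCM from a preloaded buffer could look like this (the names and static state here are placeholders of mine, not from the original answer):

// Hypothetical PCM source for the render callback above: the decoded .wav
// payload sits in _rawSamples, and each render pass copies the next chunk.
static SInt16 *_rawSamples = NULL;     // interleaved 16-bit stereo samples
static UInt32  _rawSampleCount = 0;    // total number of SInt16 values available
static UInt32  _readIndex = 0;

static void fillFrames(UInt32 inNumberFrames, void *dst) {
    SInt16 *out = (SInt16 *)dst;
    UInt32 valuesNeeded = inNumberFrames * 2;   // 2 channels, interleaved
    for (UInt32 i = 0; i < valuesNeeded; i++) {
        out[i] = (_readIndex < _rawSampleCount) ? _rawSamples[_readIndex++] : 0;  // pad with silence
    }
}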
In the AppDelegate, set the AVAudioSession category like this (Swift 2):
do {
try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, withOptions: AVAudioSessionCategoryOptions.MixWithOthers)
}catch{
self.fireAnAlert("Set Category Failed", theMessage: "Failed to set AVAudioSession Category")
}
Setting the options to "Mix With Others" is an important piece!
Then, wherever you are going to play sound, make sure you call beginReceivingRemoteControlEvents and then set the AVAudioSession to active, like this:
do{
UIApplication.sharedApplication().beginReceivingRemoteControlEvents()
try AVAudioSession.sharedInstance().setActive(true)
}catch{
let e = error as NSError
self.appDelegate?.fireAnAlert("Error", theMessage: "\(e.localizedDescription)")
}

How can I record a conversation / phone call on iOS?

Is it theoretically possible to record a phone call on iPhone?
I'm accepting answers which:
may or may not require the phone to be jailbroken
may or may not pass Apple's guidelines due to use of private APIs (I don't care; it is not for the App Store)
may or may not use private SDKs
I don't want answers just bluntly saying "Apple does not allow that".
I know there would be no official way of doing it, and certainly not for an App Store application, and I know there are call recording apps which place outgoing calls through their own servers.
Here you go: a complete working example. The tweak should be loaded in the mediaserverd daemon. It will record every phone call into /var/mobile/Media/DCIM/result.m4a. The audio file has two channels: left is the microphone, right is the speaker. On the iPhone 4S a call is recorded only when the speaker is turned on. On the iPhone 5, 5C and 5S a call is recorded either way. There might be small hiccups when switching to/from the speaker, but recording will continue.
#import <AudioToolbox/AudioToolbox.h>
#import <libkern/OSAtomic.h>
//CoreTelephony.framework
extern "C" CFStringRef const kCTCallStatusChangeNotification;
extern "C" CFStringRef const kCTCallStatus;
extern "C" id CTTelephonyCenterGetDefault();
extern "C" void CTTelephonyCenterAddObserver(id ct, void* observer, CFNotificationCallback callBack, CFStringRef name, void *object, CFNotificationSuspensionBehavior sb);
extern "C" int CTGetCurrentCallCount();
enum
{
kCTCallStatusActive = 1,
kCTCallStatusHeld = 2,
kCTCallStatusOutgoing = 3,
kCTCallStatusIncoming = 4,
kCTCallStatusHanged = 5
};
NSString* kMicFilePath = @"/var/mobile/Media/DCIM/mic.caf";
NSString* kSpeakerFilePath = @"/var/mobile/Media/DCIM/speaker.caf";
NSString* kResultFilePath = @"/var/mobile/Media/DCIM/result.m4a";
OSSpinLock phoneCallIsActiveLock = 0;
OSSpinLock speakerLock = 0;
OSSpinLock micLock = 0;
ExtAudioFileRef micFile = NULL;
ExtAudioFileRef speakerFile = NULL;
BOOL phoneCallIsActive = NO;
void Convert()
{
//File URLs
CFURLRef micUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kMicFilePath, kCFURLPOSIXPathStyle, false);
CFURLRef speakerUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kSpeakerFilePath, kCFURLPOSIXPathStyle, false);
CFURLRef mixUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kResultFilePath, kCFURLPOSIXPathStyle, false);
ExtAudioFileRef micFile = NULL;
ExtAudioFileRef speakerFile = NULL;
ExtAudioFileRef mixFile = NULL;
//Opening input files (speaker and mic)
ExtAudioFileOpenURL(micUrl, &micFile);
ExtAudioFileOpenURL(speakerUrl, &speakerFile);
//Reading input file audio format (mono LPCM)
AudioStreamBasicDescription inputFormat, outputFormat;
UInt32 descSize = sizeof(inputFormat);
ExtAudioFileGetProperty(micFile, kExtAudioFileProperty_FileDataFormat, &descSize, &inputFormat);
int sampleSize = inputFormat.mBytesPerFrame;
//Filling input stream format for output file (stereo LPCM)
FillOutASBDForLPCM(inputFormat, inputFormat.mSampleRate, 2, inputFormat.mBitsPerChannel, inputFormat.mBitsPerChannel, true, false, false);
//Filling output file audio format (AAC)
memset(&outputFormat, 0, sizeof(outputFormat));
outputFormat.mFormatID = kAudioFormatMPEG4AAC;
outputFormat.mSampleRate = 8000;
outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
outputFormat.mChannelsPerFrame = 2;
//Opening output file
ExtAudioFileCreateWithURL(mixUrl, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &mixFile);
ExtAudioFileSetProperty(mixFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inputFormat), &inputFormat);
//Freeing URLs
CFRelease(micUrl);
CFRelease(speakerUrl);
CFRelease(mixUrl);
//Setting up audio buffers
int bufferSizeInSamples = 64 * 1024;
AudioBufferList micBuffer;
micBuffer.mNumberBuffers = 1;
micBuffer.mBuffers[0].mNumberChannels = 1;
micBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
micBuffer.mBuffers[0].mData = malloc(micBuffer.mBuffers[0].mDataByteSize);
AudioBufferList speakerBuffer;
speakerBuffer.mNumberBuffers = 1;
speakerBuffer.mBuffers[0].mNumberChannels = 1;
speakerBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
speakerBuffer.mBuffers[0].mData = malloc(speakerBuffer.mBuffers[0].mDataByteSize);
AudioBufferList mixBuffer;
mixBuffer.mNumberBuffers = 1;
mixBuffer.mBuffers[0].mNumberChannels = 2;
mixBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples * 2;
mixBuffer.mBuffers[0].mData = malloc(mixBuffer.mBuffers[0].mDataByteSize);
//Converting
while (true)
{
//Reading data from input files
UInt32 framesToRead = bufferSizeInSamples;
ExtAudioFileRead(micFile, &framesToRead, &micBuffer);
ExtAudioFileRead(speakerFile, &framesToRead, &speakerBuffer);
if (framesToRead == 0)
{
break;
}
//Building interleaved stereo buffer - left channel is mic, right - speaker
for (int i = 0; i < framesToRead; i++)
{
memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2, (char*)micBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2 + sampleSize, (char*)speakerBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
}
//Writing to output file - LPCM will be converted to AAC
ExtAudioFileWrite(mixFile, framesToRead, &mixBuffer);
}
//Closing files
ExtAudioFileDispose(micFile);
ExtAudioFileDispose(speakerFile);
ExtAudioFileDispose(mixFile);
//Freeing audio buffers
free(micBuffer.mBuffers[0].mData);
free(speakerBuffer.mBuffers[0].mData);
free(mixBuffer.mBuffers[0].mData);
}
void Cleanup()
{
[[NSFileManager defaultManager] removeItemAtPath:kMicFilePath error:NULL];
[[NSFileManager defaultManager] removeItemAtPath:kSpeakerFilePath error:NULL];
}
void CoreTelephonyNotificationCallback(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo)
{
NSDictionary* data = (NSDictionary*)userInfo;
if ([(NSString*)name isEqualToString:(NSString*)kCTCallStatusChangeNotification])
{
int currentCallStatus = [data[(NSString*)kCTCallStatus] integerValue];
if (currentCallStatus == kCTCallStatusActive)
{
OSSpinLockLock(&phoneCallIsActiveLock);
phoneCallIsActive = YES;
OSSpinLockUnlock(&phoneCallIsActiveLock);
}
else if (currentCallStatus == kCTCallStatusHanged)
{
if (CTGetCurrentCallCount() > 0)
{
return;
}
OSSpinLockLock(&phoneCallIsActiveLock);
phoneCallIsActive = NO;
OSSpinLockUnlock(&phoneCallIsActiveLock);
//Closing mic file
OSSpinLockLock(&micLock);
if (micFile != NULL)
{
ExtAudioFileDispose(micFile);
}
micFile = NULL;
OSSpinLockUnlock(&micLock);
//Closing speaker file
OSSpinLockLock(&speakerLock);
if (speakerFile != NULL)
{
ExtAudioFileDispose(speakerFile);
}
speakerFile = NULL;
OSSpinLockUnlock(&speakerLock);
Convert();
Cleanup();
}
}
}
OSStatus(*AudioUnitProcess_orig)(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData);
OSStatus AudioUnitProcess_hook(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData)
{
OSSpinLockLock(&phoneCallIsActiveLock);
if (phoneCallIsActive == NO)
{
OSSpinLockUnlock(&phoneCallIsActiveLock);
return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
}
OSSpinLockUnlock(&phoneCallIsActiveLock);
ExtAudioFileRef* currentFile = NULL;
OSSpinLock* currentLock = NULL;
AudioComponentDescription unitDescription = {0};
AudioComponentGetDescription(AudioComponentInstanceGetComponent(unit), &unitDescription);
//'agcc', 'mbdp' - iPhone 4S, iPhone 5
//'agc2', 'vrq2' - iPhone 5C, iPhone 5S
if (unitDescription.componentSubType == 'agcc' || unitDescription.componentSubType == 'agc2')
{
currentFile = &micFile;
currentLock = &micLock;
}
else if (unitDescription.componentSubType == 'mbdp' || unitDescription.componentSubType == 'vrq2')
{
currentFile = &speakerFile;
currentLock = &speakerLock;
}
if (currentFile != NULL)
{
OSSpinLockLock(currentLock);
//Opening file
if (*currentFile == NULL)
{
//Obtaining input audio format
AudioStreamBasicDescription desc;
UInt32 descSize = sizeof(desc);
AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &desc, &descSize);
//Opening audio file
CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)((currentFile == &micFile) ? kMicFilePath : kSpeakerFilePath), kCFURLPOSIXPathStyle, false);
ExtAudioFileRef audioFile = NULL;
OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL, kAudioFileFlags_EraseFile, &audioFile);
if (result != 0)
{
*currentFile = NULL;
}
else
{
*currentFile = audioFile;
//Writing audio format
ExtAudioFileSetProperty(*currentFile, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
}
CFRelease(url);
}
else
{
//Writing audio buffer
ExtAudioFileWrite(*currentFile, inNumberFrames, ioData);
}
OSSpinLockUnlock(currentLock);
}
return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
}
__attribute__((constructor))
static void initialize()
{
CTTelephonyCenterAddObserver(CTTelephonyCenterGetDefault(), NULL, CoreTelephonyNotificationCallback, NULL, NULL, CFNotificationSuspensionBehaviorHold);
MSHookFunction(AudioUnitProcess, AudioUnitProcess_hook, &AudioUnitProcess_orig);
}
A few words about what's going on. The AudioUnitProcess function is used for processing audio streams in order to apply effects, mix, convert, etc. We hook AudioUnitProcess in order to access the phone call's audio streams. While a phone call is active, these streams are processed in various ways.
We listen for CoreTelephony notifications in order to get phone call status changes. When we receive audio samples we need to determine where they come from - microphone or speaker. This is done using the componentSubType field in the AudioComponentDescription structure. Now, you might think, why don't we store the AudioUnit objects so that we don't need to check componentSubType every time? I did that, but it breaks everything when you switch the speaker on/off on the iPhone 5, because the AudioUnit objects change - they are recreated. So now we open two audio files (one for the microphone and one for the speaker) and write samples to them, simple as that. When the phone call ends we receive the appropriate CoreTelephony notification and close the files. We then have two separate files with audio from the microphone and the speaker that we need to merge. This is what void Convert() is for. It's pretty simple if you know the API; I don't think I need to explain it, the comments are enough.
About locks. There are many threads in mediaserverd. Audio processing and CoreTelephony notifications happen on different threads, so we need some kind of synchronization. I chose spin locks because they are fast and because the chance of lock contention is small in our case. On the iPhone 4S, and even the iPhone 5, all the work in AudioUnitProcess should be done as fast as possible, otherwise you will hear hiccups from the device speaker, which is obviously not good.
Yes. Audio Recorder by a developer named Limneos does that (and quite well). You can find it on Cydia. It can record any type of call on the iPhone 5 and up without using any servers. The call is saved on the device in an audio file. It also supports the iPhone 4S, but for the speaker only.
This tweak is known to be the first tweak ever that managed to record both streams of audio without using any third-party servers, VoIP or anything similar.
The developer placed beeps on the other side of the call to alert the person you are recording, but those were removed too by hackers across the net. To answer your question: yes, it's very much possible, and not just theoretically.
Further reading
https://stackoverflow.com/a/19413363/202451
http://forums.macrumors.com/showthread.php?t=1566350
https://github.com/nst/iOS-Runtime-Headers
The only solution I can think of is to use the Core Telephony framework, and more specifically the callEventHandler property, to intercept when a call is coming in, and then to use an AVAudioRecorder to record the voice of the person with the phone (and maybe a little of the person on the other line's voice). This is obviously not perfect, and would only work if your application is in the foreground at the time of the call, but it may be the best you can get. See more about finding out if there is an incoming phone call here: Can we fire an event when ever there is Incoming and Outgoing call in iphone?.
EDIT:
.h:
#import <AVFoundation/AVFoundation.h>
#import<CoreTelephony/CTCallCenter.h>
#import<CoreTelephony/CTCall.h>
@property (strong, nonatomic) AVAudioRecorder *audioRecorder;
viewDidLoad:
NSArray *dirPaths;
NSString *docsDir;
dirPaths = NSSearchPathForDirectoriesInDomains(
NSDocumentDirectory, NSUserDomainMask, YES);
docsDir = dirPaths[0];
NSString *soundFilePath = [docsDir
stringByAppendingPathComponent:#"sound.caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
NSDictionary *recordSettings = [NSDictionary
dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:AVAudioQualityMin],
AVEncoderAudioQualityKey,
[NSNumber numberWithInt:16],
AVEncoderBitRateKey,
[NSNumber numberWithInt: 2],
AVNumberOfChannelsKey,
[NSNumber numberWithFloat:44100.0],
AVSampleRateKey,
nil];
NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc]
initWithURL:soundFileURL
settings:recordSettings
error:&error];
if (error)
{
NSLog(#"error: %#", [error localizedDescription]);
} else {
[_audioRecorder prepareToRecord];
}
CTCallCenter *callCenter = [[CTCallCenter alloc] init];
[callCenter setCallEventHandler:^(CTCall *call) {
if ([[call callState] isEqual:CTCallStateConnected]) {
[_audioRecorder record];
} else if ([[call callState] isEqual:CTCallStateDisconnected]) {
[_audioRecorder stop];
}
}];
AppDelegate.m:
- (void)applicationDidEnterBackground:(UIApplication *)application // Makes sure the recording keeps happening even when the app is in the background, though it can only run for about 10 minutes.
{
__block UIBackgroundTaskIdentifier task = 0;
task=[application beginBackgroundTaskWithExpirationHandler:^{
NSLog(#"Expiration handler called %f",[application backgroundTimeRemaining]);
[application endBackgroundTask:task];
task=UIBackgroundTaskInvalid;
}];
}
This is the first time I am using many of these features, so I am not sure this is exactly right, but I think you get the idea. It is untested, as I do not have access to the right tools at the moment. Compiled using these sources:
Recording voice in background using AVAudioRecorder
http://prassan-warrior.blogspot.com/2012/11/recording-audio-on-iphone-with.html
Can we fire an event when ever there is Incoming and Outgoing call in iphone?
Apple does not allow it and does not provide any API for it.
However, on a jailbroken device I'm sure it's possible. As a matter of fact, I think it's already been done. I remember seeing an app when my phone was jailbroken that changed your voice and recorded the call - I remember it was a US company offering it only in the States. Unfortunately I don't remember the name...
I guess some hardware could solve this: something connected to the mini-jack port, with earbuds and a microphone passing through a small recorder. The recorder could be very simple. While not in conversation, the recorder could feed the phone with data/the recording (through the jack cable). And a simple start button (just like the volume controls on the earbuds) could be sufficient for timing the recording.
Some setups
http://www.danmccomb.com/posts/483/how-to-record-iphone-conversations-using-zoom-h4n/
http://forums.macrumors.com/showthread.php?t=346430
