From a delegate method I receive an AudioBufferList while recording audio. I am trying to collect the data from the AudioBufferList and save it so I can load it into my AVAudioPlayer, but AVAudioPlayer throws an error and I am unable to play the recording. I need to be able to play the audio through AVAudioPlayer without having a file, just by using the AudioBufferList.
Originally I saved the recording to a file and then loaded it into AVAudioPlayer, but with that method I was unable to append to the recording without making another audio file and merging the two files after the append was made. This was taking too much time, and I would still like to be able to listen to the recording between appends. So now I am not saving the audio to a file, so that I can keep appending to it until I wish to save it. The problem with this is that the NSData I am saving from the AudioBufferList is not loading into AVAudioPlayer properly.
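A likely root cause worth noting up front: the bytes gathered this way are raw, headerless LPCM, so a player has nothing with which to identify the sample rate, channel count, or bit depth. One common workaround (not part of the original post) is to prepend a minimal WAV header before handing the data to a player. A plain-C sketch, assuming 16-bit PCM and a little-endian host:

```c
#include <stdint.h>
#include <string.h>

/* Build a minimal 44-byte WAV header for raw 16-bit LPCM so a player can
   identify the format. Hypothetical helper, not from the original post. */
void wav_header(uint8_t h[44], uint32_t dataBytes, uint32_t sampleRate,
                uint16_t channels, uint16_t bitsPerSample) {
    uint32_t byteRate   = sampleRate * channels * bitsPerSample / 8;
    uint16_t blockAlign = channels * bitsPerSample / 8;
    uint32_t riffSize   = 36 + dataBytes;
    uint32_t fmtSize    = 16;
    uint16_t pcm        = 1; /* format tag 1 = uncompressed PCM */

    memcpy(h, "RIFF", 4);
    memcpy(h + 4,  &riffSize, 4);   /* little-endian on common targets */
    memcpy(h + 8,  "WAVEfmt ", 8);
    memcpy(h + 16, &fmtSize, 4);
    memcpy(h + 20, &pcm, 2);
    memcpy(h + 22, &channels, 2);
    memcpy(h + 24, &sampleRate, 4);
    memcpy(h + 28, &byteRate, 4);
    memcpy(h + 32, &blockAlign, 2);
    memcpy(h + 34, &bitsPerSample, 2);
    memcpy(h + 36, "data", 4);
    memcpy(h + 40, &dataBytes, 4);
}
```

With the 44 header bytes prepended to the raw samples in one buffer, `initWithData:` at least has a recognizable container to parse.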
Here is my code for gathering the NSData:
- (void)microphone:(EZMicrophone *)microphone
     hasBufferList:(AudioBufferList *)bufferList
    withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels
{
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];
    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize)
    {
        free(audioBuffer.mData);
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }
    int currentBuffer = 0;
    int maxBuf = 800;
    for (int y = 0; y < bufferList->mNumberBuffers; y++)
    {
        if (currentBuffer < maxBuf)
        {
            AudioBuffer audioBuff = bufferList->mBuffers[y];
            Float32 *frame = (Float32 *)audioBuff.mData;
            [data appendBytes:frame length:audioBuffer.mDataByteSize];
            currentBuffer += audioBuff.mDataByteSize;
        }
        else
        {
            break;
        }
    }
}
When I try to load the NSData into AVAudioPlayer I get the following error:
self.audioPlayer = [[AVAudioPlayer alloc] initWithData:data error:&err];
err:
Error Domain=NSOSStatusErrorDomain Code=1954115647 "The operation couldn’t be completed. (OSStatus error 1954115647.)"
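As an aside that helps decode this: many Core Audio OSStatus values are four-character codes packed into a 32-bit integer, and 1954115647 unpacks to 'typ?' (kAudioFileUnsupportedFileTypeError), which is consistent with the player being handed headerless data whose format it cannot identify. A small C helper (illustrative, not from the post) to unpack such codes:

```c
#include <stdint.h>

/* Decode a Core Audio OSStatus into its four-character code, e.g.
   1954115647 -> "typ?" (kAudioFileUnsupportedFileTypeError). */
void fourcc(int32_t status, char out[5]) {
    for (int i = 0; i < 4; i++)
        out[i] = (char)((status >> (8 * (3 - i))) & 0xFF);
    out[4] = '\0';
}
```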
Any help would be appreciated.
Thank you,
I am new to the audio frameworks; can anyone help me write the audio captured from the microphone to a file?
Below is the code to play mic input through the iPhone speaker; now I would like to save the audio on the iPhone for future use.
I found the code to record audio using the microphone here: http://www.stefanpopp.de/2011/capture-iphone-microphone/
/**
 Code starts here for playing the recorded voice
 */
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (AudioProcessor *)inRefCon;

    // iterate over the incoming stream and copy to the output stream
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;

        // get a pointer to the recorder struct variable
        Recorder recInfo = audioProcessor.audioRecorder;

        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes(recInfo.recordFile,
                                           false,
                                           recInfo.inStartingByte,
                                           &size,
                                           &buffer.mData);
            assert(audioErr == noErr);
            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size should be number of bytes
            audioProcessor.audioRecorder = recInfo;
        }
    }
    return noErr;
}
-(void)prepareAudioFileToRecord {
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
    // long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
    // NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (the bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking

    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}
thanks in advance
bala
To write the bytes from an AudioBuffer to a local file we need help from Audio File Services, which is included in the AudioToolbox framework.
Conceptually we will do the following: set up an audio file and maintain a reference to it (this reference must be accessible from the render callback you included in your post); keep track of the number of bytes written each time the callback is called; and keep a flag that lets us know when to stop writing to the file and close it.
Because the code in the link you provided declares an AudioStreamBasicDescription which is LPCM and hence constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and would use the AudioFileWritePackets function instead).
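The reason constant bit rate makes AudioFileWriteBytes viable is that the byte offset of any frame is a pure function of the frame count, so a running inStartingByte counter is all the bookkeeping needed. A sketch of that arithmetic (plain C, names illustrative):

```c
#include <stdint.h>

/* For constant-bit-rate LPCM the file offset is simply
   frames * bytesPerFrame, where bytesPerFrame = channels * bytesPerSample.
   Illustrative helper, not part of the original answer. */
int64_t lpcm_byte_offset(int64_t frames, uint32_t channels, uint32_t bitsPerChannel) {
    uint32_t bytesPerFrame = channels * (bitsPerChannel / 8);
    return frames * (int64_t)bytesPerFrame;
}
```

For example, 16-bit mono at 44.1 kHz puts one second of audio (44100 frames) at byte offset 88200, which is exactly how inStartingByte advances below.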
Let's start by declaring a custom struct (which contains all the extra data we'll need), adding an instance variable of that struct, and making a property that points to the struct variable. We'll add this to the AudioProcessor custom class, as you already have access to that object from within the callback, where you typecast it in this line:
AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;
Add this to AudioProcessor.h (above the @interface):
typedef struct Recorder {
    AudioFileID recordFile;
    SInt64 inStartingByte;
    Boolean running;
} Recorder;
Now let's add an instance variable and also make it a pointer property and assign it to the instance variable (so we can access it from within the callback function).
In the @interface, add an instance variable named audioRecorder and also make the ASBD available to the class.
Recorder audioRecorder;
AudioStreamBasicDescription recordFormat;// assign this ivar to where the asbd is created in the class
In the method -(void)initializeAudio comment out or delete this line as we have made recordFormat an ivar.
//AudioStreamBasicDescription recordFormat;
Now add the kAudioFormatFlagIsBigEndian format flag to where the ASBD is set up.
// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
And finally add a property that is a pointer to the audioRecorder instance variable (don't forget to synthesize it in AudioProcessor.m). We will name the pointer property audioRecorderPointer.
@property Recorder *audioRecorderPointer;
// in the .m, synthesize the property
@synthesize audioRecorderPointer;
Now let's assign the pointer to the ivar (this could be placed in the -(void)initializeAudio method of the AudioProcessor class)
// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;
Now in the AudioProcessor.m let's add a method to setup the file and open it so we can write to it. This should be called before you start the AUGraph running.
-(void)prepareAudioFileToRecord {
    // let's set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (the bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert(audioErr == noErr); // simple error checking

    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}
Okay, we are nearly there. Now we have a file to write to, and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted add the following right before you return noErr at the end of the method.
// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;

// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes(recInfo->recordFile,
                                   false,
                                   recInfo->inStartingByte,
                                   &size,
                                   buffer.mData);
    assert(audioErr == noErr);
    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size should be number of bytes
}
When we want to stop recording (probably invoked by some user action), simply set the running boolean to false and close the file, like this, somewhere in the AudioProcessor class:
audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);
EDIT: the endianness of the samples needs to be big-endian for the file, so add the kAudioFormatFlagIsBigEndian bit-mask flag to the ASBD in the source code found at the link provided in the question.
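For intuition about why that flag matters: iOS devices are little-endian while AIFF stores samples big-endian, so without the flag each 16-bit sample would effectively have its bytes reversed and play back as noise. The conversion the flag requests amounts to this (illustrative C, not from the post):

```c
#include <stdint.h>

/* Reverse the two bytes of a 16-bit sample; converting a little-endian
   sample to big-endian (or back) is exactly this swap. Sketch only. */
uint16_t byteswap16(uint16_t v) {
    return (uint16_t)((v << 8) | (v >> 8));
}
```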
For extra info about this topic the Apple documents are a great resource and I also recommend reading 'Learning Core Audio' by Chris Adamson and Kevin Avila (of which I own a copy).
Use Audio Queue Services.
There is an example in the Apple documentation that does exactly what you ask:
Audio Queue Services Programming Guide - Recording Audio
Is it theoretically possible to record a phone call on iPhone?
I'm accepting answers which:
may or may not require the phone to be jailbroken
may or may not pass Apple's guidelines due to the use of private APIs (I don't care; it is not for the App Store)
may or may not use private SDKs
I don't want answers just bluntly saying "Apple does not allow that".
I know there would be no official way of doing it, and certainly not for an App Store application, and I know there are call recording apps which place outgoing calls through their own servers.
Here you go: a complete working example. The tweak should be loaded into the mediaserverd daemon. It will record every phone call to /var/mobile/Media/DCIM/result.m4a. The audio file has two channels: left is the microphone, right is the speaker. On the iPhone 4S the call is recorded only when the speaker is turned on. On the iPhone 5, 5C and 5S the call is recorded either way. There might be small hiccups when switching to/from the speaker, but recording will continue.
#import <AudioToolbox/AudioToolbox.h>
#import <libkern/OSAtomic.h>
//CoreTelephony.framework
extern "C" CFStringRef const kCTCallStatusChangeNotification;
extern "C" CFStringRef const kCTCallStatus;
extern "C" id CTTelephonyCenterGetDefault();
extern "C" void CTTelephonyCenterAddObserver(id ct, void* observer, CFNotificationCallback callBack, CFStringRef name, void *object, CFNotificationSuspensionBehavior sb);
extern "C" int CTGetCurrentCallCount();
enum
{
    kCTCallStatusActive = 1,
    kCTCallStatusHeld = 2,
    kCTCallStatusOutgoing = 3,
    kCTCallStatusIncoming = 4,
    kCTCallStatusHanged = 5
};

NSString* kMicFilePath = @"/var/mobile/Media/DCIM/mic.caf";
NSString* kSpeakerFilePath = @"/var/mobile/Media/DCIM/speaker.caf";
NSString* kResultFilePath = @"/var/mobile/Media/DCIM/result.m4a";

OSSpinLock phoneCallIsActiveLock = 0;
OSSpinLock speakerLock = 0;
OSSpinLock micLock = 0;

ExtAudioFileRef micFile = NULL;
ExtAudioFileRef speakerFile = NULL;
BOOL phoneCallIsActive = NO;
void Convert()
{
    //File URLs
    CFURLRef micUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kMicFilePath, kCFURLPOSIXPathStyle, false);
    CFURLRef speakerUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kSpeakerFilePath, kCFURLPOSIXPathStyle, false);
    CFURLRef mixUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kResultFilePath, kCFURLPOSIXPathStyle, false);

    ExtAudioFileRef micFile = NULL;
    ExtAudioFileRef speakerFile = NULL;
    ExtAudioFileRef mixFile = NULL;

    //Opening input files (speaker and mic)
    ExtAudioFileOpenURL(micUrl, &micFile);
    ExtAudioFileOpenURL(speakerUrl, &speakerFile);

    //Reading input file audio format (mono LPCM)
    AudioStreamBasicDescription inputFormat, outputFormat;
    UInt32 descSize = sizeof(inputFormat);
    ExtAudioFileGetProperty(micFile, kExtAudioFileProperty_FileDataFormat, &descSize, &inputFormat);
    int sampleSize = inputFormat.mBytesPerFrame;

    //Filling input stream format for output file (stereo LPCM)
    FillOutASBDForLPCM(inputFormat, inputFormat.mSampleRate, 2, inputFormat.mBitsPerChannel, inputFormat.mBitsPerChannel, true, false, false);

    //Filling output file audio format (AAC)
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mFormatID = kAudioFormatMPEG4AAC;
    outputFormat.mSampleRate = 8000;
    outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
    outputFormat.mChannelsPerFrame = 2;

    //Opening output file
    ExtAudioFileCreateWithURL(mixUrl, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &mixFile);
    ExtAudioFileSetProperty(mixFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inputFormat), &inputFormat);

    //Freeing URLs
    CFRelease(micUrl);
    CFRelease(speakerUrl);
    CFRelease(mixUrl);

    //Setting up audio buffers
    int bufferSizeInSamples = 64 * 1024;

    AudioBufferList micBuffer;
    micBuffer.mNumberBuffers = 1;
    micBuffer.mBuffers[0].mNumberChannels = 1;
    micBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
    micBuffer.mBuffers[0].mData = malloc(micBuffer.mBuffers[0].mDataByteSize);

    AudioBufferList speakerBuffer;
    speakerBuffer.mNumberBuffers = 1;
    speakerBuffer.mBuffers[0].mNumberChannels = 1;
    speakerBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
    speakerBuffer.mBuffers[0].mData = malloc(speakerBuffer.mBuffers[0].mDataByteSize);

    AudioBufferList mixBuffer;
    mixBuffer.mNumberBuffers = 1;
    mixBuffer.mBuffers[0].mNumberChannels = 2;
    mixBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples * 2;
    mixBuffer.mBuffers[0].mData = malloc(mixBuffer.mBuffers[0].mDataByteSize);

    //Converting
    while (true)
    {
        //Reading data from input files
        UInt32 framesToRead = bufferSizeInSamples;
        ExtAudioFileRead(micFile, &framesToRead, &micBuffer);
        ExtAudioFileRead(speakerFile, &framesToRead, &speakerBuffer);
        if (framesToRead == 0)
        {
            break;
        }

        //Building interleaved stereo buffer - left channel is mic, right - speaker
        for (int i = 0; i < framesToRead; i++)
        {
            memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2, (char*)micBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
            memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2 + sampleSize, (char*)speakerBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
        }

        //Writing to output file - LPCM will be converted to AAC
        ExtAudioFileWrite(mixFile, framesToRead, &mixBuffer);
    }

    //Closing files
    ExtAudioFileDispose(micFile);
    ExtAudioFileDispose(speakerFile);
    ExtAudioFileDispose(mixFile);

    //Freeing audio buffers
    free(micBuffer.mBuffers[0].mData);
    free(speakerBuffer.mBuffers[0].mData);
    free(mixBuffer.mBuffers[0].mData);
}
void Cleanup()
{
    [[NSFileManager defaultManager] removeItemAtPath:kMicFilePath error:NULL];
    [[NSFileManager defaultManager] removeItemAtPath:kSpeakerFilePath error:NULL];
}
void CoreTelephonyNotificationCallback(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo)
{
    NSDictionary* data = (NSDictionary*)userInfo;
    if ([(NSString*)name isEqualToString:(NSString*)kCTCallStatusChangeNotification])
    {
        int currentCallStatus = [data[(NSString*)kCTCallStatus] integerValue];
        if (currentCallStatus == kCTCallStatusActive)
        {
            OSSpinLockLock(&phoneCallIsActiveLock);
            phoneCallIsActive = YES;
            OSSpinLockUnlock(&phoneCallIsActiveLock);
        }
        else if (currentCallStatus == kCTCallStatusHanged)
        {
            if (CTGetCurrentCallCount() > 0)
            {
                return;
            }

            OSSpinLockLock(&phoneCallIsActiveLock);
            phoneCallIsActive = NO;
            OSSpinLockUnlock(&phoneCallIsActiveLock);

            //Closing mic file
            OSSpinLockLock(&micLock);
            if (micFile != NULL)
            {
                ExtAudioFileDispose(micFile);
            }
            micFile = NULL;
            OSSpinLockUnlock(&micLock);

            //Closing speaker file
            OSSpinLockLock(&speakerLock);
            if (speakerFile != NULL)
            {
                ExtAudioFileDispose(speakerFile);
            }
            speakerFile = NULL;
            OSSpinLockUnlock(&speakerLock);

            Convert();
            Cleanup();
        }
    }
}
OSStatus (*AudioUnitProcess_orig)(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData);

OSStatus AudioUnitProcess_hook(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    OSSpinLockLock(&phoneCallIsActiveLock);
    if (phoneCallIsActive == NO)
    {
        OSSpinLockUnlock(&phoneCallIsActiveLock);
        return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
    }
    OSSpinLockUnlock(&phoneCallIsActiveLock);

    ExtAudioFileRef* currentFile = NULL;
    OSSpinLock* currentLock = NULL;

    AudioComponentDescription unitDescription = {0};
    AudioComponentGetDescription(AudioComponentInstanceGetComponent(unit), &unitDescription);
    //'agcc', 'mbdp' - iPhone 4S, iPhone 5
    //'agc2', 'vrq2' - iPhone 5C, iPhone 5S
    if (unitDescription.componentSubType == 'agcc' || unitDescription.componentSubType == 'agc2')
    {
        currentFile = &micFile;
        currentLock = &micLock;
    }
    else if (unitDescription.componentSubType == 'mbdp' || unitDescription.componentSubType == 'vrq2')
    {
        currentFile = &speakerFile;
        currentLock = &speakerLock;
    }

    if (currentFile != NULL)
    {
        OSSpinLockLock(currentLock);

        //Opening file
        if (*currentFile == NULL)
        {
            //Obtaining input audio format
            AudioStreamBasicDescription desc;
            UInt32 descSize = sizeof(desc);
            AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &desc, &descSize);

            //Opening audio file
            CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)((currentFile == &micFile) ? kMicFilePath : kSpeakerFilePath), kCFURLPOSIXPathStyle, false);
            ExtAudioFileRef audioFile = NULL;
            OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL, kAudioFileFlags_EraseFile, &audioFile);
            if (result != 0)
            {
                *currentFile = NULL;
            }
            else
            {
                *currentFile = audioFile;

                //Writing audio format
                ExtAudioFileSetProperty(*currentFile, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
            }
            CFRelease(url);
        }
        else
        {
            //Writing audio buffer
            ExtAudioFileWrite(*currentFile, inNumberFrames, ioData);
        }

        OSSpinLockUnlock(currentLock);
    }

    return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
}

__attribute__((constructor))
static void initialize()
{
    CTTelephonyCenterAddObserver(CTTelephonyCenterGetDefault(), NULL, CoreTelephonyNotificationCallback, NULL, NULL, CFNotificationSuspensionBehaviorHold);
    MSHookFunction(AudioUnitProcess, AudioUnitProcess_hook, &AudioUnitProcess_orig);
}
A few words about what's going on. The AudioUnitProcess function is used for processing audio streams in order to apply effects, mix, convert, etc. We hook AudioUnitProcess in order to access the phone call's audio streams. While a phone call is active these streams are being processed in various ways.
We listen for CoreTelephony notifications in order to get phone call status changes. When we receive audio samples we need to determine where they come from: the microphone or the speaker. This is done using the componentSubType field in the AudioComponentDescription structure. Now, you might think: why don't we store the AudioUnit objects so that we don't need to check componentSubType every time? I did that, but it breaks everything when you switch the speaker on/off on the iPhone 5, because the AudioUnit objects change; they are recreated. So now we open the audio files (one for the microphone and one for the speaker) and write samples into them, simple as that. When the phone call ends we receive the appropriate CoreTelephony notification and close the files. We end up with two separate files, with audio from the microphone and the speaker, that we need to merge. This is what void Convert() is for. It's pretty simple if you know the API; I don't think I need to explain it, the comments are enough.
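Stripped of the Core Audio calls, the heart of Convert() is the interleaving loop: for each frame, copy one mic sample into the left slot and one speaker sample into the right slot. For 16-bit samples it amounts to this (plain-C sketch of the same idea):

```c
#include <stdint.h>
#include <stddef.h>

/* Interleave two mono 16-bit streams into one stereo stream:
   left channel from the mic, right channel from the speaker. */
void interleave(const int16_t *mic, const int16_t *spk,
                int16_t *out, size_t frames) {
    for (size_t i = 0; i < frames; i++) {
        out[2 * i]     = mic[i]; /* left  */
        out[2 * i + 1] = spk[i]; /* right */
    }
}
```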
About locks: there are many threads in mediaserverd. Audio processing and CoreTelephony notifications run on different threads, so we need some kind of synchronization. I chose spin locks because they are fast and because the chance of lock contention is small in our case. On the iPhone 4S, and even on the iPhone 5, all the work in AudioUnitProcess should be done as fast as possible, otherwise you will hear hiccups from the device speaker, which is obviously not good.
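For readers unfamiliar with OSSpinLock: it boils down to busy-waiting on an atomic flag, which is why it is cheap when contention is rare but wasteful under heavy contention. A minimal C11 sketch of the idea (illustrative only, not the actual OSSpinLock implementation):

```c
#include <stdatomic.h>

/* A spin lock in the spirit of OSSpinLock: busy-wait on an atomic flag. */
typedef struct { atomic_flag locked; } spinlock_t;

void spin_lock(spinlock_t *l) {
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire)) {
        /* busy-wait; cheap when contention is rare, as in this tweak */
    }
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```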
Yes. Audio Recorder by a developer named Limneos does that (and quite well). You can find it on Cydia. It can record any type of call on the iPhone 5 and up without using any servers. The call is saved on the device in an audio file. It also supports the iPhone 4S, but for speaker only.
This tweak is known to be the first tweak ever to record both audio streams without using any third-party servers, VoIP, or anything similar.
The developer placed beeps on the other side of the call to alert the person you are recording, but those were removed too by hackers across the net. To answer your question: yes, it's very much possible, and not just theoretically.
Further reading
https://stackoverflow.com/a/19413363/202451
http://forums.macrumors.com/showthread.php?t=1566350
https://github.com/nst/iOS-Runtime-Headers
The only solution I can think of is to use the Core Telephony framework, and more specifically the callEventHandler property, to intercept when a call comes in, and then use an AVAudioRecorder to record the voice of the person with the phone (and maybe a little of the voice of the person on the other line). This is obviously not perfect, and would only work if your application is in the foreground at the time of the call, but it may be the best you can get. See more about finding out whether there is an incoming phone call here: Can we fire an event when ever there is Incoming and Outgoing call in iphone?.
EDIT:
.h:
#import <AVFoundation/AVFoundation.h>
#import <CoreTelephony/CTCallCenter.h>
#import <CoreTelephony/CTCall.h>
@property (strong, nonatomic) AVAudioRecorder *audioRecorder;
viewDidLoad:
NSArray *dirPaths;
NSString *docsDir;

dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
docsDir = dirPaths[0];
NSString *soundFilePath = [docsDir stringByAppendingPathComponent:@"sound.caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
                                [NSNumber numberWithInt:16], AVEncoderBitRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                nil];

NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                             settings:recordSettings
                                                error:&error];
if (error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [_audioRecorder prepareToRecord];
}

CTCallCenter *callCenter = [[CTCallCenter alloc] init];
[callCenter setCallEventHandler:^(CTCall *call) {
    if ([[call callState] isEqual:CTCallStateConnected]) {
        [_audioRecorder record];
    } else if ([[call callState] isEqual:CTCallStateDisconnected]) {
        [_audioRecorder stop];
    }
}];
AppDelegate.m:
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // Makes sure that the recording keeps happening even when the app is in
    // the background, though it can only go for about 10 minutes.
    __block UIBackgroundTaskIdentifier task = 0;
    task = [application beginBackgroundTaskWithExpirationHandler:^{
        NSLog(@"Expiration handler called %f", [application backgroundTimeRemaining]);
        [application endBackgroundTask:task];
        task = UIBackgroundTaskInvalid;
    }];
}
This is the first time using many of these features, so not sure if this is exactly right, but I think you get the idea. Untested, as I do not have access to the right tools at the moment. Compiled using these sources:
Recording voice in background using AVAudioRecorder
http://prassan-warrior.blogspot.com/2012/11/recording-audio-on-iphone-with.html
Can we fire an event when ever there is Incoming and Outgoing call in iphone?
Apple does not allow it and does not provide any API for it.
However, on a jailbroken device I'm sure it's possible. As a matter of fact, I think it's already done. I remember seeing an app when my phone was jailbroken that changed your voice and recorded the call - I remember it was a US company offering it only in the states. Unfortunately I don't remember the name...
I guess some hardware could solve this: something connected to the mini-jack port, with earbuds and a microphone passing through a small recorder. This recorder could be very simple. While not in conversation, the recorder could feed the phone with data/the recording (through the jack cable), and a simple start button (just like the volume controls on the earbuds) could be sufficient for timing the recording.
Some setups
http://www.danmccomb.com/posts/483/how-to-record-iphone-conversations-using-zoom-h4n/
http://forums.macrumors.com/showthread.php?t=346430
I've been trying to serialize a local audio file following the instructions here, only instead of feeding the packets directly to the AudioQueueBuffer, I would like to send them over Bluetooth/Wi-Fi using GKSession. I'm getting an EXC_BAD_ACCESS error:
- (void)broadcastServerMusicWithSession:(GKSession *)session playerName:(NSString *)name clients:(NSArray *)clients
{
    self.isServer = YES;
    _session = session;
    _session.available = NO;
    _session.delegate = self;
    [_session setDataReceiveHandler:self withContext:nil];

    CFURLRef fileURL = (__bridge CFURLRef)[[NSBundle mainBundle] URLForResource:@"mozart" withExtension:@"mp3"]; // file URL
    AudioFile *audioFile = [[AudioFile alloc] initWithURL:fileURL];

    // now we start sending the data over
#define MAX_PACKET_COUNT 4096
    SInt64 currentPacketNumber = 0;
    UInt32 numBytes;
    UInt32 numPacketsToRead;
    AudioStreamPacketDescription packetDescriptions[MAX_PACKET_COUNT];

    NSInteger bufferByteSize = 4096;
    if (bufferByteSize < audioFile.maxPacketSize) bufferByteSize = audioFile.maxPacketSize;
    numPacketsToRead = bufferByteSize / audioFile.maxPacketSize;
    UInt32 numPackets = numPacketsToRead;
    BOOL isVBR = ([audioFile audioFormatRef]->mBytesPerPacket == 0) ? YES : NO;

    // this should be in a do while (numPackets < numPacketsToRead) loop..
    // but I took it out just to narrow down the problem
    NSMutableData *myBuffer = [[NSMutableData alloc] initWithCapacity:numPackets * audioFile.maxPacketSize];
    AudioFileReadPackets(audioFile.fileID,
                         NO,
                         &numBytes,
                         isVBR ? packetDescriptions : 0,
                         currentPacketNumber,
                         &numPackets,
                         &myBuffer);

    NSError *error;
    if (![session sendDataToAllPeers:packetData withDataMode:GKSendDataReliable error:&error])
    {
        NSLog(@"Error sending data to clients: %@", error);
    }
}
I know for a fact that it's the AudioFileReadPackets call that's breaking the code, since it runs fine if I comment that bit out.
I've tried doing Zombies profiling using Xcode 4.3, but it just crashes. One useful debugging tip I got, though, was to break when the code crashes and output the error using the gdb console; this is what I get:
(lldb) po [$eax className] NSException
error: warning: receiver type 'unsigned int' is not 'id' or interface pointer, consider casting it to 'id'
warning: declaration does not declare anything
error: expected ';' after expression
error: 1 errors parsing expression
but I couldn't do much with it. Any help?
It turns out there are two problems with the above code:
[audioFile audioFormatRef]->mBytesPerPacket == 0
The compiler isn't happy with this one. If you refer to the code that defines audioFormatRef, it returns a pointer to a format struct, which has an mBytesPerPacket field; I don't know why the above syntax isn't working, though.
I shouldn't be supplying myBuffer to the AudioFileReadPackets function (type mismatch). Rather, I should do something like this:
NSMutableData *myBuffer = [[NSMutableData alloc] initWithCapacity:numPackets * audioFile.maxPacketSize];
const void *buffer = [myBuffer bytes];
AudioFileReadPackets(audioFile.fileID,
                     NO,
                     &numBytes,
                     isVBR ? packetDescriptions : 0,
                     currentPacketNumber,
                     &numPackets,
                     buffer);
This works too:
void *buffer = [myBuffer mutableBytes];
Read this post for an explanation of the difference between those two options.
Is it theoretically possible to record a phone call on iPhone?
I'm accepting answers which:
may or may not require the phone to be jailbroken
may or may not pass apple's guidelines due to use of private API's (I don't care; it is not for the App Store)
may or may not use private SDKs
I don't want answers just bluntly saying "Apple does not allow that".
I know there would be no official way of doing it, and certainly not for an App Store application, and I know there are call recording apps which place outgoing calls through their own servers.
Here you go. Complete working example. Tweak should be loaded in mediaserverd daemon. It will record every phone call in /var/mobile/Media/DCIM/result.m4a. Audio file has two channels. Left is microphone, right is speaker. On iPhone 4S call is recorded only when the speaker is turned on. On iPhone 5, 5C and 5S call is recorded either way. There might be small hiccups when switching to/from speaker but recording will continue.
#import <AudioToolbox/AudioToolbox.h>
#import <libkern/OSAtomic.h>
//CoreTelephony.framework
extern "C" CFStringRef const kCTCallStatusChangeNotification;
extern "C" CFStringRef const kCTCallStatus;
extern "C" id CTTelephonyCenterGetDefault();
extern "C" void CTTelephonyCenterAddObserver(id ct, void* observer, CFNotificationCallback callBack, CFStringRef name, void *object, CFNotificationSuspensionBehavior sb);
extern "C" int CTGetCurrentCallCount();
enum
{
kCTCallStatusActive = 1,
kCTCallStatusHeld = 2,
kCTCallStatusOutgoing = 3,
kCTCallStatusIncoming = 4,
kCTCallStatusHanged = 5
};
NSString* kMicFilePath = #"/var/mobile/Media/DCIM/mic.caf";
NSString* kSpeakerFilePath = #"/var/mobile/Media/DCIM/speaker.caf";
NSString* kResultFilePath = #"/var/mobile/Media/DCIM/result.m4a";
OSSpinLock phoneCallIsActiveLock = 0;
OSSpinLock speakerLock = 0;
OSSpinLock micLock = 0;
ExtAudioFileRef micFile = NULL;
ExtAudioFileRef speakerFile = NULL;
BOOL phoneCallIsActive = NO;
void Convert()
{
    //File URLs
    CFURLRef micUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kMicFilePath, kCFURLPOSIXPathStyle, false);
    CFURLRef speakerUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kSpeakerFilePath, kCFURLPOSIXPathStyle, false);
    CFURLRef mixUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kResultFilePath, kCFURLPOSIXPathStyle, false);
    ExtAudioFileRef micFile = NULL;
    ExtAudioFileRef speakerFile = NULL;
    ExtAudioFileRef mixFile = NULL;
    //Opening input files (speaker and mic)
    ExtAudioFileOpenURL(micUrl, &micFile);
    ExtAudioFileOpenURL(speakerUrl, &speakerFile);
    //Reading input file audio format (mono LPCM)
    AudioStreamBasicDescription inputFormat, outputFormat;
    UInt32 descSize = sizeof(inputFormat);
    ExtAudioFileGetProperty(micFile, kExtAudioFileProperty_FileDataFormat, &descSize, &inputFormat);
    int sampleSize = inputFormat.mBytesPerFrame;
    //Filling client stream format for the output file (stereo LPCM)
    FillOutASBDForLPCM(inputFormat, inputFormat.mSampleRate, 2, inputFormat.mBitsPerChannel, inputFormat.mBitsPerChannel, true, false, false);
    //Filling output file audio format (AAC)
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mFormatID = kAudioFormatMPEG4AAC;
    outputFormat.mSampleRate = 8000;
    outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
    outputFormat.mChannelsPerFrame = 2;
    //Opening output file
    ExtAudioFileCreateWithURL(mixUrl, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &mixFile);
    ExtAudioFileSetProperty(mixFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inputFormat), &inputFormat);
    //Freeing URLs
    CFRelease(micUrl);
    CFRelease(speakerUrl);
    CFRelease(mixUrl);
    //Setting up audio buffers
    int bufferSizeInSamples = 64 * 1024;
    AudioBufferList micBuffer;
    micBuffer.mNumberBuffers = 1;
    micBuffer.mBuffers[0].mNumberChannels = 1;
    micBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
    micBuffer.mBuffers[0].mData = malloc(micBuffer.mBuffers[0].mDataByteSize);
    AudioBufferList speakerBuffer;
    speakerBuffer.mNumberBuffers = 1;
    speakerBuffer.mBuffers[0].mNumberChannels = 1;
    speakerBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
    speakerBuffer.mBuffers[0].mData = malloc(speakerBuffer.mBuffers[0].mDataByteSize);
    AudioBufferList mixBuffer;
    mixBuffer.mNumberBuffers = 1;
    mixBuffer.mBuffers[0].mNumberChannels = 2;
    mixBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples * 2;
    mixBuffer.mBuffers[0].mData = malloc(mixBuffer.mBuffers[0].mDataByteSize);
    //Converting
    while (true)
    {
        //Reading data from input files
        UInt32 framesToRead = bufferSizeInSamples;
        ExtAudioFileRead(micFile, &framesToRead, &micBuffer);
        ExtAudioFileRead(speakerFile, &framesToRead, &speakerBuffer);
        if (framesToRead == 0)
        {
            break;
        }
        //Building interleaved stereo buffer - left channel is mic, right is speaker
        for (int i = 0; i < framesToRead; i++)
        {
            memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2, (char*)micBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
            memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2 + sampleSize, (char*)speakerBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
        }
        //Writing to output file - LPCM will be converted to AAC
        ExtAudioFileWrite(mixFile, framesToRead, &mixBuffer);
    }
    //Closing files
    ExtAudioFileDispose(micFile);
    ExtAudioFileDispose(speakerFile);
    ExtAudioFileDispose(mixFile);
    //Freeing audio buffers
    free(micBuffer.mBuffers[0].mData);
    free(speakerBuffer.mBuffers[0].mData);
    free(mixBuffer.mBuffers[0].mData);
}
void Cleanup()
{
    [[NSFileManager defaultManager] removeItemAtPath:kMicFilePath error:NULL];
    [[NSFileManager defaultManager] removeItemAtPath:kSpeakerFilePath error:NULL];
}
void CoreTelephonyNotificationCallback(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo)
{
    NSDictionary* data = (NSDictionary*)userInfo;
    if ([(NSString*)name isEqualToString:(NSString*)kCTCallStatusChangeNotification])
    {
        int currentCallStatus = [data[(NSString*)kCTCallStatus] integerValue];
        if (currentCallStatus == kCTCallStatusActive)
        {
            OSSpinLockLock(&phoneCallIsActiveLock);
            phoneCallIsActive = YES;
            OSSpinLockUnlock(&phoneCallIsActiveLock);
        }
        else if (currentCallStatus == kCTCallStatusHanged)
        {
            if (CTGetCurrentCallCount() > 0)
            {
                return;
            }
            OSSpinLockLock(&phoneCallIsActiveLock);
            phoneCallIsActive = NO;
            OSSpinLockUnlock(&phoneCallIsActiveLock);
            //Closing mic file
            OSSpinLockLock(&micLock);
            if (micFile != NULL)
            {
                ExtAudioFileDispose(micFile);
            }
            micFile = NULL;
            OSSpinLockUnlock(&micLock);
            //Closing speaker file
            OSSpinLockLock(&speakerLock);
            if (speakerFile != NULL)
            {
                ExtAudioFileDispose(speakerFile);
            }
            speakerFile = NULL;
            OSSpinLockUnlock(&speakerLock);
            Convert();
            Cleanup();
        }
    }
}
OSStatus(*AudioUnitProcess_orig)(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData);
OSStatus AudioUnitProcess_hook(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    OSSpinLockLock(&phoneCallIsActiveLock);
    if (phoneCallIsActive == NO)
    {
        OSSpinLockUnlock(&phoneCallIsActiveLock);
        return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
    }
    OSSpinLockUnlock(&phoneCallIsActiveLock);
    ExtAudioFileRef* currentFile = NULL;
    OSSpinLock* currentLock = NULL;
    AudioComponentDescription unitDescription = {0};
    AudioComponentGetDescription(AudioComponentInstanceGetComponent(unit), &unitDescription);
    //'agcc', 'mbdp' - iPhone 4S, iPhone 5
    //'agc2', 'vrq2' - iPhone 5C, iPhone 5S
    if (unitDescription.componentSubType == 'agcc' || unitDescription.componentSubType == 'agc2')
    {
        currentFile = &micFile;
        currentLock = &micLock;
    }
    else if (unitDescription.componentSubType == 'mbdp' || unitDescription.componentSubType == 'vrq2')
    {
        currentFile = &speakerFile;
        currentLock = &speakerLock;
    }
    if (currentFile != NULL)
    {
        OSSpinLockLock(currentLock);
        //Opening file
        if (*currentFile == NULL)
        {
            //Obtaining input audio format
            AudioStreamBasicDescription desc;
            UInt32 descSize = sizeof(desc);
            AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &desc, &descSize);
            //Opening audio file
            CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)((currentFile == &micFile) ? kMicFilePath : kSpeakerFilePath), kCFURLPOSIXPathStyle, false);
            ExtAudioFileRef audioFile = NULL;
            OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL, kAudioFileFlags_EraseFile, &audioFile);
            if (result != 0)
            {
                *currentFile = NULL;
            }
            else
            {
                *currentFile = audioFile;
                //Writing audio format
                ExtAudioFileSetProperty(*currentFile, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
            }
            CFRelease(url);
        }
        else
        {
            //Writing audio buffer
            ExtAudioFileWrite(*currentFile, inNumberFrames, ioData);
        }
        OSSpinLockUnlock(currentLock);
    }
    return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
}
__attribute__((constructor))
static void initialize()
{
    CTTelephonyCenterAddObserver(CTTelephonyCenterGetDefault(), NULL, CoreTelephonyNotificationCallback, NULL, NULL, CFNotificationSuspensionBehaviorHold);
    MSHookFunction(AudioUnitProcess, AudioUnitProcess_hook, &AudioUnitProcess_orig);
}
A few words about what's going on. The AudioUnitProcess function is used to process audio streams: applying effects, mixing, converting and so on. We hook AudioUnitProcess in order to get access to the phone call's audio streams; while a call is active, those streams are being processed in various ways.
We listen for CoreTelephony notifications to track phone call status changes. When we receive audio samples we need to determine where they come from - microphone or speaker. This is done using the componentSubType field of the AudioComponentDescription structure. Now, you might wonder why we don't just store the AudioUnit objects so that we don't need to check componentSubType every time. I tried that, but it breaks everything when you switch the speaker on or off on an iPhone 5, because the AudioUnit objects change - they are recreated. So now we open two audio files (one for the microphone, one for the speaker) and write samples into them, simple as that. When the phone call ends we receive the appropriate CoreTelephony notification and close the files. That leaves two separate files, with audio from the microphone and from the speaker, that we need to merge; this is what Convert() is for. It's pretty simple if you know the API - I don't think I need to explain it, the comments are enough.
About the locks. There are many threads in mediaserverd; audio processing and CoreTelephony notifications run on different threads, so we need some kind of synchronization. I chose spin locks because they are fast and because the chance of lock contention is small in our case. On the iPhone 4S, and even the iPhone 5, all the work in AudioUnitProcess should be done as fast as possible, otherwise you will hear hiccups from the device speaker, which is obviously not good.
Yes. Audio Recorder by a developer named Limneos does that (and quite well). You can find it on Cydia. It can record any type of call on the iPhone 5 and up without using any servers; the call is saved on the device as an audio file. It also supports the iPhone 4S, but for the speaker only.
This tweak is known to be the first that managed to record both audio streams without using any third-party servers, VoIP, or anything similar.
The developer placed beeps on the other side of the call to alert the person you are recording, but even those were removed by hackers across the net. To answer your question: yes, it's very much possible, and not just theoretically.
Further reading
https://stackoverflow.com/a/19413363/202451
http://forums.macrumors.com/showthread.php?t=1566350
https://github.com/nst/iOS-Runtime-Headers
The only solution I can think of is to use the Core Telephony framework, and more specifically the callEventHandler property, to intercept when a call is coming in, and then to use an AVAudioRecorder to record the voice of the person with the phone (and maybe a little of the person on the other line's voice). This is obviously not perfect, and would only work if your application is in the foreground at the time of the call, but it may be the best you can get. See more about finding out if there is an incoming phone call here: Can we fire an event when ever there is Incoming and Outgoing call in iphone?.
EDIT:
.h:
#import <AVFoundation/AVFoundation.h>
#import <CoreTelephony/CTCallCenter.h>
#import <CoreTelephony/CTCall.h>
@property (strong, nonatomic) AVAudioRecorder *audioRecorder;
ViewDidLoad:
NSArray *dirPaths;
NSString *docsDir;
dirPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
docsDir = dirPaths[0];
NSString *soundFilePath = [docsDir stringByAppendingPathComponent:@"sound.caf"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey, //16 is a bit depth; AVEncoderBitRateKey with 16 would be nonsensical
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    nil];
NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc]
    initWithURL:soundFileURL
    settings:recordSettings
    error:&error];
if (error)
{
    NSLog(@"error: %@", [error localizedDescription]);
}
else
{
    [_audioRecorder prepareToRecord];
}
//Note: keep a strong reference to the call center (e.g. in a property);
//if this local is deallocated, the event handler will never fire.
CTCallCenter *callCenter = [[CTCallCenter alloc] init];
[callCenter setCallEventHandler:^(CTCall *call) {
    if ([[call callState] isEqual:CTCallStateConnected])
    {
        [_audioRecorder record];
    }
    else if ([[call callState] isEqual:CTCallStateDisconnected])
    {
        [_audioRecorder stop];
    }
}];
AppDelegate.m:
//Makes sure the recording keeps happening even when the app is in the
//background, though only for about 10 minutes.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    __block UIBackgroundTaskIdentifier task = 0;
    task = [application beginBackgroundTaskWithExpirationHandler:^{
        NSLog(@"Expiration handler called %f", [application backgroundTimeRemaining]);
        [application endBackgroundTask:task];
        task = UIBackgroundTaskInvalid;
    }];
}
This is my first time using many of these features, so I'm not sure this is exactly right, but I think you get the idea. It is untested, as I don't have access to the right tools at the moment. Compiled from these sources:
Recording voice in background using AVAudioRecorder
http://prassan-warrior.blogspot.com/2012/11/recording-audio-on-iphone-with.html
Can we fire an event when ever there is Incoming and Outgoing call in iphone?
Apple does not allow it and does not provide any API for it.
However, on a jailbroken device I'm sure it's possible. As a matter of fact, I think it's already done. I remember seeing an app when my phone was jailbroken that changed your voice and recorded the call - I remember it was a US company offering it only in the states. Unfortunately I don't remember the name...
I guess some hardware could solve this: something connected to the minijack port, with earbuds and a microphone passing through a small recorder. The recorder could be very simple. While not in a conversation, it could feed the phone the data/recording back through the jack cable, and a simple start button (just like the volume controls on the earbuds) could be sufficient for timing the recording.
Some setups
http://www.danmccomb.com/posts/483/how-to-record-iphone-conversations-using-zoom-h4n/
http://forums.macrumors.com/showthread.php?t=346430