I'm trying to get stereo samples from the digital line in on the iPhone/iPad.
To get a stereo line-in, I use the Mickey Blue microphone "Addon" for iOS.
See This.
I use the Aruts iOS CoreAudio example from Here.
This works perfectly for getting mono samples from the line in, but I can't figure out the correct setup for stereo.
If anyone can point me in the right direction, that would be really helpful.
I need to process the samples directly, so writing to a file first is not an option.
I guess you could use the AVAudioRecorder like this:
// Configure and activate the shared audio session for recording
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[session setActive:YES error:nil];
// Recorder settings: AAC at 44.1 kHz
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
// This should enable stereo
[recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
// outputFileURL is assumed to be an NSURL for the destination file
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:NULL];
recorder.delegate = self;
[recorder prepareToRecord];
[recorder record];
Found the solution:
For some reason, the AudioStreamBasicDescription will only accept 32 bits per channel in stereo mode. The code below enables getting the samples in stereo (currently only with the additional hardware).
Here is the audio format I used to get it to work:
AudioStreamBasicDescription stereoStreamFormat;
stereoStreamFormat.mSampleRate = 44100.00;
stereoStreamFormat.mFormatID = kAudioFormatLinearPCM;
// The audio-unit canonical flags mean non-interleaved 8.24 fixed-point,
// so each channel gets its own buffer with 4 bytes per frame.
stereoStreamFormat.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical;
stereoStreamFormat.mBytesPerPacket = 4;
stereoStreamFormat.mBytesPerFrame = 4;
stereoStreamFormat.mFramesPerPacket = 1;
stereoStreamFormat.mChannelsPerFrame = 2;
stereoStreamFormat.mBitsPerChannel = 32;
Furthermore, I use a separate buffer for each channel and a modified AudioBufferList struct:
struct AudioBufferListStereo
{
    UInt32 mNumberBuffers;
    AudioBuffer mBuffers[2]; // <-- hardcoded to 2 channels, because the standard variable-length iOS version did not work as expected
#if defined(__cplusplus) && CA_STRICT
public:
    AudioBufferListStereo() {}
private:
    // Copying and assigning a variable-length struct is problematic, so turn
    // their use into a compile-time error for easy spotting.
    AudioBufferListStereo(const AudioBufferListStereo&);
    AudioBufferListStereo& operator=(const AudioBufferListStereo&);
#endif
};
typedef struct AudioBufferListStereo AudioBufferListStereo;
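For completeness, here is a sketch (not my full source, which is longer) of how this buffer list might be fed to AudioUnitRender inside the RemoteIO input callback; it assumes the RemoteIO unit itself was passed as inRefCon:

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;

    // Two non-interleaved buffers, one per channel, 4 bytes per frame each
    // (matching mBytesPerFrame above). A real app would preallocate these
    // instead of calling malloc on the render thread.
    AudioBufferListStereo bufferList;
    bufferList.mNumberBuffers = 2;
    for (UInt32 i = 0; i < 2; i++) {
        bufferList.mBuffers[i].mNumberChannels = 1;
        bufferList.mBuffers[i].mDataByteSize = inNumberFrames * sizeof(SInt32);
        bufferList.mBuffers[i].mData = malloc(inNumberFrames * sizeof(SInt32));
    }

    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames,
                                   (AudioBufferList *)&bufferList);

    if (err == noErr) {
        // mBuffers[0].mData now holds the left-channel samples and
        // mBuffers[1].mData the right, as 8.24 fixed-point values.
        // Process them here.
    }

    for (UInt32 i = 0; i < 2; i++) {
        free(bufferList.mBuffers[i].mData);
    }
    return err;
}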
The complete source is quite long, so I'll post it on GitHub if anyone requires it.
Related
I am using AVAudioSession. I know this may sound like insanity, but is there any way to access the input and output data for a microphone or speaker? For example, I want to print out all the audio data streaming into the microphone, in bytes.
Is there any way to do this? I looked into Core Audio, but I'm not familiar enough with it to use it to print out both the input and the output.
Thanks!
AVAudioSession is only the very first step in getting iOS to recognize that your app wants to do something with the mic. I'm assuming that by "print" you mean displaying the audio data visually or appending it to a file.
If you are looking to record audio to a file you could use the AVAudioRecorder class:
//What are our output settings?
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt: 2] forKey:AVNumberOfChannelsKey];
// Create the recorder where recorder is declared as a property of the class & outputFileURL is an NSURL to a file path you want to record to
recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:NULL];
recorder.delegate = self;
[recorder prepareToRecord];
// Tell the system you are going to use the mic
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:0 error:nil];
[session setActive:YES error:nil];
// Start recording
[recorder record];
// When you are done, call:
[recorder stop];
NSURL *savedFileURL = recorder.url;
If you want to use Core Audio so that you can get each buffer and decide what to do with it, you're going to have to take a step back and learn the structure of Core Audio. Core Audio allows you to be more flexible about the routing and the design of your audio system. It can't be explained in one post, but if you are going that way, one method of getting data from the mic is to (see the sketch after this list):
Create an AudioUnit from an AudioComponent (Type: kAudioUnitType_Output, Subtype: kAudioUnitSubType_RemoteIO)
Enable IO on the audioUnit
Set the kAudioUnitProperty_StreamFormat property of the audioUnit to an AudioStreamBasicDescription (most likely a PCM format)
Intercept data using an AURenderCallbackStruct
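Here is a minimal sketch of those four steps, assuming a mono 16-bit PCM format at 44.1 kHz; the names SetupRemoteIO and InputCallback are mine, and error checking is omitted for brevity:

#import <AudioToolbox/AudioToolbox.h>

static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    AudioUnit rioUnit = *(AudioUnit *)inRefCon;

    // Point a one-buffer list at a stack array and pull this slice of
    // mic samples out of the audio unit.
    SInt16 samples[4096];
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData = samples;

    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, &bufferList);
    // samples[0..inNumberFrames-1] now holds the PCM data; print it,
    // append it to a file, or hand it to your own processing code.
    return err;
}

void SetupRemoteIO(AudioUnit *rioUnit)
{
    // 1. Create an AudioUnit from an AudioComponent.
    AudioComponentDescription desc = { 0 };
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), rioUnit);

    // 2. Enable IO for recording on input bus 1 (the mic side).
    UInt32 one = 1;
    AudioUnitSetProperty(*rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));

    // 3. Describe the PCM format we want the unit to deliver.
    AudioStreamBasicDescription fmt = { 0 };
    fmt.mSampleRate = 44100.0;
    fmt.mFormatID = kAudioFormatLinearPCM;
    fmt.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mBytesPerPacket = 2;
    fmt.mFramesPerPacket = 1;
    fmt.mBytesPerFrame = 2;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel = 16;
    AudioUnitSetProperty(*rioUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));

    // 4. Intercept data with an AURenderCallbackStruct.
    AURenderCallbackStruct cb = { InputCallback, rioUnit };
    AudioUnitSetProperty(*rioUnit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, 1, &cb, sizeof(cb));

    AudioUnitInitialize(*rioUnit);
    AudioOutputUnitStart(*rioUnit);
}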
I've searched for supported audio file formats, but I've found only the formats below.
kAudioFormatLinearPCM = 'lpcm',
kAudioFormatAC3 = 'ac-3',
kAudioFormat60958AC3 = 'cac3',
kAudioFormatAppleIMA4 = 'ima4',
kAudioFormatMPEG4AAC = 'aac ',
kAudioFormatMPEG4CELP = 'celp',
kAudioFormatMPEG4HVXC = 'hvxc',
kAudioFormatMPEG4TwinVQ = 'twvq',
kAudioFormatMACE3 = 'MAC3',
kAudioFormatMACE6 = 'MAC6',
kAudioFormatULaw = 'ulaw',
kAudioFormatALaw = 'alaw',
kAudioFormatQDesign = 'QDMC',
kAudioFormatQDesign2 = 'QDM2',
kAudioFormatQUALCOMM = 'Qclp',
kAudioFormatMPEGLayer1 = '.mp1',
kAudioFormatMPEGLayer2 = '.mp2',
kAudioFormatMPEGLayer3 = '.mp3',
kAudioFormatTimeCode = 'time',
kAudioFormatMIDIStream = 'midi',
kAudioFormatParameterValueStream = 'apvs',
kAudioFormatAppleLossless = 'alac',
kAudioFormatMPEG4AAC_HE = 'aach',
kAudioFormatMPEG4AAC_LD = 'aacl',
kAudioFormatMPEG4AAC_ELD = 'aace',
kAudioFormatMPEG4AAC_ELD_SBR = 'aacf',
kAudioFormatMPEG4AAC_ELD_V2 = 'aacg',
kAudioFormatMPEG4AAC_HE_V2 = 'aacp',
kAudioFormatMPEG4AAC_Spatial = 'aacs',
kAudioFormatAMR = 'samr',
kAudioFormatAudible = 'AUDB',
kAudioFormatiLBC = 'ilbc',
kAudioFormatDVIIntelIMA = 0x6D730011,
kAudioFormatMicrosoftGSM = 0x6D730031,
kAudioFormatAES3 = 'aes3'
I want to record audio in the Speex format, because it's good enough for recording speech. Is it possible with the Core Audio framework?
Ref: I've found this lib, but it's too complicated a process. I want to use the simple controls of AVAudioPlayer or AVAudioRecorder.
I once had to record audio in mp3 format on the iPhone. As you can see, it is not in your list. I ended up recording Linear PCM and then converting it to mp3, and I think this pattern is the only one you can use; no standard methods will help you here, as far as I'm concerned.
Recording in any format other than AAC is a lot of pain on iOS.
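For illustration, here's a sketch of the first half of that pattern: recording Linear PCM with AVAudioRecorder (the mp3 conversion step would be done by a third-party encoder such as LAME and is not shown; the file name is made up):

// Record uncompressed Linear PCM into a .caf file.
NSDictionary *pcmSettings = @{
    AVFormatIDKey             : @(kAudioFormatLinearPCM),
    AVSampleRateKey           : @44100.0f,
    AVNumberOfChannelsKey     : @2,
    AVLinearPCMBitDepthKey    : @16,
    AVLinearPCMIsBigEndianKey : @NO,
    AVLinearPCMIsFloatKey     : @NO
};
NSURL *pcmURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.caf"]];
AVAudioRecorder *pcmRecorder =
    [[AVAudioRecorder alloc] initWithURL:pcmURL settings:pcmSettings error:NULL];
[pcmRecorder prepareToRecord];
[pcmRecorder record];
// ...later: stop, then feed the PCM file to the mp3 encoder of your choice.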
I've used OGG Vorbis for audio playback, which is similar to OGG Speex; for this I used a library available here: IDZAQAudioPlayer.
For OGG Speex audio recording, I once used this library: OGGSpeex.
If you need further info on implementing it, or anything else, feel free to ask.
To really reduce the size of an audio recording, you can use the audio format MPEG4AAC, which creates roughly a 135 KB file for a 60-second recording.
Sure, here's code for AAC recording:
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:8000.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
[recordSetting setValue:[NSNumber numberWithInt:AVAudioQualityLow] forKey:AVEncoderAudioQualityKey];
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:nil];
recorder.delegate = self;
recorder.meteringEnabled = YES;
[recorder prepareToRecord];
[recorder record]; // don't forget to actually start recording
I have an AudioPlayer implemented in an app. As soon as I call [AVAudioRecorder prepareToRecord]; the CPU usage goes up to 5%. Fine, because something useful is happening. But after [AVAudioRecorder stop]; is called, CPU drops to 3% and stays there.
This is the code where I implement the recorder:
- (void)prepareRecorder
{
outputFileString = [NSString stringWithFormat:@"%@/%@", [self applicationDocumentsDirectory], @"audio.m4a"];
NSArray *pathComponents = [NSArray arrayWithObjects:[self applicationDocumentsDirectory], @"audio.m4a", nil];
outputFileURL = [NSURL fileURLWithPathComponents:pathComponents];
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setActive:YES error:nil];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:nil];
recorder.delegate = self;
recorder.meteringEnabled = YES;
[recorder prepareToRecord];
}
I don't know if it's possible to release/dealloc an object under ARC, but I tried:
recorder = nil;
How can I solve this? Or shouldn't I bother about the 3%?
If, as you say, it stays at 3% no matter how many times you start and stop recording, then there is no leak. More likely, the sound-recording framework has cached some private recording instances (something located behind the hood of the Apple frameworks), one reason being so that the next recording starts faster than the first. These cached instances would be deallocated once the recorder is no longer used AND extra memory is needed.
I think it is similar with AVPlayer and MFMessageComposeViewController (and probably many more heavy-to-run frameworks/APIs): for both of these, the first time I try to show an SMS form or play a video there is a noticeable delay; after that, there is none. So on the first use it probably loads something important into a cache, which can be auto-purged when little memory is available or the application is closed.
So I think it is perfectly fine not to bother about this.
I'm running sample code on audio recording (source code downloadable at the end of the article). The code is like this:
NSArray *pathComponents = [NSArray arrayWithObjects:
[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject],
#"MyAudioMemo.m4a",
nil];
NSURL *outputFileURL = [NSURL fileURLWithPathComponents:pathComponents];
// Setup audio session
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
// Define the recorder setting
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt: 2] forKey:AVNumberOfChannelsKey];
// Initiate and prepare the recorder
recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:nil];
recorder.delegate = self;
recorder.meteringEnabled = YES;
[recorder prepareToRecord];
However, it throws an exception at prepareToRecord while running on the simulator.
It does record, and it runs fine if I just turn off the exception breakpoint, but this is annoying. What's wrong?
I'm getting this exception too, and searching for a solution. It's important to note that you can only see these exceptions if you enable exception breakpoints. The app performs properly, but it is concerning. I'll keep investigating.
EDIT:
Well, the answer is here: AVAudioPlayer throws breakpoint in debug mode
I figured that much myself, but still feel uncomfortable about having these exceptions. Using a different audio format while recording makes the exceptions go away; however, I want to use the lossless format, which does cause these exceptions. Hope this helps.
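If you want to see the difference yourself, a settings dictionary along these lines (illustrative values, not taken from the linked answer) is the kind of lossless configuration in question:

// Apple Lossless settings; prepareToRecord hits an internal (caught)
// exception with these, which only surfaces under an exception breakpoint.
NSDictionary *losslessSettings = @{
    AVFormatIDKey            : @(kAudioFormatAppleLossless),
    AVSampleRateKey          : @44100.0f,
    AVNumberOfChannelsKey    : @2,
    AVEncoderAudioQualityKey : @(AVAudioQualityMax)
};
// Switching AVFormatIDKey to @(kAudioFormatMPEG4AAC) makes it go away.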
I want to record a virtual instrument in iOS that creates buffers which I can stream later on, but for this I need to create a recording buffer. I have a virtual piano that plays an mp3 file on a button click, but I really don't have any idea how to internally record the pieces the user plays. I have used microphone recording for this, but it's very noisy and doesn't give a clean recording. Apps like GarageBand have such a feature, so I don't think it's impossible. Can anyone guide me through this query?
I have tried AVAudioEngine, but the sample code doesn't work for some reason: the code runs, but nothing happens when I push the play button and no audio is played.
Add
#import <AVFoundation/AVFoundation.h>
#import <MediaPlayer/MediaPlayer.h>
to your project
and adopt the AVAudioRecorderDelegate protocol in your .h file, along with a property for the recorder:
@property (nonatomic, strong) AVAudioRecorder *Audiorecorder;
In the .m file (here self.audioFileName is an NSString), this is the snippet of code:
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:16000.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
// Now that we have our settings, we are going to instantiate our recorder.
// Build a timestamped file name so each recording gets its own file.
NSDateFormatter *formatter = [[NSDateFormatter alloc] init];
[formatter setDateFormat:@"dd_MM_yyyy_HH_mm_ss"];
self.audioFileName = [NSString stringWithFormat:@"%@.caf", [formatter stringFromDate:[NSDate date]]];
NSString *path = [documentsDirectory stringByAppendingPathComponent:self.audioFileName];
NSLog(@"%@", path);
NSURL *recordedTmpFile = [[NSURL alloc] initFileURLWithPath:path];
NSLog(@"%@", recordedTmpFile);
// Set up the recorder to use this file and record to it.
if (self.Audiorecorder != nil)
{
    [self.Audiorecorder stop];
    self.Audiorecorder = nil;
}
NSError *error;
self.Audiorecorder = [[AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
//NSLog(@"Error : %@", error);
[self.Audiorecorder setDelegate:self];
self.Audiorecorder.meteringEnabled = YES; // enable metering before recording starts
[self.Audiorecorder prepareToRecord];
// Start the actual recording
[self.Audiorecorder record];
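To finish up, you'd stop the recorder and pick the file up in the delegate callback; a short sketch (the delegate method comes from AVAudioRecorderDelegate, and both methods are assumed to live in the same class):

// Stop recording; the delegate is notified once the file is written out.
- (void)stopRecording
{
    [self.Audiorecorder stop];
}

// AVAudioRecorderDelegate callback, fired when the file is finalized.
- (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)avrecorder
                           successfully:(BOOL)flag
{
    if (flag) {
        NSLog(@"Saved recording at %@", avrecorder.url);
    }
}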