WP8 / iOS audio recording and playback compatibility

I want to play audio recordings made on iOS on WP8, and vice versa.
On iOS I'm using AVAudioRecorder for that purpose, with the following configuration:
NSString *tempPath = NSTemporaryDirectory();
NSURL *soundFileURL = [NSURL fileURLWithPath:[tempPath stringByAppendingPathComponent:@"sound.aac"]];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:8000], AVEncoderBitRateKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:8000.0], AVSampleRateKey,
    [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
    nil];
NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                             settings:recordSettings
                                                error:&error];
_audioRecorder.delegate = self;
The files "sound.aac" contains the recording in the AAC container and playing recorded audio sample works well on iOS.
I couldn't play the "sound.aac" on WP8 after transfering the file to the WP8 device. According the following link: http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff462087(v=vs.105).aspx#BKMK_AudioSupport WP8 should be able to play the file.
The code I've used on WP8 is:
try
{
    this.mediaPlayer = new MediaElement();
    mediaPlayer.MediaEnded += new RoutedEventHandler(mediaPlayer_MediaEnded);
    IsolatedStorageFile myStore = IsolatedStorageFile.GetUserStoreForApplication();
    IsolatedStorageFileStream mediaStream = myStore.OpenFile("sound.aac", FileMode.Open, FileAccess.Read);
    this.mediaPlayer.SetSource(mediaStream);
    this.messageTextBlock.Text = "Playing the message...";
    mediaPlayer.Play();
}
catch (Exception exception)
{
    MessageBox.Show("Error playing audio!");
    Debug.WriteLine(exception);
    return;
}
After this, "sound.aac" appears to play endlessly, with no sound coming from the speaker. The message "Playing the message..." is shown, no exception is thrown, and mediaPlayer_MediaEnded is never called. All I can do is stop the playback.
I don't know how to get it working.
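(One avenue worth trying, suggested by an answer further down this page that records to sound.m4a: write into an MPEG-4 (.m4a) container instead of a raw .aac file, since MediaElement may handle containerized AAC better than a raw AAC stream. A minimal sketch of that variant; whether it fixes WP8 playback is an assumption, not something verified here:
// Sketch: same recorder as above, but writing into an MPEG-4 (.m4a) container.
NSString *tempPath = NSTemporaryDirectory();
NSURL *soundFileURL = [NSURL fileURLWithPath:[tempPath stringByAppendingPathComponent:@"sound.m4a"]];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:8000.0], AVSampleRateKey,
    nil];
NSError *error = nil;
_audioRecorder = [[AVAudioRecorder alloc] initWithURL:soundFileURL
                                             settings:recordSettings
                                                error:&error];)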

Related

Audio streaming with AVFoundation using Audio Queues/buffers in iOS

I need to do audio streaming in an iOS app using Objective-C. I have used the AVFoundation framework to capture raw data from the microphone and send it to a server. However, the raw data I am receiving is corrupt. Below is my code.
Please suggest where I am going wrong.
session = [[AVCaptureSession alloc] init];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
    [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:32], AVLinearPCMBitDepthKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
    [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
    nil];
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
[session addInput:audioInput];
AVCaptureAudioDataOutput *audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t audioQueue = dispatch_queue_create("AudioQueue", NULL);
[audioDataOutput setSampleBufferDelegate:self queue:audioQueue];
AVAssetWriterInput *_assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:recordSettings];
_assetWriterVideoInput.performsMultiPassEncodingIfSupported = YES;
if ([session canAddOutput:audioDataOutput]) {
    [session addOutput:audioDataOutput];
}
[session startRunning];
Capturing:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    AudioBufferList audioBufferList;
    NSMutableData *data = [NSMutableData data];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32 *)audioBuffer.mData;
        [data appendBytes:frame length:audioBuffer.mDataByteSize];
        NSString *base64Encoded = [data base64EncodedStringWithOptions:0];
        NSLog(@"Encoded: %@", base64Encoded);
    }
    CFRelease(blockBuffer);
}
I posted a sample of the kind of code you need to make this work. Its approach is nearly the same as yours. You should be able to read it easily.
The app uses AudioUnit to record and playback microphone input and speaker output, NSNetServices to connect two iOS devices on your network, and NSStreams to send an audio stream between the devices.
You can download the source code at:
https://drive.google.com/open?id=1tKgVl0X92SYvgpvbljRzilXNQ6iBcjqM
It requires the latest Xcode 9 beta release to compile, and the latest iOS 11 beta release to run it.
NOTE | A log entry for each method call and event is displayed in a text field that fills the entire screen; there is no interactive interface (no buttons, etc.). After installing the app on two iOS devices, simply launch it on both devices to automatically connect to your network and start streaming audio.
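For flavor, here is a minimal sketch of the NSNetServices wiring that approach relies on (this is not the downloaded code; the service type, names, and run-loop scheduling are assumptions):
// Sketch: publish a Bonjour service that accepts a stream connection.
self.service = [[NSNetService alloc] initWithDomain:@"local."
                                               type:@"_audiostream._tcp."
                                               name:@""
                                               port:0];
self.service.delegate = self;
[self.service publishWithOptions:NSNetServiceListenForConnections];

// NSNetServiceDelegate: called when a peer connects; audio bytes then flow over these streams.
- (void)netService:(NSNetService *)sender
    didAcceptConnectionWithInputStream:(NSInputStream *)inputStream
                          outputStream:(NSOutputStream *)outputStream {
    [inputStream setDelegate:self];
    [inputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    [inputStream open];
    [outputStream setDelegate:self];
    [outputStream scheduleInRunLoop:[NSRunLoop mainRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream open];
}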

AVAudioRecorder not saving recording

I am making an iOS game. One of the things I need to do is let the user make a quick little audio recording. This all works, but the recording is only saved temporarily: when the user closes and reopens the app, the recording should play again, but it doesn't; it gets deleted when I close the app. I don't understand what I am doing wrong. Below is my code.
I set up the AVAudioRecorder in the viewDidLoad method like this:
// Setup audio recorder to save file.
NSArray *pathComponents = [NSArray arrayWithObjects:[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject], @"MyAudioMemo.m4a", nil];
NSURL *outputFileURL = [NSURL fileURLWithPathComponents:pathComponents];
// Setup audio session.
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
NSMutableDictionary *recordSetting = [[NSMutableDictionary alloc] init];
[recordSetting setValue:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSetting setValue:[NSNumber numberWithFloat:44100.0] forKey:AVSampleRateKey];
[recordSetting setValue:[NSNumber numberWithInt:2] forKey:AVNumberOfChannelsKey];
audio_recorder = [[AVAudioRecorder alloc] initWithURL:outputFileURL settings:recordSetting error:nil];
audio_recorder.delegate = self;
audio_recorder.meteringEnabled = YES;
[audio_recorder prepareToRecord];
I have got the AVAudio delegate methods, too:
- (void)audio_playerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag {
    NSLog(@"Did finish playing: %d", flag);
}
- (void)audio_playerDecodeErrorDidOccur:(AVAudioPlayer *)player error:(NSError *)error {
    NSLog(@"Decode Error occurred");
}
- (void)audio_recorderDidFinishRecording:(AVAudioPlayer *)recorder successfully:(BOOL)flag {
    NSLog(@"Did finish recording: %d", flag);
}
- (void)audio_recorderEncodeErrorDidOccur:(AVAudioPlayer *)recorder error:(NSError *)error {
    NSLog(@"Encode Error occurred");
}
To play, record, or stop the audio, I have made the following IBActions, which are linked to UIButtons:
- (IBAction)play_audio {
    NSLog(@"Play");
    if (!audio_recorder.recording) {
        audio_player = [[AVAudioPlayer alloc] initWithContentsOfURL:audio_recorder.url error:nil];
        [audio_player setDelegate:self];
        [audio_player play];
    }
}
- (IBAction)record_voice {
    NSLog(@"Record");
    if (!audio_recorder.recording) {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setActive:YES error:nil];
        // Start recording.
        [audio_recorder record];
    } else {
        // Pause recording.
        [audio_recorder pause];
    }
}
- (IBAction)stop_audio {
    NSLog(@"Stop");
    [audio_recorder stop];
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setActive:NO error:nil];
}
If you try my code you will see that it works, but it only seems to save the audio file temporarily.
What am I doing wrong? I thought I had used all the correct AVAudioRecorder methods?
To make a working recorder and save the recorded files, you need to:
Create a new audio session
Make sure microphone is connected/working
Start recording
Stop recording
Save the recorded audio file
Play the saved voice file
You're missing step 5 in your code, so the file you've just recorded is still available for you to play, but once you close the app you lose it, because it was never saved into an actual file somewhere in the app's directories. You should add a method that saves the recorded audio into a file, so you can access it any time later:
- (void)saveAudioFileNamed:(NSString *)filename {
    destinationString = [[self documentsPath] stringByAppendingPathComponent:filename];
    NSLog(@"%@", destinationString);
    NSURL *destinationURL = [NSURL fileURLWithPath:destinationString];
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
        [NSNumber numberWithInt:kAudioFormatAppleLossless], AVFormatIDKey,
        [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
        [NSNumber numberWithInt:AVAudioQualityMax], AVEncoderAudioQualityKey,
        nil];
    NSError *error;
    audio_recorder = [[AVAudioRecorder alloc] initWithURL:destinationURL settings:settings error:&error];
    audio_recorder.delegate = self;
}
Unrelated to this problem, but a general point worth mentioning: you should follow Apple's (Objective-C's) naming conventions when defining variables and methods. A name like audio_recorder in no way follows these guidelines; you could use something like audioRecorder instead.
OK, so thanks to @Neeku for answering my question; it's certainly something to take into account, but it still didn't solve the problem. I was searching around and found an example that works perfectly and, more to the point, showed me that I was approaching this entire functionality the wrong way. I think another of my problems was that my app deleted the previously saved audio file in the viewDidLoad method, and I also don't think I used the AVAudioSession instances correctly.
Anyway, the example I found is very useful and solves my problem perfectly; you can find it here: Record audio and save permanently in iOS
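The gist of the fix (a minimal sketch, not the linked code itself; it reuses the file name and ivars from the question above): the Documents directory already persists across launches, so the key is to stop clobbering the saved recording at startup. prepareToRecord creates (and truncates) the file, so guard it behind an existence check:
// In viewDidLoad: only prepare a brand-new recording if no saved memo exists yet.
NSString *docsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *path = [docsDir stringByAppendingPathComponent:@"MyAudioMemo.m4a"];
if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
    // A previous recording survived the restart; load it instead of re-recording over it.
    audio_player = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:nil];
} else {
    // No saved memo yet; set up the recorder as in the question.
    [audio_recorder prepareToRecord];
}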
Hope that helps anyone else who is having a similar problem to me.

Audio recording formats in iOS

Which audio format is smallest in size for speech recording on iOS? The quality need not be the best, but what the user says should be understandable.
Assuming you plan to use the AVAudioRecorder class, you should provide recording settings like so:
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:16], AVEncoderBitRateKey,
    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    nil];
NSError *error = nil;
AVAudioRecorder *audioRecorder = [[AVAudioRecorder alloc]
    initWithURL:soundFileURL
       settings:recordSettings
          error:&error];
Apple's documentation provides details about the settings constants (specifically AVEncoderAudioQualityKey) you could use in your app.
22.05 kHz mono is more than adequate for speech and is 1/4 the size of 44.1 kHz stereo at the same bit depth. You could likely even try dropping it down to 11.025 kHz.
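To make the size claim concrete (a worked example assuming uncompressed 16-bit PCM, i.e. 2 bytes per sample): 44,100 samples/s x 2 channels x 2 bytes = 176,400 bytes/s, while 22,050 x 1 x 2 = 44,100 bytes/s, exactly one quarter; 11,025 x 1 x 2 = 22,050 bytes/s brings it down to one eighth.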
Several iOS apps use the Speex encoder for lower-bit-rate speech. It's not built in, but open-source code is available to do the encoding.
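If you stick with the built-in encoders, a compact starting point is below (a sketch mirroring the mono/8 kHz AAC settings shown elsewhere on this page; treat the exact numbers as a starting point, not a recommendation verified for your app):
// Compact speech settings: mono AAC at a telephone-grade sample rate.
NSDictionary *speechSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:8000.0], AVSampleRateKey,
    nil];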

AVAudioRecorder - Proper MPEG4 AAC Recording Settings

I've got a live app, and an estimated 15% of users report that the record feature is not working. This doesn't happen on our test devices, but the reports show that the problem is that prepareToRecord returns NO. I've had trouble finding sample settings for the AAC format. Are any of my settings off? The app requires iOS 5 and uses ARC.
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:AVAudioQualityHigh], AVSampleRateConverterAudioQualityKey,
    [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
    [NSNumber numberWithInt:16], AVEncoderBitDepthHintKey,
    nil];
NSString *fileName = [NSString stringWithFormat:@"%@%@.caf", verseGUID, bRecordingReference ? @"_ref" : @""];
NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%@", [[Utilities sharedInstance] documentsDirectoryPath], fileName]];
NSError *error = nil;
audioRecorder = [[AVAudioRecorder alloc] initWithURL:url settings:recordSettings error:&error];
if ([audioRecorder prepareToRecord]) {
    [audioRecorder record];
} else {
    int errorCode = CFSwapInt32HostToBig([error code]);
    NSLog(@"Error: %@ [%4.4s]", [error localizedDescription], (char *)&errorCode);
}
It could be many things that have nothing to do with the recording settings.
The real question you want answered seems to be: what would cause the recording not to occur?
audioRecorder could be nil, or prepareToRecord could be returning NO; the former seems more likely.
The url passed to initWithURL could be malformed:
- Have you tested by playing with the verseGUID and bRecordingReference values? Maybe your devices never have a bad verseGUID, but the devices on which no recording happens have a nil/empty verseGUID. This could cause the filename to be simply ".caf".
- You seem to have your own class method [Utilities sharedInstance]. Could this work for some reason on your devices but not on the failing devices? If so, you could be asking to record in a top-level directory when you did not mean to. (See the sketch below.)
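A quick way to test both theories in the field (a hedged sketch; the log strings are illustrative): validate each piece before constructing the recorder, and surface whichever one is broken:
// Defensive checks before creating the recorder, to pinpoint which input is bad.
if (verseGUID.length == 0) {
    NSLog(@"verseGUID is nil or empty; fileName would degenerate to \".caf\"");
}
NSString *docsPath = [[Utilities sharedInstance] documentsDirectoryPath];
if (docsPath.length == 0) {
    NSLog(@"documentsDirectoryPath returned nothing; url would point at a top-level directory");
}
if (audioRecorder == nil) {
    NSLog(@"AVAudioRecorder init failed: %@", error);
}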
Can you get the affected users onto a beta list? Sign up for something like TestFlight or HockeyApp, get one or more of the users who can't record to also sign up, and then upload a beta of your app with diagnostics that put a dialog on screen with the resulting error. That might be the most direct route. I use testflightapp.com only because it was the first one I tried; it was pretty easy for me to manage and pretty painless for my beta testers.
Try these settings, which I used to record in MPEG-4 AAC format. They work well:
NSString *tempDir = NSTemporaryDirectory();
NSString *soundFilePath = [tempDir stringByAppendingPathComponent:@"sound.m4a"];
NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];
NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithInt:AVAudioQualityMin], AVEncoderAudioQualityKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:8000.0], AVSampleRateKey,
    nil];
Apologies that this is slightly tangential, but I've just had a very similar issue, with a number of users not being able to record, which I thought was related to audio settings.
However, for me it turned out that they had denied permission to access the microphone. This meant that prepareToRecord was working (and truncating the file) and record was reporting success, but nothing was actually being recorded.
I now use this when I want to record:
AVAudioSession.sharedInstance().requestRecordPermission({ (granted: Bool) -> Void in
    if granted {
        // Can record - create AVAudioRecorder here.
    } else {
        // Can't record - update UI (or react accordingly).
    }
})
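Since the question (and most of this page) is Objective-C, the equivalent call there, available since iOS 7, looks like:
// Ask for microphone permission before creating the recorder.
[[AVAudioSession sharedInstance] requestRecordPermission:^(BOOL granted) {
    if (granted) {
        // Can record - create the AVAudioRecorder here.
    } else {
        // Can't record - update the UI (or react accordingly).
    }
}];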
Hope it saves someone some time if they run into a similar issue.

AVAssetWriter not recording audio

I'm having trouble getting audio recorded into a video using AVAssetWriter on the iPhone. I can record video from the camera no problem, but when I try to add audio I get nothing. Also, the durations displayed for the video in the Photos app are way out of whack; a 4-second video will show something like 15:54:01, and the number increases for every video made after it, even if the video is shorter. I've been trying to follow what I've seen in other questions here, but no luck.
Here's how I'm setting up my audio inputs:
captureSession = [[AVCaptureSession alloc] init];
// Set up audio input.
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
audioOutput = [[AVCaptureAudioDataOutput alloc] init];
if ([captureSession canAddOutput:audioOutput])
{
    [captureSession addOutput:audioOutput];
}
[audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
Here's how I'm setting up the AVAssetWriter:
videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
    [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
    [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
    nil];
And here is how I'm writing the audio sample buffers, using the CMSampleBufferRef sent by the audioOutput callback:
- (void)captureAudio:(CMSampleBufferRef)sampleBuffer
{
    if ([audioWriterInput isReadyForMoreMediaData]) {
        [audioWriterInput appendSampleBuffer:sampleBuffer];
    }
}
Would really appreciate any help, I've been stuck on this all day.
I don't see you calling [videoWriter startSessionAtSourceTime:], and you're discarding audio sample buffers whenever audioWriterInput isn't ready (which can't be what you want).
So your problem lies in the PTSs (presentation time stamps) of what you're writing out. Either tell the output that your timeline starts at a given time t with startSessionAtSourceTime:, or modify the buffers you append so they have zero-based presentation time stamps.
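A minimal sketch of the first option (assuming videoWriter and audioWriterInput are the ones set up in the question; the sessionStarted flag is a hypothetical ivar): anchor the writer's timeline to the PTS of the first buffer that arrives:
- (void)captureAudio:(CMSampleBufferRef)sampleBuffer
{
    // Anchor the writer's timeline to the first buffer's presentation time stamp.
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if (!sessionStarted) { // hypothetical BOOL ivar
        [videoWriter startWriting];
        [videoWriter startSessionAtSourceTime:pts];
        sessionStarted = YES;
    }
    if ([audioWriterInput isReadyForMoreMediaData]) {
        [audioWriterInput appendSampleBuffer:sampleBuffer];
    }
}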
