Check current or previous audio category (iOS)

I have an application with multiple AVAudioSession instances, each with its own audioRouteChangeCallback. I need a way of seeing what the current or previous audio category was in order to display the proper message to the user.
Does anyone know a way this can be done? From what I've seen on the web so far, my hopes are not high that this is possible. Any help is greatly appreciated because I'm working against a deadline.
Below is an example of one of my audioRouteChangeCallbacks:
- (void)audioRouteChangeListener:(NSNotification *)notification {
    // Initialize dictionary with notification and grab route change reason
    NSDictionary *interruptionDict = notification.userInfo;
    NSInteger routeChangeReason = [[interruptionDict valueForKey:AVAudioSessionRouteChangeReasonKey] integerValue];
    NSLog(@"MADE IT: sensorAudioRouteChangeListener");
    switch (routeChangeReason) {
        // Sensor inserted
        case AVAudioSessionRouteChangeReasonNewDeviceAvailable:
            // Start IO communication
            [self startCollecting];
            NSLog(@"Sensor INSERTED");
            break;
        // Sensor removed
        case AVAudioSessionRouteChangeReasonOldDeviceUnavailable:
            // Stop IO audio unit
            [self stopCollecting];
            NSLog(@"Sensor REMOVED");
            break;
        // Category changed from PlayAndRecord
        case AVAudioSessionRouteChangeReasonCategoryChange:
            // Stop IO audio unit
            [self stopCollecting];
            NSLog(@"Category CHANGED");
            break;
        default:
            NSLog(@"Blowing it in audioRouteChangeListener with route change reason: %ld",
                  (long)routeChangeReason);
            break;
    }
}
The session is initialized in the object's init method:
- (id)init {
    self = [super init];
    if (!self) return nil;
    // Set up AVAudioSession
    self->noiseAudioSession = [AVAudioSession sharedInstance];
    BOOL success;
    NSError *error;
    success = [self->noiseAudioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    if (!success) NSLog(@"ERROR initNoiseRecorder: AVAudioSession failed setting category - %@", error);
    success = [self->noiseAudioSession setActive:YES error:&error];
    if (!success) NSLog(@"ERROR initNoiseRecorder: AVAudioSession failed activating - %@", error);
    return self;
}
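There is no accepted answer in this excerpt, but as a starting point, here is a minimal sketch (not from the original thread, the method name logAudioSessionState: is made up) of what the session itself exposes at route-change time:

// Hedged sketch (not from the question): inside a route-change callback, the
// current category can be read from the shared session, and the notification's
// userInfo carries the route that was active before the change.
- (void)logAudioSessionState:(NSNotification *)notification
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSLog(@"Current category: %@", session.category);

    AVAudioSessionRouteDescription *previousRoute =
        notification.userInfo[AVAudioSessionRouteChangePreviousRouteKey];
    NSLog(@"Previous route: %@", previousRoute);

    // The session does not expose a "previous category"; if that value is needed,
    // cache session.category (e.g. in an ivar) before each setCategory:error: call
    // and compare it here.
}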

Related

Routing audio output between multiple bluetooth devices with iOS using AVAudioSession

I have a video-conference app that implements a device-selection feature for switching between the earpiece, the speaker, and connected Bluetooth devices. Everything seems to work apart from switching between the Bluetooth devices themselves.
For some reason, audio is only routed to the most recently connected device, and you can't switch back to the other ones even though they appear in the availableInputs of the AVAudioSession shared instance and I call setPreferredInput with AVAudioSessionPortOverrideNone.
I searched for solutions but only found the same unanswered issue from five years ago; I tried the suggestion of changing the setActive calls, but that was also unsuccessful.
Following is the test code, which is taken from here:
AVAudioSession *_audioSession = [AVAudioSession sharedInstance];
AVAudioSessionCategoryOptions _incallAudioCategoryOptionsAll =
    AVAudioSessionCategoryOptionMixWithOthers |
    AVAudioSessionCategoryOptionAllowBluetooth |
    AVAudioSessionCategoryOptionAllowAirPlay;
[_audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
               withOptions:_incallAudioCategoryOptionsAll
                     error:nil];
[_audioSession setMode:AVAudioSessionModeVoiceChat error:nil];
RCT_EXPORT_METHOD(setAudioDevice:(NSString *)device
                  resolve:(RCTPromiseResolveBlock)resolve
                  reject:(RCTPromiseRejectBlock)reject) {
    BOOL success;
    NSError *error = nil;
    NSLog(@"[setAudioDevice] - Attempting to set audio device as: %@", device);
    if ([device isEqualToString:kDeviceTypeSpeaker]) {
        success = [_audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
    } else {
        AVAudioSessionPortDescription *port = nil;
        for (AVAudioSessionPortDescription *portDesc in _audioSession.availableInputs) {
            if ([portDesc.UID isEqualToString:device]) {
                port = portDesc;
                break;
            }
        }
        if (port != nil) {
            [_audioSession overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:nil];
            success = [_audioSession setPreferredInput:port error:&error];
            if (error != nil) {
                NSLog(@"setAudioDevice %@ %@", error.localizedDescription, error);
            }
        } else {
            success = NO;
            error = RCTErrorWithMessage(@"Could not find audio device");
        }
    }
    if (success) {
        resolve(@"setAudioDevice success!");
        NSLog(@"resolved success");
    } else {
        reject(@"setAudioDevice", error != nil ? error.localizedDescription : @"", error);
        NSLog(@"sent reject");
    }
}
So how can I make it possible to switch successfully from one Bluetooth device to another?
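This question has no accepted fix in the excerpt. For diagnosing what the session actually exposes while testing, a minimal hedged sketch (not from the thread, the helper name logAudioRoutes is made up) that dumps the available inputs and the current route could look like this:

// Hypothetical debugging helper; assumes an active PlayAndRecord session as above.
- (void)logAudioRoutes {
    AVAudioSession *session = [AVAudioSession sharedInstance];
    for (AVAudioSessionPortDescription *input in session.availableInputs) {
        NSLog(@"available input: %@ (%@)", input.portName, input.UID);
    }
    for (AVAudioSessionPortDescription *output in session.currentRoute.outputs) {
        NSLog(@"current output: %@ (%@)", output.portName, output.UID);
    }
}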

Why does this audio session fail to recognise an interruption?

My app synthesises audio from a lookup table. It plays audio successfully but crashes the moment I try to stop playing. Audio playback only needs to exit without restarting, so the requirements for handling the interruption are basic. I reread Apple's Audio Session Programming Guide, including the section Responding to Interruptions. However, the method handleAudioSessionInterruption does not seem to register an interruption, so I'm obviously missing something.
EDIT: See my answer. When I began work on this I knew next to nothing about NSNotificationCenter, so I welcome any suggestions for improvement.
Two methods set up the audio session to play in the foreground.
- (void)setUpAudio
{
    if (_playQueue == NULL)
    {
        if ([self setUpAudioSession] == TRUE)
        {
            [self setUpPlayQueue];
            [self setUpPlayQueueBuffers];
        }
    }
}
- (BOOL)setUpAudioSession
{
    BOOL success = NO;
    NSError *audioSessionError = nil;
    AVAudioSession *session = [AVAudioSession sharedInstance];

    // Set up notifications
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(handleAudioSessionInterruption:)
                                                 name:AVAudioSessionInterruptionNotification
                                               object:session];

    // Set category
    success = [session setCategory:AVAudioSessionCategoryPlayback
                             error:&audioSessionError];
    if (!success)
    {
        NSLog(@"%@ Error setting category: %@",
              NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
        // Exit early
        return success;
    }

    // Set mode
    success = [session setMode:AVAudioSessionModeDefault
                         error:&audioSessionError];
    if (!success)
    {
        NSLog(@"%@ Error setting mode: %@",
              NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
        // Exit early
        return success;
    }

    // Set some preferred values
    NSTimeInterval bufferDuration = .005; // I would prefer a 5ms buffer duration
    success = [session setPreferredIOBufferDuration:bufferDuration
                                              error:&audioSessionError];
    if (audioSessionError)
    {
        NSLog(@"Error %ld, %@ %i", (long)audioSessionError.code, audioSessionError.localizedDescription, success);
    }

    double sampleRate = _audioFormat.mSampleRate; // I would prefer a sample rate of 44.1kHz
    success = [session setPreferredSampleRate:sampleRate
                                        error:&audioSessionError];
    if (audioSessionError)
    {
        NSLog(@"Error %ld, %@ %i", (long)audioSessionError.code, audioSessionError.localizedDescription, success);
    }

    success = [session setActive:YES
                           error:&audioSessionError];
    if (!success)
    {
        NSLog(@"%@ Error activating %@",
              NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
    }

    // Get current values
    sampleRate = session.sampleRate;
    bufferDuration = session.IOBufferDuration;
    NSLog(@"Sample Rate:%0.0fHz I/O Buffer Duration:%f", sampleRate, bufferDuration);

    return success;
}
And here is the method that should handle the interruption when I press the stop button. However, it does not respond.
EDIT: The observer needs to be added with a block, not a selector. See my answer.
- (void)handleAudioSessionInterruption:(NSNotification *)notification
{
    if (_playQueue)
    {
        NSNumber *interruptionType = [[notification userInfo] objectForKey:AVAudioSessionInterruptionTypeKey];
        NSNumber *interruptionOption = [[notification userInfo] objectForKey:AVAudioSessionInterruptionOptionKey];

        NSLog(@"in-app Audio playback will be stopped by %@ %lu", notification.name, (unsigned long)interruptionType.unsignedIntegerValue);

        switch (interruptionType.unsignedIntegerValue)
        {
            case AVAudioSessionInterruptionTypeBegan:
            {
                if (interruptionOption.unsignedIntegerValue == AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation)
                {
                    NSLog(@"notify other apps that audio is now available");
                }
            }
                break;
            default:
                break;
        }
    }
}
Answer: My method to handle the audio session interruption did not subscribe the observer correctly with NSNotificationCenter. This has been fixed by adding the observer using a block instead of a selector.
The solution replaces the deprecated AVAudioSession delegate methods in AudioBufferPlayer, an audio player that is extremely fit for purpose and was initially developed for direct audio synthesis by Matthias Hollejmans. Several deprecated functions, including InterruptionListenerCallback, were later updated by Mario Diana. The solution below uses an NSNotification that allows the user to exit the AVAudioSession gracefully by pressing a button.
Here is the relevant code.
PlayViewController.m
The UIButton action performs an orderly shutdown of the synth, invalidates the timer, and posts the notification that will exit the AVAudioSession:
- (void)fromEscButton:(UIButton *)button
{
    [self stopConcertClock];
    ... // code for Exit PlayViewController not shown
}

- (void)stopConcertClock
{
    [_synthLock lock];
    [_synth stopAllNotes];
    [_synthLock unlock];
    [timer invalidate];
    timer = nil;
    [self postAVAudioSessionInterruptionNotification];
    NSLog(@"Esc button pressed or sequence ended. Exit PlayViewController");
}

- (void)postAVAudioSessionInterruptionNotification
{
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"AVAudioSessionInterruptionNotification"
                      object:self];
}
Initialising the AVAudioSession includes subscribing to a single interruption notification before calling startAudioPlayer in AudioBufferPlayer:
- (id)init
{
    if (self = [super init])
    {
        NSLog(@"PlayViewController starts MotionListener and AudioSession");
        [self startAudioSession];
    }
    return self;
}

- (void)startAudioSession
{
    // Synth and the AudioBufferPlayer must use the same sample rate.
    _synthLock = [[NSLock alloc] init];
    float sampleRate = 44100.0f;

    // Initialise synth to fill the audio buffer with audio samples.
    _synth = [[Synth alloc] initWithSampleRate:sampleRate];

    // Initialise the audio buffer.
    _player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate
                                                   channels:1
                                             bitsPerChannel:16
                                           packetsPerBuffer:1024];
    _player.gain = 0.9f;
    __block __weak PlayViewController *weakSelf = self;

    _player.block = ^(AudioQueueBufferRef buffer, AudioStreamBasicDescription audioFormat)
    {
        PlayViewController *blockSelf = weakSelf;
        if (blockSelf != nil)
        {
            // Lock access to the synth. This callback runs on an internal Audio Queue thread and we don't
            // want another thread to change the Synth's state while we're still filling up the audio buffer.
            [blockSelf->_synthLock lock];

            // Calculate how many packets fit into this buffer. Remember that a packet equals one frame
            // because we are dealing with uncompressed audio; a frame is a set of left+right samples
            // for stereo sound, or a single sample for mono sound. Each sample consists of one or more
            // bytes. So for 16-bit mono sound, each packet is 2 bytes. For stereo it would be 4 bytes.
            int packetsPerBuffer = buffer->mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;

            // Let the Synth write into the buffer. The Synth just knows how to fill up buffers
            // in a particular format and does not care where they come from.
            int packetsWritten = [blockSelf->_synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];

            // We have to tell the buffer how many bytes we wrote into it.
            buffer->mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;

            [blockSelf->_synthLock unlock];
        }
    };

    // Set up notifications
    [self subscribeForBlockNotification];

    [_player startAudioPlayer];
}
- (void)subscribeForBlockNotification
{
    NSNotificationCenter * __weak center = [NSNotificationCenter defaultCenter];
    id __block token = [center addObserverForName:@"AVAudioSessionInterruptionNotification"
                                           object:nil
                                            queue:[NSOperationQueue mainQueue]
                                       usingBlock:^(NSNotification *note) {
        NSLog(@"Received the notification!");
        [_player stopAudioPlayer];
        [center removeObserver:token];
    }];
}
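A side note that is not part of the original answer: because the block removes the observer only when the notification actually arrives, the observer token is never removed if the notification never fires. One hedged option, assuming the token is kept in a hypothetical _interruptionToken ivar instead of a local variable, is to also remove it in dealloc:

// Hedged sketch: _interruptionToken is a hypothetical ivar holding the token
// returned by addObserverForName:object:queue:usingBlock:.
- (void)dealloc
{
    if (_interruptionToken != nil) {
        [[NSNotificationCenter defaultCenter] removeObserver:_interruptionToken];
        _interruptionToken = nil;
    }
}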
PlayViewController.h
These are the relevant interface settings:
@interface PlayViewController : UIViewController <EscButtonDelegate>
{
    ...
    // Initialisation of audio player and synth
    AudioBufferPlayer *player;
    Synth *synth;
    NSLock *synthLock;
}
...
- (AudioBufferPlayer *)player;
- (Synth *)synth;
@end
AudioBufferPlayer.m
- (void)stopAudioPlayer
{
    [self stopPlayQueue];
    [self tearDownPlayQueue];
    [self tearDownAudioSession];
}

- (void)stopPlayQueue
{
    if (_audioPlaybackQueue != NULL)
    {
        AudioQueuePause(_audioPlaybackQueue);
        AudioQueueReset(_audioPlaybackQueue);
        _playing = NO;
    }
}

- (void)tearDownPlayQueue
{
    AudioQueueDispose(_audioPlaybackQueue, NO);
    _audioPlaybackQueue = NULL;
}

- (BOOL)tearDownAudioSession
{
    NSError *deactivationError = nil;
    BOOL success = [[AVAudioSession sharedInstance] setActive:NO
                                                  withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                                                        error:&deactivationError]; // pass the error pointer so it can be logged below
    if (!success)
    {
        NSLog(@"%s AVAudioSession Error: %@", __FUNCTION__, deactivationError);
    }
    return success;
}

How to implement speech-to-text via the Speech framework in Objective-C?

I want to do speech recognition in my Objective-C app using the iOS Speech framework.
I found some Swift examples but haven't been able to find anything in Objective-C.
Is it possible to access this framework from Objective-C? If so, how?
After spending enough time looking for Objective-C samples, even in the Apple documentation, I couldn't find anything decent, so I figured it out myself.
Header file (.h)
/*!
 * Import the Speech framework, assign the delegate and declare variables
 */
#import <Speech/Speech.h>

@interface ViewController : UIViewController <SFSpeechRecognizerDelegate> {
    SFSpeechRecognizer *speechRecognizer;
    SFSpeechAudioBufferRecognitionRequest *recognitionRequest;
    SFSpeechRecognitionTask *recognitionTask;
    AVAudioEngine *audioEngine;
}
Methods file (.m)
- (void)viewDidLoad {
    [super viewDidLoad];

    // Initialize the Speech Recognizer with the locale, couldn't find a list of locales
    // but I assume it's standard UTF-8 https://wiki.archlinux.org/index.php/locale
    speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"]];

    // Set speech recognizer delegate
    speechRecognizer.delegate = self;

    // Request the authorization to make sure the user is asked for permission so you can
    // get an authorized response, also remember to change the .plist file, check the repo's
    // readme file or this project's info.plist
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        switch (status) {
            case SFSpeechRecognizerAuthorizationStatusAuthorized:
                NSLog(@"Authorized");
                break;
            case SFSpeechRecognizerAuthorizationStatusDenied:
                NSLog(@"Denied");
                break;
            case SFSpeechRecognizerAuthorizationStatusNotDetermined:
                NSLog(@"Not Determined");
                break;
            case SFSpeechRecognizerAuthorizationStatusRestricted:
                NSLog(@"Restricted");
                break;
            default:
                break;
        }
    }];
}
/*!
 * @brief Starts listening and recognizing user input through the
 * phone's microphone
 */
- (void)startListening {

    // Initialize the AVAudioEngine
    audioEngine = [[AVAudioEngine alloc] init];

    // Make sure there's not a recognition task already running
    if (recognitionTask) {
        [recognitionTask cancel];
        recognitionTask = nil;
    }

    // Starts an AVAudioSession
    NSError *error;
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    [audioSession setCategory:AVAudioSessionCategoryRecord error:&error];
    [audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];

    // Starts a recognition process, in the block it logs the input or stops the audio
    // process if there's an error.
    recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    AVAudioInputNode *inputNode = audioEngine.inputNode;
    recognitionRequest.shouldReportPartialResults = YES;
    recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
        BOOL isFinal = NO;
        if (result) {
            // Whatever you say into the microphone after pressing the button should be logged
            // in the console.
            NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
            isFinal = result.isFinal;
        }
        if (error) {
            [audioEngine stop];
            [inputNode removeTapOnBus:0];
            recognitionRequest = nil;
            recognitionTask = nil;
        }
    }];

    // Sets the recording format
    AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
    [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        [recognitionRequest appendAudioPCMBuffer:buffer];
    }];

    // Starts the audio engine, i.e. it starts listening.
    [audioEngine prepare];
    [audioEngine startAndReturnError:&error];
    NSLog(@"Say Something, I'm listening");
}

- (IBAction)microPhoneTapped:(id)sender {
    if (audioEngine.isRunning) {
        [audioEngine stop];
        [recognitionRequest endAudio];
    } else {
        [self startListening];
    }
}
Now, implement the SFSpeechRecognizerDelegate method to check whether the speech recognizer is available.
#pragma mark - SFSpeechRecognizerDelegate Methods

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    NSLog(@"Availability:%d", available);
}
Instructions & Notes
Remember to modify the .plist file to get the user's authorization for speech recognition and for using the microphone. Of course, the <string> values must be customized to your needs. You can do this by creating and modifying the values in the Property List editor, or by right-clicking the .plist file, choosing Open As -> Source Code, and pasting the following lines before the </dict> tag.
<key>NSMicrophoneUsageDescription</key>
<string>This app uses your microphone to record what you say, so watch what you say!</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to transform your spoken words into text and then analyze them, so watch what you say!</string>
Also remember that in order to import the Speech framework into the project, the deployment target needs to be iOS 10.0+.
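Not part of the original answer: if the project also supports earlier iOS versions, a minimal sketch of a runtime availability guard before touching the framework might look like this (startListening is the method defined above).

// Hedged sketch: guard Speech framework usage at runtime; assumes a deployment
// target below iOS 10 is possible, otherwise this check is unnecessary.
if (@available(iOS 10.0, *)) {
    [self startListening];
} else {
    NSLog(@"Speech framework requires iOS 10.0 or later");
}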
To run and test this you just need a very basic UI: create a UIButton and assign the microPhoneTapped action to it. When pressed, the app should start listening and logging everything it hears through the microphone to the console (in the sample code, NSLog is the only thing receiving the text). It should stop recording when pressed again.
I created a Github repo with a sample project, enjoy!

How to let my app's audio nicely interrupt iPhone audio while speaking

My iOS 7 app vocalizes texts when necessary.
What I'd like to do is enable the user to listen to his music or podcasts (or any other app using audio) while mine is running.
The expected behavior is that the other audio either mixes or ducks when my app speaks, and then returns to its initial volume right after.
I have tried many ways to achieve this, but nothing is good enough; I list the issues I face after the code.
My current implementation is based on activating a session prior to playback or text-to-speech, as follows:
+ (void)setAudioActive {
    [[self class] setSessionActiveWithMixing:YES];
}
After the playback/speech, I set it to idle as follows:
+ (void)setAudioIdle {
    [[self class] setSessionActiveWithMixing:NO];
}
The core function handles the session setup according to the active parameter, as follows:
+ (void)setSessionActiveWithMixing:(BOOL)active
{
    NSError *error = NULL;
    BOOL success;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    static NSInteger counter = 0;

    success = [session setActive:NO error:&error];
    if (error) {
        DDLogError(@"startAudioMixAndBackground: session setActive:NO, %@", error.description);
    }
    else {
        counter--; if (counter < 0) counter = 0;
    }

    if (active) {
        AVAudioSessionCategoryOptions options = AVAudioSessionCategoryOptionAllowBluetooth
                                                //|AVAudioSessionCategoryOptionDefaultToSpeaker
                                                |AVAudioSessionCategoryOptionDuckOthers
                                                ;
        success = [session setCategory://AVAudioSessionCategoryPlayback
                                       AVAudioSessionCategoryPlayAndRecord
                           withOptions:options
                                 error:&error];
        if (error) {
            // Do some error handling
            DDLogError(@"startAudioMixAndBackground: setCategory:AVAudioSessionCategoryPlayback, %@", error.description);
        }
        else {
            // Activate the audio session
            success = [session setActive:YES error:&error];
            if (error) {
                DDLogError(@"startAudioMixAndBackground: session setActive:YES, %@", error.description);
            }
            else {
                counter++;
            }
        }
    }
    DDLogInfo(@"Audio session counter is: %ld", counter);
}
My current issues are:
1) When my app starts to speak, I hear some kind of glitch in the sound, which is not nice;
2) When the route goes over Bluetooth, the underlying audio (say a podcast or iPod music) gets very low and sounds noisy, which makes my solution nearly unusable; my users will reject this poor level of quality.
3) When other Bluetooth-connected devices try to emit sound (say a GPS in a car, for instance), my app does not receive any interruption (or I handle it wrongly); see my code as follows:
- (void)startAudioMixAndBackground {
    // Initialize our AudioSession -
    // this function has to be called once before calling any other AudioSession functions
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(audioSessionDidChangeInterruptionType:)
                                                 name:AVAudioSessionInterruptionNotification
                                               object:[AVAudioSession sharedInstance]];

    // Set our default audio session state
    [[self class] setAudioIdle];

    [[UIApplication sharedApplication] beginReceivingRemoteControlEvents];
    if ([self canBecomeFirstResponder]) {
        [self becomeFirstResponder];
    }

    @synchronized(self) {
        self.okToPlaySound = YES;
    }
    //MPVolumeSettingsAlertShow();
}
// We want remote control events (via Control Center, headphones, bluetooth, AirPlay, etc.)
- (void)remoteControlReceivedWithEvent:(UIEvent *)event
{
    if (event.type == UIEventTypeRemoteControl)
    {
        switch (event.subtype)
        {
            case UIEventSubtypeRemoteControlPause:
            case UIEventSubtypeRemoteControlStop:
                [[self class] setAudioIdle];
                break;
            case UIEventSubtypeRemoteControlPlay:
                [[self class] setAudioActive];
                break;
            default:
                break;
        }
    }
}
#pragma mark - Audio Support

- (void)audioSessionDidChangeInterruptionType:(NSNotification *)notification
{
    AVAudioSessionInterruptionType interruptionType = [[[notification userInfo]
        objectForKey:AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (AVAudioSessionInterruptionTypeBegan == interruptionType)
    {
        DDLogVerbose(@"Session interrupted: --- Begin Interruption ---");
    }
    else if (AVAudioSessionInterruptionTypeEnded == interruptionType)
    {
        DDLogVerbose(@"Session interrupted: --- End Interruption ---");
    }
}
Your issue is most likely due to the category you are setting: AVAudioSessionCategoryPlayAndRecord. The PlayAndRecord category does not allow your app to mix/duck audio with other apps. You should reference the docs on Audio Session Categories again here: https://developer.apple.com/library/ios/documentation/avfoundation/reference/AVAudioSession_ClassReference/Reference/Reference.html. It seems like AVAudioSessionCategoryAmbient is probably more what you're looking for.
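Not from the original answer, but for the speech-only case described in the question, a minimal sketch of a ducking setup might use AVAudioSessionCategoryPlayback with the DuckOthers option, activating the session just before speaking and deactivating it (notifying other apps) right after:

// Hedged sketch: duck other audio while speaking, then let it resume.
// Assumes the app only plays (does not record) while vocalizing text.
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *error = nil;

[session setCategory:AVAudioSessionCategoryPlayback
         withOptions:AVAudioSessionCategoryOptionDuckOthers
               error:&error];
[session setActive:YES error:&error];          // other audio ducks here

// ... speak / play the synthesized audio ...

[session setActive:NO
       withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
             error:&error];                    // other audio returns to full volume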

Music conversion and how to know when writing is completed

I have to convert a large song file from the iTunes library to a smaller 8K song file.
As I do the conversion asynchronously, the BOOL always returns true even though writing to the Documents folder is not completed. At the moment I'm using a delay of 10 seconds before I call the function again, and it works fine in the interim on an iPhone 5s, but I would like to cater for slower devices.
Kindly give me some pointers/recommendations on my code.
- (void)startUploadSongAnalysis
{
    [self updateProgressYForID3NForUpload:NO];

    if ([self.uploadWorkingAray count] >= 1)
    {
        Song *songVar = [self.uploadWorkingAray objectAtIndex:0]; // Core Data var
        NSLog(@"songVar %@", songVar.songName);
        NSLog(@"songVar %@", songVar.songURL);

        NSURL *songU = [NSURL URLWithString:songVar.songURL]; // URL of iTunes Lib
        // self.asset = [AVAsset assetWithURL:songU];
        // NSLog(@"asset %@", self.asset);

        NSError *error;
        NSString *subString = [[songVar.songURL componentsSeparatedByString:@"id="] lastObject];
        NSString *savedPath = [self.documentsDir stringByAppendingPathComponent:
                               [NSString stringWithFormat:@"audio%@.m4a", subString]]; // file name of converted 8k song
        NSString *subStringPath = [NSString stringWithFormat:@"audio%@.m4a", subString];

        if ([self.fileManager fileExistsAtPath:savedPath] == YES)
            [self.fileManager removeItemAtPath:savedPath error:&error];

        NSLog(@"cacheDir %@", savedPath);

        // Export low-bitrate song to cache
        if ([self exportAudio:[AVAsset assetWithURL:songU] toFilePath:savedPath]) // HERE IS THE PROBLEM: this returns true even when the writing is not completed, so when I upload to my web server it says the song file is corrupted
        {
            // [self performSelector:@selector(sendSongForUpload:) withObject:subStringPath afterDelay:1];
            [self sendRequest:2 andPath:subStringPath andSongDBItem:songVar];
        }
        else
        {
            NSLog(@"song too short, skipped");
            [self.uploadWorkingAray removeObjectAtIndex:0];
            [self.songNotFoundArray addObject:songVar];
            [self startUploadSongAnalysis];
        }
    }
    else // uploadWorkingAray is empty
    {
        NSLog(@"save changes");
        [[VPPCoreData sharedInstance] saveAllChanges];
    }
}
#pragma mark - Song exporter to Documents folder

- (BOOL)exportAudio:(AVAsset *)avAsset toFilePath:(NSString *)filePath
{
    CMTime assetTime = [avAsset duration];
    Float64 duration = CMTimeGetSeconds(assetTime);
    if (duration < 40.0) return NO; // if the song is too short, return NO

    // Get the first audio track
    NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
    if ([tracks count] == 0) return NO;

    NSError *readerError = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:avAsset error:&readerError];
    //AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:avAsset error:&readerError]; // both work the same?

    AVAssetReaderOutput *readerOutput = [AVAssetReaderAudioMixOutput
                                         assetReaderAudioMixOutputWithAudioTracks:avAsset.tracks
                                         audioSettings:nil];
    if (![reader canAddOutput:readerOutput])
    {
        NSLog(@"can't add reader output...!");
        return NO;
    }
    else
    {
        [reader addOutput:readerOutput];
    }

    // Writer: AVFileTypeCoreAudioFormat or AVFileTypeAppleM4A
    NSError *writerError = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:filePath]
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&writerError];
    //NSLog(@"writer %@", writer);

    AudioChannelLayout channelLayout;
    memset(&channelLayout, 0, sizeof(AudioChannelLayout));
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;

    // Use different values to affect the downsampling/compression
    // NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    //                                 [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    //                                 [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
    //                                 [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
    //                                 [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
    //                                 [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
    //                                 nil];
    NSDictionary *outputSettings = @{AVFormatIDKey: @(kAudioFormatMPEG4AAC),
                                     AVEncoderBitRateKey: @(8000),
                                     AVNumberOfChannelsKey: @(1),
                                     AVSampleRateKey: @(8000)};

    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:outputSettings];

    // Add inputs to writer
    NSParameterAssert(writerInput);
    NSAssert([writer canAddInput:writerInput], @"Cannot write to this type of audio input");

    if ([writer canAddInput:writerInput])
    {
        [writer addInput:writerInput];
    }
    else
    {
        NSLog(@"can't add asset writer input... die!");
        return NO;
    }

    [writerInput setExpectsMediaDataInRealTime:NO];

    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];
    [reader startReading];

    __block UInt64 convertedByteCount = 0;
    __block BOOL returnValue;
    __block CMSampleBufferRef nextBuffer;

    dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
    [writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
        // NSLog(@"Asset Writer ready : %d", writerInput.readyForMoreMediaData);
        while (writerInput.readyForMoreMediaData)
        {
            nextBuffer = [readerOutput copyNextSampleBuffer];
            if (nextBuffer)
            {
                [writerInput appendSampleBuffer:nextBuffer];
                convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer);
                //NSNumber *convertedByteCountNumber = [NSNumber numberWithLong:convertedByteCount];
                //NSLog(@"writing");
                CFRelease(nextBuffer);
            }
            else
            {
                [writerInput markAsFinished];
                [writer finishWritingWithCompletionHandler:^{
                    if (AVAssetWriterStatusCompleted == writer.status)
                    {
                        NSLog(@"Writer completed");
                        returnValue = YES; // I NEED TO RETURN SOMETHING FROM HERE AFTER WRITING IS COMPLETED
                        dispatch_async(mediaInputQueue, ^{
                            dispatch_async(dispatch_get_main_queue(), ^{
                                // Add this to the main queue as the last item in my serial queue;
                                // when I get to this point I know everything in my queue has been run
                                NSDictionary *outputFileAttributes = [[NSFileManager defaultManager]
                                                                      attributesOfItemAtPath:filePath
                                                                      error:nil];
                                NSLog(@"done. file size is %lld",
                                      [outputFileAttributes fileSize]);
                            });
                        });
                    }
                    else if (AVAssetWriterStatusFailed == writer.status)
                    {
                        [writer cancelWriting];
                        [reader cancelReading];
                        NSLog(@"Writer failed");
                        return;
                    }
                    else
                    {
                        NSLog(@"Export Session Status: %d", writer.status);
                    }
                }];
                break;
            }
        }
    }];

    tracks = nil;
    writer = nil;
    writerInput = nil;
    reader = nil;
    readerOutput = nil;
    mediaInputQueue = nil;

    return returnValue;
    //return YES;
}
Your method exportAudio:toFilePath: is actually an asynchronous method and requires a few fixes to become a proper asynchronous method.
First, you should provide a completion handler in order to signal the call-site that the underlying task has been finished:
- (void)exportAudio:(AVAsset *)avAsset
         toFilePath:(NSString *)filePath
         completion:(completion_t)completionHandler;
Note that the result of the method is passed through the completion handler, whose signature might be as follows:
typedef void (^completion_t)(id result);
where the parameter result is the eventual result of the method. You should always pass an NSError object when anything goes wrong while setting up the various objects within the method, even though the method could also return an immediate result indicating an error.
Next, if you take a look into the documentation you can read:
requestMediaDataWhenReadyOnQueue:usingBlock:
- (void)requestMediaDataWhenReadyOnQueue:(dispatch_queue_t)queue
usingBlock:(void (^)(void))block
Discussion
The block should append media data to the input either until the input’s readyForMoreMediaData property becomes NO or until there is no more media data to supply (at which point it may choose to mark the input as finished using markAsFinished). The block should then exit. After the block exits, if the input has not been marked as finished, once the input has processed the media data it has received and becomes ready for more media data again, it will invoke the block again in order to obtain more.
You should now be quite sure when your task is actually finished. You determine this within the block which is passed to the method requestMediaDataWhenReadyOnQueue:usingBlock:.
When the task is finished you call the completion handler completionHandler provided in
method exportAudio:toFilePath:completion:.
Of course, you need to fix your implementation; for example, having the method end with
    tracks = nil;
    writer = nil;
    writerInput = nil;
    reader = nil;
    readerOutput = nil;
    mediaInputQueue = nil;

    return returnValue;
    //return YES;
}
certainly makes no sense. Cleaning up and returning a result should be done when the asynchronous task is actually finished. Unless an error occurs during setup, you determine this in the block passed to the method requestMediaDataWhenReadyOnQueue:usingBlock:.
In any case, in order to signal the result to the call-site, call the completion handler completionHandler and pass a result object, e.g. the URL where the file has been saved if it succeeded, otherwise an NSError object, as the sketch below illustrates.
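A minimal sketch (not from the original answer) of how the tail of the request block might look with the hypothetical completion_t handler; only the completion-related lines are new, the rest mirrors the writer/reader code above:

// Hedged sketch: signal completion from inside finishWritingWithCompletionHandler.
// completionHandler is the hypothetical completion_t block parameter of
// exportAudio:toFilePath:completion:.
[writerInput markAsFinished];
[writer finishWritingWithCompletionHandler:^{
    if (writer.status == AVAssetWriterStatusCompleted) {
        // Success: hand the output URL to the caller.
        completionHandler([NSURL fileURLWithPath:filePath]);
    }
    else {
        // Failure: hand the writer's error (or a generic one) to the caller.
        NSError *error = writer.error ?: [NSError errorWithDomain:@"ExportAudio"
                                                             code:-1
                                                         userInfo:nil];
        [writer cancelWriting];
        [reader cancelReading];
        completionHandler(error);
    }
}];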
Now, since our method startUploadSongAnalysis is calling an asynchronous method, this method inevitably becomes asynchronous as well!
If I understood your original code correctly, you are invoking it recursively in order to process a number of assets. In order to implement this correctly, you need a few fixes shown below. The resulting "construct" is NOT a recursive method, though, but instead an iterative invocation of an asynchronous method (an "asynchronous loop").
You may or may not provide a completion handler, same as above. It's up to you, but I would recommend it; it won't hurt to know when all assets have been processed. It may look as follows:
- (void)startUploadSongAnalysisWithCompletion:(completion_t)completionHandler
{
    [self updateProgressYForID3NForUpload:NO];

    // *** check for break condition: ***
    if ([self.uploadWorkingAray count] >= 1)
    {
        ... // stuff

        // Export low-bitrate song to cache
        [self exportAudio:[AVAsset assetWithURL:songU]
               toFilePath:savedPath
               completion:^(id urlOrError)
        {
            if ([urlOrError isKindOfClass:[NSError class]]) {
                // Error occurred:
                NSLog(@"Error: %@", urlOrError);

                // There are two alternatives to proceed:
                // A) Ignore or remember the error and proceed with the next asset.
                //    In this case, it would be best to have a result array
                //    containing all the results. Then, invoke
                //    startUploadSongAnalysisWithCompletion: in order to proceed
                //    with the next asset.
                //
                // B) Stop with error.
                //    Don't call startUploadSongAnalysisWithCompletion: but
                //    instead invoke the completion handler passing it the error.

                // A:
                // possibly dispatch to a sync queue or the main thread!
                [self.uploadWorkingAray removeObjectAtIndex:0];
                [self.songNotFoundArray addObject:songVar];

                // *** next song: ***
                [self startUploadSongAnalysisWithCompletion:completionHandler];
            }
            else {
                // Success:
                // *** next song: ***
                NSURL *url = urlOrError;
                [self startUploadSongAnalysisWithCompletion:completionHandler];
            }
        }];
    }
    else // uploadWorkingAray is empty
    {
        NSLog(@"save changes");
        [[VPPCoreData sharedInstance] saveAllChanges];

        // *** signal completion ***
        if (completionHandler) {
            completionHandler(@"OK");
        }
    }
}
I am not sure, but can't you send a call to a method like the following?
dispatch_async(mediaInputQueue, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        // Add this to the main queue as the last item in my serial queue;
        // when I get to this point I know everything in my queue has been run
        NSDictionary *outputFileAttributes = [[NSFileManager defaultManager]
                                              attributesOfItemAtPath:filePath
                                              error:nil];
        NSLog(@"done. file size is %lld",
              [outputFileAttributes fileSize]);

        // Calling the following method after completing the queue
        [self printMe];
    });
});

- (void)printMe {
    NSLog(@"queue complete...");
    // Do the next job, maybe the following task!
    if ([self exportAudio:[AVAsset assetWithURL:songU] toFilePath:savedPath]) // HERE IS THE PROBLEM: this returns true even when the writing is not completed, so when I upload to my web server it says the song file is corrupted
    {
        // [self performSelector:@selector(sendSongForUpload:) withObject:subStringPath afterDelay:1];
        [self sendRequest:2 andPath:subStringPath andSongDBItem:songVar];
    }
    else
    {
        NSLog(@"song too short, skipped");
        [self.uploadWorkingAray removeObjectAtIndex:0];
        [self.songNotFoundArray addObject:songVar];
        [self startUploadSongAnalysis];
    }
}
