I have speech recognition capture working nicely: you say what you would like, and it gets it. It's fairly accurate too, for what it's worth...
The issue I'm having is this: if I attempt to change languages after stopping and starting, it fails with the following errors:
2018-05-23 00:51:51.878921-0400 APP[1237:332833] Speech error: The operation couldn’t be completed. (kAFAssistantErrorDomain error 209.)
2018-05-23 00:51:51.922965-0400 APP[1237:332833] Speech error: Corrupt
However, if I stop recording and reset with the original language, it will work just fine. For instance, even starting with Korean, any time I stop, switch to... Korean... then press start again, it works. No matter how many times I do this process.
The issue is, continuing my example, that if I switch to a different language, even English, after starting with Korean, it gives me that error (which comes from my recognitionTaskWithRequest result handler, FYI).
It appears the starting language is irrelevant to whether it will work: as long as I choose a different language it fails, and when I select the same starting language it works.
// Note: self.inputLanguageIdentifier is changed when you select a new language.
// I have tested to ensure this ID is correct each time.
// I.E. Korean prints ko-KR, English of course en-US, etc.
NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:self.inputLanguageIdentifier];
speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
speechRecognizer.delegate = self;
recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
AVAudioInputNode *inputNode = audioEngine.inputNode;
recognitionRequest.shouldReportPartialResults = YES;
recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
BOOL isFinal = NO;
if (result && !userDidTapCancel) {
// Log the transcription to the console and push it to the UI.
NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
[self updateTextForResult:result.bestTranscription.formattedString];
isFinal = result.isFinal;
}
if (error) {
NSLog(@"Speech error: %@", error.localizedDescription);
[self stopListening];
}
}];
My stopListening method is:
- (void)stopListening {
isListening = NO;
[audioEngine stop];
[recognitionRequest endAudio];
[recognitionTask cancel];
}
UPDATE:
What I have found is that upon resetting twice in a row (keeping the same newly selected language), the recording works as expected.
But as it stands, I can't find a solution that allows it to work immediately the first time after changing languages... bizarre.
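For reference, a fuller teardown sketch that might be worth trying before recreating the recognizer for the new locale. This is untested against the 209/"Corrupt" errors, so treat it as a guess: it removes the input tap, ends the request, cancels the task, and nils everything out so the next start builds a completely fresh pipeline.
- (void)stopListening {
    isListening = NO;
    [audioEngine.inputNode removeTapOnBus:0]; // assumes a tap was installed on bus 0 when listening started
    [audioEngine stop];
    [recognitionRequest endAudio];
    [recognitionTask cancel];
    recognitionTask = nil;
    recognitionRequest = nil;
    speechRecognizer = nil; // force a brand-new SFSpeechRecognizer for the newly selected locale
}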
I've implemented an RPScreenRecorder that records the screen as well as mic audio. After multiple recordings are completed, I stop the recording, merge the audio with the video using AVMutableComposition, and then merge all the videos to form a single video.
For screen recording and getting the video and audio files, I am using
- (void)startCaptureWithHandler:(nullable void(^)(CMSampleBufferRef sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error))captureHandler completionHandler:(nullable void(^)(NSError * _Nullable error))completionHandler;
For stopping the recording, I call this function:
- (void)stopCaptureWithHandler:(void (^)(NSError *error))handler;
These are pretty straightforward.
Most of the time it works great; I receive both video and audio CMSampleBuffers.
But sometimes startCaptureWithHandler only sends me audio buffers, not video buffers.
And once I encounter this problem, it won't go away until I restart my device and reinstall the app. This makes my app very unreliable for the user.
I think this is a ReplayKit issue, but I've been unable to find other developers reporting it.
Let me know if any of you have come across this issue and found a solution.
I have checked multiple times but haven't seen any issue in my configuration.
But here it is anyway.
NSError *videoWriterError;
videoWriter = [[AVAssetWriter alloc] initWithURL:fileString fileType:AVFileTypeQuickTimeMovie
error:&videoWriterError];
NSError *audioWriterError;
audioWriter = [[AVAssetWriter alloc] initWithURL:audioFileString fileType:AVFileTypeAppleM4A
error:&audioWriterError];
CGFloat width = UIScreen.mainScreen.bounds.size.width;
NSString *widthString = [NSString stringWithFormat:@"%f", width];
CGFloat height = UIScreen.mainScreen.bounds.size.height;
NSString *heightString = [NSString stringWithFormat:@"%f", height];
NSDictionary *videoOutputSettings = @{AVVideoCodecKey : AVVideoCodecTypeH264,
AVVideoWidthKey: widthString,
AVVideoHeightKey : heightString};
videoInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:videoOutputSettings];
videoInput.expectsMediaDataInRealTime = true;
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary * audioOutputSettings = [ NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatAppleLossless ], AVFormatIDKey,
[ NSNumber numberWithInt: 16 ], AVEncoderBitDepthHintKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
nil ];
audioInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings];
[audioInput setExpectsMediaDataInRealTime:YES];
[videoWriter addInput:videoInput];
[audioWriter addInput:audioInput];
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error:nil];
[RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable myError) {
// capture handler block (shown in full below)
} completionHandler:nil];
The startCaptureWithHandler block itself is pretty straightforward as well:
[RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable myError) {
dispatch_sync(dispatch_get_main_queue(), ^{
if(CMSampleBufferDataIsReady(sampleBuffer))
{
if (self->videoWriter.status == AVAssetWriterStatusUnknown)
{
self->writingStarted = true;
[self->videoWriter startWriting];
[self->videoWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
[self->audioWriter startWriting];
[self->audioWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
}
if (self->videoWriter.status == AVAssetWriterStatusFailed) {
return;
}
if (bufferType == RPSampleBufferTypeVideo)
{
if (self->videoInput.isReadyForMoreMediaData)
{
[self->videoInput appendSampleBuffer:sampleBuffer];
}
}
else if (bufferType == RPSampleBufferTypeAudioMic)
{
// printf("\n+++ bufferAudio received %d \n",arc4random_uniform(100));
if (writingStarted){
if (self->audioInput.isReadyForMoreMediaData)
{
[self->audioInput appendSampleBuffer:sampleBuffer];
}
}
}
}
});
} completionHandler:nil]; // completion handler omitted here
Also, when this situation occurs, the system screen recorder gets corrupted as well. On clicking system recorder, this error shows up:
The error says "Screen recording has stopped due to: Failure during recording due to Mediaservices error".
There could be two reasons:
1. iOS ReplayKit is in beta, which is why it gives problems after some time of usage.
2. I have implemented some problematic logic, which is causing ReplayKit to crash.
If it's reason no. 1, then no problem.
If it's reason no. 2, then I need to know where I might be going wrong.
Opinions and help will be appreciated.
So, I have come across some scenarios where ReplayKit totally crashes and the system recorder shows an error every time unless you restart the device.
1st Scenario
When you start recording and stop it in the completion handler:
[RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
printf("recording");
} completionHandler:^(NSError * _Nullable error) {
[RPScreenRecorder.sharedRecorder stopCaptureWithHandler:^(NSError * _Nullable error) {
printf("Ended");
}];
}];
2nd Scenario
When you start recording and stop it directly in the capture handler:
__block BOOL stopDone = NO;
[RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
if (!stopDone){
[RPScreenRecorder.sharedRecorder stopCaptureWithHandler:^(NSError * _Nullable error) {
printf("Ended");
}];
stopDone = YES;
}
printf("recording");
} completionHandler:^(NSError * _Nullable error) {}];
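Both scenarios have in common that stopCaptureWithHandler is called from inside ReplayKit's own blocks. A hedged sketch of the ordering I would try instead (the captureStarted property and stopTapped action are my own names, not part of ReplayKit):
// Inside whatever method starts recording:
[RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
    // append buffers here as usual
} completionHandler:^(NSError * _Nullable error) {
    self.captureStarted = (error == nil); // hypothetical BOOL property
}];

// Later, from a separate code path (never from inside the blocks above):
- (IBAction)stopTapped:(id)sender {
    if (!self.captureStarted) { return; }
    [RPScreenRecorder.sharedRecorder stopCaptureWithHandler:^(NSError * _Nullable error) {
        printf("Ended");
    }];
}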
More scenarios are yet to be discovered, and I will keep updating the answer.
Update 1
It is true that the system screen recorder gives an error when we stop recording right after starting, but it seems to work all right after we call startCapture again.
I have also encountered a scenario where only my app doesn't get video buffers while the system screen recorder works fine; I will update the solution soon.
Update 2
So here is the issue: my actual app is old and is being maintained and updated regularly. When ReplayKit becomes erroneous, my original app can't receive video buffers. I don't know if there is a configuration that is making this happen, or what.
But a new sample app seems to work fine even after ReplayKit becomes erroneous; when I call startCapture the next time, ReplayKit becomes fine again.
Weird
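Based on that observation, a retry along these lines might be worth a try. This is only a sketch of the idea, not verified code:
// If capture errors out, tear it down and start once more; in my sample app this
// second startCapture brought ReplayKit back (behaviour may differ in your app).
- (void)restartCaptureAfterError {
    [RPScreenRecorder.sharedRecorder stopCaptureWithHandler:^(NSError * _Nullable stopError) {
        [RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
            // same buffer handling as before
        } completionHandler:^(NSError * _Nullable error) {
            if (error) { NSLog(@"ReplayKit still failing: %@", error); }
        }];
    }];
}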
Update 3
I observed a new issue. When the permission alert shows up, the app goes to the background. I had coded it so that whenever the app goes to the background, some UI changes occur and the recording is stopped.
This led to this error:
Recording interrupted by multitasking and content resizing
I am not yet certain which particular UI change is creating this failure, but it only occurs when the permission alert shows up and the UI changes are made.
If someone has noticed any particular case for this issue, please let us know.
If the screen has not changed, ReplayKit does not call processSampleBuffer() with video.
For example, in a PowerPoint presentation, processSampleBuffer() is called only when a new slide is shown; no processSampleBuffer() with video may be called for 10 seconds or even a minute.
Sometimes ReplayKit does not call processSampleBuffer() for a new slide at all. In this case the user is missing a slide; it is a critical, show-stopper bug.
On the other hand, processSampleBuffer() with audio is called every 500 ms on iOS 11.4.
In videoOutputSettings, make AVVideoWidthKey and AVVideoHeightKey NSNumber instead of NSString.
In audioOutputSettings, remove AVEncoderBitDepthHintKey and AVChannelLayoutKey. Add AVEncoderBitRateKey with an NSNumber of 64000, and change the AVFormatIDKey value from kAudioFormatAppleLossless to kAudioFormatMPEG4AAC.
In my project I faced a similar problem. As far as I can remember, the problem was my output settings, as sketched below.
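Putting those suggestions together, the adjusted dictionaries could look roughly like this (a sketch reusing the screen-size values from the question):
// Width and height as NSNumber instead of NSString
NSDictionary *videoOutputSettings = @{ AVVideoCodecKey  : AVVideoCodecTypeH264,
                                       AVVideoWidthKey  : @(UIScreen.mainScreen.bounds.size.width),
                                       AVVideoHeightKey : @(UIScreen.mainScreen.bounds.size.height) };

// AAC with a bit-rate hint, instead of Apple Lossless with bit depth and channel layout
NSDictionary *audioOutputSettings = @{ AVFormatIDKey         : @(kAudioFormatMPEG4AAC),
                                       AVSampleRateKey       : @44100.0,
                                       AVNumberOfChannelsKey : @1,
                                       AVEncoderBitRateKey   : @64000 };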
You can also try moving all the code in your startCaptureWithHandler capture block inside a synchronous dispatch to the main queue:
dispatch_sync(dispatch_get_main_queue(), ^{
// your block code
});
I had exactly the same issue. I changed many things and rewrote the code again and again. I finally understood that the root of the problem was the main window.
If you change anything about the main window (for instance its windowLevel), reverting those changes will solve the problem.
P.S.: If you're wondering about the relationship between the main window and ReplayKit: ReplayKit records the main window.
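For illustration, reverting a windowLevel tweak before starting capture might look like this (assuming keyWindow is the recorded main window):
// Undo any main-window changes (e.g. windowLevel) before recording starts.
UIWindow *mainWindow = UIApplication.sharedApplication.keyWindow; // assumption: this is the window ReplayKit records
mainWindow.windowLevel = UIWindowLevelNormal;
[RPScreenRecorder.sharedRecorder startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
    // handle buffers as usual
} completionHandler:nil];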
I want to do speech recognition in my Objective-C app using the iOS Speech framework.
I found some Swift examples but haven't been able to find anything in Objective-C.
Is it possible to access this framework from Objective-C? If so, how?
After spending quite some time looking for Objective-C samples, even in the Apple documentation, I couldn't find anything decent, so I figured it out myself.
Header file (.h)
/*!
* Import the Speech framework, assign the Delegate and declare variables
*/
#import <Speech/Speech.h>
@interface ViewController : UIViewController <SFSpeechRecognizerDelegate> {
SFSpeechRecognizer *speechRecognizer;
SFSpeechAudioBufferRecognitionRequest *recognitionRequest;
SFSpeechRecognitionTask *recognitionTask;
AVAudioEngine *audioEngine;
}
Methods file (.m)
- (void)viewDidLoad {
[super viewDidLoad];
// Initialize the Speech Recognizer with the locale, couldn't find a list of locales
// but I assume it's standard UTF-8 https://wiki.archlinux.org/index.php/locale
speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_US"]];
// Set speech recognizer delegate
speechRecognizer.delegate = self;
// Request the authorization to make sure the user is asked for permission so you can
// get an authorized response, also remember to change the .plist file, check the repo's
// readme file or this project's info.plist
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
switch (status) {
case SFSpeechRecognizerAuthorizationStatusAuthorized:
NSLog(@"Authorized");
break;
case SFSpeechRecognizerAuthorizationStatusDenied:
NSLog(@"Denied");
break;
case SFSpeechRecognizerAuthorizationStatusNotDetermined:
NSLog(@"Not Determined");
break;
case SFSpeechRecognizerAuthorizationStatusRestricted:
NSLog(@"Restricted");
break;
default:
break;
}
}];
}
/*!
 * @brief Starts listening and recognizing user input through the
* phone's microphone
*/
- (void)startListening {
// Initialize the AVAudioEngine
audioEngine = [[AVAudioEngine alloc] init];
// Make sure there's not a recognition task already running
if (recognitionTask) {
[recognitionTask cancel];
recognitionTask = nil;
}
// Starts an AVAudio Session
NSError *error;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&error];
[audioSession setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
// Starts a recognition process, in the block it logs the input or stops the audio
// process if there's an error.
recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
AVAudioInputNode *inputNode = audioEngine.inputNode;
recognitionRequest.shouldReportPartialResults = YES;
recognitionTask = [speechRecognizer recognitionTaskWithRequest:recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
BOOL isFinal = NO;
if (result) {
// Whatever you say in the microphone after pressing the button should be being logged
// in the console.
NSLog(@"RESULT:%@", result.bestTranscription.formattedString);
isFinal = result.isFinal;
}
if (error) {
[audioEngine stop];
[inputNode removeTapOnBus:0];
recognitionRequest = nil;
recognitionTask = nil;
}
}];
// Sets the recording format
AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
[inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
[recognitionRequest appendAudioPCMBuffer:buffer];
}];
// Starts the audio engine, i.e. it starts listening.
[audioEngine prepare];
[audioEngine startAndReturnError:&error];
NSLog(@"Say Something, I'm listening");
}
- (IBAction)microPhoneTapped:(id)sender {
if (audioEngine.isRunning) {
[audioEngine stop];
[recognitionRequest endAudio];
} else {
[self startListening];
}
}
Now, implement the SFSpeechRecognizerDelegate method to check whether the speech recognizer is available.
#pragma mark - SFSpeechRecognizerDelegate Delegate Methods
- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
NSLog(@"Availability:%d", available);
}
Instructions & Notes
Remember to modify the .plist file to get the user's authorization for speech recognition and for using the microphone. Of course, the <string> values must be customized to your needs. You can do this by creating and modifying the values in the Property List editor, or by right-clicking the .plist file, choosing Open As -> Source Code, and pasting the following lines before the </dict> tag.
<key>NSMicrophoneUsageDescription</key> <string>This app uses your microphone to record what you say, so watch what you say!</string>
<key>NSSpeechRecognitionUsageDescription</key> <string>This app uses Speech recognition to transform your spoken words into text and then analyze them, so watch what you say!</string>
Also remember that in order to be able to import the Speech framework into the project you need to have iOS 10.0+.
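If your deployment target is lower than iOS 10, you can guard the call sites with an availability check; a minimal sketch (assuming Xcode 9+ for the @available syntax):
if (@available(iOS 10.0, *)) {
    [self startListening];
} else {
    NSLog(@"Speech recognition requires iOS 10 or later");
}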
To get this running and test it you just need a very basic UI: create a UIButton and assign the microPhoneTapped action to it. When pressed, the app should start listening and logging everything it hears through the microphone to the console (in the sample code, NSLog is the only thing receiving the text). It should stop the recording when pressed again.
I created a Github repo with a sample project, enjoy!
I'm working on an iOS application in which I use the Motion Activity Manager (more specifically, the pedometer). When the application launches I need to check whether Motion Activity is allowed by the user. I do this by doing:
_motionActivityManager = [[CMMotionActivityManager alloc] init];
_pedometer = [[CMPedometer alloc] init];
[_pedometer queryPedometerDataFromDate : [NSDate date]
toDate : [NSDate date]
withHandler : ^(CMPedometerData *pedometerData, NSError *error) {
// BP1
if (error != nil) {
// BP2
}
else {
// BP3
}
}];
As discussed here ☛ iOS - is Motion Activity Enabled in Settings > Privacy > Motion Activity
In my understanding this code will trigger the "alert window" asking the user to opt in or out.
What happens in my case is that when I run the application for the first time (i.e. all permission warnings are reset), the application hangs before 'BP1' (the callback is never executed), and then if I stop the application with Xcode or press the home button, the "alert window" appears. If I opt in, everything is good: on the second run 'BP3' is reached (or 'BP2' if I opt out).
What I tried so far:
I implemented another way of checking using async execution
[_pedometer queryPedometerDataFromDate : [NSDate date]
toDate : [NSDate date]
withHandler : ^(CMPedometerData *pedometerData, NSError *error) {
// Because CMPedometer dispatches to an arbitrary queue, it's very important
// to dispatch any handler block that modifies the UI back to the main queue.
dispatch_async(dispatch_get_main_queue(), ^{
authorizationCheckCompletedHandler(!error || error.code != CMErrorMotionActivityNotAuthorized);
});
}];
This doesn't hang the application, but the "alert window" is never shown.
I executed this "checking snippet" at a later point in the code, but again the application hangs.
Essentially, you can first be sure that the alert view will not block your app once the first view has appeared, i.e. in viewDidAppear.
For example do:
-(void) viewDidAppear:(BOOL)animated {
if ([MyActivityManager checkAvailability]) { // motion and activity availability checks
[myDataManager checkAuthorization:^(BOOL authorized) { // is authorized
dispatch_async(dispatch_get_main_queue(), ^{
if (authorized) {
// do your UI update etc...
}
else {
// maybe tell the user that this App requires motion and tell him about activating it in settings...
}
});
}];
}
}
This is what I do myself. I based my app on the Apple example code as well and noticed that the example also has the problem you are describing.
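For completeness, here is roughly what such a checkAuthorization: helper could look like, reusing the pedometer query and the error check from the question (the method name just matches the snippet above; it is not a system API):
- (void)checkAuthorization:(void (^)(BOOL authorized))completion {
    CMPedometer *pedometer = [[CMPedometer alloc] init];
    [pedometer queryPedometerDataFromDate:[NSDate date]
                                   toDate:[NSDate date]
                              withHandler:^(CMPedometerData *pedometerData, NSError *error) {
        // No error, or any error other than "not authorized", counts as authorized.
        completion(!error || error.code != CMErrorMotionActivityNotAuthorized);
    }];
}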
I want to display a waveform of the real-time input from the microphone.
I have implemented this using installTapOnBus:bufferSize:format:block:; the block is called three times per second.
I want this block to be called 20 times per second.
Where can I set that?
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError* error = nil;
if (audioSession.isInputAvailable) [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
if(error){
return;
}
[audioSession setActive:YES error:&error];
if(error){
return;
}
self.engine = [[[AVAudioEngine alloc] init] autorelease];
AVAudioMixerNode* mixer = [self.engine mainMixerNode];
AVAudioInputNode* input = [self.engine inputNode];
[self.engine connect:input to:mixer format:[input inputFormatForBus:0]];
// tap ... 1 call in 16537Frames
// It does not change even if you change the bufferSize
[input installTapOnBus:0 bufferSize:4096 format:[input inputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {
for (UInt32 i = 0; i < buffer.audioBufferList->mNumberBuffers; i++) {
Float32 *data = buffer.audioBufferList->mBuffers[i].mData;
UInt32 frames = buffer.audioBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);
// create waveform
...
}
}];
[self.engine startAndReturnError:&error];
if (error) {
return;
}
They say Apple support replied no (in September 2014):
Yes, currently internally we have a fixed tap buffer size (0.375s),
and the client specified buffer size for the tap is not taking effect.
But someone resized the buffer size and got 40 ms:
https://devforums.apple.com/thread/249510?tstart=0
I cannot check it; I need it in ObjC :(
UPD: it works! Just a single line:
[input installTapOnBus:0 bufferSize:1024 format:[mixer outputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
buffer.frameLength = 1024; // here
}];
The AVAudioNode class reference states that the implementation may choose a buffer size other than the one that you supply, so as far as I know, we are stuck with the very large buffer size. This is unfortunate, because AVAudioEngine is otherwise an excellent Core Audio wrapper. Since I too need to use the input tap for something other than recording, I'm looking into The Amazing Audio Engine, as well as the Core Audio C API (see the iBook Learning Core Audio for excellent tutorials on it), as alternatives.
***Update: It turns out that you can access the AudioUnit of the AVAudioInputNode and install a render callback on it. Via AVAudioSession, you can set your audio session's desired buffer size (not guaranteed, but certainly better than node taps). Thus far, I've gotten buffer sizes as low as 64 samples using this approach. I'll post back here with code once I've had a chance to test this.
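The AVAudioSession part of that is only a hint, but it is a one-liner to try, something like:
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setPreferredIOBufferDuration:0.005 error:&error]; // request ~5 ms buffers; not guaranteed
[session setActive:YES error:&error];
NSLog(@"IO buffer duration actually granted: %f", session.IOBufferDuration);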
As of iOS 13 in 2019, there is AVAudioSinkNode, which may better accomplish what you are looking for. While you could have also created a regular AVAudioUnit / Node and attached it to the input/output, the difference with an AVAudioSinkNode is that there is no output required. That makes it more like a tap and circumvents issues with incomplete chains that might occur when using a regular Audio Unit / Node.
For more information:
https://developer.apple.com/videos/play/wwdc2019/510/
https://devstreaming-cdn.apple.com/videos/wwdc/2019/510v8txdlekug3npw2m/510/510_whats_new_in_avaudioengine.pdf?dl=1
https://developer.apple.com/documentation/avfoundation/avaudiosinknode?language=objc
The relevant Swift code is on page 10 (with a small error corrected below) of the session's PDF.
// Create Engine
let engine = AVAudioEngine()
// Create and Attach AVAudioSinkNode
let sinkNode = AVAudioSinkNode() { (timeStamp, frames, audioBufferList) -> OSStatus in
    …
}
engine.attach(sinkNode)
I imagine that you'll still have to follow the typical real-time audio rules when using this (e.g. no allocating/freeing memory, no ObjC calls, no locking or waiting on locks, etc.). A ring buffer may still be helpful here.
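Since the question is about Objective-C, a rough equivalent of that Swift snippet might look like the following; this is my own translation, so double-check it against the AVAudioSinkNode header:
AVAudioEngine *engine = [[AVAudioEngine alloc] init];

AVAudioSinkNode *sinkNode = [[AVAudioSinkNode alloc] initWithReceiverBlock:
    ^OSStatus(const AudioTimeStamp *timestamp, AVAudioFrameCount frameCount, const AudioBufferList *inputData) {
        // Real-time context: no allocation, no locks, no Objective-C messaging in here.
        return noErr;
    }];

[engine attachNode:sinkNode];
[engine connect:engine.inputNode to:sinkNode format:nil];

NSError *error = nil;
[engine startAndReturnError:&error];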
I don't know why or even if this works yet; I'm just trying a few things out. But the NSLogs definitely indicate a 21 ms interval, with 1024 samples coming in per buffer...
AVAudioEngine* sEngine = NULL;
- (void)applicationDidBecomeActive:(UIApplication *)application
{
/*
Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
*/
[glView startAnimation];
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError* error = nil;
if (audioSession.isInputAvailable) [audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
if(error){
return;
}
[audioSession setActive:YES error:&error];
if(error){
return;
}
sEngine = [[AVAudioEngine alloc] init];
AVAudioMixerNode* mixer = [sEngine mainMixerNode];
AVAudioInputNode* input = [sEngine inputNode];
[sEngine connect:input to:mixer format:[input inputFormatForBus:0]];
__block NSTimeInterval start = 0.0;
// tap ... 1 call in 16537Frames
// It does not change even if you change the bufferSize
[input installTapOnBus:0 bufferSize:1024 format:[input inputFormatForBus:0] block:^(AVAudioPCMBuffer* buffer, AVAudioTime* when) {
if (start == 0.0)
start = [AVAudioTime secondsForHostTime:[when hostTime]];
// why does this work? because perhaps the smaller buffer is reused by the audioengine, with the code to dump new data into the block just using the block size as set here?
// I am not sure that this is supported by apple?
NSLog(@"buffer frame length %d", (int)buffer.frameLength);
buffer.frameLength = 1024;
UInt32 frames = 0;
for (UInt32 i = 0; i < buffer.audioBufferList->mNumberBuffers; i++) {
Float32 *data = buffer.audioBufferList->mBuffers[i].mData;
frames = buffer.audioBufferList->mBuffers[i].mDataByteSize / sizeof(Float32);
// create waveform
///
}
NSLog(@"%d frames are sent at %lf", (int) frames, [AVAudioTime secondsForHostTime:[when hostTime]] - start);
}];
[sEngine startAndReturnError:&error];
if (error) {
return;
}
}
You might be able to use a CADisplayLink to achieve this. A CADisplayLink will give you a callback each time the screen refreshes, which typically will be much more than 20 times per second (so additional logic may be required to throttle or cap the number of times your method is executed in your case).
This is obviously a solution that is quite discrete from your audio work, and to the extent you require a solution that reflects your session, it might not work. But when we need frequent recurring callbacks on iOS, this is often the approach of choice, so it's an idea.
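A minimal sketch of that idea (updateWaveform: is a hypothetical redraw method; preferredFramesPerSecond needs iOS 10+, otherwise you throttle manually):
// Somewhere in setup, e.g. viewDidLoad:
CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(updateWaveform:)];
link.preferredFramesPerSecond = 20; // cap the callback at ~20 times per second (iOS 10+)
[link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

// Called on each allowed screen refresh; read the latest samples the audio tap stored and redraw.
- (void)updateWaveform:(CADisplayLink *)link {
    // pull the most recent buffer captured by the tap and update the waveform view
}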
I'm attempting to play the top tracks from the result of an SPArtistBrowse using cocoalibspotify. Most of the time this works flawlessly, but occasionally I get the following error:
Error Domain=com.spotify.CocoaLibSpotify.error Code=3 "The track cannot be played"
This happens only for specific tracks, and for affected tracks it is consistent and repeatable (e.g. the top track for Armin van Buren, spotify:track:6q0f0zpByDs4Zk0heXZ3cO, always gives this error when attempting to play using the code below). The odd thing is, if I use the simple player sample app and enter an affected track's URL, the track plays fine; so my hunch is it has something to do with the track being loaded from an SPArtistBrowse.
Here is the code I am using to play tracks:
- (void)playTrack
{
SPTrack *track = [self.artistBrowse.topTracks objectAtIndex:self.currentTrackIndex];
[SPAsyncLoading waitUntilLoaded:track then:^(NSArray *tracks) {
[self.playbackManager playTrack:track callback:^(NSError *error) {
if (error) {
self.currentTrackIndex++;
if (self.currentTrackIndex < self.artistBrowse.topTracks.count) {
[self playTrack];
} else {
[self.activityIndicator stopAnimating];
self.activityIndicator.alpha = 0;
self.nowPlayingLabel.text = @"Spotify Error";
}
} else {
[self.activityIndicator stopAnimating];
self.activityIndicator.alpha = 0;
self.nowPlayingLabel.text = track.name;
// Set "Now Playing" info on the iOS remote control
MPNowPlayingInfoCenter *infoCenter = [MPNowPlayingInfoCenter defaultCenter];
NSMutableDictionary *dic = [[NSMutableDictionary alloc] init];
[dic setValue:track.name forKey:MPMediaItemPropertyTitle];
[dic setValue:self.artistLabel.text forKey:MPMediaItemPropertyArtist];
infoCenter.nowPlayingInfo = dic;
}
}];
}];
}
The artist browse shouldn't affect anything - a track is a track. However, if you can reliably reproduce it, please fork CocoaLibSpotify and add a failing unit test to the unit test suite - that way we can fix it.
It's also possible that the Spotify playback service was unavailable right at the wrong time, but that's a fairly rare occurrence.