Pause Button for AVSpeechSynthesizer - iOS

At the moment I have a map with annotations and when a user clicks on the annotation an audio plays. I wanted to add a Play/Pause button but it's not working and I'm unsure as to why.
The AVSpeechSynthesizer
- (void)mapView:(MKMapView *)mapView didSelectAnnotationView:(MKAnnotationView *)anView
{
    // Get a reference to the annotation this view is for...
    id<MKAnnotation> annSelected = anView.annotation;

    // Before casting, make sure this annotation is our custom type
    // (and not some other type like MKUserLocation)...
    if ([annSelected isKindOfClass:[MapViewAnnotation class]])
    {
        MapViewAnnotation *mva = (MapViewAnnotation *)annSelected;
        AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
        AVSpeechUtterance *utterance =
            [AVSpeechUtterance speechUtteranceWithString:mva.desc];
        utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-gb"];
        [utterance setRate:0.35];
        [synthesizer speakUtterance:utterance];
    }
}
The Button
- (IBAction)pauseButtonPressed:(UIButton *)sender
{
    [_synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}
Right now nothing happens when I click it.

Use this one for pause:
- (IBAction)pausePlayButton:(id)sender
{
    if ([_synthesizer isSpeaking]) {
        [_synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:@""];
        [_synthesizer speakUtterance:utterance];
        [_synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    }
}

I don't think you're initializing _synthesizer. Try doing self.synthesizer = [[AVSpeechSynthesizer alloc] init]; instead of assigning the synth to a local variable.
I noticed that AVSpeechSynthesizer had a hard time shutting up during the 7.0 beta, but I find it hard to believe that such an egregious bug would last this long.
NB: you probably shouldn't recreate the AVSpeechSynthesizer every time an annotation is tapped.
NB2: Once you've paused, I think you have to call continueSpeaking to restart.
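Putting those notes together, a minimal sketch (assuming the view controller declares @property (strong, nonatomic) AVSpeechSynthesizer *synthesizer;, as suggested above; this is an illustration, not the asker's exact code):

// Lazily create one shared synthesizer instead of a new one per tap
- (AVSpeechSynthesizer *)synthesizer
{
    if (!_synthesizer) {
        _synthesizer = [[AVSpeechSynthesizer alloc] init];
    }
    return _synthesizer;
}

- (void)mapView:(MKMapView *)mapView didSelectAnnotationView:(MKAnnotationView *)anView
{
    id<MKAnnotation> annSelected = anView.annotation;
    if ([annSelected isKindOfClass:[MapViewAnnotation class]]) {
        MapViewAnnotation *mva = (MapViewAnnotation *)annSelected;
        AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:mva.desc];
        utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"];
        utterance.rate = 0.35;
        [self.synthesizer speakUtterance:utterance];
    }
}

// One button toggles pause/resume on the same synthesizer instance
- (IBAction)pauseButtonPressed:(UIButton *)sender
{
    if (self.synthesizer.isPaused) {
        [self.synthesizer continueSpeaking];
    } else if (self.synthesizer.isSpeaking) {
        [self.synthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    }
}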

For future help: when I clicked pause and resume, the annotation worked; however, it wouldn't let me play another annotation, as technically the first one is still running. I added a stop button with [_synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate]; to sort it out.

Related

Video is not playing on GMSMarker

I am working on playing a video on a GMSMarker pin and below is my code. The display and hide/show of the pin view all work fine, except the video does not play. I am using a storyboard with the pin view.
@interface MapViewController () <GMSMapViewDelegate>
@property (strong, nonatomic) IBOutlet GMSMapView *mapView;
@property (strong, nonatomic) IBOutlet UIView *pinView;
@property (strong, nonatomic) GMSMarker *london;
@property (strong, nonatomic) AVPlayer *player;
@property (strong, nonatomic) AVPlayerLayer *playerLayer;
@end

@implementation MapViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    GMSCameraPosition *camera = [GMSCameraPosition cameraWithLatitude:51.5 longitude:-0.127 zoom:18];
    self.mapView.camera = camera;
    self.mapView.delegate = self;

    CLLocationCoordinate2D position = CLLocationCoordinate2DMake(51.5, -0.127);
    _london = [GMSMarker markerWithPosition:position];
    _london.title = @"London";
    _london.tracksViewChanges = YES;
    _london.map = self.mapView;

    [self setUpVideoPlayer];
}

- (UIView *)mapView:(GMSMapView *)mapView markerInfoWindow:(GMSMarker *)marker {
    [self.player play];
    return self.pinView;
}

- (void)setUpVideoPlayer {
    NSString *videoFilePath = [[NSBundle mainBundle] pathForResource:@"SampleVideo" ofType:@"mp4"];
    AVAsset *avAsset = [AVAsset assetWithURL:[NSURL fileURLWithPath:videoFilePath]];
    AVPlayerItem *avPlayerItem = [[AVPlayerItem alloc] initWithAsset:avAsset];
    self.player = [[AVPlayer alloc] initWithPlayerItem:avPlayerItem];
    self.playerLayer = [AVPlayerLayer playerLayerWithPlayer:self.player];
    [self.playerLayer setFrame:self.pinView.frame];
    [self.pinView.layer addSublayer:self.playerLayer];
    [self.player seekToTime:kCMTimeZero];
}
Please help me fix the issue. Why is the video not playing?
Thanks in advance.
Normally, the info window will not regenerate after it is shown (it is added as an image). To show video frames it needs to regenerate while the video plays, which means the window needs to refresh automatically. To do this you need to set one property on your GMSMarker, as follows:
_london.tracksInfoWindowChanges = YES;
So your full code is
CLLocationCoordinate2D position = CLLocationCoordinate2DMake(51.5, -0.127);
_london = [GMSMarker markerWithPosition:position];
_london.title = @"London";
_london.tracksViewChanges = YES;
_london.tracksInfoWindowChanges = YES;
_london.map = self.mapView;
Google documentation is https://developers.google.com/maps/documentation/ios-sdk/marker#set_an_info_window_to_refresh_automatically
I believe the reason your code doesn't work is that Google Maps for iOS renders custom info views as images. I'm not certain, but I believe this because I have also tried with animations and they don't work either. When I use an animation, I first see the initial state, and when I open the marker info window again I see the final state, without the animation. Also, when I add a text field to the custom info window view, I can't tap on it. When I try with video like you, I see a blank window as if the player is about to start and I hear the sound, but the video never starts.
I have found this note in many questions on Stack Overflow (they say it is from the Google docs), but I don't see it in the Google docs:
Note: The info window is rendered as an image each time it is displayed on the map. This means that any changes to its properties, while it is active, will not be immediately visible. The contents of the info window will be refreshed the next time that it is displayed.
Source: Google Maps Showing speech bubble in marker info window iOS
Maybe you can try with tracksInfoWindowChanges = YES:
- (UIView *)mapView:(GMSMapView *)mapView markerInfoWindow:(GMSMarker *)marker {
    marker.tracksInfoWindowChanges = YES;
    [self.player play];
    return self.pinView;
}
Source: How to force refresh contents of the markerInfoWindow in Google Maps iOS SDK
I know that you have set it in the viewDidLoad method, but maybe setting it in the
- (UIView *)mapView:(GMSMapView *)mapView markerInfoWindow:(GMSMarker *)marker
method will work, though I doubt it. A lot of people say that setting tracksInfoWindowChanges to YES works, but I have tried it and it doesn't.
I have tried the same thing with Apple's MapKit and the video plays without problems; I have never had an issue with Apple Maps. So I'm fairly sure Google Maps isn't a good fit for this. Maybe you could fix it with some hack, but I have lost several hours looking for a solution, without results. It might be possible, for example, to move a UIView above the map when the user pans or zooms (a rough sketch of that idea is below), or some other solution, but I prefer system solutions over hacks. In this case I don't see a system solution; maybe I'd find one if I spent more time researching. If I were you, I wouldn't use Google Maps for this.
Edit: I've just seen that my solution works in the simulator. On a real device (iPhone 7+) it doesn't work, i.e. I see the player controls animating and I hear sound, but I don't see the video. And even if it worked on a real device like it does in the simulator, the user couldn't play or pause the video, because they only see images that are updated periodically.
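If you do want to experiment with the overlay idea mentioned above, here is a rough sketch of one possible approach (videoOverlayView is a hypothetical UIView, added on top of the GMSMapView by you and hosting the AVPlayerLayer; it is not the marker's info window):

- (void)mapView:(GMSMapView *)mapView didChangeCameraPosition:(GMSCameraPosition *)position
{
    // Convert the marker's coordinate to a screen point and keep the overlay pinned above it.
    CGPoint point = [mapView.projection pointForCoordinate:self.london.position];
    self.videoOverlayView.center = CGPointMake(point.x,
                                               point.y - self.videoOverlayView.bounds.size.height / 2.0);
}

Because the overlay is an ordinary UIView rather than an info-window snapshot, the AVPlayerLayer keeps rendering; the trade-off is that you must reposition it yourself on every camera change.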

AVSpeechSynthesizer is not letting View Controller Deallocate

I have a view controller and in the .h I have:
{
    NSString *textToSpeak;
}
@property (nonatomic, strong) AVSpeechSynthesizer *synthesizer;
In the .m of my view controller, I am using the synthesizer to play and pause a pre-made script I created.
For example:
- (void)userProfileData:(UserProfileData *)userProfileData didReceiveDict:(NSDictionary *)dict
{
    NSDictionary *resultsDict = [dict valueForKey:@"result"];
    textToSpeak = [resultsDict objectForKey:@"text"];
    UIBarButtonItem *pauseItem = [self.navigationItem.rightBarButtonItems objectAtIndex:0];
    [pauseItem setEnabled:YES];
    [self startSpeaking];
}

- (void)startSpeaking
{
    if (!self.synthesizer) {
        self.synthesizer = [[AVSpeechSynthesizer alloc] init];
        self.synthesizer.delegate = self;
    }
    [self speakNextUtterance];
}

- (void)speakNextUtterance
{
    AVSpeechUtterance *nextUtterance = [[AVSpeechUtterance alloc] initWithString:textToSpeak];
    nextUtterance.rate = 0.25f;
    [self.synthesizer speakUtterance:nextUtterance];
}
Before I created this synthesizer, I would navigate back to the parent view controller and dealloc would be called (I have a log statement in it to make sure it is called). However, as soon as I added this synthesizer, the dealloc is no longer being called. I am wondering why this is happening and how I can fix it. Any help would be amazing, thanks!
Solved the problem. @ChrisLoonam, you were a great help in the end. I just needed to stop the synthesizer beforehand, and everything was deallocated properly.
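For anyone hitting the same issue, a minimal sketch of that fix (assuming the synthesizer property from the question; viewWillDisappear: is my choice of place to stop, not something stated in the original answer):

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    // Stop any in-flight speech so the synthesizer is no longer busy,
    // which lets this view controller deallocate normally.
    if (self.synthesizer.isSpeaking) {
        [self.synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    }
    self.synthesizer.delegate = nil;
}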

iOS: play a system sound

I created a barcode scanner with Apple's built-in framework AVFoundation. Everything works fine for now.
I want to add a sound that is played when the barcode scan is complete. In my case that would be when a label on my screen is filled with the scanned number.
I know how to play a sound, but the sound gets repeated all the time, which is expected because the following method is always getting called:
captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
What I do is:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    CGRect highlightViewRect = CGRectZero;
    AVMetadataMachineReadableCodeObject *barCodeObject;
    NSString *detectionString = nil;
    NSArray *barCodeTypes = @[AVMetadataObjectTypeUPCECode, AVMetadataObjectTypeCode39Code, AVMetadataObjectTypeCode39Mod43Code, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode93Code, AVMetadataObjectTypeCode128Code, AVMetadataObjectTypePDF417Code, AVMetadataObjectTypeQRCode, AVMetadataObjectTypeAztecCode];

    for (AVMetadataObject *metadata in metadataObjects) {
        for (NSString *type in barCodeTypes) {
            if ([metadata.type isEqualToString:type])
            {
                barCodeObject = (AVMetadataMachineReadableCodeObject *)[prevLayer transformedMetadataObjectForMetadataObject:(AVMetadataMachineReadableCodeObject *)metadata];
                highlightViewRect = barCodeObject.bounds;
                detectionString = [(AVMetadataMachineReadableCodeObject *)metadata stringValue];
                break;
            }
        }

        if (detectionString != nil)
        {
            lblresult.text = detectionString;
            break;
        } else {
            lblresult.text = @"(none)";
        }
    }

    if (detectionString != nil && ![lblresult.text isEqual:@""]) {
        [self playSound];
    }

    highlightView.frame = highlightViewRect;
}
The method which is playing the sound
- (void)playSound {
    if (![lblresult.text isEqual:@""]) {
        AudioServicesPlaySystemSound(systemSoundID);
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
            AudioServicesDisposeSystemSoundID(systemSoundID);
        });
    }
}
I really don't know where I should call this method so that the sound is played only once when the label is not empty.
Thanks for your help and a fast answer!
It sounds like the sound is playing every time AVFoundation recognizes the barcode, which is many times a second for as long as your camera is pointed at that barcode.
I assume that you have an AVCaptureSession that you start running at some point (this is what causes captureOutput: to run), as well as a preview layer to see what the camera sees on screen, like this:
@property (strong, nonatomic) AVCaptureSession *session;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer *prevLayer;
...
- (void)viewDidLoad
{
    [_session startRunning];
    ...
If you called stopRunning right after a successful scan, like this:
if (detectionString != nil && ![lblresult.text isEqual:@""]) {
    [self playSound];
    [_session stopRunning];
}
then it should only play the sound one time. However, you'll also be left with a preview layer frozen on the last frame it captured, so you may want to consider calling:
[self.prevLayer removeFromSuperlayer];
This would remove the frozen image from the view entirely, and then the preview layer could be added back and scanning could be re-enabled by:
[self.barcodeScanAreaView.layer addSublayer:_prevLayer];
[_session startRunning];
Alternatively, you could include some logic to disable barcode recognition after a scan just occurred. Create a BOOL property and change its value after a successful scan, like:
if (detectionString != nil && !wasJustScanned && ![lblresult.text isEqual:@""]) {
    [self playSound];
    wasJustScanned = YES;
}
The big question is, of course, when to re-enable scanning. Perhaps you want the user to confirm that they want to keep what they just scanned (so maybe a button under the view), or perhaps the preview/scanning view is a modal view that is popped right after a successful scan. I'd recommend looking at existing apps to see what they do. Fundamentally, though, the key to not repeating the beep is either disabling capture or disabling processing of a captured barcode after the first successful recognition; one possible sketch of the latter follows.
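As one illustration of that last idea (not from the original answer), a sketch that gates recognition with a BOOL and re-enables it after a delay; the wasJustScanned property and the two-second interval are assumptions:

// Assumed property: @property (nonatomic) BOOL wasJustScanned;
if (detectionString != nil && !self.wasJustScanned && ![lblresult.text isEqual:@""]) {
    self.wasJustScanned = YES;
    [self playSound];

    // Re-enable recognition after two seconds (an arbitrary choice); a confirmation
    // button or a modal scan screen, as discussed above, would be alternative triggers.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        self.wasJustScanned = NO;
    });
}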

How to stop the voice during text-to-speech synthesis?

I used the following code to initiate text-to-speech synthesis using a button. But sometimes users may want to stop the voice in the middle of the speech. May I know if there is any way I can do that?
Thanks
Here is my code
@interface RMDemoStepViewController ()
@end

@implementation RMDemoStepViewController

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Add border to text box
    // Instantiate the object that will allow us to use text to speech
    self.speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
    [self.speechSynthesizer setDelegate:self];
}

- (IBAction)speakButtonWasPressed:(id)sender {
    [self speakText:[self.textView text]];
}

- (void)speakText:(NSString *)toBeSpoken {
    AVSpeechUtterance *utt = [AVSpeechUtterance speechUtteranceWithString:toBeSpoken];
    utt.rate = [self.speedSlider value];
    [self.speechSynthesizer speakUtterance:utt];
}

- (IBAction)speechSpeedShouldChange:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    NSInteger val = lround(slider.value);
    NSLog(@"%@", [NSString stringWithFormat:@"%ld", (long)val]);
}

@end
But sometimes users may want to stop the voice in the middle of the speech.
To stop speech, send the speech synthesizer a -stopSpeakingAtBoundary: message:
[self.speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
Use AVSpeechBoundaryWord instead of AVSpeechBoundaryImmediate if you want the speech to continue to the end of the current word rather than stopping instantly.
You can also pause speech instead of stopping it altogether.
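If you want pause/resume rather than a full stop, a minimal sketch (using the same speechSynthesizer property from the question; the toggle action itself is my addition):

- (IBAction)pauseResumeButtonWasPressed:(id)sender
{
    if (self.speechSynthesizer.isPaused) {
        // Resume from where the speech was paused
        [self.speechSynthesizer continueSpeaking];
    } else if (self.speechSynthesizer.isSpeaking) {
        // Pause immediately; use AVSpeechBoundaryWord to finish the current word first
        [self.speechSynthesizer pauseSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    }
}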
This will completely stop the speech synthesizer: -stopSpeakingAtBoundary:. Note that the trailing colon is part of the selector, because the method takes a boundary argument. The full call, to fit in with your code, is: [self.speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];

An issue with AVSpeechSynthesizer, Any workarounds?

I am using AVSpeechSynthesizer to play text. I have an array of utterances to play.
NSMutableArray *utterances = [[NSMutableArray alloc] init];
for (NSString *text in textArray) {
    AVSpeechUtterance *welcome = [[AVSpeechUtterance alloc] initWithString:text];
    welcome.rate = 0.25;
    welcome.voice = voice;
    welcome.pitchMultiplier = 1.2;
    welcome.postUtteranceDelay = 0.25;
    [utterances addObject:welcome];
}
lastUtterance = [utterances lastObject];

for (AVSpeechUtterance *utterance in utterances) {
    [speech speakUtterance:utterance];
}
I have a cancel button to stop speaking. When I click the cancel button while the first utterance is being spoken, the speech stops and all the utterances in the queue are cleared. If I press the cancel button after the first utterance has been spoken (i.e. during the second utterance), stopping the speech does not flush the utterance queue. The code that I am using for this is:
[speech stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
Can someone confirm if this is a bug in the API or am I using the API incorrectly? If it is a bug, is there any workaround to resolve this issue?
I found a workaround:
- (void)stopSpeech
{
    if ([_speechSynthesizer isSpeaking]) {
        [_speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:@""];
        [_speechSynthesizer speakUtterance:utterance];
        [_speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    }
}
Call stopSpeakingAtBoundary:, enqueue an empty utterance and call stopSpeakingAtBoundary: again to stop and clean the queue.
All the answers here failed for me; what I came up with is stopping the synthesizer and then re-instantiating it:
- (void)stopSpeech
{
    if ([_speechSynthesizer isSpeaking]) {
        [_speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
        _speechSynthesizer = [AVSpeechSynthesizer new];
        _speechSynthesizer.delegate = self;
    }
}
Quite likely to be a bug, in that the delegate method speechSynthesizer:didCancelSpeechUtterance: isn't called after the first utterance.
A workaround would be to chain the utterances rather than have them in an array and queue them up at once.
Use the delegate method speechSynthesizer:didFinishSpeechUtterance: to increment an array pointer and speak the next text from that array. Then, when trying to stop the speech, set a BOOL that is checked in this delegate method before attempting to speak the next text.
For example:
1) implement the protocol in the view controller that is doing the speech synthesis
#import <UIKit/UIKit.h>
@import AVFoundation;

@interface ViewController : UIViewController <AVSpeechSynthesizerDelegate>
@end
2) instantiate the AVSpeechSynthesizer and set its delegate to self
speechSynthesizer = [AVSpeechSynthesizer new];
speechSynthesizer.delegate = self;
3) use an utterance counter, set to zero at start of speaking
4) use an array of texts to speak
textArray = @[@"Mary had a little lamb, its fleece",
              @"was white as snow",
              @"and everywhere that Mary went",
              @"that sheep was sure to go"];
5) add delegate method didFinishSpeechUtterance to speak the next utterance from the array
of texts and increment the utterance counter
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance
{
    if (utteranceCounter < utterances.count) {
        AVSpeechUtterance *nextUtterance = utterances[utteranceCounter];
        [synthesizer speakUtterance:nextUtterance];
        utteranceCounter++;
    }
}
6) to stop speaking, set the utterance counter to the count of the texts array and attempt to get the synthesizer to stop
utteranceCounter = utterances.count;
BOOL speechStopped = [speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
if (!speechStopped) {
    [speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryWord];
}
7) when speaking again, reset the utterance counter to zero (see the sketch below)
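Putting steps 3-7 together, a start method might look roughly like this (the startSpeaking name is my assumption; utteranceCounter, utterances, textArray and speechSynthesizer are the names used above):

- (void)startSpeaking
{
    // Steps 3 and 7: reset the counter before a new run
    utteranceCounter = 0;

    // Build one utterance per text and speak the first; the delegate method above
    // then speaks the rest one at a time.
    utterances = [NSMutableArray array];
    for (NSString *text in textArray) {
        [utterances addObject:[AVSpeechUtterance speechUtteranceWithString:text]];
    }
    if (utterances.count > 0) {
        AVSpeechUtterance *first = utterances[utteranceCounter];
        utteranceCounter++;
        [speechSynthesizer speakUtterance:first];
    }
}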
I did something similar to what SPA mentioned. Speaking one item at a time from a loop.
Here is the idea:
NSMutableArray *arr; //array of NSStrings, declared as property
AVSpeechUtterance *currentUtterence; //declared as property
AVSpeechSynthesizer *synthesizer; //property
- (void)viewDidLoad
{
    [super viewDidLoad];
    synthesizer = [[AVSpeechSynthesizer alloc] init];
    //EDIT -- Added the line below
    synthesizer.delegate = self;
    arr = [self populateArrayWithString]; // generates strings to speak
}
// Assuming one thread will call this
- (void)speakNext
{
    if (arr.count > 0)
    {
        NSString *str = [arr objectAtIndex:0];
        [arr removeObjectAtIndex:0];
        currentUtterence = [[AVSpeechUtterance alloc] initWithString:str];
        //EDIT -- Commented out the line below
        //currentUtterence.delegate = self;
        [synthesizer speakUtterance:currentUtterence];
    }
}
- (void)speechSynthesizer:(AVSpeechSynthesizer *)avsSynthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance
{
    if ([synthesizer isEqual:avsSynthesizer] && [utterance isEqual:currentUtterence])
        [self speakNext];
}
- (IBAction)userTappedCancelledButton:(id)sender
{
    //EDIT <- replaced the object the method gets called on.
    [synthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    [arr removeAllObjects];
}
didCancelSpeechUtterance does not work with the same AVSpeechSynthesizer object even though the utterances are chained in the didFinishSpeechUtterance method.
- (void)speakInternal
{
    speech = [[AVSpeechSynthesizer alloc] init];
    speech.delegate = self;
    [speech speakUtterance:[utterances objectAtIndex:position++]];
}
In speakInternal, I am creating the AVSpeechSynthesizer object multiple times to ensure that didCancelSpeechUtterance works. Kind of a workaround.
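For completeness, a hedged sketch of a matching cancel handler for that workaround (the cancelTapped: name is my assumption; speech, position and utterances are from the snippet above):

- (IBAction)cancelTapped:(id)sender
{
    // Stop the current synthesizer; because speakInternal creates a fresh
    // AVSpeechSynthesizer per utterance, the cancel delegate callback fires reliably.
    [speech stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];

    // Reset the queue position so the next speakInternal call starts from the beginning.
    position = 0;
}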
