How to detect Network Signal Strength in iOS (Reachability)

I am creating a new travel application for iOS. The application is heavily dependent on maps and will include two of them.
The first map (Apple Maps) is used when the user has a strong network signal.
The second map (offline MapBox) is used when there is no network or only a very low signal.
Why do I have two different maps in one application? My application gives directions, so when the user has a very low signal or none at all, it falls back to the offline MapBox map. Also, the Apple Maps view will have Yelp integration, which the offline MapBox map will not.
So my question: how can I detect the network signal strength on WiFi, 4G LTE, and 3G?

My original thought was to time the download of a file, and see how long it takes:
@interface ViewController () <NSURLSessionDelegate, NSURLSessionDataDelegate>

@property (nonatomic) CFAbsoluteTime startTime;
@property (nonatomic) CFAbsoluteTime stopTime;
@property (nonatomic) long long bytesReceived;
@property (nonatomic, copy) void (^speedTestCompletionHandler)(CGFloat megabytesPerSecond, NSError *error);

@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    [self testDownloadSpeedWithTimeout:5.0 completionHandler:^(CGFloat megabytesPerSecond, NSError *error) {
        NSLog(@"%0.1f; error = %@", megabytesPerSecond, error);
    }];
}

/// Test speed of download
///
/// Test the speed of a connection by downloading some predetermined resource. Alternatively, you could add the
/// URL of what to use for testing the connection as a parameter to this method.
///
/// @param timeout           The maximum amount of time for the request.
/// @param completionHandler The block to be called when the request finishes (or times out).
///                          The error parameter to this closure indicates whether there was an error downloading
///                          the resource (other than timeout).
///
/// @note                    The timeout parameter doesn't have to be enough to download the entire
///                          resource, but rather just sufficiently long to measure the speed of the download.

- (void)testDownloadSpeedWithTimeout:(NSTimeInterval)timeout completionHandler:(nonnull void (^)(CGFloat megabytesPerSecond, NSError * _Nullable error))completionHandler {
    NSURL *url = [NSURL URLWithString:@"http://insert.your.site.here/yourfile"];

    self.startTime = CFAbsoluteTimeGetCurrent();
    self.stopTime = self.startTime;
    self.bytesReceived = 0;
    self.speedTestCompletionHandler = completionHandler;

    NSURLSessionConfiguration *configuration = [NSURLSessionConfiguration ephemeralSessionConfiguration];
    configuration.timeoutIntervalForResource = timeout;
    NSURLSession *session = [NSURLSession sessionWithConfiguration:configuration delegate:self delegateQueue:nil];
    [[session dataTaskWithURL:url] resume];
}

- (void)URLSession:(NSURLSession *)session dataTask:(NSURLSessionDataTask *)dataTask didReceiveData:(NSData *)data {
    self.bytesReceived += [data length];
    self.stopTime = CFAbsoluteTimeGetCurrent();
}

- (void)URLSession:(NSURLSession *)session task:(NSURLSessionTask *)task didCompleteWithError:(NSError *)error {
    CFAbsoluteTime elapsed = self.stopTime - self.startTime;
    CGFloat speed = elapsed != 0 ? self.bytesReceived / elapsed / 1024.0 / 1024.0 : -1;

    // Treat a timeout as no error (we're testing speed, not worried about whether we got the entire resource or not).
    if (error == nil || ([error.domain isEqualToString:NSURLErrorDomain] && error.code == NSURLErrorTimedOut)) {
        self.speedTestCompletionHandler(speed, nil);
    } else {
        self.speedTestCompletionHandler(speed, error);
    }
}

@end
Note, this measures the speed including the latency of starting the connection. You could alternatively initialize startTime in didReceiveResponse, if you wanted to factor out that initial latency.
Having done that, in retrospect, I don't like spending time or bandwidth downloading something that has no practical benefit to the app. So, as an alternative, I might suggest a far more pragmatic approach: why don't you just try to open an MKMapView and see how long it takes to finish downloading the map? If it fails, or if it takes more than a certain amount of time, then switch to your offline map. Again, there is quite a bit of variability here (not only because of network bandwidth and latency, but also because some map images appear to be cached), so make sure to set kMaximumElapsedTime large enough to handle all the reasonable permutations of a successful connection (i.e., don't be too aggressive in using a low value).
To do this, just make sure to set your view controller to be the delegate of the MKMapView. And then you can do:
@interface ViewController () <MKMapViewDelegate>

@property (nonatomic, strong) NSDate *startDate;

@end

static CGFloat const kMaximumElapsedTime = 5.0;

@implementation ViewController

// insert the rest of your implementation here

#pragma mark - MKMapViewDelegate methods

- (void)mapViewWillStartLoadingMap:(MKMapView *)mapView {
    NSDate *localStartDate = [NSDate date];
    self.startDate = localStartDate;

    double delayInSeconds = kMaximumElapsedTime;
    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delayInSeconds * NSEC_PER_SEC));
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        // Check to see if:
        //   (a) the start date property is not nil (because if it is, we
        //       finished the map download); and
        //   (b) the start date property is the same as the value we set
        //       above, as it's possible this map download is done, but
        //       we're already in the process of downloading the next
        //       map.
        if (self.startDate && self.startDate == localStartDate)
        {
            [[[UIAlertView alloc] initWithTitle:nil
                                        message:[NSString stringWithFormat:@"Map timed out after %.1f", delayInSeconds]
                                       delegate:nil
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
        }
    });
}

- (void)mapViewDidFailLoadingMap:(MKMapView *)mapView withError:(NSError *)error {
    self.startDate = nil;

    [[[UIAlertView alloc] initWithTitle:nil
                                message:@"Online map failed"
                               delegate:nil
                      cancelButtonTitle:@"OK"
                      otherButtonTitles:nil] show];
}

- (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView
{
    NSTimeInterval elapsed = [[NSDate date] timeIntervalSinceDate:self.startDate];
    self.startDate = nil;

    self.statusLabel.text = [NSString stringWithFormat:@"%.1f seconds", elapsed];
}

For Swift
class NetworkSpeedProvider: NSObject {

    var startTime = CFAbsoluteTime()
    var stopTime = CFAbsoluteTime()
    var bytesReceived: CGFloat = 0
    var speedTestCompletionHandler: ((_ megabytesPerSecond: CGFloat, _ error: Error?) -> Void)? = nil

    func test() {
        testDownloadSpeed(withTimeout: 5.0, completionHandler: { (megabytesPerSecond: CGFloat, error: Error?) -> Void in
            print("\(megabytesPerSecond); error = \(String(describing: error))")
        })
    }
}

extension NetworkSpeedProvider: URLSessionDataDelegate, URLSessionDelegate {

    func testDownloadSpeed(withTimeout timeout: TimeInterval, completionHandler: @escaping (_ megabytesPerSecond: CGFloat, _ error: Error?) -> Void) {
        // You can set any URL that points to a reasonably sized file here.
        let urlForSpeedTest = URL(string: "https://any.jpg")

        startTime = CFAbsoluteTimeGetCurrent()
        stopTime = startTime
        bytesReceived = 0
        speedTestCompletionHandler = completionHandler

        let configuration = URLSessionConfiguration.ephemeral
        configuration.timeoutIntervalForResource = timeout
        let session = URLSession(configuration: configuration, delegate: self, delegateQueue: nil)

        guard let checkedUrl = urlForSpeedTest else { return }
        session.dataTask(with: checkedUrl).resume()
    }

    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
        bytesReceived += CGFloat(data.count)
        stopTime = CFAbsoluteTimeGetCurrent()
    }

    func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
        let elapsed = stopTime - startTime
        let speed: CGFloat = elapsed != 0 ? bytesReceived / CGFloat(elapsed) / 1024.0 / 1024.0 : -1.0

        // Treat a timeout as no error (we're testing speed, not worried about whether we got the entire resource or not).
        if error == nil || (((error as NSError?)?.domain == NSURLErrorDomain) && (error as NSError?)?.code == NSURLErrorTimedOut) {
            speedTestCompletionHandler?(speed, nil)
        } else {
            speedTestCompletionHandler?(speed, error)
        }
    }
}
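If you want to factor out the initial connection latency (as noted after the Objective-C version), one option is to reset the timing state when the response arrives. A minimal sketch, assuming the same NetworkSpeedProvider extension above:
func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive response: URLResponse, completionHandler: @escaping (URLSession.ResponseDisposition) -> Void) {
    // Restart the clock once the server has responded, so that connection
    // setup latency is excluded from the measured throughput.
    startTime = CFAbsoluteTimeGetCurrent()
    stopTime = startTime
    bytesReceived = 0
    completionHandler(.allow)
}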

I believe a Google search will help.
Look at the following threads on Stack Overflow:
iOS wifi scan, signal strength
iPhone signal strength
So, I don't think you can do this without using private APIs.

Related

How to wait for a delegate function to finish before returning value from main function

I have a custom function for capturing TrueDepth camera information, and the function returns before the delegate methods have finished processing the captured photo. I need to somehow wait until the delegates have all completed before I return the correct value.
I tried wrapping the main function call into a synchronized block, but that did not solve the problem.
- (NSDictionary *)capture:(NSDictionary *)options resolve:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject
{
    if (@available(iOS 11.1, *)) {
        // Set photo settings to capture depth data
        AVCapturePhotoSettings *photoSettings = [AVCapturePhotoSettings photoSettingsWithFormat:@{AVVideoCodecKey : AVVideoCodecJPEG}];
        photoSettings.depthDataDeliveryEnabled = true;
        photoSettings.depthDataFiltered = false;

        @synchronized(self) {
            [self.photoOutput capturePhotoWithSettings:photoSettings delegate:self];
        }
    }
    // Somehow need to wait here until the delegate functions finish before returning
    return self.res;
}
The delegate function which gets called too late:
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(NSError *)error
{
    Cam *camera = [[Cam alloc] init];
    self.res = [camera extractDepthInfo:photo];
}
Currently, nil is returned before the delegate is ever called; only afterwards does the delegate method assign the desired result to self.res.
I believe that what you are looking for is dispatch_semaphore_t.
Semaphores allow you to lock a thread until a secondary action is performed. This way, you can postpone the return of the method until the delegate has returned (if you are operating on a secondary thread).
The problem with such an approach is that you will be locking the thread! So, if you are operating in the main thread, your app will become unresponsive.
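If you do go down that road, a minimal sketch of the semaphore approach might look like this (the names here are placeholders, and the waiting must happen off the main thread):
let semaphore = DispatchSemaphore(value: 0)
var capturedResult: [AnyHashable: Any]?

func captureAndWait() -> [AnyHashable: Any]? {
    // Kick off the capture here, then block this (non-main!) thread
    // until the delegate signals that the result is ready.
    semaphore.wait()
    return capturedResult
}

// Called from the AVCapturePhotoCaptureDelegate callback once the photo is processed.
func delegateDidFinish(with result: [AnyHashable: Any]) {
    capturedResult = result
    semaphore.signal()
}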
Instead, I would recommend that you consider moving the response to a completion block, similar to:
- (void)capture:(NSDictionary *)options resolve:(RCTPromiseResolveBlock)resolve reject:(RCTPromiseRejectBlock)reject completion:(void (^)(NSDictionary *))completion {
    self.completion = completion;
    ...
}
And call the completion at the end:
- (void)captureOutput:(AVCapturePhotoOutput *)output didFinishProcessingPhoto:(AVCapturePhoto *)photo error:(NSError *)error
{
    Cam *camera = [[Cam alloc] init];
    self.res = [camera extractDepthInfo:photo];

    self.completion(self.res);
}
=== Edit: Swift Code ===
The code above would be translated to something like:
var completion: (([AnyHashable: Any]) -> Void)?

func capture(options: [AnyHashable: Any], resolve: RCTPromiseResolveBlock, reject: RCTPromiseRejectBlock, completion: @escaping ([AnyHashable: Any]) -> Void) {
    self.completion = completion
    ...
}

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    let cam = Cam()
    let result = cam.extractDepthInfo(photo)

    self.completion?(result)
}
An important note here is that the completion block needs to be marked as @escaping in the capture method, given that it is stored and called after the method returns.

How do I convert voice to text in iOS [closed]

As far as I know, Apple's native frameworks don't have APIs for converting voice to text, and we have to use a third-party framework to do that, which has many drawbacks (for example, the user has to speak into the microphone to convert from voice to text).
But I can find lots of information about converting text to voice, not the other way around.
I couldn't find any clear information about this, and most of what I did find was uncertain.
If someone could shed some light on it, that would be really great!
For Objective C, I wrote a Speech Converter class a while back to convert voice to text.
Step 1: Create a Speech Converter class
Create a new Cocoa class and subclass it from NSObject.
Name it, let's say, ATSpeechRecognizer.
In ATSpeechRecognizer.h:
#import <Foundation/Foundation.h>
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

typedef NS_ENUM(NSInteger, ATSpeechRecognizerState) {
    ATSpeechRecognizerStateRunning,
    ATSpeechRecognizerStateStopped
};

@protocol ATSpeechDelegate <NSObject>

@required
/* This method relays parsed text from Speech to the delegate responder class. */
- (void)convertedSpeechToText:(NSString *)parsedText;

/* This method relays a change in speech recognition availability to the delegate responder class. */
- (void)speechRecAvailabilityChanged:(BOOL)status;

/* This method relays error messages to the delegate responder class. */
- (void)sendErrorInfoToViewController:(NSString *)errorMessage;

@optional
/* This method relays whether speech recognition is running or stopped to the delegate responder class.
   The state will be either ATSpeechRecognizerStateRunning or ATSpeechRecognizerStateStopped.
   You may or may not implement this method. */
- (void)changeStateIndicator:(ATSpeechRecognizerState)state;

@end

@interface ATSpeechRecognizer : NSObject <SFSpeechRecognizerDelegate>

+ (ATSpeechRecognizer *)sharedObject;

/* Delegate to communicate with requesting VCs */
@property (weak, nonatomic) id<ATSpeechDelegate> delegate;

/* Class methods */
- (void)toggleRecording;
- (void)activateSpeechRecognizerWithLocaleIdentifier:(NSString *)localeIdentifier andBlock:(void (^)(BOOL isAuthorized))successBlock;

@end
And in ATSpeechRecognizer.m:
#import "ATSpeechRecognizer.h"
#interface ATSpeechRecognizer ()
/*This object handles the speech recognition requests. It provides an audio input to the speech recognizer.*/
#property SFSpeechAudioBufferRecognitionRequest *speechAudioRecRequest;
/*The recognition task where it gives you the result of the recognition request. Having this object is handy as you can cancel or stop the task. */
#property SFSpeechRecognitionTask *speechRecogTask;
/*This is your Speech recognizer*/
#property SFSpeechRecognizer *speechRecognizer;
/*This is your audio engine. It is responsible for providing your audio input.*/
#property AVAudioEngine *audioEngine;
#end
#implementation ATSpeechRecognizer
#pragma mark - Constants
//Error Messages
#define kErrorMessageAuthorize #"You declined the permission to perform speech Permission. Please authorize the operation in your device settings."
#define kErrorMessageRestricted #"Speech recognition isn't available on this OS version. Please upgrade to iOS 10 or later."
#define kErrorMessageNotDetermined #"Speech recognition isn't authorized yet"
#define kErrorMessageAudioInputNotFound #"This device has no audio input node"
#define kErrorMessageRequestFailed #"Unable to create an SFSpeechAudioBufferRecognitionRequest object"
#define kErrorMessageAudioRecordingFailed #"Unable to start Audio recording due to failure in Recording Engine"
#pragma mark - Singleton methods
+ (ATSpeechRecognizer *)sharedObject {
static ATSpeechRecognizer *sharedClass = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
sharedClass = [[self alloc] init];
});
return sharedClass;
}
- (id)init {
if (self = [super init]) {
}
return self;
}
#pragma mark - Recognition methods

- (void)activateSpeechRecognizerWithLocaleIdentifier:(NSString *)localeIdentifier andBlock:(void (^)(BOOL isAuthorized))successBlock {
    // Enter the desired locale identifier here
    if ([localeIdentifier length] > 0) {
        NSLocale *locale = [[NSLocale alloc] initWithLocaleIdentifier:localeIdentifier];
        _speechRecognizer = [[SFSpeechRecognizer alloc] initWithLocale:locale];
        _speechRecognizer.delegate = self;
        _audioEngine = [[AVAudioEngine alloc] init];
        [self getSpeechRecognizerAuthenticationStatusWithSuccessBlock:^(BOOL isAuthorized) {
            successBlock(isAuthorized);
        }];
    }
    else {
        successBlock(NO);
    }
}

/* Microphone usage must be authorized in the Info.plist (NSMicrophoneUsageDescription). */
- (void)toggleRecording {
    if (_audioEngine.isRunning) {
        [self stopAudioEngine];
    }
    else {
        [self startAudioEngine];
    }
}

#pragma mark - Internal Methods

/*
 In case different buttons are used for recording and stopping, these methods should be called individually. Otherwise use -(void)toggleRecording.
 */
- (void)startAudioEngine {
    if ([self isDelegateValidForSelector:NSStringFromSelector(@selector(changeStateIndicator:))]) {
        [_delegate changeStateIndicator:ATSpeechRecognizerStateRunning];
    }
    [self startRecordingSpeech];
}

- (void)stopAudioEngine {
    if ([self isDelegateValidForSelector:NSStringFromSelector(@selector(changeStateIndicator:))]) {
        [_delegate changeStateIndicator:ATSpeechRecognizerStateStopped];
    }
    [_audioEngine stop];
    [_speechAudioRecRequest endAudio];

    self.speechRecogTask = nil;
    self.speechAudioRecRequest = nil;
}

/*
 All the voice data is transmitted to Apple's backend for processing. Therefore, it is mandatory to get the user's authorization. Speech recognition must be authorized in the Info.plist (NSSpeechRecognitionUsageDescription).
 */
- (void)getSpeechRecognizerAuthenticationStatusWithSuccessBlock:(void (^)(BOOL isAuthorized))successBlock {
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        switch (status) {
            case SFSpeechRecognizerAuthorizationStatusAuthorized:
                successBlock(YES);
                break;
            case SFSpeechRecognizerAuthorizationStatusDenied:
                [self sendErrorMessageToDelegate:kErrorMessageAuthorize];
                successBlock(NO);
                break;
            case SFSpeechRecognizerAuthorizationStatusRestricted:
                [self sendErrorMessageToDelegate:kErrorMessageRestricted];
                successBlock(NO);
                break;
            case SFSpeechRecognizerAuthorizationStatusNotDetermined:
                [self sendErrorMessageToDelegate:kErrorMessageNotDetermined];
                successBlock(NO);
                break;
            default:
                break;
        }
    }];
}
- (void)startRecordingSpeech {
    /*
     Check if the task is already running. If yes, cancel it and start anew.
     */
    if (_speechRecogTask != nil) {
        [_speechRecogTask cancel];
        _speechRecogTask = nil;
    }

    /*
     Prepare for the audio recording. Here we set the category of the session to record, the mode to measurement, and activate it.
     */
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    @try {
        [audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
        [audioSession setMode:AVAudioSessionModeMeasurement error:nil];
        [audioSession setActive:YES error:nil];
    } @catch (NSException *exception) {
        [self sendErrorMessageToDelegate:exception.reason];
    }

    /*
     Instantiate the recognition request. Here we create the SFSpeechAudioBufferRecognitionRequest object. Later, we use it to pass our audio data to Apple's servers.
     */
    @try {
        _speechAudioRecRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    } @catch (NSException *exception) {
        [self sendErrorMessageToDelegate:kErrorMessageRequestFailed];
    }

    /*
     Check if the audioEngine (your device) has an audio input for recording.
     */
    if (_audioEngine.inputNode != nil) {
        AVAudioInputNode *inputNode = _audioEngine.inputNode;

        /* If true, partial (non-final) results for each utterance will be reported. Default is true. */
        _speechAudioRecRequest.shouldReportPartialResults = YES;

        /* Start the recognition by calling the recognitionTask method of our speechRecognizer. Its completion handler will be called every time the recognition engine has received input, has refined its current recognition, or has been canceled or stopped, and it will return a final transcript. */
        _speechRecogTask = [_speechRecognizer recognitionTaskWithRequest:_speechAudioRecRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
            BOOL isFinal = NO;
            if (result != nil) {
                if ([self isDelegateValidForSelector:NSStringFromSelector(@selector(convertedSpeechToText:))]) {
                    [_delegate convertedSpeechToText:[[result bestTranscription] formattedString]];
                }
                isFinal = [result isFinal]; // True if the hypotheses will not change; speech processing is complete.
            }

            // If there was an error or the result is final, end it.
            if (error != nil || isFinal) {
                [_audioEngine stop];
                [inputNode removeTapOnBus:0];

                self.speechRecogTask = nil;
                self.speechAudioRecRequest = nil;

                if (error != nil) {
                    [self stopAudioEngine];
                    [self sendErrorMessageToDelegate:[NSString stringWithFormat:@"%li - %@", (long)error.code, error.localizedDescription]];
                }
            }
        }];

        /* Add an audio input to the recognition request. Note that it is OK to add the audio input after starting the recognition task. The Speech framework will start recognizing as soon as an audio input has been added. */
        AVAudioFormat *recordingFormat = [inputNode outputFormatForBus:0];
        [inputNode installTapOnBus:0 bufferSize:1024 format:recordingFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
            [self.speechAudioRecRequest appendAudioPCMBuffer:buffer];
        }];

        /* Prepare and start the audio engine. */
        [_audioEngine prepare];
        @try {
            [_audioEngine startAndReturnError:nil];
        } @catch (NSException *exception) {
            [self sendErrorMessageToDelegate:kErrorMessageAudioRecordingFailed];
        }
    }
    else {
        [self sendErrorMessageToDelegate:kErrorMessageAudioInputNotFound];
    }
}
- (BOOL)isDelegateValidForSelector:(NSString *)selectorName {
    if (_delegate != nil && [_delegate respondsToSelector:NSSelectorFromString(selectorName)]) {
        return YES;
    }
    return NO;
}

- (void)sendErrorMessageToDelegate:(NSString *)errorMessage {
    if ([self isDelegateValidForSelector:NSStringFromSelector(@selector(sendErrorInfoToViewController:))]) {
        [_delegate sendErrorInfoToViewController:errorMessage];
    }
}

#pragma mark - Speech Recognizer Delegate Methods

- (void)speechRecognizer:(SFSpeechRecognizer *)speechRecognizer availabilityDidChange:(BOOL)available {
    if (!available) {
        [self stopAudioEngine];
    }
    [_delegate speechRecAvailabilityChanged:available];
}

@end
And that's it. Now you can use this class anywhere in any project you want to convert voice to text. Just be sure to read the guidance comments if you feel confused about how it works.
Step 2: Set up the ATSpeechRecognizer Class in your VC
Import ATSpeechRecognizer in your View Controller and set up the delegate like this:
#import "ATSpeechRecognizer.h"
#interface ViewController : UIViewController <ATSpeechDelegate>{
BOOL isRecAllowed;
}
Call the following method from the VC's viewDidLoad to set it up and get it running:
- (void)setUpSpeechRecognizerService {
    [ATSpeechRecognizer sharedObject].delegate = self;
    [[ATSpeechRecognizer sharedObject] activateSpeechRecognizerWithLocaleIdentifier:@"en-US" andBlock:^(BOOL isAuthorized) {
        isRecAllowed = isAuthorized; /* Is the operation allowed or not? */
    }];
}
Now set up delegate methods:
#pragma mark - Speech Recog Delegates

- (void)convertedSpeechToText:(NSString *)parsedText {
    if (parsedText != nil) {
        _txtView.text = parsedText; // You got text from voice. Use it as you want.
    }
}

- (void)speechRecAvailabilityChanged:(BOOL)status {
    isRecAllowed = status; // The availability of speech recognition has changed. Use this flag to allow/stop operations.
}

- (void)changeStateIndicator:(ATSpeechRecognizerState)state {
    if (state == ATSpeechRecognizerStateStopped) {
        // Speech recognizer is stopped
        _lblState.text = @"Stopped";
    }
    else {
        // Speech recognizer is running
        _lblState.text = @"Running";
    }
    _txtView.text = @"";
}

- (void)sendErrorInfoToViewController:(NSString *)errorMessage {
    [self showPopUpForErrorMessage:errorMessage]; /* Some error occurred. Show it to the user. */
}
To Start Conversion of Voice to Text:
- (IBAction)btnRecordTapped:(id)sender {
    if (!isRecAllowed) {
        [self showPopUpForErrorMessage:@"Speech recognition is either not authorized or not available on this device. Please authorize the operation or upgrade to the latest iOS. If you have done all this, check your internet connectivity."];
    }
    else {
        [[ATSpeechRecognizer sharedObject] toggleRecording]; /* If the speech recognizer is running, this turns it off; if it is off, this turns it on. */
        /*
         If you want to do it manually, use the startAudioEngine and stopAudioEngine methods to perform those operations explicitly instead of toggleRecording.
         */
    }
}
And that's it. All the explanation you need is in the code comments. Ping me if anything needs further clarification.
Here is the full code for the same in Swift:
import UIKit
import Speech

public class ViewController: UIViewController, SFSpeechRecognizerDelegate {

    // MARK: Properties

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()

    @IBOutlet var textView: UITextView!
    @IBOutlet var recordButton: UIButton!

    // MARK: UIViewController

    public override func viewDidLoad() {
        super.viewDidLoad()

        // Disable the record button until authorization has been granted.
        recordButton.isEnabled = false
    }

    override public func viewDidAppear(_ animated: Bool) {
        speechRecognizer.delegate = self

        SFSpeechRecognizer.requestAuthorization { authStatus in
            /*
             The callback may not be called on the main thread. Add an
             operation to the main queue to update the record button's state.
             */
            OperationQueue.main.addOperation {
                switch authStatus {
                case .authorized:
                    self.recordButton.isEnabled = true
                case .denied:
                    self.recordButton.isEnabled = false
                    self.recordButton.setTitle("User denied access to speech recognition", for: .disabled)
                case .restricted:
                    self.recordButton.isEnabled = false
                    self.recordButton.setTitle("Speech recognition restricted on this device", for: .disabled)
                case .notDetermined:
                    self.recordButton.isEnabled = false
                    self.recordButton.setTitle("Speech recognition not yet authorized", for: .disabled)
                }
            }
        }
    }

    private func startRecording() throws {
        // Cancel the previous task if it's running.
        if let recognitionTask = recognitionTask {
            recognitionTask.cancel()
            self.recognitionTask = nil
        }

        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

        guard let inputNode = audioEngine.inputNode else { fatalError("Audio engine has no input node") }
        guard let recognitionRequest = recognitionRequest else { fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object") }

        // Configure the request so that results are returned before audio recording is finished.
        recognitionRequest.shouldReportPartialResults = true

        // A recognition task represents a speech recognition session.
        // We keep a reference to the task so that it can be cancelled.
        recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
            var isFinal = false

            if let result = result {
                self.textView.text = result.bestTranscription.formattedString
                isFinal = result.isFinal
            }

            if error != nil || isFinal {
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)

                self.recognitionRequest = nil
                self.recognitionTask = nil

                self.recordButton.isEnabled = true
                self.recordButton.setTitle("Start Recording", for: [])
            }
        }

        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()

        try audioEngine.start()

        textView.text = "(Go ahead, I'm listening)"
    }

    // MARK: SFSpeechRecognizerDelegate

    public func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer, availabilityDidChange available: Bool) {
        if available {
            recordButton.isEnabled = true
            recordButton.setTitle("Start Recording", for: [])
        } else {
            recordButton.isEnabled = false
            recordButton.setTitle("Recognition not available", for: .disabled)
        }
    }

    // MARK: Interface Builder actions

    @IBAction func recordButtonTapped() {
        if audioEngine.isRunning {
            audioEngine.stop()
            recognitionRequest?.endAudio()
            recordButton.isEnabled = false
            recordButton.setTitle("Stopping", for: .disabled)
        } else {
            try! startRecording()
            recordButton.setTitle("Stop recording", for: [])
        }
    }
}

On Demand Resources - Estimated Time (and how to show an alert depending on the download progress)

Is there a way to get the estimated time of an On Demand Resources download?
I'd like to show an alert until the resources are all downloaded.
[alertDownload showCustom:self image:[UIImage imageNamed:@"icon.jpg"]
                    color:[UIColor blueColor]
                    title:@"Download..."
                 subTitle:@"Download in progress"
         closeButtonTitle:nil
                 duration: ODR ETA];
Right now I have
if (request1.progress.fractionCompleted < 1) {
    // code above
}
but the alert will not automatically disappear when the download is completed; it only respects the duration of the alert.
OK, if you can get the fraction complete value and you can measure time, then you know how long you have left.
When you start the download, record the start time in an instance variable:
@interface MyClass () {
    NSTimeInterval _downloadStartTime;
}
@end

@implementation MyClass

- (void)startDownload
{
    ...
    _downloadStartTime = [NSDate timeIntervalSinceReferenceDate];
    ...
}
and then in your notification handler, where you receive the fraction complete, use:
double fractionComplete = 0.2; // For example

if (fractionComplete > 0.0) { // Avoid divide-by-zero
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    NSTimeInterval elapsed = now - _downloadStartTime;
    double timeLeft = (elapsed / fractionComplete) * (1.0 - fractionComplete);
}
Note: I have not tackled how you display the alert dialog, and I don't think the logic you are using will work (you don't want to display a new alert every time you get an update). I am avoiding that whole area and concentrating on the ETA logic only.
So, also thanks to the help of @trojanfoe, I achieved it this way.
Basically, I'm not setting the alert duration when creating the alert; I'm updating it depending on the download progress.
Until the download finishes, I'm repeatedly setting the duration to 20.0f.
Then, when the download completes, I'm setting the duration to 1.0f (so the alert will disappear in one second).
NSTimeInterval _alertDuration;

- (void)viewDidLoad {
    [request1 conditionallyBeginAccessingResourcesWithCompletionHandler:^(BOOL resourcesAvailable) {
        if (resourcesAvailable) {
            // use it
        } else {
            [request1 beginAccessingResourcesWithCompletionHandler:^(NSError * _Nullable error) {
                if (error == nil) {
                    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
                        [alertDownload showCustom:self image:[UIImage imageNamed:@"icon.jpg"]
                                            color:[UIColor blueColor]
                                            title:@"Download..."
                                         subTitle:@"Download in progress"
                                 closeButtonTitle:nil
                                         duration:_alertDuration];
                    }];
                } else {
                    // handle error
                }
            }];
        }
    }];
}
- (void)observeValueForKeyPath:(nullable NSString *)keyPath
                      ofObject:(nullable id)object
                        change:(nullable NSDictionary *)change
                       context:(nullable void *)context {
    if ((object == request1.progress) && [keyPath isEqualToString:@"fractionCompleted"]) {
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            if (request1.progress.fractionCompleted == 1) {
                _alertDuration = 1.0f;
            } else {
                _alertDuration = 20.0f;
            }
        }];
    }
}

keep network activity indicator spinning

I found that sometimes you may be doing something in your iPhone application that requires the user to wait while it completes. Often this is a network-related activity, but in other cases it may not be. In my case I was parsing the response from a network connection and wanted the network activity indicator to keep spinning even though the content had already been downloaded.
Below is what I'm doing:
applicationDelegate.m:
- (void)setNetworkActivityIndicatorVisible:(BOOL)setVisible
{
    static NSInteger NumberOfCallsToSetVisible = 0;
    if (setVisible)
        NumberOfCallsToSetVisible++;
    else
        NumberOfCallsToSetVisible--;

    // Display the indicator as long as our static counter is > 0.
    [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:(NumberOfCallsToSetVisible > 0)];
}
otherView.m:
dispatch_queue_t dataLoadingQueue = dispatch_queue_create("synchronise", NULL);
dispatch_async(dataLoadingQueue, ^{
    [appDelegate setNetworkActivityIndicatorVisible:YES];
    [[DataLoader instance] LoadDataForGrewal];
    [[FieldConsultantViewModelManager instance] resetCache];
    [[DailyFieldConsultantViewModelManager instance] clearCache];
    [appDelegate loadMainViews];
    [[DataLoader instance] LoadDataForOtherEntities];
    [appDelegate setNetworkActivityIndicatorVisible:NO];
});
dispatch_release(dataLoadingQueue);
As you can see above, I'm trying to keep the network indicator spinning while updating the data in the database, but it does not work. Any clues/suggestions?
Thanks
EDIT:
dispatch_queue_t dataLoadingQueue = dispatch_queue_create("synchronise", NULL);
dispatch_async(dataLoadingQueue, ^{
    dispatch_async(dispatch_get_main_queue(), ^{ [self setNetworkActivityIndicatorVisible:YES]; });
    [[DataLoader instance] LoadDataForGrewal];
    [[FieldConsultantViewModelManager instance] resetCache];
    [[DailyFieldConsultantViewModelManager instance] clearCache];
    [appDelegate loadMainViews];
    [[DataLoader instance] LoadDataForOtherEntities];
    dispatch_async(dispatch_get_main_queue(), ^{ [self setNetworkActivityIndicatorVisible:NO]; });
});
dispatch_release(dataLoadingQueue);
It still does not work, and I'm not sure why, since I'm a newbie in iOS.
Try to dispatch your setNetworkActivityIndicatorVisible: call on the main queue, because UIApplication is part of UIKit and UIKit is not thread-safe.
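For example, something along these lines (the work in the middle is just a placeholder; the point is the hops to the main queue):
DispatchQueue.global().async {
    DispatchQueue.main.async {
        UIApplication.shared.isNetworkActivityIndicatorVisible = true
    }

    // ... long-running work off the main thread ...

    DispatchQueue.main.async {
        UIApplication.shared.isNetworkActivityIndicatorVisible = false
    }
}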
For anyone looking for a complete solution, this is what I use (in Swift). It delays for a short duration when stopping the network indicator, which prevents the indicator from flickering if you have a lot of serial network requests.
private var networkActivityCount = 0

func updateNetworkActivityCount(increaseNumber increase: Bool)
{
    let appendingCount = increase ? 1 : -1
    networkActivityCount += appendingCount

    let delayInSeconds = increase ? 0.0 : 0.7
    perform({ () -> () in
        let application = UIApplication.sharedApplication()
        let shouldBeVisible = self.networkActivityCount > 0
        if application.networkActivityIndicatorVisible != shouldBeVisible {
            application.networkActivityIndicatorVisible = shouldBeVisible
        }
    }, afterDelay: delayInSeconds)
}
Where perform:afterDelay is defined as
public func perform(block: () -> (), afterDelay: Double) {
    let dispatchTime: dispatch_time_t = dispatch_time(DISPATCH_TIME_NOW, Int64(afterDelay * Double(NSEC_PER_SEC)))
    dispatch_after(dispatchTime, dispatch_get_main_queue(), {
        block()
    })
}
Just call updateNetworkActivityCount(increaseNumber: true) before your request and the same with false in your request callback. I.e:
updateNetworkActivityCount(increaseNumber: true)
let task = urlSession.dataTaskWithRequest(request) { data, response, error in
self.updateNetworkActivityCount(increaseNumber: false)
...

Proper use of beginBackgroundTaskWithExpirationHandler

I'm a bit confused about how and when to use beginBackgroundTaskWithExpirationHandler.
Apple shows in their examples to use it in applicationDidEnterBackground delegate, to get more time to complete some important task, usually a network transaction.
When looking on my app, it seems like most of my network stuff is important, and when one is started I would like to complete it if the user pressed the home button.
So is it accepted/good practice to wrap every network transaction (and I'm not talking about downloading big chunks of data; it's mostly some short XML) with beginBackgroundTaskWithExpirationHandler, just to be on the safe side?
If you want your network transaction to continue in the background, then you'll need to wrap it in a background task. It's also very important that you call endBackgroundTask when you're finished - otherwise the app will be killed after its allotted time has expired.
Mine tend to look something like this:
- (void)doUpdate
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self beginBackgroundUpdateTask];

        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *responseData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];

        // Do something with the result

        [self endBackgroundUpdateTask];
    });
}

- (void)beginBackgroundUpdateTask
{
    self.backgroundUpdateTask = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
        [self endBackgroundUpdateTask];
    }];
}

- (void)endBackgroundUpdateTask
{
    [[UIApplication sharedApplication] endBackgroundTask:self.backgroundUpdateTask];
    self.backgroundUpdateTask = UIBackgroundTaskInvalid;
}
I have a UIBackgroundTaskIdentifier property for each background task
Equivalent code in Swift
func doUpdate() {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
        let taskID = self.beginBackgroundUpdateTask()

        var response: URLResponse?, error: NSError?, request: NSURLRequest?
        let data = NSURLConnection.sendSynchronousRequest(request, returningResponse: &response, error: &error)

        // Do something with the result

        self.endBackgroundUpdateTask(taskID)
    })
}

func beginBackgroundUpdateTask() -> UIBackgroundTaskIdentifier {
    return UIApplication.shared.beginBackgroundTask(expirationHandler: {})
}

func endBackgroundUpdateTask(taskID: UIBackgroundTaskIdentifier) {
    UIApplication.shared.endBackgroundTask(taskID)
}
The accepted answer is very helpful and should be fine in most cases; however, two things bothered me about it:
As a number of people have noted, storing the task identifier as a property means that it can be overwritten if the method is called multiple times, leading to a task that will never be gracefully ended until forced to end by the OS at the time of expiration.
This pattern requires a unique property for every call to beginBackgroundTaskWithExpirationHandler, which seems cumbersome if you have a larger app with lots of network methods.
To solve these issues, I wrote a singleton that takes care of all the plumbing and tracks active tasks in a dictionary. No properties needed to keep track of task identifiers. Seems to work well. Usage is simplified to:
//start the task
NSUInteger taskKey = [[BackgroundTaskManager sharedTasks] beginTask];
//do stuff
//end the task
[[BackgroundTaskManager sharedTasks] endTaskWithKey:taskKey];
Optionally, if you want to provide a completion block that does something beyond ending the task (which is built in) you can call:
NSUInteger taskKey = [[BackgroundTaskManager sharedTasks] beginTaskWithCompletionHandler:^{
//do stuff
}];
Relevant source code available below (singleton stuff excluded for brevity). Comments/feedback welcome.
- (id)init
{
    self = [super init];
    if (self) {
        [self setTaskKeyCounter:0];
        [self setDictTaskIdentifiers:[NSMutableDictionary dictionary]];
        [self setDictTaskCompletionBlocks:[NSMutableDictionary dictionary]];
    }
    return self;
}

- (NSUInteger)beginTask
{
    return [self beginTaskWithCompletionHandler:nil];
}

- (NSUInteger)beginTaskWithCompletionHandler:(CompletionBlock)_completion;
{
    //read the counter and increment it
    NSUInteger taskKey;
    @synchronized(self) {
        taskKey = self.taskKeyCounter;
        self.taskKeyCounter++;
    }

    //tell the OS to start a task that should continue in the background if needed
    NSUInteger taskId = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
        [self endTaskWithKey:taskKey];
    }];

    //add this task identifier to the active task dictionary
    [self.dictTaskIdentifiers setObject:[NSNumber numberWithUnsignedLong:taskId] forKey:[NSNumber numberWithUnsignedLong:taskKey]];

    //store the completion block (if any)
    if (_completion) [self.dictTaskCompletionBlocks setObject:_completion forKey:[NSNumber numberWithUnsignedLong:taskKey]];

    //return the dictionary key
    return taskKey;
}

- (void)endTaskWithKey:(NSUInteger)_key
{
    @synchronized(self.dictTaskCompletionBlocks) {
        //see if this task has a completion block
        CompletionBlock completion = [self.dictTaskCompletionBlocks objectForKey:[NSNumber numberWithUnsignedLong:_key]];
        if (completion) {
            //run the completion block and remove it from the completion block dictionary
            completion();
            [self.dictTaskCompletionBlocks removeObjectForKey:[NSNumber numberWithUnsignedLong:_key]];
        }
    }

    @synchronized(self.dictTaskIdentifiers) {
        //see if this task has been ended yet
        NSNumber *taskId = [self.dictTaskIdentifiers objectForKey:[NSNumber numberWithUnsignedLong:_key]];
        if (taskId) {
            //end the task and remove it from the active task dictionary
            [[UIApplication sharedApplication] endBackgroundTask:[taskId unsignedLongValue]];
            [self.dictTaskIdentifiers removeObjectForKey:[NSNumber numberWithUnsignedLong:_key]];
        }
    }
}
Here is a Swift class that encapsulates running a background task:
class BackgroundTask {
    private let application: UIApplication
    private var identifier = UIBackgroundTaskInvalid

    init(application: UIApplication) {
        self.application = application
    }

    class func run(application: UIApplication, handler: (BackgroundTask) -> ()) {
        // NOTE: The handler must call end() when it is done
        let backgroundTask = BackgroundTask(application: application)
        backgroundTask.begin()
        handler(backgroundTask)
    }

    func begin() {
        self.identifier = application.beginBackgroundTaskWithExpirationHandler {
            self.end()
        }
    }

    func end() {
        if (identifier != UIBackgroundTaskInvalid) {
            application.endBackgroundTask(identifier)
        }

        identifier = UIBackgroundTaskInvalid
    }
}
The simplest way to use it:
BackgroundTask.run(application) { backgroundTask in
// Do something
backgroundTask.end()
}
If you need to wait for a delegate callback before you end, then use something like this:
class MyClass {
    var backgroundTask: BackgroundTask?

    func doSomething() {
        backgroundTask = BackgroundTask(application: application)
        backgroundTask!.begin()

        // Do something that waits for callback
    }

    func callback() {
        backgroundTask?.end()
        backgroundTask = nil
    }
}
As noted here and in answers to other SO questions, you do NOT want to use beginBackgroundTask only when your app is about to go into the background; on the contrary, you should use a background task for any time-consuming operation whose completion you want to ensure even if the app does go into the background.
Therefore your code is likely to end up peppered with repetitions of the same boilerplate code for calling beginBackgroundTask and endBackgroundTask coherently. To prevent this repetition, it is certainly reasonable to want to package up the boilerplate into some single encapsulated entity.
I like some of the existing answers for doing that, but I think the best way is to use an Operation subclass:
You can enqueue the Operation onto any OperationQueue and manipulate that queue as you see fit. For example, you are free to cancel prematurely any existing operations on the queue.
If you have more than one thing to do, you can chain multiple background task Operations. Operations support dependencies.
The Operation Queue can (and should) be a background queue; thus, there is no need to worry about performing asynchronous code inside your task, because the Operation is the asynchronous code. (Indeed, it makes no sense to execute another level of asynchronous code inside an Operation, as the Operation would finish before that code could even start. If you needed to do that, you'd use another Operation.)
Here's a possible Operation subclass:
class BackgroundTaskOperation: Operation {

    var whatToDo : (() -> ())?
    var cleanup : (() -> ())?

    override func main() {
        guard !self.isCancelled else { return }
        guard let whatToDo = self.whatToDo else { return }

        var bti : UIBackgroundTaskIdentifier = .invalid
        bti = UIApplication.shared.beginBackgroundTask {
            self.cleanup?()
            self.cancel()
            UIApplication.shared.endBackgroundTask(bti) // cancellation
        }
        guard bti != .invalid else { return }

        whatToDo()
        guard !self.isCancelled else { return }
        UIApplication.shared.endBackgroundTask(bti) // completion
    }
}
It should be obvious how to use this, but in case it isn't, imagine we have a global OperationQueue:
let backgroundTaskQueue : OperationQueue = {
    let q = OperationQueue()
    q.maxConcurrentOperationCount = 1
    return q
}()
So for a typical time-consuming batch of code we would say:
let task = BackgroundTaskOperation()
task.whatToDo = {
    // do something here
}
backgroundTaskQueue.addOperation(task)
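And because these are ordinary Operations, you can chain them via the dependency support mentioned above; a minimal sketch (the work closures are placeholders):
let download = BackgroundTaskOperation()
download.whatToDo = {
    // fetch data (placeholder)
}

let persist = BackgroundTaskOperation()
persist.whatToDo = {
    // write the fetched data to disk (placeholder)
}

// persist won't start until download has finished.
persist.addDependency(download)
backgroundTaskQueue.addOperations([download, persist], waitUntilFinished: false)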
If your time-consuming batch of code can be divided into stages, you might want to bow out early if your task is cancelled. In that case, just return prematurely from the closure. Note that your reference to the task from within the closure needs to be weak or you'll get a retain cycle. Here's an artificial illustration:
let task = BackgroundTaskOperation()
task.whatToDo = { [weak task] in
    guard let task = task else { return }
    for i in 1...10000 {
        guard !task.isCancelled else { return }
        for j in 1...150000 {
            let k = i*j
        }
    }
}
backgroundTaskQueue.addOperation(task)
In case you have cleanup to do if the background task itself is cancelled prematurely, I've provided an optional cleanup handler property (not used in the preceding examples). Some other answers were criticised for not including that.
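Wiring it up might look something like this (the teardown shown is just a placeholder):
let task = BackgroundTaskOperation()
task.whatToDo = {
    // time-consuming work, e.g. flushing a cache to disk
}
task.cleanup = {
    // placeholder: undo or checkpoint partial work if the OS expires
    // the background task before whatToDo() finishes
}
backgroundTaskQueue.addOperation(task)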
I implemented Joel's solution. Here is the complete code:
.h file:
#import <Foundation/Foundation.h>

// The completion block type used by the task manager (this typedef is implied by the usage below).
typedef void (^CompletionBlock)(void);

@interface VMKBackgroundTaskManager : NSObject

+ (id)sharedTasks;

- (NSUInteger)beginTask;
- (NSUInteger)beginTaskWithCompletionHandler:(CompletionBlock)_completion;
- (void)endTaskWithKey:(NSUInteger)_key;

@end
.m file:
#import "VMKBackgroundTaskManager.h"
#interface VMKBackgroundTaskManager()
#property NSUInteger taskKeyCounter;
#property NSMutableDictionary *dictTaskIdentifiers;
#property NSMutableDictionary *dictTaskCompletionBlocks;
#end
#implementation VMKBackgroundTaskManager
+ (id)sharedTasks {
static VMKBackgroundTaskManager *sharedTasks = nil;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
sharedTasks = [[self alloc] init];
});
return sharedTasks;
}
- (id)init
{
self = [super init];
if (self) {
[self setTaskKeyCounter:0];
[self setDictTaskIdentifiers:[NSMutableDictionary dictionary]];
[self setDictTaskCompletionBlocks:[NSMutableDictionary dictionary]];
}
return self;
}
- (NSUInteger)beginTask
{
return [self beginTaskWithCompletionHandler:nil];
}
- (NSUInteger)beginTaskWithCompletionHandler:(CompletionBlock)_completion;
{
//read the counter and increment it
NSUInteger taskKey;
#synchronized(self) {
taskKey = self.taskKeyCounter;
self.taskKeyCounter++;
}
//tell the OS to start a task that should continue in the background if needed
NSUInteger taskId = [[UIApplication sharedApplication] beginBackgroundTaskWithExpirationHandler:^{
[self endTaskWithKey:taskKey];
}];
//add this task identifier to the active task dictionary
[self.dictTaskIdentifiers setObject:[NSNumber numberWithUnsignedLong:taskId] forKey:[NSNumber numberWithUnsignedLong:taskKey]];
//store the completion block (if any)
if (_completion) [self.dictTaskCompletionBlocks setObject:_completion forKey:[NSNumber numberWithUnsignedLong:taskKey]];
//return the dictionary key
return taskKey;
}
- (void)endTaskWithKey:(NSUInteger)_key
{
#synchronized(self.dictTaskCompletionBlocks) {
//see if this task has a completion block
CompletionBlock completion = [self.dictTaskCompletionBlocks objectForKey:[NSNumber numberWithUnsignedLong:_key]];
if (completion) {
//run the completion block and remove it from the completion block dictionary
completion();
[self.dictTaskCompletionBlocks removeObjectForKey:[NSNumber numberWithUnsignedLong:_key]];
}
}
#synchronized(self.dictTaskIdentifiers) {
//see if this task has been ended yet
NSNumber *taskId = [self.dictTaskIdentifiers objectForKey:[NSNumber numberWithUnsignedLong:_key]];
if (taskId) {
//end the task and remove it from the active task dictionary
[[UIApplication sharedApplication] endBackgroundTask:[taskId unsignedLongValue]];
[self.dictTaskIdentifiers removeObjectForKey:[NSNumber numberWithUnsignedLong:_key]];
NSLog(#"Task ended");
}
}
}
#end
