Read logs using the new Swift os_log API - iOS

Deprecated in iOS 10.0: os_log(3) has replaced asl(3)
So iOS 10.0 apparently deprecates the ASL (Apple System Log) API and replaces it with the very limited os_log API.
I use something similar to the code snippet below to read the log entries written by the running app and show them in a UITextView in the app - and now it is full of deprecation warnings. Does anyone know of a way to read the printed log using the new os_log API? I only see an API for writing (https://developer.apple.com/reference/os/1891852-logging).
import asl

// Build a query that matches every log message and iterate over the results.
let query = asl_new(UInt32(ASL_TYPE_QUERY))
let response = asl_search(nil, query)
while let message = asl_next(response) {
    // Walk the key/value pairs of each message.
    var i: UInt32 = 0
    while let key = asl_key(message, i) {
        if let value = asl_get(message, key) {
            print("\(String(cString: key)): \(String(cString: value))")
        }
        i += 1
    }
}
Edit after @Will Loew-Blosser's answer
https://developer.apple.com/videos/play/wwdc2016/721/ explained nicely what is going to happen with logging in the future. The biggest giveaway was that logs are stored in a compressed format and only expanded by the new Console application, which pretty much makes my mission hopeless.
The presenter (Steve Szymanski) mentions "All ASL logging APIs are superseded by new APIs" and "New APIs for searching new log data will not be made public this release" - i.e. asl_search. And that was exactly what I was looking for!
He also mentions that a Swift API is coming.

Looks like you need to use the enhanced Console instead of your own log viewer. The logs are compressed and not expanded until viewed - this makes logging much less intrusive at debug levels. There is no text form of the logs however.
See the 2016 WWDC video session 721 "Unified Logging and Activity Tracing" https://developer.apple.com/videos/play/wwdc2016/721/
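For writing, the unified logging API from the question's link boils down to something like this - a minimal sketch in which the subsystem and category strings are made-up placeholders:
import os.log

// Minimal sketch: write an entry with os_log. It shows up in the enhanced
// Console.app, not in any text log the app itself can read back.
let uiLog = OSLog(subsystem: "com.example.myapp", category: "ui") // placeholder names
os_log("Fetched %d log entries", log: uiLog, type: .info, 42)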
Also the Apple sample app that demos the new approach has an undocumented build setting that I had to add to my iOS app. See the setting in the 'Paper Company (Swift)' iOS app.
The setting is found in the Targets section of the top-level Xcode window. These are the steps that I followed:
On the Build Settings page, add a new User-Defined setting named ASSETCATALOG_COMPRESSION.
Under it add two lines:
Debug = lossless
Release = respect-asset-catalog
After adding this build setting, logging worked in my app as shown in the video session demo.
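If you manage build settings in an .xcconfig file instead of the Xcode UI, the equivalent entries would look roughly like this (assuming the standard Debug and Release configuration names):
// Equivalent .xcconfig entries for the User-Defined setting above
ASSETCATALOG_COMPRESSION[config=Debug] = lossless
ASSETCATALOG_COMPRESSION[config=Release] = respect-asset-catalog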

Related

Xamarin Forms: how to log in with an Apple account

I'm trying to publish my first Xamarin.Forms app on iOS. I ran into the issue of logging in with an Apple account.
I have 4 questions, please.
1- If I implement Sign in with Apple only for iOS 13+ will it be accepted? :(
2- I'm trying to use Xamarin.Essentials to log in on iOS 13+ as shown in this article:
Xamarin Essentials
// Use the native Apple Sign In APIs
var r = await AppleSignInAuthenticator.AuthenticateAsync();
But I only get back the idToken. AccessToken, name and email return null. Am I missing something?
3- And finally, I tried to use the Plugin.FirebaseAuth plugin, version 4.0.0-pre01:
Link plugin
// For iOS
var credential = CrossFirebaseAuth.Current.OAuthProvider.GetCredential("apple.com", idToken, rawNonce: rawNonce);
var result = await CrossFirebaseAuth.Current.Instance.SignInWithCredentialAsync(credential);
// For Android
var provider = new OAuthProvider("apple.com");
var result = await CrossFirebaseAuth.Current.Instance.SignInWithProviderAsync(provider);
It provides an example using Prism to deal with this, but when I install the plugin in this version the application shows no more than a splash screen and then closes, without showing an error in the output. What am I doing wrong? :(
The first link seems promising for iOS below 13 and for Android using ASP.NET. However, in the application I only use Firebase Cloud Firestore and Firebase Hosting for the administrative panel. Is it possible for me to implement Sign in with Apple without the services of a different backend?
I am very grateful for any light on the path I must follow.
1- If I implement Sign in with Apple only for iOS 13+ will it be accepted?
It depends; if they don't find any other issues or violations, it will be accepted.
2- I'm trying to use Xamarin.Essentials to log in on iOS 13+ as shown in this article: But I only get back the idToken.
Apple will only provide the requested details (name and email) on the first authentication. After that first authentication you will only get the user ID, so be sure to store the details that first time in case you need them.
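Xamarin.Essentials wraps the native AuthenticationServices flow, so the same rule applies there. As a rough illustration, here is a minimal Swift sketch of the underlying delegate callback persisting those one-time fields; the UserDefaults keys are made up for the example:
import AuthenticationServices

// Lives in your ASAuthorizationControllerDelegate. fullName and email are only
// populated the very first time this Apple ID authorizes the app.
func authorizationController(controller: ASAuthorizationController,
                             didCompleteWithAuthorization authorization: ASAuthorization) {
    guard let credential = authorization.credential as? ASAuthorizationAppleIDCredential else { return }
    let defaults = UserDefaults.standard
    if let name = credential.fullName {
        defaults.set(PersonNameComponentsFormatter().string(from: name), forKey: "appleSignInName") // example key
    }
    if let email = credential.email {
        defaults.set(email, forKey: "appleSignInEmail") // example key
    }
    defaults.set(credential.user, forKey: "appleSignInUserId") // stable user ID, always present
}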
This feature needs to be tested on a physical device running iOS 13. The simulator is not reliable; it doesn't always work properly.
You should follow the design guidelines when implementing Sign in with Apple. You can find them here: https://developer.apple.com/design/human-interface-guidelines/sign-in-with-apple/overview/

Swift / iOS launch Apple Pay to a particular payment pass

Searching the Apple Pay / PassKit / Wallet documentation, I've found very few code examples and pretty poor documentation. We're attempting to present a payment pass we've provisioned rather than just launch the Wallet with openPaymentSetup().
According to the PKPassLibrary docs, this can be achieved with PKPassLibrary.present(). We're invoking this function and it launches Apple Pay directly into the add-a-card wizard, which seems worse than the UX from openPaymentSetup().
The code we're using is:
let library: PKPassLibrary = PKPassLibrary()
let passes: [PKPass] = library.remotePaymentPasses()
if !passes.isEmpty, #available(iOS 10.0, *) {
    library.present(passes[passes.count - 1].paymentPass!)
} else {
    library.openPaymentSetup()
}
We get the pass library and our passes, then conditionally attempt to present the last pass.
Does anyone know how to show a pass rather than launching a tutorial or add a card wizard?
You can still use the presentPaymentPass API to present a payment pass, but your iOS version needs to be >= 10.3.3, even though the Apple documentation says iOS 10.0 or later. Apple's documentation is wrong here.
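As a sketch reusing the library and passes variables from the question's snippet, the availability check would then look like this:
// Same flow as in the question, but gated on iOS 10.3.3 as suggested above.
if #available(iOS 10.3.3, *), let paymentPass = passes.last?.paymentPass {
    library.present(paymentPass)   // show the provisioned pass directly
} else {
    library.openPaymentSetup()     // fall back to the generic Wallet setup flow
}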

How to access Siri voice selected by user in Settings in iOS 11

I am writing an app that includes text-to-speech using AVSpeechSynthesizer. The code for generating the utterance and using the speech synthesizer has been working fine.
let utterance = AVSpeechUtterance(string: text)
utterance.voice = currentVoice
speechSynthesizer.speak(utterance)
Now with iOS 11, I want to match the voice to the one selected by the user in the phone's Settings app, but I do not see any way to get that setting.
I have tried getting the list of installed voices and looking for one that has a quality of .enhanced, but sometimes there is no enhanced voice installed, and even when there is, it may or may not be the voice selected by the user in the Settings app.
static var enhanced: AVSpeechSynthesisVoice? {
    for voice in AVSpeechSynthesisVoice.speechVoices() {
        if voice.quality == .enhanced {
            return voice
        }
    }
    return nil
}
The questions are twofold:
How can I determine which voice has been selected by the user in the Settings app?
Why on some iOS 11 phones that are using the new Siri voice am I not finding an "enhanced" voice installed?
I suppose that if there were a method available for selecting the same voice as in the Settings app, it would be shown in the documentation for the AVSpeechSynthesisVoice class under the Finding Voices topic. Jumping to the definition of AVSpeechSynthesisVoice in code, I couldn't find any other methods for retrieving voices.
Here's my workaround for getting an enhanced voice in the app I am working on:
Enhanced versions of voices are probably not present on new iOS devices by default in order to save storage. Iterating through the available voices on my brand new iPhone, I only found Default-quality voices such as: [AVSpeechSynthesisVoice 0x1c4e11cf0] Language: en-US, Name: Samantha, Quality: Default [com.apple.ttsbundle.Samantha-compact]
I found this article on how to enable additional VoiceOver voices and downloaded the one named "Samantha (Enhanced)". Checking the list of available voices again, I noticed the following addition:
[AVSpeechSynthesisVoice 0x1c4c03060] Language: en-US, Name: Samantha (Enhanced), Quality: Enhanced [com.apple.ttsbundle.Samantha-premium]
I was then able to select an enhanced voice in Xcode. Given that the AVSpeechSynthesisVoice.currentLanguageCode() method exposes the currently selected language, I ran the following code to select the first enhanced voice I could find; if no enhanced version was available, I just picked an available default. (The code below belongs to a custom class I am creating to handle all speech in my app; it updates the class's voice property.)
var voice: AVSpeechSynthesisVoice!

for availableVoice in AVSpeechSynthesisVoice.speechVoices() {
    // Look for the enhanced version of the currently selected language's voice
    // amongst the available voices; usually there is only one.
    if availableVoice.language == AVSpeechSynthesisVoice.currentLanguageCode() &&
        availableVoice.quality == AVSpeechSynthesisVoiceQuality.enhanced {
        self.voice = availableVoice
        print("\(availableVoice.name) selected as voice for uttering speeches. Quality: \(availableVoice.quality.rawValue)")
    }
}
if let selectedVoice = self.voice {
    // Successfully unwrapped: the loop above found an enhanced voice.
    print("The following voice identifier has been loaded: ", selectedVoice.identifier)
} else {
    // No enhanced voice found; load any voice that matches the device's
    // current language selection.
    self.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
}
I am also hoping Apple will expose a method to directly load the selected language, but I hope this workaround can serve you in the meantime. I guess Siri's enhanced voice is downloaded on the go, so maybe this is the reason it takes so long to answer my voice commands :)
Best regards.
It looks like the new Siri voice in iOS 11 isn't part of the AVSpeechSynthesis API, and isn't available to developers.
In macOS 10.13 High Sierra (which also gets the new voice), there seems to be a new SiriTTS framework that's probably related to this functionality, but it's in PrivateFrameworks so it doesn't have a developer API.
I'll try to provide a more detailed answer. AVSpeechSynthesizer cannot use the Siri voice. Apple has locked this voice down for privacy reasons, as a malicious app could impersonate Siri and obtain private information that way.
Apple hasn't changed this for years, but there is an ongoing initiative regarding it. Privacy-sensitive features in iOS are already accessed through permissions, and there is no reason why the Siri voice couldn't be accessed with user permission as well. You may vote for this to happen using this petition, and with some hope Apple may implement it in the future: https://www.change.org/p/apple-apple-please-allow-3rd-party-apps-to-use-siri-voices-for-improved-accessibility

iOS Application crashing on use of CMPedometer functions

I have a Xamarin.iOS application where I am using this guide to make use of the CMPedometer floors-ascended property. Here is the relevant code in my single-view app:
CMPedometer pedometer;
...
public override async void ViewDidLoad()
{
    base.ViewDidLoad();
    if (CMPedometer.IsFloorCountingAvailable)
    {
        pedometer = new CMPedometer();
        // app crashes here:
        pedometer.StartPedometerUpdates(new NSDate(), UpdatePedometerData);

        var data = await pedometer.QueryPedometerDataAsync(
            (NSDate)DateTime.SpecifyKind(DateTime.Now.AddHours(-24), DateTimeKind.Utc),
            (NSDate)DateTime.Now);
        UpdatePedometerData(data, null);
    }
}
My very basic app crashes when I try to get updates from my CMPedometer with little error output. This is what I get:
=================================================================
Got a SIGABRT while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
This may be an issue with my app permissions? If that's the case, I am not sure how to grant/ask for permission to use the CMPedometer. Thanks for any help.
Got this link. You have to add a privacy setting for motion in your plist:
https://blog.xamarin.com/new-ios-10-privacy-permission-settings/
Thanks to @panthor314 for getting me pointed in the right direction. Unfortunately the blog link above is dead, but this seems to be the new location for this information:
https://learn.microsoft.com/en-us/xamarin/ios/app-fundamentals/security-privacy?tabs=windows
This link explains:
Apps that fail to provide the required keys will be silently terminated by the system when they attempt to access one of the restricted features or user information, without error! If an app starts unexpectedly failing on iOS 10, ensure that all of the required Info.plist keys have been specified.
The relevant privacy key is NSMotionUsageDescription:
Motion Usage Description (NSMotionUsageDescription) - Allows the developer to describe why the app wants to access the device's accelerometer.
To add the property:
Right-click on Info.plist in your Solution Explorer (double click seems to open a different window)
Select Open With...
Select Generic PList Editor and click OK
At the end of the plist, click the + icon to add a new entry
Change Custom Property to Privacy - Motion Usage Description
Enter text to display to the user about accessing steps such as "This application would like to access your steps data"
Save the file and run the application again
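If you edit the raw Info.plist source instead of using the plist editor, the resulting entry should look like this (the description string is just the example text above):
<key>NSMotionUsageDescription</key>
<string>This application would like to access your steps data</string>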

Is there a way to recognize that the app was installed through a Firebase Dynamic Link in didFinishLaunchingWithOptions?

I am implementing Firebase Dynamic Links in my iOS app and I can already parse the link, redirect to the App Store, etc. Now I want to distinguish the first run of the app, when the user installs it from the dynamic link - I want to skip the intro and show the content that is expected to be shown.
Is there some parameter that I could catch in application(_:didFinishLaunchingWithOptions:) so I could tell that the app was launched through the dynamic link?
The method application(_:continue:restorationHandler:) is called later, so the intro has already been launched.
This case is difficult to test, because you have to have your app published on the App Store.
You actually don't need to have the app published in the App Store for this to work — clicking a link, closing the App Store, and then installing an app build through Xcode (or any other beta distribution platform like TestFlight or Fabric) has exactly the same effect.
According to the Firebase docs, the method that is called for the first install is openURL (no, this makes no sense to me either). The continueUserActivity method is for Universal Links, and is only used if the app is already installed when a link is opened.
I am not aware of any way to detect when the app is opening for the first time after install from a 'deferred' link, but you could simply route directly to the shared content (skipping the intro) whenever a deep link is present. If a deep link is NOT present, show the regular intro.
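As a rough sketch of that approach (assuming the older FIRDynamicLinks naming used elsewhere in this thread, and a hypothetical routeToSharedContent helper), the custom-scheme URL that arrives on a first install can be handled like this:
import Firebase

// First-install path: the deferred link arrives through openURL as a custom-scheme URL.
func application(_ app: UIApplication, open url: URL,
                 options: [UIApplicationOpenURLOptionsKey: Any] = [:]) -> Bool {
    if let dynamicLink = FIRDynamicLinks.dynamicLinks()?.dynamicLink(fromCustomSchemeURL: url),
        let deepLink = dynamicLink.url {
        routeToSharedContent(deepLink) // hypothetical helper that skips the intro
        return true
    }
    return false // no deep link present: show the regular intro
}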
Alternative Option
You could check out Branch.io (full disclosure: I'm on the Branch team). Amongst other things, Branch is a great, free drop-in replacement for Firebase Dynamic Links with a ton of additional functionality. Here is an example of all the parameters Branch returns immediately in didFinishLaunchingWithOptions:
{
"branch_view_enabled" = 0;
"browser_fingerprint_id" = "<null>";
data = "{
\"+is_first_session\":false,
\"+clicked_branch_link\":true,
\"+match_guaranteed\":true,
\"$canonical_identifier\":\"room/OrangeOak\",
\"$exp_date\":0,
\"$identity_id\":\"308073965526600507\",
\"$og_title\":\"Orange Oak\",
\"$one_time_use\":false,
\"$publicly_indexable\":1,
\"room_name\":\"Orange Oak\", // this is a custom param, of which you may have an unlimited number
\"~channel\":\"pasteboard\",
\"~creation_source\":3,
\"~feature\":\"sharing\",
\"~id\":\"319180030632948530\",
\"+click_timestamp\":1477336707,
\"~referring_link\":\"https://branchmaps.app.link/qTLPNAJ0Jx\"
}";
"device_fingerprint_id" = 308073965409112574;
"identity_id" = 308073965526600507;
link = "https://branchmaps.app.link/?%24identity_id=308073965526600507";
"session_id" = 319180164046538734;
}
You can read more about these parameters on the Branch documentation here.
Hmm... as far as I'm aware, there's not really anything you can catch in the application(_:didFinishLaunchingWithOptions:) phase that would let you know the app was being opened by a dynamic link. You're going to have to wait until the continueUserActivity call, as you mentioned.
That said, FIRDynamicLinks.dynamicLinks()?.handleUniversalLink returns a boolean value nearly instantly, so you should be able to take advantage of that to short-circuit your intro animation without it being a bad user experience. The callback itself might not happen until several milliseconds later, depending on whether it's a shortened dynamic link (which requires a network call) or an expanded one (which doesn't).
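A minimal sketch of that short-circuit, using the boolean return value of handleUniversalLink (skipIntro and routeToSharedContent are hypothetical helpers, and the Swift 3-era delegate signature is assumed):
import Firebase

func application(_ application: UIApplication, continue userActivity: NSUserActivity,
                 restorationHandler: @escaping ([Any]?) -> Void) -> Bool {
    guard let url = userActivity.webpageURL else { return false }
    let handled = FIRDynamicLinks.dynamicLinks()?.handleUniversalLink(url, completion: { dynamicLink, error in
        // The callback may arrive a few hundred milliseconds later for shortened links.
        if let deepLink = dynamicLink?.url {
            self.routeToSharedContent(deepLink) // hypothetical routing helper
        }
    }) ?? false
    if handled {
        skipIntro() // hypothetical: suppress the intro animation right away
    }
    return handled
}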
