Firebase Vision analogue in Google MLKit - iOS

Firebase MLKit on iOS supported a Vision class, primarily used to obtain a Firebase vision object in the following manner:
let vision = Vision.vision()
A VisionTextRecognizer instance from the Firebase MLKit API (which also seemingly has no analogue in the Google-MLKit API) can be obtained from the vision object like so:
let recognizer: VisionTextRecognizer = vision.onDeviceTextRecognizer()
Given that the Firebase MLKit API is deprecated, I'm looking to move the project to the Google-MLKit API and update the codebase accordingly. The migration guide provides a reference to the renamed and functionally equivalent facilities in GoogleMLKit, but I cannot find an equivalent for the deprecated Vision and VisionTextRecognizer classes - are these supported in GoogleMLKit?

There is no Vision class in the new Google ML Kit, as mentioned in the Migration Guide:
Domain entry point classes (Vision, NaturalLanguage) no longer exist. They have been replaced by task specific classes. Replace calls to their various factory methods for getting detectors with direct calls to each detector's factory method.
To get an instance of the on-device text recognizer, you can simply do the following:
var recognizer: TextRecognizer = TextRecognizer.textRecognizer()
Or
let recognizer = TextRecognizer.textRecognizer()
Or chain it directly into the inference call:
var recognizedText: Text
do {
    recognizedText = try TextRecognizer.textRecognizer().results(in: image)
} catch let error {
    // Handle the error
}
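If you prefer the asynchronous API, you can also call process(_:completion:) on the recognizer. A minimal sketch, assuming the GoogleMLKit/TextRecognition pod (module and method names may differ slightly between ML Kit releases):
import MLKitTextRecognition
import MLKitVision
import UIKit

// Build a VisionImage from a UIImage and run the on-device recognizer
// asynchronously; the completion handler delivers the recognized Text.
func recognizeText(in uiImage: UIImage) {
    let visionImage = VisionImage(image: uiImage)
    visionImage.orientation = uiImage.imageOrientation

    TextRecognizer.textRecognizer().process(visionImage) { result, error in
        guard error == nil, let result = result else {
            // Handle the error
            return
        }
        print(result.text)
    }
}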
See a working example in the ML Kit quickstart vision sample app.

As an addendum to the accepted answer, you might encounter the following after upgrading to ML Kit.
If your project relies on a specific version of Protocol Buffers, ML Kit might demand a newer version, or compilation errors may point to a missing file in the Protocol Buffers headers. In my case, simply upgrading the relevant pods did not suffice; I explicitly had to add the Protobuf-C++ pod to the Podfile.

Related

'List<Object?>' is not a subtype of type 'Uint8List?' on iOS Flutter Platform Channel

NOTE: Please do not make a knee-jerk close recommendation based on "more code required for a minimal reproducible example", especially if you don't understand the question. If you follow my logic, I think you will see that more code is not required.
I'm doing some platform specific Flutter code where I have a platform method "stopRec" (stop recording) which awaits a byte array from the native host.
On the Dart side, it looks like this:
Uint8List? recordedBytes;
recordedBytes = await platform.invokeMethod('stopRec');
As you can see it's expecting to get a byte array (Dart Uint8List) back.
I've written the Android code and it works -- it tests out fine: the recorded bytes come back through and play back correctly.
This is what the Android (Java) code looks like:
byte[] soundBytes = recorder.getRecordedBytes();
result.success(soundBytes);
I hope you understand why "more code" is not yet necessary in this question.
Continuing, though, on the iOS side, I'm getting the following error when calling the platform method:
[VERBOSE-2:ui_dart_state.cc(209)] Unhandled Exception: type
'List<Object?>' is not a subtype of type 'Uint8List?' in type cast
The Dart line where the error occurs is:
recordedBytes = await platform.invokeMethod('stopRec');
So what is happening is that it's not getting the Dart Uint8List it expects sent back from iOS.
The iOS code looks like this:
var dartCompatibleAudioBytes: [UInt8]?
var audioBytesAsDataFromRecorder: Data?

// ..... platform channel section
case "stopRec":
    self?.stopRec()
    result(self?.dartCompatibleAudioBytes!) // <---- wrong data type getting sent back here
    break
// ..... platform channel section

private func stopRec() {
    myRecorder.stopRecording()
    audioBytesAsDataFromRecorder = myRecorder.getRecordedAudioFileBytesAsData()
    dartCompatibleAudioBytes = [UInt8](audioBytesAsDataFromRecorder!)
}
I have tested the same iOS implementation code as a stand-alone iOS app that is not connected to Flutter, so I know that at the end of the stopRec() method, the dartCompatibleAudioBytes variable does contain the expected data, which plays back properly.
I hope you can see why "more code" is still not necessary.
The Dart code works
The Android code works
The Dart code works together with the Android code
The iOS code works
The iOS code does NOT work together with the Dart code
Using what I've shown, can anyone see immediately why the expected data type is not making its way back through the method channel?
According to the documentation, you should be using FlutterStandardTypedData(bytes: Data) in Swift in order for it to be deserialized as Uint8List in Dart.
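For example, the handler from the question could be adapted roughly like this (keeping the asker's variable names; the "NO_AUDIO" error branch is purely illustrative):
case "stopRec":
    self?.stopRec()
    // Wrap the recorded Data in FlutterStandardTypedData so the standard
    // method codec delivers it to Dart as Uint8List instead of List<Object?>.
    if let data = self?.audioBytesAsDataFromRecorder {
        result(FlutterStandardTypedData(bytes: data))
    } else {
        result(FlutterError(code: "NO_AUDIO",
                            message: "No recorded audio available",
                            details: nil))
    }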

Manage callbacks for Hyperledger Indy iOS SDK

I created a new Xcode project (SwiftUI) and followed the guide to install the Indy iOS SDK.
Link: https://github.com/hyperledger/indy-sdk/blob/master/wrappers/ios/README.md
The pod has been installed correctly and I can call the various functions offered by the SDK.
I would like to perform the following operations in sequence:
Create a wallet
Open the wallet
I tried to nest the two operations:
let error = indy_create_wallet(0, walletConfig, credentials, { (commandHandle, err) in
    print("Create wallet error: ", err)
    let error = indy_open_wallet(1, self.walletConfig, self.credentials, { (commandHandle2, err2, handle) in
        print("Open wallet error: ", err2)
    })
})
But in this case I get the error: "A C function pointer cannot be formed from a closure that captures context".
I tried to use DispatchGroup, but I get the same error again, since I have to call the leave() method on the group inside the callback.
Unfortunately I cannot use the "libindy-objc" wrapper because it is not compatible with the version of Swift I am using.
Does anyone have any ideas on how I can manage these callbacks to sequentially execute the wallet creation and opening operation? Thanks!
To solve the problem I imported the wrapper source files (inside a new group in my project).
Why not use the already prepared wrappers on GitHub?
https://github.com/hyperledger/indy-sdk/tree/master/wrappers/ios/libindy-pod/Indy/Wrapper
The wrapper is written in Objective-C, but Swift can generate a bridging interface for it; you can then sequence the operations using DispatchSemaphore, with .wait() and .signal().
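A minimal sketch of that DispatchSemaphore pattern (run it off the main thread; createWallet and openWallet below are stand-ins for whichever wrapper methods you end up calling, not the real libindy names):
import Foundation

// Stand-ins for the wrapper's completion-handler APIs; the real libindy-objc
// method names and signatures will differ.
func createWallet(_ config: String, _ credentials: String,
                  completion: @escaping (Error?) -> Void) {
    DispatchQueue.global().async { completion(nil) }
}

func openWallet(_ config: String, _ credentials: String,
                completion: @escaping (Error?, Int32) -> Void) {
    DispatchQueue.global().async { completion(nil, 1) }
}

func createThenOpenWallet(config: String, credentials: String) {
    let semaphore = DispatchSemaphore(value: 0)

    createWallet(config, credentials) { error in
        print("Create wallet error:", String(describing: error))
        semaphore.signal()          // release the wait below
    }
    semaphore.wait()                // block until creation has finished

    openWallet(config, credentials) { error, handle in
        print("Open wallet error:", String(describing: error), "handle:", handle)
        semaphore.signal()
    }
    semaphore.wait()
}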

Not able to use Kotlin Extension Function written in common in iOS as Swift Extension

I have a Kotlin Multiplatform project set up with Android and CocoaPods for iOS.
I have a file Extensions.kt in commonMain/src with the following function:
fun String.isValidEmail(): Boolean {
    return validate(ValidatorRegex.EMAIL)
}
I am able to access this function in Android as a String extension:
"abcd#gmail.com".isValidEmail()
But in iOS using Swift, I need to call it as a static method of another class:
ExtensionsKt.isValidEmail("abcd@gmail.com")
I expected that commonMain/src method to be exposed as a Swift extension on String instead of as a static method on a generated class.
Am I missing any configuration?
You're doing everything right here. Unfortunately, this option is not available for now. Extension conversion is performed correctly for some classes, but Swift's String is not one of them. This is a known inconvenience, and the Kotlin/Native team is working hard to make it better.
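Until that improves, one workaround is to restore the nicer call site by wrapping the generated static method in a hand-written Swift extension (here "shared" is a placeholder for your Kotlin framework's module name):
import shared // placeholder: replace with your Kotlin/Native framework module name

// Hand-written bridge so Swift callers get the same extension syntax as Kotlin.
extension String {
    func isValidEmail() -> Bool {
        return ExtensionsKt.isValidEmail(self)
    }
}

// Usage: "abcd@gmail.com".isValidEmail()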

Parse SDK and Swift: Incorrect argument label in call PFObject 'withoutDataWithObjectId'

I subclass PFObject exactly as described here.
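For reference, the subclass follows that guide and looks roughly like this (custom properties omitted):
import Parse

// PFObject subclass per the Parse subclassing guide (properties trimmed).
class Armor: PFObject, PFSubclassing {
    class func parseClassName() -> String {
        return "Armor"
    }
}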
Then I create a new instance of the subclassed object without data, but since Swift 1.2 I get an error (it did work perfectly before):
var test = Armor(withoutDataWithObjectId: "1234567890")
Xcode complains:
"Incorrect argument label in call (have 'withoutDataWithObjectId:', expected: 'className:')"
Why className? It should get the class name from the class function parseClassName.
And I can under no circumstances create a new object with an objectId but no data, which I MUST have in order to fetch it from the local datastore.
This is super annoying as my app doesn't compile any longer.
Update to the newest Parse SDK, available here.
The issue is caused by necessary adaptations in the Parse SDK after the Swift language update. It also occurs with the most recent update to Swift 2.2. The newest (as of today) Parse SDK release, 1.13.0, already fixes this.
UPDATE
Parse iOS SDK 1.13.0 has a typo: the function PFUser(withoutDataWithObjectId:) is called PFUser(outDataWithObjectId:), so upgrading the Parse SDK alone does not solve this. Until this is fixed, a temporary workaround is to extend PFObject with a convenience initializer. To do this, add a new Swift file to your project and insert this:
import Parse
extension PFObject {
    convenience init(withoutDataWithObjectId objectId: String?) {
        self.init(outDataWithObjectId: objectId)
    }
}
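With that extension in place, the call site from the question should compile again:
let test = Armor(withoutDataWithObjectId: "1234567890")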
It may be a little late to answer this question, but I use Swift 1.2 with Parse SDK 1.7.5 and it works totally fine.
However, make sure you have defined the Objective-C bridging header in Build Settings, and try to run it even if some errors are reported at first.

Where is the class CryptoCommon in Xamarin

I'm trying to use the CryptoCommon class but I am unable to find it in the MonoTouch assembly.
I found the assembly Mono.Security.Cryptography; does it have the same performance as the CryptoCommon class?
Thanks!!
CommonCrypto is used internally inside Xamarin.iOS; it is not something extra, i.e. there's no need to opt in or opt out.
What this means is that its use is totally transparent to your code. If an algorithm is available in CommonCrypto, then using the classic .NET type will use it.
E.g.
using System.Security.Cryptography;

// this will use CommonCrypto. AES is supported by CommonCrypto
// if your device supports it AES can be hardware accelerated
var aes = Aes.Create ();
// this will also use CommonCrypto. SHA-1 is supported by CommonCrypto
// if your device supports it SHA-1 can be hardware accelerated
var sha = new SHA1Managed ();
// this will not use CommonCrypto since the algorithm is not supported by Apple
var r = RIPEMD160.Create ();
More information about CommonCrypto can be found on my blog.
