I have created a framework library that I am importing into another project. The library has a function that makes a network request via another networking library called AutoGraph and then performs some response manipulation before returning the result to the caller. The issue is that after the network request completes and the data is about to be returned to my framework, the closure that is executed to hand back that data crashes with EXC_BAD_ACCESS. However, if I include the framework's source classes directly in the project, the error does not occur. This leads me to believe the issue isn't in AutoGraph itself, but rather something to do with the way the framework is being packaged?
Here is the request that is made by the framework:
MyClient.shared.graphClient.send(GetNavigationHierarchy(paginationFilter: pageFilter,
                                                        nodePaginationFilter: nodePageFilter,
                                                        navigationRootFilter: navRootFilter)) { (response) in
    // Data manipulation
}
graphClient is an instance of the AutoGraph networking library, which is a GraphQL library built on top of Alamofire.
Here is a link to the exact line where AutoGraph throws the EXC_BAD_ACCESS:
https://github.com/remind101/AutoGraph/blob/cbd3772e44cc42e2c68c45d683dec58f1ba8f115/AutoGraph/AutoGraph.swift#L100
NOTE: Please do not do a knee-jerk close recommendation based on "more code required for a minimal reproducible example", especially if you don't understand the question. If you follow my logic, I think you will see that more code is not required.
I'm doing some platform-specific Flutter code where I have a platform method "stopRec" (stop recording), which awaits a byte array from the native host.
On the Dart side, it looks like this:
Uint8List? recordedBytes;
recordedBytes = await platform.invokeMethod('stopRec');
As you can see, it's expecting to get a byte array (Dart Uint8List) back.
I've written the Android code and it works -- it tests out fine; the recorded bytes come back through and play back correctly.
This is what the Android (Java) code looks like:
byte[] soundBytes = recorder.getRecordedBytes();
result.success(soundBytes);
I hope you understand why "more code" is not yet necessary in this question.
Continuing, though, on the iOS side, I'm getting the following error when calling the platform method:
[VERBOSE-2:ui_dart_state.cc(209)] Unhandled Exception: type
'List<Object?>' is not a subtype of type 'Uint8List?' in type cast
The Dart line where the error occurs is:
recordedBytes = await platform.invokeMethod('stopRec');
So what is happening is that it's not getting the Dart Uint8List it expects sent back from iOS.
The iOS code looks like this:
var dartCompatibleAudioBytes: [UInt8]?
var audioBytesAsDataFromRecorder: Data?

// ..... platform channel section
case "stopRec":
    self?.stopRec()
    result(self?.dartCompatibleAudioBytes!) // <---- wrong data type getting sent back here
    break
// ..... platform channel section

private func stopRec() {
    myRecorder.stopRecording()
    audioBytesAsDataFromRecorder = myRecorder.getRecordedAudioFileBytesAsData()
    dartCompatibleAudioBytes = [UInt8](audioBytesAsDataFromRecorder!)
}
I have tested the same iOS implementation code as a stand-alone iOS app that is not connected to Flutter, so I know that at the end of the stopRec() method, the dartCompatibleAudioBytes variable does contain the expected data, which plays back properly.
I hope you can see why "more code" is still not necessary.
The Dart code works
The Android code works
The Dart code works together with the Android code
The iOS code works
The iOS code does NOT work together with the Dart code
Using what I've shown, can anyone see immediately why the expected data type is not making its way back through the method channel?
According to the documentation, you should be using FlutterStandardTypedData(bytes: Data) in Swift in order for it to be deserialized as a Uint8List in Dart.
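For example, reusing the handler from the question, a minimal sketch of that change (the NO_AUDIO error path is illustrative and not part of the original code):
case "stopRec":
    self?.stopRec()
    // Wrap the bytes in FlutterStandardTypedData so the standard method codec
    // serializes them as a Uint8List on the Dart side instead of a plain list.
    if let data = self?.audioBytesAsDataFromRecorder {
        result(FlutterStandardTypedData(bytes: data))
    } else {
        // Illustrative error path; the code and message are made up for this sketch.
        result(FlutterError(code: "NO_AUDIO", message: "No recorded audio available", details: nil))
    }
With that in place, the existing recordedBytes = await platform.invokeMethod('stopRec'); cast on the Dart side should succeed, and the intermediate [UInt8] conversion is no longer needed.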
Firebase MLKit on iOS supported a Vision class, primarily used to obtain a Firebase vision object in the following manner:
let vision = Vision.vision()
A VisionTextRecognizer instance from the Firebase MLKit API (which also seemingly has no analogue in the Google-MLKit API) can be obtained from the vision object like so:
var recognizer: VisionTextRecognizer = vision.onDeviceTextRecognizer()
Given that the Firebase MLKit API is deprecated, I'm looking to move the project to the Google MLKit API and update the codebase accordingly. The migration guide provides a reference to the renamed and functionally equivalent facilities in GoogleMLKit. I cannot find an equivalent for the deprecated Vision and VisionTextRecognizer classes. Are these supported in GoogleMLKit?
There is no Vision class in the new Google ML Kit, as mentioned in the Migration Guide:
Domain entry point classes (Vision, NaturalLanguage) no longer exist. They have been replaced by task specific classes. Replace calls to their various factory methods for getting detectors with direct calls to each detector's factory method.
To get an instance of the on-device text recognizer, you can simply do the following:
var recognizer: TextRecognizer = TextRecognizer.textRecognizer()
Or
let recognizer = TextRecognizer.textRecognizer()
Or chain it directly into the inference call:
var recognizedText: Text
do {
    recognizedText = try TextRecognizer.textRecognizer().results(in: image)
} catch let error {
    // Handle the error
}
See a working example in the ML Kit quickstart vision sample app.
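Note that results(in:) is the synchronous variant and blocks the calling thread. If you would rather use the asynchronous API, a sketch along these lines should work (uiImage is a placeholder UIImage, and the module names are the ones assumed to come from the GoogleMLKit pods):
import MLKitVision
import MLKitTextRecognition

let visionImage = VisionImage(image: uiImage) // uiImage is a placeholder UIImage
TextRecognizer.textRecognizer().process(visionImage) { result, error in
    guard error == nil, let result = result else {
        // Handle the error
        return
    }
    print(result.text) // full recognized string; result.blocks gives per-block detail
}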
As an addendum to the accepted answer, you might encounter the following after an upgrade to MLKit.
If your project relies on a specific version of Protocol Buffers, MLKit might demand a newer version during the upgrade, or compilation errors may point to a missing file in the Protocol Buffers headers. It turned out that simply upgrading the relevant pods did not suffice in my case; I explicitly had to pull in Protobuf-C++ in the Podfile.
I created a new Xcode project (SwiftUI) and I followed the guide to install the Indy iOS SDK.
Link: https://github.com/hyperledger/indy-sdk/blob/master/wrappers/ios/README.md
The pod has been installed correctly and I can call the various functions offered by the SDK.
I would like to perform the following operations in sequence:
Create a wallet
Open the wallet
I tried to nest the two operations:
let error = indy_create_wallet(0, walletConfig, credentials, { (commandHandle, err) in
    print("Create wallet error: ", err)
    let error = indy_open_wallet(1, self.walletConfig, self.credentials, { (commandHandle2, err2, handle) in
        print("Open wallet error: ", err2)
    })
})
But in this case I get the error: "A C function pointer cannot be formed from a closure that captures context".
I tried to use a DispatchGroup, but I get the same error again, since I have to call the leave() method on the group object inside the callback.
Unfortunately I cannot use the "libindy-objc" wrapper because it is not compatible with the version of Swift I am using.
Does anyone have any ideas on how I can manage these callbacks to sequentially execute the wallet creation and opening operation? Thanks!
To solve the problem, I imported the wrapper source files (inside a new group).
Why not use the already prepared wrappers from GitHub?
https://github.com/hyperledger/indy-sdk/tree/master/wrappers/ios/libindy-pod/Indy/Wrapper
This is written in Objective-C, but it gets a generated Swift interface, so you can call it from Swift; then you can sequence the operations using DispatchSemaphore with .signal and .wait, as sketched below.
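For example, here is a minimal sketch of that approach applied to the raw C calls from the question (the indy_create_wallet/indy_open_wallet signatures are taken from the snippet above, error handling is omitted, and this must run off the main thread because wait() blocks until libindy invokes the callback on its own thread):
import Foundation // DispatchSemaphore
// indy_create_wallet / indy_open_wallet come from the Indy SDK C header, as in the question.

// A C function pointer cannot capture local context, but it can reference
// globals and top-level declarations, so keep the semaphore at module level.
let walletSemaphore = DispatchSemaphore(value: 0)

func createAndOpenWallet(walletConfig: String, credentials: String) {
    indy_create_wallet(0, walletConfig, credentials, { (commandHandle, err) in
        print("Create wallet error: ", err)
        walletSemaphore.signal() // no context captured: only a global is referenced
    })
    walletSemaphore.wait() // block until wallet creation has completed

    indy_open_wallet(1, walletConfig, credentials, { (commandHandle, err, handle) in
        print("Open wallet error: ", err, " handle: ", handle)
        walletSemaphore.signal()
    })
    walletSemaphore.wait() // block until the wallet has been opened
}
A real implementation would also check the err values before moving on to the next call.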
I am getting started with PromiseKit to prevent myself from writing functions with 10 levels of callbacks.
I installed the latest version (6.2.4) using CocoaPods, I am running the latest version of Xcode, and I imported PromiseKit in the file I am trying to get it working in, but I get really weird behavior from Xcode, resulting in several errors.
I intend to do something really basic to get started. The function below creates filters (ProductListComponents) for product categories in a product overview app I'm working on:
func createCategoryComponents(masterComponent: MasterComponent?) -> Promise<[ProductListComponents]> {
    return Promise { seal in
        // create a bunch of product category components
        seal.resolve([components])
    }
}
All fine here. I then try the following:
firstly {
    self.createCategoryComponents(masterComponent: masterComponent)
}.then { createdComponents in
    completion.resolve(nil, createdComponents)
}
This refuses to work. First, when I try to type the firstly code, Xcode suggests:
firstly(execute: { () -> Guarantee<T> in
//code
})
and:
firstly(execute: { () -> Thenable in
//code
})
I have not seen this syntax in ANY of the PromiseKit documentation. It also suggests odd syntax for, e.g., the .then calls. When accepting Xcode's suggestions, it obviously displays errors, as this is not the correct PromiseKit syntax. When ignoring Xcode's suggestion, I get this:
Obviously something is wrong here; my best guess is that something went wrong with the installation of PromiseKit. I have cleaned my project, re-installed the pod, and restarted Xcode, but it seems that nothing is working.
Question
Does anybody know what kind of issue I'm experiencing here and, even more importantly, how I might get it resolved?
Any help would be much appreciated.
According to the release notes:
then is fed the previous promise value and requires you return a promise.
done is fed the previous promise value and returns a Void promise (which is 80% of chain usage)
map is fed the previous promise value and requires you return a non-promise, ie. a value.
So then shouldn't work here, because you would need to return a promise from it. If you just change then to done, it will work.
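For illustration, here is a short chain using all three (fetchUser and fetchOrders are made-up helpers that return promises):
firstly {
    fetchUser()                     // hypothetical, returns Promise<User>
}.then { user in
    fetchOrders(for: user)          // then must return another promise
}.map { orders in
    orders.count                    // map returns a plain value
}.done { count in
    print("Order count: \(count)")  // done ends the value part of the chain
}.catch { error in
    print(error)                    // errors from any step land here
}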
Also, some suggestions:
firstly is really about visual decoration (I believe this was mentioned somewhere in the PromiseKit docs, but I can't find it right now), so if it confuses you, try removing it for a start;
The main feature of PromiseKit is the chain; you should definitely write your code according to this principle;
Also, don't forget about errors: use catch at the end of the chain for that.
Final example of your code:
firstly {
    self.createCategoryComponents(masterComponent: masterComponent)
}
.done { createdComponents in
    completion.resolve(nil, createdComponents)
}
.catch { error in
    // don't forget about errors
}
I'm using NSURLSession to connect to a database. I have this already implemented in C++ for Windows and am trying to get it working on iOS as well. I have a .h file derived from a base C++ class that is the header for my .mm file. If I'm correct, I have to implement all the functions in my .h file in C++. However, NSURLSession is an Objective-C API. How do I call an Objective-C method from my C++ function?
I have a C++ function called Connect() where I create an object m_Delegate with alloc and init.
this->m_Delegate = [[PrivateNSURLSessionDelegate alloc] initWithParent:this];
// where PrivateNSURLSessionDelegate is the name of my interface
That interface has -(bool)NSConnect (with its implementation in the @implementation block), which I'm trying to call from:
void Connect()
{
    [PrivateNSURLSessionDelegate NSConnect];
    // This, however, gives me the error: +[PrivateNSURLSessionDelegate NSConnect]: unrecognized selector sent to class
}
I also tried it using my m_Delegate object:
void Connect()
{
    [m_Delegate NSConnect];
    // This gives me an "unrecognized selector sent to instance" error
}
Is there a better way to do this? I basically want the Objective-C side to do all the NSURLSession work and just send the data back to the C++ side.
I'm completely new to Objective-C so any and all help would be greatly appreciated! Thanks!
-(bool)NSConnect
Here the - indicates an instance method; conversely, + would indicate a class method.
That being said, [PrivateNSURLSessionDelegate NSConnect]; calls a class method, since you call it on the class PrivateNSURLSessionDelegate itself.
However, no such class method exists, because NSConnect is defined as an instance method (by the way, the convention is that method names start with a lowercase letter).
[m_Delegate NSConnect];
This does, however, call the instance method. You should declare -(bool)NSConnect in the header file of PrivateNSURLSessionDelegate, not only above the @implementation in the implementation file; declaring it there makes it a private method and thus inaccessible from other files.
There is Objective-C, which is a superset of C, and Objective-C++, which is a superset of C++. Objective-C++ source code files have a .mm suffix, where Objective-C would have a .m suffix.
You cannot call Objective-C from C++. You can, however, call Objective-C from Objective-C++, and you can write your usual C++ classes in Objective-C++ as well.