iOS prediction app throws an error when predicting with a .mlmodel on iOS 14

I am trying to use my .mlmodel for prediction on iOS 14, following the method described in the Apple documentation linked below:
https://developer.apple.com/documentation/coreml/integrating_a_core_ml_model_into_your_app
I get the error below and cannot work out what is wrong. My two inputs and the output are all set as expected, and the model appears to have been converted correctly from TensorFlow 2 to .mlmodel. Any suggestions on how to analyze and fix this issue?
/Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-124.0.29/MPSNeuralNetwork/Filters/MPSCNNKernel.mm:752: failed assertion `[MPSCNNConvolution encode...] Error: destination may not be nil.

This is an error from Metal, which is used to run the model on the GPU. You can try running the model on the CPU instead to see whether that works without errors. (If running on the CPU also fails, something in the model itself is wrong.)
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly  // force Core ML off the GPU/Metal path

// Note: the generated model initializer throws, so use `try`.
let model = try YourModel(configuration: config)
...

Related

FFT in C++ AMP throws CLIPBRD_E_CANT_OPEN error

I am trying to use C++ AMP in Visual C++ 2017 on Windows 10 (updated to the latest version), using the archived FFT library from the C++ AMP team on CodePlex. When I ran the sample code, the program threw an out-of-memory error while creating the DirectX FFT. I solved that problem by following a thread on the Microsoft forum.
However, the problems didn't stop there. When the FFT library tries to create an Unordered Access View, it throws a CLIPBRD_E_CANT_OPEN error, even though I never touch the clipboard.
Thank you for reading this!
It seems I have solved the problem. The original post mentioned that we need to create a new DirectX device and then create an accelerator_view on it. I then pass that view to the fft constructor as its second parameter:
fft(
    concurrency::extent<_Dim> _Transform_extent,
    const concurrency::accelerator_view& _Av = concurrency::accelerator().default_view,
    float _Forward_scale = 0.0f,
    float _Inverse_scale = 0.0f)
However, I still got CLIPBRD_E_CANT_OPEN crashes.
After reading the code, I realized that I need to create the arrays on that DirectX accelerator_view too, so I changed the array construction:
array<std::complex<float>, dims> transformed_array(extend, directx_acc_view); // construct on the same view as the fft
The idea came from the different behaviors of create_uav(): the internal buffers and the precomputation caused no problems, but the sample's calls triggered the clipboard error. I guessed the device mattered here, so I made that change.
I hope my understanding is correct; in any case, there are no such errors now.

Adding a new language to Tesseract ios SDK

I am able to compile the English version that is already included in the Tesseract sample, but I am not able to add another language such as swe.traineddata.
I'm doing it like this:
G8RecognitionOperation *operation = [[G8RecognitionOperation alloc] initWithLanguage:@"eng+swe"];
When I add this it gives the error below, but it works fine with English only:
Cube ERROR (CubeRecoContext::Load): unable to read cube language model params from /private/var/mobile/Containers/Bundle/Application/D93B654A-1E46-4A34-9A83-95C6FC903085/*.app/tessdata/swe.cube.lm
Cube ERROR (CubeRecoContext::Create): unable to init CubeRecoContext object
init_cube_objects(true, &tessdata_manager):Error:Assert failed:in file tessedit.cpp, line 203
The fact that it does not work has to do with the engine mode. If you use CubeOnly or TesseractCubeCombined, you need the 'cube' data files (such as swe.cube.lm). Engine mode TesseractOnly works fine.
I think you are missing some of the files. Also check that you added the tessdata folder using "Create folder references"; that helped me once.

Not Getting QR Code Data Using AVFoundation Framework

I used the AVFoundation framework delegate methods to read QR codes. It reads almost all QR codes and returns their data, but with some QR codes (e.g. the QR image below) it detects that a QR code is present yet does not return any data for it.
Your sample is triggering an internal (C++) exception. It seems to be getting caught around [AVAssetCache setMaxSize:], which suggests either that the data in this particular sample is corrupt, or that it's just too large for AVFoundation to handle.
As it's an internal exception, it fails (mostly) silently. The exception occurs when you try to extract the stringValue from your AVMetadataMachineReadableCodeObject.
So if you test for the existence of your AVMetadataMachineReadableCodeObject you will get YES, whereas if you test for the stringValue you will get NO:
AVMetadataMachineReadableCodeObject *readableObject =
    (AVMetadataMachineReadableCodeObject *)[self.previewLayer
        transformedMetadataObjectForMetadataObject:metadataObject];

BOOL foundObject = readableObject != nil;             // returns YES
BOOL foundString = readableObject.stringValue != nil; // returns NO + triggers the internal exception
It's probably best to test for the string, rather than the object, and ignore any result that returns NO.
Update
In your comment you ask about a native framework solution that will read this barcode. AVFoundation is the native framework for barcode reading, so if it fails on your sample, you will have to look for third-party solutions.
zxing offers an iOS port, but it looks old and unsupported.
zbarSDK used to be a good solution but also seems to be unsupported past iOS 4. As AVFoundation now has built-in barcode reading, this is unsurprising.
This solution by Accusoft does read the sample, but it is proprietary and really pricey.
I do wonder about the content of your sample, though - it looks either corrupt or like some kind of exotic encoding...

OpenCV Line-Mod problems with Images

I'm trying to use LINE-MOD (specifically line-2d) in OpenCV 2.4 to compare images. At the moment I am trying to change the test implementation linemod.cpp to use input images instead of the camera, but without any success.
I tried to load an image via imread("...", CV_LOAD_IMAGE_COLOR); and pushed it into the sources vector, but got 'OpenCV Error: Assertion failed (response_map.rows % T == 0) in linearize'.
If I load a CV_LOAD_IMAGE_GRAYSCALE image instead, the run stops in detector->match with the error 'Thread 1: EXC_BAD_ACCESS (code=1, address=0x11310f000)'.
I don't understand what makes the difference between images coming from a VideoCapture and from imread...
Is there anyone out there that may help me? I'm totally lost ... again ;-)
(For example sample code for matching two objects from images with linemod would be absolutely great!)
I use opencv 2.4 with xcode on a mac.
Maybe it is too late for an answer, but I am also interested in the algorithm.
In the OpenCV meeting minutes of 2012-06-26 (http://code.opencv.org/projects/opencv/wiki/2012) you can read:
Will work with Stefan Hinterstoisser for final version of LINEMOD by September
So if you did not already solve it, you may want to wait.

MonoTouch JIT Error in Release mode on Linq Method

I currently have some code, shown below, that uses LINQ to organize some IEnumerables for me. When executing this code on the device in Release mode (iOS 5.0.1, MonoTouch 5.0.1, Mono 2.10.6.1), I get the exception:
Attempting to JIT compile method 'System.Linq.OrderedEnumerable`1:GetEnumerator()' while running with --aot-only.
The code that generates this error is:
// List<IncidentDocument> documents is passed in
List<LibraryTableViewItemGroup> groups = new List<LibraryTableViewItemGroup>();
List<DocumentObjectType> categories = documents.Select(d => d.Type).Distinct().OrderBy(s => s.ToString()).ToList();
foreach (DocumentObjectType cat in categories)
{
    List<IncidentDocument> catDocs = documents.Where(d => d.Type == cat).OrderBy(d => d.Name).ToList();
    List<LibraryTableViewItem> catDocsTableItems = catDocs.ConvertAll(d => new LibraryTableViewItem { Image = GetImageForDocument(d.Type), Title = d.Name, SubTitle = d.Description });
    LibraryTableViewItemGroup catGroup = new LibraryTableViewItemGroup { Name = GetCatName(cat), Footer = null, Items = catDocsTableItems };
    groups.Add(catGroup);
}
This error doesn't happen in the simulator for either the Release or Debug configuration, or on the device for the Debug configuration. I've seen a couple of similar threads on SO here and here, but I'm not sure I understand how they apply to this particular issue.
It could be a few things.
There are some limitations when using full AOT to build iOS applications, i.e. ensuring that nothing will be JITted at runtime (an Apple restriction). Each case is different, even if the message looks identical (i.e. many causes can lead to it). However, there are generally easy workarounds we can suggest.
It could also be a (known) regression in 5.0.1 (which is fixed in 5.0.2). This produced a few extra AOT failures that are normally not issues (or already-fixed issues).
I suggest you update to MonoTouch 5.0.2 to see if it compiles your application correctly. If not, please file a bug report at http://bugzilla.xamarin.com and include a small, self-contained test case that duplicates the issue (the code above is not complete enough). It seems an interesting test case if it works when debugging is enabled.
