Uploading a saved, trained model into ML Kit

I have created a Multinomial Naive Bayes model with sklearn in a Jupyter notebook and saved it with the joblib library as a .sav file. Now I want to upload it to ML Kit so I can use it later from a mobile application. However, while uploading the file, I got an error that .sav is not a supported file type. Any idea what file types exactly can be uploaded to ML Kit for later use in a mobile app, or how I can save this model in a format that ML Kit accepts?

When you upload it to ML Kit, are you using the Firebase Console (https://console.firebase.google.com)? ML Kit supports only TensorFlow Lite models; the model file's extension is usually ".tflite".
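A pickled scikit-learn estimator can't be run by ML Kit at all; only a TensorFlow Lite flatbuffer can. One thing worth knowing, though, is that a trained Multinomial NB is just a log-linear scorer (log-priors plus counts times log-likelihoods), so its learned parameters can be copied into a single dense layer of a tiny TF/Keras model and converted with TFLiteConverter. A minimal sketch of that equivalence in plain NumPy; the toy data and smoothing value are illustrative, not from the question:

```python
import numpy as np

# Toy document-term counts: 4 documents, 3 vocabulary terms, 2 classes.
X = np.array([[3, 0, 1],
              [2, 0, 0],
              [0, 4, 1],
              [0, 3, 2]], dtype=float)
y = np.array([0, 0, 1, 1])

# Fit Multinomial NB by hand (Laplace smoothing, alpha = 1),
# mirroring what sklearn's MultinomialNB learns.
alpha = 1.0
classes = np.unique(y)
log_prior = np.log(np.array([(y == c).mean() for c in classes]))
counts = np.array([X[y == c].sum(axis=0) for c in classes]) + alpha
log_likelihood = np.log(counts / counts.sum(axis=1, keepdims=True))

# Prediction is a single affine map: scores = X @ W + b, then argmax.
# That is exactly the shape of a Dense layer, which is why the learned
# parameters can be dropped into a tiny Keras model and converted to
# .tflite for ML Kit.
scores = X @ log_likelihood.T + log_prior
pred = scores.argmax(axis=1)
print(pred)  # recovers the training labels on this toy data: [0 0 1 1]
```

In sklearn these learned parameters are exposed as `class_log_prior_` and `feature_log_prob_` on a fitted `MultinomialNB`, so you can read them straight off the saved model instead of refitting.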

Related

Compiling an ML Kit model on demand

I want to check whether it is possible to use ML Kit Pose Detection without shipping it in the initial application bundle (to reduce application size).
I am looking for functionality similar to what Core ML provides with Downloading and Compiling a Model on the User's Device. For now, one option I found is using TensorFlow with the model converted to .tflite, but I'm still curious about other possible ways to achieve it.
You can also use VNDetectHumanBodyPoseRequest; it's built into the iOS SDK's Vision framework.
https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_images

Load a heavy CoreML model from a remote source

We have a situation where we have a heavy Core ML model (~170 MB) that we want to include in our iOS app.
Since we don't want the app to be that large, we created a smaller model (with lower accuracy) that we can ship directly; our intention is to download the heavy model on app start and switch to it once the download completes.
Our initial thought was to use Apple's Core ML Model Deployment solution, but that quickly turned out to be impossible for us, as Apple limits MLModel archives to 50 MB.
So the question is: is there an alternative solution for loading a Core ML model from a remote source, similar to Apple's, and how would one implement it?
Any help would be appreciated. Thanks!
Put the mlmodel file on a server you own and download it into the app's Documents folder using your favorite method. Then create a URL to the downloaded file, call MLModel.compileModel(at:) to compile it, and initialize the MLModel (or the automatically generated class) with the compiled model's URL.

Hiding CoreML model (.mlmodel) files

I am working on a project that adds AI object detection capabilities to an existing iOS app. I was able to train my own DNN models and convert them to Core ML's .mlmodel format.
Now I need to hand over my work, which includes the .mlmodel files, to another developer for integration. However, per our contract, I don't want them to use my trained .mlmodel files outside of this project. Is there any way to "hide" the .mlmodel files so they can only be used in this particular app and can't simply be copied and saved for other uses?
I have done some quick research on iOS libraries and frameworks, but I am still not sure whether that's the solution I'm looking for.
Nope. Once someone has access to your mlmodel file or the compiled version, mlmodelc, they can use it elsewhere.
For example, you can download an app from the App Store, look inside the IPA file, copy their mlmodelc folder into your own app, and start using the model right away.
To prevent outsiders from stealing your model, you can encrypt the model (just like you'd encrypt any other file) but that only works if you can hide the decryption key. You can also add a custom layer to the model, so that it becomes useless without the code for this custom layer.
However, those solutions don't work if you're hiring an external developer to work on your app, because they will necessarily need access to these decryption keys and source code files.
I'm not sure what exactly you want this other developer to do, but if you don't trust them, then:
get a new developer that you do trust,
be prepared to enforce the contract, or
give them a version of your mlmodel file with the weights replaced by random numbers. The model will still work but give nonsense predictions. Once that developer is done with their work, replace the model with the real one. Obviously, this is not a good solution if they need to use the model for whatever work they need to do.
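The random-weights idea in option 3 can be sketched generically. The example below uses a plain NumPy linear classifier as a stand-in for the .mlmodel (for a real Core ML file you would rewrite the weight blobs in the model spec, e.g. with coremltools); all names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained model: a linear classifier's real weights.
W_real = np.array([[ 2.0, -1.0],
                   [-1.5,  2.5]])
b_real = np.array([0.1, -0.1])

def predict(W, b, x):
    # Same architecture either way: the model still runs end to end.
    return (x @ W + b).argmax(axis=1)

# Hand-off copy: same shapes and roughly the same scale, but random
# numbers. The developer can integrate it and the app will run, yet
# its predictions carry no information about the real model.
W_fake = rng.normal(scale=W_real.std(), size=W_real.shape)
b_fake = rng.normal(scale=0.1, size=b_real.shape)

x = np.array([[1.0, 0.0], [0.0, 1.0]])
print(predict(W_real, b_real, x))   # meaningful predictions
print(predict(W_fake, b_fake, x))   # same shape, meaningless values
```

Matching the shapes and rough scale matters: the substitute model must load and run through the same code paths so the integration work transfers unchanged when you swap in the real weights.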

Import model trained in Google Cloud to Android device

I have trained a TensorFlow model on Google Cloud using the instructions from this link, which generated a binary (application/octet-stream) file with a .pb extension. However, instead of deploying the model in the cloud, I want to use it locally on my Android device. How can I do that?
You can do that, and the easiest way right now is to follow this codelab: TensorFlow for Poets 2: TFLite.
In the codelab you embed the model as an asset, but a natural evolution is to download the model from Cloud Storage whenever a new version of it is available.
If your model uses operations that are not yet supported by TFLite, you can use TensorFlow Mobile instead. It probably won't be as fast, but it still works fine (there's also a codelab to understand it better).
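The codelab drives the converter from the command line, but the same conversion can be scripted in Python. A minimal sketch, assuming TensorFlow 2.x is installed; the Keras model below is a stand-in for the trained one (with an exported SavedModel directory you would use tf.lite.TFLiteConverter.from_saved_model instead, and the file names here are made up):

```python
import tensorflow as tf

# Stand-in for the trained model. In the question's case you would
# load the exported model instead, e.g.:
#   converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # bytes of the .tflite flatbuffer

# This file is what you bundle as an Android asset (or host on
# Cloud Storage for on-demand download).
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

On the Android side the resulting file is loaded with the TensorFlow Lite Interpreter, which is what the codelab walks through.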

jpg file parsing to extract info/text

I have an idea for a project that I wanted some advice/pointers on.
I am planning to write an application that automatically parses expense receipts in JPG format, extracts the amount, and categorizes the expense using some learning algorithm. Is this doable at all? What libraries are available to parse JPG files and extract the textual and currency information from them?
Any pointers appreciated. I have a vanilla HP all-in-one scanner that I will use to scan all the receipts.
Thanks,
RS
You will need an OCR (optical character recognition) library; this will recognize and extract text from images. It has been a while since I last used OCR software, so I'm not sure what the best SDKs/plugins are at the moment.
I did find an article on The Code Project that uses an OCR product from LEADTOOLS.
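Once an OCR engine (Tesseract is a common free choice) has turned the scanned JPG into text, pulling out the currency amounts is ordinary string parsing. A minimal sketch; the receipt text below is made up, and the "largest amount is the total" rule is just an illustrative heuristic:

```python
import re

# Hypothetical OCR output for a receipt (what an OCR library might
# return from the scanned JPG).
ocr_text = """
ACME GROCERY
MILK            $3.49
BREAD           $2.19
TOTAL           $5.68
"""

# Match currency amounts like $3.49 or $1,234.56.
AMOUNT = re.compile(r"\$(\d{1,3}(?:,\d{3})*\.\d{2})")

amounts = [float(m.replace(",", "")) for m in AMOUNT.findall(ocr_text)]
total = max(amounts) if amounts else None  # crude heuristic: total is the largest

print(amounts)  # [3.49, 2.19, 5.68]
print(total)    # 5.68
```

In practice you'd anchor the total on the word "TOTAL" rather than the maximum, and the categorization step would feed the merchant name and line items into whatever classifier you train.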
