Import model trained in Google Cloud to Android device - machine-learning

I have trained a TensorFlow model in Google Cloud using instructions from this link and have generated a binary (application/octet-stream) file with a .pb extension. However, instead of deploying the model in the cloud, I want to use the model locally on my Android device. How can I do this?

You can do that, and the easiest way right now is to follow this code lab: TensorFlow for Poets 2: TFLite
In the code lab you embed the model as an asset, but a natural next step is to download the model from Cloud Storage whenever there's a new version of it.
If your model uses operations that are not yet supported by TFLite, you can use TensorFlow Mobile. It probably won't be as fast, but it still works fine (there's also a code lab to understand it better).
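As a sketch of the .pb-to-.tflite conversion step the code lab walks through: with a TF 1.x frozen graph you would call `tf.compat.v1.lite.TFLiteConverter.from_frozen_graph` with your graph's actual input/output tensor names. Since those names depend on your graph, the hedged example below builds a tiny in-memory Keras model instead, so it is runnable end to end; the file names are illustrative.

```python
import tensorflow as tf

# With a TF 1.x frozen graph (.pb) you would instead use, e.g.:
#   tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
#       "retrained_graph.pb",
#       input_arrays=["input"],          # your graph's input tensor name
#       output_arrays=["final_result"])  # your graph's output tensor name
# Here we use a small in-memory Keras model so the snippet runs as-is.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, input_shape=(4,), activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()  # the .tflite flatbuffer as raw bytes

# This is the file you would ship in the Android app's assets/ folder,
# or serve from Cloud Storage for on-demand download.
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```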

Related

Compiling of ML Kit model on demand

I want to check if it is possible to use ML Kit Pose Detection without having it in the initial application bundle (to reduce application size).
I am looking for functionality similar to what Core ML provides with Downloading and Compiling a Model on the User’s Device. For now, as an option, I found it possible by using TensorFlow with a model converted to .tflite, but I'm still curious about other possible ways to achieve it.
You can also use VNDetectHumanBodyPoseRequest; it's integrated into the iOS SDK as part of the Vision framework.
https://developer.apple.com/documentation/vision/detecting_human_body_poses_in_images

Load a heavy CoreML model from a remote source

We have a heavy CoreML model (~170 MB) that we want to include in our iOS app.
Since we don't want the app size to be that large, we created a smaller model (with lower performance) that we can include directly; our intention is to download the heavy model on app start and switch to it once the download completes.
Our initial thought was to use Apple's CoreML Model Deployment solution, but it quickly turned out to be impossible for us, as Apple limits MLModel archives to 50 MB.
So the question is, is there an alternative solution to loading a CoreML model from a remote source, similar to Apple's solution, and how would one implement it?
Any help would be appreciated. Thanks!
Put the mlmodel file on a server you own, download it into the app's Documents folder using your favorite method, create a URL to the downloaded file, use MLModel.compileModel(at:) to compile it, and initialize the MLModel (or the automatically generated class) using the compiled model.

uploading a saved trained model into MLKIT

I have created a Multinomial Naive Bayes model using sklearn in a Jupyter notebook and saved it with the joblib library as a .sav file. Now I want to upload it into ML Kit in order to connect it to a mobile application later. However, while uploading the file, I got an error that .sav is not a supported file type. Any idea what file types exactly can be uploaded to ML Kit for further use in a mobile app? Or how can I save this model in a format that can be uploaded?
When you upload it into ML Kit, are you using the Firebase Console (https://console.firebase.google.com)? ML Kit supports only TensorFlow Lite models. Usually the model file's extension is ".tflite".

can not connect machine learning to mobile app

I have Python machine learning code and a Flutter mobile application. Is there a way to connect the two? Also, is there a library in Flutter that can apply machine learning / neural network concepts to text?
Moreover, what are the best practices/tools/platforms for developing a mobile application based on machine learning?
There is currently no way to run Python code within a Flutter app, so you'll probably need to interface the two with an API. However, this requires a larger codebase and you'll have to pay for server bandwidth, so it's much easier to just build your ML functionality within Flutter.
If you insist on going with Python for your ML:
You'll need to build a RESTful API.
Here are some resources for you to get started on that path.
(1) https://www.codementor.io/sagaragarwal94/building-a-basic-restful-api-in-python-58k02xsiq
(2) https://realpython.com/flask-connexion-rest-api/
There are a lot of different frameworks you can do this with, see (2).
Once you get that up and running here's a tutorial for importing that data into your Flutter app:
(3) https://www.tutorialspoint.com/python_data_science/python_processing_json_data.htm
If you want to build your ML inside Flutter
This depends on your use case, but consider checking out (4) and using the MLKit for Firebase.
(4) http://flutterdevs.com/blog/machine-learning-in-flutter/
If you want to get a little more into the weeds, or you have a more specific use case, see (5).
(5) https://flutterawesome.com/a-machine-learning-app-for-an-age-old-debate/
Good luck!

Hiding CoreML model (.mlmodel) files

I am working on a project that involves adding AI object detection capabilities to an existing iOS app. I was able to train my own DNN models and convert them to CoreML's .mlmodel format.
Now I need to transfer my work, which includes the .mlmodel files, to another developer for integration. However, I don't want them to use my trained .mlmodel files outside of this project (per the contract). Is there any way to "hide" the .mlmodel files so they can only be used for this particular app and can't simply be copied and saved for other uses?
I have done some quick research on iOS libraries and frameworks, but I am still not sure that's the solution I am looking for.
Nope. Once someone has access to your mlmodel file or the compiled version, mlmodelc, they can use it elsewhere.
For example, you can download an app from the App Store, look inside the IPA file, copy their mlmodelc folder into your own app, and start using the model right away.
To prevent outsiders from stealing your model, you can encrypt the model (just like you'd encrypt any other file) but that only works if you can hide the decryption key. You can also add a custom layer to the model, so that it becomes useless without the code for this custom layer.
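For the encryption option, here is a minimal sketch using the third-party `cryptography` package (an assumption on my part; any authenticated encryption scheme works the same way). The weak point remains exactly as described above: the key must ship with, or be fetched by, the app.

```python
# Third-party dependency: pip install cryptography
from cryptography.fernet import Fernet

def encrypt_model(model_bytes, key):
    # Fernet = AES-128-CBC plus an HMAC, so tampering is also detected.
    return Fernet(key).encrypt(model_bytes)

def decrypt_model(token, key):
    return Fernet(key).decrypt(token)

# key = Fernet.generate_key()
# Store the key somewhere an attacker can't trivially read it --
# hiding it is the hard part, not the encryption itself.
```

At runtime the app decrypts the file to a temporary location (or to memory), compiles it, and deletes the plaintext copy.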
However, those solutions don't work if you're hiring an external developer to work on your app because they will -- out of necessity -- need to have access to these decryption keys and source code files.
I'm not sure what exactly you want this other developer to do, but if you don't trust them, then:
get a new developer that you do trust,
be prepared to enforce the contract, or
give them a version of your mlmodel file with the weights replaced by random numbers. The model will still work but give nonsense predictions. Once the developer is done with their work, replace the model with the real one. Obviously, this is not a good solution if they need meaningful predictions to do that work.
