Exporting an sklearn random forest Python model to Android

Can you use a Python-trained scikit-learn model (random forest) in Android (Java)?
I need to use it to predict values in real time, and a server isn't an option here.

You've already answered your own question by tagging it with "pmml": export the scikit-learn random forest model to the PMML data format using the SkLearn2PMML package, then import, optimize, and score it on Android using the JPMML-Model and JPMML-Evaluator libraries.
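For illustration, here is a minimal sketch of the export step with SkLearn2PMML, assuming the iris dataset as a stand-in for your own data; the Android-side JPMML-Evaluator code is not shown here:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

# Any tabular dataset works; iris is used here only as a placeholder.
X, y = load_iris(return_X_y=True)

# Wrapping the estimator in a PMMLPipeline is what makes it exportable.
pipeline = PMMLPipeline([
    ("classifier", RandomForestClassifier(n_estimators=100, random_state=42))
])
pipeline.fit(X, y)

# Writes a PMML (XML) file that JPMML-Model can parse and JPMML-Evaluator can score on Android.
sklearn2pmml(pipeline, "RandomForest.pmml")
```

Note that SkLearn2PMML delegates the conversion to a Java backend, so a local Java runtime is needed on the machine that performs the export.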

Related

Is there a way to train a TensorFlow model on iOS?

The documentation for porting an already trained TensorFlow model to iOS is well defined:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios
However, it is not mentioned anywhere whether the model:
can be further trained on the device, or
can be created from scratch and trained on the device.
Is this possible with TensorFlow?
I am aware of other Swift/C++ libraries that offer on-device training, but I am more interested in this particular technology.
Starting with Core ML 3 and MLUpdateTask, on-device training is now part of the API: https://developer.apple.com/documentation/coreml/mlupdatetask

TensorFlow - saving a model generated on a PC to be used in the mobile version of TensorFlow or OpenCV

I'm currently still working on an age classification project using a convolutional neural network built in TensorFlow. How, and in what format, do I save the state of the model trained on my PC so that I can use it with TensorFlow in my mobile app, or even with OpenCV's tiny-dnn (dnn_modern)? I'm not sure whether the checkpoint file would work in OpenCV's tiny-dnn.

Binary Classification Model training in CoreML

I have just started exploring Core ML and was wondering if there is a way to train a binary classification model with it.
Please point me to any references or examples, as I am new to ML.
Core ML doesn't offer any APIs for building or training models. It works with models you've already trained elsewhere (Keras, Caffe, etc.) to perform whatever prediction or classification task you built the model for. See Apple's Core ML docs for info on how to convert a model for use with Core ML.
As of macOS 10.14 (Mojave), model building and training are available through Create ML. In Xcode you can train models in various ways.
I don't believe a binary classifier is currently supported, but if you can build an image set of X and NOT X you could emulate one.
Apple's docs: https://developer.apple.com/documentation/create_ml/creating_an_image_classifier_model
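As a rough illustration of the "train elsewhere, then convert" route mentioned above, here is a hedged sketch that trains a binary classifier in scikit-learn and converts it with coremltools; the dataset, model, and file name are placeholders, and the exact converter entry point can differ between coremltools versions:

```python
import coremltools
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Train a simple binary classifier on any two-class dataset.
X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Convert the fitted estimator to a Core ML model and save it for use in an iOS app.
mlmodel = coremltools.converters.sklearn.convert(clf)
mlmodel.save("BinaryClassifier.mlmodel")
```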

Is it possible to use Caffe Only for classification without any training?

Some users might see this as an opinion-based question, but if you look closely, I am trying to explore the use of Caffe purely as a testing platform, as opposed to its currently popular use as a training platform.
Background:
I have installed all dependencies using Jetpack 2.0 on Nvidia TK1.
I have installed caffe and its dependencies successfully.
The MNIST example is working fine.
Task:
I have been given a convnet with all standard layers (not an open-source model).
The network weights, bias values, etc. are available after training. The training has not been done via Caffe (pretrained network).
The weights and biases are all in the form of MATLAB matrices (actually in a .txt file, but I can easily write code to turn them into matrices).
I CANNOT train this network with Caffe and must use the given weights and bias values ONLY for classification.
I have my own dataset in the form of 32x32-pixel images.
Issue:
In all tutorials, details are given on how to deploy and train a network, and then use the generated .prototxt and .caffemodel files to validate and classify. Is it possible to implement this network in Caffe and directly use my weights/biases and my own dataset to classify images? What are the available options here? I am completely new to Caffe, so be kind. Thank you for the help!
The only issue here is:
How to initialize a Caffe net from text-file weights?
I assume you have a 'deploy.prototxt' describing the net's architecture (layer types, connectivity, filter sizes, etc.). The only remaining issue is how to set the internal weights of caffe.Net to pre-defined values saved as text files.
You can get access to caffe.Net internals; see the net surgery tutorial for how this can be done in Python.
Once you are able to set the weights according to your text files, you can net.save(...) the new weights into a binary caffemodel file to be used from then on. You do not have to train the net if you already have trained weights, and you can use it to generate predictions ("test").
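A minimal sketch of that net-surgery approach in pycaffe, assuming a deploy.prototxt whose first layer is named conv1 and hypothetical text files holding the MATLAB-exported weights; the layer names, file names, and shapes are placeholders for your own network:

```python
import numpy as np
import caffe

# Load only the architecture; parameters start from their default initialization.
net = caffe.Net('deploy.prototxt', caffe.TEST)

# Hypothetical loaders: parse the MATLAB-exported text files into numpy arrays
# and reshape them to match the corresponding parameter blob shapes.
w = np.loadtxt('conv1_weights.txt').reshape(net.params['conv1'][0].data.shape)
b = np.loadtxt('conv1_bias.txt').reshape(net.params['conv1'][1].data.shape)

# Net surgery: overwrite the parameter blobs in place; repeat for every layer with parameters.
net.params['conv1'][0].data[...] = w
net.params['conv1'][1].data[...] = b

# Save a standard binary caffemodel so future runs can load it directly for classification.
net.save('pretrained_from_text.caffemodel')
```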

Converting a Cuda-Convnet checkpoint to a Caffe model binary protobuf

I have an old checkpoint (model specification) from a Cuda-Convnet model trained by someone else a couple of years ago, and the training data is no longer available. I would like to find a way to convert this exact model to a Caffe model file. Is there a tool (currently available and supported) that does this? I would still be interested even if the conversion only goes through another ML framework that can export Caffe models (e.g., Theano, Torch7, etc.) as a bridge.
