Is there a way to train a TensorFlow model on iOS?

The documentation for porting an already trained TensorFlow model to iOS is well defined:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios
However, nowhere is it mentioned whether the model:
can be further trained on the device, or
can be created from scratch and trained on the device.
Is this possible with TensorFlow?
I am aware of other Swift/C++ libraries that offer on-device training, but I am more interested in this technology.

Starting with Core ML 3 and MLUpdateTask, on-device training is now part of the API: https://developer.apple.com/documentation/coreml/mlupdatetask
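For a feel of what that looks like, here is a minimal sketch in Swift, assuming a model that was marked as updatable at conversion time; the updateModel helper and the file names are invented for illustration:

```swift
import CoreML

// Minimal sketch of on-device updating with MLUpdateTask (iOS 13+).
// Assumes the compiled model at `modelURL` was marked updatable when it
// was converted, and that `trainingBatch` is an MLBatchProvider built
// from samples collected on the device.
func updateModel(at modelURL: URL, with trainingBatch: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingBatch,
        configuration: nil,
        completionHandler: { context in
            // The retrained model is in context.model; persist it so the
            // next launch loads the improved weights.
            let updatedURL = modelURL.deletingLastPathComponent()
                .appendingPathComponent("Updated.mlmodelc")
            try? context.model.write(to: updatedURL)
        }
    )
    task.resume()
}
```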

Related

Training ML Models on iOS Devices

Is there any way to train PyTorch models directly on-device on an iPhone via the GPU? The PyTorch Mobile docs seem to be focused entirely on inference, as do the iOS app examples (https://github.com/pytorch/ios-demo-app). I did find this article about using the MPS backend on Macs (https://developer.apple.com/metal/pytorch/), but I am not sure whether that is viable at all for iOS devices. There is also this prototype article about using the iOS GPU for PyTorch Mobile (https://pytorch.org/tutorials/prototype/ios_gpu_workflow.html), but it too seems to be focused on inference only.
We are attempting to train a large language model on the iPhone 14, and to make that possible given the memory constraints we would like to (a) discard intermediate activations and recompute them, and (b) manage memory directly so that some intermediate activations can be written to the filesystem and later read back. We suspect that converting a PyTorch model to Core ML format and using Core ML for training would prevent us from making these low-level modifications, but PyTorch might have the APIs necessary for this. If there are any examples or pointers that anyone can link to, that would be great.

How to run Tensorflow object detection on iOS

I'm trying to figure out the easiest way to run object detection from a TensorFlow model (Inception or MobileNet) in an iOS app.
I have iOS TensorFlow image classification working in my own app and network by following this example,
and have TensorFlow image classification and object detection working on Android for my own app and network by following this example,
but the iOS example contains only image classification, not object detection. How can the iOS example code be extended to support object detection, or is there a complete example for this on iOS (preferably Objective-C)?
I did find this and this, but they recompile TensorFlow from source, which seems complex.
I also found TensorFlow Lite, but again no object detection.
I also found the option of converting a TensorFlow model to Apple Core ML and then using Core ML, but this seems very complex, and I could not find a complete example for object detection in Core ML.
You need to train your own ML model. For iOS it will be easier to just use Core ML, and TensorFlow models can be converted to Core ML format. You can play with this sample and try different models: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Or here:
https://github.com/ytakzk/CoreML-samples
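To give a rough idea of the Vision + Core ML route from that sample, here is a minimal Swift sketch; YOLOv3 is only a placeholder for whatever class Xcode generates from the object-detection .mlmodel you add to the project:

```swift
import CoreML
import Vision

// Minimal sketch of object detection via Vision + Core ML, in the spirit
// of Apple's "Recognizing Objects in Live Capture" sample. `YOLOv3` is a
// placeholder for the class Xcode generates from your .mlmodel.
func detectObjects(in image: CGImage) throws {
    let model = try VNCoreMLModel(for: YOLOv3(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Object-detection models return VNRecognizedObjectObservation,
        // which carries class labels plus a bounding box.
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            let label = observation.labels.first?.identifier ?? "unknown"
            // boundingBox is normalized (0...1), origin at the bottom-left.
            print("\(label): \(observation.boundingBox)")
        }
    }
    try VNImageRequestHandler(cgImage: image).perform([request])
}
```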
So I ended up following this demo project:
https://github.com/csharpseattle/tensorflowiOS
It provided a working demo app/project, and it was easy to swap its TensorFlow .pb file for my own trained network file.
The instructions in the readme are pretty straightforward.
You do need to check out and recompile TensorFlow, which takes several hours and 10 GB of disk space. I did hit the thread issue; the gsed instructions worked for me. You also need to install Homebrew.
I have not looked at Core ML yet, but from what I have read, converting from TensorFlow to Core ML is complicated, and you may lose parts of your model.
It ran quite fast on an iPhone, even using an Inception model instead of MobileNet.

Is it possible to train a CoreML model on device as the app runs?

Is it possible to ship an iOS app with a Core ML model and then have the app continue improving (training) the model on device based on user behaviour, for example? The model would then keep growing and improving right on the device, with no need for server support...
It's now possible with Core ML 3.
https://developer.apple.com/videos/play/wwdc2019/704/
Skip to 9:00 to see it in action. If you just want the code, skip to 13:50.
The answer is YES. Since Core ML 3 is greatly optimised, you can train a Core ML model on device while your app is running.
Using Core ML 2, however, it is not possible to train a model on device: training a model requires considerably more power than a Core ML 2 app has available, which is why desktop and cloud computers with powerful GPUs are used to create pre-trained models. In Core ML 2 your .mlmodel must be pre-configured, and you have to include all pre-processing techniques, such as edge detection or frame differencing, at that stage.
I'm trying to do the same thing. Apparently, when you convert your model to Core ML format with coremltools, you can pass the "respect_trainable" argument to the converter and it will automatically make the model updatable.

Dynamic Machine Learning Model for iOS

I have an iOS app written in Swift. It gets user information and saves it in a database (Firebase). I want to use this data to dynamically update the machine learning model as new data arrives, to provide an improved prediction every time. Is there a way of doing this?
I know that I can create my trained model separately (e.g. using TensorFlow) and then use Core ML to import it into my app, but how can I do this so that the model keeps updating as new data comes in?
Thanks for the help!!
Depends on the model.
You cannot use Core ML for this, as it does not support training (note that this answer predates Core ML 3, which added on-device updating). The Metal Performance Shaders framework in iOS 11.3 does support training for neural-network-based models. And you can always write your own training code.
If the model is something basic like a logistic regression, you can train it on the device and it won't take that long. If it's a deep learning model with many layers and you're training it on a lot of data, it might not be feasible to train on the device.
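To make the logistic-regression case concrete, here is a toy sketch in plain Swift with no ML framework at all; the feature count, learning rate, and epoch count are arbitrary choices:

```swift
import Foundation

// Toy logistic regression trained with stochastic gradient descent.
// Small enough to run on device in negligible time.
struct LogisticRegression {
    var weights: [Double]
    var bias = 0.0

    init(featureCount: Int) {
        weights = Array(repeating: 0.0, count: featureCount)
    }

    // Sigmoid of the linear combination of features.
    func predict(_ x: [Double]) -> Double {
        let z = zip(weights, x).map(*).reduce(bias, +)
        return 1.0 / (1.0 + exp(-z))
    }

    // SGD on the log loss; its per-sample gradient is (prediction - label) * x.
    mutating func train(inputs: [[Double]], labels: [Double],
                        learningRate: Double = 0.1, epochs: Int = 100) {
        for _ in 0..<epochs {
            for (x, y) in zip(inputs, labels) {
                let error = predict(x) - y
                for i in weights.indices {
                    weights[i] -= learningRate * error * x[i]
                }
                bias -= learningRate * error
            }
        }
    }
}

// Usage: learn a trivially separable rule (label 1 when the feature is large).
var model = LogisticRegression(featureCount: 1)
model.train(inputs: [[0.1], [0.4], [0.6], [0.9]], labels: [0, 0, 1, 1])
print(model.predict([0.8]))   // prints a value close to 1
```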

Binary Classification Model training in CoreML

I have just started exploring Core ML and was wondering if there is a way to train a binary classification model with it.
Please point me to any references or examples, as I am an ML noob.
Core ML doesn't offer any APIs for building or training models. It works with models you've already trained elsewhere (Keras, Caffe, etc.) to perform whatever prediction or classification task you built the model for. See Apple's Core ML docs for info on how to convert a model for use with Core ML.
Building and training models is supported as of macOS 10.14 (Mojave) via Create ML. In Xcode you can train models in various ways.
I don't believe a binary classifier is currently supported out of the box, but if you can build an image set of X and NOT X you could emulate one; see the sketch below.
Apple's docs: https://developer.apple.com/documentation/create_ml/creating_an_image_classifier_model
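As a rough illustration of that Create ML route (macOS only, e.g. in an Xcode playground), here is a minimal sketch; the paths are placeholders, and the training directory is assumed to contain one subfolder per class ("X" and "NotX") to emulate the binary classifier:

```swift
import CreateML
import Foundation

// Create ML infers the class labels from the subdirectory names, e.g.
// training_data/X/... and training_data/NotX/...
let trainingDir = URL(fileURLWithPath: "/path/to/training_data")
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))

// Persist the trained model; the resulting .mlmodel can be dropped into an iOS app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/XClassifier.mlmodel"))
```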
