Does DL4J support on-device model retraining - machine-learning

I am trying to deploy a pretrained model in an Android application. The model then needs to be retrained on the device with locally captured data.
Specifically, there is a pretrained DNN that predicts video quality from the available bandwidth. The network was trained on data containing bandwidth measurements and the corresponding video quality. That model now has to be deployed on the device and retrained on new data.
This new data has already been captured by the mobile application and is stored in the required format (CSV).
I first considered TensorFlow Lite, but it does not support on-device retraining.
I am now trying to use DL4J but could not work out how to do it.
If it is possible to use DL4J for this, how can I do it?
If not, is there another approach?
P.S. I have tried my best to write the problem statement clearly. If anything is unclear, please comment and I will clarify.
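For what it's worth, here is a minimal sketch of how the retraining step could look with DL4J. It assumes the pretrained network was exported as a DL4J zip with ModelSerializer, that the CSV is comma-separated with no header row, and a hypothetical column layout (the label index, class count and batch size below are placeholders to adjust to your data):

import org.datavec.api.records.reader.RecordReader;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

import java.io.File;

public class OnDeviceRetrainer {

    public static void retrain(File modelZip, File csvFile) throws Exception {
        // Restore the pretrained network shipped with the app
        // (assumes it was saved with ModelSerializer.writeModel on the desktop side).
        MultiLayerNetwork network = ModelSerializer.restoreMultiLayerNetwork(modelZip, true);

        // Read the locally captured CSV (assumed comma-separated, no header row).
        RecordReader reader = new CSVRecordReader();
        reader.initialize(new FileSplit(csvFile));

        // Placeholder layout: label in column 3, 5 quality classes, batches of 32.
        DataSetIterator data = new RecordReaderDataSetIterator(reader, 32, 3, 5);

        // Continue training (fine-tune) on the new data for a few epochs.
        for (int epoch = 0; epoch < 3; epoch++) {
            data.reset();
            network.fit(data);
        }

        // Persist the updated model so the next app launch uses it.
        ModelSerializer.writeModel(network, modelZip, true);
    }
}

Note that DL4J on Android pulls in native ND4J binaries, so expect a noticeably larger APK, and keep the batch size small to stay within the device's memory.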

Related

Suspicious facial expression and foreign object recognition using machine learning and image processing

I am stuck with a project on detecting suspicious facial expressions along with detecting foreign objects (e.g. guns, metal rods or anything similar). I don't know much about ML or image processing, and I need to complete the project as soon as possible. It would be helpful if anyone could point me in the right direction on a few things.
How can I manage a dataset?
What kind of approach or code should I follow?
How do I present the final system?
I know it is a lot to ask but any amount of help is appreciated.
I have tried to train a model using transfer learning, following this YouTube tutorial:
https://www.youtube.com/watch?v=avv9GQ3b6Qg
The tutorial uses MobileNet as the model and a known dataset with 7 classes (Angry, Disgust, Fear, Happy, Neutral, Sad, Surprised). I was able to successfully train the model and have faces classified into these 7 emotions.
How do I further develop it to achieve what I want?

Training ML Models on iOS Devices

Is there any way to train PyTorch models directly on-device on an iPhone via the GPU? The PyTorch Mobile docs seem to be completely focused on inference only, as do the iOS app examples (https://github.com/pytorch/ios-demo-app). I did find this article about using the MPS backend on Macs (https://developer.apple.com/metal/pytorch/), but I'm not sure whether this is at all viable for iOS devices. There's also this prototype article about using the iOS GPU for PyTorch Mobile (https://pytorch.org/tutorials/prototype/ios_gpu_workflow.html), but it too seems to be focused on inference only.
We are attempting to train a large language model on the iPhone 14, and in order to make that possible given the memory constraints, we would like to a) discard intermediate activations and recompute them, and b) manage memory directly so that some intermediate activations are written to the filesystem and later read back. We suspect that converting a PyTorch model to Core ML format and using Core ML for training would prevent us from making these low-level modifications, but PyTorch might have the APIs necessary for this. If there are any examples or pointers anyone can link to, that would be great.

Is it possible to train a CoreML model on device as the app runs?

Is it possible to ship an iOS app with a Core ML model and then have the app continue improving (training) the model on device based on user behaviour, for example? The model would then keep growing and improving right on the device, with no need for server support...
It's now possible with Core ML 3.
https://developer.apple.com/videos/play/wwdc2019/704/
Skip to 9:00 to see it in action. If you just want the code, skip to 13:50.
The answer is YES.
Since Core ML 3 is greatly optimised, the answer is YES: you can train a Core ML model on device while your app is running.
With Core ML 2, however, it is not possible to train a model on device, because training requires considerably more power than a Core ML 2 app has available. That is why desktop and cloud computers with powerful GPUs are used to create pretrained models. With Core ML 2 your mlmodel must be pre-configured, and you have to include all pre-processing techniques, such as edge detection or frame differencing, at that stage.
I'm trying to do the same thing. Apparently, when you convert your model to Core ML format with coremltools, you can pass the "respect_trainable" argument to the converter and it will automatically make the model updatable.

Dynamic Machine Learning Model for iOS

I have an iOS app written in Swift. It collects user information and saves it to a database (Firebase). I want to use this data to dynamically update the machine-learning model as new data comes in, so that it provides an improved prediction every time. Is there a way of doing this?
I know that I can create my trained model separately (e.g. using TensorFlow) and then use Core ML to import it into my app but how can I do this so the model keeps updating as new data comes in?
Thanks for the help!!
Depends on the model.
You cannot use Core ML for this as it does not support training. The Metal Performance Shaders framework in iOS 11.3 now supports training for neural network-based models. And you can always write your own training code.
If the model is something basic like a logistic regression, you can train it on the device and it won't take that long. If it's a deep learning model with many layers and you're training it on a lot of data, it might not be feasible to train on the device.
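To make the "something basic like a logistic regression" case concrete, here is a small, self-contained sketch of hand-rolled training code (plain batch gradient descent, no framework) of the kind that is cheap enough to run on a phone; all names and hyperparameters are illustrative:

/** Minimal logistic-regression trainer using batch gradient descent. */
public class LogisticRegression {
    private final double[] weights;
    private double bias;

    public LogisticRegression(int numFeatures) {
        this.weights = new double[numFeatures];
    }

    private static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /** Probability that x belongs to the positive class. */
    public double predict(double[] x) {
        double z = bias;
        for (int i = 0; i < weights.length; i++) {
            z += weights[i] * x[i];
        }
        return sigmoid(z);
    }

    /** One pass of gradient descent over the whole dataset (labels are 0 or 1). */
    public void fitEpoch(double[][] xs, int[] ys, double learningRate) {
        double[] gradW = new double[weights.length];
        double gradB = 0.0;
        for (int n = 0; n < xs.length; n++) {
            double error = predict(xs[n]) - ys[n];   // dLoss/dz for log-loss
            for (int i = 0; i < weights.length; i++) {
                gradW[i] += error * xs[n][i];
            }
            gradB += error;
        }
        for (int i = 0; i < weights.length; i++) {
            weights[i] -= learningRate * gradW[i] / xs.length;
        }
        bias -= learningRate * gradB / xs.length;
    }
}

A few hundred such epochs over a few thousand rows typically finish in well under a second on a modern phone, which is why simple models are a good fit for on-device updates.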

How intensive is training a machine learning algorithm?

I'd like to make an app using iOS's new CoreML framework that does image recognition. To do so I'd probably have to train my own model, and I'm wondering exactly how much data and compute power it would require. Is it something I could feasibly accomplish on a dual-core i5 MacBook Pro using Google Images for source data, or would it be much more involved?
It depends on what sort of images you want to train your model to recognize.
What is often done is fine-tuning an existing model. You take a pretrained version of Inception-v3 (let's say) and then replace the final layer with your own. You train this last layer on your own images.
You still need a fair number of training images (a few hundred per category, but more is better), but you can do this on your MacBook Pro in anywhere between 30 minutes and a few hours.
TensorFlow comes with a script that makes it really easy to do this. Keras has a great blog post on how to do this. I used the TensorFlow script to re-train Inception-v3 to tell apart my two cats, from 50 or so images of each cat.
If you want to train from scratch you probably want to do this in the cloud using AWS, Google's Cloud ML Engine, or something easy like FloydHub.
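To make the replace-the-final-layer idea concrete, here is a rough sketch using DL4J's transfer-learning API rather than the TensorFlow retrain script mentioned above (DL4J being the framework asked about at the top of this page); the layer names follow DL4J's VGG16 zoo model, and the updater and seed are placeholders:

import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.nn.transferlearning.FineTuneConfiguration;
import org.deeplearning4j.nn.transferlearning.TransferLearning;
import org.deeplearning4j.nn.weights.WeightInit;
import org.deeplearning4j.zoo.PretrainedType;
import org.deeplearning4j.zoo.model.VGG16;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Nesterovs;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class FineTuneExample {

    public static ComputationGraph buildTransferModel(int numClasses) throws Exception {
        // Download a pretrained ImageNet VGG16 from the DL4J model zoo.
        VGG16 zooModel = VGG16.builder().build();
        ComputationGraph pretrained = (ComputationGraph) zooModel.initPretrained(PretrainedType.IMAGENET);

        FineTuneConfiguration fineTuneConf = new FineTuneConfiguration.Builder()
                .updater(new Nesterovs(5e-5))   // placeholder learning rate
                .seed(123)
                .build();

        // Freeze everything up to the last fully connected layer ("fc2") and swap
        // the 1000-way ImageNet classifier for a new output layer of our own.
        return new TransferLearning.GraphBuilder(pretrained)
                .fineTuneConfiguration(fineTuneConf)
                .setFeatureExtractor("fc2")
                .removeVertexKeepConnections("predictions")
                .addLayer("predictions",
                        new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                                .nIn(4096).nOut(numClasses)
                                .weightInit(WeightInit.XAVIER)
                                .activation(Activation.SOFTMAX)
                                .build(),
                        "fc2")
                .build();
    }
}

Only the new "predictions" layer is trained, which is what makes this feasible on a laptop with a few hundred images per class.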

Resources