Using Google ML to retrain Image Recognition with TensorFlow

I'm using a bunch of images to train my TensorFlow image-recognition project, following this tutorial: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/index.html#4
Training needs a lot of CPU, and it takes a long time on my laptop.
I have registered a Google ML account and worked through this tutorial:
https://cloud.google.com/ml/docs/quickstarts/training
Everything is set up and running, but it only covers the MNIST sample code. There is no image-retraining sample like TensorFlow's retrain.py.
I'm looking for examples of how to run TensorFlow's image-recognition retrain script on Google ML.
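For reference, here is a minimal sketch of the same transfer-learning idea written as a plain Python trainer script that could be packaged and submitted as a Cloud ML training job, the same way the MNIST quickstart is. This is not the stock retrain.py: the TF-Hub module URL is real, but the data path, bucket name, and hyperparameters are assumptions for illustration.

```python
# Minimal transfer-learning sketch (not the codelab's retrain.py).
# Assumes a directory with one sub-folder per class, as in the codelab.
import tensorflow as tf
import tensorflow_hub as hub

DATA_DIR = "flower_photos"   # placeholder; stage your images here or on GCS
IMAGE_SIZE = (224, 224)

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    DATA_DIR, image_size=IMAGE_SIZE, batch_size=32)
num_classes = len(train_ds.class_names)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        trainable=False),  # frozen MobileNetV2 feature extractor
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("gs://your-bucket/retrained_model")  # "your-bucket" is a placeholder
```

From there, the Cloud ML quickstart's packaging and gcloud submission steps apply unchanged; only the trainer module differs.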

Related

Text classification with BERT and PyTorch Lightning

I am currently working on multi-label text classification with BERT and PyTorch Lightning. I am new to machine learning and am confused about how to train my model on AWS.
My questions: which Accelerated Computing (EC2) instance should I use, given that I have a large dataset with 377 labels? And will I be able to run my code through Jupyter notebooks with libraries like torch? Thank you for your time.
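For what it's worth, here is a minimal sketch of the modeling side, assuming bert-base-uncased and the 377 labels from the question; the batch field names and learning rate are assumptions. The key point is BCEWithLogitsLoss, since multi-label classification means an independent sigmoid per label rather than a softmax.

```python
# Hedged sketch: multi-label head on BERT with PyTorch Lightning.
import torch
import pytorch_lightning as pl
from transformers import BertModel

NUM_LABELS = 377  # from the question

class MultiLabelBert(pl.LightningModule):
    def __init__(self, lr=2e-5):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, NUM_LABELS)
        # Multi-label => one independent sigmoid per label, not softmax.
        self.loss = torch.nn.BCEWithLogitsLoss()
        self.lr = lr

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        return self.classifier(out.pooler_output)

    def training_step(self, batch, batch_idx):
        logits = self(batch["input_ids"], batch["attention_mask"])
        loss = self.loss(logits, batch["labels"].float())
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)

# On a GPU EC2 instance, Lightning picks up the hardware with e.g.:
# trainer = pl.Trainer(accelerator="gpu", devices=1); trainer.fit(model, train_loader)
```

This runs the same way in a Jupyter notebook as in a script, so an Accelerated Computing instance with a notebook environment works fine.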

Using TFLite file generated via Google ML Kit in TensorFlowLite Image classification example iOS app

The .tflite and .txt files generated for image classification via Google Firebase ML Kit (https://developers.google.com/ml-kit), when dropped into the TensorFlow image classification iOS sample (https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification), identify images with very low accuracy; the match percentage is mostly below 30%.
The same .tflite and .txt files, when integrated into the Android version of the same TensorFlow image classification sample, work perfectly, with accuracy as high as 99%.
Any help pointing toward a solution would be great at this point.
I have been reading about quantization: the Android sample has options for both quantized and non-quantized models (though the non-quantized one crashes for me for some reason). I'm not sure whether that is related.
If it helps, here is a link to my TFLite file generated through Google ML Kit: https://drive.google.com/file/d/1WXdjGGyj2RQbSLTniQ60o0ZYb6nqlaUw/view?usp=sharing
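One thing worth checking (a guess, not a confirmed diagnosis) is whether the ML Kit export is quantized, because the iOS sample must then be fed raw uint8 pixels rather than normalized float32 values; a dtype/preprocessing mismatch produces exactly this kind of low-confidence output. A short sketch with the Python TFLite interpreter, assuming the file is named model.tflite:

```python
# Hedged sketch: inspect a .tflite file to see whether it is quantized.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # your ML Kit export
interpreter.allocate_tensors()
for detail in interpreter.get_input_details():
    # uint8 dtype + non-trivial quantization params => quantized model
    print(detail["name"], detail["dtype"], detail["quantization"])
```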

How to run Tensorflow object detection on iOS

I'm trying to figure out the easiest way to run object detection from a TensorFlow model (Inception or MobileNet) in an iOS app.
I have iOS TensorFlow image classification working in my own app and network following this example,
and I have TensorFlow image classification and object detection working on Android for my own app and network following this example,
but the iOS example contains only image classification, not object detection. How can I extend the iOS example code to support object detection, or is there a complete example for this on iOS (preferably Objective-C)?
I did find this and this, but they recompile TensorFlow from source, which seems complex.
I also found TensorFlow Lite,
but again no object detection.
I also found the option of converting a TensorFlow model to Apple Core ML and using Core ML, but this seems very complex, and I could not find a complete example for object detection in Core ML.
You need to train your own ML model. For iOS it will be easier to just use Core ML; TensorFlow models can also be converted to the Core ML format. You can play with this sample and try different models: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Or here:
https://github.com/ytakzk/CoreML-samples
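For the conversion route, here is a minimal sketch using coremltools (version 4+). The paths are placeholders, and note that an object-detection graph may still need custom decoding of its output boxes on the Core ML side:

```python
# Hedged sketch: convert a TensorFlow SavedModel to Core ML with coremltools.
import coremltools as ct

mlmodel = ct.convert("saved_model_dir", source="tensorflow")  # path is a placeholder
mlmodel.save("Detector.mlmodel")  # drop the .mlmodel into your Xcode project
```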
So I ended up following this demo project:
https://github.com/csharpseattle/tensorflowiOS
It provided a working demo app, and it was easy to swap its TensorFlow .pb file for my own trained network file.
The instructions in the readme are pretty straightforward.
You do need to check out and recompile TensorFlow, which takes several hours and 10 GB of space. I did hit the thread issue; the gsed instructions worked for me. You also need to install Homebrew.
I have not looked at Core ML yet, but from what I have read, converting from TensorFlow to Core ML is complicated, and you may lose parts of your model.
It ran quite fast on an iPhone, even using an Inception model instead of MobileNet.

Is there a way to train a TensorFlow model on iOS?

The documentation for porting an already trained TensorFlow Model to iOS is well defined:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios
However, nowhere is it mentioned whether the model:
can be further trained on the device, or
can be created from scratch and trained on the device
Is this possible with TensorFlow?
I am aware of other Swift/C++ libraries that offer on-device training, but I am more interested in this technology.
Starting with Core ML 3 and MLUpdateTask, on-device training is now part of the API: https://developer.apple.com/documentation/coreml/mlupdatetask
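The model itself must also be marked updatable before MLUpdateTask can train it. A hedged sketch using coremltools from Python; the file, layer, and output names are placeholders, and the builder methods follow coremltools' updatable-model examples, so check them against your coremltools version:

```python
# Hedged sketch: mark a Core ML model updatable for on-device training.
import coremltools
from coremltools.models.neural_network import NeuralNetworkBuilder, SgdParams

spec = coremltools.utils.load_spec("Classifier.mlmodel")   # placeholder path
builder = NeuralNetworkBuilder(spec=spec)
builder.make_updatable(["dense_1"])  # only the listed layers train on device
builder.set_categorical_cross_entropy_loss(name="loss", input="labelProbability")
builder.set_sgd_optimizer(SgdParams(lr=0.01, batch=8))
builder.set_epochs(10)
coremltools.utils.save_spec(builder.spec, "UpdatableClassifier.mlmodel")
```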

TensorFlow - saving a model trained on the PC for use in mobile TensorFlow or OpenCV

Currently I'm still working on that age-classification project using a convolutional neural network built in TensorFlow. How, and in what format, do I save the state of my current model, trained on my PC, so that I can use it in TensorFlow on mobile, or even in OpenCV's tiny-dnn (dnn_modern)? I'm not sure whether a checkpoint file would work with OpenCV's tiny-dnn.
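A common approach from that era is to freeze the checkpoint into a single .pb with the weights baked in as constants: TensorFlow Mobile consumes frozen .pb files, and OpenCV's dnn module can read them via readNetFromTensorflow, whereas a raw checkpoint alone will not work in either. A sketch (TF 1.x style, matching checkpoints of that vintage; the checkpoint and output-node names are placeholders):

```python
# Hedged sketch: freeze a TF 1.x checkpoint into a single frozen .pb.
import tensorflow as tf

saver = tf.train.import_meta_graph("model.ckpt.meta")  # placeholder checkpoint
with tf.Session() as sess:
    saver.restore(sess, "model.ckpt")
    # Replace variables with constants so the graph is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output/age_logits"])  # your real output node name
    tf.train.write_graph(frozen, ".", "frozen_age_model.pb", as_text=False)
```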
