Is there any tutorial for object localization and detection with the Inception V3 model on TensorFlow?
Thank you.
TensorFlow recently released an Object Detection API.
I doubt you can use your Inception model with it directly: the model you generated does object classification, while what you need is object localization.
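If you do try the Object Detection API, the usual starting point is a pre-trained detector from its model zoo rather than a classification checkpoint. As a rough, minimal sketch (assuming a TF 1.x frozen graph such as frozen_inference_graph.pb exported by the API, which uses these standard tensor names; the file path is an assumption):

```python
import numpy as np
import tensorflow as tf  # TF 1.x

# Load a frozen detection graph exported by the Object Detection API.
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # A dummy image; in practice, load and resize your own.
    image = np.zeros((1, 300, 300, 3), dtype=np.uint8)
    boxes, scores, classes = sess.run(
        ["detection_boxes:0", "detection_scores:0", "detection_classes:0"],
        feed_dict={"image_tensor:0": image})
    print(boxes.shape, scores.shape, classes.shape)
```

The output tensors give you bounding boxes, confidence scores, and class IDs per detection, which is the localization piece a classification model lacks.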
For training a face detection model, is it possible to take an object detection model and train it on a face detection dataset (with only one class)? Are there any issues with having only 1 class for the object detection model?
The repository I am planning on using is: https://github.com/qfgaohao/pytorch-ssd
What you are describing is transfer learning. Without any further information about your problem, I think it's best to try both: train from scratch and fine-tune the pre-trained model, then see which one performs better (see the sketch below).
Are there any issues with having only 1 class for the object detection model?
I am working on a medical project that focuses on detecting one class (lesion), so it's not something unusual in the field.
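If you go the transfer-learning route, the common pattern is to load a detector pre-trained on COCO and replace its classification head with one sized for your classes (here, face plus background). A minimal sketch using torchvision's Faster R-CNN rather than the pytorch-ssd repo above, just to illustrate the idea:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# 1 foreground class (face) + 1 background class
num_classes = 2

# Load a detector pre-trained on COCO.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# Replace the box predictor head with one sized for our two classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# The model is now ready to fine-tune on the face-detection dataset.
```

The same head-replacement idea applies to SSD-style models; only the layer you swap out differs.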
Hi, I have trained an object detection model using the TensorFlow 1.14 Object Detection API, and it is performing well. However, I want to reduce/optimize the model's parameters to make it lighter. How can I apply pruning to a trained model?
Did you check the pruning guide on the TensorFlow website?
It has concrete examples of how to prune a model and benchmark the size and performance improvements.
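For reference, the guide's approach uses the tensorflow-model-optimization package. A minimal sketch for a Keras model follows; note that the TF 1.14 Object Detection API does not produce a Keras model directly, so treat this as an illustration of the workflow, with the model and schedule values as placeholder assumptions:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Placeholder model; substitute your own.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Wrap the model so low-magnitude weights are zeroed out during training.
pruning_params = {
    "pruning_schedule": tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.5,  # placeholder targets
        begin_step=0, end_step=1000),
}
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, **pruning_params)

pruned.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# The UpdatePruningStep callback is required during training:
# pruned.fit(x_train, y_train,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# Strip the pruning wrappers before export; the sparse weights then
# compress well (e.g., when zipped or converted to TFLite).
final_model = tfmot.sparsity.keras.strip_pruning(pruned)
```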
I'm trying to figure out the easiest way to run object detection from a TensorFlow model (Inception or MobileNet) in an iOS app.
I have iOS TensorFlow image classification working in my own app and network following this example,
and I have TensorFlow image classification and object detection working on Android for my own app and network following this example,
but the iOS example does not contain object detection, only image classification. How can I extend the iOS example code to support object detection, or is there a complete example for this on iOS (preferably Objective-C)?
I did find this and this, but they recompile TensorFlow from source, which seems complex.
I also found TensorFlow Lite,
but again no object detection.
Finally, I found the option of converting a TensorFlow model to Apple Core ML and using Core ML, but this seems very complex, and I could not find a complete example for object detection in Core ML.
You need to train your own ML model. For iOS it will be easier to just use Core ML. Also, TensorFlow models can be exported in Core ML format. You can play with this sample and try different models. https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Or here:
https://github.com/ytakzk/CoreML-samples
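As a rough sketch of the export path mentioned above: newer versions of coremltools (the unified converter in coremltools 4 and later) can convert a tf.keras model directly. MobileNetV2 here is just a stand-in for your own network:

```python
import coremltools as ct
import tensorflow as tf

# Any tf.keras model works; MobileNetV2 is a stand-in for your own network.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Convert to Core ML, declaring the input as an image so the Vision
# framework can feed it camera frames directly.
mlmodel = ct.convert(model, inputs=[ct.ImageType(shape=(1, 224, 224, 3))])
mlmodel.save("MobileNetV2.mlmodel")
```

The resulting .mlmodel file can be dropped into an Xcode project and used from the Vision sample linked above.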
So I ended up following this demo project,
https://github.com/csharpseattle/tensorflowiOS
It provided a working demo app/project, and it was easy to swap its TensorFlow .pb file for my own trained network file.
The instructions in the README are pretty straightforward.
You do need to check out and recompile TensorFlow, which takes several hours and about 10 GB of disk space. I did hit the thread issue; the gsed instructions worked around it. You also need to install Homebrew.
I have not looked at Core ML yet, but from what I have read, converting from TensorFlow to Core ML is complicated, and you may lose parts of your model.
It ran quite fast on an iPhone, even using an Inception model instead of MobileNet.
I have an iOS app written in Swift. It collects user information and saves it in a database (Firebase). I want to use this data to dynamically update a machine-learning model as new data arrives, so the prediction improves every time.
I know that I can create my trained model separately (e.g., using TensorFlow) and then use Core ML to import it into my app, but how can I keep the model updating as new data comes in?
Thanks for the help!!
It depends on the model.
You cannot use Core ML for this, as it does not support training. The Metal Performance Shaders framework in iOS 11.3 now supports training for neural-network models, and you can always write your own training code.
If the model is something basic like logistic regression, you can train it on the device and it won't take that long. If it's a deep-learning model with many layers and you're training it on a lot of data, it might not be feasible to train on the device.
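To see why a basic model is cheap to update, here is a minimal sketch (in Python for clarity; on the device you would write the same arithmetic in Swift) of a single stochastic-gradient step for logistic regression, which is all an incremental update needs per new data point:

```python
import numpy as np

def sgd_step(w, b, x, y, lr=0.1):
    """One logistic-regression update from a single (x, y) example."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # predicted probability
    grad = p - y                                   # gradient of the log loss
    return w - lr * grad * x, b - lr * grad

# Each new record arriving from the database triggers one cheap update.
w, b = np.zeros(3), 0.0
w, b = sgd_step(w, b, np.array([1.0, 0.5, -0.2]), 1.0)
```

Each update is a handful of multiplications per feature, so running it on every new Firebase record is trivial; a deep network would instead need many passes over the full dataset.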
Is there a way to test out the pretrained image classification models released by Google, called MobileNets, using only the Keras API?
Models like ResNet50 and InceptionV3 are already available as Keras Applications, but I couldn't find documentation on using custom TensorFlow models with Keras. Thanks in advance.
AFAIK, there is no direct access to it via the Keras API. However, you can find a good example of the interaction between Keras and MobileNets here.
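That said, newer Keras releases (2.0.6 and later, if I recall correctly) do ship MobileNet under keras.applications, so depending on your version you may be able to load it directly. A minimal sketch, assuming such a version and a placeholder image file:

```python
import numpy as np
from keras.applications.mobilenet import (MobileNet, preprocess_input,
                                          decode_predictions)
from keras.preprocessing import image

# Load MobileNet with ImageNet weights, just like ResNet50/InceptionV3.
model = MobileNet(weights="imagenet")

# "elephant.jpg" is a placeholder; use any image you have on disk.
img = image.load_img("elephant.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=3))
```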