I am quite new to machine learning, and I am working on an iOS app for object detection using TensorFlow. I have been using the sample data model provided by the TensorFlow example as a .pb (graph.pb) file, which works just fine for object detection.
But my backend team has given me model2_BN.ckpt as the model file. I have tried to research how to use this file, and I have no clue.
Is it possible to use the .ckpt file on the client side as the data model? If yes, how can I use it in the iOS TensorFlow example?
Please help.
Thanks
This is what my backend developer told me:
The .ckpt is the checkpoint produced by TensorFlow, which includes all the weights/parameters in the model. The .pb file stores the computational graph. To make TensorFlow work we need both the graph and the parameters. There are two ways to get the graph:
(1) use the Python program that builds it in the first place (tensorflowNetworkFunctions.py), or
(2) use a .pb file (which would have to be generated by tensorflowNetworkFunctions.py).
The .ckpt file is where all the intelligence is.
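Based on that explanation, the usual route is to restore the .ckpt into the rebuilt graph and "freeze" it into a single .pb that the iOS example can load. Here is a minimal sketch using a toy graph; the real graph-building code lives in tensorflowNetworkFunctions.py, and the output node name ("output") and file paths are stand-ins you would replace with your own:

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Toy two-op graph standing in for the one built by tensorflowNetworkFunctions.py;
# the real graph must be rebuilt in code, because the .ckpt stores weights only.
x = tf.placeholder(tf.float32, shape=[None, 2], name="input")
w = tf.Variable([[1.0], [2.0]], name="w")
y = tf.matmul(x, w, name="output")

saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    ckpt_path = saver.save(sess, "/tmp/model_demo.ckpt")  # the .ckpt: weights only
    saver.restore(sess, ckpt_path)                        # reload weights into the graph
    # Bake the variables into constants so graph + weights live in one file.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["output"])

with tf.io.gfile.GFile("/tmp/frozen_graph.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```

The resulting frozen_graph.pb is self-contained, which is why the iOS sample apps ask for a .pb rather than a checkpoint.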
I'm trying to figure out the easiest way to run object detection from a TensorFlow model (Inception or MobileNet) in an iOS app.
I have iOS TensorFlow image classification working in my own app and network following this example,
and have TensorFlow image classification and object detection working in Android for my own app and network following this example,
but the iOS example contains only image classification, not object detection. So how can I extend the iOS example code to support object detection, or is there a complete example for this in iOS (preferably Objective-C)?
I did find this and this, but they recompile TensorFlow from source, which seems complex.
I also found TensorFlow Lite,
but again no object detection.
Finally, I found an option of converting the TensorFlow model to Apple Core ML and using Core ML, but this seems very complex, and I could not find a complete example for object detection in Core ML.
You need to train your own ML model. For iOS it will be easier to just use Core ML, and TensorFlow models can be converted to the Core ML format. You can play with this sample and try different models: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Or here:
https://github.com/ytakzk/CoreML-samples
So I ended up following this demo project,
https://github.com/csharpseattle/tensorflowiOS
It provided a working demo app/project, and it was easy to swap its TensorFlow .pb file for my own trained network file.
The instructions in the readme are pretty straightforward.
You do need to check out and recompile TensorFlow, which takes several hours and 10 GB of disk space. I did hit the threading issue and used the gsed instructions, which worked. You also need to install Homebrew.
I have not looked at Core ML yet, but from what I have read, converting from TensorFlow to Core ML is complicated, and you may lose parts of your model.
It ran quite fast on an iPhone, even using an Inception model instead of MobileNet.
I have just trained a model with satisfactory results, and I have the frozen_inference_graph.pb. How would I go about running this on iOS? It was trained on SSD MobileNet V1, if that helps. Optimally I'd like to run it on the GPU (I know the TensorFlow API can't do that on iOS), but it would be great just to have it running on the CPU first.
Support was just announced for importing TensorFlow models into Core ML. This is accomplished using the tfcoreml converter, which should take in your .pb graph and output a Core ML model. From there, you can use this model with Core ML and either take in still images or video frames for processing.
At that point, it's up to you to make sure you're providing the correct input colorspace and size, then extracting and processing the SSD results correctly to get your object classes and bounding boxes.
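The post-processing step is the same regardless of whether the model runs under Core ML or TensorFlow: the SSD graph's standard output tensors ("detection_boxes", "detection_scores", "detection_classes") give normalized boxes that you threshold and scale to pixels. A small sketch with made-up detection values (the threshold of 0.5 and image size are arbitrary choices):

```python
import numpy as np

# Hypothetical raw outputs mirroring the SSD graph's standard output tensors;
# boxes are [ymin, xmin, ymax, xmax] in normalized [0, 1] coordinates.
boxes = np.array([[0.10, 0.20, 0.50, 0.60],
                  [0.00, 0.00, 1.00, 1.00]])
scores = np.array([0.92, 0.31])
classes = np.array([1, 17], dtype=int)
img_w, img_h = 640, 480

keep = scores >= 0.5  # drop low-confidence detections
detections = []
for box, cls, score in zip(boxes[keep], classes[keep], scores[keep]):
    ymin, xmin, ymax, xmax = box
    # Scale normalized coordinates to pixel space: (left, top, right, bottom).
    detections.append((int(cls), float(score),
                       (int(xmin * img_w), int(ymin * img_h),
                        int(xmax * img_w), int(ymax * img_h))))
print(detections)
```

The class indices map into the label file that shipped with your training config; that lookup is model-specific and omitted here.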
I decided to take a dip into ML, and with a lot of trial and error I was able to create a model using TensorFlow's Inception.
To take this a step further, I want to use their Object Detection API. But the input-preparation instructions reference the Pascal VOC 2012 dataset, while I want to do the training on my own dataset.
Does this mean I need to set up my datasets in either the Pascal VOC or Oxford-IIIT format? If yes, how do I go about doing this?
If no (my instinct says this is the case), what are the alternatives for using TF object detection with my own datasets?
Side note: I know that my trained Inception model can't be used for localization, because it's a classifier.
Edit:
For those still looking to achieve this, here is how I went about doing it.
The training jobs in the TensorFlow Object Detection API expect to get TFRecord files with certain fields populated with ground-truth data.
You can either set up your data in the same format as the Pascal VOC or Oxford-IIIT examples, or you can just directly create the TFRecord files ignoring the XML formats.
In the latter case, the create_pet_tf_record.py or create_pascal_tf_record.py scripts are likely to still be useful as a reference for which fields the API expects to see and what format they should take. Currently we do not provide a tool that creates these TFRecord files generally, so you will have to write your own.
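To illustrate the "directly create the TFRecord files" route, here is a minimal sketch of writing one example. The field names follow the convention used by create_pascal_tf_record.py but are not exhaustive, and the image bytes, box coordinates, and label here are all made up:

```python
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _float_list_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

def _int64_list_feature(values):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

# Dummy bytes stand in for a real JPEG; in practice you read the encoded file.
example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": _bytes_feature(b"\xff\xd8fake-jpeg-bytes"),
    "image/format": _bytes_feature(b"jpeg"),
    "image/height": _int64_list_feature([480]),
    "image/width": _int64_list_feature([640]),
    # Box coordinates are normalized to [0, 1]; one box per list entry.
    "image/object/bbox/xmin": _float_list_feature([0.1]),
    "image/object/bbox/xmax": _float_list_feature([0.5]),
    "image/object/bbox/ymin": _float_list_feature([0.2]),
    "image/object/bbox/ymax": _float_list_feature([0.6]),
    "image/object/class/text": _bytes_feature(b"cat"),
    "image/object/class/label": _int64_list_feature([1]),
}))

with tf.io.TFRecordWriter("/tmp/train.record") as writer:
    writer.write(example.SerializeToString())
```

Check the create_pascal_tf_record.py source for the full field list your chosen model config actually reads.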
Besides the TF Object Detection API, you may look at OpenCV Haar Cascades. I started my object detection journey from that point, and if you provide a well-prepared dataset it works pretty well.
There are also many articles and tutorials about creating your own cascades, so it's easy to start.
I used this blog; it helped me a lot.
I am new to TensorFlow. I have downloaded and run the image classifier provided on the TensorFlow website, and I can see the link that downloads the model from the web.
I need to read the .pb file in a human-readable format.
Is this possible? If yes, how?
Thanks!
If you mean the model architecture, then I recommend looking at the graph in TensorBoard, the graph visualisation tool provided with TensorFlow. I'm pretty sure the demo code/tutorial already implements all the code required to import the graph into TensorBoard, so it should just be a case of running TensorBoard and pointing it at the log directory. (This should be defined in the code near the top.)
Then run tensorboard --logdir=/path/to/logs/ and click the Graphs tab. You will then see the various graphs for the different runs.
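If you just want a quick textual dump without TensorBoard, you can also parse the GraphDef protobuf directly and print each node. A minimal sketch, where a toy graph stands in for the downloaded graph.pb and the printed op names are only examples of what you might see:

```python
import tensorflow.compat.v1 as tf

# Build and serialize a toy graph, standing in for the downloaded graph.pb.
g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0, name="a")
    b = tf.add(a, a, name="b")
with open("/tmp/demo_graph.pb", "wb") as f:
    f.write(g.as_graph_def().SerializeToString())

# Parse the binary .pb back into a GraphDef and print every node --
# this is the same information TensorBoard renders graphically.
graph_def = tf.GraphDef()
with open("/tmp/demo_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)
```

For a large model like Inception this prints thousands of nodes, so TensorBoard's collapsible view is usually the friendlier option.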
Alternatively, there are a couple of papers on Inception that describe the model and the theory behind it. One is available here.
Hope that helps you understand Inception a bit more.
Can we customize the Named Entity Recognition (NER) model in Azure ML Studio with a separate training dataset? What I want to do is find non-English names in a text. (The training dataset includes the set of names that is going to be used for training.)
Unfortunately, this module's ability to perform NER with a custom set of entities is planned for the future, but not currently available.
If you're familiar with Python and willing to put in the extra footwork, you might consider using the Natural Language Toolkit (NLTK). Sujit Pal has a nice blog post and sample code describing the creation of a custom NER with that package. You may be able to train an NLTK NER model and apply it to your data of interest from within an Execute Python Script module on Azure ML.
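Sujit Pal's post trains a proper classifier-based chunker, which is too long to reproduce here. As a deliberately simplified stand-in (a plain gazetteer lookup, not NLTK's actual API), this sketch shows the token-in/tag-out shape such a custom recognizer would have inside an Execute Python Script module; the names and sentence are made up:

```python
# Simplistic gazetteer matcher -- a stand-in for a trained NLTK NER model,
# just to illustrate the input/output shape. All names here are hypothetical.
KNOWN_NAMES = {"Saman", "Nimal", "Kasun"}  # hypothetical training name set

def tag_names(text):
    """Return (token, tag) pairs, tagging tokens found in the name list."""
    return [(tok, "PERSON" if tok.strip(".,") in KNOWN_NAMES else "O")
            for tok in text.split()]

print(tag_names("Saman met Nimal in Colombo."))
```

A real trained model generalizes beyond the exact training list (using context, affixes, capitalization, and so on), which is the point of going through NLTK rather than a lookup table.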