Converting a Cuda-Convnet checkpoint to a Caffe model binary protobuf - machine-learning

I have an old checkpoint (model specification) from a Cuda Convnet model trained by someone else a couple of years ago, where the training data is no longer available. I would like to find a way to convert this exact model to a Caffe model file. Is there a tool (currently available and supported) which does this? I would still be interested even if the conversion only targets another ML framework which can export Caffe models (e.g. Theano, Torch7, etc.) as a bridge.

Related

How to run Tensorflow object detection on iOS

I'm trying to figure out the easiest way to run object detection from a Tensorflow model (Inception or mobilenet) in an iOS app.
I have iOS Tensorflow image classification working in my own app and network, following this example,
and have Tensorflow image classification and object detection working in Android for my own app and network, following this example.
However, the iOS example only does image classification, not object detection. How can I extend the iOS example code to support object detection, or is there a complete iOS example for this? (preferably Objective-C)
I did find this and this, but they require recompiling Tensorflow from source, which seems complex.
I also found Tensorflow Lite,
but again no object detection.
I also found the option of converting a Tensorflow model to Apple Core ML, but this seems very complex, and I could not find a complete example for object detection in Core ML.
You need to train your own ML model. For iOS it will be easier to just use Core ML. Also, Tensorflow models can be exported to Core ML format. You can play with this sample and try different models: https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
Or here:
https://github.com/ytakzk/CoreML-samples
So I ended up following this demo project,
https://github.com/csharpseattle/tensorflowiOS
It provided a working demo app/project, and was easy to switch its Tensorflow pb file for my own trained network file.
The instructions in the readme are pretty straightforward.
You do need to check out and recompile Tensorflow, which takes several hours and 10 GB of space. I did hit the thread issue; the gsed instructions worked for me. You also need to install Homebrew.
I have not looked at Core ML yet, but from what I have read, converting from Tensorflow to Core ML is complicated, and you may lose parts of your model.
It ran quite fast on an iPhone, even using an Inception model instead of Mobilenet.

How do I run a Tensorflow Object Detection API model in iOS?

I have just trained a model with satisfactory results and I have the frozen_inference_graph.pb. How would I go about running this on iOS? It was trained on SSD Mobilenet V1, if that helps. Ideally I'd like to run it using the GPU (I know the Tensorflow API can't do that on iOS), but it would be great to just have it on CPU first.
Support was just announced for importing TensorFlow models into Core ML. This is accomplished using the tfcoreml converter, which should take in your .pb graph and output a Core ML model. From there, you can use this model with Core ML and either take in still images or video frames for processing.
At that point, it's up to you to make sure you're providing the correct input colorspace and size, then extracting and processing the SSD results correctly to get your object classes and bounding boxes.
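The conversion call itself is short. A minimal sketch using the tfcoreml converter mentioned above; the tensor names and input shape here are placeholders you must replace after inspecting your own .pb graph:

```python
def convert_ssd_to_coreml(pb_path, mlmodel_path):
    """Convert a frozen TensorFlow graph to a Core ML model via tfcoreml.

    The tensor names and input shape below are hypothetical -- inspect
    your own frozen_inference_graph.pb to find the real ones.
    """
    import tfcoreml  # deferred import: requires the tfcoreml package

    return tfcoreml.convert(
        tf_model_path=pb_path,
        mlmodel_path=mlmodel_path,
        # Placeholder output tensor name -- replace with your graph's.
        output_feature_names=["concat:0"],
        # Placeholder input tensor name and shape (batch, H, W, channels).
        input_name_shape_dict={"image_tensor:0": [1, 300, 300, 3]},
    )
```

From there the resulting .mlmodel can be dropped into Xcode, and the SSD post-processing (decoding boxes and scores) still has to be done in your app code.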

Tensorflow - saving the model generated in the PC to be used in the mobile version of Tensorflow or OpenCV

Currently I'm still working on that age classification project using a Convolutional Neural Network built in Tensorflow. How, and in what format, do I save the state of my current model, trained on my PC, so I can use it in Tensorflow on my mobile app, or even OpenCV's tiny-dnn (dnn_modern)? I'm not sure the checkpoint file would work in OpenCV's tiny-dnn.
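For mobile Tensorflow the usual format is a single frozen .pb file (like the frozen_inference_graph.pb discussed in the other questions) rather than raw checkpoint files. A minimal sketch of freezing a TF 1.x checkpoint; the output node names are placeholders that depend on your network:

```python
def freeze_checkpoint(ckpt_path, output_node_names, pb_path):
    """Freeze a TF 1.x checkpoint into a single .pb file for mobile use.

    output_node_names is a list like ["output/Softmax"] -- the real names
    depend on your network; this sketch only wires up the standard calls.
    """
    import tensorflow.compat.v1 as tf  # deferred heavy import
    tf.disable_eager_execution()

    with tf.Session() as sess:
        # Restore the graph structure and the trained variable values.
        saver = tf.train.import_meta_graph(ckpt_path + ".meta")
        saver.restore(sess, ckpt_path)
        # Bake variables into constants so a single file holds everything.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names)
    with tf.gfile.GFile(pb_path, "wb") as f:
        f.write(frozen.SerializeToString())
```

Whether the resulting .pb can be imported by OpenCV's tiny-dnn is a separate question; the frozen graph is only guaranteed to work with Tensorflow's own mobile runtimes.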

Binary Classification Model training in CoreML

I have just started exploring CoreML and was wondering if there is a way to train a binary classification model with it.
Please point me to any references or examples, as I am an ML noob.
Core ML doesn't offer any APIs for building or training models. It works with models you've already trained elsewhere (Keras, Caffe, etc.) to perform whatever prediction or classification task you built the model for. See Apple's Core ML docs for info on how to convert a model for use with Core ML.
Apple does support building and training models as of macOS 10.14 (Mojave), via the Create ML framework. In Xcode you can train models in various ways.
I don't believe it currently supports a binary classifier, but if you can build an image set of X and NOT X you could emulate one.
Apple's Docs: https://developer.apple.com/documentation/create_ml/creating_an_image_classifier_model
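Create ML's image classifier reads class labels from folder names, so emulating a binary classifier just means arranging your images into two folders. A small stdlib-only sketch; the `X`/`NotX` folder names are illustrative, any two labels work:

```python
import shutil
from pathlib import Path


def build_binary_image_set(positives, negatives, out_dir):
    """Copy images into the folder-per-label layout Create ML trains from.

    positives and negatives are iterables of image paths; out_dir ends up
    containing out_dir/X/... and out_dir/NotX/... subfolders.
    """
    out = Path(out_dir)
    for label, paths in (("X", positives), ("NotX", negatives)):
        target = out / label
        target.mkdir(parents=True, exist_ok=True)
        for p in paths:
            shutil.copy(str(p), str(target / Path(p).name))
    return out
```

Pointing Create ML's image classifier at the resulting directory then trains a two-class model whose "NOT X" prediction stands in for the negative case.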

Is it possible to use Caffe Only for classification without any training?

Some users might see this as an opinion-based question, but if you look closely, I am trying to explore the use of Caffe as a purely testing (inference) platform, as opposed to its currently popular use as a training platform.
Background:
I have installed all dependencies using Jetpack 2.0 on Nvidia TK1.
I have installed caffe and its dependencies successfully.
The MNIST example is working fine.
Task:
I have been given a convnet with all standard layers. (Not an open-source model)
The network weights and bias values etc are available after training. The training has not been done via caffe. (Pretrained Network)
The weights and bias are all in the form of MATLAB matrices. (Actually in a .txt file but I can easily write code to get them to be matrices)
I CANNOT train this network with caffe and must use the given weights and bias values ONLY for classification.
I have my own dataset in the form of 32x32 pixel images.
Issue:
In all tutorials, details are given on how to deploy and train a network, and then use the generated .proto and .caffemodel files to validate and classify. Is it possible to implement this network on caffe and directly use my weights/bias and training set to classify images? What are the available options here? I am a caffe-virgin so be kind. Thank you for the help!
The only issue here is:
How to initialize caffe net from text file weights?
I assume you have a 'deploy.prototxt' describing the net's architecture (layer types, connectivity, filter sizes etc.). The only issue remaining is how to set the internal weights of caffe.Net to pre-defined values saved as text files.
You can get access to caffe.Net internals, see net surgery tutorial on how this can be done in python.
Once you are able to set the weights according to your text file, you can net.save(...) the new weights into a binary caffemodel file to be used from now on. You do not have to train the net if you already have trained weights, and you can use it for generating predictions ("test").
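The steps above can be sketched end to end: parse each layer's text file into a matrix, copy it into the corresponding `net.params` blobs, then `net.save(...)`. The one-row-of-floats-per-line file layout and the layer-name mapping are assumptions about your .txt dump; the blob indexing ([0] weights, [1] bias) follows the net surgery convention:

```python
def load_text_matrix(path):
    """Parse a text file of whitespace-separated floats, one row per line."""
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]


def fill_caffe_net(prototxt, weight_files, out_caffemodel):
    """Copy text-file weights into a caffe.Net and save a .caffemodel.

    weight_files maps a layer name to a (weights_path, bias_path) pair;
    the layer names must match those in your deploy.prototxt.
    """
    import numpy as np
    import caffe  # deferred import: requires a working Caffe install

    net = caffe.Net(prototxt, caffe.TEST)
    for layer, (w_path, b_path) in weight_files.items():
        w = np.array(load_text_matrix(w_path), dtype=np.float32)
        b = np.array(load_text_matrix(b_path), dtype=np.float32).ravel()
        # net.params[layer][0] is the weight blob, [1] the bias blob.
        net.params[layer][0].data[...] = w.reshape(
            net.params[layer][0].data.shape)
        net.params[layer][1].data[...] = b
    net.save(out_caffemodel)
```

After this runs once, the saved .caffemodel plus the deploy.prototxt are all you need for classification; the text files are no longer involved.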
