Training a Detectron2 model on an Intel Neural Compute Stick 2 - machine-learning

Does anyone know how to train the faster_rcnn_R_50_FPN_3x model, a PyTorch-based model from the Detectron2 model zoo, on an Intel Neural Compute Stick 2 device?
I've already installed the OpenVINO toolkit and successfully run one of its demos on my device.
I've trained faster_rcnn_R_50_FPN_3x on a custom dataset with Detectron2 on a Google Colab GPU, and now I have to train it on the Intel Neural Compute Stick 2 VPU to compare the AP results.

You cannot train models on the Intel Neural Compute Stick 2 (NCS2). The NCS2 is meant to be used with the OpenVINO™ toolkit, which consists of two main components: the Model Optimizer and the Inference Engine. The Inference Engine's MYRIAD plugin was developed for inference of neural networks on the Intel® Neural Compute Stick 2. Have a look at the OpenVINO™ Toolkit workflow.
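Sketched as shell commands, the usual path is: train elsewhere (as you already did on Colab), convert, then run inference on the stick. The file names here are placeholders, and the export step is an assumption — a Detectron2 Faster R-CNN must first be exported to a format the Model Optimizer accepts (e.g. ONNX), which can be non-trivial for that architecture:

```shell
# 1. Export the trained Detectron2 model to ONNX on the training machine.
#    (Detectron2 ships export tooling; the exact invocation depends on your version.)

# 2. Convert to OpenVINO IR with the Model Optimizer.
#    The NCS2's Myriad X VPU requires FP16 precision.
mo.py --input_model faster_rcnn.onnx --data_type FP16 --output_dir ir/

# 3. Run inference on the stick through the MYRIAD plugin,
#    e.g. with OpenVINO's bundled benchmark_app:
benchmark_app -m ir/faster_rcnn.xml -d MYRIAD
```

The AP comparison would then be between the original PyTorch model on GPU and the converted FP16 model running inference on the NCS2 — not between two trainings.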

Related

How can we Stack XGBOOST model with dense neural networks

I am a newbie in ML, and I was trying to solve a multi-class classification problem. I used XGBoost to reduce the log loss, and I also tried a dense neural network, which also seemed to work well. Is there a way I can stack these two models so that I can further reduce the log loss?
You can do it with Apple's coremltools.
Take your XGBoost model and convert it to an MLModel using the converter.
Then create a pipeline model combining that model with any neural network.
I'm sure there are other tools that support pipelines, but you will need to convert the XGBoost model to some other format first.
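Note that a Core ML pipeline is not strictly required just to combine predictions: a simple, framework-agnostic way to stack two models is to blend their predicted class probabilities. A minimal sketch (the probability vectors and the 0.5 weight are hypothetical; in practice you would tune the weight, or train a meta-learner, on a hold-out set):

```python
import math

def blend_probabilities(p_a, p_b, weight=0.5):
    """Weighted average of two models' class-probability vectors,
    renormalized to sum to 1 (a simple form of blending/stacking)."""
    blended = [weight * a + (1 - weight) * b for a, b in zip(p_a, p_b)]
    total = sum(blended)
    return [p / total for p in blended]

def log_loss(true_idx, probs, eps=1e-15):
    """Multi-class log loss for a single example."""
    p = min(max(probs[true_idx], eps), 1 - eps)
    return -math.log(p)

# Hypothetical per-class probabilities from the two trained models:
p_xgb = [0.70, 0.20, 0.10]   # XGBoost prediction
p_nn  = [0.60, 0.30, 0.10]   # dense-network prediction

blended = blend_probabilities(p_xgb, p_nn, weight=0.5)
print(blended)               # ≈ [0.65, 0.25, 0.10]
print(log_loss(0, blended))  # log loss if class 0 is the true label
```

Blending often lowers log loss when the two models make partly uncorrelated errors; a full stacking setup would instead feed both models' out-of-fold predictions into a second-level model.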

Neural Network Classifier VS Neural Network in Machine Learning Model Type

Is there any difference between the Neural Network Classifier and Neural Network machine learning model types used in iOS? Whenever I create a machine learning iOS app with these two model types, I use VNCoreMLFeatureValueObservation for Neural Network models and VNClassificationObservation for Neural Network Classifier models. Why?

Difference between SSD and Mobilenet

I am confused about the difference between SSD and MobileNet. As far as I know, both of them are neural networks. SSD provides localization while MobileNet provides classification, so the combination of SSD and MobileNet can produce object detection. The image is taken from the SSD paper. The default classification network of SSD is VGG-16, so for SSD MobileNet, VGG-16 is replaced with MobileNet. Are my statements correct?
Where can I get more information about SSD MobileNet, especially the version available in the TensorFlow model zoo?
SSD (Single Shot Detector) is a NN architecture designed for detection purposes, which means localization (bounding boxes) and classification at once.
MobileNet (https://arxiv.org/abs/1704.04861) is an efficient architecture introduced by Google (using depthwise and pointwise convolutions). It can be used for classification purposes, or as a feature extractor for other tasks (e.g. detection).
In the SSD paper they present the use of a VGG network as the feature extractor for detection; the feature maps are taken from several different layers (resolutions) and fed to their corresponding classification and localization layers (a classification head and a regression head).
So actually, one can decide to use a different kind of feature extractor, as in MobileNet-SSD, which means you use the SSD architecture while your feature extractor is the MobileNet architecture.
By reading the SSD paper and the MobileNet paper you will be able to understand the models in the model zoo.
There are two kinds of networks involved here: the base network and the detection network. MobileNet, VGGNet, and LeNet are base networks.
The base network provides high-level features for classification or detection. If you use a fully connected layer at the end of these networks, you have a classifier. But you can remove the fully connected layer and replace it with a detection network, like SSD, Faster R-CNN, and so on. In general, SSD makes use of the last convolutional layers of the base network for the detection task. MobileNet, just like other base networks, uses convolutions to produce high-level features.
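The base-network / detection-head split described above can be sketched schematically. This is a toy pure-Python illustration (no real layers; all class names are hypothetical), showing how the same backbone pairs with either a classifier head or an SSD-style detection head:

```python
class MobileNetBackbone:
    """Toy stand-in for a base network: maps an image to feature maps."""
    def extract_features(self, image):
        # A real MobileNet would apply depthwise/pointwise convolutions;
        # here we just return named feature maps at two scales.
        return {"stride16": "fmap@1/16", "stride32": "fmap@1/32"}

class ClassifierHead:
    """Fully connected head: backbone + this head = an image classifier."""
    def predict(self, features):
        return "class label"

class SSDHead:
    """Detection head: consumes multi-scale feature maps and emits
    (box, class) pairs from its regression and classification branches."""
    def predict(self, features):
        return [("box", "class") for _ in features]

def build_model(backbone, head):
    def run(image):
        return head.predict(backbone.extract_features(image))
    return run

classifier = build_model(MobileNetBackbone(), ClassifierHead())
detector   = build_model(MobileNetBackbone(), SSDHead())  # "SSD-MobileNet"
print(detector("img"))  # one (box, class) pair per feature scale
```

Swapping `MobileNetBackbone` for a VGG-16 stand-in would give the original SSD configuration; the detection head is unchanged, which is exactly why the model zoo can offer SSD with different feature extractors.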

Binary Classification Model training in CoreML

I have just started exploring Core ML and was wondering if there is a way to train a binary classification model with it.
Please point me to any references or examples, as I am an ML noob.
Core ML doesn't offer any APIs for building or training models. It works with models you've already trained elsewhere (Keras, Caffe, etc.) to perform whatever prediction or classification task you built the model for. See Apple's Core ML docs for information on how to convert a model for use with Core ML.
As of macOS 10.14 (Mojave), Apple does support building and training models, via Create ML. In Xcode you can train models in various ways.
I don't believe it currently supports a binary classifier directly, but if you can build an image set of X and NOT-X you can emulate one.
Apple's docs: https://developer.apple.com/documentation/create_ml/creating_an_image_classifier_model

Is it possible to use Caffe Only for classification without any training?

Some users might see this as an opinion-based question, but if you look closely, I am trying to explore the use of Caffe purely as an inference (testing) platform, as opposed to its currently popular use as a training platform.
Background:
I have installed all dependencies using Jetpack 2.0 on Nvidia TK1.
I have installed caffe and its dependencies successfully.
The MNIST example is working fine.
Task:
I have been given a convnet with all standard layers. (Not an open-source model.)
The network weights, bias values, etc. are available after training. The training was not done with Caffe (it is a pretrained network).
The weights and biases are all in the form of MATLAB matrices. (Actually in a .txt file, but I can easily write code to turn them into matrices.)
I CANNOT train this network with Caffe and must use the given weights and bias values ONLY for classification.
I have my own dataset in the form of 32x32-pixel images.
Issue:
In all tutorials, details are given on how to define and train a network, and then use the generated .prototxt and .caffemodel files to validate and classify. Is it possible to implement this network in Caffe and directly use my weights/biases and dataset to classify images? What are the available options here? I am a Caffe virgin, so be kind. Thank you for the help!
The only issue here is:
How to initialize caffe net from text file weights?
I assume you have a 'deploy.prototxt' describing the net's architecture (layer types, connectivity, filter sizes, etc.). The only remaining issue is how to set the internal weights of caffe.Net to pre-defined values saved as text files.
You can get access to caffe.Net internals; see the net surgery tutorial on how this can be done in Python.
Once you are able to set the weights according to your text files, you can net.save(...) the new weights into a binary .caffemodel file to be used from then on. You do not have to train the net if you already have trained weights, and you can use it for generating predictions ("test").
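A sketch of that net-surgery step, assuming one text file of whitespace-separated values per layer. Only the text parsing runs standalone here; the Caffe part is shown in comments because layer names ('conv1') and file names are made up for illustration:

```python
import os
import tempfile

def load_txt_matrix(path):
    """Parse a whitespace-separated text file into a list of float rows."""
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]

# Quick self-check with a throwaway file:
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("0.1 0.2\n0.3 0.4\n")
tmp.close()
print(load_txt_matrix(tmp.name))  # [[0.1, 0.2], [0.3, 0.4]]
os.unlink(tmp.name)

# Hypothetical net-surgery usage against the trained-elsewhere network:
#
#   import caffe
#   import numpy as np
#   net = caffe.Net('deploy.prototxt', caffe.TEST)
#   w = np.array(load_txt_matrix('conv1_weights.txt'))
#   net.params['conv1'][0].data[...] = w.reshape(net.params['conv1'][0].data.shape)
#   net.params['conv1'][1].data[...] = np.array(load_txt_matrix('conv1_bias.txt')).ravel()
#   net.save('pretrained.caffemodel')  # binary weights, reusable from now on
```

The reshape is the step to be careful with: Caffe stores convolution weights as (out_channels, in_channels, h, w), so you must confirm the element order your MATLAB export used before trusting the predictions.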
