Caffemodel and prototxt - machine-learning

ageNet = dnn.readNet("age_net.caffemodel", "age_deploy.prototxt")
maskNet = load_model("mask_detector.model")
I am curious about the purpose of the caffemodel and prototxt files. I have gone through many materials, but I couldn't grasp it.
Kindly give me an analogy to understand the above lines and how they work.

age_deploy.prototxt contains the details of the model architecture for the age-detection model: which layers exist and how they are wired together. age_net.caffemodel holds the pre-trained weights learned for that architecture. As an analogy, the prototxt is the blueprint of a house, while the caffemodel is everything the builders actually put inside it; you need both files to get a working network.
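To make the division of labour concrete, here is a minimal sketch using OpenCV's dnn module, assuming age_net.caffemodel, age_deploy.prototxt, and a face image face.jpg are on disk (the input size and mean values below are the ones commonly used with this particular age model, so treat them as assumptions):

import cv2

# The prototxt is the blueprint (layer definitions); the caffemodel
# holds the learned weights that fill that blueprint in.
age_net = cv2.dnn.readNet("age_net.caffemodel", "age_deploy.prototxt")

image = cv2.imread("face.jpg")  # hypothetical input image
# Pack the image into the 4-D blob the Caffe model expects
blob = cv2.dnn.blobFromImage(image, scalefactor=1.0, size=(227, 227),
                             mean=(78.426, 87.769, 114.896))
age_net.setInput(blob)
predictions = age_net.forward()      # one score per age bucket
print(predictions.argmax())          # index of the most likely age range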

Evaluation of generative models like variational autoencoder

I hope everyone is doing well. I need some help with generative models.
I am working on a project whose main task is to build a binary classification model. The dataset contains 300,000 samples and 100 features, and there is an imbalance between the two classes: the majority class is much larger than the minority class.
To handle this, I am using a VAE (variational autoencoder). I train the VAE on the minority class, then use the decoder part of the VAE to generate new (synthetic) samples similar to the minority class, and concatenate this new data with the training set in order to obtain a balanced training set.
My question is: is there any way to evaluate generative models like VAEs, i.e. a way to know whether the generated data is similar to the real data?
I have read that there are metrics for evaluating generated data, such as the Inception Score and the Fréchet Inception Distance, but I have only seen them used on image data.
Can I use them on my dataset too?
Thanks in advance.
Your data is not images, as you say there are 100 features, so image-specific metrics such as FID do not apply directly. What you can do is check the similarity between the synthesized samples and the original samples (the ones belonging to the minority class), and keep only those above a certain similarity threshold. The cosine similarity index would be useful for this problem.
It would also be very informative to check a scatter plot of the synthesized samples together with the original ones, to see whether they are close to each other. t-SNE would be useful at this point.
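A minimal sketch of both checks, assuming real_minority and generated are NumPy arrays with your 100 features (the placeholder data and the 0.8 threshold are arbitrary illustrations, not recommended values):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.manifold import TSNE

real_minority = np.random.randn(200, 100)   # placeholder for real minority samples
generated = np.random.randn(500, 100)       # placeholder for VAE-decoder output

# Keep a generated sample only if it is close enough to some real sample
sim = cosine_similarity(generated, real_minority)   # shape (n_generated, n_real)
filtered = generated[sim.max(axis=1) > 0.8]         # threshold is a free choice

# Visual check: embed real and synthetic points together with t-SNE
combined = np.vstack([real_minority, filtered])
embedded = TSNE(n_components=2, random_state=0).fit_transform(combined)
n_real = len(real_minority)
plt.scatter(embedded[:n_real, 0], embedded[:n_real, 1], label="real")
plt.scatter(embedded[n_real:, 0], embedded[n_real:, 1], label="generated")
plt.legend()
plt.show()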

Image Classification using a Single-Class Dataset with Transfer Learning

I only have around 1000 images of vehicles. I need to train a model that can identify whether an image is a vehicle or not. I do not have a dataset for not-vehicle, as it could be anything besides a vehicle.
I guess the best method for this would be transfer learning, so I am trying to train on a pre-trained VGG19 model. But I am still unaware of how to train a model with just vehicle images and no non-vehicle images, so I am not able to set up the classification.
I am new to ML overall; any solution based on a practical implementation will be highly appreciated.
You are right about the transfer learning approach. Have a look at this article; it is exactly about going from multi-class to binary classification with transfer learning: https://medium.com/@mandygu/seefood-creating-a-binary-classifier-using-transfer-learning-da751db7cf9c
You can use a pretrained model as a feature extractor and take its output. You might need to apply dimensionality reduction, e.g. PCA, to get a more manageable input size. After that you can train a novelty-detection model to identify whether a new input looks different from your training set; a sketch of this pipeline follows below.
Refer to this example: https://github.com/J-Yash/Hotdog-Not-Hotdog
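Here is a minimal sketch of that pipeline, assuming your ~1000 vehicle images are loaded as an (n, 224, 224, 3) array; the placeholder random data, PCA size, and nu value are illustrative assumptions:

import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.svm import OneClassSVM

# Pretrained backbone as a feature extractor (no classification head)
backbone = VGG19(weights="imagenet", include_top=False, pooling="avg")

vehicle_images = np.random.rand(100, 224, 224, 3) * 255   # placeholder images
features = backbone.predict(preprocess_input(vehicle_images))   # (n, 512)

# Reduce dimensionality so the novelty detector gets a manageable input
pca = PCA(n_components=50).fit(features)

# One-class SVM learns the "vehicle" region of feature space
detector = OneClassSVM(nu=0.05).fit(pca.transform(features))

# At test time, +1 means "looks like a vehicle", -1 means novelty
test = pca.transform(features[:5])
print(detector.predict(test))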
Hope this helps.
This is a binary classification problem: whether the input is a vehicle or not.
If you are new to ML, I would suggest starting with basic binary classifiers such as Logistic Regression and Support Vector Machines before jumping to Convolutional Neural Networks (CNNs).
I am providing some links to binary-classification implementations using different algorithms. I hope they help.
Logistic Regression: https://github.com/JB1984/Logistic-Regression-Cat-Classifier
SVM: https://github.com/Witsung/SVM-Fruit-Image-Classifier
CNN: https://github.com/A-Jatin/CNN-implementation-for-binary-image-classification

Machine learning algorithm that says which training data caused the current decision

I need a learning model that, when we test it with a data sample, says which training data caused the answer.
Is there anything that does this?
(I already know KNN will do this.)
Thanks.
Look into generative models.
"It asks the question: based on generation assumptions, which category is most likely to generate this signal?"
This is not a very well-worded question:
"Which train data cause the answer? I already know KNN will do this."
KNN will tell you what the K nearest neighbors are, but it's not just those K training samples that cause the answer; all the other training samples contribute too, by being farther away.
The objective of machine learning is to generalize from the whole of the training dataset, so all samples in the training dataset (after outlier filtering and dataset-reduction steps) cause the answer.
If your question is "Which class of machine-learning algorithms makes a decision by comparing a new instance to instances seen in the training data, and can list the training examples which most strongly informed the decision?", the answer is instance-based learning: https://en.wikipedia.org/wiki/Instance-based_learning
(e.g. KNN, kernel machines, RBF networks). A small example follows.
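For instance, scikit-learn's KNN exposes exactly this through kneighbors, which returns the indices of the training samples that most strongly informed a prediction (the toy data here is made up for illustration):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])  # toy data
y_train = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

x_test = np.array([[5.2, 5.1]])
print(knn.predict(x_test))            # predicted class: 1

# Which training rows drove that prediction?
distances, indices = knn.kneighbors(x_test)
print(indices[0])                     # [3 5 4] -> rows of X_train
print(X_train[indices[0]])            # the actual neighboring samples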

Can the Inception model be used for object counting in an image?

I have already gone through the image-classification part of the Inception model, but I need to count the objects in an image.
Considering the flowers dataset, one image can have multiple instances of a flower, so how can I get that count?
What you describe is known to the research community as instance-level segmentation.
In the last year alone there has been a significant spike in papers addressing this problem.
Here are some of them:
https://arxiv.org/pdf/1412.7144v4.pdf
https://arxiv.org/pdf/1511.08498v3.pdf
https://arxiv.org/pdf/1607.03222v2.pdf
https://arxiv.org/pdf/1607.04889v2.pdf
https://arxiv.org/pdf/1511.08250v3.pdf
https://arxiv.org/pdf/1611.07709v1.pdf
https://arxiv.org/pdf/1603.07485v2.pdf
https://arxiv.org/pdf/1611.08303v1.pdf
https://arxiv.org/pdf/1611.08991v2.pdf
https://arxiv.org/pdf/1611.06661v2.pdf
https://arxiv.org/pdf/1612.03129v1.pdf
https://arxiv.org/pdf/1605.09410v4.pdf
As you can see in these papers, a simple object-classification network won't solve the problem.
If you search GitHub you will find a few repositories with basic frameworks you can build on top of:
https://github.com/daijifeng001/MNC (caffe)
https://github.com/bernard24/RIS/blob/master/RIS_infer.ipynb (torch)
https://github.com/jr0th/segmentation (keras, tensorflow)
indraforyou answered the question of how to solve the problem you are having. I want to add something for the Inception model specifically. In https://arxiv.org/pdf/1312.6229.pdf the authors propose a regressor network trained on the output of a model trained on the ImageNet dataset, such as the Inception model. This regressor model is then used to propose object boundaries, which you can use for counting. The advantage of this approach is that you do not have to annotate any training examples; you can just use the ImageNet dataset for training.
If you do not want to train anything, I would propose a heuristic for finding object boundaries. The literature on image segmentation (https://en.wikipedia.org/wiki/Image_segmentation) should help you find a suitable heuristic, though I do think using a heuristic will decrease your accuracy.
Last but not least, this is an open problem in computer-vision research. You should not expect to get 100% accuracy or even 95% accuracy on counting; many very smart people have tried this and reported mixed results. Still, some very cool things can be accomplished.
Any classification model, like the Inception model in your case, will identify a single object such as a flower. However, when multiple objects are present, plain classification won't work (put simply, the model gets confused).
Thus, you have to segment the main image into child images with one object per image, and run classification on each segment. This is termed image segmentation in image processing; a minimal heuristic version of the idea is sketched below.
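If the objects contrast well with the background, a very simple (non-learned) segmentation heuristic can already give a count. This sketch assumes a hypothetical flowers.jpg and an arbitrary minimum blob area:

import cv2

image = cv2.imread("flowers.jpg")               # hypothetical input
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu thresholding separates foreground blobs from the background
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Each external contour is one candidate object (OpenCV 4.x returns two values)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Drop tiny blobs that are probably noise rather than objects
objects = [c for c in contours if cv2.contourArea(c) > 100]
print("object count:", len(objects))

This will obviously miss touching or overlapping instances, which is exactly why the instance-segmentation methods above exist.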

Deep learning Training dataset with Caffe

I am a deep-learning newbie working on creating a vehicle classifier for images using Caffe, and I have a three-part question:
1. Are there any best practices for organizing classes when training a CNN, i.e. the number of classes and the number of samples per class? For example, would I be better off with (a) Vehicles - Car-Sedans/Car-Hatchback/Car-SUV/Truck-18-wheeler/... (note this could mean several thousand classes), or (b) a higher-level model that classifies between car/truck/2-wheeler and so on, and if it is a car, then query a car model to get the car type (sedan/hatchback etc.)?
2. How many training images per class is a typical best practice? I know there are several other variables that affect the accuracy of the CNN, but what rough number is good to shoot for in each class? Should it be a function of the number of classes in the model? For example, if I have many classes, should I provide more samples per class?
3. How do we ensure we are not overfitting to a class? Is there a way to measure heterogeneity in the training samples for a class?
Thanks in advance.
Well, the first choice you mention corresponds to a very challenging task in the computer-vision community: fine-grained image classification, where you want to classify the subordinates of a base class, say Car. To get more info on this, you may see this paper.
According to the literature on image classification, high-level classes such as car/truck are much simpler for CNNs to learn, since more discriminative features may exist between them. I suggest following the second approach, that is, classifying all types of cars vs. trucks and so on.
The number of training samples needed is mainly proportional to the number of parameters: if you train a shallow model, far fewer samples are required. It also depends on whether you fine-tune a pre-trained model or train a network from scratch; when sufficient samples are not available, you have to fine-tune a pre-trained model on your task.
Wrestling with over-fitting has always been a problematic issue in machine learning, and CNNs are not free of it. The literature offers some practical ways to reduce over-fitting, such as dropout layers and data-augmentation procedures.
Though not among your questions, it seems you should follow the fine-tuning procedure: initialize the network with the pre-computed weights of a model trained on another task (say ILSVRC 201X) and adapt the weights to your new task. This procedure is known as transfer learning (and sometimes domain adaptation) in the community.
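Your question is Caffe-specific, but the fine-tuning recipe with dropout and augmentation is compact to illustrate in Keras. This is a minimal sketch assuming a hypothetical train_dir with one sub-folder per vehicle class; the layer sizes, dropout rate, class count, and augmentation parameters are arbitrary:

from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Start from ImageNet weights and train only a new head (transfer learning)
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # freeze the pre-computed weights

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                     # dropout to reduce over-fitting
    layers.Dense(5, activation="softmax"),   # e.g. car/truck/2-wheeler/...
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Data augmentation adds heterogeneity to each class's samples
augmenter = ImageDataGenerator(rotation_range=15, horizontal_flip=True,
                               width_shift_range=0.1, zoom_range=0.1)
train_gen = augmenter.flow_from_directory("train_dir", target_size=(224, 224))
model.fit(train_gen, epochs=5)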
