I just followed this great tutorial on how to quickly retrain an ImageNet model and build image classifiers using TensorFlow. I made the classifier, and it works well. From what I understand, TensorFlow partitions the provided dataset into training, test and validation sets by itself - or at least it does with this script. I've worked with sklearn in the past, and there you can always find the accuracy of the model.
My question is: how can I find the accuracy percentage of the trained model in TensorFlow, specifically for image classifiers?
Thanks very much.
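For reference, here is the sklearn-style accuracy computation I have in mind, assuming I can run the retrained graph to get predicted labels for a held-out test set (the arrays below are just placeholders):

```python
# Hypothetical sketch: compare the retrained model's predictions against
# the true labels of a held-out test set, sklearn-style.
import numpy as np
from sklearn.metrics import accuracy_score

# y_true: ground-truth class indices for the test images (placeholder data).
# y_pred: class indices predicted by the retrained TensorFlow graph.
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

print("accuracy = {:.2%}".format(accuracy_score(y_true, y_pred)))  # 80.00%
```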
I am trying to understand the concepts of fine-tuning and few-shot learning.
I understand the need for fine-tuning: it is essentially tuning a pre-trained model to a specific downstream task. However, recently I have seen a plethora of blog posts discussing zero-shot learning, one-shot learning and few-shot learning.
How are they different from fine-tuning? It appears to me that few-shot learning is a specialization of fine-tuning. What am I missing here?
Can anyone please help me?
Fine-tuning - when you already have a model trained to perform the task you want, but on a different dataset, you initialise with the pre-trained weights and train it on the target (usually smaller) dataset, usually with a smaller learning rate.
Few-shot learning - when you want to train a model on a task using very few samples, e.g. you have a model trained on a different but related task and you (optionally) modify it and train it for the target task using a small number of examples.
For example:
Fine-tuning - training a model for intent classification and then fine-tuning it on a different dataset.
Few-shot learning - training a language model on a large text dataset and then modifying it (usually the last layer or last few layers) to classify intents by training on a small labelled dataset.
There could be many more ways to do few-shot learning. As one more example: training a model to classify images where some classes have a very small number of training samples (zero for zero-shot, one for one-shot). At inference time, correctly classifying these rare classes (rare in training) becomes the aim of few-shot learning.
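To make the fine-tuning recipe concrete, here is a minimal Keras sketch (the base architecture, class count and dataset are assumed; this is one common pattern, not the only one):

```python
# Minimal fine-tuning sketch in Keras: reuse pre-trained ImageNet weights,
# replace the classification head, and train with a small learning rate.
import tensorflow as tf

NUM_CLASSES = 5  # assumed target-task class count

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained features first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # new head
])

# Smaller learning rate than you would use when training from scratch.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your data here
```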
I am a newbie in ML, and I was trying to solve a multi-class classification problem. I used XGBoost to reduce the log loss, and I also tried a dense neural network, which also seems to work well. Now, is there a way I can stack these two models so that I can further reduce the log loss?
You can do it with Apple coremltools.
Take your XGBoost model and convert it to an MLModel using the converter.
Create a pipeline model combining that model with your neural network.
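A sketch of the conversion step, assuming an already-trained xgboost.Booster; the exact converter arguments vary between coremltools versions, so treat this as an outline rather than a definitive call:

```python
# Sketch: convert a trained XGBoost model to a Core ML model.
import coremltools as ct
import xgboost as xgb

booster = xgb.Booster(model_file="xgb_model.json")  # hypothetical file

# Defaults convert as a regressor; recent coremltools versions also
# accept classifier conversion (check your version's docs for the
# exact arguments, e.g. mode/class labels).
mlmodel = ct.converters.xgboost.convert(booster)
mlmodel.save("xgb_model.mlmodel")
```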
I'm sure there are other tools with pipeline support, but you will need to convert the XGBoost model to some other format.
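For instance, if you don't need to deploy to Core ML, the same stacking idea can be written directly in scikit-learn. A sketch with synthetic data, where MLPClassifier stands in for your dense network:

```python
# Generic stacking sketch: XGBoost + a neural-network base learner,
# blended by a logistic-regression meta-learner to reduce log loss.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_classes=3, n_informative=8)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(n_estimators=200)),
                ("mlp", MLPClassifier(max_iter=500))],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba",  # meta-learner sees class probabilities
)
stack.fit(X_tr, y_tr)
print("log loss:", log_loss(y_te, stack.predict_proba(X_te)))
```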
I am doing my thesis on baby cry detection. I built models with a CNN and with KNN: the CNN's train accuracy is 99% and its test accuracy is 98%, while the KNN's train and test accuracies are both 98%.
Please suggest which algorithm I should choose, and why.
In KNN, the output relies entirely on the nearest neighbours, which may or may not be a good choice. It is also sensitive to the distance metric; you can find more here, and a good discussion of its distance metrics may be helpful for you.
A CNN, on the other hand, extracts features from the input data, which is very helpful for analysis. Given the recent success of CNNs, especially WaveNet for audio applications, I would prefer to go with a CNN.
Edit: considering your data size, a CNN is not a good option here.
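To illustrate the distance-metric sensitivity mentioned above, here is a small sketch comparing KNN under different metrics on a toy dataset (synthetic digits data, not your baby-cry features):

```python
# Small sketch: KNN accuracy can change noticeably with the distance metric.
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
for metric in ("euclidean", "manhattan", "chebyshev"):
    knn = KNeighborsClassifier(n_neighbors=5, metric=metric)
    score = cross_val_score(knn, X, y, cv=5).mean()  # 5-fold CV accuracy
    print(f"{metric:10s} mean CV accuracy = {score:.3f}")
```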
I'm searching for a 2-class convolutional neural network problem, published as a paper, that uses the smallest dataset.
Generally a CNN requires millions of images to train, but I found that it can work successfully with hundreds to thousands of 2-class images through augmentation. I need to support this by citing other best practices.
The most similar case I found is "Mitosis detection in breast cancer histology images with deep neural networks": it trained on 190 positive samples plus other background images, and it had quite successful results.
Is there other successful 2-class CNN research with a small dataset?
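For reference, the kind of augmentation I mean is what Keras' ImageDataGenerator provides - generating many perturbed variants of a small two-class image set (a sketch; the directory layout is hypothetical):

```python
# Sketch: augment a small two-class image dataset with random perturbations.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,        # random rotations up to 20 degrees
    width_shift_range=0.1,    # random horizontal shifts
    height_shift_range=0.1,   # random vertical shifts
    zoom_range=0.1,           # random zooms
    horizontal_flip=True,     # random mirroring
)

# Streams augmented batches from e.g. data/train/positive and
# data/train/negative (hypothetical directories).
train_gen = augmenter.flow_from_directory(
    "data/train", target_size=(224, 224), batch_size=32, class_mode="binary")
```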
I wrote an image processing program that trains a classifier to recognize an object in an image. Now I want to test my algorithm's response to noise; I would like the algorithm to have some robustness to noise.
My question is: should I train the classifier on a noisy version of the training dataset, or should I train it on the original dataset and evaluate its performance on noisy data?
Thank you.
To show the robustness of a classifier, one might evaluate the originally trained classifier on highly noisy test data. Depending on that performance, one can retrain using noisy data and then test again. Obviously, for application development, if including extremely noisy samples increases accuracy then that's the way to go. The literature says to have as large a range of training samples as possible; however, this sometimes degrades performance in specific cases.
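A minimal sketch of that protocol, assuming vectorised inputs and additive Gaussian noise (the classifier, dataset and noise levels are placeholders, not a prescription):

```python
# Sketch of the evaluation protocol: train on clean data, then measure
# accuracy on increasingly noisy copies of the test set.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = SVC().fit(X_tr, y_tr)  # trained on the original (clean) data

rng = np.random.default_rng(0)
for sigma in (0.0, 0.5, 1.0, 2.0):  # increasing noise levels
    X_noisy = X_te + rng.normal(0.0, sigma, X_te.shape)
    print(f"sigma={sigma:.1f}  accuracy={clf.score(X_noisy, y_te):.3f}")
```

If accuracy drops sharply at realistic noise levels, that is the signal to retrain with noisy (or augmented) training data and repeat the same measurement.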