How to generate noise-free data by training on clean data? - machine-learning

I'm working on a problem where I have two image data sets.
One is a clean image data set, and the other is the same data set with noise mixed in.
1) Is it possible to train a model on the clean data (so that it learns the characteristics of clean images) such that, when passed a noisy image, it detects the noise, removes it, and outputs the clean image data?
2) Would a GAN be useful in this case (if yes, how)?

I would use a denoising autoencoder: you input noisy images and train the model to map them to their clean counterparts.
Here you can find some info on denoising autoencoders.
And you can find how to implement an autoencoder in TensorFlow 1.x and in TensorFlow 2.0.
Hope this helps, otherwise let me know.
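For concreteness, here is a minimal denoising-autoencoder sketch in Keras (TensorFlow 2.x). It is only an illustration under assumed conditions: 28x28 grayscale images scaled to [0, 1], made-up layer sizes, and placeholder arrays standing in for your two data sets.

# Minimal denoising autoencoder sketch (TensorFlow 2.x / Keras).
# Assumes 28x28 grayscale images in [0, 1]; sizes are illustrative.
import numpy as np
from tensorflow.keras import layers, models

inp = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
x = layers.MaxPooling2D(2, padding="same")(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
x = layers.UpSampling2D(2)(x)
out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mse")

# Placeholder data: with your data sets, x_clean and x_noisy would be
# your two aligned image sets (pairs must correspond image-for-image).
x_clean = np.random.rand(256, 28, 28, 1).astype("float32")
x_noisy = np.clip(x_clean + 0.2 * np.random.randn(*x_clean.shape), 0.0, 1.0).astype("float32")

model.fit(x_noisy, x_clean, epochs=5, batch_size=32)  # noisy in, clean out
denoised = model.predict(x_noisy[:8])

The essential part is the fit call: the noisy images are the inputs and the clean images are the targets, so the network learns the noisy-to-clean mapping.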

Related

I have image data and metadata containing descriptions of the images; how can I use both the images and the metadata to train a model?

I need to build a classifier of skin lesions and have an image dataset along with metadata whose description contains the classification.
Any help on how to match the metadata to the images and use both in training my convolutional neural network would be appreciated.
It seems like you are asking how to incorporate the non-numeric data in your metadata into the neural network you are trying to train. Hopefully you will find the documentation on tf.feature_columns helpful; it shows how to encode categorical, ordinal, and similar data as numeric columns.
To train the model with both kinds of information, one way would be:
Apply a CNN to your images to get a dense representation
Simultaneously, build another NN to pass your metadata through
Concatenate the resulting tensors from the two NNs and pass the result through a softmax
Hopefully this helps.
Note that tf.feature_columns, mentioned above, is no longer recommended for new code; use Keras preprocessing layers instead. A sketch of the two-branch approach follows.
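As a rough sketch of that recipe with the Keras functional API (the input shapes, layer sizes, and class/feature counts below are assumptions to adapt to your data):

# Two-branch model: CNN over images, small NN over metadata,
# concatenated and passed through a softmax classifier.
from tensorflow.keras import layers, models

num_classes = 5          # hypothetical number of lesion classes
num_meta_features = 10   # hypothetical size of the encoded metadata vector

img_in = layers.Input(shape=(128, 128, 3))
x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)   # dense representation of the image

meta_in = layers.Input(shape=(num_meta_features,))
m = layers.Dense(32, activation="relu")(meta_in)

merged = layers.Concatenate()([x, m])
out = layers.Dense(num_classes, activation="softmax")(merged)

model = models.Model(inputs=[img_in, meta_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit([images, metadata], labels, ...) once both inputs are numeric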

Image classification, narrow domain with custom labels

Let's suppose I would like to classify motorbikes by model.
There are a couple hundred motorbike models I'm interested in.
I have tens, sometimes hundreds, of pictures of each motorbike model.
Can you please point me to a practical example that demonstrates how to train a model on your own data and then use it to classify images? It needs to be a deep learning model, not simple logistic regression.
I'm not sure about this, but it seems like I can't use a pre-trained neural net, because it has been trained on a wide range of objects like cats, humans, cars, etc. It may not be good at distinguishing the motorbike nuances I'm interested in.
I found a couple of such examples (TensorFlow has one), but sadly all of them used a pre-trained model. None of them had an example of how to train it on your own dataset.
In cases like yours you use either transfer learning or fine-tuning. If you have more than a thousand images of motorbikes I would use fine-tuning, and if you have fewer, transfer learning.
Fine-tuning means using a pre-trained model with a different classifier part. The new classifier part, and perhaps the last 1-2 layers of the trained model, are then trained on your dataset.
Transfer learning means using a pre-trained model to output features for an input image, and then training a new classifier, perhaps an SVM or a logistic regression, on those features.
An example of this can be seen here: https://github.com/cpra/dlvc2016/blob/master/lectures/lecture10.pdf, slide 33.
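A rough sketch of both approaches in Keras, assuming an ImageNet-pretrained MobileNetV2 as the backbone (any pretrained backbone works; the class count and layer sizes are placeholders):

import tensorflow as tf
from tensorflow.keras import layers, models

num_models = 200  # hypothetical number of motorbike models (classes)

# Pretrained backbone, used without its original ImageNet classifier.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights="imagenet")
base.trainable = False

# Fine-tuning: attach a new classifier part and train it on your data;
# optionally unfreeze the last few backbone layers at a low learning rate.
head = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dense(num_models, activation="softmax"),
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# head.fit(train_images, train_labels, ...)

# Transfer learning: use the backbone purely as a feature extractor and
# train a separate classifier (e.g. an SVM) on the extracted features.
# features = base.predict(train_images)
# from sklearn.svm import SVC
# clf = SVC().fit(features, train_labels)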
The paper Quick, Draw! Doodle Recognition, from a Kaggle challenge, may be similar enough to what you are doing. The code is on GitHub. You may need some data augmentation if you only have a few hundred images per category.
What you want is pretty easy: follow the darknet YOLO implementation.
Instructions: https://pjreddie.com/darknet/yolo/
Code: https://github.com/pjreddie/darknet
Training YOLO on COCO
You can train YOLO from scratch if you want to play with different training regimes, hyper-parameters, or datasets. Here's how to get it working on the COCO dataset.
Get The COCO Data
To train YOLO you will need all of the COCO data and labels. The script scripts/get_coco_dataset.sh will do this for you. Figure out where you want to put the COCO data and download it, for example:
cp scripts/get_coco_dataset.sh data
cd data
bash get_coco_dataset.sh
Add your own data alongside and make sure it follows the same format as the provided samples.
Now you should have all the data and the labels generated for Darknet.
Then call the training script with the pre-trained weights (on the linked page this looks something like ./darknet detector train cfg/coco.data cfg/yolov3.cfg darknet53.conv.74).
Keep in mind that training only on your motorcycle images may not give a good estimator; the result can come out biased. I read that somewhere before.
The rest is all in the link. Good luck.

Applying PCA before sending data to SVM

Before applying an SVM to my data I want to reduce its dimensionality with PCA. Should I split the data into train and test sets and then apply PCA to each separately, or apply PCA to both sets combined and then split them?
Actually, both of the other answers are only partially right. The crucial point here is what exact problem you are trying to solve. There are two basic settings to consider, and both are valid under some assumptions.
Case 1
You have some data (which you split into train and test) and in the future you will get more data coming from the same distribution.
If this is the case, you should fit PCA on the train data, then fit the SVM on its projection; for testing you just apply the already-fitted PCA followed by the already-fitted SVM, and you do exactly the same for new data as it comes in. This way your test error (under some size assumptions) should approximate your expected error.
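In scikit-learn, Case 1 is exactly what a Pipeline gives you; the dataset and n_components below are placeholders:

# PCA is fitted on the training data only; the test data is merely
# transformed by the already-fitted PCA before hitting the fitted SVM.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(PCA(n_components=20), SVC())
model.fit(X_train, y_train)          # fits PCA, then the SVM, on train only
print(model.score(X_test, y_test))   # test samples are only transformed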
Case 2
You have some data (which you split into train and test) and in the future you will obtain a big chunk of unlabeled data, at which point you will be able to fit your model.
In such a case, you fit PCA on all the data provided, learn the SVM on the labeled part (the train set), and evaluate on the test set. This way, once new data arrives, you can fit PCA using both your data and the new data, and then train the SVM on your old data (as it is the only data with labels). Under the assumption that, again, the data comes from the same distribution, everything is correct here. You use the extra data to fit PCA only to get a better estimator (maybe your data is really high-dimensional and PCA fails on a small sample?).
You should do them separately. If you run PCA on both sets combined then you are going to introduce a bias into your SVM. The goal of the test set is to see how your algorithm will perform without prior knowledge of the data.
Learn the projection matrix of PCA on the train set and use it to reduce the dimensions of the test data.
One benefit of this approach is that you don't have to rely on collecting sufficient data in the test set, which matters if you are applying your classifier at actual run time, where test data arrives one sample at a time.
Also, I think fitting PCA separately on train and test will fail. Why?
Think of PCA as giving you features over which you then learn a classifier. If your data shifts over time, then the test features you get from a separately fitted PCA would be different, and you won't have a classifier trained on those features. Even if the set of PCA directions/features stays the same but their order varies, your classifier still fails. A quick demonstration follows.
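Here is a purely illustrative demonstration: two PCAs fitted on two samples from the same distribution generally produce different projection axes, so features from one do not line up with a classifier trained on the other.

# Two PCAs fitted separately on same-distribution samples: the learned
# components (directions) generally differ in sign, order, or rotation.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
sample_a = rng.randn(200, 5)
sample_b = rng.randn(200, 5)

pca_a = PCA(n_components=2).fit(sample_a)
pca_b = PCA(n_components=2).fit(sample_b)

print(pca_a.components_)  # the two matrices differ, so the projected
print(pca_b.components_)  # features live in different coordinate systems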

Input data amount for Caffe

I want to use Caffe and the GoogLeNet architecture that comes with Caffe to train a model on my own image data.
I have 14 categories for classification, but only around 250 images for training and 80 for testing. Is this enough? Is there a way to find out how many images I need per class?
Solution 1:
Just fine-tune the top layer, since you have so little data. This way you can treat the network as a feature extractor and just train a classifier on top of those features.
Solution 2:
Try aggressive data augmentation. For example, you can apply random translation, scaling, and rotation to your data. This way you can get many images out of a single training image; see the sketch below.
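For illustration, here is what such augmentation looks like with Keras's ImageDataGenerator. This is a stand-in, not Caffe code (Caffe's built-in transform_param mainly covers mirroring and cropping), and the parameter values are arbitrary:

# Aggressive augmentation: random rotation, translation, scaling, flips.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=20,        # random rotation in degrees
    width_shift_range=0.1,    # random horizontal translation
    height_shift_range=0.1,   # random vertical translation
    zoom_range=0.2,           # random scaling
    horizontal_flip=True,
)

images = np.random.rand(16, 224, 224, 3)  # placeholder for your images
batches = augmenter.flow(images, batch_size=16)
augmented = next(batches)  # each call yields a freshly transformed batch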
Solution 3:
The most effective way is to get more real data. Data is very important for deep learning; as a rule of thumb, aim for at least 1000 images per class.

Training images and test images

I am working on a project about the feedforward pathway of the ventral stream, and I have 6 images to be recognized at the inferotemporal layer.
Can someone please show me example images illustrating the difference between training images and test images? What should I add to the folder that contains my training images? Should I add another folder containing the test images? If yes, what should those test images be?
Must the training images contain the images to be analysed or recognized, and the test images the images in memory? In other words, if we have, say, 16 training faces and one or two test faces, should we work out which face in the training set corresponds to the face in the test set? Is that right?
Note: I don't need code; I am only interested in a brief explanation of the difference between test and training images.
Any help will be much appreciated.
The only difference between training and test images is that the test images are not used for selecting your model's parameters. Each model has some kind of parameters, variables that it fits to the data; this is called the training process. The train/test separation ensures that your model (algorithm) can actually do something more than just memorize images, so you test it on test images, which were not used during the training phase.
This has already been discussed in detail on SO: what is the difference between train, validation and test set, in neural networks?
In HMAX, you use all the data at the input image layer, applying Gabor filters, max-pooling, and radial basis kernel functions to all of it. Only at the C2 layer do you start training on a subset of the images (mostly with a linear-kernel SVM). That subset is the training data, and the rest are the test data. In one sentence: the training images are first used to build the SVM, and then the test images are assigned to classes using the majority-voting method.
But this is in fact equivalent to putting the training images through the image layer first and, after going through all the layers, putting the test images through the image layer to restart for recognition. Since both training and test images need scaling, and all the operations at the layers prior to C2 are the same, you can just mix them together at the beginning.
Although you use all the training and test images at the image layer, you still need to shuffle the data and pick some of it as training and the rest as testing.
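In code, that shuffle-and-split step can be as simple as scikit-learn's train_test_split; the array shapes below are placeholders echoing the 16-faces example from the question:

# Hold two shuffled images out as the test set; train on the rest.
import numpy as np
from sklearn.model_selection import train_test_split

images = np.random.rand(16, 64, 64)  # placeholder: your 16 face images
labels = np.arange(16)               # placeholder identity labels

train_imgs, test_imgs, train_y, test_y = train_test_split(
    images, labels, test_size=2, shuffle=True, random_state=0)
# Fit the model on (train_imgs, train_y) only; evaluate on
# (test_imgs, test_y), which the model never saw during training.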
