How is image recognition done by a neural network after doing Canny edge detection on the image? I'm not looking for code; I want to know how neural networks actually work in order to match an image against a set of images by similarity.
What should be considered for the input layer, hidden layers, etc.?
This question is really broad. The main reasons why neural networks do such a great job at image recognition are that they take advantage of some intrinsic image properties and invariances, and that computational advances have made the problem tractable:
Hierarchical structure: Faces consist of eyes, a mouth, ears, etc. Eyes consist of a certain set of shapes, which consist of certain kinds of edges, lines, and so on. There is a hierarchy of shapes and structures that is useful for image recognition, and this is why deep, stacked neural networks deal so well with this task: the hierarchy is encoded in the structure of the network.
Geometric invariances: If you move a car from the left corner of an image to the right corner, you still have an image of a car. This property is one reason for the success of a particular kind of neural network: the convolutional one. Convolutional topologies exploit this invariance, which makes learning much easier and more powerful (see the sketch at the end of this answer).
Increased computational power: Today's convolutional neural networks are designed so that their computations parallelize easily. Modern GPU architectures also make learning really fast, sometimes up to 10x faster than classical CPU implementations.
You can read a detailed explanation here.
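To make the points about hierarchy and convolution concrete, here is a minimal sketch of a small stacked convolutional classifier in Keras. The layer counts, filter sizes and input shape are arbitrary choices for illustration, not a recommended architecture:

```python
# Minimal sketch of a stacked convolutional classifier (illustrative only).
# Early conv layers respond to edges and lines, deeper ones to larger structures,
# and the final dense layer maps those features to class scores.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),             # grayscale input; size is an arbitrary choice
    layers.Conv2D(16, 3, activation="relu"),     # low-level features: edges, lines
    layers.MaxPooling2D(),                       # gives some translation tolerance
    layers.Conv2D(32, 3, activation="relu"),     # mid-level features: simple shapes
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),     # high-level features: parts of objects
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation="softmax"),      # 10 classes, chosen arbitrarily
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```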
Your question is really very broad. Also, from this line of your question, "How is image recognition done by neural network after doing canny edge detection of the image?", it can be inferred that you are new to neural networks and deep learning. Neural networks do not specifically perform Canny edge detection.
I recommend that before jumping into convolutional neural networks (CNNs), you understand some basics of neural networks. This way you will be able to appreciate CNN concepts later. The CS231n course could be a very good starting point for you, as the basics of neural networks are also covered there.
It is really hard to write a specific answer to such a broad question. Let me know if you have more specific questions.
Related
I have a general question regarding biomedical image analysis. Biomedical images require registration to align the images in the same space and to allow better feature extraction. My question is: does deep-learning-based classification also require image registration of the images in the training dataset?
Since in deep learning the architecture learns the best features by itself, is registration required for abdominal CT scan image classification using deep neural networks?
And since we perform data augmentation for better training, is image registration still required in that case?
Generally, deep learning on image data is done using convolutional neural networks (CNNs), which are at least shift invariant. By using image pyramids or specially constructed network layouts, they can also be made scale invariant. Generally, they are not rotation invariant.
This does not mean that they cannot work with differently rotated input images, but you may need much bigger models and more training data to get them to work well. The neural net will learn the differently rotated features of whatever you're trying to detect. If the range of rotation is small, this is probably not a big issue.
In summary, you don't necessarily need registration, but it can improve your final results.
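For example, instead of (or in addition to) registering every scan, you can let an augmentation pipeline present slightly shifted and rotated versions of the training images. Here is a minimal sketch with Keras' ImageDataGenerator; the ranges and the placeholder data are just assumptions for illustration and would need tuning for CT data:

```python
# Hedged sketch: on-the-fly augmentation instead of (or alongside) registration.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=10,        # small rotations, in degrees
    width_shift_range=0.05,   # small translations, fraction of image width
    height_shift_range=0.05,
    zoom_range=0.05,
)

# x_train: (num_images, height, width, 1) array of slices, y_train: labels
x_train = np.random.rand(8, 128, 128, 1)    # placeholder data for illustration
y_train = np.zeros(8)

batches = augmenter.flow(x_train, y_train, batch_size=4)
x_batch, y_batch = next(batches)             # each epoch sees slightly different images
```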
As is known, the most popular modern CNNs (convolutional neural networks), such as VGG/ResNet (Faster R-CNN), SSD, YOLO, YOLO v2, DenseBox, and DetectNet, are not rotation invariant: Are modern CNN (convolutional neural network) as DetectNet rotate invariant?
It is also known that there are several neural networks for rotation-invariant object detection:
Rotation-Invariant Neoperceptron 2006 (PDF): https://www.researchgate.net/publication/224649475_Rotation-Invariant_Neoperceptron
Learning rotation invariant convolutional filters for texture classification 2016 (PDF): https://arxiv.org/abs/1604.06720
RIFD-CNN: Rotation-Invariant and Fisher Discriminative Convolutional Neural Networks for Object Detection 2016 (PDF): http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Cheng_RIFD-CNN_Rotation-Invariant_and_CVPR_2016_paper.html
Encoded Invariance in Convolutional Neural Networks 2014 (PDF)
Rotation-invariant convolutional neural networks for galaxy morphology prediction (PDF): https://arxiv.org/abs/1503.07077
Learning Rotation-Invariant Convolutional Neural Networks for Object Detection in VHR Optical Remote Sensing Images 2016: http://ieeexplore.ieee.org/document/7560644/
We know that in image-detection competitions such as ImageNet, MS COCO, and PASCAL VOC, the winners use network ensembles (several neural networks at once), or ensembles inside a single net, as in ResNet ("Residual Networks Behave Like Ensembles of Relatively Shallow Networks").
But do winning entries such as MSRA's use rotation-invariant networks in their ensembles, and if not, why not? Why would adding a rotation-invariant network to an ensemble not improve accuracy on certain objects, such as aircraft, whose images are taken at different rotation angles?
These can be:
aircraft photographed from the ground,
or ground objects photographed from the air.
Why are rotation-invariant neural networks not used by the winners of the popular object-detection competitions?
The recent progress in image recognition, which came mainly from switching from the classic feature-selection plus shallow-learning approach to deep learning with no hand-crafted feature selection, was not caused only by the mathematical properties of convolutional neural networks. Yes, their ability to capture the same information with fewer parameters is partially due to their shift-invariance property, but recent research has shown that this is not the key to understanding their success.
In my opinion, the main reason behind this success was the development of faster learning algorithms rather than more mathematically accurate ones, and that is why less attention has been put into developing neural nets that are invariant to other properties.
Of course, rotation invariance is not skipped entirely. It is partially handled by data augmentation, where you add a slightly changed (e.g. rotated or rescaled) image to your dataset with the same label. As we can read in this fantastic book, the two approaches (more structure vs. less structure plus data augmentation) are more or less equivalent (Chapter 5.5.3, "Invariances").
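Concretely, "putting the slightly changed image into your dataset with the same label" can be as simple as the following sketch; the scipy call and the angle choices are my own arbitrary assumptions:

```python
# Hedged sketch of rotation-based data augmentation: each rotated copy keeps the
# original label, which is how the net acquires approximate rotation tolerance.
import numpy as np
from scipy.ndimage import rotate

def augment_with_rotations(image, label, angles=(-15, -5, 5, 15)):
    """Return (image, label) pairs for the original and rotated copies."""
    pairs = [(image, label)]
    for angle in angles:
        rotated = rotate(image, angle, reshape=False, mode="nearest")
        pairs.append((rotated, label))
    return pairs

image = np.random.rand(64, 64)                  # placeholder image
augmented = augment_with_rotations(image, label=3)
print(len(augmented))                           # 5 samples, all labelled 3
```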
I'm also wondering why the community and scholars haven't paid much attention to rotation-invariant CNNs, as @Alex asks.
One possible cause, in my opinion, is that many scenarios don't need this property, especially the popular competitions. As Rob mentioned, many natural pictures are already taken in a consistent horizontal (or vertical) orientation. For example, in face detection, many works align the picture so that the people are upright before feeding it to any CNN model. To be honest, this is the cheapest and most efficient way to handle this particular task.
However, there do exist some real-life scenarios that need rotation invariance. So I come to another guess: this problem is not considered difficult from those experts' (or researchers') point of view. At the very least, we can use data augmentation to obtain some rotation invariance.
Lastly, thanks so much for your summary of the papers. I would add one more paper, Group Equivariant Convolutional Networks (ICML 2016, G-CNN), and its third-party implementation on GitHub.
Object detection is mostly driven by the successes of detection algorithms on world-famous benchmarks like PASCAL VOC and MS COCO. These are object-centric datasets where most objects are upright (potted plants, humans, horses, etc.), so data augmentation with left-right flips is often sufficient (for all we know, augmentation with rotated images such as upside-down flips could even hurt detection performance).
Every year the entire community adopts the base algorithmic structure of the winning solution and builds on it (I am exaggerating a bit to prove a point, but not by much).
Interestingly, other less widely known topics, like oriented text detection and oriented vehicle detection in aerial imagery, both need rotation-invariant features and rotation-equivariant detection pipelines (as in both articles from Cheng that you mentioned).
If you want to find literature and code in this area, you need to dive into these two domains. I can already give you a few pointers, like the DOTA challenge for aerial imagery or the ICDAR challenges for oriented text detection.
As @Marcin Mozejko said, CNNs are by nature translation invariant, not rotation invariant. How to incorporate perfect rotation invariance is an open problem; the few articles that deal with it have yet to become standards, even though some of them seem promising.
My personal favorite for detection is the modification of Faster R-CNN recently proposed by Ma.
I hope that this direction of research will be investigated more and more once people get fed up with MS COCO and VOC.
What you could try is to take a state-of-the-art detector trained on MS COCO, like Faster R-CNN with NASNet from the TF detection API, and see how it performs when you rotate the test image; in my opinion it would be far from rotation invariant.
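Here is a rough sketch of that experiment, with torchvision's pretrained Faster R-CNN standing in for the TF detection API model and "test.jpg" as a hypothetical image: run the detector on rotated copies of the test image and watch how many confident detections survive.

```python
# Hedged sketch: probe a pretrained detector with rotated copies of a test image.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import rotate

detector = fasterrcnn_resnet50_fpn(pretrained=True).eval()
image = transforms.ToTensor()(Image.open("test.jpg").convert("RGB"))  # hypothetical file

for angle in (0, 30, 60, 90):
    rotated = rotate(image, angle)                 # rotate only the test image
    with torch.no_grad():
        detections = detector([rotated])[0]
    kept = int((detections["scores"] > 0.5).sum())
    print(f"rotation {angle:3d} deg: {kept} detections above 0.5 confidence")
```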
Rotation invariance is mostly a good thing, but not always. Objects can have different interpretations depending on their rotation; e.g., a rotated "1" might be difficult to distinguish from a "7".
First, let's acknowledge that introducing rotational invariance requires a static assumption about the distribution of angles. For example, another commenter on this page suggested rotating the kernel with 30-degree steps. That's equivalent to assuming that useful rotations in each layer are uniformly distributed over the rotation angles.
In contrast to that, when the network learns rotated kernels, the network picks a different distribution of angles for each layer. An interesting research question is to find what distribution of rotation angles is implied by learned kernels. In any case, why would such learning flexibility be useful?
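For reference, "rotating the kernel with 30-degree steps" amounts to something like the following sketch (a hand-rolled illustration with scipy, not any particular paper's implementation):

```python
# Hedged sketch: build a bank of rotated copies of a single filter at fixed
# 30-degree steps, which implicitly assumes a uniform distribution of useful
# orientations, as discussed above.
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

kernel = np.random.randn(7, 7)                   # stand-in for a learned 7x7 filter
angles = range(0, 360, 30)                       # the 30-degree steps mentioned above
rotated_bank = [rotate(kernel, a, reshape=False, order=1) for a in angles]

# Applying the whole bank and taking the max response gives an (approximately)
# rotation-invariant feature map for this one filter.
image = np.random.rand(64, 64)                   # placeholder image
responses = np.stack([correlate2d(image, k, mode="same") for k in rotated_bank])
invariant_response = responses.max(axis=0)
```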
I suspect that the assumption of a uniform distribution might not be equally useful across all layers of a network. In the first few convolutional layers (edges and other basic shapes), it's likely true that the rotation angles are uniformly distributed. However, in the deep layers, this assumption might be less valid. If cars are almost always rotated within a small range of angles, then why waste compute and space on unlikely rotations?
However, the network won't learn the right distribution of angles if the training dataset is not sufficiently representative. Note that simply rotating an image (called data augmentation) is not the same as rotating an object relative to other objects in the same image. I suppose it comes down to your expectation of the difference between the training dataset and the unobserved dataset to which the network has to generalize.
Interestingly, the human visual cortex is not fully rotation-invariant at the scale of major face features. See https://en.wikipedia.org/wiki/Thatcher_effect.
Probably lots of people have already seen this article by Google Research:
http://googleresearch.blogspot.ru/2015/06/inceptionism-going-deeper-into-neural.html
It describes how the Google team made neural networks actually draw pictures, like an artificial artist :)
I wanted to do something similar, just to see how it works and maybe to use it in the future to better understand what makes my network fail. The question is: how can I achieve this with nolearn/lasagne (pybrain would also work, but I prefer nolearn)?
To be more specific, the folks from Google trained an ANN with some architecture to classify images (for example, to classify which fish is in a photo). Fine, suppose I have an ANN constructed in nolearn with some architecture and I have trained it to some degree. But... what do I do next? I can't figure it out from their article. It doesn't seem that they just visualize the weights of some specific layers. It seems to me (maybe I am wrong) that they do one of two things:
1) Feed some existing image, or pure random noise, to the trained network and visualize the activations of one of the layers. But that doesn't look quite right, since if they used a convolutional neural network, the dimensionality of the layers might be lower than the dimensionality of the original image.
2) Or they feed random noise to the trained ANN, get its intermediate output from one of the middle layers, and feed it back into the network, to get some kind of loop and inspect what the network's layers think might be out there in the random noise. But again, I might be wrong due to the same dimensionality issue as in #1.
So... any thoughts on that? How could we do something similar to what Google did in the original article, using nolearn or pybrain?
From their IPython notebook on GitHub:
Making the "dream" images is very simple. Essentially it is just a
gradient ascent process that tries to maximize the L2 norm of
activations of a particular DNN layer. Here are a few simple tricks
that we found useful for getting good images:
offset image by a random jitter
normalize the magnitude of gradient
ascent steps apply ascent across multiple scales (octaves)
It is done using a convolutional neural network, and you are correct that the dimensions of the activations will be smaller than the original image, but this isn't a problem.
You change the image with iterations of forward/backward propagation, just as you would when normally training a network. On the forward pass, you only need to go until you reach the particular layer you want to work with. Then on the backward pass, you propagate back to the inputs of the network instead of to the weights.
So instead of finding the gradients of a loss function with respect to the weights, you are finding the gradients of the L2 norm of a certain set of activations with respect to the inputs.
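A minimal PyTorch sketch of that idea (an illustration only, not the original Caffe code and not a nolearn recipe; the layer index and step size are arbitrary choices):

```python
# Hedged sketch: gradient ascent on the *input image* to maximize the L2 norm of
# one layer's activations, as described in the quote above.
import torch
from torchvision import models

cnn = models.vgg16(pretrained=True).features.eval()   # newer torchvision: weights="DEFAULT"
for p in cnn.parameters():
    p.requires_grad_(False)                            # we optimize the image, not the weights

layer_index = 20                                       # which layer's activations to amplify
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise (or load a real image)

for step in range(100):
    x = img
    for i, layer in enumerate(cnn):                    # forward only up to the chosen layer
        x = layer(x)
        if i == layer_index:
            break
    loss = x.norm()                                    # L2 norm of those activations
    loss.backward()                                    # gradients flow back to the image
    with torch.no_grad():
        # normalized ascent step (the "normalize the magnitude" trick); the random
        # jitter and multi-scale octaves from the notebook are omitted for brevity
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```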
I am doing research in the field of computer vision, and am working on a problem related to finding images visually similar to a query image. For example, finding t-shirts of similar colour with similar patterns (striped/checkered), or shoes of similar colour and shape, and so on.
I have explored hand-crafted image features such as color histograms, texture features, shape features (Histogram of Oriented Gradients), SIFT, and so on. I have also read up on the literature about deep neural networks (convolutional neural networks), which have been trained on massive amounts of data and are currently state of the art in image classification.
I was wondering if the same features (extracted from the CNNs) can also be used for my project of finding fine-grained similarities between images. From what I understand, CNNs have learnt good representative features that help classify images: whether it is a red shirt, a blue shirt, or an orange shirt, the network can identify that the image is a shirt. However, it doesn't understand that an orange shirt looks more similar to a red shirt than a blue shirt does, and hence it cannot capture these similarities.
Please correct me if I am wrong. I would like to know if there are any Deep Neural Networks that capture these similarities, and have proven to be superior to the hand-crafted features. Thanks in advance.
For your task, a CNN is definitely worth a try!
Many researchers have used networks pretrained for image classification and obtained state-of-the-art results on fine-grained classification, for example classifying bird species or car models.
Now, your task is not classification, but it is related. You can think of similarity as a geometric distance between features, which are basically vectors. Thus, you may carry out some experiments computing the distance between the feature vectors for all your training images (the reference) and the feature vector extracted from the query image.
CNN features extracted from the first layers of the net should relate more to color and other low-level graphical traits than to more "semantic" ones.
Alternatively, there is some work on learning a similarity metric directly with a CNN; see here for example.
A little bit outdated, but it can still be useful for other people. Yes, CNNs can be used for image similarity, and I have used them before. As Flavio pointed out, for a simple start you can take a pretrained CNN of your choice, such as AlexNet or GoogLeNet, and use it as a feature extractor. You can then compare the features based on their distance: similar pictures will have a smaller distance between their feature vectors.
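A minimal sketch of this feature-extractor-plus-distance approach, using a pretrained ResNet from torchvision (the network choice, file names and distance metric are assumptions you would tune for your own data):

```python
# Hedged sketch: use a pretrained CNN as a fixed feature extractor and rank
# reference images by distance to the query image's feature vector.
import torch
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()          # drop the classifier, keep the 512-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """Return the feature vector for one image file."""
    with torch.no_grad():
        return backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

query = embed("query_shirt.jpg")                                   # hypothetical file names
references = {p: embed(p) for p in ["shirt_red.jpg", "shirt_blue.jpg"]}
ranked = sorted(references, key=lambda p: torch.dist(query, references[p]).item())
print(ranked)                                                      # most similar first
```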
At the end of the introduction to this instructive Kaggle competition, they state that the method in Viola and Jones' seminal paper "works quite well". However, that paper describes a system for binary facial recognition, and the problem being addressed here is classifying keypoints, not entire images. I am having a hard time figuring out how, exactly, I would go about adjusting the Viola-Jones system for keypoint recognition.
I assume I should train a separate classifier for each keypoint, and some ideas I have are:
iterate over sub-images of a fixed size and classify each one, where a sub-image with a keypoint as its center pixel is a positive example. In this case I'm not sure what I would do with pixels close to the edge of the image.
instead of training binary classifiers, train classifiers with l*w possible classes (one for each pixel). The big problem with this is that I suspect it will be prohibitively slow, as every weak classifier suddenly has to do l*w times its original number of operations
the third idea isn't totally hashed out in my mind, but since the keypoints are each part of a larger face part (the left, right, and center of an eye, for example), maybe I could classify sub-images as just an eye, and then use the left, right, and center pixels (centered in the y coordinate) of the best-fitting sub-image for each face part
Is there any merit to these ideas, and are there methods I haven't thought of?
however, that paper describes a system for binary facial recognition
No, read the paper carefully. What they describe is not face-specific; face detection was the motivating problem. The Viola-Jones paper introduced a new strategy for binary object recognition.
You could train a Viola-Jones-style cascade for eyes, another for noses, and one for each keypoint you are interested in.
Then, when you run the code, you should (hopefully) get 2 eyes, 1 nose, etc., for each face.
Provided you get the number of items you expected, you can then say "here are the key points!" What takes more work is getting enough data to build a good detector for each thing you want to detect, and gracefully handling false positives / negatives.
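If you want to try the cascade route quickly, OpenCV ships pretrained Viola-Jones cascades; here is a minimal sketch that uses the centres of detected eye boxes as crude keypoints (the input file name is hypothetical):

```python
# Hedged sketch: OpenCV's pretrained Viola-Jones cascades, using detected eye
# boxes' centres as crude keypoints. Training your own cascade per keypoint
# works the same way, just with your own XML files.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)   # hypothetical image file
for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
    roi = gray[y:y + h, x:x + w]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
        # centre of each detected eye box, in full-image coordinates
        print("eye keypoint:", (x + ex + ew // 2, y + ey + eh // 2))
```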
I ended up working on this problem extensively. I used "deep learning," i.e. neural networks with several layers; specifically, convolutional networks. You can learn more about them by checking out these demos:
http://cs.stanford.edu/people/karpathy/convnetjs/demo/mnist.html
http://deeplearning.net/tutorial/lenet.html#lenet
I made the following changes to a typical convolutional network:
I did not do any down-sampling, as any loss of precision directly translates to a decrease in the model's score
I did n-way binary classification, with each pixel being classified as a keypoint or non-keypoint (#2 in the list from my original post). As I suspected, computational complexity was the primary barrier here. I tried to use my GPU to overcome these issues, but the number of parameters in the neural network was too large to fit in GPU memory, so I ended up using an XL Amazon instance for training (a rough sketch of this setup follows below).
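For reference, the "no down-sampling, per-pixel classification" idea amounts to a fully convolutional net that outputs one probability map per keypoint. A rough Keras sketch (not the code in the repo linked below; the sizes assume the 96x96, 15-keypoint Kaggle data and are otherwise arbitrary):

```python
# Hedged sketch of a per-pixel keypoint classifier with no pooling, so spatial
# precision is preserved; the output is one keypoint-probability map per keypoint.
from tensorflow.keras import layers, models

num_keypoints = 15                     # the Kaggle facial-keypoints set; an assumption here

model = models.Sequential([
    layers.Input(shape=(96, 96, 1)),                        # 96x96 grayscale faces
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    # no pooling or striding anywhere, so the output stays 96x96
    layers.Conv2D(num_keypoints, 1, activation="sigmoid"),  # per-pixel keypoint probabilities
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()   # note the large activation maps: this is where the memory goes
```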
Here's a github repo with some of the work I did:
https://github.com/cowpig/deep_keypoints
Anyway, given that deep learning has blown up in popularity, there are surely people who have done this much better than I did, and published papers about it. Here's a write-up that looks pretty good:
http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/