I have a question regarding changing the output layer of nets such as AlexNet or GoogLeNet trained on ImageNet. The standard output is a 1x1000 vector, i.e. one value for each class.
I know I can change the output size to e.g. 5, so I would get a 1x5 vector if I only have 5 classes.
But what if I do not have classes? Is it possible to change the output to a matrix like 18x18? My net should output a density map and not a "class".
And is it recommended to use a pre-trained net for my task or should I train from scratch?
Thank you for your help :-)
But what if I do not have classes?
The concept of "class" is not really connected to the architecture but rather to the loss function itself. In other words, if you have 1000 outputs it does not matter whether you want to classify among 1000 disjoint classes, assign 1000 tags, or regress on a 1000-dimensional real output - the architecture still makes perfect sense.
Is it possible to change the output to a matrix like 18x18
The "naive" approach would be to output 18*18 = 324 values and treat it as a 2-dim matrix. However, the 2-dim structure suggests there is some characteristics that can be exploited on the architecture side, one typical characteristic is translational invariance, which is exploited in convnets, and if the same is true for your output you might consider deconvolution (of any sort, since there are many) for your model.
And is it recommended to use a pretrained net for my task? Or should I train from scratch?
This does not depend on the architecture but on the task. If your task is similar enough to the ones a given net was trained on, you can use the pretrained one as a starting point and just "fine-tune" it on the new one. In general, using a pretrained net as a starting point is a safe thing to do (it should not be worse than training from scratch). Remember to train the whole network, and not just the added parts, though (unless you do not have enough data to train the whole structure).
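A rough PyTorch sketch of that recipe (the 5-class head, learning rate and optimizer are placeholders, not recommendations):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.alexnet(pretrained=True)               # start from the pretrained weights
model.classifier[6] = nn.Linear(4096, 5)              # new head, e.g. for 5 classes

# Fine-tune the whole network, not just the added head ...
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# ... unless you have too little data; then freeze the pretrained part:
# for p in model.features.parameters():
#     p.requires_grad = False
```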
I understand that Batch Normalisation helps in faster training by pushing the activations towards a unit Gaussian distribution, thus tackling the vanishing-gradient problem. Batch norm is applied differently at training time (use the mean/var from each batch) and at test time (use the finalized running mean/var from the training phase).
Instance normalisation, on the other hand, acts as contrast normalisation, as mentioned in this paper https://arxiv.org/abs/1607.08022. The authors mention that the output stylised images should not depend on the contrast of the input content image, and hence instance normalisation helps.
But then should we not also use instance normalisation for image classification, where the class label should not depend on the contrast of the input image? I have not seen any paper using instance normalisation in place of batch normalisation for classification. What is the reason for that? Also, can and should batch and instance normalisation be used together? I am eager to get an intuitive as well as theoretical understanding of when to use which normalisation.
Definition
Let's begin with the strict definition of both:
Batch normalization
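Written out for a batch of T images with C channels and spatial size H×W (ε is a small constant), the statistics are shared across the batch and all spatial locations, one pair per channel:

```latex
y_{tcij} = \frac{x_{tcij} - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}},
\qquad
\mu_c = \frac{1}{THW}\sum_{t=1}^{T}\sum_{i=1}^{H}\sum_{j=1}^{W} x_{tcij},
\qquad
\sigma_c^2 = \frac{1}{THW}\sum_{t=1}^{T}\sum_{i=1}^{H}\sum_{j=1}^{W} (x_{tcij} - \mu_c)^2
```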
Instance normalization
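Instance normalization applies the same transform, but computes the statistics per image, over the spatial locations only, so there are T·C means and variances instead of C:

```latex
y_{tcij} = \frac{x_{tcij} - \mu_{tc}}{\sqrt{\sigma_{tc}^2 + \epsilon}},
\qquad
\mu_{tc} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} x_{tcij},
\qquad
\sigma_{tc}^2 = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W} (x_{tcij} - \mu_{tc})^2
```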
As you can notice, they are doing the same thing, except for the number of input tensors that are normalized jointly. The batch version normalizes all images across the batch and the spatial locations (this is the CNN case; for ordinary fully connected layers the statistics are taken across the batch only); the instance version normalizes each element of the batch independently, i.e., across spatial locations only.
In other words, where batch norm computes one mean and std dev per channel (thus making the distribution of the whole layer Gaussian), instance norm computes T of them (one per image), making each individual image's distribution look Gaussian, but not jointly.
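The same difference in NumPy terms, assuming activations of shape (T, C, H, W) (just the axis bookkeeping, without the learnable scale/shift):

```python
import numpy as np

x = np.random.randn(8, 64, 32, 32)            # (T, C, H, W): batch, channels, height, width
eps = 1e-5

# Batch norm: one mean/std per channel, computed over the batch AND the spatial dims
mu_bn = x.mean(axis=(0, 2, 3), keepdims=True)                       # shape (1, C, 1, 1)
x_bn = (x - mu_bn) / np.sqrt(x.var(axis=(0, 2, 3), keepdims=True) + eps)

# Instance norm: T*C means/stds, computed over the spatial dims only
mu_in = x.mean(axis=(2, 3), keepdims=True)                          # shape (T, C, 1, 1)
x_in = (x - mu_in) / np.sqrt(x.var(axis=(2, 3), keepdims=True) + eps)
```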
A simple analogy: during the data pre-processing step, it is possible to normalize the data on a per-image basis or to normalize the whole data set.
Credit: the formulas are from here.
Which normalization is better?
The answer depends on the network architecture, in particular on what is done after the normalization layer. Image classification networks usually stack the feature maps together and wire them to the FC layer, which shares weights across the batch (the modern way is to use a CONV layer instead of FC, but the argument still applies).
This is where the distribution nuances start to matter: the same neuron is going to receive the input from all images. If the variance across the batch is high, the gradient from the small activations will be completely suppressed by the high activations, which is exactly the problem that batch norm tries to solve. That's why it's fairly possible that per-instance normalization won't improve network convergence at all.
On the other hand, batch normalization adds extra noise to the training, because the result for a particular instance depends on the neighboring instances. As it turns out, this kind of noise may be either good or bad for the network. This is well explained in the "Weight Normalization" paper by Tim Salimans et al., which names recurrent neural networks and reinforcement learning DQNs as noise-sensitive applications. I'm not entirely sure, but I think that the same noise-sensitivity was the main issue in the stylization task, which instance norm tried to fight. It would be interesting to check if weight norm performs better for this particular task.
Can you combine batch and instance normalization?
Though it makes a valid neural network, there's no practical use for it. Batch normalization noise is either helping the learning process (in this case it's preferable) or hurting it (in this case it's better to omit it). In both cases, leaving the network with one type of normalization is likely to improve the performance.
Great question and already answered nicely. Just to add: I found this visualisation from Kaiming He's Group Norm paper helpful.
Source: link to article on Medium contrasting the Norms
I wanted to add more information to this question since there are some more recent works in this area. Your intuition
use instance normalisation for image classification where class label
should not depend on the contrast of input image
is partly correct. I would say that a pig in broad daylight is still a pig when the image is taken at night or at dawn. However, this does not mean that using instance normalization across the network will give you better results. Here are some reasons:
Color distribution still plays a role. An object is more likely to be an apple than an orange if it has a lot of red.
In later layers, you can no longer think of instance normalization as acting as contrast normalization. Class-specific details emerge in deeper layers, and normalizing them per instance will hurt the model's performance greatly.
IBN-Net uses both batch normalization and instance normalization in their model. They only put instance normalization in the early layers and have achieved improvements in both accuracy and the ability to generalize. They have open-sourced the code here.
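A very rough PyTorch sketch of that idea (a simplified IBN-a-style block, not the authors' released code: half of the channels go through instance norm, the other half through batch norm; such blocks are used only in the early layers):

```python
import torch
import torch.nn as nn

class IBN(nn.Module):
    """Half of the channels go through instance norm, the other half through batch norm."""
    def __init__(self, channels):
        super().__init__()
        self.half = channels // 2
        self.IN = nn.InstanceNorm2d(self.half, affine=True)
        self.BN = nn.BatchNorm2d(channels - self.half)

    def forward(self, x):
        a, b = torch.split(x, [self.half, x.size(1) - self.half], dim=1)
        return torch.cat([self.IN(a), self.BN(b)], dim=1)

layer = IBN(64)
out = layer(torch.randn(4, 64, 56, 56))   # output has the same shape as the input
```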
IN provides visual and appearance invariance, while BN accelerates training and preserves discriminative features.
IN is preferred in shallow layers (the starting layers of the CNN) to remove appearance variation, while BN is preferred in deep layers (the last CNN layers) in order to maintain discrimination.
I searched a lot for the reason but did not find a clear explanation. Could someone explain it in some more detail, please?
In theory you do not have to attach a fully connected layer; you could have a full stack of convolutions up to the very end, as long as (due to custom sizes/paddings) you end up with the correct number of output neurons (usually the number of classes).
So why do people usually not do that? If one goes through the math, it becomes visible that each output neuron (thus - the prediction wrt. some class) depends only on a subset of the input dimensions (pixels). This would be something along the lines of a model which only decides whether an image is an element of class 1 depending on the first few "columns" (or, depending on the architecture, rows, or some patch of the image), then whether it is class 2 on the next few columns (maybe overlapping), ..., and finally some class K depending on the last few columns. Data usually does not have this characteristic - you cannot classify an image of a cat based on its first few columns while ignoring the rest.
However, if you introduce a fully connected layer, you provide your model with the ability to mix signals: since every single neuron has a connection to every single one in the next layer, there is now a flow of information between each input dimension (pixel location) and each output class, and thus the decision is truly based on the whole image.
So intuitively you can think about these operations in terms of information flow. Convolutions are local operations, and so is pooling. Fully connected layers are global (they can introduce any kind of dependence). This is also why convolutions work so well in domains like image analysis - due to their local nature they are much easier to train, even though mathematically they are just a subset of what fully connected layers can represent.
Note
I am considering here the typical use of CNNs, where kernels are small. In general one can even think of an MLP as a CNN where the kernel is of the size of the whole input, with specific spacing/padding. However, these are just corner cases which are not really encountered in practice, and they do not really affect the reasoning, since then they end up being MLPs. The whole point here is simple - to introduce global relations; if one can do it by using CNNs in a specific manner, then MLPs are not needed. MLPs are just one way of introducing this dependence.
Every fully connected (FC) layer has an equivalent convolutional layer (but not vice versa). Hence it is not necessary to add FC layers. They can always be replaced by convolutional layers (+ reshaping). See details.
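A small PyTorch sketch of that equivalence (sizes are arbitrary; the convolution's kernel simply spans the whole feature map and reuses the FC weights):

```python
import torch
import torch.nn as nn

feat = torch.randn(1, 256, 6, 6)                 # feature map entering the "FC" part

fc = nn.Linear(256 * 6 * 6, 10)                  # ordinary fully connected layer
conv = nn.Conv2d(256, 10, kernel_size=6)         # conv whose kernel spans the whole map

# Reuse the FC weights so both layers compute exactly the same function
conv.weight.data = fc.weight.data.view(10, 256, 6, 6)
conv.bias.data = fc.bias.data

out_fc = fc(feat.flatten(1))                     # shape (1, 10)
out_conv = conv(feat).flatten(1)                 # shape (1, 10)
print(torch.allclose(out_fc, out_conv, atol=1e-5))   # True, up to float error
```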
Why do we use FC layers then?
Because (1) we are used to it and (2) it is simpler. (1) is probably the reason for (2). For example, you would need to adjust the loss functions / the shape of the labels / add a reshape at the end if you used a convolutional layer instead of an FC layer.
I found this answer by Anil-Sharma on Quora helpful.
We can divide the whole network (for classification) into two parts:
Feature extraction:
In conventional classification algorithms, like SVMs, we used to extract features from the data to make the classification work. The convolutional layers serve the same purpose of feature extraction. CNNs capture a better representation of the data, and hence we don't need to do feature engineering.
Classification:
After feature extraction we need to classify the data into various classes; this can be done using a fully connected (FC) neural network. In place of fully connected layers, we can also use a conventional classifier like an SVM, but we generally end up adding FC layers to make the model end-to-end trainable.
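A hedged sketch of the SVM alternative (a pretrained torchvision backbone used purely as a feature extractor, with a scikit-learn linear SVM on top; the specific model and sizes are illustrative):

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

# Pretrained CNN with the FC classifier removed: a pure feature extractor
backbone = models.resnet18(pretrained=True)
backbone.fc = nn.Identity()
backbone.eval()

images = torch.randn(32, 3, 224, 224)            # stand-in for a real dataset
labels = torch.randint(0, 5, (32,))

with torch.no_grad():
    features = backbone(images).numpy()           # (32, 512) feature vectors

# Conventional classifier trained on top of the learned representation
clf = LinearSVC().fit(features, labels.numpy())
```

The price, as noted above, is that the feature extractor and the classifier are no longer trained jointly, so the model is not end-to-end trainable.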
The CNN gives you a representation of the input image. To learn the sample classes, you should use a classifier (such as logistic regression, SVM, etc.) that learns the relationship between the learned features and the sample classes. A fully-connected layer is itself a linear classifier (like logistic regression), which is why it is used for this purpose.
Convolution and pooling layers extract features from the image, so these layers do some "preprocessing" of the data. Fully connected layers then perform classification based on these extracted features.
if I train a deep neural network with (say) 10 classes and I feed the network a completely different image, is it reasonable to expect that the output layer's cells will not activate much, so I will know there is no object of the trained classes in the image?
My intuition says "Yes", but is it so? And what would be the best approach to this?
Thanks
During supervised training you usually assume that you see a complete representation of the future types of objects. Typically these are all labeled instances. In your case there are also "noise" instances, so there are basically two main approaches:
As a multi-class neural network has K output neurons, each representing the probability of being a member of a particular class, you can simply condition on this distribution to say that the new object does not belong to any of them. One particular approach is to check whether max_y p(y|x) < T (where p(y|x) is the activation of the output neuron for class y) for some threshold T: if even the most probable class gets a low score, the input most likely belongs to none of your classes. You can either set this value by hand or through analysis of your "noise" instances (if you do have some for training): simply pass them through your network and compare what value of T gives you the best recognition rate. A minimal sketch of this check follows after this list.
Add another one-class classifier (anomaly detector) before your network, so you will end up with a sequence of two classifiers: the first recognizes whether the input is noise or an element of any of your classes (notice that this can be trained without access to noise samples; see one-class classification or anomaly detection techniques).
You could also add another output to the network to represent noise, but this probably will not work well, as you would force your network to generate a consistent internal representation for both the noise-vs-data decision and the inter-class decisions.
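A minimal sketch of the thresholding approach from the first option (the threshold of 0.5 is only a placeholder; tune it, e.g. on your noise instances, as described above):

```python
import torch
import torch.nn.functional as F

def predict_with_reject(model, x, threshold=0.5):
    """Return class predictions, or -1 when even the most likely class scores below the threshold."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1        # "none of the known classes"
    return pred
```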
The answer to your question very much depends on your network architecture and the parameters used to train it. If you are trying to protect against false positives, you can typically set an arbitrary threshold value on the relevant output nodes.
More generally, learning algorithms mostly take the form of “closed set” recognition, where all testing classes are known at training time. However, a more realistic scenario for vision applications is “open set” recognition, where an incomplete knowledge of the world is present at training time and unknown classes can be submitted during testing.
This is an on-going area of research - please see this Open Set Recognition web page for plenty of resources on the subject.
I wrote a simple recurrent neural network (7 neurons, each one initially connected to all the neurons) and trained it using a genetic algorithm to learn "complicated", non-linear functions like 1/(1+x^2). As the training set, I used 20 values within the range [-5,5] (I tried to use more than 20, but the results did not change dramatically).
The network can learn this range pretty well, and when given examples of other points within this range, it can predict the value of the function. However, it cannot extrapolate correctly and predict the values of the function outside the range [-5,5]. What are the reasons for that, and what can I do to improve its extrapolation abilities?
Thanks!
Neural networks are not extrapolation methods (no matter whether recurrent or not); this is completely outside their capabilities. They are used to fit a function on the provided data, and they are completely free in how they behave outside the subspace populated by the training points. So, in a not very strict sense, one should think of them as an interpolation method.
To make things clear: a neural network should be capable of generalizing the function inside the subspace spanned by the training samples, but not outside of it.
A neural network is trained only in the sense of consistency with the training samples, while extrapolation is something completely different. A simple example from "H. Lohninger: Teach/Me Data Analysis, Springer-Verlag, Berlin-New York-Tokyo, 1999. ISBN 3-540-14743-8" shows how NNs behave in this context:
All of these networks are consistent with the training data, but can do anything outside of this subspace.
You should rather reconsider your problem's formulation, and if it can be expressed as a regression or classification problem then you can use a NN; otherwise you should think about some completely different approach.
The only things which can be done to somehow "correct" what is happening outside the training set are to:
add artificial training points in the desired subspace (but this simply grows the training set, and again - outside of this new set, the network's behaviour is "random");
add strong regularization, which will force the network to create a very simple model; but a model's complexity will not guarantee any extrapolation strength, as two models of exactly the same complexity can have, for example, completely different limits at -/+ infinity.
Combining the above two steps can help build a model which to some extent "extrapolates", but this, as stated before, is not the purpose of a neural network.
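A quick way to see this behaviour in code (using scikit-learn's MLPRegressor as a stand-in for the recurrent net from the question; the hyperparameters are arbitrary):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

f = lambda x: 1.0 / (1.0 + x ** 2)

x_train = np.linspace(-5, 5, 20).reshape(-1, 1)          # 20 points in [-5, 5], as in the question
net = MLPRegressor(hidden_layer_sizes=(20, 20), solver="lbfgs", max_iter=20000)
net.fit(x_train, f(x_train).ravel())

print(net.predict([[1.7]]), f(1.7))     # inside the range: typically close to the true value
print(net.predict([[15.0]]), f(15.0))   # outside the range: essentially arbitrary
```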
As far as I know this is only possible with networks which have the echo state property. See Echo State Networks on scholarpedia.org.
These networks are designed for arbitrary signal learning and are capable of remembering their behavior.
You can also take a look at this tutorial.
The nature of your post(s) suggests that what you're referring to as "extrapolation" would be more accurately defined as "sequence recognition and reproduction." Training networks to recognize a data sequence with or without time-series (dt) is pretty much the purpose of a Recurrent Neural Network (RNN).
The training function shown in your post has output limits governed by 0 and 1 (or -1, since x is effectively abs(x) in the context of that function). So, first things first, be certain your input layer can easily distinguish between negative and positive inputs (if it must).
Next, the number of neurons is not nearly as important as how they are layered and interconnected. How many of the 7 were used for the sequence inputs? What type of network was used and how was it configured? Network feedback will reveal the ratios, proportions, relationships, etc. and aid in adjusting the network weights to match the sequence. Feedback can also take the form of a forward-feed, depending on the type of network used to create the RNN.
Producing an 'observable' network for the decaying function 1/(1+x^2) should be a decent exercise to cut your teeth on RNNs. 'Observable' means the network is capable of producing results for any input value(s), even though its training data is (far) smaller than the set of all possible inputs. I can only assume that this was your actual objective as opposed to "extrapolation."
I am working on a project for uni which requires markerless relative pose estimation. To do this I take two images and match n features in certain locations of the picture. From these points I can find vectors between them which, combined with distance, can be used to estimate the new position of the camera.
The project is required to be deployable on mobile devices, so the algorithm needs to be efficient. A thought I had to make it more efficient would be to take these vectors and feed them into a neural network which could output an estimate of the xyz movement vector based on the input.
The question I have is whether a NN could be appropriate for this situation if sufficiently trained, and, if so, how I would work out the number of hidden units I need and what the best activation function would be.
Using a neural network for your application can very well work, however, I feel you will need a lot of training samples to allow the network to generalize. Of course, this also depends on the type and number of poses you're dealing with. It sounds to me that with some clever maths it might be possible to derive the movement vector directly from the input vector -- if by any chance you can come up with a way of doing that (or provide more information so others can think about it too), that would very much be preferred, as in that case you would include prior knowledge you have about the task instead of relying on the NN to learn it from data.
If you decide to go ahead with the NN approach, keep the following in mind:
Divide your data into training and validation set. This allows you to make sure that the network doesn't overfit. You train using the training set and determine the quality of a particular network using the error on the validation set. The ratio of training/validation depends on the amount of data you have. A large validation set (e.g., 50% of your data) will allow more precise conclusions about the quality of the trained network, but often you have too few data to afford this. However, in any case I would suggest to use at least 10% of your data for validation.
As to the number of hidden units, a rule of thumb is to have at least 10 training examples for each free parameter, i.e., each weight. So assuming you have a 3-layer network with 4 inputs, 10 hidden units, and 3 output units, where each hidden unit and the output units additionally have a bias weight, you would have (4+1) * 10 + (10+1) * 3 = 83 free parameters/weights. In general you should experiment with the number of hidden units and also the number of hidden layers. From my experience 4-layer networks (i.e., 2 hidden layers) work better than 3-layer networks, but that depends on the problem. Since you also have the validation set, you can find out what network architecture and size works without having to fear overfitting.
For the activation function you should use some sigmoid function to allow for non-linear behavior. I like the hyperbolic tangent for its symmetry, but from my experience you can just as well use the logistic function.
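Putting the example numbers above into a small PyTorch sketch (4 inputs, 10 tanh hidden units, 3 outputs, roughly 10% of the data held out for validation; the data here is random stand-in data):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

# Example sizes from above: 4 inputs, 10 tanh hidden units, 3 outputs (xyz movement)
net = nn.Sequential(nn.Linear(4, 10), nn.Tanh(), nn.Linear(10, 3))

X = torch.randn(1000, 4)        # stand-in for the matched feature vectors
Y = torch.randn(1000, 3)        # stand-in for the xyz movement vectors

data = TensorDataset(X, Y)
n_val = len(data) // 10                                      # hold out ~10% for validation
train_set, val_set = random_split(data, [len(data) - n_val, n_val])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
mse = nn.MSELoss()

for epoch in range(100):
    for xb, yb in DataLoader(train_set, batch_size=32, shuffle=True):
        opt.zero_grad()
        mse(net(xb), yb).backward()
        opt.step()
    with torch.no_grad():
        # validation error: compare across architectures and watch for overfitting
        val_err = sum(mse(net(xv), yv).item() for xv, yv in DataLoader(val_set, batch_size=256))
```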