TensorFlow: how a pooling layer is connected to a convolutional layer

Is there a way to selectively connect the feature maps of a pooling layer to the feature maps of a (sequential) convolutional layer?
In the paper "Gradient-Based Learning Applied to Document Recognition" [Proc. of the IEEE, Nov. 1998], LeCun et al. describe how a subsampling layer is selectively connected to a subsequent convolutional layer.
In the basic TF example (A guide to TF layers: Building a CNN) conv2 is connected to pool1 as:
conv2 = tf.layers.conv2d(inputs=pool1, ...
However, I'd like to selectively connect pool1 to conv2 in a similar way to how the LeCun paper connects S2 to C3 (see Table 1).
Thanks!

I haven't examined this paper. I just want to share that you can manipulate your pool1 tensor any way you like before passing it to the conv2d layer, e.g. split it in two and connect each part to its own conv layer.
For instance, that's what the LSTM cell does internally (see the implementation here).
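For example, here is a rough sketch against the TF 1.x tf.layers API used in that tutorial (the two-way split, filter counts and kernel sizes are made up, not the S2-to-C3 table from the paper):
# split pool1 along the channel axis and give each half its own conv layer
pool1_a, pool1_b = tf.split(pool1, num_or_size_splits=2, axis=3)
conv2_a = tf.layers.conv2d(inputs=pool1_a, filters=32, kernel_size=[5, 5],
                           padding="same", activation=tf.nn.relu)
conv2_b = tf.layers.conv2d(inputs=pool1_b, filters=32, kernel_size=[5, 5],
                           padding="same", activation=tf.nn.relu)
# stitch the resulting feature maps back together for the next layer
conv2 = tf.concat([conv2_a, conv2_b], axis=3)
To reproduce the exact connection scheme of Table 1, you could instead build each group of C3 maps from a tf.gather over the specific S2 channels it should see (e.g. tf.gather(pool1, [0, 1, 2], axis=3)) and concatenate the results.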


Output layer in image classification

I am building a simple multi-layer perceptron that takes an image as input and outputs the classification of the image. My image dataset is composed of grayscale images of size (n x m). I chose n*m input neurons for the input layer (in reality I am reducing dimensionality with PCA, but let's keep it simple), followed by some intermediate hidden layers. Now what should I choose as my output layer? How many neurons, and why? My classification uses, say, L different classes (i.e., L different types of images). Should I use a single output neuron?
Since you have L different classes you should have L output neurons; in Keras it would be:
...
previous_layer = tf.keras.layers.Dense(4096)(...)
output = tf.keras.layers.Dense(self.nb_class)(previous_layer)
If you were doing binary classification you would need a sigmoid activation:
output = tf.keras.layers.Activation('sigmoid')(output)
If L > 2, then you would go for a softmax activation:
output = tf.keras.layers.Activation('softmax')(output)
One last thing: you should try some convolutional layers before going for Dense layers. Have a look at the VGG16 architecture.
If you want to work with just a single model (a multi-layer NN):
If L = 2, the number of neurons in the last layer can be just one, with a sigmoid activation (the most common approach). You can also avoid the sigmoid and simply apply a threshold to do the binary classification.
If L > 2, the number of neurons should be L, with a softmax activation.
A special case is multi-label classification, in which a single sample may belong to several classes at once. In that case, use L neurons with sigmoid activations.
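As a minimal Keras sketch of these three cases (the input dimension, hidden-layer size and nb_classes below are placeholders):
import tensorflow as tf

nb_classes = 5                                    # placeholder

inputs = tf.keras.Input(shape=(256,))             # e.g. PCA-reduced image features
hidden = tf.keras.layers.Dense(128, activation='relu')(inputs)

# L = 2: a single sigmoid unit, trained with binary cross-entropy
binary_out = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)

# L > 2, one class per image: L softmax units, trained with categorical cross-entropy
multiclass_out = tf.keras.layers.Dense(nb_classes, activation='softmax')(hidden)

# multi-label, several classes may apply: L sigmoid units, binary cross-entropy per class
multilabel_out = tf.keras.layers.Dense(nb_classes, activation='sigmoid')(hidden)

model = tf.keras.Model(inputs, multiclass_out)    # pick the head that matches your problem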

How to modify backpropagation for a standard multilayer network including a scalar gain at each layer?

Consider a standard multilayer network that includes a scalar gain at each layer. The net input at layer m would be computed as:
n^m = β^m [W^m a^{m-1} + b^m]
where β^m is the scalar gain at layer m. This gain would be trained like the weights and biases of the network.
How can I modify the backpropagation algorithm for this new network?
What would be the new equation added to update β^m?
This is exercise E11.13 from the book Neural Network Design (2nd Edition) by Martin T. Hagan, Howard B. Demuth, Mark H. Beale, and Orlando De Jesus.
I have written the answer in LaTeX
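For reference, here is a sketch of how the standard equations change (using the sensitivity notation s^m ≡ ∂F/∂n^m for the performance index F, as in the book; please verify against the official solution):
Forward pass with the gain:
n^m = β^m [W^m a^{m-1} + b^m],    a^m = f^m(n^m)
The backward recursion for the sensitivities picks up the gain of the next layer:
s^m = β^{m+1} · diag(f'^m(n^m)) · (W^{m+1})^T s^{m+1}
The weight and bias gradients each pick up a factor β^m, and the gain gets its own gradient:
∂F/∂W^m = β^m s^m (a^{m-1})^T
∂F/∂b^m = β^m s^m
∂F/∂β^m = (s^m)^T [W^m a^{m-1} + b^m]
so the steepest-descent update added for the gain (learning rate α) is:
β^m(k+1) = β^m(k) − α (s^m)^T [W^m a^{m-1} + b^m]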

Can the number of units in NN input layer be different than the number of features in the data?

Based on the TensorFlow Keras API tutorial:
model = keras.Sequential([
    keras.layers.Dense(10, activation='softmax', input_shape=(32,)),
    keras.layers.Dense(10, activation='softmax')
])
I can't understand why the number of units in the input layer is 10 while the input shape is 32. There are also many examples like this one in the TensorFlow tutorials.
This is a rather common confusion among new practitioners, and not without reason: the answer, as has already been hinted at in the comments, is that in the Keras Sequential API there is an implicit input layer, determined by the input_shape argument of the first explicit layer.
This is directly visible in the Keras Functional API (check the example in the docs), where Input is an explicit layer itself, and in which your model would be written as:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(32,))                    # input layer
x = Dense(10, activation='softmax')(inputs)    # hidden layer
outputs = Dense(10, activation='softmax')(x)   # output layer
model = Model(inputs, outputs)
i.e. your model is actually an example of a "good old" neural net with three layers (input, hidden, and output), even though it looks like a two-layer net in the Keras Sequential API.
(BTW, and irrelevant to the question, it does not make much sense to have softmax as activation for your hidden layer.)
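One quick way to see the implicit input layer (assuming TF 2.x-style tf.keras) is to check the parameter counts reported by model.summary():
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(10, activation='softmax', input_shape=(32,)),
    keras.layers.Dense(10, activation='softmax')
])
model.summary()
# First Dense layer: (32 inputs + 1 bias) * 10 units = 330 parameters,
# i.e. its 10 units are fed by the 32-dimensional implicit input layer.
# Second Dense layer: (10 + 1) * 10 = 110 parameters.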

Can you copy the weights from just the first 3 layers of a network? Not exactly finetuning, but almost reshaping

In Caffe, I was looking to use the pretrained weights of the AlexNet architecture trained on the ImageNet dataset for just the first two layers, and I would like to add a softmax classifier after those two layers. I was wondering how I could go about extracting only those first two layers' weights from a weight file that contains a much larger network structure (the true "deep" AlexNet structure).
To add to Shai's answer: in case you don't want to carry around the full weights file, you can extract the weights of just the desired layers with net surgery:
import numpy as np
import caffe

net = caffe.Net(prototxt, caffemodel, caffe.TRAIN)
outnet = caffe.Net(predefined_prototxt_with_desired_layers_only, caffe.TRAIN)
layers_to_copy = ['conv1', 'conv2', 'conv3']
for layer in layers_to_copy:
    # copy both the weights and the bias, in case a bias exists
    for i in range(len(net.params[layer])):
        outnet.params[layer][i].data[...] = np.copy(net.params[layer][i].data[...])
outnet.save(new_caffemodel_name)
Caffe uses the layer's "name" to assign weights to the layer's blobs.
If you change the top layers' "name"s, then Caffe will not copy the weights for them from the original .caffemodel file.

One dimensional data with CNN

Just wondering whether anybody has done this? I have a dataset that is one-dimensional (not sure whether that's the right word choice, though). Unlike the usual CNN inputs, which are images (so 2D), my data has only one dimension. An example would be:
instance1 - feature1, feature2,...featureN
instance2 - feature1, feature2,...featureN
...
instanceM - feature1, feature2,...featureN
How do I use my dataset with CNNs? The ones I have looked at (like AlexNet and GoogLeNet) accept images in the form:
instance1 - 2d feature matrix1
instance2 - 2d feature matrix2
...
instanceM - 2d feature matrixM
Appreciate any help on it.
Thanks!
If your data were spatially related (you said it isn't), then you'd feed it to a convnet (or, specifically, a conv2d layer) with shape 1xNx1 or Nx1x1 (rows x cols x channels).
If this isn't spatial data at all - you just have N non-spatially-related features, then the shape should be 1x1xN.
For completeness, I should point out that if your data really is non-spatial, then there's really no point in using a convolutional layer/net. You could shape it as 1x1xN and then use 1x1 convolutions, but since a 1x1 convolution does the exact same thing as a fully-connected (aka dense aka linear) layer, you might as well just use that instead.
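For concreteness, here is a rough Keras sketch of both options (N, the layer sizes and the class count are placeholders; Conv1D is used instead of a 1xN conv2d, which amounts to the same thing here):
import numpy as np
import tensorflow as tf

N = 64                                            # features per instance (placeholder)
X = np.random.rand(100, N).astype('float32')      # 100 instances with N features each

# Option 1: the features are spatially/sequentially related -> treat each
# instance as a length-N signal with one channel and convolve over it.
conv_model = tf.keras.Sequential([
    tf.keras.layers.Reshape((N, 1), input_shape=(N,)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation='relu'),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
print(conv_model(X[:2]).shape)                    # (2, 10)

# Option 2: the features are not spatially related -> skip convolutions
# entirely and use fully-connected layers, as suggested above.
dense_model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(N,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
print(dense_model(X[:2]).shape)                   # (2, 10)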
