I am implementing a custom connection between two different Keras layers. The neural network begins as follows:
model = tf.keras.Sequential()
c1 = model.add(Conv2D(6, kernel_size=[5,5], strides=(stride,stride), padding="valid", input_shape=(32,32,1),
activation = 'tanh'))
s2 = model.add(AveragePooling2D(pool_size=2, strides=2, padding='valid'))
Now, the output of s2 has a size of 14*14*6.
Here, I want to apply my custom connection to convolution layer c3, which has an output size of 10*10*16 (that is, 16 filters need to be applied on s2 of size 14*14*6 to get an output of 10*10*16). For this, I need to use kernel_size = 5*5, filters = 16, stride = 1, and padding="valid".
However, not all 6 feature maps of s2 are connected to the 16 feature maps of c3. The connections are explained as given here.
For example (from the explanation of the link given above), to build the first feature map of C3, you convolve 3 of your input maps (of s2, of size 14*14*6) with 5x5 filters, which gives you 3 10x10 maps that are summed up to give your first feature map, which is then of size 10x10.
I read somewhere that we need to use the Functional API to build this.
But I am not sure how to proceed further. Can someone help with implementing this?
My initial approach of implementing this is as follows:
from keras.models import Model
from keras.layers import Conv2D, Input, Concatenate, Lambda, Add
inputTensor = Input(shape=(14, 14,6))
stride =1
group0_a = Lambda(lambda x: x[:,:,0])(inputTensor)
group0_b = Lambda(lambda x: x[:,:,1])(inputTensor)
group0_c = Lambda(lambda x: x[:,:,2])(inputTensor) # Take 0,1,2 feature map of s2
conv_group0_a = Conv2D(1, kernel_size=[5,5], strides=(stride,stride), padding="valid", activation = 'tanh')(group0_a)
conv_group0_b = Conv2D(1, kernel_size=[5,5], strides=(stride,stride), padding="valid", activation = 'tanh')(group0_b)
conv_group0_c = Conv2D(1, kernel_size=[5,5], strides=(stride,stride), padding="valid", activation = 'tanh')(group0_c) # Applying convolution on each of the 0, 1, 2 feature maps of s2 with distinct kernels
added_0 = Add()([conv_group0_a, conv_group0_b, conv_group0_c]) #adding all the three to get one of the 10*10*16
#Repeat this for 16 neurons of c3 and then finally
output_layer = Concatenate()([]) #concatenate them
Mymodel = Model(inputTensor,output_layer)
I want to know if my approach is correct (I suspect it is not, because I am getting too many errors). So, I need help recreating the custom connection as explained above. Any help is appreciated.
The above code is correct; the only change I made is group0_a = Lambda(lambda x: x[:,:,0:1])(inputTensor), that is, instead of slicing x as x[:,:,0] I sliced it as x[:,:,0:1] so that the sliced axis is kept.
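For completeness, here is a minimal sketch of one way to assemble the whole layer, assuming the LeNet-5 connection scheme is supplied as a list of channel-index groups (the four groups below are illustrative; the real C3 table has 16 entries). Note that with a (batch, height, width, channels) tensor, the channel slice needs all four axes:
from keras.models import Model
from keras.layers import Conv2D, Input, Concatenate, Lambda, Add

# illustrative subset of the 16-entry C3 connection table
connection_table = [[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5]]

inputTensor = Input(shape=(14, 14, 6))
feature_maps = []
for channels in connection_table:
    convs = []
    for c in channels:
        # slice one channel while keeping the channel axis (c:c+1, not c);
        # the default argument c=c pins the loop variable inside the lambda
        ch = Lambda(lambda x, c=c: x[:, :, :, c:c+1])(inputTensor)
        convs.append(Conv2D(1, kernel_size=[5, 5], strides=(1, 1),
                            padding="valid", activation='tanh')(ch))
    feature_maps.append(Add()(convs))  # one 10x10x1 feature map of c3

output_layer = Concatenate()(feature_maps)  # (10, 10, number of groups)
Mymodel = Model(inputTensor, output_layer)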
I have to add a k-max pooling layer to a CNN model to detect fake reviews. Can you please let me know how to implement it using Keras?
I searched the internet but found no good resources.
As per this paper, k-max pooling is a pooling operation that is a generalisation of the max pooling over the time dimension used in the Max-TDNN sentence model, and is different from the local max pooling operations applied in a convolutional network for object recognition (LeCun et al., 1998). The k-max pooling operation makes it possible to pool the k most active features in p that may be a number of positions apart; it preserves the order of the features, but is insensitive to their specific positions.
There are a few resources which show how to implement it in TensorFlow or Keras:
How to implement K-Max pooling in Tensorflow or Keras?
https://github.com/keras-team/keras/issues/373
New Pooling Layers For Varying-Length Convolutional Networks
Keras implementation of K-Max Pooling with TensorFlow Backend
There seems to be a solution here, as @Anubhav_Singh suggested. This response got almost five times more thumbs up (24) than thumbs down (5) on the GitHub Keras issues link. I am just quoting it as-is here and letting people try it out and say whether it worked for them or not.
Original author: arbackus
from keras.engine import Layer, InputSpec
from keras.layers import Flatten
import tensorflow as tf

class KMaxPooling(Layer):
    """
    K-max pooling layer that extracts the k-highest activations from a sequence (2nd dimension).
    TensorFlow backend.
    """
    def __init__(self, k=1, **kwargs):
        super().__init__(**kwargs)
        self.input_spec = InputSpec(ndim=3)
        self.k = k

    def compute_output_shape(self, input_shape):
        return (input_shape[0], (input_shape[2] * self.k))

    def call(self, inputs):
        # swap last two dimensions since top_k will be applied along the last dimension
        shifted_input = tf.transpose(inputs, [0, 2, 1])
        # extract top_k, returns two tensors [values, indices]
        top_k = tf.nn.top_k(shifted_input, k=self.k, sorted=True, name=None)[0]
        # return flattened output
        return Flatten()(top_k)
Note: it was reported to be running very slow (though it worked for people).
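For reference, a minimal usage sketch of the layer above; the Conv1D sizes here are my own illustrative assumptions, not from the original answer:
from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
model.add(Conv1D(64, 3, activation='relu', input_shape=(100, 50)))  # output (None, 98, 64)
model.add(KMaxPooling(k=5))  # top 5 per filter, flattened to (None, 64 * 5)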
Check this out. It is not thoroughly tested, but works fine for me. Let me know what you think. P.S. Latest TensorFlow version.
tf.nn.top_k does not preserve the order of occurrence of values, so that is the thing that needs to be worked upon.
import tensorflow as tf
from tensorflow.keras import layers

class KMaxPooling(layers.Layer):
    """
    K-max pooling layer that extracts the k-highest activations from a sequence (2nd dimension).
    TensorFlow backend.
    """
    def __init__(self, k=1, axis=1, **kwargs):
        super(KMaxPooling, self).__init__(**kwargs)
        self.input_spec = layers.InputSpec(ndim=3)
        self.k = k
        assert axis in [1, 2], ('expected dimensions (samples, filters, convolved_values), '
                                'cannot fold along samples dimension or axis not in list [1,2]')
        self.axis = axis
        # need to switch the pooled axis with the last element
        # to perform the transpose, since top_k works on the last axis
        self.transpose_perm = [0, 1, 2]  # default
        self.transpose_perm[self.axis] = 2
        self.transpose_perm[2] = self.axis

    def compute_output_shape(self, input_shape):
        input_shape_list = list(input_shape)
        input_shape_list[self.axis] = self.k
        return tuple(input_shape_list)

    def call(self, x):
        # swap the pooled axis to the end so top_k runs along it
        transposed_for_topk = tf.transpose(x, perm=self.transpose_perm)
        # extract top_k, returns two tensors [values, indices]
        top_k_vals, top_k_indices = tf.math.top_k(transposed_for_topk,
                                                  k=self.k, sorted=True,
                                                  name=None)
        # sort the indices to maintain the order of values as in the paper
        sorted_top_k_ind = tf.sort(top_k_indices)
        # convert the per-row indices into indices into the flattened tensor
        flatten_seq = tf.reshape(transposed_for_topk, (-1,))
        shape_seq = tf.shape(transposed_for_topk)
        len_seq = tf.shape(flatten_seq)[0]
        indices_seq = tf.range(len_seq)
        indices_seq = tf.reshape(indices_seq, shape_seq)
        indices_gather = tf.gather(indices_seq, 0, axis=-1)
        indices_sum = tf.expand_dims(indices_gather, axis=-1)
        sorted_top_k_ind += indices_sum
        k_max_out = tf.gather(flatten_seq, sorted_top_k_ind)
        # transpose back to the original layout; the pooled axis now has size k
        transposed_back = tf.transpose(k_max_out, perm=self.transpose_perm)
        return transposed_back
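A quick shape check of this layer, with illustrative sizes of my own choosing (assumes a recent TensorFlow):
x = tf.random.normal([2, 10, 8])   # (samples, seq_len, filters)
out = KMaxPooling(k=3, axis=1)(x)
print(out.shape)                   # (2, 3, 8): axis 1 reduced to k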
Here is my implementation of k-max pooling as explained in the comment of @Anubhav Singh above (the order of the top-k values is preserved):
def test60_simple_test(a):
    # extract top_k, returns two tensors [values, indices]
    res = tf.nn.top_k(a, k=3, sorted=True, name=None)
    # sort the top-k indices to restore the original order of occurrence
    b = tf.sort(res[1], axis=0, direction='ASCENDING', name=None)
    e = tf.gather(a, b)
    return e
a = tf.constant([7, 2, 3, 9, 5], dtype = tf.float64)
print('*input:',a)
print('**output', test60_simple_test(a))
The result:
*input: tf.Tensor([7. 2. 3. 9. 5.], shape=(5,), dtype=float64)
**output tf.Tensor([7. 9. 5.], shape=(3,), dtype=float64)
Here is a PyTorch implementation of k-max pooling:
import torch

def kmax_pooling(x, dim, k):
    # indices of the k largest values, re-sorted into their original order
    index = x.topk(k, dim=dim)[1].sort(dim=dim)[0]
    return x.gather(dim, index)
Hope it helps.
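A quick check of the helper above, reusing the same illustrative values as the TensorFlow example:
x = torch.tensor([[7., 2., 3., 9., 5.]])
print(kmax_pooling(x, dim=1, k=3))  # tensor([[7., 9., 5.]])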
I am trying to create a list based on my neural network outputs and use it in Tensorflow as a loss function.
Assume that results is a list of size [1, batch_size] that is output by a neural network. I check whether each value of this list is in a specific range passed in as a placeholder called valid_range, and if it is, add 1 to a list; if it is not, add -1. The goal is to make all predictions of the network fall in the range, so the correct predictions form a tensor of all 1s, which I call correct_predictions.
values_list = []
for j in range(batch_size):
    a = results[0, j] >= valid_range[0]
    b = results[0, j] <= valid_range[1]
    c = tf.logical_and(a, b)
    if c == 1:
        values_list.append(1.)
    else:
        values_list.append(-1.)
values_list_tensor = tf.convert_to_tensor(values_list)
correct_predictions = tf.ones([batch_size, ], tf.float32)
Now, I want to use this as a loss function in my network, so that I can force all the predictions to be in the specified range. I try to train like this:
loss = tf.reduce_mean(tf.squared_difference(values_list_tensor, correct_predictions))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
gradients, variables = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, gradient_clip_threshold)
optimize = optimizer.apply_gradients(zip(gradients, variables))
This, however, has a problem and throws an error on the last optimize line, saying:
ValueError: No gradients provided for any variable: ['<tensorflow.python.training.optimizer._RefVariableProcessor object at 0x7f0245d4afd0>',
'<tensorflow.python.training.optimizer._RefVariableProcessor object at 0x7f0245d66050>'
...
I tried to debug this in TensorBoard, and I noticed that the list I am creating does not appear in the graph, so basically the x part of the loss function is not part of the network itself. Is there some way to accurately create a list based on the predictions of a neural network and use it in the loss function in TensorFlow to train the network?
Please help, I have been stuck on this for a few days now.
Edit:
Following what was suggested in the comments, I decided to use an L2 loss function, multiplying it by the binary vector I had from before, values_list_tensor. The binary vector now has values 1 and 0 instead of 1 and -1. This way, when the prediction is in the range the loss is 0; otherwise it is the normal L2 loss. As I am unable to see the values of the tensors, I am not sure if this is correct. However, I can view the final loss and it is always 0, so something is wrong here. I am unsure whether the multiplication is being done correctly and whether values_list_tensor is calculated accurately. Can someone help and tell me what could be wrong?
loss = tf.reduce_mean(tf.nn.l2_loss(tf.matmul(tf.transpose(tf.expand_dims(values_list_tensor, 1)), tf.expand_dims(results[0, :], 1))))
Thanks
To answer the question in the comment: one way to write a piece-wise function is with tf.cond. For example, here is a function that returns 0 on [-1, 1] and x everywhere else:
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32)
y = tf.cond(tf.logical_or(tf.greater(x, 1.0), tf.less(x, -1.0)), lambda : x, lambda : 0.0)
y.eval({x: 1.5}) # prints 1.5
y.eval({x: 0.5}) # prints 0.0
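As a further note on the original problem: values_list is built in a Python loop outside the graph, which is why no gradients flow. A hedged sketch of one way to keep the whole range check inside the graph, using a relu hinge penalty of my own choosing rather than the original binary vector:
low, high = valid_range[0], valid_range[1]
below = tf.nn.relu(low - results[0, :])   # positive only where a prediction is below the range
above = tf.nn.relu(results[0, :] - high)  # positive only where a prediction is above the range
# zero inside the range, smooth and differentiable outside it
loss = tf.reduce_mean(tf.square(below + above))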
According to the original paper on Dropout, said regularisation method can be applied to convolution layers, often improving their performance. The TensorFlow function tf.nn.dropout supports that by having a noise_shape parameter to allow the user to choose which parts of the tensor will drop out independently. However, neither the paper nor the documentation give a clear explanation of which dimensions should be kept independent, and the TensorFlow explanation of how noise_shape works is rather unclear:
only dimensions with noise_shape[i] == shape(x)[i] will make independent decisions.
I would assume that for a typical CNN layer output of the shape [batch_size, height, width, channels] we don't want individual rows or columns to drop out by themselves, but rather whole channels (which would be equivalent to a node in a fully connected NN) independently of the examples (i.e. different channels could be dropped for different examples in a batch). Am I correct in this assumption?
If so, how would one go about implementing dropout with such specificity using the noise_shape parameter? Would it be:
noise_shape=[batch_size, 1, 1, channels]
or:
noise_shape=[1, height, width, 1]
From here:
For example, if shape(x) = [k, l, m, n] and noise_shape = [k, 1, 1, n], each batch and channel component will be kept independently and each row and column will be kept or not kept together.
The code may help explain this.
noise_shape = noise_shape if noise_shape is not None else array_ops.shape(x)
# uniform [keep_prob, 1.0 + keep_prob)
random_tensor = keep_prob
random_tensor += random_ops.random_uniform(noise_shape,
                                           seed=seed,
                                           dtype=x.dtype)
# 0. if [keep_prob, 1.0) and 1. if [1.0, 1.0 + keep_prob)
binary_tensor = math_ops.floor(random_tensor)
ret = math_ops.div(x, keep_prob) * binary_tensor
ret.set_shape(x.get_shape())
return ret
The line random_tensor += relies on broadcasting. When noise_shape[i] is set to 1, all elements along that dimension are offset by the same random value drawn from [0, 1). So when noise_shape=[k, 1, 1, n], each row and column in a feature map is kept or dropped together, while each example (batch entry) and each channel receives a different random value, and each of them is kept or dropped independently.
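A minimal sketch to observe this behaviour, assuming a current TensorFlow version where tf.nn.dropout takes rate instead of keep_prob; the shapes and rate are illustrative:
import tensorflow as tf

x = tf.ones([2, 4, 4, 3])  # [batch, height, width, channels]
# noise_shape=[k, 1, 1, n]: whole channels are kept or dropped per example
y = tf.nn.dropout(x, rate=0.5, noise_shape=[2, 1, 1, 3])
print(y[0, :, :, 0])  # a dropped channel is zero across all rows and columns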
I am trying to compute a weighted output from multiple parallel models using Keras' Merge layer. I'm using the Theano backend.
I have L parallel models (Ci); each of their output layers is a k-sized softmax.
There is one further model (N) whose output is an L-sized softmax.
Here is what I have so far:
Parallel models (Ci) each with k dimension in the output layer:
model.add(Dense(K, activation='softmax', W_regularizer=l2(0.001),init='normal'))
The weighing model (N), output layer:
model.add(Dense(L, activation='softmax', W_regularizer=l2(0.001), init='normal'))
The merger is as follows:
model.add(Merge(layers=model_group,
                mode=lambda model_group: self.merge_fun(model_group, L),
                output_shape=(None, k)))
where "model_group" is a (L+1)-length list [N, C1, C2, ..., CL], and merge_fun's signature is:
def merge_fun(self, model_group, L):
Mathematically, I would like the output of the merged layer to be a weighted sum:
out = N[1]*[C11, C12, C13, ..., C1k] + N[2]*[C21, C22, C23, ..., C2k] + ... + N[L]*[CL1, CL2, CL3, ..., CLk],
where out is a vector of size k.
How can I use the Merge layer to achieve this ?
I know that the magic would probably have to happen in merge_fun, but I am not sure how to perform matrix algebra in Keras. The tensor parameters don't have a shape attribute, only a keras_shape = (None, K or L), and I am not sure how to combine the parallel models' outputs into a matrix.
I tried using a local evaluation of the following expressions:
K.concatenate([model_group[1], model_group[2]], axis=0)*model_group[0]
and
model_group[0] * K.concatenate([model_group[1], model_group[2]], axis=0)
both of which didn't throw an error, so I can't use this as a guide. After the multiplication, the returned result did not have the keras_shape variable, so I'm not sure what its shape is.
Any suggestions ?
What I advise is to use the functional API in the following manner:
Define the L output models:
softmax_1 = Dense(K, activation='softmax', ...)(input_to_softmax_1)
softmax_2 = Dense(K, activation='softmax', ...)(input_to_softmax_2)
...
softmax_L = Dense(K, activation='softmax', ...)(input_to_softmax_L)
Define the merge softmax:
merge_softmax= Dense(L, activation='softmax', ...)(input_to_merge_softmax)
merge_softmax = Reshape((1, L))(merge_softmax)
Merge and reshape the bag of L models:
bag_of_models = merge([softmax_1, ..., softmax_L], mode = 'concat')
bag_of_models = Reshape((L, K))(bag_of_models)
Compute the final merged softmax:
final_result = merge([bag_of_models, merge_softmax], mode = 'dot', dot_axes = [1, 2])
final_result = Reshape((K, ))(final_result)
Of course, depending on your topology, different tensors might be the same (e.g. the input to different softmaxes). I tested this on my machine, but due to extensive refactoring I might have made a mistake, so if you find one, please inform me.
The solution with Sequential is much less clear and a little bit cumbersome, but if you want one, please write in the comments and I will update my answer.
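Since the legacy merge/Merge API has been removed from recent Keras versions, here is a self-contained sketch of the same weighted sum with the current functional API; the sizes (K_DIM, L_MODELS, the input width) and the single shared input are illustrative assumptions:
import tensorflow as tf
from tensorflow.keras import layers, Model

K_DIM, L_MODELS = 4, 3
inp = layers.Input(shape=(8,))

# the L parallel k-way softmaxes (Ci)
softmaxes = [layers.Dense(K_DIM, activation='softmax')(inp) for _ in range(L_MODELS)]

# the L-way weighting softmax (N), reshaped to a (1, L) row vector
weights = layers.Dense(L_MODELS, activation='softmax')(inp)
weights = layers.Reshape((1, L_MODELS))(weights)

# stack the Ci outputs into an (L, k) matrix
bag = layers.Concatenate()(softmaxes)         # (batch, L*k)
bag = layers.Reshape((L_MODELS, K_DIM))(bag)  # (batch, L, k)

# weighted sum: (1, L) . (L, k) -> (1, k)
out = layers.Dot(axes=(2, 1))([weights, bag])
out = layers.Reshape((K_DIM,))(out)

model = Model(inp, out)
model.summary()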
I asked a similar question on Cross Validated about the image interpretation. I'm moving my detailed question here to include some code details.
The results I'm getting are not fully desirable, so maybe you have faced this issue before and can help me find it out.
It is a fully convolutional neural network ("no fully connected part").
Training part
First, the images are transposed to match the convolution function, i.e. (batch_no, img_channels, width, height):
input.transpose(0, 3, 1, 2)
Learning is optimised with a learning rate of 3e-6, He-uniform initialisation, and Nesterov momentum for 500 epochs, until this convergence:
Training cost: 1.602449
Training loss: 4.610442
validation error: 5.126761
Test loss: 5.885714
Backward part
Loading Image
jpgfile = np.array(Image.open(join(testing_folder,img_name)))
Reshape to one batch
batch = jpgfile.reshape(1, jpgfile.shape[0], jpgfile.shape[1], 3)
Run the model to extract the first feature map after activation using ReLU:
output = classifier.layer0.output
Test_model = theano.function(
    inputs=[x],
    outputs=output,
)
layer_Fmaps = Test_model(test_set_x)
Apply the backward model to reconstruct the image using only the activated neurons:
bch, ch, row, col = layer_Fmaps.shape
output_grad_reshaped = layer_Fmaps.reshape((-1, 1, row, col))
output_grad_reshaped = output_grad_reshaped[0].reshape(1,1,row,col)
input_shape = (1, 3, 226, 226)
W = classifier.layer0.W.get_value()[0].reshape(1,3,7,7)
kernel = theano.shared(W)
inp = T.tensor4('inp')
deconv_out = T.nnet.abstract_conv.conv2d_grad_wrt_inputs(
    output_grad=inp,
    filters=kernel,
    input_shape=input_shape,
    filter_shape=(1, 3, 7, 7),
    border_mode=(0, 0),
    subsample=(1, 1)
)
f = theano.function(
    inputs=[inp],
    outputs=deconv_out)
f_out = f(output_grad_reshaped)
deconved_relu = T.nnet.relu(f_out)[0].transpose(1,2,0)
deconved = f_out[0].transpose(1,2,0)
Here we have two resulting images: the first is the transposed image without activation, and the second with ReLU applied, since the kernels might have some negative weights.
It is clear from the transposed-convolution image that this kernel has learned to detect some useful feature related to this image. But the reconstruction part is breaking the image's colour scheme during the transposed convolution. It might be because the pixel values are small floats. Do you see where the problem is here?
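If the small float values are indeed the culprit, one common check is to min-max normalise the reconstruction into the displayable 0-255 range before viewing it; a hedged sketch, assuming deconved is the NumPy array produced above:
import numpy as np

img = deconved - deconved.min()                  # shift so the minimum is 0
img = (img / img.max() * 255.0).astype(np.uint8) # rescale to 0-255 for display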