I have trained a CNN model on my data with shuffled images. The first convolution layer is defined as:
with tf.name_scope("conv1") as scope:
    image = tf.placeholder(tf.float32, [FLAGS.batch_size, 32, 32, 1])
    image = tf.reshape(image, [FLAGS.batch_size, 32, 32, 1])
    print(image)
    w_conv1 = weight_variable([7, 7, 1, 50])
    tf.summary.histogram('w_conv1', w_conv1)
    conv = tf.nn.conv2d(image, w_conv1, [1, 1, 1, 1], padding='SAME')
    b_conv1 = bias_variable([50])
    tf.summary.histogram('b_conv1', b_conv1)
    conv1 = tf.nn.bias_add(conv, b_conv1)
    tf.summary.image('conv1_img', conv1)  # this line triggers the error
    print('conv1:')
    print(conv1)
If I remove the line tf.summary.image('conv1_img', conv1), the program runs successfully. When I add this line, I get the error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'conv1/Placeholder' with dtype float and shape [30,32,32,1]
Why does this happen?
The summary you define with tf.summary.image is automatically added to a collection of summaries.
You presumably run the op summaries = tf.summary.merge_all(key='summaries') to collect all the summaries added to the collection named summaries (the default summary collection).
Then, once you run this op within a session with sess.run(summaries), every summary previously defined is executed.
The histogram summaries depend on the model parameters values only, thus they don't need any external data to be computed.
Instead, the tf.summary.image('conv1_img',conv1) draws the output of the conv1 op that needs a placeholder (image) to be computed.
Thus, you should execute the summary op feeding the graph with the image placeholder:
sess.run(summaries, feed_dict={image: <your image here>})
A suggestion: let the placeholder be a placeholder. With the statements
image = tf.placeholder(tf.float32, [FLAGS.batch_size, 32, 32, 1])
image = tf.reshape(image, [FLAGS.batch_size, 32, 32, 1])
you're first defining image as a placeholder and then overwriting the Python variable image with a TensorFlow reshape operation.
Thus, when you use the feed_dict parameter to inject a value for the image placeholder into the computational graph, you're actually feeding a TensorFlow op (and so you'd have to feed an already reshaped value to make it work).
It's better to let the placeholder be a placeholder:
image_placeholder = tf.placeholder(tf.float32, [FLAGS.batch_size, 32, 32, 1])
image = tf.reshape(image_placeholder, [FLAGS.batch_size, 32, 32, 1])
and then use it to correctly feed the graph:
sess.run(<your ops>, feed_dict={image_placeholder: <your image>})
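Putting it together, here is a minimal sketch of the corrected pattern (TF 1.x style; weight_variable, bias_variable, FLAGS and batch_of_images are assumed to exist as in your code, and only one feature map is logged because tf.summary.image expects 1, 3 or 4 channels):
import tensorflow as tf

image_placeholder = tf.placeholder(tf.float32, [FLAGS.batch_size, 32, 32, 1])
image = tf.reshape(image_placeholder, [FLAGS.batch_size, 32, 32, 1])

with tf.name_scope("conv1"):
    w_conv1 = weight_variable([7, 7, 1, 50])
    conv = tf.nn.conv2d(image, w_conv1, [1, 1, 1, 1], padding='SAME')
    b_conv1 = bias_variable([50])
    conv1 = tf.nn.bias_add(conv, b_conv1)

# log only the first feature map: image summaries need 1, 3 or 4 channels
tf.summary.image('conv1_img', conv1[..., :1])
summaries = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # the image summary depends on the placeholder, so it must be fed here too
    summary_values = sess.run(summaries, feed_dict={image_placeholder: batch_of_images})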
Hello, I'm using RandomizedSearchCV for hyperparameter tuning of my LSTM. The code works fine with stateful=False. However, I also want to try this with stateful=True, but I'm not sure how to do it.
I arranged my data in a sliding window with shape (211845 datapoints, 4 window size, 16 features).
The following function creates the model architecture:
def create_lstm(dropout_rate=0.0, neurons=32, lr=1e-3):
    lstm = Sequential()
    lstm.add(InputLayer((4, 16)))
    lstm.add(LSTM(neurons, return_sequences=True))
    lstm.add(Dropout(dropout_rate))
    lstm.add(LSTM(neurons))
    lstm.add(Dropout(dropout_rate))
    lstm.add(Dense(neurons // 4, activation='relu'))
    lstm.add(Dense(1))
    lstm.compile(loss='mse',
                 optimizer=Adam(learning_rate=lr),
                 metrics=['mean_squared_error'])
    return lstm
I pass the function to the wrapper
lstm_estimator = KerasRegressor(build_fn=create_lstm, verbose=1)
The following code sets up my parameter grid and the randomized search:
lstm_param_grid = {
    'dropout_rate': [0, 0.2, 0.4],
    'neurons': [32, 64, 128],
    'batch_size': [100, 200, 400],
    'epochs': [50, 100, 150],
    'lr': [1e-3, 1e-4, 1e-5]
}
lstm_RandomGrid = RandomizedSearchCV(estimator=lstm_estimator,
                                     param_distributions=lstm_param_grid,
                                     n_iter=10,
                                     verbose=10,
                                     n_jobs=-1,
                                     cv=5)
In my create_lstm function, the input shape of my data is equal to the window size and the number of features. After the random search I pass the epochs and batch_size arguments when I fit the model. However, with stateful=True you have to define batch_input_shape=(x, y, z).
I'm really not sure what exactly to do now. How can I change my code so that the random search still tests multiple batch sizes? And what exactly are (x, y, z) in my example? I tried (batch_size=100, window size, number of features), but that didn't work out.
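For reference, a minimal sketch of how batch_input_shape is usually laid out for a stateful LSTM (batch_size=100 here is just a hypothetical fixed value; with stateful=True the same batch size must be used for fitting and predicting, and the number of samples has to be divisible by it):
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

batch_size = 100  # fixed for every fit/predict call when stateful=True

model = Sequential()
# batch_input_shape = (batch size, timesteps / window size, features)
model.add(LSTM(32, stateful=True, batch_input_shape=(batch_size, 4, 16)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')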
I'm using the HuggingFace Transformers BERT model, and I want to compute a summary vector (a.k.a. embedding) over the tokens in a sentence, using either the mean or max function. The complication is that some tokens are [PAD], so I want to ignore the vectors for those tokens when computing the average or max.
Here's an example. I initially instantiate a BertTokenizer and a BertModel:
import torch
import transformers
from transformers import AutoTokenizer, AutoModel
transformer_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name, use_fast=True)
model = AutoModel.from_pretrained(transformer_name)
I then input some sentences into the tokenizer and get out input_ids and attention_mask. Notably, an attention_mask value of 0 means that the token was a [PAD] that I can ignore.
sentences = ['Deep learning is difficult yet very rewarding.',
'Deep learning is not easy.',
'But is rewarding if done right.']
tokenizer_result = tokenizer(sentences, max_length=32, padding=True, return_attention_mask=True, return_tensors='pt')
input_ids = tokenizer_result.input_ids
attention_mask = tokenizer_result.attention_mask
print(input_ids.shape) # torch.Size([3, 11])
print(input_ids)
# tensor([[ 101, 2784, 4083, 2003, 3697, 2664, 2200, 10377, 2075, 1012, 102],
# [ 101, 2784, 4083, 2003, 2025, 3733, 1012, 102, 0, 0, 0],
# [ 101, 2021, 2003, 10377, 2075, 2065, 2589, 2157, 1012, 102, 0]])
print(attention_mask.shape) # torch.Size([3, 11])
print(attention_mask)
# tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]])
Now, I call the BERT model to get the 768-D token embeddings (the top-layer hidden states).
model_result = model(input_ids, attention_mask=attention_mask, return_dict=True)
token_embeddings = model_result.last_hidden_state
print(token_embeddings.shape) # torch.Size([3, 11, 768])
So at this point, I have:
token embeddings in a [3, 11, 768] matrix: 3 sentences, 11 tokens, 768-D vector for each token.
attention mask in a [3, 11] matrix: 3 sentences, 11 tokens. A 1 value indicates non-[PAD].
How do I compute the mean / max over the vectors for the valid, non-[PAD] tokens?
I tried using the attention mask as a mask and then called torch.max(), but I don't get the right dimensions:
masked_token_embeddings = token_embeddings[attention_mask==1]
print(masked_token_embeddings.shape) # torch.Size([29, 768]) <-- WRONG. SHOULD BE [3, 11, 768]
pooled = torch.max(masked_token_embeddings, 1)
print(pooled.values.shape) # torch.Size([29]) <-- WRONG. SHOULD BE [3, 768]
What I really want is a tensor of shape [3, 768]. That is, a 768-D vector for each of the 3 sentences.
For max, you can multiply with attention_mask:
pooled = torch.max((token_embeddings * attention_mask.unsqueeze(-1)), axis=1)
For mean, zero out the padded positions, sum along the token axis, and divide by the number of real tokens from attention_mask:
mean_pooled = (token_embeddings * attention_mask.unsqueeze(-1)).sum(axis=1) / attention_mask.sum(axis=-1).unsqueeze(-1)
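As a quick shape check (assuming the token_embeddings and attention_mask computed in the question), both poolings give one 768-D vector per sentence:
print(pooled.values.shape)  # torch.Size([3, 768])
print(mean_pooled.shape)    # torch.Size([3, 768])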
In addition to @Quang's answer, you can have a look at the sentence_transformers Pooling layer.
For max pooling, they do this:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
pooled = torch.max(token_embeddings, 1)[0]
And for mean pooling they do the following:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = input_mask_expanded.sum(1)
sum_mask = torch.clamp(sum_mask, min=1e-9)
pooled = sum_embeddings / sum_mask
The max pooling presented in the accepted answer will suffer when the maximum is negative, and the implementation from sentence_transformers modifies token_embeddings in place, which throws an error when you want to use the embeddings for backpropagation:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
If you're interested in anything backprop-related, you can do something like this:
input_mask_expanded = torch.where(attention_mask == 0, -1e9, 0.).unsqueeze(-1).expand(token_embeddings.size()).float()
pooled = torch.max(token_embeddings + input_mask_expanded, 1)[0]  # padded positions are pushed to a large negative value
It's the same idea of making all masked tokens very small, but it doesn't modify token_embeddings in place while doing so.
Alex is right.
Look at the hidden states for the strings that go into the tokenizer: for different strings, the padding positions will have different embeddings.
So, in order to properly pool the embeddings, you need to ignore those padding vectors.
Let's say you want to get embeddings from the last 4 layers of BERT (as this yields the best classification results):
# iterate over the last 4 layers and get embeddings for
# strings without having embeddings from PAD tokens
m = []
for i in range(len(hidden_states[0])):
    m.append([hidden_states[j + 9][i, :, :][tokens["attention_mask"][i] != 0] for j in range(4)])
# average over all token embeddings
means = []
for i in range(len(hidden_states[0])):
    means.append(torch.stack(m[i]).mean(dim=1))
# stack embeddings for all strings
pooled = torch.stack(means).reshape(-1, 1, 3072)
I am trying to filter a single channel 2D image of size 256x256 using unfold to create 16x16 blocks with an overlap of 8. This is shown below:
# I = [256, 256] image
kernel_size = 16
stride = bx/2
patches = I.unfold(1, kernel_size, int(stride)).unfold(0, kernel_size, int(stride))  # size = [31, 31, 16, 16]
I have started trying to put the image back together with fold, but I'm not quite there yet. I've tried to use view to get the image to 'fit' the way it's supposed to, but I don't see how this would preserve the original image. Perhaps I'm overthinking this.
# patches.shape = [31, 31, 16, 16]
patches = filt_data_block.contiguous().view(-1, kernel_size*kernel_size)  # [961, 256]
patches = patches.permute(1, 0)  # size = [256, 961]
Any help would be greatly appreciated. Thanks very much.
I believe you will benefit from using torch.nn.functional.fold and torch.nn.functional.unfold in this case, as these functions are built specifically for images (or any 4D tensors, that is with shape B X C X H X W).
Let's start with unfolding the image:
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
from sklearn.datasets import load_sample_image #Used to load a sample image
dtype = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
#Load a flower image from sklearn.datasets, crop it to shape 1 X 3 X 256 X 256:
I = torch.from_numpy(load_sample_image('flower.jpg')).permute(2,0,1).unsqueeze(0).type(dtype)[...,128:128+256,256:256+256]
kernel_size = 16
stride = kernel_size//2
I_unf = F.unfold(I, kernel_size, stride=stride)
Here we obtain all the 16x16 image patches with strides of 8 by using the F.unfold function. This results in a 3D tensor with shape torch.Size([1, 768, 961]), i.e. 961 patches with 768 = 16 x 16 x 3 pixels in each.
Now, say we wish to fold it back to I:
I_f = F.fold(I_unf,I.shape[-2:],kernel_size,stride=stride)
norm_map = F.fold(F.unfold(torch.ones(I.shape).type(dtype),kernel_size,stride=stride),I.shape[-2:],kernel_size,stride=stride)
I_f /= norm_map
We use F.fold where we tell it the original shape of I, the kernel_size we used to unfold and the stride used. After folding I_unf we will obtain a summation with overlaps. This means that the resulting image will appear saturated. As a result, we need to compute a normalization map which will normalize multiple summation of pixels due to overlaps. A way to do this efficiently is to take a ones tensor and use unfold followed by fold - to mimic the summation with overlaps. This gives us the normalization map by which we normalize I_f to recover I.
Now, we wish to plot I_f and I to prove content is preserved:
#Plot I:
plt.imshow(I[0,...].permute(1,2,0).cpu()/255)
#Plot I_f:
plt.imshow(I_f[0,...].permute(1,2,0).cpu()/255)
This whole process also works for single-channel images. One thing to notice: if the spatial dimensions of the image are not divisible by the stride, you will get a norm_map with zeros (at the edges) because some pixels are not covered by any patch, but you can easily handle this case as well.
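As a rough sketch of handling that edge case (assuming the zeros in norm_map only occur where no patch covered a pixel), you can guard the division:
I_f = F.fold(I_unf, I.shape[-2:], kernel_size, stride=stride)
norm_map = F.fold(F.unfold(torch.ones_like(I), kernel_size, stride=stride),
                  I.shape[-2:], kernel_size, stride=stride)
# clamp avoids dividing by zero at uncovered border pixels (those stay 0 anyway)
I_f = I_f / norm_map.clamp(min=1.0)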
A slightly less elegant solution than that proposed by Gil:
I took inspiration from this post on the Pytorch forums, formatting my image tensor to be of standard shape B x C x H x W (1 x 1 x 256 x 256). Unfolding:
# CREATE THE UNFOLDED IMAGE SLICES
I = image # shape [256, 256]
kernel_size = bx #shape [16]
stride = int(bx/2) #shape [8]
I2 = I.unsqueeze(0).unsqueeze(0) #shape [1, 1, 256, 256]
patches2 = I2.unfold(2, kernel_size, stride).unfold(3, kernel_size, stride)
#shape [1, 1, 31, 31, 16, 16]
Following this, I do some transforms and filtering to my tensor stack. Before doing this I apply a cosine window and normalise:
# NORMALISE AND WINDOW
Pvv = torch.mean(torch.pow(win, 2))*torch.numel(win)*(noise_std**2)
Pvv = Pvv.double()
mean_patches = torch.mean(patches2, (4, 5), keepdim=True)
mean_patches = mean_patches.repeat(1, 1, 1, 1, 16, 16)
window_patches = win.unsqueeze(0).unsqueeze(0).unsqueeze(0).unsqueeze(0).repeat(1, 1, 31, 31, 1, 1)
zero_mean = patches2 - mean_patches
windowed_patches = zero_mean * window_patches
#SOME FILTERING ....
#ADD MEAN AND WINDOW BEFORE FOLDING BACK TOGETHER.
filt_data_block = (filt_data_block + mean_patches*window_patches) * window_patches
The above code works for me, but a mask would be simpler. Next, I prepare my tensor of shape [1, 1, 31, 31, 16, 16] to be transformed back into the original [1, 1, 256, 256]:
# REASSEMBLE THE IMAGE USING FOLD
patches = filt_data_block.contiguous().view(1, 1, -1, kernel_size*kernel_size)
patches = patches.permute(0, 1, 3, 2)
patches = patches.contiguous().view(1, kernel_size*kernel_size, -1)
IR = F.fold(patches, output_size=(256, 256), kernel_size=kernel_size, stride=stride)
IR = IR.squeeze()
This allowed me to create an overlapping sliding window and seamlessly stitch the image back together. Cutting out the filtering makes for an identical image.
Please forgive my ignorance, as I am really new to the area. I am trying to get the correct output shape from my neural network, which has 3 Conv2D layers followed by 2 Dense layers. My input shape is (140, 140, 4), which is 4 grayscale images. When I feed in one input, I am expecting an output of shape (1, 4), but I am getting (14, 14, 4) instead. What am I doing wrong here? Thank you very much for your help in advance!
meta_layers = [Conv2D, Conv2D, Conv2D, Dense, Dense]
meta_inits = ['lecun_uniform'] * 5
meta_nodes = [32, 64, 64, 512, 4]
meta_filters = [(8, 8), (4, 4), (3, 3), None, None]
meta_strides = [(4, 4), (2, 2), (1, 1), None, None]
meta_activations = ['relu'] * 5
meta_loss = "mean_squared_error"
meta_optimizer = RMSprop(lr=0.00025, rho=0.9, epsilon=1e-06)
meta_n_samples = 1000
meta_epsilon = 1.0

meta = Sequential()
meta.add(meta_layers[0](meta_nodes[0], init=meta_inits[0], input_shape=(140, 140, 4),
                        kernel_size=meta_filters[0], strides=meta_strides[0]))
meta.add(Activation(meta_activations[0]))
for layer, init, node, activation, kernel, stride in list(zip(meta_layers, meta_inits, meta_nodes,
                                                              meta_activations, meta_filters,
                                                              meta_strides))[1:]:
    if layer == Conv2D:
        meta.add(layer(node, init=init, kernel_size=kernel, strides=stride))
        meta.add(Activation(activation))
    elif layer == Dense:
        meta.add(layer(node, init=init))
        meta.add(Activation(activation))
        print("meta node: " + str(node))
meta.compile(loss=meta_loss, optimizer=meta_optimizer)
Your problem lies in the fact that in Keras version >= 2.0, a Dense layer is applied to the last axis of its input (you may read about it here). So if you apply:
Dense(512)
to a Conv2D output with shape (14, 14, 64), you'll get an output with shape (14, 14, 512), and Dense(4) applied to that will give you an output with shape (14, 14, 4). You can call the model.summary() method to confirm this.
In order to solve this, you need to apply one of the following layers to the output of the last convolutional layer: GlobalMaxPooling2D, GlobalAveragePooling2D or Flatten. This squashes the output to be only 2-dimensional (with shape (batch_size, features)).
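A minimal sketch of that fix using Flatten (layer sizes copied from the question; GlobalAveragePooling2D would slot into the same place):
from keras.models import Sequential
from keras.layers import Conv2D, Dense, Flatten

meta = Sequential()
meta.add(Conv2D(32, kernel_size=(8, 8), strides=(4, 4), activation='relu',
                input_shape=(140, 140, 4)))
meta.add(Conv2D(64, kernel_size=(4, 4), strides=(2, 2), activation='relu'))
meta.add(Conv2D(64, kernel_size=(3, 3), strides=(1, 1), activation='relu'))
meta.add(Flatten())                      # (14, 14, 64) -> (12544,)
meta.add(Dense(512, activation='relu'))
meta.add(Dense(4))                       # output shape: (batch_size, 4)
meta.summary()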
I am training a CNN with Keras but with 30x30 patches from an image. I want to test the network with a full image but I get the following error:
ValueError: GpuElemwise. Input dimension mis-match. Input 2 (indices start at 0) has shape[1] == 30, but the output's size on that axis is 100.
Apply node that caused the error: GpuElemwise{Composite{((i0 + i1) - i2)}}[(0, 0)](GpuDimShuffle{0,2,3,1}.0, GpuReshape{4}.0, GpuFromHost.0)
Toposort index: 79
Inputs types: [CudaNdarrayType(float32, 4D), CudaNdarrayType(float32, (True, True, True, False)), CudaNdarrayType(float32, 4D)]
Inputs shapes: [(10, 100, 100, 3), (1, 1, 1, 3), (10, 30, 30, 3)]
Inputs strides: [(30000, 100, 1, 10000), (0, 0, 0, 1), (2700, 90, 3, 1)]
Inputs values: ['not shown', CudaNdarray([[[[ 0.01060364  0.00988821  0.00741314]]]]), 'not shown']
Outputs clients: [[GpuCAReduce{pre=sqr,red=add}{0,1,1,1}(GpuElemwise{Composite{((i0 + i1) - i2)}}[(0, 0)].0)]]
This is my model.predict:
predict_image = model.predict(np.array([test_images[1]]), batch_size=1)[0]
It seems like the issue is that the input size cannot be anything other than 30x30, but the input shape for the first layer of my network is (None, None, 3).
model.add(Convolution2D(n1, f1, f1, border_mode='same', input_shape=(None, None, 3), activation='relu'))
Is it simply not possible to test an image with different dimensions to the ones I trained with?
As fchollet himself described here, you should be able to define the input like so:
input_shape=(1, None, None)
However, this will fail if you have layers that use the Flatten operation.
This suggests that you should be able to accomplish your goal with a fully convolutional NN.
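A small sketch of such a fully convolutional setup (keeping the old Convolution2D API and channels-last ordering from the question; the layer sizes here are placeholders):
import numpy as np
from keras.models import Sequential
from keras.layers import Convolution2D

model = Sequential()
# only the channel count is fixed; height and width stay unspecified
model.add(Convolution2D(64, 9, 9, border_mode='same',
                        input_shape=(None, None, 3), activation='relu'))
model.add(Convolution2D(3, 5, 5, border_mode='same', activation='linear'))
model.compile(loss='mse', optimizer='adam')

# the same network accepts 30x30 patches and full-size images
patch_pred = model.predict(np.random.rand(1, 30, 30, 3), batch_size=1)
full_pred = model.predict(np.random.rand(1, 100, 100, 3), batch_size=1)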