How to choose Strategy in SimpleImputer - machine-learning

How do I choose a strategy (mean, median, most_frequent, constant) in a SimpleImputer?
What exactly does the "constant" strategy do?

You should refer to this documentation, which contains a complete description of each strategy.
The constant strategy fills the missing values with a constant defined by the parameter fill_value.
Example:
import numpy as np
from sklearn.impute import SimpleImputer

imp_constant = SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=1)
imp_constant.fit_transform([[7, 2, 3], [4, np.nan, 6], [10, 5, 9]])
Original:
([[ 7., 2., 3.],
[ 4., nan, 6.],
[10., 5., 9.]])
Output:
([[ 7., 2., 3.],
[ 4., 1., 6.],
[10., 5., 9.]])
In the above example, since I've chosen strategy='constant', I need to define the constant that fills the missing values; that is done with the parameter fill_value=1. Every NaN occurrence is then replaced by 1.
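For comparison, here is a minimal sketch (my addition, not part of the original answer) showing how the other strategies derive the replacement value from the observed values in each column, whereas constant ignores the column entirely:
import numpy as np
from sklearn.impute import SimpleImputer

X = [[7, 2, 3], [4, np.nan, 6], [10, 5, 9]]

# mean/median/most_frequent compute a per-column statistic from the non-missing values
for strategy in ['mean', 'median', 'most_frequent']:
    imp = SimpleImputer(missing_values=np.nan, strategy=strategy)
    print(strategy, imp.fit_transform(X)[1, 1])  # the value used to fill the NaN

# constant skips the statistics and always uses fill_value
imp = SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=1)
print('constant', imp.fit_transform(X)[1, 1])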

Related

How to compute mean/max of HuggingFace Transformers BERT token embeddings with attention mask?

I'm using the HuggingFace Transformers BERT model, and I want to compute a summary vector (a.k.a. embedding) over the tokens in a sentence, using either the mean or max function. The complication is that some tokens are [PAD], so I want to ignore the vectors for those tokens when computing the average or max.
Here's an example. I initially instantiate a BertTokenizer and a BertModel:
import torch
import transformers
from transformers import AutoTokenizer, AutoModel
transformer_name = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(transformer_name, use_fast=True)
model = AutoModel.from_pretrained(transformer_name)
I then input some sentences into the tokenizer and get out input_ids and attention_mask. Notably, an attention_mask value of 0 means that the token was a [PAD] that I can ignore.
sentences = ['Deep learning is difficult yet very rewarding.',
'Deep learning is not easy.',
'But is rewarding if done right.']
tokenizer_result = tokenizer(sentences, max_length=32, padding=True, return_attention_mask=True, return_tensors='pt')
input_ids = tokenizer_result.input_ids
attention_mask = tokenizer_result.attention_mask
print(input_ids.shape) # torch.Size([3, 11])
print(input_ids)
# tensor([[ 101, 2784, 4083, 2003, 3697, 2664, 2200, 10377, 2075, 1012, 102],
# [ 101, 2784, 4083, 2003, 2025, 3733, 1012, 102, 0, 0, 0],
# [ 101, 2021, 2003, 10377, 2075, 2065, 2589, 2157, 1012, 102, 0]])
print(attention_mask.shape) # torch.Size([3, 11])
print(attention_mask)
# tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
# [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0],
# [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0]])
Now, I call the BERT model to get the 768-D token embeddings (the top-layer hidden states).
model_result = model(input_ids, attention_mask=attention_mask, return_dict=True)
token_embeddings = model_result.last_hidden_state
print(token_embeddings.shape) # torch.Size([3, 11, 768])
So at this point, I have:
token embeddings in a [3, 11, 768] matrix: 3 sentences, 11 tokens, 768-D vector for each token.
attention mask in a [3, 11] matrix: 3 sentences, 11 tokens. A 1 value indicates non-[PAD].
How do I compute the mean / max over the vectors for the valid, non-[PAD] tokens?
I tried using the attention mask as a mask and then called torch.max(), but I don't get the right dimensions:
masked_token_embeddings = token_embeddings[attention_mask==1]
print(masked_token_embeddings.shape) # torch.Size([29, 768]) <-- WRONG. SHOULD BE [3, 11, 768]
pooled = torch.max(masked_token_embeddings, 1)
print(pooled.values.shape) # torch.Size([29]) <-- WRONG. SHOULD BE [3, 768]
What I really want is a tensor of shape [3, 768]. That is, a 768-D vector for each of the 3 sentences.
For max, you can multiply with attention_mask:
pooled = torch.max((token_embeddings * attention_mask.unsqueeze(-1)), axis=1)
For mean, mask the embeddings the same way, then sum along the token axis and divide by the number of real tokens (the sum of attention_mask along that axis):
mean_pooled = (token_embeddings * attention_mask.unsqueeze(-1)).sum(axis=1) / attention_mask.sum(axis=-1).unsqueeze(-1)
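As a quick usage check (my own sketch, not part of the accepted answer): torch.max over a dimension returns a (values, indices) named tuple, so take .values to get the pooled tensor.
max_pooled = torch.max(token_embeddings * attention_mask.unsqueeze(-1), dim=1).values
print(max_pooled.shape)   # torch.Size([3, 768])
print(mean_pooled.shape)  # torch.Size([3, 768])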
In addition to @Quang's answer, you can have a look at the sentence_transformers Pooling layer.
For max pooling, they do this:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
pooled = torch.max(token_embeddings, 1)[0]
And for mean pooling they do the following:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = input_mask_expanded.sum(1)
sum_mask = torch.clamp(sum_mask, min=1e-9)
pooled = sum_embeddings / sum_mask
The max pooling presented in the accepted answer will suffer when the max is negative, and the implementation from sentence_transformers modifies token_embeddings in place, which throws an error when you want to use the embeddings for back-propagation:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation:
If you're interested in anything back-prop related, you can do something like this:
input_mask_expanded = torch.where(attention_mask == 0, 1e9, 0.).unsqueeze(-1).expand(token_embeddings.size()).float()
pooled = torch.max(token_embeddings - input_mask_expanded, 1)[0] # padding positions become a large negative value
It's the same idea of making all masked positions very negative, but it doesn't modify token_embeddings while doing so.
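Another out-of-place option (my own sketch, not taken from the answers above) is masked_fill, which returns a new tensor and therefore also keeps token_embeddings intact for back-propagation:
# fill padding positions with the most negative representable value, out of place
neg_inf = torch.finfo(token_embeddings.dtype).min
masked = token_embeddings.masked_fill(attention_mask.unsqueeze(-1) == 0, neg_inf)
max_pooled = masked.max(dim=1).values  # shape [3, 768]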
Alex is right.
Look at the hidden states for the strings that go into the tokenizer: for different strings, the padding positions will have different embeddings.
So, in order to properly pool the embeddings, you need to ignore those padding vectors.
Let's say you want to get embeddings out of the last 4 layers of BERT (as it yields the best classification results):
#iterate over the last 4 layers and get embeddings for
#strings without having embeddings from PAD tokens
m = []
for i in range(len(hidden_states[0])):
    m.append([hidden_states[j + 9][i, :, :][tokens["attention_mask"][i] != 0] for j in range(4)])

#average over all tokens embeddings
means = []
for i in range(len(hidden_states[0])):
    means.append(torch.stack(m[i]).mean(dim=1))

#stack embeddings for all strings
pooled = torch.stack(means).reshape(-1, 1, 3072)
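For context, here is a sketch of how hidden_states and tokens above could be obtained; this setup is my assumption, since the answer does not show it, and it requires passing output_hidden_states=True:
tokens = tokenizer(sentences, padding=True, return_tensors='pt')
with torch.no_grad():
    output = model(**tokens, output_hidden_states=True)
# tuple of 13 tensors: the embedding layer plus the 12 encoder layers,
# so indices 9..12 are the last 4 layers accessed as hidden_states[j+9] above
hidden_states = output.hidden_states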

Keras Tokenizer num_words doesn't seem to work

>>> t = Tokenizer(num_words=3)
>>> l = ["Hello, World! This is so&#$ fantastic!", "There is no other world like this one"]
>>> t.fit_on_texts(l)
>>> t.word_index
{'fantastic': 6, 'like': 10, 'no': 8, 'this': 2, 'is': 3, 'there': 7, 'one': 11, 'other': 9, 'so': 5, 'world': 1, 'hello': 4}
I'd have expected t.word_index to have just the top 3 words. What am I doing wrong?
There is nothing wrong with what you are doing. word_index is computed the same way regardless of how many most-frequent words you will use later (as you may see here). So when you call any transformative method, the Tokenizer will use only the most common words (those with an index below num_words), and at the same time it will keep the counter of all words, even when it's obvious that it will not use all of them later.
Just an add-on to Marcin's answer ("it will keep the counter of all words - even when it's obvious that it will not use it later").
The reason it keeps a counter of all words is that you can call fit_on_texts multiple times. Each call updates the internal counters, and when transformations are called, they use the top words based on the updated counters.
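A small sketch (my addition) illustrating that point: repeated fit_on_texts calls keep accumulating counts, and word_index is rebuilt from the updated counters each time.
from tensorflow.keras.preprocessing.text import Tokenizer

t = Tokenizer(num_words=3)
t.fit_on_texts(["hello world"])
print(t.word_counts)  # OrderedDict([('hello', 1), ('world', 1)])
t.fit_on_texts(["world world"])
print(t.word_counts)  # OrderedDict([('hello', 1), ('world', 3)])
print(t.word_index)   # 'world' now outranks 'hello': {'world': 1, 'hello': 2}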
Hope it helps.
Limiting num_words to a small number (e.g. 3) has no effect on fit_on_texts outputs such as word_index, word_counts, and word_docs. It does have an effect on texts_to_matrix: the resulting matrix will have num_words (3) columns.
>>> t = Tokenizer(num_words=3)
>>> l = ["Hello, World! This is so&#$ fantastic!", "There is no other world like this one"]
>>> t.fit_on_texts(l)
>>> print(t.word_index)
{'world': 1, 'this': 2, 'is': 3, 'hello': 4, 'so': 5, 'fantastic': 6, 'there': 7, 'no': 8, 'other': 9, 'like': 10, 'one': 11}
>>> t.texts_to_matrix(l, mode='count')
array([[0., 1., 1.],
[0., 1., 1.]])
Just to add a little to farid khafizov's answer: words whose index is num_words or above are removed from the results of texts_to_sequences (index 4 disappeared from the 1st sentence, 5 from the 2nd, and 6 from the 3rd, respectively).
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
print(tf.__version__) # 2.4.1, in my case
sentences = [
'I love my dog',
'I, love my cat',
'You love my dog!'
]
tokenizer = Tokenizer(num_words=4)
tokenizer.fit_on_texts(sentences)
word_index = tokenizer.word_index
seq = tokenizer.texts_to_sequences(sentences)
print(word_index) # {'love': 1, 'my': 2, 'i': 3, 'dog': 4, 'cat': 5, 'you': 6}
print(seq) # [[3, 1, 2], [3, 1, 2], [1, 2]]

How to implement sparse mean squared error loss in Keras

I wanted to modify the following keras mean squared error loss (MSE) such that the loss is only computed sparsely.
from keras import backend as K

def mean_squared_error(y_true, y_pred):
    return K.mean(K.square(y_pred - y_true), axis=-1)
My output y is a 3-channel image, where the 3rd channel is non-zero only at those pixels where the loss is to be computed. Any idea how I can modify the above to compute the loss sparsely?
This is not the exact loss you are looking for, but I hope it will give you a hint for writing your own function (see also here for a GitHub discussion):
def masked_mse(mask_value):
    def f(y_true, y_pred):
        mask_true = K.cast(K.not_equal(y_true, mask_value), K.floatx())
        masked_squared_error = K.square(mask_true * (y_true - y_pred))
        masked_mse = K.sum(masked_squared_error, axis=-1) / K.sum(mask_true, axis=-1)
        return masked_mse
    f.__name__ = 'Masked MSE (mask_value={})'.format(mask_value)
    return f
The function computes the MSE loss over all the values of the predicted output, except for those elements whose corresponding value in the true output is equal to a masking value (e.g. -1).
Two notes:
- when computing the mean, the denominator must be the count of non-masked values and not the dimension of the array; that's why I'm not using K.mean(masked_squared_error, axis=1) and am instead averaging manually.
- the masking value must be a valid number (i.e. np.nan or np.inf will not do the job), which means that you'll have to adapt your data so that it does not contain the mask_value.
In this example, the target output is always [1, 1, 1, 1], but the predictions are progressively masked out via -1 entries in the true output.
y_pred = K.constant([[ 1, 1, 1, 1],
[ 1, 1, 1, 3],
[ 1, 1, 1, 3],
[ 1, 1, 1, 3],
[ 1, 1, 1, 3],
[ 1, 1, 1, 3]])
y_true = K.constant([[ 1, 1, 1, 1],
[ 1, 1, 1, 1],
[-1, 1, 1, 1],
[-1,-1, 1, 1],
[-1,-1,-1, 1],
[-1,-1,-1,-1]])
true = K.eval(y_true)
pred = K.eval(y_pred)
loss = K.eval(masked_mse(-1)(y_true, y_pred))
for i in range(true.shape[0]):
    print(true[i], pred[i], loss[i], sep='\t')
The expected output is:
[ 1. 1. 1. 1.] [ 1. 1. 1. 1.] 0.0
[ 1. 1. 1. 1.] [ 1. 1. 1. 3.] 1.0
[-1. 1. 1. 1.] [ 1. 1. 1. 3.] 1.33333
[-1. -1. 1. 1.] [ 1. 1. 1. 3.] 2.0
[-1. -1. -1. 1.] [ 1. 1. 1. 3.] 4.0
[-1. -1. -1. -1.] [ 1. 1. 1. 3.] nan
To prevent nan from showing up, follow the instructions here. The following assumes you want the masked value (background) to be equal to zero:
# Copied almost character-by-character (only change is default mask_value=0)
# from https://github.com/keras-team/keras/issues/7065#issuecomment-394401137
def masked_mse(mask_value=0):
    """
    Made default mask_value=0; not sure this is necessary/helpful
    """
    def f(y_true, y_pred):
        mask_true = K.cast(K.not_equal(y_true, mask_value), K.floatx())
        masked_squared_error = K.square(mask_true * (y_true - y_pred))
        # in case mask_true is 0 everywhere, the error would be nan, therefore divide by at least 1
        # this doesn't change anything as where sum(mask_true)==0, sum(masked_squared_error)==0 as well
        masked_mse = K.sum(masked_squared_error, axis=-1) / K.maximum(K.sum(mask_true, axis=-1), 1)
        return masked_mse
    f.__name__ = str('Masked MSE (mask_value={})'.format(mask_value))
    return f
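To tie this back to the original question, here is a sketch under my own assumptions about the data layout (channels-last, with the 3rd channel of y_true acting as the mask for the first two value channels); it is not taken from the answers above:
from keras import backend as K

def channel_masked_mse(y_true, y_pred):
    # assumption: y_true[..., :2] holds the target values and y_true[..., 2:3] is the mask channel
    mask = K.cast(K.not_equal(y_true[..., 2:3], 0), K.floatx())
    masked_squared_error = K.square(mask * (y_true[..., :2] - y_pred[..., :2]))
    # average only over the masked-in pixels (2 value channels per pixel), avoiding division by zero
    return K.sum(masked_squared_error, axis=[1, 2, 3]) / K.maximum(2.0 * K.sum(mask, axis=[1, 2, 3]), 1.0)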

Why are all of my results from SVM the same in scikit learn?

I'm trying to calculate probabilities for a multi-class dataset using scikit-learn. However, for some reason, I'm getting the same probabilities for every example. Any idea what's happening? Does this have to do with my model, my use of the library, or something else? Appreciate any help!
from sklearn import svm

svm_model = svm.SVC(probability=True, kernel='rbf', C=1, decision_function_shape='ovr', gamma=0.001, verbose=100)
svm_model.fit(train_X, train_y)
preds = svm_model.predict_proba(test_X)
train_X looks like this
array([[2350, 5550, 2750.0, ..., 23478, 1, 3],
[2500, 5500, 3095.5, ..., 23674, 0, 3],
[3300, 6900, 3600.0, ..., 6529, 0, 3],
...,
[2150, 6175, 2500.0, ..., 11209, 0, 3],
[2095, 5395, 2595.4, ..., 10070, 0, 3],
[1650, 2850, 2000.0, ..., 25463, 1, 3]], dtype=object)
train_y looks like this
0 1
1 2
10 2
100 2
1000 2
10000 2
10001 2
10002 2
10003 2
10004 2
10005 2
10006 2
10007 2
10008 1
10009 1
1001 2
10010 2
test_X looks like this
array([[2190, 3937, 2200.5, ..., 24891, 1, 5],
[2695, 7000, 2850.0, ..., 5491, 1, 4],
[2950, 12000, 4039.5, ..., 22367, 0, 4],
...,
[2850, 5200, 3000.0, ..., 15576, 1, 1],
[3200, 16000, 4100.0, ..., 1320, 0, 3],
[2100, 3750, 2400.0, ..., 6022, 0, 1]], dtype=object)
My results look like
array([[ 0.07819139, 0.22727628, 0.69453233],
[ 0.07819139, 0.22727628, 0.69453233],
[ 0.07819139, 0.22727628, 0.69453233],
...,
[ 0.07819139, 0.22727628, 0.69453233],
[ 0.07819139, 0.22727628, 0.69453233],
[ 0.07819139, 0.22727628, 0.69453233]])
Start with preprocessing!
It's very important to standardize your data to zero mean and unit variance.
The scikit-learn docs say this:
Support Vector Machine algorithms are not scale invariant, so it is highly recommended to scale your data. For example, scale each attribute on the input vector X to [0,1] or [-1,+1], or standardize it to have mean 0 and variance 1. Note that the same scaling must be applied to the test vector to obtain meaningful results. See section Preprocessing data for more details on scaling and normalization
See scikit-learn's section on preprocessing and its StandardScaler.
The next step after this is parameter tuning (C, gamma, and co.), usually done with a grid search. But I usually expect people to try a simple linear SVM first before trying the kernel SVM (fewer hyper-parameters, less computation time, and better generalization for non-optimal parameter choices).
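As a sketch of that workflow (my own composition, not the asker's code; train_X, train_y and test_X are the arrays from the question), scaling inside a Pipeline guarantees that the test data receives the same transformation, and GridSearchCV handles the tuning:
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

pipe = make_pipeline(StandardScaler(), SVC(probability=True, decision_function_shape='ovr'))
param_grid = {'svc__C': [0.1, 1, 10, 100], 'svc__gamma': ['scale', 0.001, 0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(train_X.astype(float), train_y)       # the dtype=object arrays need a numeric cast
preds = search.predict_proba(test_X.astype(float))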

How to repeat unknown dimension in TensorFlow

For example (I can do this with Theano without a problem):
std_var = T.repeat(T.exp(log_var)[None, :], Mean.shape[0], axis=0)
In TensorFlow, Mean has shape (?, num), but log_var has shape (num,).
I don't know how to do the same in TensorFlow...
You can use tf.shape to extract the shape of a placeholder during evaluation and then simply tile the tensor. For instance, for:
num = 3
p1 = tf.placeholder(tf.float32, (None, num))
p2 = tf.placeholder(tf.float32, (num,))
the operation:
op = tf.tile(tf.reshape(p2, [1, -1]), (tf.shape(p1)[0], 1))
sess.run(op, feed_dict={p1:[[1,2,3],
[4,5,6]],
p2: [1,2,1]})
will give:
array([[ 1., 2., 1.],
[ 1., 2., 1.]], dtype=float32)
However, in most cases you actually do not need to do that since you can rely on the broadcasting behavior of TF operations. For instance:
op = tf.add(p1, p2)
sess.run(op, feed_dict={p1:[[1,2,3],
[4,5,6]],
p2: [1,2,1]})
gives:
array([[ 2., 4., 4.],
[ 5., 7., 7.]], dtype=float32)
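Applied to the original Theano snippet (a sketch; Mean and log_var are the asker's tensors), the explicit-tile version would be:
# repeat exp(log_var) along axis 0 to match the unknown batch dimension of Mean
std_var = tf.tile(tf.exp(log_var)[None, :], (tf.shape(Mean)[0], 1))
# alternatively, rely on broadcasting wherever std_var is combined with Mean,
# e.g. Mean + tf.exp(log_var)[None, :] expands automatically along axis 0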
