Fill NaN gaps with the mean of bfill and ffill, but only if the gap does not exceed 3 consecutive NaNs

These are the lines of code that I ran:
# count the number of consecutive missing values
df['null'] = df.old.isnull().astype(int).groupby(df.old.notnull().astype(int).cumsum()).sum()

# fill all missing values
df['new'] = (df['old'].ffill()+df['old'].bfill())/2

# find the indexes with (previously) more than 3 NaN values and assign NaN to them
df.loc[df['null'] > 3, 'old'] = np.NaN

df.to_excel('result.xlsx', index=False, na_rep='NaN')
That's the result that I obtained

You can try this:
# count the number of consecutive missing values in each gap
df["null"] = df.yourcolumnname.isnull().astype(int).groupby(df.yourcolumnname.notnull().astype(int).cumsum()).sum()
# fill all missing values with the mean of ffill and bfill
df["yourcolumnname"] = (df["yourcolumnname"].ffill()+df["yourcolumnname"].bfill())/2
# find the indexes with (previously) more than 3 consecutive NaN values and assign NaN to them again
df.loc[df["null"] > 3, "yourcolumnname"] = np.NaN
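For reference, here is a minimal self-contained sketch of the same idea on toy data. It is not the code above verbatim: it uses transform('sum') so that every row is labelled with the length of the NaN run in its group, and it only resets rows that were originally NaN.

import numpy as np
import pandas as pd

# toy column: one short gap (2 NaNs) and one long gap (4 NaNs)
df = pd.DataFrame({"old": [1.0, np.nan, np.nan, 4.0, 5.0,
                           np.nan, np.nan, np.nan, np.nan, 10.0]})

# label every row with the NaN count of its group
# (a group = one non-NaN value plus the NaNs that follow it)
df["gap_len"] = (df["old"].isnull().astype(int)
                          .groupby(df["old"].notnull().astype(int).cumsum())
                          .transform("sum"))

# fill every gap with the mean of the previous and next known values
df["new"] = (df["old"].ffill() + df["old"].bfill()) / 2

# undo the fill only where the gap was longer than 3 consecutive NaNs
df.loc[(df["gap_len"] > 3) & df["old"].isnull(), "new"] = np.nan

print(df)

The short gap is filled with 2.5 (the mean of 1 and 4), while the 4-NaN gap stays NaN.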

Related

Questions regarding custom multiclass metrics (Keras)

Could anyone explain how to write a custom multiclass metric for Keras? I tried to write a custom metric but encountered some issues. The main problem is that I am not familiar with how tensors work during training (I think it is called graph mode?). I am able to create a confusion matrix and derive F1 scores using NumPy or Python lists.
I printed out y_true and y_pred and tried to understand them, but the output was not what I expected.
Below is the function I used:
def f1_scores(y_true, y_pred):
    y_true = K.print_tensor(y_true, message='y_true = ')
    y_pred = K.print_tensor(y_pred, message='y_pred = ')
    print(f"y_true_shape:{K.int_shape(y_true)}")
    print(f"y_pred_shape:{K.int_shape(y_pred)}")
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    gt = K.argmax(y_true_f)
    pred = K.argmax(y_pred_f)
    print(f"pred_print:{pred}")
    print(f"gt_print:{gt}")
    pred = K.print_tensor(pred, message='pred= ')
    gt = K.print_tensor(gt, message='gt =')
    print(f"pred_shape:{K.int_shape(pred)}")
    print(f"gt_shape:{K.int_shape(gt)}")
    pred_f = K.flatten(pred)
    gt_f = K.flatten(gt)
    pred_f = K.print_tensor(pred_f, message='pred_f= ')
    gt_f = K.print_tensor(gt_f, message='gt_f =')
    print(f"pred_f_shape:{K.int_shape(pred_f)}")
    print(f"gt_f_shape:{K.int_shape(gt_f)}")
    conf_mat = tf.math.confusion_matrix(y_true_f, y_pred_f, num_classes=14)
    """
    add codes to find F1 score for each class
    """
    # return an arbitrary number, as F1 scores not found yet.
    return 1
The output when epoch 1 had just started:
y_true_shape:(None, 256, 256, 14)
y_pred_shape:(None, 256, 256, 14)
pred_print:Tensor("ArgMax_1:0", shape=(), dtype=int64)
gt_print:Tensor("ArgMax:0", shape=(), dtype=int64)
pred_shape:()
gt_shape:()
pred_f_shape:(1,)
gt_f_shape:(1,)
The output for the rest of the steps and epochs was similar to the following:
y_true = [[[[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
...
y_pred = [[[[0.0889623 0.0624801107 0.0729747042 ... 0.0816219151 0.0735477135 0.0698677748]
[0.0857798532 0.0721047595 0.0754121244 ... 0.0723947287 0.0728530064 0.0676521733]
[0.0825942457 0.0670698211 0.0879610255 ... 0.0721599609 0.0845924541 0.0638583601]
...
pred= 1283828
gt = 0
pred_f= [1283828]
gt_f = [0]
Why is pred a single number instead of a list of numbers, with each number representing a class index? Similarly, why is pred_f a list with only one number instead of a list of indices?
And for gt (and gt_f), why is the value 0? I expect them to be lists of indices.
It looks like argmax() is simply being applied to the flattened y.
You need to specify which axis you want argmax() to reduce. Probably it's the last one, in your case 3. Then you'll get pred with a shape of (None, 256, 256) containing integers between 0 and 13.
Try something like this: pred = K.argmax(y_pred, axis=3)
This is the documentation for tensorflow argmax. (But I'm not sure you're using exactly that, since I cannot see what K is imported as.)
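To illustrate the idea, here is a hedged sketch of how the metric might be assembled once argmax reduces the class axis. It assumes one-hot targets of shape (batch, 256, 256, 14), as in the question, and shows only one way to derive per-class F1 from the confusion matrix; it is not the asker's final code:

import tensorflow as tf
from tensorflow.keras import backend as K

def f1_scores(y_true, y_pred):
    # reduce the class axis, not the flattened tensor:
    # shapes go from (batch, 256, 256, 14) to (batch, 256, 256)
    gt = K.argmax(y_true, axis=-1)
    pred = K.argmax(y_pred, axis=-1)

    # flatten the per-pixel class indices before building the confusion matrix
    conf_mat = tf.math.confusion_matrix(K.flatten(gt), K.flatten(pred), num_classes=14)
    conf_mat = tf.cast(conf_mat, tf.float32)

    # per-class precision and recall from the confusion matrix
    tp = tf.linalg.diag_part(conf_mat)
    precision = tp / (tf.reduce_sum(conf_mat, axis=0) + K.epsilon())
    recall = tp / (tf.reduce_sum(conf_mat, axis=1) + K.epsilon())
    f1 = 2 * precision * recall / (precision + recall + K.epsilon())

    # macro-averaged F1 over the 14 classes
    return tf.reduce_mean(f1)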

DecisionTreeClassifier: Input contains NaN, infinity or a value too large for dtype('float32')

clf = DecisionTreeClassifier()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, error_score='raise')
print(score)
After running this code I get this error:
ValueError: Input contains NaN, infinity or a value too large for
dtype('float32').
So how can I fix it?
Decision trees don't accept NaN/infinity values.
Try the following (assuming that train_data is a pandas DataFrame):
train_data.fillna(0, inplace = True)
This will replace all NaN values by 0.
If you don't want this, the only alternative is to delete the entries with NaN data:
train_data.dropna(inplace = True)
If this is not a DataFrame, try adding this line before the fillna method:
train_data = pd.DataFrame(train_data)
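Put together, here is a minimal hedged sketch of the whole flow on stand-in data (the toy DataFrame and the np.inf handling are additions for illustration, not part of the original question):

import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, KFold

# stand-in data with a NaN and an infinity (replace with your own train_data / target)
train_data = pd.DataFrame({"a": [1.0, np.nan, 3.0, np.inf, 5.0, 6.0],
                           "b": [0.5, 1.5, 2.5, 3.5, 4.5, 5.5]})
target = np.array([0, 1, 0, 1, 0, 1])

# replace +/- infinity by NaN, then fill (or drop) the NaN values
train_data = train_data.replace([np.inf, -np.inf], np.nan).fillna(0)
# alternative: train_data = train_data.dropna()  (remember to drop the matching targets too)

clf = DecisionTreeClassifier()
k_fold = KFold(n_splits=3, shuffle=True, random_state=0)
score = cross_val_score(clf, train_data, target, cv=k_fold, scoring='accuracy', n_jobs=1)
print(score)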

Tensorflow Dimension size must be evenly divisible by N but is M for 'linear/linear_model/x/Reshape'

I've been trying to use tensorflow's tf.estimator, but I'm getting the following errors regarding the shape of input/output data.
ValueError: Dimension size must be evenly divisible by 9 but is 12 for
'linear/linear_model/x/Reshape' (op: 'Reshape') with input shapes:
[4,3], [2] and with input tensors computed as partial shapes: input[1]
= [?,9].
Here is the code:
data_size = 3
iterations = 10
learn_rate = 0.005
# generate test data
input = np.random.rand(data_size,3)
output = np.dot(input, [2, 3 ,7]) + 4
output = np.transpose([output])
feature_columns = [tf.feature_column.numeric_column("x", shape=(data_size, 3))]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
input_fn = tf.estimator.inputs.numpy_input_fn({"x":input}, output, batch_size=4, num_epochs=None, shuffle=True)
estimator.train(input_fn=input_fn, steps=iterations)
The input data has shape (3, 3):
[[ 0.06525168 0.3171153 0.61675511]
[ 0.35166298 0.71816544 0.62770994]
[ 0.77846666 0.20930611 0.1710842 ]]
The output data has shape (3, 1):
[[ 9.399135 ]
[ 11.25179188]
[ 7.38244104]]
I sense it is related to the input data, output data and batch_size, because when the input data is changed to 1 row it works. When the input data row count equals batch_size (data_size = 10 and batch_size = 10), it throws another error:
ValueError: Shapes (1, 1) and (10, 1) are incompatible
Any help with the errors would be much appreciated.

How does this binary encoder function work?

I'm trying to understand the logic behind this binary encoder.
It automatically takes categorical variables and dummy-codes them (similar to one-hot encoding in sklearn), but reduces the number of output columns to roughly the log2 of the number of unique values.
Basically, when I used this library, I noticed that my dummy variables are limited to only a few of the unique values. Upon further investigation I noticed this @staticmethod, which takes the log2 of the number of unique values in a categorical variable.
My question is WHY? I realize that this reduces the dimensionality of the output data, but what is the logic behind doing this? How does taking the log2 determine how many digits are needed to represent the data?
def calc_required_digits(X, col):
    """
    figure out how many digits we need to represent the classes present
    """
    return int( np.ceil(np.log2(len(X[col].unique()))) )
Full source code:
"""Binary encoding"""
import copy
import pandas as pd
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from category_encoders.ordinal import OrdinalEncoder
from category_encoders.utils import get_obj_cols, convert_input
__author__ = 'willmcginnis'
class BinaryEncoder(BaseEstimator, TransformerMixin):
    """Binary encoding for categorical variables, similar to onehot, but stores categories as binary bitstrings.

    Parameters
    ----------
    verbose: int
        integer indicating verbosity of output. 0 for none.
    cols: list
        a list of columns to encode, if None, all string columns will be encoded
    drop_invariant: bool
        boolean for whether or not to drop columns with 0 variance
    return_df: bool
        boolean for whether to return a pandas DataFrame from transform (otherwise it will be a numpy array)
    impute_missing: bool
        boolean for whether or not to apply the logic for handle_unknown, will be deprecated in the future.
    handle_unknown: str
        options are 'error', 'ignore' and 'impute', defaults to 'impute', which will impute the category -1. Warning: if
        impute is used, an extra column will be added in if the transform matrix has unknown categories. This can causes
        unexpected changes in dimension in some cases.

    Example
    -------
    >>> from category_encoders import *
    >>> import pandas as pd
    >>> from sklearn.datasets import load_boston
    >>> bunch = load_boston()
    >>> y = bunch.target
    >>> X = pd.DataFrame(bunch.data, columns=bunch.feature_names)
    >>> enc = BinaryEncoder(cols=['CHAS', 'RAD']).fit(X, y)
    >>> numeric_dataset = enc.transform(X)
    >>> print(numeric_dataset.info())
    <class 'pandas.core.frame.DataFrame'>
    RangeIndex: 506 entries, 0 to 505
    Data columns (total 16 columns):
    CHAS_0     506 non-null int64
    RAD_0      506 non-null int64
    RAD_1      506 non-null int64
    RAD_2      506 non-null int64
    RAD_3      506 non-null int64
    CRIM       506 non-null float64
    ZN         506 non-null float64
    INDUS      506 non-null float64
    NOX        506 non-null float64
    RM         506 non-null float64
    AGE        506 non-null float64
    DIS        506 non-null float64
    TAX        506 non-null float64
    PTRATIO    506 non-null float64
    B          506 non-null float64
    LSTAT      506 non-null float64
    dtypes: float64(11), int64(5)
    memory usage: 63.3 KB
    None

    """

    def __init__(self, verbose=0, cols=None, drop_invariant=False, return_df=True, impute_missing=True, handle_unknown='impute'):
        self.return_df = return_df
        self.drop_invariant = drop_invariant
        self.drop_cols = []
        self.verbose = verbose
        self.impute_missing = impute_missing
        self.handle_unknown = handle_unknown
        self.cols = cols
        self.ordinal_encoder = None
        self._dim = None
        self.digits_per_col = {}

    def fit(self, X, y=None, **kwargs):
        """Fit encoder according to X and y.

        Parameters
        ----------
        X : array-like, shape = [n_samples, n_features]
            Training vectors, where n_samples is the number of samples
            and n_features is the number of features.
        y : array-like, shape = [n_samples]
            Target values.

        Returns
        -------
        self : encoder
            Returns self.

        """
        # if the input dataset isn't already a dataframe, convert it to one (using default column names)
        # first check the type
        X = convert_input(X)

        self._dim = X.shape[1]

        # if columns aren't passed, just use every string column
        if self.cols is None:
            self.cols = get_obj_cols(X)

        # train an ordinal pre-encoder
        self.ordinal_encoder = OrdinalEncoder(
            verbose=self.verbose,
            cols=self.cols,
            impute_missing=self.impute_missing,
            handle_unknown=self.handle_unknown
        )
        self.ordinal_encoder = self.ordinal_encoder.fit(X)

        for col in self.cols:
            self.digits_per_col[col] = self.calc_required_digits(X, col)

        # drop all output columns with 0 variance.
        if self.drop_invariant:
            self.drop_cols = []
            X_temp = self.transform(X)
            self.drop_cols = [x for x in X_temp.columns.values if X_temp[x].var() <= 10e-5]

        return self

    def transform(self, X):
        """Perform the transformation to new categorical data.

        Parameters
        ----------
        X : array-like, shape = [n_samples, n_features]

        Returns
        -------
        p : array, shape = [n_samples, n_numeric + N]
            Transformed values with encoding applied.

        """
        if self._dim is None:
            raise ValueError('Must train encoder before it can be used to transform data.')

        # first check the type
        X = convert_input(X)

        # then make sure that it is the right size
        if X.shape[1] != self._dim:
            raise ValueError('Unexpected input dimension %d, expected %d' % (X.shape[1], self._dim, ))

        if not self.cols:
            return X

        X = self.ordinal_encoder.transform(X)

        X = self.binary(X, cols=self.cols)

        if self.drop_invariant:
            for col in self.drop_cols:
                X.drop(col, 1, inplace=True)

        if self.return_df:
            return X
        else:
            return X.values

    def binary(self, X_in, cols=None):
        """
        Binary encoding encodes the integers as binary code with one column per digit.
        """
        X = X_in.copy(deep=True)

        if cols is None:
            cols = X.columns.values
            pass_thru = []
        else:
            pass_thru = [col for col in X.columns.values if col not in cols]

        bin_cols = []
        for col in cols:
            # get how many digits we need to represent the classes present
            digits = self.digits_per_col[col]

            # map the ordinal column into a list of these digits, of length digits
            X[col] = X[col].map(lambda x: self.col_transform(x, digits))

            for dig in range(digits):
                X[str(col) + '_%d' % (dig, )] = X[col].map(lambda r: int(r[dig]) if r is not None else None)
                bin_cols.append(str(col) + '_%d' % (dig, ))

        X = X.reindex(columns=bin_cols + pass_thru)

        return X

    @staticmethod
    def calc_required_digits(X, col):
        """
        figure out how many digits we need to represent the classes present
        """
        return int( np.ceil(np.log2(len(X[col].unique()))) )

    @staticmethod
    def col_transform(col, digits):
        """
        The lambda body to transform the column values
        """
        if col is None or float(col) < 0.0:
            return None
        else:
            col = list("{0:b}".format(int(col)))
            if len(col) == digits:
                return col
            else:
                return [0 for _ in range(digits - len(col))] + col
My question is WHY? I realize that this reduces the dimensionality of
the output data, but what is the logic behind doing this?
Basically, the point of categorical encoding is to let your algorithm handle categorical features. Several methods are available for doing it, including binary encoding. Its logic is close to the logic of one-hot encoding (OHE), if you have understood that one.
For binary encoding, each unique label in your categorical vector is associated (arbitrarily) with a number between 0 and (the number of unique labels - 1). You then encode this number in base 2 and "transcribe" it as 0s and 1s across the newly created columns.
As an example, let's say your dataset has three different labels: 'A', 'B' & 'C'.
The following correspondence is (arbitrarily) built:
'A' -> 1 -> 01;
'B' -> 2 -> 10;
'C' -> 0 -> 00.
Therefore, an example of encoding of a given dataset is:
index    my_category    enc_category_0    enc_category_1
0        A              1                 0
1        B              0                 1
2        C              0                 0
3        A              1                 0
Regarding its utility, as you said, it reduces the dimensionality. Besides, I guess it helps avoid having too many zeros in the encoded columns, as happens with OHE. Here is an interesting post: https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931
How does taking the log2 determine how many digits are needed to represent the data?
If you understood the working principle, you understand the use of the log2. Computing the (ceiling of the) log2 of a number gives the necessary number of digits for a binary encoding of that number. Example: ceil(log2(10)) = ceil(3.32) = 4, so 4 digits are needed to binary-encode 10.
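For illustration, here is a minimal sketch (not the library's code) that reproduces the digit count and the 'A'/'B'/'C' example above with plain pandas/NumPy; treating enc_category_0 as the least significant bit is just one convention, chosen to match the table:

import numpy as np
import pandas as pd

def required_digits(n_unique):
    # number of binary digits needed to represent n_unique distinct codes
    return int(np.ceil(np.log2(n_unique)))

labels = pd.Series(['A', 'B', 'C', 'A'], name='my_category')

# arbitrary label -> integer correspondence, as in the example: A -> 1, B -> 2, C -> 0
codes = labels.map({'A': 1, 'B': 2, 'C': 0})

digits = required_digits(labels.nunique())   # ceil(log2(3)) = 2

# one output column per binary digit (enc_category_0 = least significant bit)
encoded = pd.DataFrame({'enc_category_%d' % d: (codes // 2**d) % 2 for d in range(digits)})
print(pd.concat([labels, encoded], axis=1))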
For more info about the implementation and code example: http://contrib.scikit-learn.org/categorical-encoding/_modules/category_encoders/binary.html#BinaryEncoder
Hope I was clear,
Tchau

How to explain high AUC-ROC with mediocre precision and recall in unbalanced data?

I have some machine learning results that I am trying to make sense of. The task is to predict/label "Irish" vs. "non-Irish". Python 2.7's output:
1= ir
0= non-ir
Class count:
0 4090942
1 940852
Name: ethnicity_scan, dtype: int64
Accuracy: 0.874921350119
Classification report:
             precision    recall  f1-score   support
          0       0.89      0.96      0.93   2045610
          1       0.74      0.51      0.60    470287
avg / total       0.87      0.87      0.87   2515897
Confusion matrix:
[[1961422 84188]
[ 230497 239790]]
AUC-ir= 0.901238104773
As you can see, the precision and recall are mediocre, but the AUC-ROC is higher (~0.90). And I am trying to figure out why, which I suspect is due to data imbalance (about 1:5). Based on the confusion matrix, and using Irish as the target (+), I calculated the TPR=0.51 and FPR=0.04. If I am considering non-Irish as (+), then TPR=0.96 and FPR=0.49. So how can I get a 0.9 AUC while the TPR can be only 0.5 at FPR=0.04?
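For reference, here is a small sketch of how those rates follow from the confusion matrix above:

import numpy as np

# confusion matrix: rows = true class (0 = non-ir, 1 = ir), columns = predicted class
cm = np.array([[1961422,   84188],
               [ 230497,  239790]])

tn, fp = cm[0]
fn, tp = cm[1]

tpr = tp / float(tp + fn)   # recall for the Irish class: 239790 / 470287
fpr = fp / float(fp + tn)   # non-Irish wrongly labelled Irish: 84188 / 2045610

print("TPR=%.2f FPR=%.2f" % (tpr, fpr))   # TPR=0.51 FPR=0.04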
Codes:
try:
    for i in mass[k]:
        df = df_temp  # reset df before each loop
        #$$
        #$$
        if 1 == 1:
        ###if i == singleEthnic:
            count += 1
            ethnicity_tar = str(i)  # fr, en, ir, sc, others, ab, rus, ch, it, jp
            # fn, metis, inuit; algonquian, iroquoian, athapaskan, wakashan, siouan, salish, tsimshian, kootenay

            ############################################
            ############################################

            def ethnicity_target(row):
                try:
                    if row[ethnicity_var] == ethnicity_tar:
                        return 1
                    else:
                        return 0
                except: return None
            df['ethnicity_scan'] = df.apply(ethnicity_target, axis=1)
            print '1=', ethnicity_tar
            print '0=', 'non-'+ethnicity_tar

            # Random sampling a smaller dataframe for debugging
            rows = df.sample(n=subsample_size, random_state=seed)  # Seed gives fixed randomness
            df = DataFrame(rows)
            print 'Class count:'
            print df['ethnicity_scan'].value_counts()

            # Assign X and y variables
            X = df.raw_name.values
            X2 = df.name.values
            X3 = df.gender.values
            X4 = df.location.values
            y = df.ethnicity_scan.values

            # Feature extraction functions
            def feature_full_name(nameString):
                try:
                    full_name = nameString
                    if len(full_name) > 1:  # not accept name with only 1 character
                        return full_name
                    else: return '?'
                except: return '?'

            def feature_full_last_name(nameString):
                try:
                    last_name = nameString.rsplit(None, 1)[-1]
                    if len(last_name) > 1:  # not accept name with only 1 character
                        return last_name
                    else: return '?'
                except: return '?'

            def feature_full_first_name(nameString):
                try:
                    first_name = nameString.rsplit(' ', 1)[0]
                    if len(first_name) > 1:  # not accept name with only 1 character
                        return first_name
                    else: return '?'
                except: return '?'

            # Transform format of X variables, and spit out a numpy array for all features
            my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
            my_dict5 = [{'first-name': feature_full_first_name(i)} for i in X]

            all_dict = []
            for i in range(0, len(my_dict)):
                temp_dict = dict(
                    my_dict[i].items() + my_dict5[i].items()
                )
                all_dict.append(temp_dict)

            newX = dv.fit_transform(all_dict)

            # Separate the training and testing data sets
            X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)

            # Fitting X and y into model, using training data
            classifierUsed2.fit(X_train, y_train)

            # Making predictions using trained data
            y_train_predictions = classifierUsed2.predict(X_train)
            y_test_predictions = classifierUsed2.predict(X_test)
The same code with the resampling step inserted:
try:
    for i in mass[k]:
        df = df_temp  # reset df before each loop
        #$$
        #$$
        if 1 == 1:
        ###if i == singleEthnic:
            count += 1
            ethnicity_tar = str(i)  # fr, en, ir, sc, others, ab, rus, ch, it, jp
            # fn, metis, inuit; algonquian, iroquoian, athapaskan, wakashan, siouan, salish, tsimshian, kootenay

            ############################################
            ############################################

            def ethnicity_target(row):
                try:
                    if row[ethnicity_var] == ethnicity_tar:
                        return 1
                    else:
                        return 0
                except: return None
            df['ethnicity_scan'] = df.apply(ethnicity_target, axis=1)
            print '1=', ethnicity_tar
            print '0=', 'non-'+ethnicity_tar

            # Resampled
            df_resampled = df.append(df[df.ethnicity_scan==0].sample(len(df)*5, replace=True))

            # Random sampling a smaller dataframe for debugging
            rows = df_resampled.sample(n=subsample_size, random_state=seed)  # Seed gives fixed randomness
            df = DataFrame(rows)
            print 'Class count:'
            print df['ethnicity_scan'].value_counts()

            # Assign X and y variables
            X = df.raw_name.values
            X2 = df.name.values
            X3 = df.gender.values
            X4 = df.location.values
            y = df.ethnicity_scan.values

            # Feature extraction functions
            def feature_full_name(nameString):
                try:
                    full_name = nameString
                    if len(full_name) > 1:  # not accept name with only 1 character
                        return full_name
                    else: return '?'
                except: return '?'

            def feature_full_last_name(nameString):
                try:
                    last_name = nameString.rsplit(None, 1)[-1]
                    if len(last_name) > 1:  # not accept name with only 1 character
                        return last_name
                    else: return '?'
                except: return '?'

            def feature_full_first_name(nameString):
                try:
                    first_name = nameString.rsplit(' ', 1)[0]
                    if len(first_name) > 1:  # not accept name with only 1 character
                        return first_name
                    else: return '?'
                except: return '?'

            # Transform format of X variables, and spit out a numpy array for all features
            my_dict = [{'last-name': feature_full_last_name(i)} for i in X]
            my_dict5 = [{'first-name': feature_full_first_name(i)} for i in X]

            all_dict = []
            for i in range(0, len(my_dict)):
                temp_dict = dict(
                    my_dict[i].items() + my_dict5[i].items()
                )
                all_dict.append(temp_dict)

            newX = dv.fit_transform(all_dict)

            # Separate the training and testing data sets
            X_train, X_test, y_train, y_test = cross_validation.train_test_split(newX, y, test_size=testTrainSplit)

            # Fitting X and y into model, using training data
            classifierUsed2.fit(X_train, y_train)

            # Making predictions using trained data
            y_train_predictions = classifierUsed2.predict(X_train)
            y_test_predictions = classifierUsed2.predict(X_test)
Your model outputs a probability P (between 0 and 1) for each row in the test set that it scores. The summary stats (precision, recall, etc.) are for a single value of P used as the prediction threshold, probably P=0.5, unless you've changed this in your code. However, the ROC contains more information: the idea is that you probably won't want to use this default value as your prediction threshold, so the ROC curve is plotted by computing the true positive rate against the false positive rate at every prediction threshold between 0 and 1.
If you've undersampled the non-Irish people in the data, then you're correct that the AUC and precision will be overestimated. If your dataset is only 5000 rows, you will have no problem running your model on a larger training set; just rebalance your dataset (by bootstrap sampling to increase the number of non-Irish people) until it accurately reflects your sample population.
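To make the threshold point concrete, here is a hedged sketch of how you could inspect the ROC curve and the metrics at a few thresholds with scikit-learn (classifierUsed2, X_test and y_test are the names from the question, and the classifier is assumed to expose predict_proba):

from sklearn.metrics import roc_curve, roc_auc_score, precision_score, recall_score

# probability of the positive (Irish) class for each test row
probs = classifierUsed2.predict_proba(X_test)[:, 1]

# the ROC curve sweeps every threshold, not just the default 0.5
fpr, tpr, thresholds = roc_curve(y_test, probs)
print("AUC: %.3f" % roc_auc_score(y_test, probs))

# precision/recall at a few fixed thresholds, for comparison with the report above
for t in [0.3, 0.5, 0.7]:
    preds = (probs >= t).astype(int)
    print("threshold %.1f  precision %.3f  recall %.3f"
          % (t, precision_score(y_test, preds), recall_score(y_test, preds)))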
