Using GridSearchCV with TimeSeriesSplit

I have some code that uses TimeSeriesSplit to split my data. For each split, I use ParameterGrid to loop through every parameter combination, record the best set of parameters, and use it to predict my X_test. You can see the code for this part at the bottom of the post.
I understand that GridSearchCV will do a lot of that work for me. I'm wondering, if I use the following code, where does my data get split into X_train, X_test, y_train and y_test? Will using GridSearchCV with TimeSeriesSplit handle this behind the scenes, and if so, will this code accomplish the same thing as my original code at the bottom of this post? Also, I've now tried the GridSearchCV method and it has been running for almost 30 minutes without finishing; do I have the right syntax?
X = data.iloc[:, 0:8]
y = data.iloc[:, 8:9]

parameters = [
    {'kernel': ['rbf'],
     'gamma': [.01],
     'C': [1, 10, 100]}]

gsc = GridSearchCV(SVR(), param_grid=parameters, scoring='neg_mean_absolute_error',
                   cv=TimeSeriesSplit(n_splits=2))
gsc.fit(X, y)

means = gsc.cv_results_['mean_test_score']
for mean in means:
    print(mean)
print('end')
Original Code Below:
# Create the time series split generator
tscv = TimeSeriesSplit(n_splits=3)

for train_index, test_index in tqdm(tscv.split(X)):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]

    # scale the data set
    scaler_X = StandardScaler()
    scaler_y = StandardScaler()
    scaler_X.fit(X_train)
    scaler_y.fit(y_train)
    X_train, X_test = scaler_X.transform(X_train), scaler_X.transform(X_test)
    y_train, y_test = scaler_y.transform(y_train), scaler_y.transform(y_test)

    # optimization area - set params
    parameters = [
        {'kernel': ['rbf'],
         'gamma': [.01],
         'C': [1, 10, 100, 500, 1000]}]

    regressor = SVR()

    # loop through each of the parameters and find the best set
    for e, g in enumerate(ParameterGrid(parameters)):
        regressor.set_params(**g)
        regressor.fit(X_train, y_train.ravel())
        score = metrics.mean_absolute_error(regressor.predict(X_train), y_train.ravel())
        if e == 0:
            best_score = score
            best_params = g
        elif score < best_score:
            best_score = score
            best_params = g

    # refit the model with the best set of params
    regressor.set_params(**best_params)
    regressor.fit(X_train, y_train.ravel())

You need to modify the code slightly:
gsc = GridSearchCV(SVR(), param_grid=parameters, scoring='neg_mean_absolute_error',
                   cv=TimeSeriesSplit(n_splits=2).split(X))
Also, consider adding the verbose parameter so you can watch the running output.
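For instance, here is a minimal sketch of the same search with verbose output and the attributes GridSearchCV exposes after fitting (best_params_ and best_estimator_); by default (refit=True) the best parameter set is refit on all of X and y:

from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

# verbose=2 prints progress messages for each fit so you can watch it run
gsc = GridSearchCV(SVR(), param_grid=parameters,
                   scoring='neg_mean_absolute_error',
                   cv=TimeSeriesSplit(n_splits=2).split(X),
                   verbose=2)
gsc.fit(X, y.values.ravel())  # y is a single-column DataFrame, so flatten it for SVR

print(gsc.best_params_)           # best parameter combination found across the splits
best_model = gsc.best_estimator_  # already refit on the full X, y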

Related

ValueError: y_true takes value in {'True', 'False'} and pos_label is not specified in ROC_curve

from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score, roc_curve

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=2)
# generate a no skill prediction (majority class)
ns_probs = [0 for _ in range(len(y_test))]
# fit a model
model = KNeighborsClassifier(n_neighbors=3)
model.fit(x_train, y_train)
# predict probabilities
lr_probs = model.predict_proba(x_test)
# keep probabilities for the positive outcome only
lr_probs = lr_probs[:, 1]
# calculate scores
ns_auc = roc_auc_score(y_test, ns_probs)
lr_auc = roc_auc_score(y_test, lr_probs)
# summarize scores
print('No Skill: ROC AUC=%.3f' % (ns_auc))
print('Logistic: ROC AUC=%.3f' % (lr_auc))
# calculate roc curves
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)  # <-- error occurs here
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs)
...
I'm trying to use a ROC curve with the KNN algorithm. However, as you can see above, the roc_curve call fails with:
ValueError: y_true takes value in {'True', 'False'} and pos_label is not specified:
either make y_true take value in {0, 1} or {-1, 1} or pass pos_label explicitly
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
encoder.fit(data.Malware)
data['TrueorFalse'] = encoder.transform(data['TrueorFalse'])
data.value_counts(data['TrueorFalse'].values, sort=False)
data.head()
To solve this, I thought the labels I wrote, "True" and "False", were problematic because they were strings. Therefore, I applied the above code to convert the "True" and "False" labels to integers, but the error still occurs. I'm using True and False as labels in the TrueorFalse column. Is there anything I'm missing?
Instead of passing only two arguments to the function, pass the additional pos_label parameter, just as the error message suggests.
In your case, instead of calling the function like:
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs)
try like:
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs, pos_label=1)
for both of the curves. Hence the suggested modification will be:
ns_fpr, ns_tpr, _ = roc_curve(y_test, ns_probs, pos_label=1)
lr_fpr, lr_tpr, _ = roc_curve(y_test, lr_probs, pos_label=1)
Hope this might resolve your error completely!
y_test = y_test.map({'True': 1, 'False': 0}).astype(int)
Adding this code helped me to solve my problem.
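Along the same lines, you can do the mapping once before the split instead of on y_test afterwards; a minimal sketch, assuming the string labels live in data['TrueorFalse'] as in the question:

# convert the string labels to {0, 1} up front so every downstream call
# (roc_auc_score, roc_curve) works without needing pos_label
data['TrueorFalse'] = data['TrueorFalse'].map({'False': 0, 'True': 1}).astype(int)
x_train, x_test, y_train, y_test = train_test_split(x, data['TrueorFalse'], test_size=0.5, random_state=2)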

making predictions using classification models with multiple independent variables in hand

I am trying to make a simple classification using logistic regression. I fit the model and scale the values using a StandardScaler. How can I make a single prediction after that? I am getting the same result for different values; for every input, the prediction is 0. The prediction I get from single inputs does not resemble the results of the predictions made on the testing dataset. Can someone please give me a hand?
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

dataset = pd.read_csv("Social_Network_Ads.csv")
x = dataset.iloc[:, 2:4].values
y = dataset.iloc[:, 4].values
print(dataset)

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)

scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)
x_test = scaler.transform(x_test)

classifier = LogisticRegression()
classifier.fit(x_train, y_train)
y_pred = classifier.predict(x_test)

x_values = [36, 36000]
x_values = np.array(x_values).reshape(1, -1)
x_values = scaler.transform(x_values)
pred = classifier.predict(x_values)
print("single prediction: ", pred)

Training and testing ML from two different sources

I am using sklearn for a classification task. I want to train my model on data from the table "train" and test it on data from a different table, "test". Both tables have exactly the same features, but different numbers of rows. I have the code below, but I am getting the error:
(<class 'ValueError'>, ValueError('Found input variables with inconsistent numbers of samples: [123, 174]',), <traceback object at 0x0000016476E10C48>).
What am I doing wrong?
get_train_data = 'select * from train;'
get_test_data = 'select * from test;'
df_train = pd.read_sql_query(get_train_data, con=connection)
df_test = pd.read_sql_query(get_test_data, con=connection)
X = df_train[:, 2:30]
Y = df_test[:, :30]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y)
model.fit(X_train, Y_train)
predictions = model.predict(X_test)
split_mat=confusion_matrix(Y_test, predictions)
If you want to train on dataframe df_train and test on dataframe df_test, why are you taking the features from df_train and the target column from df_test and passing them to train_test_split?
You can simply do the following:
get_train_data = 'select * from train;'
get_test_data = 'select * from test;'
df_train = pd.read_sql_query(get_train_data, con=connection)
df_test = pd.read_sql_query(get_test_data, con=connection)
X_train = df_train.iloc[:, 2:30]
y_train = df_train.y  # assuming y is the name of your target variable in df_train
X_test = df_test.iloc[:, i:j]  # set i and j so that you select the same columns as in X_train
y_test = df_test.y  # assuming y is the name of your target variable in df_test
model.fit(X_train, y_train)
predictions = model.predict(X_test)
# Do something with predictions, e.g.
(predictions == y_test).mean()

Accuracy in logistic regression

Here is slightly modified code that I found here...
I am using the same logic as the original author and still not getting good accuracy. The mean reciprocal rank is close (mine: 52.79, example: 48.04).
cv = CountVectorizer(binary=True, max_df=0.95)
feature_set = cv.fit_transform(df["short_description"])

X_train, X_test, y_train, y_test = train_test_split(
    feature_set, df["category"].values, random_state=2000)

scikit_log_reg = LogisticRegression(
    verbose=1, solver="liblinear", random_state=0, C=5, penalty="l2", max_iter=1000)
model = scikit_log_reg.fit(X_train, y_train)

target = to_categorical(y_test)
y_pred = model.predict_proba(X_test)
label_ranking_average_precision_score(target, y_pred)
>> 0.5279108613021547

model.score(X_test, y_test)
>> 0.38620071684587814
But the accuracy of the notebook sample (59.80) does not match that of my code (38.62).
Is the following function, used in the sample notebook, correctly returning accuracy?
def compute_accuracy(eval_items: list):
    correct = 0
    total = 0
    for item in eval_items:
        true_pred = item[0]
        machine_pred = set(item[1])
        for cat in true_pred:
            if cat in machine_pred:
                correct += 1
                break
    accuracy = correct / float(len(eval_items))
    return accuracy
The notebook code is checking whether the actual category is in the top 3 returned from the model:
def get_top_k_predictions(model, X_test, k):
    probs = model.predict_proba(X_test)
    best_n = np.argsort(probs, axis=1)[:, -k:]
    preds = [[model.classes_[predicted_cat] for predicted_cat in prediction] for prediction in best_n]
    preds = [item[::-1] for item in preds]
    return preds
If you replace the evaluation part of your code with the below, you'll see that your model returns a top-3 accuracy of 0.5980 as well:
...
model = scikit_log_reg.fit(X_train, y_train)
top_preds = get_top_k_predictions(model, X_test, 3)
pred_pairs = list(zip([[v] for v in y_test], top_preds))
print(compute_accuracy(pred_pairs))
# below is a simpler & more Pythonic version of compute_accuracy
print(np.mean([actual in pred for actual, pred in zip(y_test, top_preds)]))

How to split data on balanced training set and test set on sklearn

I am using sklearn for a multi-classification task. I need to split all the data into a train_set and a test_set. I want to randomly take the same number of samples from each class.
Actually, I am using this function:
X_train, X_test, y_train, y_test = cross_validation.train_test_split(Data, Target, test_size=0.3, random_state=0)
but it gives an unbalanced dataset! Any suggestions?
Although Christian's suggestion is correct, technically train_test_split should give you stratified results by using the stratify param.
So you could do:
X_train, X_test, y_train, y_test = cross_validation.train_test_split(Data, Target, test_size=0.3, random_state=0, stratify=Target)
The trick here is that the stratify parameter was added in version 0.17 of sklearn.
From the documentation about the parameter stratify:
stratify : array-like or None (default is None)
If not None, data is split in a stratified fashion, using this as the labels array.
New in version 0.17: stratify splitting
You can use StratifiedShuffleSplit to create datasets featuring the same percentage of classes as the original one:
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

X = np.array([[1, 3], [3, 7], [2, 4], [4, 8]])
y = np.array([0, 1, 0, 1])

stratSplit = StratifiedShuffleSplit(n_splits=1, test_size=0.5, random_state=42)
for train_idx, test_idx in stratSplit.split(X, y):
    X_train = X[train_idx]
    y_train = y[train_idx]
    print(X_train)
    # e.g. [[3 7]
    #       [2 4]]
    print(y_train)
    # e.g. [1 0]
If the classes are not balanced but you want the split to be balanced, then stratifying isn't going to help. There doesn't seem to be a method for doing balanced sampling in sklearn but it's kind of easy using basic numpy, for example a function like this might help you:
def split_balanced(data, target, test_size=0.2):
    classes = np.unique(target)
    # test_size can be given as a fraction of the input data size or as a number of samples
    if test_size < 1:
        n_test = np.round(len(target) * test_size)
    else:
        n_test = test_size
    n_train = max(0, len(target) - n_test)
    n_train_per_class = max(1, int(np.floor(n_train / len(classes))))
    n_test_per_class = max(1, int(np.floor(n_test / len(classes))))

    ixs = []
    for cl in classes:
        if (n_train_per_class + n_test_per_class) > np.sum(target == cl):
            # if data has too few samples for this class, do upsampling
            # split the data into training and testing before sampling so data points won't be
            # shared among training and test data
            splitix = int(np.ceil(n_train_per_class / (n_train_per_class + n_test_per_class) * np.sum(target == cl)))
            ixs.append(np.r_[np.random.choice(np.nonzero(target == cl)[0][:splitix], n_train_per_class),
                             np.random.choice(np.nonzero(target == cl)[0][splitix:], n_test_per_class)])
        else:
            ixs.append(np.random.choice(np.nonzero(target == cl)[0], n_train_per_class + n_test_per_class,
                                        replace=False))

    # take the same number of samples from all classes
    ix_train = np.concatenate([x[:n_train_per_class] for x in ixs])
    ix_test = np.concatenate([x[n_train_per_class:(n_train_per_class + n_test_per_class)] for x in ixs])
    X_train = data[ix_train, :]
    X_test = data[ix_test, :]
    y_train = target[ix_train]
    y_test = target[ix_test]
    return X_train, X_test, y_train, y_test
Note that if you use this and sample more points per class than in the input data, then those will be upsampled (sample with replacement). As a result, some data points will appear multiple times and this may have an effect on the accuracy measures etc. And if some class has only one data point, there will be an error. You can easily check the numbers of points per class for example with np.unique(target, return_counts=True)
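For instance, a quick usage sketch with made-up arrays (data and target here are placeholders), including the class-count check mentioned above:

import numpy as np

data = np.random.randn(100, 5)            # 100 samples, 5 features (placeholder data)
target = np.array([0] * 80 + [1] * 20)    # imbalanced: 80 vs 20

print(np.unique(target, return_counts=True))    # points per class before splitting
X_train, X_test, y_train, y_test = split_balanced(data, target, test_size=0.2)
print(np.unique(y_train, return_counts=True))   # same number of samples per class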
Another approach is to over- or under-sample from your stratified test/train split. The imbalanced-learn library is quite handy for this, and it is especially useful if you are doing online learning and want to guarantee balanced training data within your pipelines.
from imblearn.pipeline import Pipeline as ImbalancePipeline
from imblearn.over_sampling import RandomOverSampler
from sklearn.svm import SVC

model = ImbalancePipeline(steps=[
    ('data_balancer', RandomOverSampler()),
    ('classifier', SVC()),
])
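A possible way to use it (a sketch, assuming X_train and y_train come from an ordinary stratified train_test_split): the oversampling step is only applied during fit, so the test set stays untouched.

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model.fit(X_train, y_train)           # RandomOverSampler balances the training data here only
predictions = model.predict(X_test)   # prediction goes straight through the fitted SVC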
This is the implementation I use to get train/test data indexes:
def get_safe_balanced_split(target, trainSize=0.8, getTestIndexes=True, shuffle=False, seed=None):
    classes, counts = np.unique(target, return_counts=True)
    nPerClass = float(len(target)) * float(trainSize) / float(len(classes))
    if nPerClass > np.min(counts):
        print("Insufficient data to produce a balanced training data split.")
        print("Classes found %s" % classes)
        print("Classes count %s" % counts)
        ts = float(trainSize * np.min(counts) * len(classes)) / float(len(target))
        print("trainSize is reset from %s to %s" % (trainSize, ts))
        trainSize = ts
        nPerClass = float(len(target)) * float(trainSize) / float(len(classes))
    # number of training samples to draw per class
    nPerClass = int(nPerClass)
    print("Data splitting on %i classes and returning %i per class" % (len(classes), nPerClass))
    # get training indexes
    trainIndexes = []
    for c in classes:
        if seed is not None:
            np.random.seed(seed)
        cIdxs = np.where(target == c)[0]
        cIdxs = np.random.choice(cIdxs, nPerClass, replace=False)
        trainIndexes.extend(cIdxs)
    # get test indexes
    testIndexes = None
    if getTestIndexes:
        testIndexes = list(set(range(len(target))) - set(trainIndexes))
    # shuffle in place (np.random.shuffle returns None, so don't reassign)
    if shuffle:
        np.random.shuffle(trainIndexes)
        if testIndexes is not None:
            np.random.shuffle(testIndexes)
    # return indexes
    return trainIndexes, testIndexes
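A brief usage sketch, assuming X and y are NumPy arrays:

# turn the returned index lists back into train/test arrays
trainIndexes, testIndexes = get_safe_balanced_split(y, trainSize=0.8, seed=42)
X_train, y_train = X[trainIndexes], y[trainIndexes]
X_test, y_test = X[testIndexes], y[testIndexes]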
This is the function I am using. You can adapt it and optimize it.
# Returns a test dataset that contains equal amounts of each class
# y should contain only two classes, 0 and 1
def TrainSplitEqualBinary(X, y, samples_n):  # samples_n per class
    indicesClass1 = []
    indicesClass2 = []
    for i in range(0, len(y)):
        if y[i] == 0 and len(indicesClass1) < samples_n:
            indicesClass1.append(i)
        elif y[i] == 1 and len(indicesClass2) < samples_n:
            indicesClass2.append(i)
        if len(indicesClass1) == samples_n and len(indicesClass2) == samples_n:
            break
    X_test_class1 = X[indicesClass1]
    X_test_class2 = X[indicesClass2]
    X_test = np.concatenate((X_test_class1, X_test_class2), axis=0)
    # remove X_test from X
    X_train = np.delete(X, indicesClass1 + indicesClass2, axis=0)
    y_test_class1 = y[indicesClass1]
    y_test_class2 = y[indicesClass2]
    y_test = np.concatenate((y_test_class1, y_test_class2), axis=0)
    # remove y_test from y
    y_train = np.delete(y, indicesClass1 + indicesClass2, axis=0)
    if X_test.shape[0] != 2 * samples_n or y_test.shape[0] != 2 * samples_n:
        raise Exception("Problem with split 1!")
    if X_train.shape[0] + X_test.shape[0] != X.shape[0] or y_train.shape[0] + y_test.shape[0] != y.shape[0]:
        raise Exception("Problem with split 2!")
    return X_train, X_test, y_train, y_test
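A quick usage sketch with a synthetic imbalanced array (made-up sizes, just for illustration):

import numpy as np

X = np.random.randn(1000, 4)
y = np.array([0] * 900 + [1] * 100)   # heavily imbalanced binary labels

# take 50 samples of each class for the test set; everything else goes to training
X_train, X_test, y_train, y_test = TrainSplitEqualBinary(X, y, samples_n=50)
print(X_test.shape, np.bincount(y_test))   # (100, 4) [50 50]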
