I came across the idea of seeding my neural network for reproducible results, and was wondering if pytorch seeding affects dropout layers and what is the proper way to seed my training/testing?
I'm reading the documentation here, and wondering if just placing these lines will be enough?
torch.manual_seed(1)
torch.cuda.manual_seed(1)
You can easily answer your question with a few lines of code:
import torch
from torch import nn
dropout = nn.Dropout(0.5)
torch.manual_seed(9999)
a = dropout(torch.ones(1000))
torch.manual_seed(9999)
b = dropout(torch.ones(1000))
print(sum(abs(a - b)))
# > tensor(0.)
Yes, using manual_seed is enough.
Actually it depends on your device.
If CPU, this is enough:
torch.manual_seed(1)
If CUDA, you additionally need:
torch.cuda.manual_seed(1)
torch.backends.cudnn.deterministic = True
Lastly, the following code makes sure the results are reproducible across Python, NumPy and PyTorch.
import random
import numpy
import torch

def setup_seed(seed):
    random.seed(seed)
    numpy.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True

setup_seed(42)
Does scikit-learn support transfer learning? Please check the following code.
Model clf is obtained by fit(X, y).
Can model clf2 learn on the basis of clf and transfer-learn by calling fit(X2, y2)?
>>> from sklearn import svm
>>> from sklearn import datasets
>>> clf = svm.SVC()
>>> X, y= ....
>>> clf.fit(X, y)
SVC()
>>> import pickle
>>> s = pickle.dumps(clf)
>>> clf2 = pickle.loads(s)
>>> clf2.fit(X2,y2)
>>> clf2.predict(X[0:1])
In the context of scikit-learn there's no transfer learning as such; there is incremental learning, also called continuous or online learning.
By looking at your code, whatever you're intending to do won't work the way you're thinking here. From this scikit-learn documentation:
Calling fit() more than once will overwrite what was learned by any
previous fit()
Which means using fit() more than once on the same model will simply overwrite all the previously fitted coefficients, weights, intercept (bias), etc.
However, if you want to fit a portion of your data set and then improve your model by fitting new data, what you can do is look for estimators that implement the partial_fit API.
If we call partial_fit() multiple times, the framework will update the
existing weights instead of re-initialising them.
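For instance, a minimal sketch with SGDClassifier, one of the scikit-learn estimators that implements partial_fit (the toy data here is purely illustrative):
import numpy as np
from sklearn.linear_model import SGDClassifier

X1, y1 = np.random.random((100, 5)), np.random.randint(0, 2, 100)  # first batch
X2, y2 = np.random.random((100, 5)), np.random.randint(0, 2, 100)  # later batch

clf = SGDClassifier()
clf.partial_fit(X1, y1, classes=np.array([0, 1]))  # first call must list all classes
clf.partial_fit(X2, y2)                            # updates weights, no re-initialisation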
Another way to do incremental learning with scikit-learn is to look for algorithms that support the warm_start parameter.
From this doc:
warm_start: bool, default=False
When set to True, reuse the solution of
the previous call to fit() as initialization, otherwise, just erase the
previous solution. Useless for liblinear solver.
Another example is RandomForestRegressor.
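A minimal sketch of warm_start with RandomForestRegressor (data and sizes are illustrative): each new fit() call keeps the trees already built and only grows the ensemble up to the new n_estimators.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X1, y1 = np.random.random((100, 5)), np.random.random(100)
X2, y2 = np.random.random((100, 5)), np.random.random(100)

rf = RandomForestRegressor(n_estimators=50, warm_start=True)
rf.fit(X1, y1)           # builds the first 50 trees
rf.n_estimators = 100
rf.fit(X2, y2)           # keeps the old trees and adds 50 more, fitted on the new data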
So I made a CNN that classifies two types of birds, and it worked fine. After that, I tried adding one more type, but I got weird results. I already posted this on AI Stack Exchange, but they said it's better to ask it here, so I am providing a link to that post.
https://ai.stackexchange.com/q/11444/23452
Here is the model code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import TensorBoard
import pickle
import time as time
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction = 0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
pickle_in = open("C:/Users/Recep/Desktop/programlar/python/X.pickle","rb")
X = pickle.load(pickle_in)
pickle_in = open("C:/Users/Recep/Desktop/programlar/python/Y.pickle","rb")
Y = pickle.load(pickle_in)
X = X/255.0
node_size = 64
model_name = "agi_vs_golden-{}".format(time.time())
tensorboard = TensorBoard(log_dir='C:/Users/Recep/Desktop/programlar/python/logs/{}'.format(model_name))
file_writer = tf.summary.FileWriter('C:/Users/Recep/Desktop/programlar/python/logs/{}'.format(model_name), sess.graph)
model = Sequential()
model.add(Conv2D(node_size,(3,3),input_shape = X.shape[1:]))
# not sure what that input shape does; apart from that and validation I have no problem
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(node_size,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(node_size))
model.add(Activation("relu"))
model.add(Dense(1))
model.add(Activation("sigmoid"))
model.compile(loss="binary_crossentropy",optimizer="adam",metrics=["accuracy"])
model.fit(X,Y,batch_size=25,epochs=8,validation_split=0.1,callbacks=[tensorboard])
# not sure what validation is or how it's used, but I don't think it caused the problem
model.save("agi_vs_gouldian.model")
By the way, as I said in the comments of my original post, I think maybe the network is under-trained or I don't have enough data, so I tried increasing the number of epochs. That mostly fixed the problem, but the part I'm curious about is what happened when I had fewer epochs.
Can anyone help me?
I am giving the TensorBoard graphs below.
By the way, is my data array RGB?
And how can I get rid of this local maximum at 70% accuracy?
And since I'm a beginner at this, I don't really know how validation works, but I noticed that the validation graphs stayed the same in the first training run that had the issues.
You are trying to classify three types of birds with sigmoid. Sigmoid is good for binary classification; for more than two classes, try a softmax activation layer and see how it goes. I suggest replacing
model.add(Dense(1))
model.add(Activation("sigmoid"))
with
model.add(Dense(3, activation='softmax'))
where 3 is the number of bird classes you want to predict.
Have a look here, a very good tutorial on using softmax as the output activation for multi-class classification:
https://machinelearningmastery.com/multi-class-classification-tutorial-keras-deep-learning-library/
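Note that a 3-way softmax output also needs a matching loss when compiling; a minimal sketch, assuming integer class labels 0, 1, 2 (with one-hot labels, use categorical_crossentropy instead):
model.add(Dense(3, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])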
I've tried to train a 2-layer neural network on a simple linear interpolation of a discrete function. I've tried lots of different learning rates as well as different activation functions, and it seems like nothing is being learned!
I've literally spent the last 6 hours trying to debug the following code, but it seems like there's no bug! What's the explanation?
from torch.utils.data import Dataset
import os
import torch
import numpy as np
import torch.nn as nn
import torch.optim as optim
import random
LOW_X=255
MID_X=40000
HIGH_X=200000
LOW_Y=torch.Tensor([0,0,1])
MID_Y=torch.Tensor([0.2,0.5,0.3])
HIGH_Y=torch.Tensor([1,0,0])
BATCH_SIZE=4
def x_to_tensor(x):
    if x<=MID_X:
        return LOW_Y+(x-LOW_X)*(MID_Y-LOW_Y)/(MID_X-LOW_X)
    if x<=HIGH_X:
        return MID_Y+(x-MID_X)*(HIGH_Y-MID_Y)/(HIGH_X-MID_X)
    return HIGH_Y

class XYDataset(Dataset):
    LENGTH=10000
    def __len__(self):
        return self.LENGTH
    def __getitem__(self, idx):
        x=random.randint(LOW_X,HIGH_X)
        y=x_to_tensor(x)
        return x,y

class Interpolate(nn.Module):
    def __init__(self, num_outputs, hidden_size=10):
        super(Interpolate, self).__init__()
        self.hidden_size=hidden_size
        self.x_to_hidden = nn.Linear(1, hidden_size)
        self.hidden_to_out = nn.Linear(hidden_size, num_outputs)
        self.activation = nn.Tanh() # I have tried Sigmoid and ReLU activations as well
        self.softmax=torch.nn.Softmax(dim=1)
    def forward(self, x):
        out = self.x_to_hidden(x)
        out = self.activation(out)
        out = self.hidden_to_out(out)
        out = self.softmax(out)
        return out

dataset=XYDataset()
trainloader = torch.utils.data.DataLoader(dataset, batch_size=BATCH_SIZE,
                                          shuffle=True, num_workers=4)
criterion = nn.MSELoss()

def train_net(net, epochs=10, lr=5.137871216190041e-05, l2_regularization=2.181622809797563e-12):
    optimizer = optim.Adam(net.parameters(), lr=lr, weight_decay=l2_regularization)
    net.train(True)
    running_loss=0.0
    for epoch in range(epochs):
        for i, data in enumerate(trainloader):
            inputs, targets = data
            inputs, targets = torch.FloatTensor(inputs.float()).view(-1,1), torch.FloatTensor(targets.float())
            optimizer.zero_grad()
            outputs = net(inputs)
            loss = criterion(outputs, targets)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
            if (len(trainloader)*epoch+i)%200 == 199:
                running_loss = running_loss/(200*BATCH_SIZE)
                print('[%d,%5d] loss: %.6f ' % (epoch+1, i+1, running_loss))
                running_loss = 0.0

for i in range(-11, 3):
    net = Interpolate(num_outputs=3)
    train_net(net, lr=10**i, epochs=1)
    print('for learning rate {} net output on low x is {}'.format(i, net(torch.Tensor([255]).view(-1,1))))
Although your problem is quite simple, it is poorly scaled: x ranges from 255 to 200K. This poor scaling leads to numerical instability and overall makes the training process unnecessarily unstable.
To overcome this technical issue, you simply need to scale your inputs to [-1, 1] (or [0, 1]) range.
Note that this scaling is quite ubiquitous in deep-learning: images are scaled to [-1, 1] range (see, e.g., torchvision.transforms.Normalize).
To understand better the importance of scaled responses, you can look into the mathematical analysis done in this paper.
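A minimal sketch of the fix, using the bounds LOW_X and HIGH_X already defined in the question's code (the helper name is my own):
def scale_input(x, x_min=float(LOW_X), x_max=float(HIGH_X)):
    # map x from [LOW_X, HIGH_X] into [0, 1]
    return (x - x_min) / (x_max - x_min)

# then, inside the training loop:
# outputs = net(scale_input(inputs))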
You can perform a simple interpolation with a NN, however you have to consider the following. I would recommend these settings:
For the activation function: for a simple interpolation, an identity activation function can turn the NN into a linear regressor, which may generalize well. You should also consider the Rectified Linear Unit (ReLU) for big data, and logistic/tanh for regular-sized data, as other options.
For big amounts of data I would select an iterative optimizer for the weights, such as plain gradient descent or Adam. On the other hand, if you have little data, I would use a quasi-Newton approximation such as L-BFGS, since you will get a good approximation of the weights in reasonably low computational time.
Vary the number of neurons in each layer and the number of layers, performing batch learning, to seek better approximations.
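As a concrete illustration (the choice of scikit-learn's MLPRegressor here is my own; the advice above is library-agnostic), these settings map directly onto its parameters:
import numpy as np
from sklearn.neural_network import MLPRegressor

# toy discrete function interpolated piecewise-linearly, inputs already scaled to [0, 1]
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.interp(X.ravel(), [0.0, 0.5, 1.0], [0.0, 0.8, 0.3])

# tanh activation and the L-BFGS solver, as suggested above for small data sets
mlp = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                   solver='lbfgs', max_iter=2000)
mlp.fit(X, y)
print(mlp.predict(X[:5]))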
Is it possible to train a model in XGBoost that has multiple continuous outputs (multi-output regression)?
What would be the objective for training such a model?
Thanks in advance for any suggestions.
My suggestion is to use sklearn.multioutput.MultiOutputRegressor as a wrapper of xgb.XGBRegressor. MultiOutputRegressor trains one regressor per target and only requires that the regressor implements fit and predict, which xgboost happens to support.
# get some noised linear data
import numpy as np
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor

X = np.random.random((1000, 10))
a = np.random.random((10, 3))
y = np.dot(X, a) + np.random.normal(0, 1e-3, (1000, 3))
# fitting: one XGBRegressor is trained per target column
multioutputregressor = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:linear')).fit(X, y)
# predicting
print(np.mean((multioutputregressor.predict(X) - y)**2, axis=0))  # ~0.004, 0.003, 0.005
This is probably the easiest way to regress multi-dimension targets using xgboost as you would not need to change any other part of your code (if you were using the sklearn API originally).
However, this method does not leverage any possible relations between the targets. You can try to design a customized objective function to achieve that.
Multiple output regression is now available in the nightly build of XGBoost, and will be included in XGBoost 1.6.0.
See https://github.com/dmlc/xgboost/blob/master/demo/guide-python/multioutput_regression.py for an example.
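A minimal sketch of the native interface, assuming XGBoost >= 1.6 (based on the linked demo; the toy data is illustrative):
import numpy as np
import xgboost as xgb

X = np.random.random((1000, 10))
y = np.random.random((1000, 3))  # three continuous targets

# the sklearn wrapper accepts a 2D target directly in XGBoost >= 1.6
reg = xgb.XGBRegressor(tree_method="hist").fit(X, y)
print(reg.predict(X[:5]).shape)  # (5, 3): one column per target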
It generates warnings: reg:linear is now deprecated in favor of reg:squarederror, so here is an updated version of @ComeOnGetMe's answer:
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn.multioutput import MultiOutputRegressor
# get some noised linear data
X = np.random.random((1000, 10))
a = np.random.random((10, 3))
y = np.dot(X, a) + np.random.normal(0, 1e-3, (1000, 3))
# fitting
multioutputregressor = MultiOutputRegressor(xgb.XGBRegressor(objective='reg:squarederror')).fit(X, y)
# predicting
print(np.mean((multioutputregressor.predict(X) - y)**2, axis=0))
Out:
[2.00592697e-05 1.50084441e-05 2.01412247e-05]
I would place a comment but I lack the reputation. In addition to @Jesse Anderson's answer: to install the most recent version, select the top link from here:
https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/list.html?prefix=master/
Make sure to select the one for your operating system.
Use pip install to install the wheel. I.e. for macOS:
pip install https://s3-us-west-2.amazonaws.com/xgboost-nightly-builds/master/xgboost-1.6.0.dev0%2B4d81c741e91c7660648f02d77b61ede33cef8c8d-py3-none-macosx_10_15_x86_64.macosx_11_0_x86_64.macosx_12_0_x86_64.whl
You can use linear regression, random forest regressors and some other related algorithms in scikit-learn to produce multi-output regression; not sure about XGBoost. The gradient boosting regressor in scikit-learn does not allow multiple outputs. For those asking when this may be necessary: one example would be forecasting multiple steps of a time series ahead.
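A minimal sketch with toy data: scikit-learn's RandomForestRegressor supports multi-output targets natively.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X = np.random.random((1000, 10))
y = np.random.random((1000, 3))  # e.g. three future time steps per sample

rf = RandomForestRegressor(n_estimators=100).fit(X, y)
print(rf.predict(X[:5]).shape)  # (5, 3): one prediction per target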
Based on the above discussion, I have extended the univariate XGBoostLSS to a multivariate framework called Multi-Target XGBoostLSS Regression that models multiple targets and their dependencies in a probabilistic regression setting. Code follows soon.
I am still very new to machine learning and trying to figure things out myself. I am using scikit-learn and have a data set of tweets with around 20,000 features (n_features=20,000). So far I achieved a precision, recall and F1 score of around 79%. I would like to use RFECV for feature selection and improve the performance of my model. I have read the scikit-learn documentation but am still a bit confused on how to use RFECV.
This is the code I have so far:
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.cross_validation import cross_val_score
from sklearn.feature_selection import RFECV
from sklearn import metrics
# cross validation
sss = StratifiedShuffleSplit(y, 5, test_size=0.2, random_state=42)
for train_index, test_index in sss:
    docs_train, docs_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
# feature extraction
count_vect = CountVectorizer(stop_words='english', min_df=3, max_df=0.90, ngram_range=(1,3))
X_CV = count_vect.fit_transform(docs_train)
tfidf_transformer = TfidfTransformer()
X_tfidf = tfidf_transformer.fit_transform(X_CV)
# Create the RFECV object
nb = MultinomialNB(alpha=0.5)
# The "accuracy" scoring is proportional to the number of correct classifications
rfecv = RFECV(estimator=nb, step=1, cv=2, scoring='accuracy')
rfecv.fit(X_tfidf, y_train)
X_rfecv=rfecv.transform(X_tfidf)
print("Optimal number of features : %d" % rfecv.n_features_)
# train classifier
clf = MultinomialNB(alpha=0.5).fit(X_rfecv, y_train)
# test clf on test data
X_test_CV = count_vect.transform(docs_test)
X_test_tfidf = tfidf_transformer.transform(X_test_CV)
X_test_rfecv = rfecv.transform(X_test_tfidf)
y_predicted = clf.predict(X_test_rfecv)
#print the mean accuracy on the given test data and labels
print ("Classifier score is: %s " % rfecv.score(X_test_rfecv,y_test))
Three questions:
1) Is this the correct way to use cross validation and RFECV? I am especially interested to know if I am running any risk of overfitting.
2) The accuracy of my model before and after I implemented RFECV with the above code are almost the same (around 78-79%), which puzzles me. I would expect performance to improve by using RFECV. Anything I might have missed here or could do differently to improve the performance of my model?
3) What other feature selection methods could you recommend me to try? I have tried RFE and SelectKBest so far, but neither has given me any improvement in terms of model accuracy.
To answer your questions:
There is a cross-validation built in the RFECV feature selection (hence the name), so you don't really need to have additional cross-validation for this single step. However since I understand you are running several tests, it's good to have an overall cross-validation to ensure you're not overfitting to a specific train-test split. I'd like to mention 2 points here:
I doubt the code behaves exactly like you think it does ;).
# cross validation
sss = StratifiedShuffleSplit(y, 5, test_size=0.2, random_state=42)
for train_index, test_index in sss:
    docs_train, docs_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
# feature extraction
count_vect = CountVectorizer(stop_words='english', min_df=3, max_df=0.90, ngram_range=(1,3))
X_CV = count_vect.fit_transform(docs_train)
Here we first go through the loop, which has 5 iterations (the n_iter parameter of StratifiedShuffleSplit). Then we exit the loop, and all the remaining code runs with just the last values of train_index and test_index. So this is equivalent to a single train-test split, where you probably meant to have 5. You should move your code back into the loop if you want it to run like a 'proper' cross-validation.
You are worried about overfitting: indeed when 'looking for the best method' the risk exists that we're going to pick the method that works best... only on the small sample we're testing the method on.
Here the best practice is to have a first train-test split, then to perform cross-validation only using the train set. The test set can be used 'sparingly' when you think you found something, to make sure the scores you get are consistent and you're not overfitting.
It may look like you're throwing away 20% of your data (your test set), but it's absolutely worth it.
It can be puzzling to see feature selection does not have that big an impact. To introspect a bit more you could look into the evolution of the score with the number of selected features (see the example from the docs).
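For instance, a minimal sketch of that introspection, assuming the fitted rfecv from the code above (grid_scores_ is the attribute in the scikit-learn version used here; newer releases replaced it with cv_results_):
import matplotlib.pyplot as plt

# one CV score per number of selected features (step=1)
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.xlabel("Number of features selected")
plt.ylabel("Cross-validation score")
plt.show()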
That being said, I don't think this is the right use case for RFE. Basically with your code you are eliminating features one by one, which probably takes a long time to run and does not make so much sense when you have 20000 features.
Other feature selection methods: here you mention SelectKBest but you don't tell us which method you use to score your features! SelectKBest will pick the K best features according to a score function. I'm guessing you were using the default which is ok, but it's better to have an idea of what the default does ;).
I would try SelectPercentile with chi2 as a score function. SelectPercentile is probably a bit more convenient than SelectKBest because if your dataset grows a percentage probably makes more sense than a hardcoded number of features.
Another example from the docs that does just that (and more).
Additional remarks:
You could use a TfidfVectorizer instead of a CountVectorizer followed by a TfidfTransformer. This is strictly equivalent.
You could use a pipeline object to pack the different steps of your classifier into a single object you can run cross validation on (I encourage you to read the docs, it's pretty useful).
from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectPercentile
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
pipeline = Pipeline(steps=[
    ("vectorizer", TfidfVectorizer(stop_words='english', min_df=3, max_df=0.90, ngram_range=(1,3))),
    ("selector", SelectPercentile(score_func=chi2, percentile=70)),
    ('NB', MultinomialNB(alpha=0.5))
])
Then you'd be able to run cross validation on the pipeline object to find the best combination of alpha and percentile, which is much harder to do with separate estimators.
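A minimal sketch of such a search (the grid values are my own illustrative choices; in the old scikit-learn version used above, GridSearchCV lives in sklearn.grid_search rather than sklearn.model_selection):
from sklearn.model_selection import GridSearchCV

param_grid = {
    'selector__percentile': [50, 70, 90],  # step names must match the pipeline
    'NB__alpha': [0.1, 0.5, 1.0],
}
grid = GridSearchCV(pipeline, param_grid, scoring='accuracy', cv=5)
grid.fit(docs_train, y_train)  # raw documents: the pipeline handles vectorization
print(grid.best_params_, grid.best_score_)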
Hope this helps, happy learning ;).