KMeans vs MiniBatchKMeans - machine-learning

I have a dataset of 1 million rows × 44 columns.
When I load all of the data into KMeans I run out of memory (the machine has very little RAM).
I looked at MiniBatchKMeans, which has a partial_fit method.
Do I understand correctly that this code will give approximately the same result?
from sklearn.cluster import MiniBatchKMeans
import pandas as pd

kmeans = MiniBatchKMeans(n_clusters=4, random_state=0, batch_size=100_000, n_init="auto")
for chunk in pd.read_csv(csv_path, names=csv_names, delimiter=csv_delimiter, chunksize=100_000):
    kmeans.partial_fit(chunk)
Why can't I stream data into KMeans in scikit-learn, using the centroids from the previous pass as the starting point for the next one?
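(For illustration only: the "warm start" the question describes can be approximated by hand by seeding each full KMeans fit with the centroids from the previous chunk. This is a hedged sketch, not equivalent to fitting KMeans on the whole dataset; csv_path, csv_names and csv_delimiter are assumed to be defined as above.)

# Hypothetical sketch: warm-starting full KMeans chunk by chunk.
import pandas as pd
from sklearn.cluster import KMeans

centers = "k-means++"          # the first chunk uses the default init
for chunk in pd.read_csv(csv_path, names=csv_names,
                         delimiter=csv_delimiter, chunksize=100_000):
    km = KMeans(n_clusters=4, init=centers, n_init=1, random_state=0).fit(chunk)
    centers = km.cluster_centers_   # seed the next chunk with this chunk's centroids

Each fit here only optimizes the centroids on the current chunk, which is roughly what MiniBatchKMeans.partial_fit does in a more principled way, so the snippet in the question is the supported route.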

Related

Neural Network Trains Fine and Test Predictions are Horrible Bordering on Ridiculous

I am having a lot of trouble with a neural network model using the R neuralnet() function. When I train a network on all of the data, the predictions are very accurate, as expected. However, when I split the data into training and test sets, the test predictions are terrible. I cannot figure out what I am doing wrong. I would appreciate any advice or help troubleshooting, as I don't think I'll be able to figure this out on my own. Thanks in advance.
I have included the R code, some plots, and an example of the data below; the full data set is 3600 observations.
Best Regards-Pat
UPDATE 05/12/18: BASED ON FEEDBACK THAT THIS LOOKS LIKE OVERFITTING, I TRIED STOPPING THE TRAINING EARLIER AND FOUND THAT THE MSE OF THE TEST PREDICTION NEVER GETS VERY LOW AND IS LOWEST APPROACHING 0 TRAINING EPOCHS AND RISES FROM THERE (PLOT INCLUDED AND CODE APPENDED)
###########
#ANN Models
###########
#Load libraries
library(plyr)
library(ggplot2)
library(gridExtra)
library(neuralnet)
#Retain only numerically coded data from data1 in (data2) for ANN fitting
data2 = data1[,c(3:7)]
#Calculate Min and Max for Scaling
max_data = apply(data2,2,max)
min_data = apply(data2,2,min)
#Scale data 0-1
data2_scaled = scale(data2,center=min_data,scale=max_data-min_data)
#Check data structure
data2_scaled
#Fit neural net model
model_nn1 = neuralnet(formula=time~instructions+nodes+machine_num+app_num,data=data2_scaled,hidden=c(8,8),stepmax=1000000,threshold=0.01)
#Calculate Min and Max Response for rescaling
max_time = max(data2$time)
min_time = min(data2$time)
#Rescale neural net response predictions
pred_nn1 = model_nn1$net.result[[1]][,1]*(max_time-min_time)+min_time
#Compare model predictions to actual values
a03 = cbind.data.frame(data1$time,pred_nn1,data1$machine,data1$app)
colnames(a03) = c("actual","prediction","machine","app")
attach(a03)
p01 = ggplot(a03,aes(x=actual,y=prediction))+
geom_point(aes(color=machine),size=1)+
scale_y_continuous("Predicted Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
scale_x_continuous("Actual Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
ggtitle("Neural Net Fit (ALL DATA):\nActual vs. Predicted Execution Time")+
geom_abline(intercept=0,slope=1)+
theme_light()
p02 = ggplot(a03,aes(x=actual,y=prediction))+
geom_point(aes(color=app),size=1)+
scale_y_continuous("Predicted Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
scale_x_continuous("Actual Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
ggtitle("Neural Net Fit (ALL DATA):\nActual vs. Predicted Execution Time")+
geom_abline(intercept=0,slope=1)+
theme_light()
grid.arrange(p01,p02,nrow=1)
#Visualize ANN
plot(model_nn1)
#Epochs taken to train "steps"
model_nn1$result.matrix[3,]
#########################
#Testing and Training ANN
#########################
#Split the data into a test and training set
index = sample(1:nrow(data2_scaled),round(0.80*nrow(data2_scaled)))
train_data = as.data.frame(data2_scaled[index,])
test_data = as.data.frame(data2_scaled[-index,])
model_nn2 = neuralnet(formula=time~instructions+nodes+machine_num+app_num,data=train_data,hidden=c(3,2),stepmax=1000000,threshold=0.01)
pred_nn2_scaled = compute(model_nn2,test_data[,c(1,2,4,5)])
pred_nn2 = pred_nn2_scaled$net.result*(max_time-min_time)+min_time
test_data_time = test_data$time*(max_time-min_time)+min_time
a04 = cbind.data.frame(test_data_time,pred_nn2,data1[-index,2],data1[-index,1])
colnames(a04) = c("actual","prediction","machine","app")
attach(a04)
p01 = ggplot(a04,aes(x=actual,y=prediction))+
geom_point(aes(color=machine),size=1)+
scale_y_continuous("Predicted Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
scale_x_continuous("Actual Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
ggtitle("Neural Net Fit (TEST DATA):\nActual vs. Predicted Execution Time")+
geom_abline(intercept=0,slope=1)+
theme_light()
p02 = ggplot(a04,aes(x=actual,y=prediction))+
geom_point(aes(color=app),size=1)+
scale_y_continuous("Predicted Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
scale_x_continuous("Actual Execution Time [s]",breaks=seq(0,1000,100),limits=c(0,1000))+
ggtitle("Neural Net Fit (TEST DATA):\nActual vs. Predicted Execution Time")+
geom_abline(intercept=0,slope=1)+
theme_light()
grid.arrange(p01,p02,nrow=1)
#EARLY STOPPING TEST
i = 1000
summary_data = data.frame(matrix(rep(0,4*i),ncol=4))
colnames(summary_data) = c("threshold","epochs","train_mse","test_mse")
for (j in 1:i){
a = runif(1,min=0.01,max=10)
#Train the model
model_nn2 = neuralnet(formula=time~instructions+nodes+machine_num+app_num,data=train_data,hidden=3,stepmax=1000000,threshold=a)
#Calculate Min and Max Response for rescaling
max_time = max(data2$time)
min_time = min(data2$time)
#Predict test data from trained nn
pred_nn2_scaled = compute(model_nn2,test_data[,c(1,2,4,5)])
#Rescale test prediction
pred_test_data_time = pred_nn2_scaled$net.result*(max_time-min_time)+min_time
#Rescale test actual
test_data_time = test_data$time*(max_time-min_time)+min_time
#Rescale train prediction
pred_train_data_time = model_nn2$net.result[[1]][,1]*(max_time-min_time)+min_time
#Rescale train actual
train_data_time = train_data$time*(max_time-min_time)+min_time
#Calculate mse
test_mse = mean((pred_test_data_time-test_data_time)^2)
train_mse = mean((pred_train_data_time-train_data_time)^2)
#Summarize
summary_data[j,1] = a
summary_data[j,2] = model_nn2$result.matrix[3,]
summary_data[j,3] = round(train_mse,3)
summary_data[j,4] = round(test_mse,3)
print(summary_data[j,])
}
plot(summary_data$epochs,summary_data$test_mse,pch=19,xlim=c(0,2000),ylim=c(0,300000),cex=0.5,xlab="Training Steps",ylab="MSE",main="Early Stopping Test: Comparing MSE : TEST=BLACK TRAIN=RED")
points(summary_data$epochs,summary_data$train_mse,pch=19,col=2,cex=0.5)
I would guess that it is overfitting. The network is learning to reproduce the data like a dictionary instead of learning the underlying function in the data. There are various things which can cause this and ways to address them.
Things which cause overfitting are:
The network could be training for too long.
The network could have far more weights than training examples.
Ways to reduce overfitting are:
Create a validation dataset and stop training the network as soon as the
validation set's loss starts increasing. This is a necessity.
Reduce the network size (fewer weights).
Use a regularization technique such as weight decay or dropout.
Also, it may be possible that the problem is too difficult for a neural network to solve based on the data it is given. Reproducing training data does not prove that the network can solve the problem; it only proves that the network can memorize it like a dictionary.
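To make the first remedy concrete, here is a minimal Keras sketch of a validation split with early stopping. This is not the poster's R neuralnet setup; X and y are assumed to be the already-scaled predictors and response.

# Hedged sketch: hold out a validation set and stop when its loss stops improving.
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

# 4 scaled inputs -> 1 scaled output, mirroring the hidden=c(8,8) architecture above
model = Sequential([
    Dense(8, activation='tanh', input_dim=4),
    Dense(8, activation='tanh'),
    Dense(1, activation='linear'),
])
model.compile(loss='mean_squared_error', optimizer='adam')

# hold out 20% of the data for validation and stop once val_loss stops improving
stopper = EarlyStopping(monitor='val_loss', patience=10, verbose=1)
model.fit(X, y, validation_split=0.2, epochs=1000, callbacks=[stopper], verbose=0)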

Non-linear multivariate time-series response prediction using RNN

I am trying to predict the hygrothermal response of a wall, given the interior and exterior climate. Based on literature research, I believe this should be possible with RNN but I have not been able to get good accuracy.
The dataset has 12 input features (time-series of exterior and interior climate data) and 10 output features (time-series of the hygrothermal response), both containing hourly values for 10 years. The data was created with hygrothermal simulation software; there is no missing data.
The dataset features and targets are listed in the original post (tables omitted here).
Unlike most time-series prediction problems, I want to predict the response over the full length of the input time-series at each time step, rather than the subsequent values of a time-series (e.g. financial time-series prediction). I have not been able to find similar prediction problems (in this or other fields), so if you know of one, references are very welcome.
I think this should be possible with RNN, so I am currently using LSTM from Keras. Before training, I preprocess my data the following way:
Discard the first year of data, as the first time steps of the wall's hygrothermal response are influenced by the initial temperature and relative humidity.
Split into training and test sets. The training set contains the first 8 years of data; the test set contains the remaining 2 years.
Normalise the training set (zero mean, unit variance) using StandardScaler from sklearn. Normalise the test set analogously using the mean and variance from the training set.
This results in: X_train.shape = (1, 61320, 12), y_train.shape = (1, 61320, 10), X_test.shape = (1, 17520, 12), y_test.shape = (1, 17520, 10)
As these are long time-series, I use stateful LSTM and cut the time-series as explained here, using the stateful_cut() function. I only have 1 sample, so batch_size is 1. For T_after_cut I have tried 24 and 120 (24*5); 24 appears to give better results. This results in X_train.shape = (2555, 24, 12), y_train.shape = (2555, 24, 10), X_test.shape = (730, 24, 12), y_test.shape = (730, 24, 10).
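Roughly, the preprocessing described above might look like the following sketch. Variable names are illustrative, the train length is taken from the shapes quoted above, and the final reshape stands in for the stateful_cut() helper from the linked article.

# Hedged sketch of the preprocessing steps described above.
# X_raw: (87600, 12) hourly climate features for 10 years; y_raw: (87600, 10) hourly responses.
import numpy as np
from sklearn.preprocessing import StandardScaler

X_raw, y_raw = X_raw[8760:], y_raw[8760:]        # discard the first year (8760 hours)

n_train = 61320                                  # training length from the post
X_train, X_test = X_raw[:n_train], X_raw[n_train:]
y_train, y_test = y_raw[:n_train], y_raw[n_train:]

scaler = StandardScaler().fit(X_train)           # mean/variance from the training set only
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
# (the outputs could be standardised the same way, as the accepted fix below does)

T_after_cut, features, targets = 24, 12, 10
X_train = X_train.reshape(-1, T_after_cut, features)   # (2555, 24, 12)
y_train = y_train.reshape(-1, T_after_cut, targets)    # (2555, 24, 10)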
Next, I build and train the LSTM model as follows:
from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed
from keras.optimizers import Adam

model = Sequential()
model.add(LSTM(128,
               batch_input_shape=(batch_size, T_after_cut, features),
               return_sequences=True,
               stateful=True,
               ))
model.add(TimeDistributed(Dense(targets)))
model.compile(loss='mean_squared_error', optimizer=Adam())
model.fit(X_train, y_train, epochs=100, batch_size=batch_size, verbose=2, shuffle=False)
Unfortunately, I don't get accurate prediction results, not even for the training set, so the model has high bias.
The prediction results of the LSTM model for all targets
How can I improve my model? I have already tried the following:
Not discarding the first year of the dataset -> no significant difference
Differencing the input feature time-series (subtracting the previous value from the current value) -> slightly worse results
Up to four stacked LSTM layers, all with the same hyperparameters -> no significant difference in results but longer training time
Dropout layer after LSTM layer (though this is usually used to reduce variance and my model has high bias) -> slightly better results, but difference might not be statistically significant
Am I doing something wrong with the stateful LSTM? Do I need to try different RNN models? Should I preprocess the data differently?
Furthermore, training is very slow: about 4 hours for the model above. Hence I am reluctant to do an extensive hyperparameter gridsearch...
In the end, I managed to solve this the following way:
Using more samples to train instead of only 1 (I used 18 samples to train and 6 to test)
Keep the first year of data, as the output time-series for all samples have the same 'starting point' and the model needs this information to learn
Standardise both input and output features (zero mean, unit variance). I found this improved prediction accuracy and training speed
Use stateful LSTM as described here, but add reset states after epoch (see below for code). I used batch_size = 6 and T_after_cut = 1460. If T_after_cut is longer, training is slower; if T_after_cut is shorter, accuracy decreases slightly. If more samples are available, I think using a larger batch_size will be faster.
Use CuDNNLSTM instead of LSTM; this sped up training by about 4x!
I found that more units resulted in higher accuracy and faster convergence (shorter training time). I also found that the GRU is as accurate as the LSTM, though it converged faster for the same number of units.
Monitor validation loss during training and use early stopping
The LSTM model is built and trained as follows:
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
from keras.callbacks import Callback, EarlyStopping

def define_reset_states_batch(nb_cuts):
    class ResetStatesCallback(Callback):
        def __init__(self):
            self.counter = 0

        def on_batch_begin(self, batch, logs={}):
            # reset states when nb_cuts batches are completed
            if self.counter % nb_cuts == 0:
                self.model.reset_states()
            self.counter += 1

        def on_epoch_end(self, epoch, logs={}):
            # reset states after each epoch
            self.model.reset_states()
    return ResetStatesCallback

model = Sequential()
model.add(layers.CuDNNLSTM(256, batch_input_shape=(batch_size, T_after_cut, features),
                           return_sequences=True,
                           stateful=True))
model.add(layers.TimeDistributed(layers.Dense(targets, activation='linear')))

optimizer = RMSprop(lr=0.002)
model.compile(loss='mean_squared_error', optimizer=optimizer)

earlyStopping = EarlyStopping(monitor='val_loss', min_delta=0.005, patience=15, verbose=1, mode='auto')
ResetStatesCallback = define_reset_states_batch(nb_cuts)

model.fit(X_dev, y_dev, epochs=n_epochs, batch_size=n_batch, verbose=1, shuffle=False,
          validation_data=(X_eval, y_eval),
          callbacks=[ResetStatesCallback(), earlyStopping])
This gave me very satisfying accuracy (R2 over 0.98):
This figure shows the temperature (left) and relative humidity (right) in the wall over 2 years (data not used in training), prediction in red and true output in black. The residuals show that the error is very small and that the LSTM learns to capture the long-term dependencies to predict the relative humidity.

Using RNN to recover sine wave from noisy signal

I am involved with an application that needs to estimate the state of a certain system in real time by measuring a set of (non-linearly) dependent parameters. Up until now the application was using an extended Kalman filter, but it was found to be underperforming in certain circumstances, which is likely caused by the fact that the differences between the real system and its model used in the filter are too significant to be modeled as white noise. We cannot use a more precise model for a number of unrelated reasons.
We decided to try recurrent neural networks for the task. Since my experience with neural networks is quite limited, before tackling the real task itself, I decided to practice with a hand crafted problem first. That problem, however, I could not solve, so I'm asking for help here.
Here's what I did: I generated some sine waveforms of varying phase, frequency, amplitude, and offset. Then I distorted the waveforms with some white noise and (unsuccessfully) attempted to train an LSTM network to recover my waveforms from the noisy signal. I expected that the network would eventually learn to fit a sine waveform to the noisy data set.
Here's the source (slightly abridged, but it should work):
#!/usr/bin/env python3
import time
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.layers.wrappers import TimeDistributed
from keras.objectives import mean_absolute_error, cosine_proximity

POINTS_PER_WF = int(1e4)
X_SPACE = np.linspace(0, 100, POINTS_PER_WF)

def make_waveform_with_noise():
    def add_noise(vec):
        stdev = float(np.random.uniform(0.01, 0.2))
        return vec + np.random.normal(0, stdev, size=len(vec))
    f = np.random.choice((np.sin, np.cos))
    wf = f(X_SPACE * np.random.normal(scale=5)) * \
        np.random.normal(scale=5) + np.random.normal(scale=50)
    return wf, add_noise(wf)

RESCALING = 1e-3
BATCH_SHAPE = (1, POINTS_PER_WF, 1)

model = Sequential([
    TimeDistributed(Dense(5, activation='tanh'), batch_input_shape=BATCH_SHAPE),
    LSTM(20, activation='tanh', inner_activation='sigmoid', return_sequences=True),
    LSTM(20, activation='tanh', inner_activation='sigmoid', return_sequences=True),
    TimeDistributed(Dense(1, activation='tanh'))
])

def compute_loss(y_true, y_pred):
    skip_first = POINTS_PER_WF // 2
    y_true = y_true[:, skip_first:, :] * RESCALING
    y_pred = y_pred[:, skip_first:, :] * RESCALING
    me = mean_absolute_error(y_true, y_pred)
    cp = cosine_proximity(y_true, y_pred)
    return me + cp

model.summary()
model.compile(optimizer='adam', loss=compute_loss,
              metrics=['mae', 'cosine_proximity'])

NUM_ITERATIONS = 30000
for iteration in range(NUM_ITERATIONS):
    wf, noisy_wf = make_waveform_with_noise()
    y = wf.reshape(BATCH_SHAPE) * RESCALING
    x = noisy_wf.reshape(BATCH_SHAPE) * RESCALING
    info = model.train_on_batch(x, y)

model.save_weights('final.hdf5')
The first dense layer is actually useless; I added it only to make sure I can successfully combine LSTM and TimeDistributed dense layers, since my real application will likely need that setup.
The error function was modified a number of times. Initially I was using plain mean squared error, but training was extremely slow and mostly converged to simply copying the noisy input signal to the output. The cosine proximity metric I added later essentially measures the similarity between the shapes of the functions; it seemed to speed up learning quite a bit. Also note that I'm applying the loss function only to the last half of each sequence; the motivation is that I expected the network would need to see a few periods of the signal before it could correctly identify the parameters of the waveform. However, I found that this modification has no visible effect on the performance of the network.
The latest version of the script uses the Adam optimizer. I also experimented with RMSProp with varying learning-rate and decay settings, but found no noticeable difference in the network's behavior.
I am using the Theano 0.9 (dev) backend configured to use 64-bit floating point in order to prevent possible numerical-stability issues. The epsilon value is set accordingly to 1e-14.
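For reference, that backend configuration is roughly the following (a sketch; the environment variables must be set before Keras is imported):

# Sketch of the backend configuration described above (Theano backend, Keras 1.x era).
import os
os.environ['THEANO_FLAGS'] = 'floatX=float64'   # 64-bit floats for numerical stability
os.environ['KERAS_BACKEND'] = 'theano'

from keras import backend as K
K.set_epsilon(1e-14)                            # fuzz factor used by the backend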
This is what the output looks like after 15k..30k training steps (performance stops improving starting from about 15k steps) (the first plot is zoomed in for the sake of clarity):
Plot legend:
blue (0) - noisy signal, input of the RNN
green (1) - recovered signal, output of the RNN
red (2) - ground truth
My question is: what am I doing wrong?

Which classifier or ML SDK should I use in this case?

The training data (including both the training and validation sets) has about 80 million samples, and each sample has 200 dense floating-point features. There are 6 labeled classes and they are imbalanced.
Among the commonly used ML libraries (e.g., libsvm, scikit-learn, Spark MLlib, random forest, XGBoost, or others), which should I use? Regarding hardware, the machine has 24 CPU cores and 250 GB of memory.
I would recommend scikit-learn's SGDClassifier: it learns online, so you can load your training data in chunks (mini-batches) and train the classifier gradually without ever holding all the data in memory.
It is highly parallel and easy to use.
You can set the warm_start argument to True and call fit multiple times with each chunk of X, y loaded into memory, or, the better option, use the partial_fit method.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss='hinge', alpha=1e-4, penalty='l2', l1_ratio=0.9,
                    learning_rate='optimal', n_iter=10, shuffle=False,
                    n_jobs=10, fit_intercept=True)

# len(classes) = n_classes; partial_fit needs the full set of labels up front
all_classes = np.array(set_of_all_classes)

while True:
    # load a mini-batch from disk into memory
    X, y = load_next_chunk()
    clf.partial_fit(X, y, classes=all_classes)

X_test, y_test = load_test_data()
y_pred = clf.predict(X_test)

Label Propagation in sklearn is classifying every vector as 1

I have 2000 labelled data points (7 different labels) and about 100K unlabeled data points, and I am trying to use sklearn.semi_supervised.LabelPropagation. The data has 1024 dimensions. My problem is that the classifier is labeling everything as 1. My code looks like this:
X_unlabeled = X_unlabeled[:10000, :]
X_both = np.vstack((X_train, X_unlabeled))
y_both = np.append(y_train, -np.ones((X_unlabeled.shape[0],)))
clf = LabelPropagation(max_iter=100).fit(X_both, y_both)
y_pred = clf.predict(X_test)
y_pred is all ones. Also, X_train is 2000x1024 and X_unlabeled is a subset of the unlabeled data which is 10000x1024.
I also get this error upon calling fit on the classifier:
/usr/local/lib/python2.7/site-packages/sklearn/semi_supervised/label_propagation.py:255: RuntimeWarning: invalid value encountered in divide
self.label_distributions_ /= normalizer
Have you tried different values for the gamma parameter? As the graph is constructed by computing an RBF kernel, the computation includes an exponential, and Python's exponential function returns 0 if its argument is a very large negative number (see http://computer-programming-forum.com/56-python/ef71e144330ffbc2.htm). If the graph is filled with zeros, label_distributions_ is filled with NaN (because of the normalization) and the warning appears. (Be careful: the gamma value in the scikit-learn implementation is multiplied by the squared Euclidean distance, so it is not the same thing as in the Zhu paper.)
LabelPropagation will finally be fixed in version 0.19.
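As a hedged sketch of what probing gamma could look like, using X_both, y_both, and X_train from the question (checking the predicted label distribution on the labelled points is only a rough sanity check):

# Sketch: try several gamma values for the RBF graph and inspect the predictions.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

for gamma in [1e-4, 1e-3, 1e-2, 1e-1, 1, 20]:
    clf = LabelPropagation(kernel='rbf', gamma=gamma, max_iter=100).fit(X_both, y_both)
    # an all-ones output usually signals that gamma is far too large for 1024-dimensional data
    print(gamma, np.bincount(clf.predict(X_train).astype(int)))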
