Weka IBk parameter details (distanceWeighting, meanSquared) - machine-learning

I am using the kNN algorithm to classify. Weka provides various parameter settings for kNN (the IBk classifier). I am interested in distanceWeighting and meanSquared.
For distanceWeighting we have three values (no distance weighting, weight by 1/distance, and weight by 1-distance). What are these values and what is their impact?
Can someone please explain them to me? :)

If one uses "no distance weighting", then the predicted value for your data points is the average of all k neighbors. For example
# if values_of_3_neigbors = 4, 5, 6
# then predicted_value = (4+5+6)/3 = 5
For 1/distance weighting, the weight of each neigbor is inversely proportional to the distance to it. The idea is: the closer the neighbor, the more it influences the predicted value. For example
# distance_to_3_neigbors = 1,3,5
# weights_of_neighbors = 1/1, 1/3, 1/5 # sum = 1 + 0.33 + 0.2 = 1.53
# normalized_weights_of_neighbors = 1/1.53, 0.33/1.53, 0.2/1.53 = 0.654, 0.216, 0.131
# then predicted_values = 4*0.654 + 5*0.216 + 6*0.131 = 4.48
For 1-distance it is similar. This is only applicable when all your distances are in the [0,1] range.
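As a small illustration (my own sketch in plain Python, not Weka's implementation), the three weighting options can be reproduced like this:

import numpy as np

def knn_predict(neighbor_values, neighbor_distances, weighting="none"):
    """Combine the k neighbors' values under the three IBk-style weighting schemes."""
    values = np.asarray(neighbor_values, dtype=float)
    d = np.asarray(neighbor_distances, dtype=float)
    if weighting == "none":
        weights = np.ones_like(d)              # plain average of the k neighbors
    elif weighting == "1/distance":
        weights = 1.0 / d                      # closer neighbors dominate
    elif weighting == "1-distance":
        weights = 1.0 - d                      # assumes distances are scaled to [0, 1]
    else:
        raise ValueError("unknown weighting")
    weights /= weights.sum()                   # normalize so the weights sum to 1
    return float(np.dot(weights, values))

print(knn_predict([4, 5, 6], [1, 3, 5], "none"))        # 5.0
print(knn_predict([4, 5, 6], [1, 3, 5], "1/distance"))  # about 4.48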
Hope this helps

Related

Perceptron: weight for each data sample, or one common weight?

With perceptron learning, I am really confused about initializing and updating the weights. My sample data contains 2 inputs x0 and x1, and I have 80 rows of these 2 inputs, hence an 80x2 matrix.
Do I need to initialize the weights as an 80x2 matrix, or just 2 values w0 and w1? Is the final goal of perceptron learning to find 2 weights w0 and w1 which should fit all 80 input sample rows?
I have the following code, and my errors never get to 0, despite going up to 10,000 iterations.
# x = input matrix of 80x2
# y = output matrix of 80x1
# n = number of iterations
w = [0.1, 0.1]
learningRate = 0.1
for i in range(n):
    expectedT = y.transpose()
    xT = x.transpose()
    prediction = np.dot(w, xT)
    for i in range(len(x)):
        if prediction[i] >= 0:
            ypred[i] = 1
        else:
            ypred[i] = 0
    error = expectedT - ypred
    # updating the weights
    w = np.add(w, learningRate * (np.dot(error, x)))
    globalError = globalError + np.square(error)
For each feature you will have one weight. Thus you have two features and two weights. It also helps to introduce a bias, which adds another weight. For more information about the bias check Role of Bias in Neural Networks. The weights should indeed learn how to fit the sample data best. Depending on the data, this can mean that you will never reach an error of 0. For example, a single-layer perceptron can not learn an XOR gate when using a monotonic activation function (solving XOR with single layer perceptron).
For your example I would recommend two things: introducing a bias, and stopping the training when the error is below a certain threshold or is 0, for example.
I completed your example to learn a logical OR gate (note that the labels y = [0,1,1,1] below are the OR truth table):
import numpy as np

# OR input and output
x = np.array([[0,0],[0,1],[1,0],[1,1]])
y = np.array([0,1,1,1])
n = 1000
w = [0.1, 0.1, 0.1]
learningRate = 0.01
globalError = 0

def predict(X):
    prediction = np.dot(w[0:2], X) + w[2]
    ypred = np.zeros(len(y))
    for i in range(len(y)):
        if prediction[i] >= 0:
            ypred[i] = 1
        else:
            ypred[i] = 0
    return ypred

for i in range(n):
    expectedT = y.transpose()
    xT = x.transpose()
    ypred = predict(xT)
    error = expectedT - ypred
    if sum(error) == 0:
        break
    # updating the weights
    w[0:2] = np.add(w[0:2], learningRate * (np.dot(error, x)))
    w[2] += learningRate * sum(error)
    globalError = globalError + np.square(error)
After training, the error is 0:
print(error)
# [0. 0. 0. 0.]
And the weights are as follows
print(w)
#[0.1, 0.1, -0.00999999999999999]
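As a quick sanity check (my own addition, using the printed weights rounded to [0.1, 0.1, -0.01]), plugging each input into predict reproduces the OR truth table:
# [0,0]: 0.1*0 + 0.1*0 - 0.01 = -0.01 < 0  -> 0
# [0,1]: 0.1*0 + 0.1*1 - 0.01 =  0.09 >= 0 -> 1
# [1,0]: 0.1*1 + 0.1*0 - 0.01 =  0.09 >= 0 -> 1
# [1,1]: 0.1*1 + 0.1*1 - 0.01 =  0.19 >= 0 -> 1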
The perceptron can now be used as an OR gate:
predict(x.transpose())
#array([0., 1., 1., 1.])
Hope that helps

H2O predict function probability scoring on test data

I have created an H2O random forest model for fraud prediction. While scoring the test data using the predict function, I got the dataframe below as output.
For the 2nd record it predicted 1, but the probability p1 is far smaller than p0. Which probability scores (p0/p1) and which classification should we use for my fraud prediction model?
If these are not correct probabilities, will the calibrated probabilities calculated using the parameter (calibrate_model = True) as shown below give correct probabilities?
nfolds = 5
rf1 = h2o.estimators.H2ORandomForestEstimator(
    model_id = "rf_df1",
    ntrees = 200,
    max_depth = 4,
    sample_rate = .30,
    # stopping_metric = "misclassification",
    # stopping_rounds = 2,
    mtries = 6,
    min_rows = 12,
    nfolds = 3,
    distribution = "multinomial",
    fold_assignment = "Modulo",
    keep_cross_validation_predictions = True,
    calibrate_model = True,
    calibration_frame = calib,
    weights_column = "weight",
    balance_classes = True
    # stopping_tolerance = .005
)
  predict  p0         p1
1 0        0.9986012  0.000896514
2 1        0.9985695  0.000448676
3 0        0.9981387  0.000477767
The prediction labels are based on a threshold, and the threshold used is generally the one that maximizes the F1 score. See the following post to learn more about how to interpret the probability results.
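If you want to control the cutoff yourself rather than rely on the default max-F1 threshold, a minimal sketch (assuming the standard H2O Python API and a hypothetical test frame named test; the 0.5 cutoff is only an example) could look like this:

# Threshold H2O picked to maximize F1 on this data (used for the default predict labels)
perf = rf1.model_performance(test)
print(perf.find_threshold_by_max_metric("f1"))

# Apply your own cutoff on p1 instead, e.g. a business-defined fraud threshold
preds = rf1.predict(test).as_data_frame()           # columns: predict, p0, p1
fraud_threshold = 0.5                                # hypothetical value
preds["custom_label"] = (preds["p1"] >= fraud_threshold).astype(int)
print(preds.head())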
Details on how the calibration frame and model work can be found here and here.

TensorFlow KNN: How can we assign the K parameter defining the number of neighbors in KNN?

I have started working on a machine learning project using the K-Nearest-Neighbors method with the Python TensorFlow library. I have no experience working with TensorFlow tools, so I found some code on GitHub and modified it for my data.
My dataset is like this:
2,2,2,2,0,0,3
2,2,2,2,0,1,0
2,2,2,4,2,2,1
...
2,2,2,4,2,0,0
And this is the code which actually works fine:
import tensorflow as tf
import numpy as np

# Whole dataset => 1428 samples
dataset = 'car-eval-data-1.csv'
# samples for train, remaining for test
samples = 1300
reader = np.loadtxt(open(dataset, "rb"), delimiter=",", skiprows=1, dtype=np.int32)
train_x, train_y = reader[:samples, :5], reader[:samples, 6]
test_x, test_y = reader[samples:, :5], reader[samples:, 6]

# Placeholders you can feed values into later; a kind of variable
# v = ("variable type", [None, 4]) -- you can have multidimensional values here
training_values = tf.placeholder("float", [None, len(train_x[0])])
test_values = tf.placeholder("float", [len(train_x[0])])

# MANHATTAN distance
distance = tf.abs(tf.reduce_sum(tf.square(tf.subtract(training_values, test_values)), reduction_indices=1))
prediction = tf.arg_min(distance, 0)

init = tf.global_variables_initializer()
accuracy = 0.0

with tf.Session() as sess:
    sess.run(init)
    # Loop through the test set and compare against the training set
    for i in range(len(test_x)):
        # TensorFlow op that returns the index of the training sample nearest to the test sample
        index_in_trainingset = sess.run(prediction, feed_dict={training_values: train_x, test_values: test_x[i]})
        print("Test %d, and the prediction is %s, the real value is %s" % (i, train_y[index_in_trainingset], test_y[i]))
        if train_y[index_in_trainingset] == test_y[i]:
            # if the prediction is right, accuracy increases
            accuracy += 1. / len(test_x)
    print('Accuracy -> ', accuracy * 100, ' %')
The only thing I do not understand is that, if this is the KNN method, there has to be some K parameter which defines the number of neighbors used to predict the label for each test sample.
How can we assign the K parameter to tune the number of nearest neighbors for the code?
Is there any way to modify this code to make use of the K parameter?
You're right that the example above does not have a provision to select the K nearest neighbours. In the code below, I have added the ability to set such a parameter (knn_size), along with other corrections:
import tensorflow as tf
import numpy as np

# Whole dataset => 1428 samples
dataset = 'PATH_TO_DATASET_CSV'
knn_size = 1
# samples for train, remaining for test
samples = 1300
reader = np.loadtxt(open(dataset, "rb"), delimiter=",", skiprows=1, dtype=np.int32)
train_x, train_y = reader[:samples, :6], reader[:samples, 6]
test_x, test_y = reader[samples:, :6], reader[samples:, 6]

# Placeholders you can feed values into later; a kind of variable
# v = ("variable type", [None, 4]) -- you can have multidimensional values here
training_values = tf.placeholder("float", [None, len(train_x[0])])
test_values = tf.placeholder("float", [len(train_x[0])])

# MANHATTAN distance
distance = tf.abs(tf.reduce_sum(tf.square(tf.subtract(training_values, test_values)), reduction_indices=1))

# Here, we multiply the distance by -1 to reverse the magnitude of distances, i.e. the largest distance becomes the smallest distance
# tf.nn.top_k returns the top k values and their indices; here k is controlled by the parameter knn_size
k_nearest_neighbour_values, k_nearest_neighbour_indices = tf.nn.top_k(tf.scalar_mul(-1, distance), k=knn_size)

# Based on the indices we obtained in the previous step, we locate the class labels of the k closest matches in the training data
best_training_labels = tf.gather(train_y, k_nearest_neighbour_indices)

if knn_size == 1:
    prediction = tf.squeeze(best_training_labels)
else:
    # Now we make our prediction based on the class label that appears most frequently
    # tf.unique_with_counts() gives us all unique values that appear in a 1-D tensor, along with their indices and counts
    values, indices, counts = tf.unique_with_counts(best_training_labels)
    # This gives us the index of the class label that is repeated the most
    max_count_index = tf.argmax(counts, 0)
    # Retrieve the required class label
    prediction = tf.gather(values, max_count_index)

init = tf.global_variables_initializer()
accuracy = 0.0

with tf.Session() as sess:
    sess.run(init)
    # Loop through the test set and compare against the training set
    for i in range(len(test_x)):
        # Run the prediction op for this test sample against the whole training set
        prediction_value = sess.run([prediction], feed_dict={training_values: train_x, test_values: test_x[i]})
        print("Test %d, and the prediction is %s, the real value is %s" % (i, prediction_value[0], test_y[i]))
        if prediction_value[0] == test_y[i]:
            # if the prediction is right, accuracy increases
            accuracy += 1. / len(test_x)
    print('Accuracy -> ', accuracy * 100, ' %')

TensorFlow resize nearest neighbor approach doesn't optimize weights

I'm a beginner in TensorFlow and I'm working on a model which colorizes greyscale images. In the last part of the model the paper says:
Once the features are fused, they are processed by a set of
convolutions and upsampling layers, the latter which consist of simply
upsampling the input by using the nearest neighbour technique so that
the output is twice as wide and twice as tall.
When I tried to implement it in TensorFlow I used tf.image.resize_nearest_neighbor for upsampling, but when I used it I found that the cost didn't change across the epochs (except for the 2nd epoch), while without it the cost is optimized and changes.
This is the relevant part of the code:
def Model(Input_images):
    # some code till the following last part
    Color_weights = {'W_conv1': tf.Variable(tf.random_normal([3,3,256,128])),
                     'W_conv2': tf.Variable(tf.random_normal([3,3,128,64])),
                     'W_conv3': tf.Variable(tf.random_normal([3,3,64,64])),
                     'W_conv4': tf.Variable(tf.random_normal([3,3,64,32])),
                     'W_conv5': tf.Variable(tf.random_normal([3,3,32,2]))}
    Color_biases = {'b_conv1': tf.Variable(tf.random_normal([128])),
                    'b_conv2': tf.Variable(tf.random_normal([64])),
                    'b_conv3': tf.Variable(tf.random_normal([64])),
                    'b_conv4': tf.Variable(tf.random_normal([32])),
                    'b_conv5': tf.Variable(tf.random_normal([2]))}
    Color_layer1 = tf.nn.relu(Conv2d(Fuse, Color_weights['W_conv1'], 1) + Color_biases['b_conv1'])
    Color_layer1_up = tf.image.resize_nearest_neighbor(Color_layer1, [56,56])
    Color_layer2 = tf.nn.relu(Conv2d(Color_layer1_up, Color_weights['W_conv2'], 1) + Color_biases['b_conv2'])
    Color_layer3 = tf.nn.relu(Conv2d(Color_layer2, Color_weights['W_conv3'], 1) + Color_biases['b_conv3'])
    Color_layer3_up = tf.image.resize_nearest_neighbor(Color_layer3, [112,112])
    Color_layer4 = tf.nn.relu(Conv2d(Color_layer3, Color_weights['W_conv4'], 1) + Color_biases['b_conv4'])
    return Color_layer4
The Training Code
Prediction = Model(Input_images)
Colorization_MSE = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(Prediction, tf.Variable(tf.random_normal([2,112,112,32]))))
Optmizer = tf.train.AdadeltaOptimizer(learning_rate=0.05).minimize(Colorization_MSE)
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
for epoch in range(EpochsNum):
    epoch_loss = 0
    Batch_indx = 1
    for i in range(int(ExamplesNum / Batch_size)):  # over batches
        print("Batch Num ", i + 1)
        ReadNextBatch()
        a, c = sess.run([Optmizer, Colorization_MSE], feed_dict={Input_images: Batch_GreyImages})
        epoch_loss += c
    print("epoch: ", epoch + 1, ", Loss: ", epoch_loss)
So what is wrong with my logic? Or, if the problem is in tf.image.resize_nearest_neighbor, what should I do, or what is its replacement?
OK, I solved it: I noticed that tf.random_normal was the problem, and when I replaced it with tf.truncated_normal it works well.
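For reference, a minimal sketch of that change (my own illustration of the fix, not the poster's exact code; the stddev value is an assumption) would be to initialise the weights and biases like this:

# tf.truncated_normal re-draws values more than 2 standard deviations from the mean,
# which avoids the occasional very large initial weights produced by tf.random_normal
W_conv1 = tf.Variable(tf.truncated_normal([3, 3, 256, 128], stddev=0.1))
b_conv1 = tf.Variable(tf.truncated_normal([128], stddev=0.1))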

Variational Bayes method - unreasonable result from posterior distribution

I'm trying to implement the basic example from https://en.wikipedia.org/wiki/Variational_Bayesian_methods#A_basic_example in order to find the posterior distribution over the mean and variance of the input data (which I generate at the beginning).
It is my understanding that the approximated posterior should be given by the product q(mu)*q(tau), so I thought I could get it by simply multiplying the two distributions at each point in the grid and then plotting the result. Although I can't see any error in my distributions, the gamma distribution produces extremely small values, while the Gaussian distribution has only one non-zero element. My guess is that something is wrong at the end, where I multiply the two distributions for each point in the grid produced by meshgrid, but I wrote both distributions straight from Wikipedia. Why are my posterior probabilities so small/nan and what can I do to fix it?
Here is my code:
# First exact solution
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
from scipy.special import factorial

# Produce N data points from a known Gaussian distribution
real_mu = 2
real_tau = 1
x = np.arange(-3, 7, 0.1)
N = len(x)
nrv = stats.norm.pdf(x, real_mu, real_tau)

## Approximate posterior distribution over mean and covariance ##
x = nrv  # N data points from "unknown distribution"
N = len(x)

# The algorithm
# hyperparameters - can be set arbitrarily, but smaller positive values indicate broader prior distributions over mu and tau
lambda_0 = 0.05
mu_0 = 0.05
a_0 = 0.05
b_0 = 0.05

x_sum = np.sum(x)
x_ave = np.sum(x) / N
x_squared = np.dot(x, x)
mu_n = (lambda_0*mu_0 + N*x_ave) / (lambda_0 + N)
a_N = a_0 + (N + 1)/2
lambda_N = 1  # Initialize lambda to some arbitrary value
difference = 9999999

while difference > 0.0000001:
    b_N = b_0 + 0.5*((lambda_0 + N)*((1/lambda_N) + mu_n**2) - 2*(lambda_0*mu_0 + x_sum)*mu_n + (x_squared) + lambda_0*mu_0*mu_0)
    new_lambda_N = (lambda_0 + N)*a_N/b_N
    difference_1 = new_lambda_N - lambda_N
    lambda_N = new_lambda_N
    difference = np.absolute(difference_1)

# Calculate the approximated posterior from these parameters: q(mu, tau) = q(mu)q(tau)
t = np.arange(-2, 2, 0.01)
# qmu = stats.norm.pdf(t, mu_n, 1/lambda_N)
# qtau = gamma.pdf(t, a_N, loc=0, scale=b_N)  # scale=1/b_N

def gaussian(x):
    return (1/(np.sqrt(2*np.pi*sigma*sigma)))*np.exp(-(x-mu_n)*(x-mu_n)/(2*sigma*sigma))

def gamma(x):
    return ((b_N**a_N)*(x**(a_N-1))*np.exp(-x*b_N))/(factorial(a_N-1))

sigma = 1/lambda_N
xx, yy = np.meshgrid(t, t)

# The first factor in zz is the Gaussian distribution over mu and the second the gamma distribution over tau
# (same as the two functions defined above)
zz = ((1/(np.sqrt(2*np.pi*sigma*sigma)))*np.exp(-(xx-mu_n)*(xx-mu_n)/(2*sigma*sigma)))*((b_N**a_N)*(yy**(a_N-1))*np.exp(-yy*b_N))/(factorial(a_N-1))

plt.xlabel("mu")
plt.ylabel("tau")
plt.contourf(t, t, zz)
plt.show()
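One way to sanity-check the final step (my own sketch, not part of the original post; it assumes the converged mu_n, lambda_N, a_N and b_N from the loop above, and that a Gamma with rate b_N corresponds to scale=1/b_N in SciPy) is to evaluate the two factors with scipy.stats, which avoids the factorial() overflow and keeps tau on a positive grid:

t_mu = np.arange(-2, 6, 0.01)            # grid over mu
t_tau = np.arange(0.01, 4, 0.01)         # grid over tau (the Gamma factor is only defined for tau > 0)
mm, tt = np.meshgrid(t_mu, t_tau)

q_mu = stats.norm.pdf(mm, loc=mu_n, scale=np.sqrt(1.0 / lambda_N))   # q(mu): variance 1/lambda_N
q_tau = stats.gamma.pdf(tt, a=a_N, scale=1.0 / b_N)                  # q(tau): shape a_N, rate b_N
zz = q_mu * q_tau                                                    # q(mu, tau) = q(mu) q(tau)

plt.contourf(t_mu, t_tau, zz)
plt.xlabel("mu")
plt.ylabel("tau")
plt.show()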
