My model's accuracy increases rapidly to 94.3% but then stays there for the rest of the epochs.
Here is my model and code:
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.optimizers import SGD

model = Sequential()
model.add(Conv2D(5, (3, 3), strides=(2, 2), kernel_initializer='normal', activation='sigmoid', input_shape=(dim, dim, 3)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(5, (3, 3), strides=(2, 2), activation='sigmoid'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Create the feature vector
model.add(Flatten())
model.add(Dense(12288, activation='sigmoid'))
model.add(Dropout(0.2))
model.add(Dense(1536, activation='sigmoid'))
model.add(Dropout(0.3))
model.add(Dense(384, activation='sigmoid'))
model.add(Dropout(0.4))
model.add(Dense(1, activation='sigmoid'))
sgd = SGD(lr=0.001, momentum=0.9)
# Note: passing the string "sgd" uses Keras's default SGD settings; the sgd object defined above is not used here
model.compile(loss="binary_crossentropy", optimizer="sgd", metrics=["accuracy"])
model.fit(data, labels, epochs=20, batch_size=100, callbacks=callbacks_list, verbose=1)
CNN_output = model.predict(data)
The output of the training is shown here:
CNN Output
Checking the output of the CNN (from the prediction) I get the following (note this is just a sample):
ACTUAL: train_0:
[ 1.]
PREDICTION: train_0:
[ 0.]
ACTUAL: train_1:
[ 0.]
PREDICTION: train_1:
[ 0.]
ACTUAL: train_2:
[ 0.]
PREDICTION: train_2:
[ 0.]
...
(The listing continues in the same pattern through train_199: every PREDICTION is [ 0.], while ACTUAL is [ 1.] only for train_0, train_5, train_7, train_43, train_60, train_104, train_132, train_133, train_136, train_152, train_180 and train_196, and [ 0.] for every other sample.)
Your dataset is just very imbalanced/skewed: you have 94% of label 0 and 6% of label 1. The neural net simply learns that it can score a high accuracy by predicting 0 for everything.
What you can do to avoid that is either change your dataset to have 50% of label 1 and 50% of label 0, or use the "class_weight" parameter of the fit function:
class_weight: dictionary mapping classes to a weight value, used for scaling the loss function (during training only). source
In your case I would use
fit(..., class_weight = {0:1, 1:15.5})
This is because you have 15.5 times more samples in class 0 than in class 1. The numbers just say that when you misclassify a 0 your loss is multiplied by 1, and when you misclassify a 1 the loss is multiplied by 15.5... more information here.
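As a rough sketch of where the 15.5 comes from (assuming labels is the same 0/1 array passed to fit above), you can derive the weight from the class counts instead of hard-coding it:
import numpy as np
labels = np.ravel(labels)                          # the 0/1 training labels from the question
n_pos = int(np.sum(labels == 1))                   # roughly 6% of the samples
n_neg = int(np.sum(labels == 0))                   # roughly 94% of the samples
class_weight = {0: 1.0, 1: n_neg / float(n_pos)}   # roughly {0: 1, 1: 15.5} here
model.fit(data, labels,
          epochs=20,
          batch_size=100,
          class_weight=class_weight,
          callbacks=callbacks_list,
          verbose=1)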
Also, I wouldn't use accuracy to evaluate the result in your case; look at the F1 score instead, which is far more appropriate for this kind of dataset. f1score on wikipedia
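For example (a minimal sketch, assuming labels and CNN_output are the arrays used above), the F1 score can be computed with scikit-learn:
import numpy as np
from sklearn.metrics import f1_score
# Threshold the sigmoid outputs at 0.5 to get hard 0/1 predictions
predicted = (CNN_output > 0.5).astype(int)
print(f1_score(np.ravel(labels), np.ravel(predicted)))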
I hope this helps.
Related
I built an ML model for handwritten digit recognition, and I'm trying to use accuracy_score to find out what percentage of the predictions are correct and whether the model is accurate enough.
This is the model:
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

model = Sequential(
    [
        tf.keras.Input(shape=(64,)),
        Dense(25, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01), name="L1"),
        Dense(15, activation='relu', kernel_regularizer=tf.keras.regularizers.l2(0.01), name="L2"),
        Dense(10, activation='linear', name="L3"),
    ], name="my_model"
)

# Compiling the model
model.compile(
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=tf.keras.optimizers.Adam(0.01),
)

# Fitting the model
model.fit(
    x_train, y_train,
    epochs=1000
)
Here is some of the data:
(1797, 64)
X_train.shape (1078, 64) y_train.shape (1078,)
X_cv.shape (359, 64) y_cv.shape (359,)
X_test.shape (360, 64) y_test.shape (360,)
[[ 0. 0. 5. ... 0. 0. 0.]
[ 0. 0. 0. ... 10. 0. 0.]
[ 0. 0. 0. ... 16. 9. 0.]
...
[ 0. 0. 0. ... 7. 0. 0.]
[ 0. 2. 15. ... 5. 0. 0.]
[ 0. 0. 1. ... 3. 0. 0.]]
Every time I run the code and use accuracy_score I get the error message:
ValueError: Classification metrics can't handle a mix of multiclass and multiclass-multioutput targets
Does anyone know how I can fix this?
Thanks in advance.
I tried a way to fix it, but I'm not sure if it's correct.
I used this code:
predictions = model.predict(x_test)
print(accuracy_score(y_test, np.argmax(predictions, axis=1)))
I get a number like '0.90', but I'm not sure if it's correct.
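For what it's worth, here is a minimal sketch of why the argmax step matters (assuming x_test and y_test have the shapes listed above): the model emits one logit per class, so its raw predictions are 2-D, while accuracy_score expects a 1-D vector of class labels:
import numpy as np
from sklearn.metrics import accuracy_score
predictions = model.predict(x_test)                 # shape (360, 10): one logit per class
predicted_labels = np.argmax(predictions, axis=1)   # shape (360,): the winning class per sample
print(accuracy_score(y_test, predicted_labels))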
I'm working on LSTMs.
The output is categorical.
It is of the format [[t11,t12,t13],[t21,t22,t23]].
I was able to do this for a 1D array, and I'm finding it difficult to do it for a 2D array.
from keras.utils import to_categorical
print(to_categorical([[9,10,11],[10,11,12]]))
output
[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]
There were two different inputs, each having 3 time steps, but in the output they are all combined.
I need it to be,
[[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]],
[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]]
If shapes are weird, try to make it 1D, use the function, and reshape it back:
originalShape = myData.shape                 # e.g. (2, 3)
totalFeatures = myData.max() + 1             # number of classes
categorical = myData.reshape((-1,))          # flatten to 1D
categorical = to_categorical(categorical)    # one-hot encode
categorical = categorical.reshape(originalShape + (totalFeatures,))  # restore the shape plus a class axis
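Applied to the arrays from the question, that recipe might look like this (a quick sketch, assuming the class indices run from 0 to 12, so to_categorical produces 13 columns):
import numpy as np
from keras.utils import to_categorical
myData = np.array([[9, 10, 11], [10, 11, 12]])      # 2 inputs x 3 time steps
originalShape = myData.shape                        # (2, 3)
totalFeatures = myData.max() + 1                    # 13 classes: 0..12
categorical = to_categorical(myData.reshape((-1,)))
categorical = categorical.reshape(originalShape + (totalFeatures,))
print(categorical.shape)                            # (2, 3, 13)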
I realized I can achieve what I want by reshaping:
print(a.reshape(2,3,13))
[[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]
[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]]]
You get the error when reshaping because the highest class index is 12, and therefore there are 13 classes (0, 1, ..., 12). To avoid such errors in the future, you can let NumPy infer that dimension by calling one_hot.reshape(sparse.shape + (-1,)), where one_hot is the one-hot encoded array produced by to_categorical() and sparse is the original one.
I want to find the 5 nearest neighbors for each of the blue points (T-SNE1) among the red points (T-SNE2). I wrote this code just to work out the right way to do that, but I'm not sure whether it's correct:
import numpy as np
from sklearn.neighbors import KDTree

X = np.random.random((10, 2))  # 10 points in 2 dimensions
Y = np.random.random((10, 2))  # 10 points in 2 dimensions
NNlist = []
treex = KDTree(X, leaf_size=2)
for i in range(len(Y)):
    dist, ind = treex.query([Y[i]], k=5)
    NNlist.append(ind[0][0])
    print(ind)   # indices of the 5 closest neighbors
    print(dist)  # distances to the 5 closest neighbors
    print("the nearest index is:", ind[0][0], "with distance:", dist[0][0], "for Y", i)
print(NNlist)
output
[[9 5 4 6 0]]
[[ 0.21261486 0.32859024 0.41598597 0.42960146 0.43793039]]
the nearest index is: 9 with distance: 0.212614862956 for Y 0
[[0 3 2 6 1]]
[[ 0.10907128 0.11378059 0.13984741 0.18000197 0.27475481]]
the nearest index is: 0 with distance: 0.109071275144 for Y 1
[[8 2 3 0 1]]
[[ 0.21621245 0.30543878 0.40668179 0.4370689 0.49372232]]
the nearest index is: 8 with distance: 0.216212445449 for Y 2
[[8 3 2 6 0]]
[[ 0.16648482 0.2989508 0.40967709 0.42511931 0.46589575]]
the nearest index is: 8 with distance: 0.166484820786 for Y 3
[[1 2 5 0 4]]
[[ 0.15331281 0.25121761 0.29305736 0.30173474 0.44291615]]
the nearest index is: 1 with distance: 0.153312811422 for Y 4
[[2 3 8 0 6]]
[[ 0.20441037 0.20917797 0.25121628 0.2903253 0.33914051]]
the nearest index is: 2 with distance: 0.204410367254 for Y 5
[[2 1 0 3 5]]
[[ 0.08400022 0.1484925 0.17356156 0.32387147 0.33789602]]
the nearest index is: 2 with distance: 0.0840002184199 for Y 6
[[8 2 3 7 0]]
[[ 0.2149891 0.40584999 0.50054235 0.53307269 0.5389266 ]]
the nearest index is: 8 with distance: 0.21498909502 for Y 7
[[1 0 2 5 9]]
[[ 0.07265268 0.11687068 0.19065327 0.20004392 0.30269591]]
the nearest index is: 1 with distance: 0.0726526838766 for Y 8
[[5 9 4 1 0]]
[[ 0.21563204 0.25067242 0.29904262 0.36745386 0.39634179]]
the nearest index is: 5 with distance: 0.21563203953 for Y 9
[9, 0, 8, 8, 1, 2, 2, 8, 1, 5]
import numpy as np
from scipy.spatial import KDTree

X = np.random.random((10, 2))  # 10 points in 2 dimensions
Y = np.random.random((10, 2))  # 10 points in 2 dimensions
NNlist = []
for i in range(len(X)):
    # Build a tree on Y plus the query point itself, so the first hit is always the point itself
    treey = KDTree(np.concatenate([Y.tolist(), np.expand_dims(X[i], axis=0)], axis=0))
    dist, ind = treey.query([X[i]], k=6)
    print('index', ind)        # indices of the point itself and its 5 closest neighbors
    print('distance', dist)
    print('5 nearest neighbors')
    for j in ind[0][1:]:
        print(Y[j])
    print()
you can get ...
index [[10 5 8 9 1 2]]
distance [[ 0. 0.3393312 0.38565112 0.40120109 0.44200758 0.47675255]]
5 nearest neighbors
[ 0.6298789 0.18283264]
[ 0.42952574 0.83918788]
[ 0.26258905 0.4115705 ]
[ 0.61789523 0.96261285]
[ 0.92417172 0.13276541]
index [[10 1 3 8 4 9]]
distance [[ 0. 0.09176157 0.18219064 0.21845335 0.28876942 0.60082231]]
5 nearest neighbors
[ 0.61789523 0.96261285]
[ 0.51031835 0.99761715]
[ 0.42952574 0.83918788]
[ 0.3744326 0.97577322]
[ 0.26258905 0.4115705 ]
index [[10 7 0 9 5 6]]
distance [[ 0. 0.15771386 0.2751765 0.3457175 0.49918935 0.70597498]]
5 nearest neighbors
[ 0.19803817 0.23495888]
[ 0.41293849 0.05585981]
[ 0.26258905 0.4115705 ]
[ 0.6298789 0.18283264]
[ 0.04527532 0.78806495]
index [[10 0 5 7 9 2]]
distance [[ 0. 0.09269963 0.20597988 0.24505542 0.31104979 0.49743673]]
5 nearest neighbors
[ 0.41293849 0.05585981]
[ 0.6298789 0.18283264]
[ 0.19803817 0.23495888]
[ 0.26258905 0.4115705 ]
[ 0.92417172 0.13276541]
index [[10 9 5 7 0 8]]
distance [[ 0. 0.20406876 0.26125464 0.30645317 0.33369641 0.45509834]]
5 nearest neighbors
[ 0.26258905 0.4115705 ]
[ 0.6298789 0.18283264]
[ 0.19803817 0.23495888]
[ 0.41293849 0.05585981]
[ 0.42952574 0.83918788]
index [[10 5 2 0 7 9]]
distance [[ 0. 0.13641503 0.17524716 0.34224271 0.56393988 0.56893897]]
5 nearest neighbors
[ 0.6298789 0.18283264]
[ 0.92417172 0.13276541]
[ 0.41293849 0.05585981]
[ 0.19803817 0.23495888]
[ 0.26258905 0.4115705 ]
index [[10 7 9 0 5 6]]
distance [[ 0. 0.04152391 0.22807566 0.25709252 0.43421854 0.61332497]]
5 nearest neighbors
[ 0.19803817 0.23495888]
[ 0.26258905 0.4115705 ]
[ 0.41293849 0.05585981]
[ 0.6298789 0.18283264]
[ 0.04527532 0.78806495]
index [[10 5 1 2 8 3]]
distance [[ 0. 0.40641681 0.43652515 0.44861766 0.45186271 0.51705369]]
5 nearest neighbors
[ 0.6298789 0.18283264]
[ 0.61789523 0.96261285]
[ 0.92417172 0.13276541]
[ 0.42952574 0.83918788]
[ 0.51031835 0.99761715]
index [[10 6 9 7 8 4]]
distance [[ 0. 0.17568369 0.2841519 0.40184611 0.43110847 0.47835169]]
5 nearest neighbors
[ 0.04527532 0.78806495]
[ 0.26258905 0.4115705 ]
[ 0.19803817 0.23495888]
[ 0.42952574 0.83918788]
[ 0.3744326 0.97577322]
index [[10 9 7 5 0 8]]
distance [[ 0. 0.11723769 0.2275565 0.32111803 0.32446146 0.4643181 ]]
5 nearest neighbors
[ 0.26258905 0.4115705 ]
[ 0.19803817 0.23495888]
[ 0.6298789 0.18283264]
[ 0.41293849 0.05585981]
[ 0.42952574 0.83918788]
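As a side note (a minimal sketch, not part of the original answer): if the goal is simply the 5 nearest X points for every Y point, the tree built on X in the question can be queried for all of Y in a single call:
import numpy as np
from sklearn.neighbors import KDTree
X = np.random.random((10, 2))
Y = np.random.random((10, 2))
tree = KDTree(X, leaf_size=2)
dist, ind = tree.query(Y, k=5)   # both arrays have shape (10, 5)
print(ind)                       # row i holds the indices of the 5 X points closest to Y[i]
print(dist)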
In Maxima, we have matrix_element_add, matrix_element_mult and matrix_element_transpose.
Is there a matrix_element_inv, and if not, how could I make one?
If you want to invert a matrix, first remember that not every matrix can be inverted, so make sure your matrix is invertible.
In Maxima the operator for matrix multiplication is . (a dot), so the matrix product A . A (the matrix square) is obtained with A^^2.
The ^ operator, by contrast, applies to each element of the matrix, so if you were to invert all the elements:
(%i1) A: matrix ([17, 3], [-8, 11]);
[ 17 3 ]
(%o1) [ ]
[ - 8 11 ]
(%i9) A^-1;
[ 1 1 ]
[ -- - ]
[ 17 3 ]
(%o9) [ ]
[ 1 1 ]
[ - - -- ]
[ 8 11 ]
then to get the inverse of a matrix:
(%i2) B: A^^-1;
[ 11 3 ]
[ --- - --- ]
[ 211 211 ]
(%o2) [ ]
[ 8 17 ]
[ --- --- ]
[ 211 211 ]
(%i4) B.A;
[ 1 0 ]
(%o4) [ ]
[ 0 1 ]
(%i5) A.B;
[ 1 0 ]
(%o5) [ ]
[ 0 1 ]
be sure that your matrix is invertible:
(%i6) Bad: matrix ([2, 3], [4, 6]);
[ 2 3 ]
(%o6) [ ]
[ 4 6 ]
(%i7) Bad^^-1;
expt: undefined: 0 to a negative exponent.
-- an error. To debug this try: debugmode(true);
(%i8) newdet(Bad);
(%o8)/R/ 0
Now you should read this section of the manual carefully:
http://maxima.sourceforge.net/docs/manual/maxima_23.html
especially the part about
matrix_element_add
There really are only these operators, so a matrix_element_inv does not exist.
However, you can write your own using lambda functions. For example, to get the transpose with every element inverted:
(%i10) matrix_element_transpose: lambda ([x], x^-1)$
(%i11) transpose(A);
[ 1 1 ]
[ -- - - ]
[ 17 8 ]
(%o11) [ ]
[ 1 1 ]
[ - -- ]
[ 3 11 ]
I hope this helps.
I was reading about the TfidfVectorizer implementation in scikit-learn, and I don't understand the output of the method. For example:
new_docs = ['He watches basketball and baseball', 'Julie likes to play basketball', 'Jane loves to play baseball']
new_term_freq_matrix = tfidf_vectorizer.transform(new_docs)
print tfidf_vectorizer.vocabulary_
print new_term_freq_matrix.todense()
output:
{u'me': 8, u'basketball': 1, u'julie': 4, u'baseball': 0, u'likes': 5, u'loves': 7, u'jane': 3, u'linda': 6, u'more': 9, u'than': 10, u'he': 2}
[[ 0.57735027 0.57735027 0.57735027 0. 0. 0. 0.
0. 0. 0. 0. ]
[ 0. 0.68091856 0. 0. 0.51785612 0.51785612
0. 0. 0. 0. 0. ]
[ 0.62276601 0. 0. 0.62276601 0. 0. 0.
0.4736296 0. 0. 0. ]]
What is this? (e.g. u'me': 8):
{u'me': 8, u'basketball': 1, u'julie': 4, u'baseball': 0, u'likes': 5, u'loves': 7, u'jane': 3, u'linda': 6, u'more': 9, u'than': 10, u'he': 2}
Is this a matrix or just a vector? I can't understand what the output is telling me:
[[ 0.57735027 0.57735027 0.57735027 0. 0. 0. 0.
0. 0. 0. 0. ]
[ 0. 0.68091856 0. 0. 0.51785612 0.51785612
0. 0. 0. 0. 0. ]
[ 0.62276601 0. 0. 0.62276601 0. 0. 0.
0.4736296 0. 0. 0. ]]
Could anybody explain these outputs to me in more detail?
Thanks!
TfidfVectorizer - transforms text into feature vectors that can be used as input to an estimator.
vocabulary_ is a dictionary that maps each token (word) to a feature index in the matrix; each unique token gets a feature index.
What is?(e.g.: u'me': 8 )
It tells you that the token 'me' is represented as feature number 8 in the output matrix.
is this a matrix or just a vector?
Each sentence is a vector, and the sentences you've entered form a matrix with 3 vectors (one row per sentence).
In each vector the numbers (weights) are the tf-idf scores of the features.
For example:
'julie': 4 --> tells you that in each sentence where 'julie' appears, feature 4 will have a non-zero tf-idf weight, as you can see in the 2nd vector:
[ 0. 0.68091856 0. 0. 0.51785612 0.51785612
0. 0. 0. 0. 0. ]
The 5th element (index 4) scored 0.51785612 - the tf-idf score for 'julie'.
For more info about Tf-Idf scoring read here: http://en.wikipedia.org/wiki/Tf%E2%80%93idf
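A small sketch (assuming tfidf_vectorizer and new_term_freq_matrix from the question) of how to map feature indices back to tokens when reading a row of the matrix:
import numpy as np
# Invert the vocabulary: feature index -> token
index_to_token = {idx: tok for tok, idx in tfidf_vectorizer.vocabulary_.items()}
row = np.asarray(new_term_freq_matrix.todense()[1]).ravel()   # 2nd document: 'Julie likes to play basketball'
for idx, weight in enumerate(row):
    if weight > 0:
        print(index_to_token[idx], weight)                    # e.g. basketball, julie, likes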
So tf-idf builds a vocabulary of its own from the entire set of documents, which is what you see in the first line of the output (for better understanding I have sorted it):
{u'baseball': 0, u'basketball': 1, u'he': 2, u'jane': 3, u'julie': 4, u'likes': 5, u'linda': 6, u'loves': 7, u'me': 8, u'more': 9, u'than': 10}
When a document is parsed to get its tf-idf scores, for example the document:
He watches basketball and baseball
and its output,
[ 0.57735027 0.57735027 0.57735027 0. 0. 0. 0.
0. 0. 0. 0. ]
is equivalent to,
[baseball basketball he jane julie likes linda loves me more than]
Since our document contains only these words from the vocabulary: baseball, basketball, he, the document vector has non-zero tf-idf values for only those three words, in the same sorted vocabulary positions.
tf-idf is used for classifying documents and for ranking in search engines. tf: term frequency (count of the words present in the document, out of its own vocabulary); idf: inverse document frequency (importance of the word to each document).
The method addresses the fact that all words should not be weighted equally, using the weights to indicate the words that are most unique to the document, and best used to characterize it.
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()

new_docs = ['basketball baseball', 'basketball baseball', 'basketball baseball']
new_term_freq_matrix = vectorizer.fit_transform(new_docs)
print(vectorizer.vocabulary_)
print(new_term_freq_matrix.todense())
{'basketball': 1, 'baseball': 0}
[[ 0.70710678 0.70710678]
[ 0.70710678 0.70710678]
[ 0.70710678 0.70710678]]
new_docs = ['basketball baseball', 'basketball basketball', 'basketball basketball']
new_term_freq_matrix = vectorizer.fit_transform(new_docs)
print(vectorizer.vocabulary_)
print(new_term_freq_matrix.todense())
{'basketball': 1, 'baseball': 0}
[[ 0.861037 0.50854232]
[ 0. 1. ]
[ 0. 1. ]]
new_docs = ['basketball basketball baseball', 'basketball basketball', 'basketball basketball']
new_term_freq_matrix = vectorizer.fit_transform(new_docs)
print(vectorizer.vocabulary_)
print(new_term_freq_matrix.todense())
{'basketball': 1, 'baseball': 0}
[[ 0.64612892 0.76322829]
[ 0. 1. ]
[ 0. 1. ]]