How can I set max_features for CountVectorizer? - vectorization

I created a CountVectorizer with scikit-learn, but got a syntax error at "max_features". "max_features" worked when I created a TfidfVectorizer. How can I set max_features on CountVectorizer?
vectorizer = CountVectorizer(analyzer='word',
                             lowercase=False,
                             tokenizer=None,
                             preprocessor=None,
                             min_df=2,
                             ngram_range=(1,1)
                             max_features=1000
                             )

I think you missed a comma after ngram_range=(1, 1).
Try this:
vectorizer = CountVectorizer(analyzer='word',
                             lowercase=False,
                             tokenizer=None,
                             preprocessor=None,
                             min_df=2,
                             ngram_range=(1,1),
                             max_features=1000
                             )
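Once the comma is in place you can sanity-check that max_features actually caps the vocabulary; this is just a minimal sketch with a made-up toy corpus:
from sklearn.feature_extraction.text import CountVectorizer
# Hypothetical toy corpus, only for checking the vocabulary cap.
docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "the cat chased the dog"]
vectorizer = CountVectorizer(analyzer='word',
                             lowercase=False,
                             min_df=2,
                             ngram_range=(1, 1),
                             max_features=1000)
X = vectorizer.fit_transform(docs)
# The vocabulary can never exceed max_features (here it is far smaller anyway).
print(len(vectorizer.vocabulary_), X.shape)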

Related

Questions regarding custom multiclass metrics (Keras)

Could anyone explain how to write a custom multiclass metric for Keras? I tried to write a custom metric but encountered some issues. The main problem is that I am not familiar with how tensors work during training (I think it is called graph mode?). I am able to create a confusion matrix and derive F1 scores using NumPy or Python lists.
I printed out y_true and y_pred and tried to understand them, but the output was not what I expected:
Below is the function I used:
def f1_scores(y_true, y_pred):
    y_true = K.print_tensor(y_true, message='y_true = ')
    y_pred = K.print_tensor(y_pred, message='y_pred = ')
    print(f"y_true_shape:{K.int_shape(y_true)}")
    print(f"y_pred_shape:{K.int_shape(y_pred)}")
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    gt = K.argmax(y_true_f)
    pred = K.argmax(y_pred_f)
    print(f"pred_print:{pred}")
    print(f"gt_print:{gt}")
    pred = K.print_tensor(pred, message='pred= ')
    gt = K.print_tensor(gt, message='gt =')
    print(f"pred_shape:{K.int_shape(pred)}")
    print(f"gt_shape:{K.int_shape(gt)}")
    pred_f = K.flatten(pred)
    gt_f = K.flatten(gt)
    pred_f = K.print_tensor(pred_f, message='pred_f= ')
    gt_f = K.print_tensor(gt_f, message='gt_f =')
    print(f"pred_f_shape:{K.int_shape(pred_f)}")
    print(f"gt_f_shape:{K.int_shape(gt_f)}")
    conf_mat = tf.math.confusion_matrix(y_true_f, y_pred_f, num_classes=14)
    """
    add codes to find F1 score for each class
    """
    # return an arbitrary number, as F1 scores not found yet.
    return 1
The output when epoch 1 had just started:
y_true_shape:(None, 256, 256, 14)
y_pred_shape:(None, 256, 256, 14)
pred_print:Tensor("ArgMax_1:0", shape=(), dtype=int64)
gt_print:Tensor("ArgMax:0", shape=(), dtype=int64)
pred_shape:()
gt_shape:()
pred_f_shape:(1,)
gt_f_shape:(1,)
The rest of the steps and epochs were similar, as below:
y_true = [[[[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
[1 0 0 ... 0 0 0]
...
y_pred = [[[[0.0889623 0.0624801107 0.0729747042 ... 0.0816219151 0.0735477135 0.0698677748]
[0.0857798532 0.0721047595 0.0754121244 ... 0.0723947287 0.0728530064 0.0676521733]
[0.0825942457 0.0670698211 0.0879610255 ... 0.0721599609 0.0845924541 0.0638583601]
...
pred= 1283828
gt = 0
pred_f= [1283828]
gt_f = [0]
Why is pred a single number instead of a list of numbers, each representing a class index? Similarly, why is pred_f a list with only one number instead of a list of indices?
And for gt (and gt_f), why is the value 0? I expect them to be lists of indices.
It looks like argmax() simply uses the flattened y.
You need to specify which axis you want argmax() to reduce. It's probably the last one, in your case 3. Then you'll get pred with shape (None, 256, 256) containing integers between 0 and 13.
Try something like this: pred = K.argmax(y_pred, axis=3)
This is the documentation for TensorFlow's argmax. (But I'm not sure you're using exactly that, since I cannot see what K is imported as.)
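To extend that into the per-class F1 the question is after, here is a minimal sketch; it assumes K is the Keras backend (e.g. from tensorflow.keras import backend as K), one-hot labels over the last axis, and the 14 classes mentioned in the question:
import tensorflow as tf
from tensorflow.keras import backend as K  # assumption: K is the Keras backend

def f1_scores(y_true, y_pred, num_classes=14):
    # Reduce over the class axis only, keeping the spatial dimensions.
    gt = K.argmax(y_true, axis=-1)    # shape (batch, 256, 256)
    pred = K.argmax(y_pred, axis=-1)  # shape (batch, 256, 256)
    # Flatten the pixels into one long vector of class indices.
    conf = tf.math.confusion_matrix(K.flatten(gt), K.flatten(pred),
                                    num_classes=num_classes, dtype=tf.float32)
    # Per-class precision and recall from the confusion matrix diagonal.
    tp = tf.linalg.diag_part(conf)
    precision = tp / (tf.reduce_sum(conf, axis=0) + K.epsilon())
    recall = tp / (tf.reduce_sum(conf, axis=1) + K.epsilon())
    f1 = 2 * precision * recall / (precision + recall + K.epsilon())
    # Macro-average over the classes.
    return tf.reduce_mean(f1)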

DecisionTreeClassifier: Input contains NaN, infinity or a value too large for dtype('float32')

clf = DecisionTreeClassifier()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, error_score='raise')
print(score)
After running this code I get this error:
ValueError: Input contains NaN, infinity or a value too large for
dtype('float32').
So how can I fix it?
Decision trees don't accept NaN / infinity values.
Try doing (assuming that train_data is a Pandas DataFrame):
train_data.fillna(0, inplace = True)
This will replace all NaN values by 0.
If you don't want this, the only other option is to delete the entries with NaN data:
train_data.dropna(inplace = True)
If this is not a DataFrame, try adding this line before the fillna method:
train_data = pd.DataFrame(train_data)
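A minimal sketch combining those steps (the DataFrame conversion, then fill or drop); it also maps infinities to NaN first, and the dirty array at the end is made up purely for illustration:
import numpy as np
import pandas as pd

def clean_for_tree(train_data, fill_value=0):
    # Convert to a DataFrame if it isn't one already.
    df = pd.DataFrame(train_data)
    # Map +/- infinity to NaN so one pass handles both problems.
    df = df.replace([np.inf, -np.inf], np.nan)
    df = df.fillna(fill_value)  # or: df = df.dropna()
    # Sanity check before handing the data to cross_val_score.
    assert np.isfinite(df.to_numpy()).all()
    return df

# Hypothetical usage with made-up data containing a NaN and an inf:
dirty = np.array([[1.0, np.nan], [np.inf, 2.0]])
print(clean_for_tree(dirty))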

Why does Keras not generalize my data?

I've been trying to implement a basic multilayered LSTM regression network to find correlations between cryptocurrency prices.
After running into unusable training results, I've decided to play around with some sandbox code, to make sure I've got the idea right before trying again on my full dataset.
The problem is I can't get Keras to generalize my data.
ts = 3
in_dim = 1
data = [i*100 for i in range(10)]
# tried this, didn't accomplish anything
# data = [(d - np.mean(data))/np.std(data) for d in data]
x = data[:len(data) - 4]
y = data[3:len(data) - 1]
assert(len(x) == len(y))
x = [[_x] for _x in x]
y = [[_y] for _y in y]
x = [x[idx:idx + ts] for idx in range(0, len(x), ts)]
y = [y[idx:idx + ts] for idx in range(0, len(y), ts)]
x = np.asarray(x)
y = np.asarray(y)
x looks like this:
[[[ 0]
[100]
[200]]
[[300]
[400]
[500]]]
and y:
[[[300]
[400]
[500]]
[[600]
[700]
[800]]]
and this works well when I predict using a very similar dataset, but it doesn't generalize when I try a similar sequence with scaled values:
model = Sequential()
model.add(BatchNormalization(
    axis = 1,
    input_shape = (ts, in_dim)))
model.add(LSTM(
    100,
    input_shape = (ts, in_dim),
    return_sequences = True))
model.add(TimeDistributed(Dense(in_dim)))
model.add(Activation('linear'))
model.compile(loss = 'mse', optimizer = 'rmsprop')
model.fit(x, y, epochs = 2000, verbose = 0)
p = np.asarray([[[10],[20],[30]]])
prediction = model.predict(p)
print(prediction)
prints
[[[ 165.78544617]
[ 209.34489441]
[ 216.02174377]]]
I want
[[[ 40.0000]
[ 50.0000]
[ 60.0000]]]
How can I format this so that when I plug in a sequence with values of a completely different scale, the network will still output its predicted value? I've tried normalizing my training data, but the results are still entirely unusable.
What have I done wrong here?
How about transforming your input data before sending it into your LSTM, using something like sklearn.preprocessing.StandardScaler? After prediction you can call scaler.inverse_transform(prediction).
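A minimal sketch of that idea, using the (samples, timesteps, 1) arrays from the question; StandardScaler expects 2D input, so the arrays are reshaped around it, and the model-fitting lines are left as comments:
import numpy as np
from sklearn.preprocessing import StandardScaler

# Arrays shaped like the question's x and y: (samples, ts, 1).
x = np.array([[[0], [100], [200]], [[300], [400], [500]]], dtype=float)
y = np.array([[[300], [400], [500]], [[600], [700], [800]]], dtype=float)

scaler = StandardScaler()
# Fit on the training inputs, flattening the time dimension for the scaler.
x_scaled = scaler.fit_transform(x.reshape(-1, 1)).reshape(x.shape)
y_scaled = scaler.transform(y.reshape(-1, 1)).reshape(y.shape)
# ... build and fit the model on x_scaled / y_scaled ...

# At prediction time, scale new sequences with the *same* scaler,
# then map the model output back to the original units.
p = np.asarray([[[10], [20], [30]]], dtype=float)
p_scaled = scaler.transform(p.reshape(-1, 1)).reshape(p.shape)
# prediction = model.predict(p_scaled)
# prediction = scaler.inverse_transform(prediction.reshape(-1, 1)).reshape(p.shape)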

How can I change the max sequence length in a Tensorflow RNN Model?

I am currently trying to adapt my TensorFlow classifier, which is able to tag a sequence of words as positive or negative, to handle much longer sequences without retraining. My model is an RNN with a max sequence length of 210. One input is one word (300 dims); I vectorised the words with Google's word2vec, so I am able to feed a sequence of at most 210 words. Now my question is, how can I change the max sequence length to, for example, 3000, for classifying movie reviews?
My working model with a fixed max sequence length of 210 (tf_version: 1.1.0):
n_chunks = 210
chunk_size = 300
x = tf.placeholder("float",[None,n_chunks,chunk_size])
y = tf.placeholder("float",None)
seq_length = tf.placeholder("int64",None)
with tf.variable_scope("rnn1"):
lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size,
state_is_tuple=True)
lstm_cell = tf.contrib.rnn.DropoutWrapper (lstm_cell,
input_keep_prob=0.8)
outputs, _ = tf.nn.dynamic_rnn(lstm_cell,x,dtype=tf.float32,
sequence_length = self.seq_length)
fc = tf.contrib.layers.fully_connected(outputs, 1000,
activation_fn=tf.nn.relu)
output = tf.contrib.layers.flatten(fc)
#*1
logits = tf.contrib.layers.fully_connected(output, self.n_classes,
activation_fn=None)
cost = tf.reduce_mean( tf.nn.softmax_cross_entropy_with_logits
(logits=logits, labels=y) )
optimizer = tf.train.AdamOptimizer(learning_rate=0.01).minimize(cost)
...
#train
#train_x padded to fit(batch_size*n_chunks*chunk_size)
sess.run([optimizer, cost], feed_dict={x:train_x, y:train_y,
seq_length:seq_length})
#predict:
...
pred = tf.nn.softmax(logits)
pred = sess.run(pred,feed_dict={x:word_vecs, seq_length:sq_l})
Modifications I have already tried:
1. Replacing n_chunks with None and simply feeding the data in:
x = tf.placeholder(tf.float32, [None,None,300])
#model fails to build
#ValueError: The last dimension of the inputs to `Dense` should be defined.
#Found `None`.
# at *1
...
#all entries in word_vecs still have the same length, for example
#3000 (batch_size*3000 (!= n_chunks)*300)
pred = tf.nn.softmax(logits)
pred = sess.run(pred,feed_dict={x:word_vecs, seq_length:sq_l})
2. Changing x and then restoring the old model:
x = tf.placeholder(tf.float32, [None,n_chunks*10,chunk_size])
...
saver = tf.train.Saver(tf.all_variables(), reshape=True)
saver.restore(sess,"...")
#fails as well:
#InvalidArgumentError (see above for traceback): Input to reshape is a
#tensor with 420000 values, but the requested shape has 840000
#[[Node: save/Reshape_5 = Reshape[T=DT_FLOAT, Tshape=DT_INT32,
#_device="/job:localhost/replica:0/task:0/cpu:0"](save/RestoreV2_5,
#save/Reshape_5/shape)]]
# run prediction
If it is possible, could you please provide me with a working example, or explain why it isn't?
I am just wondering: why don't you just assign n_chunks a value of 3000?
In your first attempt, you cannot use two Nones, since TF cannot tell how many dimensions to use for each one. The first dimension is set to None because it is contingent upon the batch size. In your second attempt, you just change one place, and the other places where n_chunks is used may conflict with the x placeholder.
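If retraining is acceptable, one way around the error at *1 is to keep the time dimension dynamic but classify from the LSTM's final state instead of flattening every timestep, so no dense layer ever sees an unknown dimension. This changes the head of the network, so the previously saved weights would not carry over. A minimal sketch using the question's TF 1.x API, with rnn_size and n_classes as stand-in values:
import tensorflow as tf

rnn_size, chunk_size, n_classes = 128, 300, 2  # stand-in values

# Batch size and sequence length are both dynamic; only the feature size is fixed.
x = tf.placeholder(tf.float32, [None, None, chunk_size])
seq_length = tf.placeholder(tf.int64, [None])

with tf.variable_scope("rnn1"):
    lstm_cell = tf.contrib.rnn.LSTMCell(rnn_size, state_is_tuple=True)
    lstm_cell = tf.contrib.rnn.DropoutWrapper(lstm_cell, input_keep_prob=0.8)
    outputs, state = tf.nn.dynamic_rnn(lstm_cell, x, dtype=tf.float32,
                                       sequence_length=seq_length)
    # state.h has a fixed shape (batch, rnn_size) regardless of how many
    # timesteps were fed, so the classifier head builds without the error.
    fc = tf.contrib.layers.fully_connected(state.h, 1000,
                                           activation_fn=tf.nn.relu)
    logits = tf.contrib.layers.fully_connected(fc, n_classes,
                                               activation_fn=None)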

Neural network model not learning?

I tried to model an NN using softmax regression.
After 999 iterations, I got an error of about 0.02% per data point, which I thought was good. But when I visualized the model on TensorBoard, my cost function did not approach 0; instead I got something like this.
And for the weights and bias histograms, this.
I am a beginner and I can't seem to understand the mistake. Maybe I am using the wrong method to define the cost?
Here is my full code for reference.
import tensorflow as tf
import numpy as np
import random
lorange= 1
hirange= 10
amplitude= np.random.uniform(-10,10)
t= 10
random.seed()
tau=np.random.uniform(lorange,hirange)
x_node = tf.placeholder(tf.float32, (10,))
y_node = tf.placeholder(tf.float32, (10,))
W = tf.Variable(tf.truncated_normal([10,10], stddev= .1))
b = tf.Variable(.1)
y = tf.nn.softmax(tf.matmul(tf.reshape(x_node,[1,10]), W) + b)
##ADD SUMMARY
W_hist = tf.histogram_summary("weights", W)
b_hist = tf.histogram_summary("biases", b)
y_hist = tf.histogram_summary("y", y)
# Cost function sum((y_-y)**2)
with tf.name_scope("cost") as scope:
cost = tf.reduce_mean(tf.square(y_node-y))
cost_sum = tf.scalar_summary("cost", cost)
# Training using Gradient Descent to minimize cost
with tf.name_scope("train") as scope:
train_step = tf.train.GradientDescentOptimizer(0.00001).minimize(cost)
sess = tf.InteractiveSession()
# Merge all the summaries and write them out to logfile
merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("/tmp/mnist_logs_4", sess.graph_def)
error = tf.reduce_sum(tf.abs(y - y_node))
init = tf.initialize_all_variables()
sess.run(init)
steps = 1000
for i in range(steps):
    xs = np.arange(t)
    ys = amplitude * np.exp(-xs / tau)
    feed = {x_node: xs, y_node: ys}
    sess.run(train_step, feed_dict=feed)
    print("After %d iteration:" % i)
    print("W: %s" % sess.run(W))
    print("b: %s" % sess.run(b))
    print('Total Error: ', error.eval(feed_dict={x_node: xs, y_node: ys}))
    # Record summary data, and the accuracy every 10 steps
    if i % 10 == 0:
        result = sess.run(merged, feed_dict=feed)
        writer.add_summary(result, i)
I got the same kind of plot as you a couple of times.
That happened mostly when I was running TensorBoard on multiple log files; that is, the logdir I gave to TensorBoard contained multiple log files. Try running TensorBoard on one single log file and let me know what happens.
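A minimal sketch of keeping runs apart, using the same old tf.train.SummaryWriter API as the question; the run name is made up:
import tensorflow as tf

sess = tf.InteractiveSession()
# Give every training run its own subdirectory so TensorBoard
# does not interleave curves from different runs.
run_name = "run_001"  # hypothetical; change it for each experiment
writer = tf.train.SummaryWriter("/tmp/mnist_logs_4/" + run_name, sess.graph_def)
Then point TensorBoard at a single run with tensorboard --logdir=/tmp/mnist_logs_4/run_001, or at the parent directory if you want to compare runs side by side.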
