Sequence predictions for TFT in NeuralForecast - time-series

Does NeuralForecast support a sequence of predictions for the TFT model?
I noticed that tgt_size is hardcoded to 1 in the TFT class.

Related

Calculating Probability of a Classification Model Prediction

I have a classification task. The training data has 50 different labels. The customer wants to flag low-probability predictions, meaning I have to classify some test data as Unclassified / Other depending on the probability (certainty?) of the model.
When I test my code, the prediction result is a numpy array (I'm using different models; this one is a pre-trained BertTransformer). The prediction array doesn't contain probabilities like the ones from Keras' predict_proba() method. These are the numbers generated by the prediction method of the pretrained BertTransformer model.
[[-1.7862008 -0.7037363 0.09885322 1.5318055 2.1137428 -0.2216074
0.18905772 -0.32575375 1.0748093 -0.06001111 0.01083148 0.47495762
0.27160102 0.13852511 -0.68440574 0.6773654 -2.2712054 -0.2864312
-0.8428862 -2.1132915 -1.0157436 -1.0340284 -0.35126117 -1.0333195
9.149789 -0.21288703 0.11455813 -0.32903734 0.10503325 -0.3004114
-1.3854568 -0.01692022 -0.4388664 -0.42163098 -0.09182278 -0.28269592
-0.33082992 -1.147654 -0.6703184 0.33038092 -0.50087476 1.1643585
0.96983343 1.3400391 1.0692116 -0.7623776 -0.6083422 -0.91371405
0.10002492]]
I'm using numpy.argmax() to identify the correct label. The prediction works just fine. However, since these are not probabilities, I cannot compare the best result with a threshold value.
My question is, how can I define a threshold (say, 0.6), and then compare the probability of the argmax() element of the BertTransformer prediction array so that I can classify the prediction as "Other" if the probability is less than the threshold value?
Edit 1:
We are using 2 different models. One is Keras, and the other is BertTransformer. We have no problem with Keras since it gives probabilities, so I'm skipping the Keras model.
The Bert model is pretrained. Here is how it is generated:
from transformers import BertForSequenceClassification

def model(self, data):
    number_of_categories = len(data['encoded_categories'].unique())
    model = BertForSequenceClassification.from_pretrained(
        "dbmdz/bert-base-turkish-128k-uncased",
        num_labels=number_of_categories,
        output_attentions=False,
        output_hidden_states=False,
    )
    # model.cuda()
    return model
The output given above is the result of the model.predict() method. We compared both models; Bert is slightly ahead, so we know the prediction works fine. However, we are not sure what those numbers signify or represent.
Here is the Bert documentation.
BertForSequenceClassification returns logits, i.e., the classification scores before normalization. You can normalize the scores by calling F.softmax(output, dim=-1) where torch.nn.functional was imported as F.
With thousands of labels, the normalization can be costly and you do not need it when you are only interested in argmax. This is probably why the models return the raw scores only.
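For example, a minimal sketch (the tensor below is illustrative, not the asker's actual output) of normalizing the logits and falling back to "Other" below a confidence threshold:

import torch
import torch.nn.functional as F

logits = torch.tensor([[-1.79, -0.70, 0.10, 9.15, 0.47]])  # raw BertForSequenceClassification scores
probs = F.softmax(logits, dim=-1)                # probabilities that sum to 1
confidence, label_id = torch.max(probs, dim=-1)  # best class and its probability

threshold = 0.6
prediction = label_id.item() if confidence.item() >= threshold else "Other"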

DL4J - Is there a way to restrict the prediction of a model

I trained an MNIST model with DL4J. When I use this model in inference mode:
INDArray prediction = myModel.output(myINDArrayImage);
it gives me a prediction in an INDArray, and it works properly.
The size of this INDArray is equal to the number of outputs of my model's OutputLayer.
Is there a way to restrict the prediction to a given set of characters?
i.e. something like this:
INDArray prediction = myModel.output(myINDArrayImage, charactersPossible);
where charactersPossible is the list of possible output indexes?
You can create an INDArray (using Nd4j.create(double[])) with 1.0 for the possible characters and 0.0 for the impossible ones, multiply it element-wise with the prediction INDArray, and then take Nd4j.argMax of the result.
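The same idea, sketched here in numpy for illustration (in DL4J the analogous steps are Nd4j.create for the mask, an element-wise multiply, and Nd4j.argMax):

import numpy as np

prediction = np.array([0.05, 0.20, 0.10, 0.40, 0.25])  # hypothetical model output
characters_possible = [0, 2, 4]                        # allowed output indexes

mask = np.zeros_like(prediction)
mask[characters_possible] = 1.0      # 1.0 for possible characters, 0.0 otherwise

best_index = int(np.argmax(prediction * mask))  # argmax over the allowed outputs only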

Doc2vecC predicting vectors for unseen documents

I have trained a set of documents using Doc2VecC.
https://github.com/mchen24/iclr2017
I am trying to generate the embedding vectors for unseen documents. I trained the documents as described in go.sh:
"""
time ./doc2vecc -train ./aclImdb/alldata-shuf.txt -word
wordvectors.txt -output docvectors.txt -cbow 1 -size 100 -window 10 -
negative 5 -hs 0 -sample 0 -threads 4 -binary 0 -iter 20 -min-count 10
-test ./aclImdb/alldata.txt -sentence-sample 0.1 -save-vocab
alldata.vocab
"""
I get docvectors.txt and wordvectors.txt for the training set. From here, how do I generate vectors for unseen test documents using the same model, without retraining?
As far as I can tell, the author (https://github.com/mchen24) of that doc2vecc.c code (and paper) just made minimal changes to some example 'paragraph vector' code that was itself a minimal change to the original Google/Mikolov word2vec.c (https://github.com/tmikolov/word2vec/blob/master/word2vec.c).
Neither the 'paragraph vector' changes nor the subsequent doc2vecc changes appear to include any functionality for inferring vectors for new documents.
Because these are unsupervised algorithms, for some purposes it may be appropriate to calculate the document-vectors for some downstream classification task, for both training and test texts, in the same combined bulk training. (Your ultimate goals may in fact have unlabeled examples available to help learn the document-vectorization, even if your classifier should be trained and evaluated on some subset of known-label texts.)
Doc2VecC is expressly designed to create document vectors as averages of the word-vectors in each document. This is unlike Doc2Vec, where document embeddings are trained alongside the word embeddings, which makes it impossible to handle unseen documents. The number of trained vectors is also enormous in Doc2Vec.
To build the vector for an unseen document, just count all the words from your vocabulary in it and compute an average of the word-vectors.
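A minimal sketch of that averaging, assuming wordvectors.txt is in the plain-text word2vec format ("word v1 v2 ... vN" per line, possibly with a header line):

import numpy as np

def load_word_vectors(path):
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 3:                 # skip an optional "count dim" header line
                continue
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

def embed_document(text, vectors):
    words = [w for w in text.lower().split() if w in vectors]
    if not words:
        return None                            # no known vocabulary words in the document
    return np.mean([vectors[w] for w in words], axis=0)

word_vectors = load_word_vectors("wordvectors.txt")
doc_vector = embed_document("an unseen review of a film", word_vectors)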

The proper way of using IsolationForest to detect outliers of high-dim dataset

I use the following simple IsolationForest setup to detect the outliers of a given dataset X of 20K samples and 16 features. I run the following:
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

train_X, test_X, train_y, test_y = train_test_split(X, y, train_size=.8)
clf = IsolationForest()
clf.fit(X)  # Notice I am using the entire dataset X when fitting!!
print(clf.predict(X))
I get the result:
[ 1 1 1 -1 ... 1 1 1 -1 1]
The question is: is it logically correct to use the entire dataset X when fitting the IsolationForest, or only train_X?
Yes, it is logically correct to ultimately train on the entire dataset.
With that in mind, you could measure the test set's performance against the training set's performance. This could tell you whether the test set comes from a similar distribution to your training set.
If the test set scores as anomalous compared to the training set, then you can expect future data to look similar. In this case, I would want more data to get a more complete view of what is 'normal'.
If the test set scores similarly to the training set, I would be more comfortable with the final Isolation Forest trained on all data.
Perhaps you could use sklearn's TimeSeriesSplit CV in this fashion to get a sense of how much data is enough for your problem.
Since this is unlabeled data to the anomaly detector, the more data the better when defining 'normal'.
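A rough sketch of that comparison (variable names are illustrative; X is assumed to be the 20K x 16 feature matrix from the question): fit on the training split, compare anomaly-score distributions on both splits, and only then refit on everything.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import train_test_split

train_X, test_X = train_test_split(X, train_size=0.8, random_state=0)

clf = IsolationForest(random_state=0).fit(train_X)
print("train score mean:", np.mean(clf.score_samples(train_X)))  # higher = more normal
print("test score mean:", np.mean(clf.score_samples(test_X)))

# If the two score distributions look similar, refit on the full dataset.
final_clf = IsolationForest(random_state=0).fit(X)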

Normalizing feature values for SVM

I've been playing with some SVM implementations and I am wondering - what is the best way to normalize feature values to fit into one range? (from 0 to 1)
Let's suppose I have 3 features with values in the ranges:
3 - 5
0.02 - 0.05
10 - 15
How do I convert all of those values into the range [0, 1]?
What if, during training, the highest value of feature number 1 that I encounter is 5, and after I begin to use my model on much bigger datasets, I stumble upon values as high as 7? Then in the converted range it would exceed 1...
How do I normalize values during training to account for the possibility of "values in the wild" exceeding the highest (or lowest) values the model has seen during training? How will the model react to that, and how do I make it work properly when that happens?
Besides the scaling-to-unit-length method provided by Tim, standardization is most often used in the machine learning field. Please note that when your test data comes in, it makes more sense to use the mean value and standard deviation from your training samples to do this scaling. If you have a very large amount of training data, it is safe to assume it follows a normal distribution, so the possibility that new test data is out of range won't be that high. Refer to this post for more details.
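A minimal sketch of standardizing with statistics computed on the training data only (scikit-learn's StandardScaler does this bookkeeping; the numbers below are made up):

import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[3.0, 0.02, 10.0],
                  [4.0, 0.03, 12.0],
                  [5.0, 0.05, 15.0]])
new = np.array([[7.0, 0.04, 13.0]])   # first feature exceeds the training range

scaler = StandardScaler().fit(train)  # learns per-feature mean and std from the training data
print(scaler.transform(train))
print(scaler.transform(new))          # out-of-range values simply map further from 0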
You normalise a vector by converting it to a unit vector. This trains the SVM on the relative values of the features, not the magnitudes. The normalisation algorithm will work on vectors with any values.
To convert to a unit vector, divide each value by the length of the vector. For example, a vector of [4 0.02 12] has a length of 12.6491. The normalised vector is then [4/12.6491 0.02/12.6491 12/12.6491] = [0.316 0.0016 0.949].
If "in the wild" we encounter a vector of [400 2 1200] it will normalise to the same unit vector as above. The magnitudes of the features is "cancelled out" by the normalisation and we are left with relative values between 0 and 1.
