Text-to-text model to transform text data - machine-learning

Is there a pre-trained model that can be fine-tuned to transform certain text into another form? For example: `3.00 ARP --becomes--> 3`, `highly flammable --becomes--> high`, or `2.00 ATS --becomes--> 10`, etc.
Is this a feasible task that can produce excellent results, and if so, which models should I check out?
Some background about me: I am a beginner when it comes to deep learning, but I have extensive experience with traditional machine learning.
Expected behavior, for example: `3.00 ARP --becomes--> 3`, `highly flammable --becomes--> high`, `2.00 ATS --becomes--> 10`, `18 ATS --becomes--> 0.0018#200`, etc.
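For reference, this kind of mapping can be framed as a sequence-to-sequence (text-to-text) fine-tuning task. The sketch below only illustrates that framing, assuming a Hugging Face T5-style model ("t5-small") and a few of the pairs from the question; it is an untested illustration, not a recommendation of a specific model.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A few (input, target) pairs like the ones in the question.
pairs = [("3.00 ARP", "3"), ("highly flammable", "high"), ("2.00 ATS", "10")]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for src, tgt in pairs:
    inputs = tokenizer(src, return_tensors="pt")
    labels = tokenizer(tgt, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # seq2seq cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After real training on many such pairs, generation maps new strings:
model.eval()
out = model.generate(**tokenizer("3.00 ARP", return_tensors="pt"))
print(tokenizer.decode(out[0], skip_special_tokens=True))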

Related

How to classify with non-ordinal data

I'm new to the machine learning field.
I'm trying to classify 10 people by their phone call logs.
The phone call logs look like this:
UserId IsInboundCall Duration PhoneNumber(hashed)
1 false 23 1011112222
2 true 45 1033334444
Training an SVM from sklearn on 8700 logs of this kind gives an accuracy of 88%.
I have several questions about this result and about the proper way to use non-ordinal data (e.g. phone numbers):
I'm not sure about using a hashed phone number as a feature, but this multi-class classifier's accuracy is not bad. Is it just a coincidence?
How can non-ordinal data be used as a feature?
If this classifier had to handle more than 1000 classes (more than 1000 users), would an SVM still work in that case?
Any advice is helpful. Thanks.
1) Try the SVM without the phone number as a feature to get a sense of how much impact it has.
2) To handle non-ordinal (categorical) data, you can either map each category to a number or use a 1-of-K (one-hot) encoding. Say you added a Phone OS field with possible values {iOS, Android, Blackberry}: you can represent it as a single number 0, 1, 2, or as 3 indicator features (1,0,0), (0,1,0), (0,0,1); see the sketch after this list.
3) The SVM will still give good results as long as the data is approximately linearly separable. To achieve this you might need to add more features and map them into a different feature space (an RBF kernel is a good start).
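A minimal sketch of the 1-of-K idea from point 2, using sklearn's OneHotEncoder; the Phone OS column is the hypothetical example from the answer, not a field in the asker's data.
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Hypothetical categorical feature: the Phone OS example from point 2.
phone_os = np.array([["Android"], ["iOS"], ["Blackberry"], ["Android"]])
encoder = OneHotEncoder(handle_unknown="ignore")
one_hot = encoder.fit_transform(phone_os).toarray()
print(encoder.categories_)  # [array(['Android', 'Blackberry', 'iOS'], dtype=object)]
print(one_hot)              # each row is a (1,0,0)-style indicator vector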

Should I normalize my features before throwing them into RNN?

I am playing with some recurrent neural network demos.
I noticed that the scale of my data differs a lot from column to column, so I am considering doing some preprocessing before I feed data batches into my RNN. The close column is the target I want to predict in the future.
open high low volume price_change p_change ma5 ma10 \
0 20.64 20.64 20.37 163623.62 -0.08 -0.39 20.772 20.721
1 20.92 20.92 20.60 218505.95 -0.30 -1.43 20.780 20.718
2 21.00 21.15 20.72 269101.41 -0.08 -0.38 20.812 20.755
3 20.70 21.57 20.70 645855.38 0.32 1.55 20.782 20.788
4 20.60 20.70 20.20 458860.16 0.10 0.48 20.694 20.806
ma20 v_ma5 v_ma10 v_ma20 close
0 20.954 351189.30 388345.91 394078.37 20.56
1 20.990 373384.46 403747.59 411728.38 20.64
2 21.022 392464.55 405000.55 426124.42 20.94
3 21.054 445386.85 403945.59 473166.37 21.02
4 21.038 486615.13 378825.52 461835.35 20.70
My question is: is preprocessing the data with, say, StandardScaler from sklearn necessary in my case? And why?
(You are welcome to edit my question)
It will be beneficial to normalize your training data. Feeding features with widely different scales into your model will cause the network to weight the features unequally, which can lead to a false prioritization of some features over others in the learned representation.
Although the discussion of data preprocessing is contested, both on when exactly it is necessary and on how to correctly normalize the data for a given model and application domain, there is a general consensus in machine learning that running a mean-subtraction step followed by a general normalization step is helpful.
In mean subtraction, the mean of every individual feature is subtracted from the data, which from a geometric point of view can be interpreted as centering the data around the origin. This holds in every dimension.
Normalizing the data after the mean-subtraction step rescales each data dimension to approximately the same scale. Note that the different features will lose any prioritization over each other after this step, as mentioned above. If you have good reasons to think that the different scales in your features carry important information that the network needs in order to truly understand the underlying patterns in your dataset, then normalization will be harmful. A standard approach is to scale the inputs to have a mean of 0 and a variance of 1.
Further preprocessing operations may be helpful in specific cases, such as performing PCA or whitening on your data. Look into the excellent CS231n notes ("Setting up the data and the model") for further reference on these topics as well as for a more detailed explanation of the points above.
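As a minimal sketch of the zero-mean / unit-variance approach above, assuming the question's data lives in a pandas DataFrame (the file name "stock.csv" is purely illustrative):
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("stock.csv")  # hypothetical file holding open, high, low, ..., close
feature_cols = [c for c in df.columns if c != "close"]

# Fit the scaler on the training split only, then reuse it for validation/test
# batches so the RNN always sees inputs on the same scale.
scaler = StandardScaler()
X_train = scaler.fit_transform(df[feature_cols].values)
y_train = df["close"].values
print(X_train.mean(axis=0).round(3), X_train.std(axis=0).round(3))  # ~0 and ~1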
Definitely yes. Most neural networks work best with data between 0 and 1, or -1 and 1 (depending on the output function). Also, when some inputs are larger than others, the network will "think" they are more important. This can make learning very slow, since the network must first lower the weights on those inputs.
I found this: https://arxiv.org/abs/1510.01378
If you normalize, it may improve convergence, so you will get shorter training times.
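A small sketch of the 0-to-1 / -1-to-1 scaling mentioned in the answer above, using sklearn's MinMaxScaler on a couple of made-up rows (not the asker's full dataset):
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X = np.array([[20.64, 163623.62],
              [20.92, 218505.95],
              [21.00, 269101.41]])
scaler = MinMaxScaler(feature_range=(-1, 1))  # or (0, 1), depending on the output activation
print(scaler.fit_transform(X))  # every column now spans [-1, 1]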

LibSVM - Multi class classification with unbalanced data

I tried to play with libsvm and 3D descriptors in order to perform object recognition. So far I have 7 categories of objects, and for each category I have its number of objects (and its percentage):
Category 1. 492 (14%)
Category 2. 574 (16%)
Category 3. 738 (21%)
Category 4. 164 (5%)
Category 5. 369 (10%)
Category 6. 123 (3%)
Category7. 1025 (30%)
So I have in total 3585 objects.
I have followed the practical guide of libsvm.
Here it is as a reminder:
A. Scaling the training and the testing
B. Cross validation
C. Training
D. Testing
I separated my data into training and testing.
By doing 5-fold cross-validation, I was able to determine good C and gamma values.
However, I obtained poor results (the cross-validation accuracy is about 30-40% and my test accuracy is about 50%).
Then I thought about my data and saw that some of it is unbalanced (categories 4 and 6, for example). I discovered that libSVM has a class-weight option, and that's why I would now like to set up good weights.
So far I'm doing this:
svm-train -c cValue -g gValue -w1 1 -w2 1 -w3 1 -w4 2 -w5 1 -w6 2 -w7 1
However, the results are the same. I'm sure this is not the right way to do it, which is why I'm asking for help.
I saw some topics on the subject, but they were related to binary classification rather than multiclass classification.
I know that libSVM does "one against one" (so a binary classifier under the hood), but I don't know how to handle that when I have multiple classes.
Could you please help me ?
Thank you in advance for your help.
I've met the same problem before. I also tried giving the classes different weights, which didn't work.
I recommend training with a subset of the dataset.
Try to use approximately equal numbers of samples from each class. You can use all of the category 4 and 6 samples, and then pick about 150 samples from each of the other categories.
I used this method and the accuracy did improve. Hope this will help you!
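A minimal sketch of that balanced-subsampling idea in Python with numpy; the arrays X and y and the cap of 150 per class are illustrative assumptions rather than the asker's actual pipeline:
import numpy as np

def balanced_subsample(X, y, per_class=150, seed=0):
    # Keep at most `per_class` samples per label; small classes (e.g. 4 and 6)
    # are kept in full because they already fall below the cap.
    rng = np.random.default_rng(seed)
    keep = []
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        if len(idx) > per_class:
            idx = rng.choice(idx, size=per_class, replace=False)
        keep.extend(idx.tolist())
    keep = np.array(keep)
    return X[keep], y[keep]

# X_sub, y_sub = balanced_subsample(X, y)  # then scale and run svm-train on the subset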

inconsistency between libsvm and scikit-learn.svc results

I have a project based on the SVM algorithm implemented by libsvm. Recently I decided to try several other classification algorithms, which is where scikit-learn comes into the picture.
Connecting to scikit-learn was pretty straightforward: it supports the libsvm format via the load_svmlight_file routine, and its SVM implementation is based on the same libsvm.
When everything was done, I decided to check the consistency of the results by running libsvm directly and via scikit-learn, and the results were different. Among the 18 measurements in the learning curves, 7 were different, and the differences are located at the small steps of the learning curve. The libsvm results seem much more stable, while the scikit-learn results show some drastic fluctuations.
The classifiers have exactly the same parameters of course.
I tried to check which version of libsvm the scikit-learn implementation uses, but I didn't find it; the only thing I found was a libsvm.so file.
Currently I am using libsvm 3.21 version, and scikit-learn 0.17.1 version.
I would appreciate any help in addressing this issue.
size libsvm scikit-learn
1 0.1336239435355727 0.1336239435355727
2 0.08699516468193455 0.08699516468193455
3 0.32928301642777424 0.2117238289550198 #different
4 0.2835688734876902 0.2835688734876902
5 0.27846766962743097 0.26651875338163966 #different
6 0.2853854654662907 0.18898048915599963 #different
7 0.28196058132165136 0.28196058132165136
8 0.31473956032575623 0.1958710201604552 #different
9 0.33588303670653136 0.2101641630182972 #different
10 0.4075242509025311 0.2997807499800962 #different
15 0.4391771087975972 0.4391771087975972
20 0.3837789445609818 0.2713167833345173 #different
25 0.4252154334940311 0.4252154334940311
30 0.4256407777477492 0.4256407777477492
35 0.45314944605858387 0.45314944605858387
40 0.4278633233755064 0.4278633233755064
45 0.46174762022239796 0.46174762022239796
50 0.45370452524846866 0.45370452524846866
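For reference, a minimal sketch of the scikit-learn side of the comparison described above, assuming a libsvm-format training file ("train.libsvm" is a placeholder) and RBF-kernel hyperparameters matched to the svm-train run; the actual parameter values used by the asker are not shown in the question:
from sklearn.datasets import load_svmlight_file
from sklearn.svm import SVC

# Load the same libsvm-format data used with the command-line svm-train.
X, y = load_svmlight_file("train.libsvm")  # placeholder file name

# Match the chosen svm-train flags explicitly (e.g. -t 2 -c C -g gamma).
clf = SVC(C=1.0, kernel="rbf", gamma=0.5, tol=1e-3, shrinking=True)
clf.fit(X, y)
print(clf.score(X, y))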

why normalizing feature values doesn't change the training output much?

I have 3113 training examples, over a dense feature vector of size 78. The magnitudes of the features differ a lot: some are around 20, some around 200K. For example, here is one of the training examples, in vowpal wabbit input format.
0.050000 1 '2006-07-10_00:00:00_0.050000| F0:9.670000 F1:0.130000 F2:0.320000 F3:0.570000 F4:9.837000 F5:9.593000 F6:9.238150 F7:9.646667 F8:9.631333 F9:8.338904 F10:9.748000 F11:10.227667 F12:10.253667 F13:9.800000 F14:0.010000 F15:0.030000 F16:-0.270000 F17:10.015000 F18:9.726000 F19:9.367100 F20:9.800000 F21:9.792667 F22:8.457452 F23:9.972000 F24:10.394833 F25:10.412667 F26:9.600000 F27:0.090000 F28:0.230000 F29:0.370000 F30:9.733000 F31:9.413000 F32:9.095150 F33:9.586667 F34:9.466000 F35:8.216658 F36:9.682000 F37:10.048333 F38:10.072000 F39:9.780000 F40:0.020000 F41:-0.060000 F42:-0.560000 F43:9.898000 F44:9.537500 F45:9.213700 F46:9.740000 F47:9.628000 F48:8.327233 F49:9.924000 F50:10.216333 F51:10.226667 F52:127925000.000000 F53:-15198000.000000 F54:-72286000.000000 F55:-196161000.000000 F56:143342800.000000 F57:148948500.000000 F58:118894335.000000 F59:119027666.666667 F60:181170133.333333 F61:89209167.123288 F62:141400600.000000 F63:241658716.666667 F64:199031688.888889 F65:132549.000000 F66:-16597.000000 F67:-77416.000000 F68:-205999.000000 F69:144690.000000 F70:155022.850000 F71:122618.450000 F72:123340.666667 F73:187013.300000 F74:99751.769863 F75:144013.200000 F76:237918.433333 F77:195173.377778
The training result was not good, so I thought I would normalize the features to put them on the same scale. I calculated the mean and standard deviation of each feature across all examples, then applied newValue = (oldValue - mean) / stddev, so that each feature's new mean is 0 and its stddev is 1. For the same example, here are the feature values after normalization:
0.050000 1 '2006-07-10_00:00:00_0.050000| F0:-0.660690 F1:0.226462 F2:0.383638 F3:0.398393 F4:-0.644898 F5:-0.670712 F6:-0.758233 F7:-0.663447 F8:-0.667865 F9:-0.960165 F10:-0.653406 F11:-0.610559 F12:-0.612965 F13:-0.659234 F14:0.027834 F15:0.038049 F16:-0.201668 F17:-0.638971 F18:-0.668556 F19:-0.754856 F20:-0.659535 F21:-0.663001 F22:-0.953793 F23:-0.642736 F24:-0.606725 F25:-0.609946 F26:-0.657141 F27:0.173106 F28:0.310076 F29:0.295814 F30:-0.644357 F31:-0.678860 F32:-0.764422 F33:-0.658869 F34:-0.674367 F35:-0.968679 F36:-0.649145 F37:-0.616868 F38:-0.619564 F39:-0.649498 F40:0.041261 F41:-0.066987 F42:-0.355693 F43:-0.638604 F44:-0.676379 F45:-0.761250 F46:-0.653962 F47:-0.668194 F48:-0.962591 F49:-0.635441 F50:-0.611600 F51:-0.615670 F52:-0.593324 F53:-0.030322 F54:-0.095290 F55:-0.139602 F56:-0.652741 F57:-0.675629 F58:-0.851058 F59:-0.642028 F60:-0.648002 F61:-0.952896 F62:-0.629172 F63:-0.592340 F64:-0.682273 F65:-0.470121 F66:-0.045396 F67:-0.128265 F68:-0.185295 F69:-0.510251 F70:-0.515335 F71:-0.687727 F72:-0.512749 F73:-0.471032 F74:-0.789335 F75:-0.491188 F76:-0.400105 F77:-0.505242
However, this yields basically the same testing result (if not exactly the same, since I shuffle the examples before each training).
I'm wondering why there is no change in the result.
Here are my training and testing commands:
rm -f cache
cat input.feat | vw -f model --passes 20 --cache_file cache
cat input.feat | vw -i model -t -p predictions --invert_hash readable_model
(Yes, I'm testing on the training data for now, since I have only very few examples to train on.)
More context:
Some of the features are "tier 2": they were derived by manipulating or taking cross products of "tier 1" features (e.g. moving averages, 1st- to 3rd-order derivatives, etc.). If I normalize the tier 1 features before calculating the tier 2 features, it actually improves the model significantly.
So I'm puzzled: why does normalizing the tier 1 features (before generating the tier 2 features) help a lot, while normalizing all features (after generating the tier 2 features) doesn't help at all?
BTW, since I'm training a regressor, I'm using SSE as the metric to judge the quality of the model.
vw normalizes feature values for scale as it goes, by default.
This is part of the online algorithm. It is done gradually during runtime.
In fact it does more than that: vw's enhanced SGD algorithm also keeps separate learning rates (per feature), so that rarer features' learning rates don't decay as fast as common ones (--adaptive). Finally, there's an importance-aware update, controlled by a third option (--invariant).
The 3 separate SGD enhancement options (which are all turned on by default) are:
--adaptive
--invariant
--normalized
The last option is the one that adjusts values for scale (discounting large values vs. small ones). You may disable all these SGD enhancements by using the option --sgd. You may also partially enable any subset by explicitly specifying the options you want.
All in all you have 2^3 = 8 SGD option combinations you can use.
A possible reason is that whatever training algorithm you used to get the result already did the normalization for you! In fact, many algorithms perform normalization before working on the data. Hope this helps you :)
