I have a question on how to deal with some interesting data.
I currently have some data (the counts are real, but the situation is fake) where we predict the number of t-shirts that people will purchase online today. We know quite a bit about everyone for our feature attributes, and these change from day to day. We also know how many t-shirts everyone purchased on previous days.
What I want is an algorithm that can churn out a continuous variable that is a ranking or “score” of the number of t-shirts the person is going to purchase today. My end goal is that if I can get this score attached to each person, I can sort them according to the score and use them in a specific UI. Currently I’ve been using random forest regression with scikit-learn, where my targets are yesterday’s count of t-shirt purchases by each person. This has worked out pretty well, except that my data is mildly difficult in that there are a lot of people who purchase 0 t-shirts. This is a problem because the random forest gives me a lot of predictions of 0, and I cannot sort those effectively. I get why this is happening, but I’m not sure of the best way to get around it.
What I want is a non-zero score (even if it’s a very small number close to 0) that tells me more about the features and the predicted class. I feel that some of my features must be able to tell me something and give me a better prediction than 0.
I think that the inherent problem is using a random forest regressor as the algorithm. Each tree gets a vote; however, there are so many zeros that, for many samples, all of the trees vote for 0. I would like another algorithm to try, but I don’t know which one would work best. Currently I’m training on the whole dataset and using the out-of-bag estimate that scikit-learn provides.
Here are the counts of the target classes (using python’s Counter([target classes])). It is set up as {class_value: count_of_that_value_in_the_target_list}:
{0: 3560426, 1: 121256, 2: 10582, 3: 1029, 4: 412, 5: 88, 6: 66, 7: 35, 8: 21, 9: 17, 10: 17, 11: 10, 12: 2, 13: 2, 15: 2, 17: 1, 18: 1, 21: 2, 25: 1, 52: 1}
I have tried some things to manipulate the training data, but I’m really guessing at things to do.
One thing I tried was scaling the number of zeros in the training set down to an amount more in line with the other classes. So instead of passing the algorithm 3.5 million 0-class rows, I scaled it down to 250,000, so my training set looked like {0: 250000, 1: 121256, 2: 10582, 3: 1029, … }. This has a drastic effect on the number of 0s coming back from the algorithm: I’ve gone from it guessing 99% of the data as 0 to only about 50%. However, I don’t know if this is a valid thing to do or if it even makes sense.
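A rough sketch of this kind of downsampling, assuming the rows live in a pandas DataFrame df with the purchase count in a column named target (both names are hypothetical):
import pandas as pd
zeros = df[df['target'] == 0].sample(n=250000, random_state=42)       # keep only 250k zero rows
nonzeros = df[df['target'] != 0]                                       # keep all non-zero rows
train = pd.concat([zeros, nonzeros]).sample(frac=1, random_state=42)   # shuffle the result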
Other things I’ve tried include increasing the forest size, which doesn’t have much of an effect; telling the random forest to use only sqrt(n_features) features for each tree, which has had a pretty good effect; and using the out-of-bag estimate, which also seems to give good results.
To summarize, I have a set of data where there is a disproportionate amount of data toward one class. I would like to have some way to produce a continuous value that is a “score” for each value in the predicted dataset so I may sort them.
Thank you for your help!
This is an unbalanced class problem. One thing you could do is over/undersampling. Undersampling means that you randomly delete instances from the majority class; oversampling means that you sample with replacement from the minority class. Or you could use a combination of both. One thing to try is SMOTE [1], an oversampling algorithm that, instead of just resampling existing instances from the minority class, creates synthetic instances, which helps avoid overfitting and in theory generalizes better.
[1] Chawla, Nitesh V., et al. "SMOTE: Synthetic Minority Over-sampling Technique." Journal of Artificial Intelligence Research 16 (2002): 321–357.
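A minimal sketch of SMOTE with the imbalanced-learn package (not part of scikit-learn itself), assuming X_train and y_train hold your features and the purchase counts treated as classes; note that SMOTE interpolates between neighbours, so classes with only a handful of examples may need to be merged or dropped first:
from collections import Counter
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=42)
X_resampled, y_resampled = sm.fit_resample(X_train, y_train)
print(Counter(y_resampled))  # minority classes are now synthetically oversampled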
Related
Let's say I want to calculate which courses a final-year student will take and which grades they will receive in those courses. We have data on previous students' courses and grades for each year (not just the final year) to train with. We also have the grades and courses from previous years for the students we want to estimate results for. I want to use a recurrent neural network with long short-term memory (LSTM) to solve this problem. (I know this problem can be solved by regression, but I want to use a neural network specifically, to see if the problem can be properly solved with one.)
The way I want to set up the output (label) space is to have a feature for each possible course a student can take, with a value between 0 and 1 in each entry describing whether the student will attend the class (if not, the entry for that course is 0) and, if so, what their mark will be (i.e., if the student attends class A and gets 57%, then the label for class A is 0.57).
Am I setting the output space properly?
If yes, what optimization and activation functions should I use?
If no, how can I re-shape my output space to get good predictions?
If I understood you correctly, you want the network to be given the history of a student and then output one entry for each course. This entry is supposed to simultaneously signify whether the student will take the course (0 for not taking it, 1 for taking it) and also give the expected grade. The interpretation of the output for a single course would then be like this:
0.0 -> won't take the course
0.1 -> will take the course and get 10% of points
0.5 -> will take the course and get half of points
1.0 -> will take the course and get full points
If this is indeed your plan, I would definitely advise rethinking it.
Some obviously realistic cases do not fit into this pattern. For example, how would you represent that an A+ student is "unlikely" to take a course? Should the network output 0.9999, because they are very likely to get the maximum number of points if they take the course, OR should it output 0.0001, because the student is very unlikely to take the course at all?
Instead, you should output two values in [0, 1] for each student and each course:
First value in [0, 1] gives the probability that the student will participate in the course
Second value in [0, 1] gives the expected relative number of points.
As the loss, I'd propose something like binary cross-entropy on the first value and squared error on the second, and then combine all the losses using some L^p metric of your choice (e.g., simply add everything up for p = 1, or square and add for p = 2).
A few examples:
(0.01, 1.0) : very unlikely to participate, would probably get 100%
(0.5, 0.8): 50%-50% whether participates or not, would get 80% of points
(0.999, 0.15): will participate, but probably pretty much fail
The quantity that you wanted to output seems to be something like the product of these two, which is a bit difficult to interpret.
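A minimal Keras sketch of this two-output idea (the layer sizes and input dimension are made up, and the two losses are simply summed, i.e. p = 1):
import tensorflow as tf
from tensorflow.keras import layers, Model
n_features, n_courses = 64, 40  # hypothetical dimensions
inputs = layers.Input(shape=(n_features,))
h = layers.Dense(128, activation='relu')(inputs)
participation = layers.Dense(n_courses, activation='sigmoid', name='participation')(h)
grade = layers.Dense(n_courses, activation='sigmoid', name='grade')(h)
model = Model(inputs, [participation, grade])
model.compile(optimizer='adam',
              loss={'participation': 'binary_crossentropy', 'grade': 'mse'},
              loss_weights={'participation': 1.0, 'grade': 1.0})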
There is more than one way to solve this problem. Andrey's answer gives one good approach.
I would like to suggest simplifying the problem by bucketing grades into categories and adding an additional category for "did not take", for both input and output.
This turns the task into a classification problem only, and solves the issue of trying to differentiate between receiving a low grade and not taking the course in your output.
For example your training set might have m students, n possible classes, and six possible results: ['A', 'B', 'C', 'D', 'F', 'did_not_take'].
And you might choose the following architecture:
Input -> Dense Layer -> RELU -> Dense Layer -> RELU -> Dense Layer -> Softmax
Your input shape is (m, n, 6) and your output shape could be (m, n*6), where you apply softmax for every group of 6 outputs (corresponding to one class) and sum into a single loss value. This is an example of multiclass, multilabel classification.
I would start by trying 2n neurons in each hidden layer.
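A possible Keras sketch of that architecture (the number of courses is made up; a softmax is applied over the 6 result buckets of each course and the per-course cross-entropies are combined into one loss):
import tensorflow as tf
from tensorflow.keras import layers, models
n_courses, n_buckets = 40, 6  # hypothetical number of courses; six result buckets
clf = models.Sequential([
    layers.Input(shape=(n_courses, n_buckets)),
    layers.Flatten(),
    layers.Dense(2 * n_courses, activation='relu'),
    layers.Dense(2 * n_courses, activation='relu'),
    layers.Dense(n_courses * n_buckets),
    layers.Reshape((n_courses, n_buckets)),
    layers.Softmax(axis=-1),  # softmax over the 6 buckets of each course
])
clf.compile(optimizer='adam', loss='categorical_crossentropy')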
If you really want a continuous output for grades, however, then I recommend using separate classification and regression networks. This way you don't have to combine classification and regression losses into one number, which can get messy due to scaling issues.
You can keep the grade buckets for the input data only, so the two networks take the same input, but for the grade-regression network the last layer can be n sigmoid units with log loss. These will output numbers between 0 and 1, corresponding to the predicted grade for each class.
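A sketch of that separate grade-regression network under the same hypothetical dimensions as above, with n sigmoid outputs trained with log loss (binary cross-entropy):
from tensorflow.keras import layers, models
n_courses, n_buckets = 40, 6  # hypothetical, as above
reg = models.Sequential([
    layers.Input(shape=(n_courses, n_buckets)),     # same bucketed input as the classifier
    layers.Flatten(),
    layers.Dense(2 * n_courses, activation='relu'),
    layers.Dense(n_courses, activation='sigmoid'),  # one predicted grade in [0, 1] per course
])
reg.compile(optimizer='adam', loss='binary_crossentropy')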
If you want to go even further, consider using an architecture that takes into account the order in which students took previous classes. For example, if a student took French I the previous year, it is more likely they will take French II this year than if they took French their freshman year and did not continue with it after that.
I have a dataset (based on the Million Song Dataset) on which I need to do genre classification. The following is the distribution of genre classes in the dataset.
    Genre        Count    %age
1.  Rock        115104   39.94
2.  Pop          47534   16.50
3.  Electronic   24313    8.44
4.  Jazz         16465    5.71
5.  Rap          15347    5.33
6.  RnB          13769    4.78
7.  Country      13509    4.69
8.  Reggae        8739    3.03
9.  Blues         7075    2.46
10. Latin         7042    2.44
11. Metal         6257    2.17
12. World         4624    1.60
13. Folk          3661    1.27
14. Punk          3479    1.21
15. New Age       1248    0.43
Would you call this data unbalanced? I've tried reading around, but found that people describe datasets as unbalanced when one of the classes is 99% of the dataset and it's a binary classification problem. I'm not sure whether the above set falls into this category. Please help: I'm not able to get the classification right, and being a newbie I can't decide whether it's the data or my own naivety. This is one of the hypotheses I have and need to validate.
In general there is no strict definition of an imbalanced dataset, but if the smallest class is 10x smaller than the largest one, then calling it imbalanced is reasonable.
In your case, the smallest class is actually about 100x smaller than the biggest one, so it even maps onto the "99-1" binary case you mention: if you only ask about differentiating between New Age and Rock, you end up with a 99-1 imbalance. So you can expect the problems typical of imbalanced classification to appear in your project.
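A quick sanity check of that ratio, using the counts from the question:
counts = {'Rock': 115104, 'Pop': 47534, 'Electronic': 24313, 'Jazz': 16465,
          'Rap': 15347, 'RnB': 13769, 'Country': 13509, 'Reggae': 8739,
          'Blues': 7075, 'Latin': 7042, 'Metal': 6257, 'World': 4624,
          'Folk': 3661, 'Punk': 3479, 'New Age': 1248}
print(max(counts.values()) / min(counts.values()))  # ~92, i.e. roughly two orders of magnitude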
This may sound like a very naive question. I checked Google and many YouTube videos for beginners, and pretty much all of them treat data weighting as something obvious. I still do not understand why data is being weighted.
Let's assume I have four features:
a b c d
1 2 1 4
If I pass each value to the sigmoid function, I'll already receive a value between -1 and 1.
I really don't understand why the data needs, or why it is recommended, to be weighted first. If you could explain this to me in a very simple manner, I would appreciate it a lot.
I think you are not talking about weighting data but weighting features.
A feature is a column in your table; by "data" I would understand rows.
The confusion probably comes from the fact that weighting rows is also sometimes sensible, e.g., if you want to penalize misclassification of the positive class more.
Why do we need to weight features?
I assume you are talking about a model like
prediction = sigmoid(sum_i weight_i * feature_i) > threshold
Let's assume you want to predict whether a person is overweight based on body weight, height, and age.
In R we can generate a sample dataset as
height = rnorm(100, 1.80, 0.1)  # normally distributed: mean 1.8, sd 0.1
weight = rnorm(100, 70, 10)     # mean 70, sd 10
age = runif(100, 0, 100)        # uniform between 0 and 100
ow = weight / (height**2) > 25  # overweight if BMI > 25
data = data.frame(height, weight, age, ow)
If we now plot the data, you can see that (at least in my sample) it can be separated by a straight line in the weight/height plane; age, however, does not provide any value. If we weight the features prior to the sum/sigmoid, we can put all factors into relation.
Furthermore, as the plot also shows, weight and height have very different domains, so they need to be put into relation for the separating line to have the right slope: the values of weight are an order of magnitude larger than those of height.
I am playing with some demos of recurrent neural networks.
I noticed that the scale of my data differs a lot between columns, so I am considering doing some preprocessing before I feed data batches into my RNN. The close column is the target I want to predict in the future.
open high low volume price_change p_change ma5 ma10 \
0 20.64 20.64 20.37 163623.62 -0.08 -0.39 20.772 20.721
1 20.92 20.92 20.60 218505.95 -0.30 -1.43 20.780 20.718
2 21.00 21.15 20.72 269101.41 -0.08 -0.38 20.812 20.755
3 20.70 21.57 20.70 645855.38 0.32 1.55 20.782 20.788
4 20.60 20.70 20.20 458860.16 0.10 0.48 20.694 20.806
ma20 v_ma5 v_ma10 v_ma20 close
0 20.954 351189.30 388345.91 394078.37 20.56
1 20.990 373384.46 403747.59 411728.38 20.64
2 21.022 392464.55 405000.55 426124.42 20.94
3 21.054 445386.85 403945.59 473166.37 21.02
4 21.038 486615.13 378825.52 461835.35 20.70
My question is: is preprocessing the data with, say, StandardScaler from sklearn necessary in my case? And why?
(You are welcome to edit my question)
It will be beneficial to normalize your training data. Feeding features with widely different scales into your model will cause the network to weight the features unequally, which can lead to some features being falsely prioritized over others in the learned representation.
Although the discussion of data preprocessing is somewhat contentious, both about when exactly it is necessary and about how to correctly normalize the data for a given model and application domain, there is a general consensus in machine learning that mean subtraction and normalization are helpful preprocessing steps.
In mean subtraction, the mean of every individual feature is subtracted from the data, which geometrically can be interpreted as centering the data around the origin. This holds for every dimension.
Normalizing the data after the mean-subtraction step brings every data dimension to approximately the same scale. Note that, as mentioned above, the different features lose any prioritization over each other after this step. If you have good reason to think that the different scales of your features carry important information that the network needs in order to truly understand the underlying patterns in your dataset, then normalization would be harmful. A standard approach is to scale the inputs to have a mean of 0 and a variance of 1.
Further preprocessing operations, such as PCA or whitening, may be helpful in specific cases. Have a look at the excellent CS231n notes ("Setting up the data and the model") for further reference on these topics, as well as for a more detailed explanation of the points above.
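A minimal sketch with scikit-learn's StandardScaler, fitting the scaling statistics on the training split only and reusing them for the test data (train_df and test_df are hypothetical splits of the frame shown in the question):
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
train_scaled = scaler.fit_transform(train_df)  # per-column zero mean, unit variance
test_scaled = scaler.transform(test_df)        # reuse the training mean/variance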
Definitely yes. Most neural networks work best with data between 0 and 1 or between -1 and 1 (depending on the output function). Also, when some inputs are larger than others, the network will "think" they are more important. This can make learning very slow, because the network must first lower the weights on those inputs.
I found this https://arxiv.org/abs/1510.01378
If you normalize, it may improve convergence, so you will get shorter training times.
I am using LSTM neural networks (stateful) for time series prediction.
I'm hoping that the stateful LSTM can capture the hidden patterns and make a satisfactory prediction (the physical law that causes the variation of the time series is not clear).
I have a time series X with a length of 1500 (actual observational data), and my goal is to predict the next 100 values.
I suppose predicting the next 10 values will be more promising than predicting the next 100 (is that right?).
So, I prepare the training data like this (always using 100 values to predict the next 10; x_n denotes the n-th element in X):
shape of trainX: [140, 100, 1]
shape of trainY: [140, 10, 1]
---
0: [x_0, x_1, ..., x_99] -> [x_100, x_101, ..., x_109]
1: [x_10, x_11, ..., x_109] -> [x_110, x_111, ..., x_119]
2: [x_20, x_21, ..., x_119] -> [x_120, x_121, ..., x_129]
...
139: [x_1390, x_1391, ..., x_1489] -> [x_1490, x_1491, ..., x_1499]
---
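A minimal NumPy sketch of how these windows can be built (assuming X is stored as a 1-D NumPy array of length 1500):
import numpy as np
window, horizon, stride = 100, 10, 10
n_samples = (len(X) - window - horizon) // stride + 1  # 140 samples for len(X) == 1500
trainX = np.stack([X[i*stride : i*stride + window] for i in range(n_samples)])[..., None]
trainY = np.stack([X[i*stride + window : i*stride + window + horizon] for i in range(n_samples)])[..., None]
# trainX.shape == (140, 100, 1), trainY.shape == (140, 10, 1)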
After the training, I want to use the model to predict the next 10 values [x_1500 - x_1509] with [x_1400 - x_1499], and then predict the next 10 values [x_1510 - x_1519] with [x_1410 - x_1509].
Is this the right way?
After a lot of reading of documentation and examples, I can train a model and make predictions, but the results are not satisfactory.
To validate the method, I pretend that the last 100 values (x_1400 - x_1499) are unknown, remove them from trainX and trainY, train a model, and predict them. Finally, I compare the predicted values with the observed ones.
Any suggestions or comments will be appreciated.
The time series looks like this: [plot omitted]
Your question is really complex. Before I try to answer it, I'll share my doubts about whether it is sensible to use an LSTM for your task. You want to use a really advanced model (LSTMs are capable of learning really complex patterns) on a time series that seems to be relatively easy. Moreover, you have a really small amount of data. To be honest, I would try simpler methods first (like ARMA or ARIMA).
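For example, a minimal ARIMA baseline with statsmodels could look like this (the order (5, 1, 0) is only a guess and should be tuned; X is the series from the question):
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
train, test = X[:1400], X[1400:]                  # hold out the last 100 observations
fit = ARIMA(train, order=(5, 1, 0)).fit()
forecast = fit.forecast(steps=len(test))          # predict the held-out 100 values
print(np.sqrt(np.mean((forecast - test) ** 2)))   # RMSE against the observations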
To answer your question: your approach seems reasonable. Other reasonable options are predicting all 100 steps at once, or predicting e.g. 50 steps twice. With 10-step predictions you might run into error accumulation, but it can still be a good method.
As I mentioned earlier, I would rather try a simpler method for this task, but if you really want to use an LSTM you can tackle the problem in the following way:
Define hyperparameters such as the number of steps ahead you want to predict and the size of the input fed to the network.
Use e.g. grid search to find the best values of these hyperparameters, evaluating each setup with k-fold cross-validation.
Retrain the final model using the best hyperparameter setup.
You have a relatively small amount of data, so you can easily search for the best hyperparameter values; a rough skeleton of that search is sketched below. This will also show you whether your approach is good or not: simply check the results of the best configuration.
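A rough skeleton of that search (train_and_score is a hypothetical stand-in for building the windows, fitting your LSTM on the training folds, and returning a validation error):
import numpy as np
from sklearn.model_selection import KFold, ParameterGrid
param_grid = {'window': [50, 100, 200], 'horizon': [5, 10], 'lstm_units': [16, 32]}
def train_and_score(params, train_idx, val_idx):
    # hypothetical placeholder: build windows with these params, fit the LSTM, return the val error
    return np.random.rand()
kf = KFold(n_splits=5)
sample_ids = np.arange(140)  # indices of the 140 windows from the question
results = []
for params in ParameterGrid(param_grid):
    scores = [train_and_score(params, tr, va) for tr, va in kf.split(sample_ids)]
    results.append((np.mean(scores), params))
best_error, best_params = min(results, key=lambda r: r[0])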