I am scaling data for tree-based classifiers, which I know is not necessary (but also shouldn't hurt). Most of my features are quantitative, but I also have "day_of_week", which I have left as 0-6 (not dummied). Is it proper to scale this feature? Or should I hold it out of scaling and add it back in after scaling is done? Or should it be in dummy format? Or should I skip scaling completely?
Thanks for any help!
Tree-based approaches are insensitive to the scale of the data: splits depend only on the ordering of values within each feature, so there is no need to scale your data.
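If you want to convince yourself, here is a minimal sketch (assuming scikit-learn, with the Iris data as a stand-in): the same tree fit on raw and standardized features should make identical predictions.

```python
# A quick check that a decision tree is unaffected by feature scaling.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_scaled = DecisionTreeClassifier(random_state=0).fit(X_scaled, y)

# Splits are threshold-based, so a monotone per-feature rescaling leaves
# the learned structure (and hence the predictions) unchanged.
print((tree_raw.predict(X) == tree_scaled.predict(X_scaled)).all())  # expect: True
```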
I know that feature selection helps me remove features that contribute little. I know that PCA combines possibly correlated features, reducing the dimensionality. I know that normalization transforms features to the same scale.
But is there a recommended order for these three steps? Logically I would think that I should weed out bad features with feature selection first, then normalize them, and finally use PCA to reduce the dimensions and make the features as independent of each other as possible.
Is this logic correct?
Bonus question: is there anything else that should be done (preprocessing or transformation) to the features before feeding them into the estimator?
If I were doing a classifier of some sort, I would personally use this order:
1. Normalization
2. PCA
3. Feature selection
Normalization: You would do normalization first to get the data into reasonable bounds. If you have data (x, y) where x ranges from -1000 to +1000 and y ranges from -1 to +1, any distance metric would automatically treat a change in y as less significant than a change in x, and we don't yet know whether that is the case. So we want to normalize our data first.
PCA: Uses the eigenvalue decomposition of the data's covariance matrix to find an orthogonal basis that describes the variance in the data points. If you have 4 features, PCA can show you that only 2 of the resulting directions really differentiate the data points, which brings us to the last step.
Feature selection: Once you have a coordinate space that better describes your data, you can select which features are salient. Typically you'd keep the largest eigenvalues (EVs) and their corresponding eigenvectors from PCA for your representation. Since larger EVs mean there is more variance in that direction of the data, you get more granularity in isolating features. This is a good method for reducing the dimensionality of your problem.
Of course this could change from problem to problem, but it is simply a generic guide.
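As a minimal sketch, that order maps directly onto a scikit-learn pipeline (Iris is just a stand-in dataset, and the component/feature counts are arbitrary choices):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ("normalize", StandardScaler()),          # 1. zero mean, unit variance
    ("pca", PCA(n_components=3)),             # 2. orthogonal basis, sorted by variance
    ("select", SelectKBest(f_classif, k=2)),  # 3. keep the most salient components
    ("clf", LogisticRegression()),
])
pipe.fit(X, y)
print(pipe.score(X, y))
```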
Generally speaking, normalization is needed before PCA.
The tricky part is where feature selection goes, and that depends on the feature-selection method.
A simple form of feature selection is to check whether the variance or standard deviation of a feature is small. If these values are relatively small, the feature probably won't help the classifier. But if you normalize before doing this, the standard deviations and variances shrink (generally to less than 1), leaving very small differences in std or var between the different features. If you use zero-mean, unit-variance normalization, the mean of every feature becomes 0 and its std becomes 1, so the criterion tells you nothing. In that case it is bad to do normalization before feature selection.
Feature selection is flexible, and there are many ways to select features; the order should be chosen according to the actual situation.
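To illustrate the variance point above, here is a minimal sketch (synthetic data, arbitrary threshold): the variance criterion is informative on the raw features but degenerate after standardization.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0, 10.0, 200),   # high-variance feature
    rng.normal(0, 0.01, 200),   # near-constant feature
])

print(X.var(axis=0))                                  # wildly different variances
print(StandardScaler().fit_transform(X).var(axis=0))  # both 1.0 after scaling

# So apply the variance filter before normalizing:
X_kept = VarianceThreshold(threshold=0.5).fit_transform(X)
print(X_kept.shape)  # (200, 1): the near-constant column is dropped
```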
Good answers here. One point needs to be highlighted: PCA is a form of dimensionality reduction. It will find a lower-dimensional linear subspace that approximates the data well. When the axes of this subspace align with the features one started with, it will lead to interpretable feature selection as well. Otherwise, feature selection after PCA will lead to features that are linear combinations of the original set, and these are difficult to interpret in terms of the original features.
I'm a newbie to machine learning, and I have the following question. Suppose I have implemented a classification algorithm on some data and identified the best combination of features for it. If someday I get data from the same source that lacks the target feature from the previous classification task, can I use that best combination of features directly for a clustering task? (I know I can use the model I trained to predict the target of the data; I just want to know whether the best combination of features is the same between classification and clustering algorithms.)
I have searched websites and every resource I know, but I can't find the answer to my question. Could somebody tell me, or just give me a link? Thanks!
I would say yes, provided the nature of the target is the same in both cases. What we want ideally is a tractable number of features which are orthogonal (perpendicular) to each other in N-dimensional space, so that each can contribute maximally to the prediction.
Take a concrete example: T-shirts, and whether they are size Large or Small. You are given data which shows that in the manufacturing process there is a bit of material shrinkage, which means the T-shirts come out a bit irregular, and the shrinkage varies between the height and width, but not by much. The data shows height, width and colour, and you want to decide whether a shirt belongs in the large group or the small. You find that the height and width are important but the colour is not, so you decide to go with height and width as your classification features.
The important point is that these two features have been identified as the most orthogonal to each other, which should apply in a classification or clustering context. The number of clusters remains a factor to be examined.
It may not be good enough.
For example, a decision tree or random forest can be analyzed to get the importance of features. But this will not tell you what kind of preprocessing (in particular scaling and weighting) is necessary to be able to cluster the data (categorical features are difficult to use, and anything that is not continuous, or that is skewed, is hard).
Furthermore, data tends to change over time. Features that were important once (e.g. Facebook likes) are useless now.
I have a question regarding data augmentation for training a deep neural network for object detection.
I have a quite limited data set (nearly 300 images). I augmented the data by rotating each image from 0-360 degrees with a step size of 15 degrees, so I got 24 rotated images out of each one. In total, that gives around 7,200 images. Then I drew a bounding box around the object of interest in each augmented image.
Does this seem like a reasonable approach to augmenting the data?
Best Regards
In order to train a good model you need lots of representative data. Your augmentation is representative only of rotations, so yes, it is a good method if you are concerned about not having enough object rotations. However, it will not help in any sense with generalization to other objects/transformations.
It seems like you are on the right track; rotation is usually a very useful transformation for augmenting training data. I would suggest trying other transformations like shifts (you most probably want to detect partially visible objects), zoom (makes your model invariant to scale), shear, flips, etc. By combining different transformations you can introduce additional diversity into your training data. A training set of 300 images is a very small number, so you will definitely need more than one transformation to augment such a tiny training set.
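A minimal sketch of combining such transformations, assuming Keras' ImageDataGenerator (the parameter values are illustrative, not tuned):

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,       # random rotations up to +/- 15 degrees
    width_shift_range=0.1,   # horizontal shift -> partially visible objects
    height_shift_range=0.1,  # vertical shift
    zoom_range=0.2,          # scale invariance
    shear_range=0.1,
    horizontal_flip=True,
)

# Stand-in arrays; replace with your real images and labels.
images = np.random.rand(300, 64, 64, 3).astype("float32")
labels = np.random.randint(0, 2, size=300)

# Every batch drawn from the generator is a freshly transformed sample.
batch_x, batch_y = next(datagen.flow(images, labels, batch_size=32))
```

Note that for object detection the bounding boxes have to be transformed along with the images; a generator like this only transforms the pixels, so for detection you would need a tool that handles the boxes as well.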
This is a good approach as long as you don't implicitly change the labels when you rotate. E.g., an image containing the digit 6 becomes the digit 9 after a 180-degree rotation, so you have to pay attention in such scenarios.
But you could also do other geometric transformations, like scaling and translation.
Another option you can consider is starting from a model pre-trained on a large dataset such as ImageNet, if your problem domain has some resemblance to the ImageNet data. This will allow you to train deeper models even in your data-scarce situation.
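As a minimal sketch, assuming Keras and VGG16 as an arbitrary choice of ImageNet-pretrained backbone:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Pretrained convolutional base, frozen so the small dataset only
# trains the new classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(64, 64, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # e.g. object present / absent
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```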
Even though rotation increases the representational complexity of your images, it might not be enough. You probably need to add other types of augmentation as well.
Color augmentations are useful if they still represent the real distribution of your data.
Spatial augmentations work very well. Keep in mind that most modern systems use a lot of cropping, so that might help.
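For example, a minimal random-crop sketch in NumPy (the crop size is an illustrative choice):

```python
import numpy as np

def random_crop(image, size=40):
    """Return a random size x size crop of an (H, W, C) image."""
    h, w = image.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return image[top:top + size, left:left + size]

image = np.random.rand(50, 50, 3)  # stand-in input image
print(random_crop(image).shape)    # (40, 40, 3)
```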
Actually, I have a few scripts that I am trying to turn into a library that might work for you. Check them out at https://github.com/lozuwa/impy if you would like.
Good morning everyone. First, I would like to make it clear that I took my first steps in machine learning only yesterday.
I've read most of the basic material and attended some presentations.
In a few months I will participate in a project in which this technology will be applied.
As a beginner I would like to ask a question that I think is silly, but I could not find answers to it.
In presentations and articles, I have seen classifiers that can classify images or tabular data sets, but never both at the same time.
For example, take the Iris flower data set, which is often used as an example. In this data set we have the characteristics of the flowers, such as petal width, but we do not have a visual representation of them. Is it possible to fit both and, for example, estimate the petal width from a given image?
I imagine this is a very basic question, but I could not find something suitable for a beginner.
I would be very grateful.
Machine learning models always work on abstract data items like vectors, points in multidimensional spaces, etc. For simplicity, let us assume for a moment that ML algorithms work on vectors. Classification would then be the task of assigning a label Y to a vector X(n).
Now, with a tabular data set, converting the values in a row into a vector is relatively easy; well, you have to somehow convert text into numbers or vice versa, but that is a standard procedure.
With images it is different. You now have to build an ML-suitable representation of the image. In other words, you need to create (e.g. numerical) features describing the image that you can later use as inputs to your ML.
Examples of such features are colour histograms, average brightness, number of edges, various convolutions, etc. There can be more complicated, semantic features, like the presence of a human in the picture; calculating these, however, is much more difficult.
So, summing up: you can build a classifier on both images and tabular data, but it basically means transforming both into a set of features.
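A minimal sketch of such hand-crafted features, assuming NumPy and Pillow (the particular features and bin counts are illustrative):

```python
import numpy as np
from PIL import Image

def image_features(path):
    """Turn an image file into a fixed-length numeric feature vector."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)

    # Colour histogram: 8 bins per channel, normalized for image size.
    hist = np.concatenate([
        np.histogram(img[..., c], bins=8, range=(0, 255))[0] for c in range(3)
    ]).astype(np.float64)
    hist /= hist.sum()

    brightness = img.mean()  # average brightness

    # Crude edge measure: fraction of strong horizontal intensity jumps.
    gray = img.mean(axis=2)
    edges = (np.abs(np.diff(gray, axis=1)) > 30).mean()

    return np.concatenate([hist, [brightness, edges]])
```

Each image then becomes a vector that can sit alongside ordinary tabular features in the same classifier.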
I've got a binary classification problem. I'm trying to train a neural network to recognize objects in images. Currently I have about 1500 50x50 images.
The question is whether extending my current training set with the same images flipped horizontally is a good idea or not. (The images are not symmetric.)
Thanks
I think you can take this much further: not just flipping the images horizontally, but changing the angle of the image in 1-degree steps. This will result in 360 samples for every instance in your training set. Depending on how fast your algorithm is, this may be a pretty good way to ensure that the algorithm isn't trained only on the images and their mirrors.
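A minimal sketch of that idea, assuming NumPy and SciPy (the step size is illustrative):

```python
import numpy as np
from scipy.ndimage import rotate

def augment(image):
    """Yield a horizontal flip plus rotated variants of a 2-D image."""
    yield np.fliplr(image)
    for angle in range(1, 360):  # 1-degree steps
        yield rotate(image, angle, reshape=False, mode="nearest")

image = np.random.rand(50, 50)  # stand-in 50x50 training image
variants = list(augment(image))
print(len(variants))            # 360 extra samples per original image
```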
It's possible that it's a good idea, but then again, I don't know the goal or the domain of the image recognition. Say the images contain characters, and you're asking the image recognition software to determine whether an image contains a forward slash / or a backslash \; then flipping the images will make your training data useless. If your domain doesn't suffer from such issues, then I'd think it's a good idea to flip the images and even rotate them by varying degrees.
I have used flipped images in AdaBoost with great success in this course: http://www.csc.kth.se/utbildning/kth/kurser/DD2427/bik12/Schedule.php (from the zip "TrainingImages.tar.gz").
I know there is some information on the pros/cons of using flipped images somewhere in the slides (on the homepage), but I can't find it. Another great resource is http://www.csc.kth.se/utbildning/kth/kurser/DD2427/bik12/DownloadMaterial/FaceLab/Manual.pdf (together with the slides), which goes through things like finding objects at different scales and orientations.
If the image patches are not symmetric, I don't think it's a good idea to flip. A better idea is to apply some similarity transforms to the training set, within some limits. Another way to increase the dataset is to add Gaussian-smoothed templates to it. Make sure that the numbers of positive and negative samples are proportional; too many positive and too few negative might skew the classifier and give bad performance on the test set.
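A minimal sketch of the Gaussian-smoothing idea, assuming SciPy (the sigma values are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

images = np.random.rand(1500, 50, 50)  # stand-in for the real training set

# Add two blurred copies of every image to the dataset.
smoothed = np.stack([
    gaussian_filter(img, sigma=s) for img in images for s in (0.5, 1.0)
])
augmented = np.concatenate([images, smoothed])
print(augmented.shape)  # (4500, 50, 50)
```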
It depends on what your NN is based on. If you are extracting rotation-invariant features, or features that do not depend on spatial position within the image (like histograms), and training your NN on these features, then rotating will not be a good idea.
If you are training directly on pixel values, then it might be a good idea.
Some more details might be useful.