Relation between features for classification and clustering - machine-learning

I'm new to machine learning, and I have the following question. Suppose I have run a classification algorithm on some data and identified the best combination of features for it. If I later get data from the same source that lacks the target feature used in the classification task, can I reuse that best combination of features directly for a clustering task? (I know I could use the trained model to predict the target, but I just want to know whether the best combination of features is the same for classification and clustering.)
I have searched every website and resource I know, but I can't find an answer to my question. Could somebody tell me, or just give me a link? Thanks!

I would say yes, provided the nature of the target is the same in both cases. Ideally we want a tractable number of features that are orthogonal (perpendicular) to each other in N-dimensional space, so that each can contribute maximally to the prediction.
Take a concrete example: T-shirts, and whether they are large or small. You are given data showing that the manufacturing process involves a bit of material shrinkage, so the T-shirts come out slightly irregular, and the shrinkage differs between the height and the width, but not by much. The data shows height, width and colour, and you want to decide whether each shirt belongs in the large group or the small group. You find that height and width are important but colour is not, so you go with height and width as your classification features.
The important point is that these two features have been identified as the most orthogonal to each other, which should apply in a classification or clustering context. The number of clusters remains a factor to be examined.

It may not be good enough.
For example, a decision tree or random forest can be analyzed to get the importance of its features. But this will not tell you what kind of preprocessing (in particular scaling and weighting) is necessary to be able to cluster them: categorical features are difficult to use, and anything that is not continuous, or that is skewed, is hard.
Furthermore, data tends to change over time. Features that were important once (e.g. Facebook likes) are useless now.
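For concreteness, here is a minimal sketch of the workflow both answers are describing, assuming scikit-learn and synthetic stand-in data: rank features with a random forest on the labeled data, then reuse only the top-ranked features (rescaled) to cluster later, unlabeled data.

```python
# Sketch only: estimate feature importances on labeled data, then cluster
# unlabeled data from the same source using just the top-ranked features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           random_state=0)

# Supervised step: rank features by importance.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:3]

# Unsupervised step: cluster unlabeled data on those features only.
# Note that scaling matters here even though the forest did not need it.
X_new = X  # stand-in for later, unlabeled data from the same source
X_sel = StandardScaler().fit_transform(X_new[:, top])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_sel)
print("selected features:", top, "cluster sizes:", np.bincount(labels))
```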

Related

Right order of doing feature selection, PCA and normalization?

I know that feature selection helps me remove features that may have low contribution. I know that PCA helps reduce possibly correlated features into one, reducing the dimensions. I know that normalization transforms features to the same scale.
But is there a recommended order to do these three steps? Logically I would think that I should weed out bad features by feature selection first, followed by normalizing them, and finally use PCA to reduce dimensions and make the features as independent from each other as possible.
Is this logic correct?
Bonus question: are there any more things to do (preprocess or transform) to the features before feeding them into the estimator?
If I were building a classifier of some sort, I would personally use this order:
Normalization
PCA
Feature Selection
Normalization: You would do normalization first to get the data into reasonable bounds. If you have data (x, y) where x ranges from -1000 to +1000 and y ranges from -1 to +1, any distance metric would automatically treat a change in y as less significant than a change in x, and we don't yet know that this is the case. So we want to normalize our data first.
PCA: Uses the eigenvalue decomposition of the data's covariance to find an orthogonal basis that describes the variance in the data points. If you have 4 features, PCA may show you that only 2 of them really differentiate the data points, which brings us to the last step.
Feature Selection: Once you have a coordinate space that better describes your data, you can select which features are salient. Typically you would use the largest eigenvalues (EVs) and their corresponding eigenvectors from PCA for your representation. Since larger EVs mean there is more variance in that direction of the data, you get more granularity in isolating features. This is a good way to reduce the number of dimensions of your problem.
Of course this could change from problem to problem, but it is a reasonable generic guide.
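As a minimal sketch of that order, assuming scikit-learn and its built-in iris data: standardize, run PCA, then keep the components that carry most of the variance.

```python
# Sketch only: normalization -> PCA -> keep the largest-eigenvalue components.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

X_scaled = StandardScaler().fit_transform(X)   # 1. normalization
pca = PCA().fit(X_scaled)                      # 2. PCA on the scaled data
print(pca.explained_variance_ratio_)           # the eigenvalue spectrum

# 3. "feature selection" in the rotated space: keep enough components
#    to explain, say, 95% of the variance.
X_reduced = PCA(n_components=0.95).fit_transform(X_scaled)
print(X_reduced.shape)
```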
Generally speaking, normalization is needed before PCA.
The key question is where feature selection goes, and that depends on the feature selection method.
A simple form of feature selection is to check whether the variance or standard deviation of a feature is small; if it is, that feature may not help the classifier. But if you normalize before doing this, the standard deviations and variances become smaller (generally less than 1), leaving very small differences in std or variance between the features. With zero-mean, unit-variance normalization, every feature ends up with mean 0 and std 1. In that case it would be bad to normalize before feature selection.
Feature selection is flexible, and there are many ways to do it. Where it goes in the pipeline should be chosen according to the actual situation.
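A small sketch of that caveat, assuming scikit-learn and synthetic data: a variance check that is informative on the raw features becomes uninformative once the features are standardized.

```python
# Sketch only: variance-based feature selection before vs. after scaling.
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(0, 5.0, 300),     # high-variance feature
                     rng.normal(0, 0.01, 300)])   # near-constant feature

print(X.var(axis=0))                                   # clearly different
print(StandardScaler().fit_transform(X).var(axis=0))   # both ~1.0 after scaling
```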
Good answers here. One point needs to be highlighted: PCA is a form of dimensionality reduction. It finds a lower-dimensional linear subspace that approximates the data well. When the axes of this subspace align with the features one started with, it leads to interpretable feature selection as well. Otherwise, feature selection after PCA yields features that are linear combinations of the original features, and those are difficult to interpret in terms of the original set.

Possible/maybe category in deep learning

I'm interested in taking advantage of some partially labeled data that I have in a deep learning task. I'm using a fully convolutional approach, not sampling patches from the labeled regions.
I have masks that outline regions of definite positive examples in an image, but the unmasked regions in the images are not necessarily negative - they may be positive. Does anyone know of a way to incorporate this type of class in a deep learning setting?
Triplet/contrastive loss seems like it may be the way to go, but I'm not sure how to accommodate the "fuzzy" or ambiguous negative/positive space.
Try label smoothing, as described in section 7.5.1 of the Deep Learning book:
We can assume that for some small constant eps, the training set label y is correct with probability 1 - eps, and otherwise any of the other possible labels might be correct.
Label smoothing regularizes a model based on a softmax with k output values by replacing the hard 0 and 1 classification targets with targets of eps / k and 1 - (k - 1) / k * eps, respectively.
See my question about implementing label smoothing in Pandas.
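For illustration, here is a minimal NumPy sketch of the smoothing rule quoted above (the eps value and class count are arbitrary):

```python
# Sketch only: mix each one-hot target with a uniform distribution over k classes,
# so 0 -> eps/k and 1 -> 1 - (k - 1)/k * eps.
import numpy as np

def smooth_labels(y, k, eps=0.1):
    """y: integer class labels, k: number of classes."""
    one_hot = np.eye(k)[y]
    return one_hot * (1.0 - eps) + eps / k

print(smooth_labels(np.array([0, 2]), k=3, eps=0.1))
# [[0.9333 0.0333 0.0333]
#  [0.0333 0.0333 0.9333]]
```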
Otherwise, if you know for sure that some areas are negative and others are positive while some are uncertain, you can introduce a third, "uncertain" class. I have worked with data sets that contained an uncertain class, corresponding to samples that could belong to any of the available classes.
I'm assuming that you are struggling with a segmentation task with an ill-defined background (e.g. you are not sure whether all examples are correctly labeled). I recently ran into a similar problem, and this is what I found during my research:
In the old days before deep learning, and at the beginning of the deep learning era, the common way to deal with this was to smooth the output with some kind of probability model that takes the possibility of noisy labels into account (you can read about this in the "Learning to Label from Noisy Data" chapter of this book). It's important to distinguish these probabilistic models from models used to smooth your labels with respect to image or label structure, like classical CRFs for bilateral smoothing.
What we finally used (and it worked really well) is the Channel Inhibited Softmax idea from this paper. In terms of its mathematical properties, it makes your network much more robust to some objects being unlabeled, because it pushes the network to output much higher positive-valued logits at correctly labeled objects.
You could treat this as a semi-supervised problem. Use the full dataset without labels to train a bottleneck autoencoder structure (or a GAN approach). This pretrained model can then be adjusted (e.g. removing the last layers, adding a better layer structure at the end on top of the bottleneck features) and fine-tuned on the labeled data.
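A hedged sketch of that idea, assuming tf.keras and purely hypothetical layer sizes and data names (x_unlabeled, x_labeled, y_labeled): pretrain a bottleneck autoencoder on everything, then reuse its encoder as the backbone of a classifier fine-tuned on the labeled subset.

```python
# Sketch only: autoencoder pretraining followed by supervised fine-tuning.
import tensorflow as tf

input_dim, bottleneck_dim, n_classes = 784, 32, 10   # hypothetical sizes

# 1. Autoencoder pretraining on all (unlabeled) data.
inputs = tf.keras.Input(shape=(input_dim,))
encoded = tf.keras.layers.Dense(128, activation="relu")(inputs)
encoded = tf.keras.layers.Dense(bottleneck_dim, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(128, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = tf.keras.Model(inputs, decoded)
encoder = tf.keras.Model(inputs, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_unlabeled, x_unlabeled, epochs=10)   # hypothetical data

# 2. Drop the decoder, add a classification head, fine-tune on labeled data.
head = tf.keras.layers.Dense(n_classes, activation="softmax")(encoder.output)
classifier = tf.keras.Model(encoder.input, head)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# classifier.fit(x_labeled, y_labeled, epochs=10)        # hypothetical data
```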

Mathematical comparison of various image enhancement techniques on an image?

I want to know the various ways to compare the same image enhanced by different image enhancement techniques, not visually but mathematically.
For example: (i) Maybe (I am not sure) one could look at their histograms and calculate their variance; the one with the highest variance might be the best technique? Or
(ii) randomly pick a local region in all the enhanced images and again compute the variance, or look at the difference between the maximum and minimum values of that local region; the one with the highest variance or difference might be the best?
Thanks a lot.
It really depends on the sort of enhancement you are looking at.
For example, for the likes of denoising and deblurring, PSNR and MSE might be appropriate, especially when you have access to ground-truth images against which you can compare the enhanced image.
Aesthetic enhancement on the other hand might be harder to quantify as it requires a certain degree of subjectivity. A highly cited work in this area is:
Studying Aesthetics in Photographic Images Using a Computational Approach
You can check out the citations therein for relevant references.
Two common metrics for comparing images are mean square error (MSE) and peak signal to noise ratio (PSNR).
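For reference, a minimal NumPy sketch of both metrics, assuming 8-bit images (peak value 255) and a ground-truth reference to compare against:

```python
# Sketch only: MSE and PSNR between a reference image and an enhanced image.
import numpy as np

def mse(reference, enhanced):
    diff = reference.astype(np.float64) - enhanced.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(reference, enhanced, peak=255.0):
    err = mse(reference, enhanced)
    return np.inf if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```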

Number of instances or the content of the instances more important (machine learning)?

Say, in the document classification domain, I have a dataset of 1000 instances where the instances (documents) have rather little content, and another dataset of, say, 200 instances where each individual instance has richer content. If IDF is not a concern, does the number of instances really matter in training? Do classification algorithms somehow take that into account?
Thanks.
sam
You could pose this as a general machine learning problem. The simplest problem that can help you understand how the size of training data matters is curve fitting.
The uncertainty and bias of a classifier or a fitted model are functions of the sample size. A small sample size is a well-known problem, which we often try to avoid by collecting more training samples. This is because the uncertainty of non-linear classifiers is estimated via a linear approximation of the model, and that estimate is accurate only if a large number of samples is available, which is the main condition of the central limit theorem.
The proportion of outliers is also an important factor to consider when deciding on the training sample size. If a larger sample size means a greater proportion of outliers, then you should limit the sample size.
The document size is actually an indirect indicator of the feature space size. If, for example, you get only 10 features from each document, then you're trying to separate/classify the documents in a 10-dimensional space. If you get 100 features from each document, the same thing happens in a 100-dimensional space. I guess it's easy to see that drawing boundaries that separate the documents is easier in a higher-dimensional space.
For both document size and sample size, the rule of thumb is to go as high as possible, but in practice that is not feasible. For example, if you estimate the uncertainty function of the classifier, you will find a threshold beyond which larger sample sizes lead to virtually no reduction in uncertainty and bias. Empirically, you can also find this threshold for some problems by Monte Carlo simulation.
Most engineers don't bother to estimate uncertainty, and that often leads to sub-optimal behavior of the methods they implement. This is fine for toy problems, but in real-world problems, considering the uncertainty of estimations and computations is vital for most systems. I hope that answers your question to some degree.
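As an illustration of finding that threshold empirically, here is a sketch assuming scikit-learn and one of its toy datasets: compute the cross-validated score at increasing training-set sizes and look for where it plateaus.

```python
# Sketch only: an empirical learning curve to see where more data stops helping.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 8), cv=5)

for n, s in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:4d} training samples -> mean CV accuracy {s:.3f}")
```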

Machine learning algorithm for this task?

I'm trying to write some code that deals with this task:
As a starting point, I have around 20 "profiles" (imagine a landscape profile), i.e. one-dimensional arrays of around 1000 real values.
Each profile has a real-valued desired outcome, the "effective height".
The effective height is some sort of average, but the height, width and position of the peaks play a particular role.
My aim is to generalize from the input data so as to calculate the effective height for further profiles.
Is there a machine learning algorithm or principle that could help?
Principle 1: Extract the most important features, instead of feeding the model everything
As you said, "the effective height is some sort of average, but the height, width and position of the peaks play a particular role." So you have a strong prior assumption that these measures are the most important ones for learning. If I were you, I would compute these measures first and use them as the input for learning, instead of the raw data.
Principle 2: When choosing a learning algorithm, the first thing to care about is linear separability
Suppose the height is a function of those measures; then you have to think about the extent to which that function is linear. If the function is almost linear, a very simple perceptron would be perfect. If it's far from linear, you might want to pick a multi-layer neural network. If it's very, very far from linear, go back to Principle 1 and check whether you are extracting the right features.
Principle 3: More data helps
As you said, you have around 20 "profiles" for training. Generally speaking, that's not enough. Almost all machine learning algorithms were designed for reasonably large data. Even methods that claim to learn well from small samples usually don't mean samples as small as 20. Get more data!
Maybe multivariate linear regression suffices?
I would probably use a combination of what you said about which features play the most important role, and then train a regression on those. Basically, you need at least one coefficient for each feature, and you need substantially more data points than coefficients. So I would pick something like the heights and widths of the two biggest peaks. You've now reduced every profile to just 4 numbers. Now do this trick: divide the data into 5 groups of 4 profiles. Take the first 4 groups, reduce those profiles to their 4 numbers, and use the desired outcomes to fit a regression. Once the regression is trained, try it on the remaining 4 profiles and see how well it works. Repeat this procedure 5 times, each time leaving out a different group. This is called cross-validation, and it's very handy.
Obviously getting more data would help.
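Here is a minimal sketch of that procedure, assuming scikit-learn and synthetic stand-in features (real peak measurements would replace X): fit a linear regression on 20 profiles reduced to 4 numbers each and evaluate it with 5-fold cross-validation, each fold holding out 4 profiles.

```python
# Sketch only: 20 profiles x 4 features, linear regression, 5-fold CV.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))   # stand-in for height/width of the two biggest peaks
y = X @ np.array([0.6, 0.2, 0.3, 0.1]) + 0.05 * rng.normal(size=20)  # toy target

scores = cross_val_score(LinearRegression(), X, y,
                         cv=KFold(n_splits=5, shuffle=True, random_state=0),
                         scoring="r2")
print("per-fold R^2:", np.round(scores, 3))
```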
