What is meant by stability in relation to neural networks?

I hear the terms stability/instability thrown around a lot when reading up on Deep Q Networks. I understand that stability is improved with the addition of a target network and a replay buffer, but I fail to understand exactly what it's referring to.
What would the loss graph look like for an unstable vs. a stable neural network?
What does it mean when a neural network converges/diverges?

Stability, also known as algorithmic stability, is a notion in
computational learning theory of how a machine learning algorithm is
perturbed by small changes to its inputs. A stable learning algorithm
is one for which the prediction does not change much when the training
data is modified slightly.
Here, stability means: suppose you have 1,000 training examples, you train the model on them, and it performs well. The model is stable if training the same model on, say, 900 of those examples still yields good performance; that is why it is also called algorithmic stability.
As for the loss graph: if the model is stable, the loss curves should look roughly the same for both training-set sizes (1,000 and 900), and noticeably different for an unstable model.
In machine learning we want to minimize the loss, so when we say a model converges we mean that its loss has settled within an acceptable margin and the model has reached the stage where additional training would no longer improve it. Conversely, a network diverges during training when its loss fails to settle and instead grows or oscillates.
Divergence also has a second, statistical meaning: a non-symmetric measure of the difference between continuous-valued objects such as distributions. For example, to quantify the difference between two curves or distributions you would use a divergence (such as the KL divergence) rather than a traditional symmetric metric like a distance.
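As a rough illustration of the 1,000-vs-900 example above (a minimal sketch with a synthetic dataset, using logistic regression as a stand-in for "the model"), you can train the same learner on both subsets and compare the held-out losses; a stable learner gives similar values:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# 1,500 synthetic examples: 1,000 for training, 500 held out for evaluation.
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=500, random_state=0)

losses = {}
for n in (1000, 900):                       # train on all 1,000 vs. only 900 of them
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    losses[n] = log_loss(y_test, model.predict_proba(X_test))

# A stable learner produces similar held-out losses for n=1000 and n=900.
print(losses)
```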

Related

TFF: test accuracy fluctuates

I train a ResNet50 model with TFF and use accuracy on held-out test data for evaluation, but I see many fluctuations, as shown in the figure below. How can I avoid this fluctuation?
I would say behavior such as this is to be expected for stochastic optimization in general. The inherent variance causes the iterates to oscillate somewhere around a good solution. The magnitude of the variance and the properties of the optimization objective control how much this shows up as oscillation in an accuracy metric.
For plain SGD, decreasing the learning rate decreases the variance and slows down convergence.
For federated optimization methods the story is a bit more complicated, but decreasing the client learning rate or decreasing the number of local steps (while keeping other things the same) can have a similar effect, typically including slowing down convergence. More details can be found in https://arxiv.org/abs/2007.00878, also mentioned in the other answer. Decreasing the client learning rate across rounds could potentially work as well. The details also depend on exactly which optimization method you are using.
How is the test accuracy calculated? How many local epochs are the clients training?
If the global model is tested on a held-out set of examples, it is possible that clients are detrimentally overfitting during local training. As the global model approaches convergence, each client ends up training a model that works well for it individually but may be drifting away from the optimal global model (sometimes called client drift, https://arxiv.org/abs/1910.06378). This may occur when a client's local dataset has a distribution very different from the global distribution, and it is more likely when the client learning rates are high (https://arxiv.org/abs/2007.00878).
Decreasing the client learning rate, reducing the number of steps/batches, and other methods that cause the clients to do less "work" per communication round may reduce the fluctuation.
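As a toy illustration of those two knobs, the client learning rate and the number of local steps, here is a bare-bones FedAvg-style loop (this is not TFF code; the linear model, data and hyperparameters are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
# 10 clients, each with 50 examples; targets share w_true plus client-specific noise.
clients = []
for _ in range(10):
    X = rng.normal(size=(50, 5))
    clients.append((X, X @ w_true + 0.5 * rng.normal(size=50)))

def fed_avg(client_lr, local_steps, rounds=50):
    w = np.zeros(5)                                  # global model
    for _ in range(rounds):
        client_models = []
        for X, y in clients:
            w_c = w.copy()
            for _ in range(local_steps):             # local gradient steps on the client
                grad = 2 * X.T @ (X @ w_c - y) / len(y)
                w_c -= client_lr * grad
            client_models.append(w_c)
        w = np.mean(client_models, axis=0)           # server averages the client models
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])

# Compare a "less local work" setting against a "more local work" setting:
# smaller client learning rate / fewer local steps means less drift per round.
print(fed_avg(client_lr=0.01, local_steps=5), fed_avg(client_lr=0.05, local_steps=20))
```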

The impact of number of negative samples used in a highly imbalanced dataset (XGBoost)

I am trying to build a classifier using XGBoost on a highly imbalanced dataset, with a limited number of positive samples and a practically infinite number of negative samples.
Is it possible that having too many negative samples (making the data-set even more imbalanced) will weaken the model's predictive power? Is there a reason to limit the number of negative samples aside from running time?
I am aware of the scale_pos_weight parameter which should address the issue but my intuition says even this method has its limits.
To answer your question directly: adding more negative examples will likely decrease the predictive power of the trained classifier. For the negative class, choose the most representative examples and discard the rest.
Learning from an imbalanced dataset can affect the predictive power, and even the ability of a classifier to converge at all. The generally recommended strategy is to maintain similar numbers of training examples per class. How much class imbalance hurts learning depends on the shape of the decision space and the width of the boundaries between classes: the wider the boundaries and the simpler the decision space, the more successful training will be, even on an imbalanced dataset.
TL;DR
For a quick overview of the methods of imbalanced learning I recommend these articles:
SMOTE and AdaSyn by example
How to Handle Imbalanced Data: An Overview
Dealing with Imbalanced Classes in Machine Learning
Learning from Imbalanced Data by Prof. Haibo He (more scientific)
There is a Python package called imbalanced-learn with extensive documentation of these algorithms, which I recommend for an in-depth review.
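As a hedged sketch of how the two ideas can be combined in practice, the snippet below caps the number of negatives with imbalanced-learn's random undersampler and lets scale_pos_weight handle the remaining imbalance (the synthetic dataset and the 10:1 ratio are assumptions, not recommendations):

```python
import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Synthetic, highly imbalanced data: roughly 0.5% positives.
X, y = make_classification(n_samples=100_000, weights=[0.995], random_state=0)

# Keep all positives, subsample negatives down to a 10:1 negative:positive ratio.
sampler = RandomUnderSampler(sampling_strategy=0.1, random_state=0)
X_res, y_res = sampler.fit_resample(X, y)

# Compensate for the residual 10:1 imbalance with scale_pos_weight.
neg, pos = np.bincount(y_res)
clf = XGBClassifier(scale_pos_weight=neg / pos)
clf.fit(X_res, y_res)
```

Whether undersampling plus reweighting beats simply using all negatives is dataset-dependent, so it is worth validating both variants on a held-out set.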

Machine Learning - Feature Ranking by Algorithms

I have a dataset that contains around 30 features and I want to find out which features contribute the most to the outcome. I have 5 algorithms:
Neural Networks
Logistic regression
Naive Bayes
Random Forest
Adaboost
I have read a lot about the Information Gain technique, and it seems to be independent of the machine learning algorithm used; it is more like a preprocessing technique.
My question is: is it best practice to perform feature importance separately for each algorithm, or just use Information Gain? If the former, what are the techniques used for each?
First of all, it's worth stressing that you have to perform the feature selection based on the training data only, even if it is a separate algorithm. During testing, you then select the same features from the test dataset.
Some approaches that spring to mind:
Mutual information based feature selection (eg here), independent of the classifier.
Backward or forward selection (see stackexchange question), applicable to any classifier but potentially costly since you need to train/test many models.
Regularisation techniques that are part of the classifier optimisation, eg Lasso or elastic net. The latter can be better in datasets with high collinearity.
Principal components analysis or any other dimensionality reduction technique that groups your features (example).
Some models compute latent variables which you can use for interpretation instead of the original features (e.g. Partial Least Squares or Canonical Correlation Analysis).
Specific classifiers can aid interpretability by providing extra information about the features/predictors, off the top of my head:
Logistic regression: you can obtain a p-value for every feature. In your interpretation you can focus on those that are 'significant' (e.g. p-value < 0.05); the same applies to two-class Linear Discriminant Analysis.
Random Forest: can return a variable importance index that ranks the variables from most to least important.
I have a dataset that contains around 30 features and I want to find out which features contribute the most to the outcome.
This will depend on the algorithm. If you have 5 algorithms, you will likely get 5 slightly different answers, unless you perform the feature selection prior to classification (eg using mutual information). One reason is that Random Forests and neural networks would pick up nonlinear relationships while logistic regression wouldn't. Furthermore, Naive Bayes is blind to interactions.
So unless your research is explicitly about these 5 models, I would rather select one model and proceed with it.
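To make two of the approaches above concrete, here is a small sketch (on a synthetic dataset, so the feature indices are placeholders) of a classifier-independent mutual-information ranking next to Random Forest's own importance ranking, both computed on the training data only:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1) Mutual information: independent of the downstream classifier.
mi = mutual_info_classif(X_train, y_train, random_state=0)
print("Top 5 features by mutual information:", mi.argsort()[::-1][:5])

# 2) Random Forest variable importance: specific to this model family.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Top 5 features by RF importance:", rf.feature_importances_.argsort()[::-1][:5])
```

The two rankings will often agree on the strongest features but can differ on the rest, which is exactly the algorithm-dependence discussed above.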
Since your purpose is to get some intuition on what's going on, here is what you can do:
Let's start with Random Forest for simplicity, but you can do this with other algorithms too. First, you need to build a good model: good in the sense that you are satisfied with its performance, and robust, meaning you should evaluate it on a validation and/or test set. These points are very important because we will analyse how the model makes its decisions, so if the model is bad you will get bad intuitions.
After having built the model, you can analyse it at two levels: for the whole dataset (understanding your process), or for a given prediction. For this task I suggest looking at the SHAP library, which computes feature contributions (i.e. how much a feature influences the classifier's prediction) and can be used for both purposes.
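A minimal SHAP sketch under these assumptions (a Random Forest on a synthetic dataset, with the shap package installed; older and newer shap versions return the tree SHAP values in slightly different shapes, which the code normalizes):

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)    # per-sample, per-feature contributions

# Older shap versions return a list with one array per class; keep the positive class.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
sv = np.abs(np.asarray(shap_values))
# Average |contribution| over samples (and classes, if still present) for a global ranking.
global_importance = sv.mean(axis=0) if sv.ndim == 2 else sv.mean(axis=(0, 2))
print("Top 5 features by mean |SHAP|:", global_importance.argsort()[::-1][:5])
```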
For detailed instructions about this process and more tools, you can look at fast.ai's excellent machine learning course, where lessons 2, 3, 4 and 5 are about this subject.
Hope it helps!

Does it overfit if the nested models are trained on the same data

Does it overfit if I build a machine learning model that uses the output of another machine learning model, while both models are trained on the same data?
Basically, I was wondering if I can use the KNN prediction result as an input for a deep neural network model while both models are trained on the very same data.
Nesting machine learning models is possible. For example, neural networks can be seen as multiple nested perceptrons (see https://en.wikipedia.org/wiki/Perceptron).
However, you are right: nesting machine learning models increases the VC dimension (https://en.wikipedia.org/wiki/VC_dimension) of your complete machine learning system and thus the risk of overfitting.
In practice, cross-validation is often used to reduce the risk of overfitting.
Edit:
@MatiasValdenegro +1 for pointing out something I did not specify very clearly in my answer. Pure cross-validation can indeed only be used to detect overfitting.
However, when training certain machine learning systems such as neural networks, it is possible to use a form of validation to reduce the risk of overfitting. To do so, we simply hold out e.g. 10% of the training data from training. Then, after each training round, the trained machine learning system is evaluated on the held-out data. Once the trained neural network starts getting worse on the held-out part, the training algorithm stops (early stopping). This is, for example, done by the Python pybrain (http://pybrain.org/) library.
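As a hedged sketch of the KNN-into-neural-network idea (the models and synthetic dataset are illustrative, not a recipe): if you do stack the two models, generating the KNN outputs for the training set out-of-fold, e.g. with cross_val_predict, reduces the leakage that drives the extra overfitting discussed above:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier()

# Out-of-fold KNN probabilities for the training set: the second model never sees
# KNN outputs produced on data the KNN was itself trained on.
knn_train = cross_val_predict(knn, X_train, y_train, cv=5, method="predict_proba")

# Fit KNN on the full training set only to generate features for the test set.
knn.fit(X_train, y_train)
knn_test = knn.predict_proba(X_test)

# Second-level model takes the original features plus the KNN outputs.
mlp = MLPClassifier(max_iter=500, random_state=0)
mlp.fit(np.hstack([X_train, knn_train]), y_train)
print("stacked test accuracy:", mlp.score(np.hstack([X_test, knn_test]), y_test))
```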

How to interpret weight distributions of neural net layers

I have designed a 3-layer neural network whose inputs are the concatenated features from a CNN and an RNN. The weights learned by the network take very small values. What is a reasonable explanation for this? And how do I interpret the weight histograms and distributions in TensorFlow? Any good resource for it?
This is the weight distribution of the first hidden layer of a 3-layer neural network, visualized using TensorBoard. How should I interpret this? Are all the weights taking a value of zero?
This is the weight distribution of the second hidden layer of a 3-layer neural network:
how to interpret the weight histograms and distributions in Tensorflow?
Well, you probably didn't realize it, but you have just asked the million-dollar question in ML & AI...
Model interpretability is a hyper-active and hyper-hot area of current research (think of holy grail, or something), which has been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks; these models are currently only black boxes, and we naturally feel uncomfortable about it...
Any good resource for it?
Probably not exactly the kind of resources you were thinking of, and we are well off a SO-appropriate topic here, but since you asked...:
A recent (July 2017) article in Science provides a nice overview of the current status & research: How AI detectives are cracking open the black box of deep learning (no in-text links, but googling names & terms will pay off)
DARPA itself is currently running a program on Explainable Artificial Intelligence (XAI)
There was a workshop in NIPS 2016 on Interpretable Machine Learning for Complex Systems
On a more practical level:
The Layer-wise Relevance Propagation (LRP) toolbox for neural networks (paper, project page, code, TF Slim wrapper)
FairML: Auditing Black-Box Predictive Models, by Fast Forward Labs (blog post, paper, code)
A very recent (November 2017) paper by Geoff Hinton, Distilling a Neural Network Into a Soft Decision Tree, with an independent PyTorch implementation
SHAP: A Unified Approach to Interpreting Model Predictions (paper, authors' code)
These should be enough for starters, and to give you a general idea of the subject about which you asked...
UPDATE (Oct 2018): I have put up a much more detailed list of practical resources in my answer to the question Predictive Analytics - “Why” factor?
The weights learned by network take very small values. What is the reasonable explanation for this? How to interpret this? all the weights are taking up zero value?
Not all weights are zero, but many are. One reason is regularization (in combination with a large network, i.e. wide layers). Regularization, both L1 and L2, makes weights small. If your network is large, most weights are not needed, i.e. they can be set to zero and the model still performs well.
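If you want to inspect the distributions outside of TensorBoard, a small sketch like the following (with a toy, untrained 3-layer network standing in for your trained model) shows how to pull each layer's kernel weights and check how many are close to zero:

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf

# Toy stand-in for the trained 3-layer network; substitute your own model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])

for layer in model.layers:
    w = layer.get_weights()[0].ravel()            # kernel weights, flattened
    near_zero = np.mean(np.abs(w) < 1e-3)
    print(f"{layer.name}: {near_zero:.1%} of weights have |w| < 1e-3")
    plt.hist(w, bins=100, alpha=0.5, label=layer.name)

plt.legend()
plt.xlabel("weight value")
plt.show()
```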
How to interpret the weight histograms and distributions in Tensorflow? Any good resource for it?
I am not so sure about weight distributions. There is some work that analyses them, but I am not aware of a general interpretation. For example, for CNNs it is known that the center weights of a filter usually have larger magnitude than those at the corners; see [Locality-Promoting Representation Learning, 2021, ICPR, https://arxiv.org/abs/1905.10661].
For CNNs you can also visualize weights directly if you have large filters. For example, for simple networks you can see that weights first converge towards some kind of class average before overfitting starts. This is shown in Figure 2 of [The learning phases in NN: From Fitting the Majority to Fitting a Few, 2022, http://arxiv.org/abs/2202.08299].
Rather than looking at weights, you can also look at which samples trigger the strongest activations for specific features. If you don't want to look at single features, there is also the possibility of visualizing what the network actually remembers about the input, e.g. see [Explaining Neural Networks by Decoding Layer Activations, https://arxiv.org/abs/2005.13630].
These are just a few examples (disclaimer: I authored these works); there are thousands of other works on explainability out there.
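A rough sketch of the strongest-activation idea (the toy model, random data and layer choice below are placeholders for your own):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in network and random data; substitute your trained model and dataset.
inputs = tf.keras.Input(shape=(16,))
hidden = tf.keras.layers.Dense(32, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(2, activation="softmax")(hidden)
model = tf.keras.Model(inputs, outputs)

x = np.random.rand(500, 16).astype("float32")

# Sub-model that exposes the hidden-layer activations.
feature_model = tf.keras.Model(inputs, hidden)
acts = feature_model.predict(x, verbose=0)        # shape: (500, 32)

# For each hidden unit, the index of the sample that activates it most strongly.
strongest = acts.argmax(axis=0)
print("sample with strongest activation per unit:", strongest[:10])
```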
