Should the neurons in a neural network be asynchronous? - machine-learning

I am designing a neural network and am trying to determine if I should write it in such a way that each neuron is its own 'process' in Erlang, or if I should just go with C++ and run a network in one thread (I would still use all my cores by running an instance of each network in its own thread).
Is there a good reason to give up the speed of C++ for the asynchronous neurons that Erlang offers?

I'm not sure I understand what you're trying to do. An artificial neural network is essentially represented by the weight of the connections between nodes. The nodes themselves don't exist in isolation; their values are only calculated (at least in feed-forward networks) through the forward-propagation algorithm, when it is given input.
The backpropagation algorithm for updating weights is definitely parallelizable, but that doesn't seem to be what you're describing.
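To make that concrete, here is a minimal forward-propagation sketch (a hypothetical 3-4-2 network in plain NumPy); the "neurons" never exist as separate entities, only as rows and columns of the weight matrices:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # Hidden layer: every "neuron" is just one row of W1 applied to the input.
    h = np.tanh(W1 @ x + b1)
    # Output layer: again a single matrix-vector product, no per-neuron processes.
    return W2 @ h + b2

rng = np.random.default_rng(42)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
print(forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2))
```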

The usefulness of having neurons in a Neural Network (NN) is to have a multi-dimensional matrix whose coefficients you want to handle (train them, change them, adapt them little by little, so that they fit the problem you want to solve). On this matrix you can apply proven, efficient numerical methods in order to find an acceptable solution in an acceptable time.
IMHO, with a NN (namely with the back-propagation training method), the goal is to have a matrix that is efficient both at run/predict time and at training time.
I don't grasp the point of having asynchronous neurons. What would it offer? What issue would it solve?
Maybe you could explain clearly what problem you would solve by making them asynchronous?
I am indeed inverting your question: what do you want to gain from asynchronicity compared with traditional NN techniques?
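To illustrate the "numerical methods applied to a coefficient matrix" view above, here is a toy sketch of a single gradient-descent step on a linear layer with squared error (all names and shapes are made up for the example):

```python
import numpy as np

def sgd_step(W, x, y_true, lr=0.01):
    y_pred = W @ x                        # forward pass through the weight matrix
    grad = np.outer(y_pred - y_true, x)   # dL/dW for L = 0.5 * ||W x - y_true||^2
    return W - lr * grad                  # update the whole matrix at once, synchronously

W = np.zeros((2, 3))
W = sgd_step(W, np.array([1.0, 2.0, 3.0]), np.array([1.0, -1.0]))
print(W)
```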

It would depend upon your use case: the neural network computational model and your execution environment. Here is a recent (2014) paper by Plotnikova et al. that uses "Erlang and platform Erlang/OTP with predefined base implementation of actor model functions" and a new model developed by the authors that they describe as “one neuron—one process”, using a "Gravitation Search Algorithm" for training:
http://link.springer.com/chapter/10.1007%2F978-3-319-06764-3_52
To briefly cite their abstract, "The paper develops asynchronous distributed modification of this algorithm and presents the results of experiments. The proposed architecture shows the performance increase for distributed systems with different environment parameters (high-performance cluster and local network with a slow interconnection bus)."
Also, most other answers here reference a computational model that uses matrix operations as the basis of training and simulation; the authors of this paper contrast their approach with it, saying, "this case neural network model [i.e. matrix operations based] becomes fully mathematical and its original nature (from neural networks biological prototypes) gets lost".
The tests were run on three types of systems:
1. An IBM cluster, represented as 15 virtual machines.
2. A distributed system deployed on the local network, represented as 15 physical machines.
3. A hybrid system, based on system 2 but where each physical machine has four processor cores.
They provide the following concrete results, "The presented results evidence a good distribution ability of gravitation search, especially for large networks (801 and more neurons). Acceleration depends on the node count almost linearly. If we use 15 nodes we can get about eight times acceleration of the training process."
Finally, they conclude regarding their model, "The model includes three abstraction levels: NNET, MLP and NEURON. Such architecture allows encapsulating some general features on general levels and some specific for the considered neural networks features on special levels. Asynchronous message passing between levels allow to differentiate synchronous and asynchronous parts of training and simulation algorithms and, as a result, to improve the use of resources."

It depends what you are after.
2nd Generation of Neural Networks are synchronous. They perform computations on an input-output basis without a delay, and can be trained either through reinforcement or back-propagation. This is the prevailing type of ANN at the moment and the easiest to get started with if you are trying to solve a problem via machine learning, lots of literature and examples available.
3rd Generation of Neural Networks (so-called "Spiking Neural Networks") are asynchronous. Signals propagate internally through the network as a chain-reaction of spiking events, and can create interesting patterns and oscillations depending on the shape of the network. While they model biological brains more closely they are also harder to make use of in a practical setting.
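For a feel of what "asynchronous" means in the spiking case, here is a rough leaky integrate-and-fire sketch (the parameters are arbitrary, not taken from any particular SNN library): the neuron emits discrete spike events whenever its membrane potential crosses a threshold, rather than producing an output at every synchronous layer update.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron; return membrane trace and spike times."""
    v, trace, spikes = v_rest, [], []
    for t, i_t in enumerate(input_current):
        # Leak toward the resting potential while integrating the input current.
        v += dt / tau * (-(v - v_rest) + i_t)
        if v >= v_thresh:        # threshold crossing: emit an asynchronous spike event
            spikes.append(t)
            v = v_reset          # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

_, spike_times = simulate_lif(np.full(100, 1.5))
print("spike times:", spike_times)
```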

I think that async computation for NNs might prove beneficial for the (recognition) performance. In fact, the result might be similar (maybe less pronounced) to using dropout.
But a straightforward implementation of asynchronous NNs would be much slower, because for synchronous NNs you can use linear algebra libraries, which make good use of vectorization or GPUs.
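A rough illustration of that point (hypothetical layer sizes): the synchronous formulation computes a whole layer as one matrix product, which BLAS or a GPU can vectorize, whereas a "one neuron at a time" formulation does the same arithmetic as many small independent operations.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(512)           # input activations
W = rng.standard_normal((256, 512))    # layer weights
b = rng.standard_normal(256)

# Synchronous / vectorized: the entire layer in one call.
out_vectorized = np.tanh(W @ x + b)

# Per-neuron: identical math, but as 256 separate dot products (much slower in practice).
out_per_neuron = np.array([np.tanh(W[j] @ x + b[j]) for j in range(W.shape[0])])

assert np.allclose(out_vectorized, out_per_neuron)
```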

Related

What machine learning algorithm could be better for this scenario?

I have a dataset of roughly 15M observations, with approximately 3% of it belonging to the class of interest. I can train the model on a PC, but I need to implement the classifier on a Raspberry Pi 3. Since the Raspberry Pi has such limited memory, which algorithms represent the least load for it?
Additional info: the dataset is hard to separate. For example, ANNs can't get past an 80% detection rate for the class of interest, no matter the architecture or activation function. Random forest has demonstrated great performance, but the number of trees and nodes required isn't feasible for implementation on a microcontroller.
Thank you in advance.
You could potentially trim the trees in the Random Forest approach so as to balance classifier performance against memory/processing power requirements.
Also, I suspect you have strongly imbalanced train/test sets, so I wonder if you used any of the approaches usually suggested in this case (e.g. SMOTE, ADASYN, etc.). In the case of Python, I strongly suggest reviewing the imbalanced-learn library. Using such an approach could lead to a smaller classifier with acceptably good performance that you would be able to run on the target device.
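A rough sketch of both suggestions, assuming scikit-learn and imbalanced-learn are installed; the toy dataset below is just a stand-in for the real 15M-row data:

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for the real data: ~3% positive class.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)

# Rebalance the minority class with synthetic oversampling.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)

# A "trimmed" forest: fewer, shallower trees to keep the model small enough for a Pi.
clf = RandomForestClassifier(n_estimators=20, max_depth=8, random_state=0)
clf.fit(X_res, y_res)
print(clf.score(X, y))
```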
Last but not least, this question could easily go to Cross Validated or Data Science sites.

How to interpret weight distributions of neural net layers

I have designed a 3-layer neural network whose inputs are the concatenated features from a CNN and an RNN. The weights learned by the network take very small values. What is a reasonable explanation for this? And how do I interpret the weight histograms and distributions in TensorFlow? Any good resource for it?
This is the weight distribution of the first hidden layer of a 3-layer neural network, visualized using TensorBoard. How do I interpret this? Are all the weights taking a value of zero?
This is the weight distribution of the second hidden layer of the 3-layer neural network:
How do I interpret the weight histograms and distributions in TensorFlow?
Well, you probably didn't realize it, but you have just asked the million-dollar question in ML & AI...
Model interpretability is a hyper-active and hyper-hot area of current research (think of holy grail, or something), which has been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks; these models are currently only black boxes, and we naturally feel uncomfortable about it...
Any good resource for it?
Probably not exactly the kind of resources you were thinking of, and we are well off a SO-appropriate topic here, but since you asked...:
A recent (July 2017) article in Science provides a nice overview of the current status & research: How AI detectives are cracking open the black box of deep learning (no in-text links, but googling names & terms will pay off)
DARPA itself is currently running a program on Explainable Artificial Intelligence (XAI)
There was a workshop in NIPS 2016 on Interpretable Machine Learning for Complex Systems
On a more practical level:
The Layer-wise Relevance Propagation (LRP) toolbox for neural networks (paper, project page, code, TF Slim wrapper)
FairML: Auditing Black-Box Predictive Models, by Fast Forward Labs (blog post, paper, code)
A very recent (November 2017) paper by Geoff Hinton, Distilling a Neural Network Into a Soft Decision Tree, with an independent PyTorch implementation
SHAP: A Unified Approach to Interpreting Model Predictions (paper, authors' code)
These should be enough for starters, and to give you a general idea of the subject about which you asked...
UPDATE (Oct 2018): I have put up a much more detailed list of practical resources in my answer to the question Predictive Analytics - “Why” factor?
The weights learned by the network take very small values. What is a reasonable explanation for this? How do I interpret this? Are all the weights taking a value of zero?
Not all weights are zero, but many are. One reason is regularization (in combination with a large network, i.e. wide layers). Regularization makes weights small (both L1 and L2). If your network is large, most weights are not needed, i.e., they can be set to zero and the model still performs well.
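As a hedged sketch of that effect, assuming Keras/TensorFlow (the layer sizes are arbitrary): adding an L2 penalty to a wide layer typically drives most of its weights toward zero, which is exactly what a TensorBoard histogram peaked at zero looks like.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(512, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(1e-3)),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# After training, inspect the first layer's weights; with a wide, regularized layer
# most entries end up very close to zero:
# w = model.layers[0].get_weights()[0]
```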
How do I interpret the weight histograms and distributions in TensorFlow? Any good resource for it?
I am not so sure about weight distributions. There is some work that analyzes them, but I am not aware of a general interpretation; e.g., for CNNs it is known that the center weights of a filter/feature usually have larger magnitude than those in the corners, see [Locality-Promoting Representation Learning, 2021, ICPR, https://arxiv.org/abs/1905.10661].
For CNNs you can also visualize weights directly, if you have large filters. For example, for simple networks you can see that weights first converge towards some kind of class average before overfitting starts. This is shown in Figure 2 of [The learning phases in NN: From Fitting the Majority to Fitting a Few, 2022, http://arxiv.org/abs/2202.08299].
Rather than going for weights, you can also look at what samples trigger the strongest activations for specific features. If you don't want to look at single features, there is also the possibility of visualizing what the network actually remembers about the input; e.g., see [Explaining Neural Networks by Decoding Layer Activations, https://arxiv.org/abs/2005.13630].
These are just a few examples (disclaimer: I authored these works); there are thousands of other works on explainability out there.

Why is the lift for the neural network so stable in the SAS Viya demo?

I'm looking at the SAS Viya machine learning demo. It races some machine learning algorithms against each other on a given dataset. All models produce almost equally good "lift", as shown in lift diagrams in the output.
If you tweak the learning to run on a smaller subset of the data, only 0.002% of the total data set (proc partition data=&casdata partition samppct=0.002;), most algorithms run into problems producing lift.
But the neural network is still performing very well. Feature or bug? I could imagine that the script does not re-initialize the network, but it is hard to guess from the calls alone.
I got good answers over at the SAS Community, posted by BrettWujek and Xinmin:
Mats - the short answer without running some studies of my own is that neural networks are highly adaptive and can train very accurate models with far fewer observations than many other techniques. The tree-based models are going to be quite unstable with very few observations. In this case you sampled all the way down to around 20 observations... even that might be sufficient for a neural network if the space is not overly nonlinear.
As for your last comment - it seems you are referring to what is known as warm start, where a previously trained model can be used as a starting point and refined by providing new observations. That is NOT what is happening here, as that capability is only becoming available in our upcoming release, which is just over a month away.
Brett
And I've got some detail on this from Xinmin:
Mats, PROC NNET initializes weights randomly; if you specify a seed in the train statement, the initial weights are repeatable. NNET training is powered by a sophisticated nonlinear optimization solver; if the log shows "converged" status, it means the model is fit very well.

How to model a for loop in a neural network

I am currently in the process of learning neural networks and can understand basic examples like AND, OR, Addition, Multiplication, etc.
Right now, I am trying to build a neural network that takes two inputs x and n, and computes pow(x, n). This would require the neural network to have some form of loop, and I am not sure how I can model a network with a loop.
Can this sort of computation be modelled by a neural network? I am assuming it is possible, based on the recently released Neural Turing Machine paper, but I'm not sure how. Any pointers on this would be very helpful.
Thanks!
Feedforward neural nets are not Turing-complete, and in particular they cannot model loops of arbitrary length. However, if you fix the maximum n that you want to handle, then you can set up an architecture which can model loops with up to n repetitions. For instance, you could easily imagine that each layer acts as one iteration of the loop, so you might need n layers.
For a more general architecture that can be made Turing-complete, you could use Recurrent Neural Networks (RNNs). One popular instance of this class is the so-called Long Short-Term Memory (LSTM) network by Hochreiter and Schmidhuber. Training such RNNs is quite different from training classical feedforward networks, though.
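Not a trained network, just a sketch of the idea in plain Python: a recurrent computation applies one shared "step" per iteration, which is exactly the loop a fixed-depth feed-forward net can only unroll up to some maximum n.

```python
def pow_by_recurrence(x: float, n: int) -> float:
    h = 1.0                   # hidden "state", analogous to an RNN cell state
    for _ in range(n):        # the loop a feed-forward net cannot express for arbitrary n
        h = h * x             # the same step function applied at every iteration
    return h

assert pow_by_recurrence(2.0, 10) == 2.0 ** 10
```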
As you pointed out, Neural Turing Machines seem to work well for learning basic algorithms. For instance, the repeat-copy task implemented in the paper suggests that an NTM can learn the algorithm itself. As of now, NTMs have been used only for simple tasks, so probing their scope with pow(x, n) will be interesting, given that repeat copy works well. I suggest reading Reinforcement Learning Neural Turing Machines - Revised for a deeper understanding.
Also, recent developments in the area of Memory Networks enable us to perform more complicated tasks. Hence, making a neural network understand pow(x, n) might be possible. So go ahead and give it a shot!

Neural Network / Machine Learning memory storage

I am currently trying to set up a Neural Network for information extraction, and I am pretty fluent with the (basic) concepts of Neural Networks, except for one which seems to puzzle me. It is probably pretty obvious, but I can't seem to find information about it.
Where/How do Neural Networks store their memory? ( / Machine Learning)
There is quite a bit of information available online about Neural Networks and Machine Learning, but it all seems to skip over memory storage. For example, after restarting the program, where does it find its memory to continue learning/predicting? Many examples online don't seem to 'retain' memory, but I can't imagine this being 'safe' for real/large-scale deployment.
I have a difficult time wording my question, so please let me know if I need to elaborate a bit more.
Thanks,
EDIT: To follow up on the answers below
Every Neural Network will have edge weights associated with them. These edge weights are adjusted during the training session of a Neural Network.
This is exactly where I am struggling: how do/should I envision this secondary memory?
Is this like RAM? That doesn't seem logical. The reason I ask is that I haven't encountered an example online that defines or specifies this secondary memory (for example, in something more concrete such as an XML file, or maybe even a huge array).
Memory storage is implementation-specific and not part of the algorithm per se. It is probably more useful to think about what you need to store rather than how to store it.
Consider a 3-layer multi-layer perceptron (fully connected) that has 3, 8, and 5 nodes in the input, hidden, and output layers, respectively (for this discussion, we can ignore bias inputs). Then a reasonable (and efficient) way to represent the needed weights is by two matrices: a 3x8 matrix for weights between the input and hidden layers and an 8x5 matrix for the weights between the hidden and output layers.
For this example, you need to store the weights and the network shape (number of nodes per layer). There are many ways you could store this information. It could be in an XML file or a user-defined binary file. If you were using python, you could save both matrices to a binary .npy file and encode the network shape in the file name. If you implemented the algorithm, it is up to you how to store the persistent data. If, on the other hand, you are using an existing machine learning software package, it probably has its own I/O functions for storing and loading a trained network.
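For instance, a minimal persistence sketch for the 3-8-5 example above, using plain NumPy (the file name and the random "trained" weights are just placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
W_in_hidden = rng.standard_normal((3, 8))    # stand-in for trained input->hidden weights
W_hidden_out = rng.standard_normal((8, 5))   # stand-in for trained hidden->output weights

# Save both matrices; the array names (and file name) encode the network shape.
np.savez("mlp_3_8_5.npz", W_in_hidden=W_in_hidden, W_hidden_out=W_hidden_out)

# Later (e.g. after restarting the program): load and rebuild the network.
data = np.load("mlp_3_8_5.npz")
assert np.array_equal(data["W_in_hidden"], W_in_hidden)
```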
Every Neural Network will have edge weights associated with it. These edge weights are adjusted during the training session of a Neural Network. I suppose your doubt is about storing these edge weights. Well, these values are stored separately in secondary memory so that they can be retained for future use in the Neural Network.
I would expect the discussion of the design of the model (the neural network) to be kept separate from the discussion of the implementation, where data requirements like durability are addressed.
A particular library or framework might have a specific answer about durable storage, but if you're rolling your own from scratch, then it's up to you.
For example, why not just write the trained weights and topology in a file? Something like YAML or XML could serve as a format.
Also, while we're talking about state/storage and neural networks, you might be interested in investigating associative memory.
This may be answered in two steps:
What is "memory" in a Neural Network (referred to as NN)?
As a neural network (NN) is trained, it builds a mathematical model that tells the NN what to give as output for a particular input. Think of what happens when you train someone to speak a new language. The human brain creates a model of the language. Similarly, an NN creates a mathematical model of what you are trying to teach it. It represents the mapping from input to output as a series of functions. This math model is the memory. This math model is the weights of the different edges in the network. Often, an NN is trained and these weights/connections are written to the hard disk (XML, YAML, CSV, etc.). Whenever the NN needs to be used, these values are read back and the network is recreated.
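If you use an existing framework rather than rolling your own, this write-out and read-back step is usually a single call; a hedged Keras example (the architecture and file name are made up):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# ... train the model here ...

model.save("my_network.keras")                               # weights + topology to disk
restored = tf.keras.models.load_model("my_network.keras")    # recreate the network later
```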
How can you make a network forget its memory?
Think of someone who has been taught two languages. Let us say the individual never speaks one of these languages for 15-20 years, but uses the other one every day. It is very likely that several new words will be learnt each day and many words of the less frequent language forgotten. The critical part here is that a human being is "learning" every day. In an NN, a similar phenomenon can be observed by training the network on new data. If the old data is not included in the new training samples, then the underlying math model will change so much that the old training data will no longer be represented in the model. It is possible to prevent an NN from "forgetting" the old model by changing the training process. However, this has the side effect that such an NN cannot learn completely new data samples.
I would say your approach is wrong. Neural Networks are not dumps of memory as we see on a computer. There are no addresses where a particular chunk of memory resides. All the neurons together make sure that a given input leads to a particular output.
Let's compare it with your brain. When you taste sugar, your tongue's taste buds are the input nodes: they read chemical signals and transmit electrical signals to the brain. The brain then determines the taste using the various combinations of electrical signals.
There are no lookup tables. There is no primary or secondary memory, only short-term and long-term memory.
