How to make trained neural networks with nondeterministic output? - machine-learning

I'm trying to figure out how to train a neural network to play a puzzle game, given no prior knowledge of the rules. I've previously used neural networks as classifiers, but they're essentially deterministic once trained. This is fine for action games like Mario, but there's a starting problem with puzzles or games like Go or chess, where the initial setup is always the same. If the network's first attempt produces an illegal move, the board won't change, so a deterministic system will just keep trying the same illegal move.
Are there particular methods people use to make such a network try different outputs if the first is rejected? I considered having "number of moves" and/or "number of illegal moves blocked" inputs, to give a sense of time, but presumably there are more sophisticated techniques out there!

In your case, you would need techniques from reinforcement learning, such as deep Q-learning. These techniques learn a policy that need not be deterministic: during training the agent explores, for example by occasionally picking a random action (epsilon-greedy selection), so a rejected move does not get repeated forever. The policy first absorbs the rules of the game and eventually settles on a strategy that maximises the payoff. Of course, the environment will have to be properly defined for these methods to be effective.
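For illustration, here is a minimal sketch of epsilon-greedy action selection over legal moves, the kind of exploration deep Q-learning relies on. The function name and the masking scheme are my own, not from any specific library.

```python
import random

import numpy as np


def choose_move(q_values, legal_mask, epsilon=0.1):
    """Epsilon-greedy action selection restricted to legal moves.

    q_values:   1-D array of action values estimated by the network.
    legal_mask: boolean array, True where the move is legal.
    epsilon:    probability of exploring a random legal move.
    """
    legal_actions = np.flatnonzero(legal_mask)
    if random.random() < epsilon:
        return int(random.choice(list(legal_actions)))  # explore
    masked = np.where(legal_mask, q_values, -np.inf)    # hide illegal moves
    return int(np.argmax(masked))                       # exploit

# Example: four candidate moves, move 2 is illegal.
q = np.array([0.1, 0.5, 0.9, 0.3])
mask = np.array([True, True, False, True])
print(choose_move(q, mask, epsilon=0.2))
```

In practice, epsilon is usually decayed over training, so the agent explores heavily at first and mostly exploits its learned policy later.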

Related

What is the difference between machine learning and deep learning in building a chatbot?

To be more specific, the traditional chatbot framework consists of three components:
NLU (1. intent classification, 2. entity recognition)
Dialogue Management (1. DST, 2. dialogue policy)
NLG
I am confused: if I use a deep learning model (seq2seq, LSTM, Transformer, attention, BERT…) to train a chatbot, does it cover all three of those components? If so, could you explain more specifically how it relates to those three parts? If not, how can I combine them?
For example, I have built a closed-domain chatbot, but it is only task-oriented and cannot handle other parts like greetings… It also can't handle coreference resolution (it seems to have no Dialogue Management).
It seems like your question can be split into two smaller questions:
What is the difference between machine learning and deep learning?
How does deep learning factor into each of the three components of chatbot frameworks?
For #1, deep learning is a kind of machine learning. Think of your task as a graphing problem: you transform your data so it has an n-dimensional representation on a plot, and the goal of the algorithm is to find a function, a boundary drawn on the plot, that (ideally) cleanly separates the points from one another. Each region of the graph represents whatever output you want (a class/label, related words, etc.). Basic machine learning draws a straight line, which works on "linearly separable" problems (i.e. it's easy to draw a line that cleanly separates the categories). Deep learning lets you tackle problems where the boundary is not so clean by composing many simple transformations into a really complex function, which is what stacking layers does. This is a very surface-level look at what deep learning does, but it should be enough for the first part of your question.
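To make the "line" picture concrete, here is a small sketch (assuming scikit-learn is available): a linear model stalls at chance level on XOR-style data, which no straight line separates, while a small multi-layer network separates it easily.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# XOR-style data: no single straight line separates the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]] * 50, dtype=float)
y = np.array([0, 1, 1, 0] * 50)
X += np.random.normal(scale=0.05, size=X.shape)  # small jitter

linear = LogisticRegression().fit(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                    max_iter=2000, random_state=0).fit(X, y)

print("linear accuracy:", linear.score(X, y))  # ~0.5, chance level
print("MLP accuracy:   ", mlp.score(X, y))     # close to 1.0
```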
For #2, a good quick answer for you is that deep learning can be a part of each component of the chatbot framework depending on how complex your task is. If it's easy, then classical machine learning might be good enough to solve your problem. If it's hard, then you can begin to look into deep learning solutions.
Since it sounds like you want the chatbot to go a bit beyond simple input-output matching and handle complicated semantics like coreference resolution, your task seems sufficiently difficult and a good candidate for a deep learning solution. I wouldn't worry so much about identifying a specific solution for each of the chatbot framework steps because the tasks involved in each of those steps blend into one another with deep learning (e.g. a deep learning solution wouldn't need to classify intent and then manage dialogue, it would simply learn from hundreds of thousands of similar situations and apply a variation of the most similar response).
I would recommend handling the problem as a translation problem, but instead of translating from one language to another, you're translating from the input query to the output response. Translation frequently needs to resolve coreference, and the solutions people have used there might be an ideal course of action for you.
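As a rough illustration of that framing, here is a sketch using the Hugging Face transformers library. The t5-small model is only a stand-in; a real chatbot would be fine-tuned on (query, response) pairs from your own domain.

```python
# pip install transformers torch
from transformers import pipeline

# Treat response generation as text-to-text "translation":
# input query in, output response out. t5-small is a placeholder
# model here; untuned, it will not produce good chatbot replies.
responder = pipeline("text2text-generation", model="t5-small")

query = "respond to customer: my card was charged twice"
print(responder(query, max_length=64)[0]["generated_text"])
```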
Here are some excellent resources to read up on in order to frame your problem and how to solve it:
Google's Neural Machine Translation
Fine Tuning Tasks with BERT
There is always a trade-off between using traditional machine learning models and using deep learning models.
Deep learning models require large amounts of data to train, and both training and inference take longer, but they generally give better results.
Traditional ML models work well with less data, with comparatively moderate performance, and their inference time is lower.
For chatbots, latency matters a lot, and how much latency is acceptable depends on the application/domain.
If the domain is banking or finance, people are okay with waiting a few seconds, but they are not okay with wrong results. In the entertainment domain, on the other hand, you need to deliver results as quickly as possible.
The decision depends on the application domain, the amount of data you have, and the expected precision.
RASA is something worth looking into.

train a neural network on real subject input/output to have it behave similarly to subject

The goal is to create an AI to play a simple game: tracking a horizontally moving dot across the screen that increases in speed until it can no longer be tracked.
I would like to create an AI that behaves similarly to a real test subject. I have a large number of trials recorded over many months: the position of the dot on screen and the user's cursor position over time.
I would like to train the network on these trials so that it behaves similarly to a real test subject, and I can then generate very large amounts of test data to observe how changing the parameters of the game affects the network's ability to track the moving dot.
I am interested in learning about the underlying code of neural networks and would love some advice on where to start with this project. I understand AIs can get very good at performing different tasks such as Snake or other simple games, but my goal is to have the AI perform similarly to a real test subject.
Your question is a bit broad, but I'll try to answer nonetheless.
To imitate a subject's behavior you could use an LSTM network, which keeps an internal state (in your case the state may include how fast and in which direction the dot is moving, and where the pointer is) and then decides on an action. You will need to feed your data (the dot coordinates and the user's behavior) into the network.
A simpler yet effective approach would be a plain MLP network; a sketch follows below. Your problem does not seem like a hard one, and a simple network should be able to learn what a user would do in a given situation. However, depending on what you mean by "perform similarly to a real test subject", you might need a more complex architecture.
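A minimal sketch of that MLP route, assuming scikit-learn and trials already flattened into (game state, next cursor position) pairs. The three features used here are illustrative, and random data stands in for your recordings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative features per time step: dot x, dot velocity, current
# cursor x. Target: the subject's cursor x at the next time step.
# X and y would come from your recorded trials; random data stands in.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 3))
y = X[:, 0] * 0.9 + rng.normal(scale=0.05, size=10_000)  # fake "subject"

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500,
                     random_state=0)
model.fit(X, y)

# At simulation time, feed the current game state to get the model's
# imitation of the subject's next cursor position.
state = np.array([[0.4, -0.1, 0.35]])
print(model.predict(state))
```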
Finally there are GANs, which are somewhat complicated if you're not familiar with NNs, are hard and time-consuming to train, and in some cases may fail to train at all. The bright side is that they are designed exactly to imitate a probability distribution (or, to put it more simply, a set of data).
There are two more important notes to mention:
The performance of your network depends heavily on your data and on the game. For example, if in your dataset users reacted very differently to the same situation, an MLP or LSTM will not be able to learn all of those reactions.
Your network can only imitate what it is taught. So if you're planning to figure out what a human would do under conditions that never occur in your dataset (e.g. if in your dataset the dot only moves in a line but you want it to move in a circle when experimenting), you won't get good results.
Hope this helps.

Clarification on a Neural Net that plays Snake

I'm new to neural networks/machine learning/genetic algorithms, and for my first implementation I am writing a network that learns to play Snake (an example in case you haven't played it before). I have a few questions that I don't fully understand:
Before my questions, I just want to make sure I understand the general idea correctly. There is a population of snakes, each with randomly generated DNA. The DNA is the set of weights used in the neural network. Each time the snake moves, it uses the neural net to decide where to go (using a bias). When the population dies, select some parents (maybe the highest fitness), and cross over their DNA with a small mutation chance.
1) If given the whole board as an input (about 400 spots), enough hidden layers (no idea how many, maybe 256-64-32-2?), and enough time, would it learn not to box itself in?
2) What would be good inputs? Here are some of my ideas:
400 inputs, one for each space on the board: positive if the snake should go there (the apple) and negative if it is a wall/its body, with magnitude closer to 1 the closer it is (see the sketch after this list).
6 inputs: game width, game height, snake x, snake y, apple x, and apple y (it might learn to play on different-sized boards if trained that way, but I'm not sure how to input its body, since it changes size).
Give it a field of view (maybe a 3x3 square in front of the head) that can alert the snake to a wall, apple, or its body (the snake would only be able to see what's right in front of it, unfortunately, which could hinder its learning ability).
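For instance, the first option could be encoded roughly like this. This is a hypothetical sketch: the 20x20 board size, the +1/-1 values, and appending the head position are all illustrative choices, and the distance scaling mentioned above is omitted for brevity.

```python
import numpy as np

BOARD = 20  # assumed 20x20 board, giving the ~400 inputs above

def encode_board(apple, body, head):
    """Flatten the board into one input value per cell."""
    grid = np.zeros((BOARD, BOARD), dtype=np.float32)
    grid[apple] = 1.0            # apple: go here
    for segment in body:
        grid[segment] = -1.0     # body: avoid
    # Walls live off-grid; appending the head position lets the
    # network infer proximity to the edges.
    return np.concatenate([grid.ravel(),
                           np.array(head, dtype=np.float32)])

x = encode_board(apple=(3, 7), body=[(10, 10), (10, 11)], head=(10, 10))
print(x.shape)  # (402,)
```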
3) Given the input method, what would be a good starting place for hidden layer sizes? (Of course I plan on tweaking this; I just don't know what a good starting place would be.)
4) Finally, the fitness of the snake: besides time to get the apple, its length, and its lifetime, should anything else be factored in? In order to get the snake to learn not to block itself in, is there anything else I could add to the fitness to help with that?
Thank you!
In this post, I will advise you on:
How to map navigational instructions to action sequences with an LSTM neural network
Resources that will help you learn how to use neural networks to accomplish your task
How to install and configure neural network libraries, based on what I needed to learn the hard way
General opinion of your idea:
I can see what you're trying to do, and I believe your game idea (using randomly generated adversary identities that control behavior, so the AI-driven opponents vary from run to run) has a lot of potential.
Mapping navigational instructions to action sequences with a neural network
For processing your game board, because it involves dense (as opposed to sparse) data, you could find a Convolutional Neural Network (CNN) to be useful. However, because you need to translate the map to an action sequence, sequence-optimized neural networks (such as Recurrent Neural Networks) will likely be the most useful for you. I did find some studies that use neural networks to map navigational instructions to action sequences, construct the game map, and move a character through a game with many types of inputs:
Mei, H., Bansal, M., & Walter, M. R. (2015). Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. arXiv preprint arXiv:1506.04089. Available at: Listen, Attend, and Walk: Neural Mapping of Navigational Instructions to Action Sequences
Summerville, A., & Mateas, M. (2016). Super Mario as a string: Platformer level generation via LSTMs. arXiv preprint arXiv:1603.00930. Available at: Super Mario as a String: Platformer Level Generation Via LSTMs
Lample, G., & Chaplot, D. S. (2016). Playing FPS games with deep reinforcement learning. arXiv preprint arXiv:1609.05521. Available at: Playing FPS Games with Deep Reinforcement Learning
Schulz, R., Talbot, B., Lam, O., Dayoub, F., Corke, P., Upcroft, B., & Wyeth, G. (2015, May). Robot navigation using human cues: A robot navigation system for symbolic goal-directed exploration. In Robotics and Automation (ICRA), 2015 IEEE International Conference on (pp. 1100-1105). IEEE. Available at: Robot Navigation Using Human Cues: A robot navigation system for symbolic goal-directed exploration
General opinion of what will help you
It sounds like you're missing some basic understanding of how neural networks work, so my primary recommendation to you is to study more of the underlying mechanics behind neural networks in general. It's important to keep in mind that a neural network is a type of machine learning model. So, it doesn't really make sense to just construct a neural network with random parameters. A neural network is a machine learning model that is trained from sample data, and once it is trained, it can be evaluated on test data (e.g. to perform predictions).
The root of machine learning is largely influenced by Bayesian statistics, so you might benefit from getting a textbook on Bayesian statistics to gain a deeper understanding of how machine-based classification works in general.
It will also be valuable for you to learn the differences between different types of neural networks, such as Long Short Term Memory (LSTM) and Convolutional Neural Networks (CNNs).
If you want to tinker with how neural networks can be used for classification tasks, try this:
Tensorflow Playground
To learn the math:
My professional opinion is that learning the underlying math of neural networks is very important. If it's intimidating, I give you my testimony that I was able to learn all of it on my own. But if you prefer learning in a classroom environment, then I recommend that you try that. A great resource and textbook for learning the mechanics and mathematics of neural networks is:
Neural Networks and Deep Learning
Tutorials for neural network libraries
I recommend that you try working through the tutorials for a neural network library, such as:
TensorFlow tutorials
Deep Learning tutorials with Theano
CNTK tutorials (CNTK 205: Artistic Style Transfer is particularly cool.)
Keras tutorial (Keras is a powerful high-level neural network library that can use either TensorFlow or Theano.)
I've seen a similar application. The inputs were usually the snake's coordinates, the apple's coordinates, and some sensory data (in your case, whether a wall is next to the snake's head).
Using a genetic algorithm is a good idea in this case. You are doing only parametric learning (finding a set of weights), while the structure is based on your own estimate; a sketch of this setup follows below. A GA can also be used for structure learning (finding the topology of the ANN), but using a GA for both would be computationally very hard.
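Here is a rough sketch of that parametric setup: a GA evolving only the weight vector of a fixed topology. `play_game` is a hypothetical stand-in for your snake simulation, and the topology, population size, selection rule, and mutation scale are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WEIGHTS = 400 * 32 + 32 * 4      # fixed topology: 400 -> 32 -> 4
POP, GENERATIONS, MUT_SCALE = 50, 100, 0.1

def play_game(weights):
    """Stand-in fitness: build a network from these weights, run one
    snake game, and return e.g. apples eaten + survival time.
    Replace with your actual simulation."""
    return -np.sum(weights ** 2)   # dummy objective for the sketch

population = rng.normal(size=(POP, N_WEIGHTS))
for gen in range(GENERATIONS):
    fitness = np.array([play_game(w) for w in population])
    parents = population[np.argsort(fitness)[-POP // 5:]]  # keep top 20%
    children = []
    while len(children) < POP:
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_WEIGHTS) < 0.5                 # uniform crossover
        child = np.where(mask, a, b)
        child += rng.normal(scale=MUT_SCALE, size=N_WEIGHTS)  # mutation
        children.append(child)
    population = np.array(children)
```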
Professor Floreano did something similar. He used a GA to find the weights for a neural-network controller of a robot. The robot was in a labyrinth and had to perform some task. The network's hidden layer was a single neuron with recurrent connections on its inputs and one lateral connection to itself. There were two outputs, which were connected to the input layer and to the hidden layer (the single neuron mentioned above).
But Floreano did something more interesting. He argued that we aren't born with fixed synapses; our synapses change over our lifetime. So he used the GA to find rules for how synapses change, based on Hebbian learning, with node encoding (the same rule applies to all weights connected to a given neuron). At the beginning, he initialized the weights to small random values. Finding rules, instead of numerical synapse values, led to better results.
One of Floreano's articles.
Finally, from my own experience: last semester a schoolmate and I were given the task of finding synapse-update rules with a GA, but for a spiking neural network. Our SNN was a controller for a kinematic model of a mobile robot, and the task was to lead the robot to a chosen point. We obtained some results, but not what we expected (you can see the results here). So I recommend you use an "ordinary" ANN instead of an SNN, because SNNs bring new phenomena.

Should the neurons in a neural network be asynchronous?

I am designing a neural network and am trying to determine if I should write it in such a way that each neuron is its own 'process' in Erlang, or if I should just go with C++ and run a network in one thread (I would still use all my cores by running an instance of each network in its own thread).
Is there a good reason to give up the speed of C++ for the asynchronous neurons that Erlang offers?
I'm not sure I understand what you're trying to do. An artificial neural network is essentially represented by the weights of the connections between nodes. The nodes themselves don't exist in isolation; their values are only calculated (at least in feed-forward networks) through the forward-propagation algorithm, when the network is given input.
The backpropagation algorithm for updating weights is definitely parallelizable, but that doesn't seem to be what you're describing.
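To make that concrete, here is a minimal feed-forward pass written as plain matrix operations (the layer sizes are arbitrary). The "neurons" are just the intermediate vectors computed along the way.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A network is just its weight matrices (plus biases); neuron values
# are computed on demand during forward propagation.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # input(8) -> hidden(16)
W2, b2 = rng.normal(size=(4, 16)), np.zeros(4)    # hidden(16) -> output(4)

x = rng.normal(size=8)
hidden = sigmoid(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)
print(output)
```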
The point of the neurons in a neural network (NN) is to give you a multi-dimensional matrix whose coefficients you want to handle (train them, change them, adapt them little by little so that they fit the problem you want to solve). On this matrix you can apply numerical methods (proven and efficient) to find an acceptable solution in an acceptable time.
IMHO, with NNs (namely with the back-propagation training method), the goal is to have a matrix that is efficient both at run/predict time and at training time.
I don't grasp the point of having asynchronous neurons. What would it offer? What issue would it solve?
Maybe you could explain clearly what problem you would solve by making them asynchronous?
I am indeed inverting your question: what do you hope to gain from asynchronicity compared with traditional NN techniques?
It would depend upon your use case: the neural network computational model and your execution environment. Here is a recent paper (2014) by Plotnikova et al. that uses "Erlang and platform Erlang/OTP with predefined base implementation of actor model functions" and a new model developed by the authors that they describe as “one neuron—one process”, using the "Gravitation Search Algorithm" for training:
http://link.springer.com/chapter/10.1007%2F978-3-319-06764-3_52
To briefly cite their abstract, "The paper develops asynchronous distributed modification of this algorithm and presents the results of experiments. The proposed architecture shows the performance increase for distributed systems with different environment parameters (high-performance cluster and local network with a slow interconnection bus)."
Also, most other answers here reference a computational model that uses matrix operations as the basis of training and simulation; the authors of this paper contrast that by saying, "this case neural network model [ie matrix operations based] becomes fully mathematical and its original nature (from neural networks biological prototypes) gets lost".
The tests were run on three types of systems:
An IBM cluster, represented as 15 virtual machines.
A distributed system deployed to the local network, represented as 15 physical machines.
A hybrid system based on system 2, but where each physical machine has four processor cores.
They provide the following concrete results, "The presented results evidence a good distribution ability of gravitation search, especially for large networks (801 and more neurons). Acceleration depends on the node count almost linearly. If we use 15 nodes we can get about eight times acceleration of the training process."
Finally, they conclude regarding their model, "The model includes three abstraction levels: NNET, MLP and NEURON. Such architecture allows encapsulating some general features on general levels and some specific for the considered neural networks features on special levels. Asynchronous message passing between levels allow to differentiate synchronous and asynchronous parts of training and simulation algorithms and, as a result, to improve the use of resources."
It depends what you are after.
2nd Generation of Neural Networks are synchronous. They perform computations on an input-output basis without a delay, and can be trained either through reinforcement or back-propagation. This is the prevailing type of ANN at the moment and the easiest to get started with if you are trying to solve a problem via machine learning, lots of literature and examples available.
3rd Generation of Neural Networks (so-called "Spiking Neural Networks") are asynchronous. Signals propagate internally through the network as a chain-reaction of spiking events, and can create interesting patterns and oscillations depending on the shape of the network. While they model biological brains more closely they are also harder to make use of in a practical setting.
I think that async computation for NNs might prove beneficial for (recognition) performance. In fact, the effect might be similar to (though perhaps less pronounced than) using dropout.
But a straightforward implementation of async NNs would be much slower, because for synchronous NNs you can use linear algebra libraries, which make good use of vectorization and GPUs.
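A rough illustration of that speed argument: computing the same layer once as a single matrix product and once neuron by neuron in a Python loop (exact timings will vary by machine, but the loop overhead alone makes the second form far slower).

```python
import timeit

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1024, 1024))
x = rng.normal(size=1024)

def vectorized():
    return W @ x  # whole layer as one synchronous matrix product

def per_neuron():
    # One activation at a time, the per-neuron analogue.
    return [float(W[i] @ x) for i in range(W.shape[0])]

print("vectorized:", timeit.timeit(vectorized, number=100))
print("per neuron:", timeit.timeit(per_neuron, number=100))
```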

How to model a for loop in a neural network

I am currently in the process of learning neural networks and can understand basic examples like AND, OR, Addition, Multiplication, etc.
Right now, I am trying to build a neural network that takes two inputs, x and n, and computes pow(x, n). This would require the network to have some form of loop, and I am not sure how to model a network with a loop.
Can this sort of computation be modelled with a neural network? I assume it is possible, based on the recently released Neural Turing Machines paper, but I'm not sure how. Any pointers on this would be very helpful.
Thanks!
Feedforward neural nets are not Turing-complete, and in particular they cannot model loops of arbitrary length. However, if you fix the maximum n that you want to handle, then you can set up an architecture that can model loops with up to n repetitions. For instance, you could imagine each layer acting as one iteration of the loop, so you might need n layers.
For a more general architecture that can be made Turing-complete, you could use Recurrent Neural Networks (RNNs). One popular instance of this class is the Long Short-Term Memory (LSTM) network by Hochreiter and Schmidhuber. Training such RNNs is quite different from training classical feedforward networks, though.
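To make the unrolled-loop idea concrete, here is a hand-wired analogue, not a trained network: a fixed number of "layers", each of which either multiplies the running product by x or passes it through unchanged. This gating behaviour is exactly what a trained network with n layers would have to learn.

```python
def pow_unrolled(x, n, max_steps=16):
    """Loop unrolled into a fixed number of steps ("layers").
    Each step multiplies the running product by x while the step
    index is below n, and passes it through unchanged afterwards."""
    result = 1.0
    for step in range(max_steps):   # fixed depth, like fixed layers
        active = 1.0 if step < n else 0.0
        result = active * (result * x) + (1.0 - active) * result
    return result

print(pow_unrolled(2.0, 10))  # 1024.0, provided n <= max_steps
```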
As you pointed out, Neural Turing Machines seem to work well for learning basic algorithms. For instance, the repeat-copy task implemented in the paper suggests that an NTM can learn the algorithm itself. As of now, NTMs have been used only for simple tasks, so probing their scope with pow(x, n) would be interesting, given that repeat copy works well. I suggest reading Reinforcement Learning Neural Turing Machines - Revised for a deeper understanding.
Also, recent developments in the area of Memory Networks enable more complicated tasks, so getting a neural network to understand pow(x, n) might well be possible. Go ahead and give it a shot!
