Is argmax from the probability distribution a better policy than random sampling from the softmax? - machine-learning

I am trying to train an Echo State Network for text generation with stochastic optimization along the lines of reinforcement learning, where the optimization depends on a reward signal.
I have observed that during evaluation, when I sample from the probability distribution, the BLEU score is higher than when I take the argmax of the distribution. The difference is roughly 0.10 points (BLEU scores generally lie in the range 0 to 1).
I am not sure why that happens. Any help is appreciated.

You don't use the argmax function because it is a deterministic approach, and the main issue with that is that it can easily get you trapped in a loop: if the model makes an error during text generation, it is likely to keep following that path with no way out. The randomness of sampling allows the model to "jump out" of the loop.
A good example illustrating this need for a jump-out is the PageRank algorithm. It uses a random-walk parameter that allows the imaginary surfer to get out of dead ends.
The TensorFlow team says this about it in their tutorials (without further justification):
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
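As a minimal illustration (with made-up logits), you can contrast greedy argmax decoding with sampling from the same distribution, e.g. in TensorFlow:

import tensorflow as tf

# Example next-token logits for a vocabulary of 4 tokens (illustrative values only).
logits = tf.constant([[2.0, 1.5, 0.3, 0.1]])

greedy_id = tf.argmax(logits, axis=-1)                      # always picks the same token
sampled_id = tf.random.categorical(logits, num_samples=1)   # stochastic choice, weighted by the softmax

print(greedy_id.numpy(), sampled_id.numpy())

The greedy choice is identical every time the model sees the same context, which is what lets it fall into repeating loops; the sampled choice occasionally picks a lower-probability token and breaks the cycle.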

Related

Training PPO from stable_baselines3 on a grid world that randomizes

I'm new to RL and I was hoping to get some advice from you all:
I created a custom environment that is a 10x10 grid world where the agent and its target destination (as well as some obstacles, namely fires) can be randomly placed. The state of the env that the model is trained on is just a Box numpy array representing the different positions (0s for empty spaces, 1 for the target, etc.).
[figure: example of what the world could look like]
The PPO model (from stable_baselines3) is unable to learn how to navigate randomly generated worlds even after 5 million time steps of training (each reset of the environment creates a new random world layout). TensorBoard shows only a very slight increase in average reward after all that training.
I am able to train the model effectively only if I keep the world layout the same on every reset (so no random placement of the agent, etc.).
So my question is: should PPO be in theory able to deal with random world generation like that or am I trying to make it do something that is beyond its capabilities?
More details: I'm using all default PPO parameters (with MlpPolicy).
The reward system is as follows:
On every step the reward is -0.5 * distance between the agent (smiley face) and the target ('$')
If the agent is next to a fire ('X'), it gets -100 reward
If the agent is next to the target ('$'), it gets a reward of 1000 and the episode ends
Max of 200 steps per episode.
I would rather try good old off-policy deterministic solutions like DQN for this task, but on-policy stochastic PPO should solve it as well. I recommend changing three things in your design; maybe they will help your training (a short sketch of the first two follows below).
First, your reward signal design probably "embarrasses" your network: you have an enormously large positive terminal reward while trying to push your agent to that terminal state as soon as possible with small punishments. I would definitely suggest reward normalization for PPO.
Second, if you don't fine-tune your PPO hyperparameters, your entropy coefficient ent_coef remains 0.0; however, the entropy part of the loss function could be very useful in your env. I would try 0.01 at least.
Third, PPO would really be enhanced in your case (in my opinion) if you change the MLP policy to a recurrent one. Take a look at RecurrentPPO (in sb3-contrib).
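A minimal sketch of the first two changes, assuming stable_baselines3 >= 2.x; GridWorldEnv is a hypothetical stand-in for your custom environment:

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

# Wrap your custom env (GridWorldEnv is a placeholder for your own class).
env = DummyVecEnv([lambda: GridWorldEnv()])
env = VecNormalize(env, norm_obs=False, norm_reward=True)   # normalize rewards only

# Non-zero entropy bonus to encourage exploration.
model = PPO("MlpPolicy", env, ent_coef=0.01, verbose=1)
model.learn(total_timesteps=5_000_000)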

Gaussian Process Regression use case

While reading the paper "Tactile-based active object discrimination and target object search in an unknown workspace", there is something that I just cannot understand:
The paper is about finding an object's position and other properties using only tactile information. In section 4.1.2 the author says that he uses GPR to guide the exploratory process, and in section 4.1.4 he describes how he trained his GPR:
Using the example from section 4.1.2, the input is (x, z) and the output is y.
Whenever there is a contact, the corresponding y-value is stored.
This procedure is repeated several times.
This trained GPR is used to estimate the next exploration point, which is the point where the variance is at its maximum.
In the following link you can also see a demonstration: https://www.youtube.com/watch?v=ZiLq3i-BJcA&t=177s . In the first part of the video (0:24-0:29), the initialization takes place: the robot samples 4 times. Then, over the next 25 seconds, the robot explores from the corresponding direction. I do not understand how this tiny initialization of the GPR can guide the exploratory process. Could someone please explain how the input points (x, z) for the first exploration phase are estimated?
Any regression algorithm simply maps the input (x, z) to an output y in some way unique to the specific algorithm. For a new input (x0, z0), the algorithm will likely predict something very close to the true output y0 if many similar data points were included in the training. If training data was only available in a vastly different region, the predictions will likely be very bad.
GPR additionally provides a measure of confidence in its predictions, namely the variance. The variance will naturally be very high in regions where no training data has been seen, and very low close to already-seen data points. If the 'experiment' takes much longer than evaluating the Gaussian process, you can use the Gaussian process fit to make sure you sample regions where you are very uncertain of your answer.
If the goal is to fully explore the entire input space, you could draw a lot of random values of (x, z) and evaluate the variance at these values. Then you could perform the costly experiment at the input point where you are most uncertain about y, retrain the GPR with all the data explored so far, and repeat the process.
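A minimal sketch of that loop using scikit-learn's GaussianProcessRegressor; run_experiment is a toy stand-in for the costly tactile measurement:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_experiment(xz):
    # Toy stand-in for the real (costly) tactile measurement at point (x, z).
    return np.sin(3 * xz[0]) * np.cos(3 * xz[1])

# Four initial samples, mirroring the initialization phase in the video.
X_seen = np.array([[0.1, 0.2], [0.8, 0.9], [0.5, 0.1], [0.2, 0.7]])
y_seen = np.array([run_experiment(xz) for xz in X_seen])

for _ in range(20):                                            # exploration budget
    gpr = GaussianProcessRegressor().fit(X_seen, y_seen)
    candidates = np.random.uniform(0.0, 1.0, size=(1000, 2))   # random (x, z) proposals
    _, std = gpr.predict(candidates, return_std=True)
    next_xz = candidates[np.argmax(std)]                       # most uncertain point
    X_seen = np.vstack([X_seen, next_xz])
    y_seen = np.append(y_seen, run_experiment(next_xz))

Even with only four initial points, the variance is already much lower near those points than far away from them, which is enough to direct the next measurement.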
For optimization problems (not the OP's question)
If you wish to find the lowest value of y across the input space, you are not interested in doing the experiment in regions that you already know give high values of y, even if you are uncertain of exactly how high those values are. So instead of choosing the (x, z) point with the highest variance, you might choose the point where the predicted y minus some multiple of the standard deviation is lowest, i.e. the point that could plausibly give the best result. Optimizing this way is called Bayesian optimization, and this confidence-bound scheme is usually referred to as the Upper (or Lower) Confidence Bound (UCB/LCB). Expected Improvement (EI), which weighs how much a candidate is expected to improve on the best value seen so far, is also commonly used.
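For illustration, a sketch of that acquisition step for minimization, assuming an sklearn-style GPR with predict(..., return_std=True); kappa trades off exploitation against exploration:

import numpy as np

def next_point(gpr, candidates, kappa=1.0):
    # Pick the candidate minimizing mean - kappa * std: a low predicted y and a high
    # uncertainty both make a point attractive when minimizing.
    mean, std = gpr.predict(candidates, return_std=True)
    return candidates[np.argmin(mean - kappa * std)]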

Is reinforcement learning suitable for predicting bias in dice?

I would like to analyze a problem similar to the following.
Problem:
You are given N dice.
You are given a lot of data about each die (e.g. surface information, material information, location of the center of gravity, etc.).
The characteristics of the dice are randomly generated every game, and the dice are thrown with the same speed, angle and initial position.
As the result of rolling a die, you get 1 point if it comes up 6 and 0 points otherwise.
There is training data from 100,000 games (die data and match results).
I would like to learn a rule for selecting only dice whose probability of rolling a 6 is higher than 1/6.
I apologize for the vague problem statement.
First of all, it was my mistake to say "N dice".
The dice may come one at a time:
A die with random characteristics is handed out.
When it is rolled, it is recorded whether a 6 came up or not.
It is easier to think of the problem as: there are 100,000 of these [characteristics, result] records.
If you roll something other than a 6, you get -1 points.
If you roll a 6, you get +5 points.
Example:
X: vector of die features
f: function I want to know
f: X-> [0, 1]
(if the result > 0.5, I pick this die.)
For example, a die with a 1/5 chance of rolling a 6 will come up non-6 in 4 out of 5 rolls, so I wondered whether it would be better to give an immediate reward.
Or is it better to decide the reward by the number of points after 100,000 games?
I have read about some general reinforcement learning methods, but they involve the concept of state transitions. There are no state transitions in this game: each game ends in one step, and each game is independent.
I am a student just starting to learn neural networks from scratch. It would help if you could give me a hint. Thank you.
By the way, I expect the result of this learning to be the conclusion "it is good to choose the die whose face farthest from the center of gravity is the 6."
Let's first talk about Reinforcement-Learning.
Problem setups, in order of increasing generality:
Multi-Armed Bandit - no state, just actions with unknown rewards
Contextual Bandit - rewards also depend on some context (state)
Reinforcement Learning (MDP) - actions can also influence the next state
Common to all three is that you want to maximize the sum of rewards over time, and there is an exploration vs. exploitation trade-off. You are not just given a large dataset: if you want to know what the best action is, you have to try it a few times and observe the reward, and this may cost you some reward you could have earned otherwise.
Of those three, the contextual bandit is the closest match to your setup, although it doesn't quite match your goals. It would go like this: given some properties of the dice (the context), select the best die from a group of possible choices (the action, e.g. the output of your network), such that you get the highest expected reward. At the same time you are also training your network, so you sometimes have to pick dice with bad or unknown properties in order to explore them.
However, there are two reasons why it doesn't match:
You already have data from 100,000 games, and you do not seem to be interested in minimizing the cost of trial and error to acquire more data. You assume this data is representative, so no exploration is required.
You are only interested in prediction. You want to classify the dice into "good for rolling a 6" vs. "bad". This piece of information could be used later to decide between different choices if you know the cost of making a wrong decision. If you are just learning f() because you are curious about a property of the dice, then this is a pure statistical prediction problem: you don't have to worry about short- or long-term rewards, and you don't have to worry about the selection or consequences of any actions.
Because of this, you actually have a plain supervised learning problem. You could still solve it with reinforcement learning, because RL is more general, but your RL algorithm would waste a lot of time figuring out that it really cannot influence the next state.
Supervised Learning
Each die actually behaves like a biased coin: each roll is a Bernoulli trial with roughly 1/6 success probability. This makes it a standard classification problem: given your features, predict the probability that a die will lead to a good match result.
It seems that your "match results" can easily be converted into the number of rolls and the number of positive outcomes (rolled a six) for the same die. If you have a large number of rolls for every die, you can simply classify that die and use the class (together with the physical properties) as one data point to train your network.
You can do fancier things if you have fewer rolls, but I won't go into that. (If you are interested, have a look at the beta distribution and at how the cross-entropy loss works with neural networks.)
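As a rough sketch of the supervised framing, with synthetic stand-in features since I don't know yours, a plain logistic regression already captures f: X -> [0, 1] and the "better than 1/6" decision:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 8))          # stand-in for per-die physical features
p_six = 1 / (1 + np.exp(-X[:, 0]))         # toy ground-truth bias, for illustration only
rolled_six = rng.random(100_000) < p_six   # one recorded roll per die (0/1 outcome)

clf = LogisticRegression().fit(X, rolled_six)        # train f: X -> P(rolls a 6)
picked = clf.predict_proba(X)[:, 1] > 1 / 6          # pick dice predicted better than fair

With your real data, X would hold the recorded die characteristics and rolled_six the recorded match results; a neural network with a sigmoid output and cross-entropy loss fits the same framing.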

When should one set staircase to True when decaying the learning rate in TensorFlow?

Recall that when exponentially decaying the learning rate in TensorFlow one does:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
the docs mention this staircase option as:
If the argument staircase is True, then global_step / decay_steps is an integer division and the decayed learning rate follows a staircase function.
When is it better to decay every X number of steps, following a staircase function, rather than using a smoother version that decays more and more with every step?
The existing answers didn't seem to describe this. There are two different behaviors being described as 'staircase' behavior.
From the feature request for staircase, the behavior is described as being a hand-tuned piecewise constant decay rate, so that a user could provide a set of iteration boundaries and a set of decay rates, to have the decay rate jump to the specified value after the iterations pass a given boundary.
If you look into the actual code for this feature pull request, you'll see that the PR isn't related much to the staircase option in the function arguments. Instead, it defines a wholly separate piecewise_constant operation, and the associated unit test shows how to define your own custom learning rate as a piecewise constant with learning_rate_decay.piecewise_constant.
From the documentation on decaying the learning rate, the behavior is described as treating global_step / decay_steps as integer division, so for the first decay_steps steps the division results in 0 and the learning rate is constant. Once you cross the decay_steps-th iteration, you get the decay rate raised to the power of 1, then 2, etc. So you only observe decay rates at those integer powers, rather than smoothly varying across all powers as you would if you treated the global step as a float.
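As a small numeric illustration of that integer division (values chosen arbitrarily):

initial_lr, decay_rate, decay_steps = 0.1, 0.5, 1000

def decayed_lr(global_step, staircase=False):
    # staircase=True floors the exponent, so the rate only drops once every decay_steps steps
    exponent = global_step // decay_steps if staircase else global_step / decay_steps
    return initial_lr * decay_rate ** exponent

for step in (0, 500, 1000, 1500, 2000):
    print(step, decayed_lr(step), decayed_lr(step, staircase=True))

The smooth version already halves the rate partway through the first 1000 steps, while the staircase version holds it at 0.1 until step 1000 and then drops it in one jump.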
As for advantages, this is just a hyperparameter decision you should make based on your problem. Using the staircase option lets you hold the decay rate constant for a while, essentially like maintaining a higher temperature in simulated annealing for a longer time. This can let you explore more of the solution space by taking bigger strides in the gradient direction, at the cost of possibly noisy or unproductive updates. Meanwhile, smoothly increasing the decay-rate power will steadily "cool" the exploration, which can limit you by getting you stuck near a local optimum, but it can also prevent you from wasting time on noisily large gradient steps.
Whether one approach or the other is better (a) often doesn't matter very much and (b) usually needs to be specially tuned in the cases when it might matter.
Separately, as the feature request link mentions, the piecewise constant operation seems to be for very specifically tuned use cases, when you have separate evidence in favor of a hand-tuned decay rate based on collecting training metrics as a function of iteration. I would generally not recommend that for general use.
Good question.
As far as I know, it is a preference of the research group.
In the old days it was computationally more efficient to reduce the learning rate only every epoch, which is why some people still prefer to do it that way.
Another, hand-wavy story that people may tell is that it helps escape local optima: by "suddenly" changing the learning rate, the weights might jump to a better basin. (I don't agree with this, but add it for completeness.)

HMM application in speech recognition

This is my first time posting a question here, so if the approach is not standard I apologize. I understand there are lots of questions out there on this, and I have read tons of theses, questions, articles and tutorials, yet I still seem to have a problem and it's always best to ask. I am creating a speech recognition application using phoneme-level processing (not isolated words) with continuous HMMs based on Gaussian mixture models, involving the Baum-Welch, forward-backward, and Viterbi algorithms.
I have implemented a good feature extraction and pre-processing pipeline (MFCC); the feature vectors consist of the MFCCs plus delta and acceleration coefficients, and that part is working pretty well. However, when it comes to the HMMs, I seem to either have a major misunderstanding about how HMMs are supposed to help recognize speech, or I am missing a small point. I have tried so hard that at this point I can't really tell what's wrong and what's right.
First off, I recorded around 50 words, 6 utterances each, ran them through a compatibility and conversion program that I wrote myself, and then extracted the features so that they can be used for Baum-Welch.
Please tell me where I am making a mistake in this procedure. I will also mention a few doubts I have so that you can help me understand the whole subject better.
Here are the steps in my application concerning everything related to training:
Steps for the initial parameters of the HMM models:
1 - Assign all observations from each training sample of each model to their corresponding discrete state (in other words, decide which feature vector belongs to which alphabetic state).
2 - Use k-means to find the initial continuous emission parameters; clustering is done over all observations of each state, with a cluster count of 6 (equal to the number of mixtures in the probability density function). The parameters are the sample means, sample covariances and mixture weights of each cluster.
3 - Create the initial state and transition probability matrices for each model and training sample individually (a left-right structure is used here): transition probability 0 to previous states and 1 for up to one next state, and an initial probability of 1 for the first state and 0 for the others.
4 - Calculate the Gaussian-mixture-based probability density function for each state and its corresponding cluster, applied to all the vectors in all the training samples of each model.
5 - Calculate the initial emission parameters using the PDFs and the mixture weights of the clusters.
6 - Calculate the gamma variables using the initial parameters (transitions, emissions, initials) in the forward-backward algorithm and the initial PDFs, using the continuous formula for gamma (gamma = probability of being in a certain state at a certain time, for any of the mixtures).
7 - Estimate new initial state probabilities.
8 - Estimate new state transitions.
9 - Estimate new sample means.
10 - Estimate new sample covariances.
11 - Estimate new PDFs.
12 - Estimate new emissions using the new PDFs.
Repeat steps 6 to 12 with the newly estimated values on each iteration, use Viterbi to get an overview of how the estimation is going, and when the probability stops changing, stop and save.
Now, my issues:
First, I don't know whether the entire procedure I have followed is correct, or whether there is a better way to approach this. All I know is that convergence is pretty fast: after 4-5 iterations the values are already not changing anymore. However, assuming the procedure is right:
It's not practical for me to sit down and pre-assign each feature vector to its state at step 1, and I don't think it's a standard procedure either. I don't even know if I have to do it at all; from all my studies it was simply the best method I could find to get rapid convergence.
Second, say this whole Baum-Welch procedure has done a great job of re-estimating and finding local maxima. What raises my doubt about my Baum-Welch implementation is: how are the results later going to help me recognize speech? I assume the estimated parameters are used in Viterbi to find the optimal state sequence for every spoken utterance. If so, the emission parameters are not known for new input, because if you look closely you will see that the final emission parameters in my algorithm assign each alphabetic state of each model to exactly the signals observed for that model. Beyond that, no emission parameters can be found if the signal does not exactly match the ones used in re-estimation, so it obviously won't work, and any attempt to match the signals and look up emissions would make the whole HMM lose its purpose.
Then again, I might have a wrong idea about almost everything here. I would appreciate it if you could help me understand what I am doing wrong; if ANYTHING is wrong, please let me know. Thank you.
You're attempting to determine the most likely set of phonemes that would have generated the sounds that you're observing - you're not attempting to work out emission parameters, you're working out the most likely set of inputs that would have produced them.
Also, your input corpus is quite small, so it's unsurprising that it converges so quickly. If you're doing this while affiliated with a university, see whether they have access to one of the larger speech corpora commonly used to train this kind of algorithm.
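To make the recognition step concrete, here is a rough sketch (not your implementation) using the hmmlearn library: one GMM-HMM is trained per word from its MFCC sequences, and at recognition time each trained model scores the new utterance; the model with the highest log-likelihood wins. Function and variable names are illustrative.

import numpy as np
from hmmlearn.hmm import GMMHMM

def train_word_model(feature_sequences, n_states=5, n_mix=6):
    # feature_sequences: list of (T_i, n_features) MFCC arrays for one word
    X = np.vstack(feature_sequences)
    lengths = [len(seq) for seq in feature_sequences]
    model = GMMHMM(n_components=n_states, n_mix=n_mix, covariance_type="diag")
    model.fit(X, lengths=lengths)          # Baum-Welch (EM) re-estimation
    return model

def recognize(models, mfcc_sequence):
    # models: dict mapping each word label to its trained GMMHMM.
    # Score the observation sequence under every model and return the best label.
    scores = {label: m.score(mfcc_sequence) for label, m in models.items()}
    return max(scores, key=scores.get)

The emission densities learned during training are evaluated on the new, never-seen feature vectors; nothing has to "match exactly", which is the part the question seems to be missing.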
