What are the things that I should save to a file/db with Reinforcement Learning? - machine-learning

I'm trying to get into machine learning, and decided to try things out for myself. I wrote a small tic-tac-toe game. So far, the computer plays against itself using random moves.
Now, I want to apply reinforcement learning by writing an agent that will explore or exploit based on the knowledge it has on the current state of the board.
The part I don't understand is this:
What does the agent use to train itself for the current state? Let's say an RNG bot (the 'o' player) does this:
[..][..][..]
[..][x][o]
[..][..][..]
Now the agent has to decide what the best move should be. A well-trained one would pick the 1st, 3rd, 7th or 9th square. Does it look up a similar state in the DB that led it to a win? Because if so, I think I would need to save every single move into the DB up to its eventual end state (win/lose/draw), and that would be quite a lot of data for a single play?
If I'm thinking about this the wrong way, I would like to know how to do it correctly.

Learning
1) Observe a current board state s;
2) Make a next move based on the distribution of all available V(s') values of the next moves. Strictly, the choice is often based on a Boltzmann distribution over the V(s') values (see the sketch after this list), but it can be simplified to the maximum-value move (greedy) or, with some probability epsilon, a random move, as you are doing;
3) Record s' in a sequence;
4) If the game finishes, update the values of the visited states in the sequence and start over; otherwise, go to 1).
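A minimal sketch of the Boltzmann selection mentioned in step 2 (assuming values is the list of V(s') estimates for the currently legal moves; the function name is illustrative):

import math
import random

def boltzmann_choice(values, temperature=1.0):
    # Softmax over the estimated next-state values: better moves get exponentially
    # higher probability; temperature near 0 approaches greedy, a large temperature approaches uniform.
    weights = [math.exp(v / temperature) for v in values]
    return random.choices(range(len(values)), weights=weights, k=1)[0]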
Game Playing
1) Observe a current board state s;
2) Make a next move based on the distribution of all available V(s') of next moves;
3) If the game is over, start over; otherwise, go to 1).
Regarding your question: yes, the look-up table used in the Game Playing phase is built up in the Learning phase. Each time, the state is chosen from among all the stored V(s) values, of which there are at most 3^9 = 19683. A Python program that runs 10,000 games of training is sketched below.
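A minimal sketch of such a tabular training loop (not the original code; this version uses epsilon-greedy selection against a random opponent, and the board representation and helper names are illustrative assumptions):

import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6), (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Returns 'x' or 'o' for a win, 'draw' for a full board, None while the game continues.
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return 'draw' if ' ' not in board else None

V = {}                     # lookup table: state (tuple of 9 cells) -> value for the 'x' player
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration probability

def value(state):
    return V.get(state, 0.5)   # unseen states default to a neutral 0.5

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if player == 'o' or random.random() < EPSILON:
        return random.choice(moves)   # random opponent, or exploratory move
    # Greedy: pick the move whose resulting state has the highest estimated value.
    return max(moves, key=lambda i: value(board[:i] + (player,) + board[i + 1:]))

for game in range(10000):
    board, player, visited, result = tuple(' ' * 9), 'x', [], None
    while result is None:
        i = choose(board, player)
        board = board[:i] + (player,) + board[i + 1:]
        if player == 'x':
            visited.append(board)
        result = winner(board)
        player = 'o' if player == 'x' else 'x'
    # Back the final outcome up through the states the learner visited.
    target = {'x': 1.0, 'o': 0.0, 'draw': 0.5}[result]
    for state in reversed(visited):
        V[state] = value(state) + ALPHA * (target - value(state))
        target = V[state]

print(len(V), "states stored in the lookup table")

In the Game Playing phase you would reuse the same choose function with EPSILON set to 0, so the agent always exploits the learned table.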

Related

Training Snake to eat food in specific number of steps, using Reinforcement learning

I am trying my hand at reinforcement / deep Q-learning these days, and I started with a basic game of Snake.
With the help of this article: https://towardsdatascience.com/how-to-teach-an-ai-to-play-games-deep-reinforcement-learning-28f9b920440a
I successfully trained it to eat food.
Now I want it to eat the food in a specific number of steps, say 20, not more, not less. How should the reward system and policy be changed for this?
I have tried many things, with little to no result.
For example, I tried this:
def set_reward(self, player, crash):
    self.reward = 0
    if crash:
        self.reward = -10
        return self.reward
    if player.eaten:
        self.reward = 20 - abs(player.steps - 20) - player.penalty
        if player.steps == 10:
            self.reward += 10  # -abs(player.steps - 20)
    else:
        player.penalty += 1
        print("Penalty:", player.penalty)
    return self.reward
Thank you.
Here is the program:
https://github.com/maurock/snake-ga
I would suggest this approach is problematic because, despite changing your reward function, you haven't included the number of steps in the observation space. The agent needs that information in the observation to be able to tell at what point it should bump into the goal. As it stands, if your agent is next to the goal and all it has to do is turn right, but it has only made five moves so far, that is exactly the same observation as if it had made 19 moves. The point is that you can't feed the agent the same state and expect different actions, because the agent doesn't see your reward function; it only receives a reward based on the state. So you end up asking for contradictory actions from identical states.
Think of when you come to test the agent's performance. There is no longer a reward; all you are doing is passing the network a state, and you are expecting it to choose different actions for the same state.
I assume your state space is some kind of 2D array. It should be straightforward to alter the code to include the number of steps in the state space. Then the reward function would be something like: if observation[num_steps] == 20: reward = 10.
Ask if you need more help coding it.
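A rough sketch of that idea (the function and field names here, such as steps_since_food, are placeholders rather than the repository's actual API):

import numpy as np

TARGET_STEPS = 20

def get_state_with_steps(base_state, steps_since_food):
    # Append a normalized step counter so that "5 steps taken" and "19 steps taken"
    # near the food are no longer identical observations.
    return np.append(base_state, steps_since_food / TARGET_STEPS)

def set_reward(eaten, crashed, steps_since_food):
    if crashed:
        return -10
    if eaten:
        # Full bonus only when the food is reached in exactly TARGET_STEPS moves;
        # otherwise the reward shrinks with the deviation.
        return 10 - abs(steps_since_food - TARGET_STEPS)
    return 0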

Monte Carlo Tree Search - intuition behind child selection function for games of two players with opposite goals

A simple question about the hello-world example of MCTS for tic-tac-toe.
Let's assume we are given a board and we want to make an optimal decision. As I understand it, the choice of consecutive nodes during a simulation (until a leaf is reached) is determined by an exploration/exploitation trade-off function (as described on Wikipedia). I really wonder what the intuition behind the first component (exploitation) of that function is here, especially for games between two players with opposite goals. The meaning of "the most promising" then changes depending on who makes a move. Shouldn't this function change depending on who makes the next move (especially its first component)?
Yes, that exploitation part of the equation should be implemented to take into account the evaluations from the perspective of the agent/player who gets to select an action in that node.
For single-agent settings, the implementation is straightforward; simply always maximize.
For zero-sum, turn-based, two-player settings, you'd want to alternate between maximizing or minimizing that exploitation part of the equation (note: always maximize the exploration term!). This can also be implemented by simply multiplying that term by -1 in nodes where the opponent gets to move.
Other settings are possible too, but require slightly more implementation effort (e.g. keeping different average scores for different players in settings that are not zero-sum or have more than two players).
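As a minimal sketch (the node structure here is hypothetical: total_value accumulates simulation outcomes from the root player's perspective, and unvisited children are assumed to be handled by the expansion step, so visits is never zero during selection):

import math

C = math.sqrt(2)  # exploration constant

class Node:
    def __init__(self, maximizing):
        self.children = []
        self.visits = 0
        self.total_value = 0.0        # sum of outcomes, from the root player's perspective
        self.maximizing = maximizing  # True if the root player is the one choosing a move here

def uct_score(parent, child):
    exploit = child.total_value / child.visits
    if not parent.maximizing:
        exploit = -exploit            # the opponent prefers outcomes that are bad for the root player
    explore = C * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore          # the exploration term is always maximized

def select_child(parent):
    return max(parent.children, key=lambda child: uct_score(parent, child))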

Controlling the phase of signal in pure data

I need to figure out a way of changing the phase of a signal. The objective is to generate two signals, with the phase of one changed, and to observe the patterns when they are combined.
Below is the program I'm using so far:
As in the above setup, I need to use the same signal to generate a phase-shifted signal, then combine the two signals and observe the patterns.
Can someone help me out on this?
Thanks.
Using the right inlet of the [osc~] object is a valid way to set the phase of an oscillator, but it isn't the only, or even the most correct, way. The right inlet only accepts a float at the control level.
A more comprehensive manipulation of phase can be done at the signal level using the [phasor~], [cos~], [wrap~], and [+~] objects. Essentially, you are performing the same function as [osc~] with a technique called a table lookup using [phasor~] and [cos~]. You could read another table with [tabread4~] instead of [cos~] as well.
This technique keeps your oscillators in sync. You can manipulate the phase of your oscillators with other oscillators, table lookups, and still of course floats (so long as the phase value is between 0 and 1, hence the [wrap~] object).
[image: phase modulation at the signal level]
Afterwards, like the other examples here, you can add the signals together and write them to corresponding tables or output the signal chain or both.
Here's how you might do the same for a custom table lookup. Of course, you'd replace sometable with your custom table name and num-samp-in-some-table with the number of samples in your table.
[image: signal-level phase modulation with custom tables]
Hope it helps!
To change the phase of an oscillator, use the right-hand side inlet.
Quoting Johannes Kreidler's Programming Electronic Music in Pd:
3.1.2.1.3 Phase
In Pd, you can also set membrane position for a sound wave where it should begin (or where it should jump to). This is called the phase of a wave. You can set the phase in Pd in the right inlet of the "osc~" object with numbers between 0 and 1:
A wave's entire period is encompassed by the range from 0 to 1. However, it is often spoken of in terms of degrees, where the entire period has 360 degrees. One speaks, for example, of a "90 degree phase shift". In Pd, the input for the phase would be 0.25.
So for instance, if you want to observe how two signals can become mute due to destructive interference, you can try something like this:
Note that I connected a bang to adjust the phases of both signals simultaneously. This is important because, while you can reset the phase of a signal to any value between 0.0 and 1.0 at any moment, the other oscillator won't be reset, and the results will therefore be quite random (you never know at which phase value the other signal will be!). So resetting both does the trick.
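Outside of Pd, you can also check the arithmetic numerically; this is a plain Python/NumPy sketch (not a Pd patch) showing that a phase offset of 0.5 in the 0-to-1 units (i.e. 180 degrees) makes two identical sine waves cancel:

import numpy as np

sr, freq = 44100, 440.0
t = np.arange(sr) / sr        # one second of sample times

phase = 0.5                   # in the 0..1 units; a 90-degree shift would be 90 / 360 = 0.25
a = np.sin(2 * np.pi * freq * t)
b = np.sin(2 * np.pi * (freq * t + phase))

print(np.max(np.abs(a + b)))  # ~0: complete destructive interference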

Is this a correct implementation of Q-Learning for Checkers?

I am trying to understand Q-Learning.
My current algorithm operates as follows:
1. A lookup table is maintained that maps a state to information about its immediate reward and utility for each action available.
2. At each state, check to see if it is contained in the lookup table and initialise it if not (with a default utility of 0).
3. Choose an action to take with a probability of:
(ϵ, where 0 < ϵ < 1, is the probability of taking a random action)
With probability 1−ϵ: choose the state-action pair with the highest utility.
With probability ϵ: choose a random move.
ϵ decreases over time.
4. Update the current state's utility based on:
Q(s_t, a_t) += α · [ r_{t+1} + γ · max_a Q(s_{t+1}, a) − Q(s_t, a_t) ]
(where α is the learning rate and γ the discount factor)
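As a minimal sketch of steps 3 and 4 (Q is assumed to be a plain dict keyed by (state, action) pairs; the names here are illustrative, not taken from the actual implementation):

import random

ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor

def choose_action(Q, state, actions, epsilon):
    # Step 3: with probability epsilon take a random action, otherwise the
    # action with the highest current utility (default utility 0).
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, state, action, reward, next_state, next_actions):
    # Step 4: Q(s_t, a_t) += alpha * [r_{t+1} + gamma * max_a Q(s_{t+1}, a) - Q(s_t, a_t)]
    best_next = max((Q.get((next_state, a), 0.0) for a in next_actions), default=0.0)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)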
I am currently playing my agent against a simple heuristic player, who always takes the move that will give it the best immediate reward.
The results are very poor: even after a couple of hundred games, the Q-Learning agent is losing a lot more than it is winning. Furthermore, the change in win rate is almost non-existent, especially after a couple of hundred games.
Am I missing something? I have implemented a couple of agents:
(Rote-Learning, TD(0), TD(Lambda), Q-Learning)
But they all seem to be yielding similar, disappointing, results.
There are on the order of 10²⁰ different states in checkers, and you need to play a whole game for every update, so it will be a very, very long time until you get meaningful action values this way. Generally, you'd want a simplified state representation, like a neural network, to solve this kind of problem using reinforcement learning.
Also, a couple of caveats:
Ideally, you should update 1 value per game, because the moves in a single game are highly correlated.
You should initialize action values to small random values to avoid large policy changes from small Q updates.

SARSA Implementation

I am learning about the SARSA algorithm and its implementation, and I have a question. I understand that the general "learning" step takes the form of:
Robot (r) is in state s. There are four actions available:
North (n), East (e), West (w) and South (s)
such that the list of actions is
a = {n, w, e, s}
The robot randomly picks an action, and updates as follows:
Q(a,s) = Q(a,s) + L[r + D·Q(a',s') − Q(a,s)]
where L is the learning rate, r is the reward associated with (a,s), Q(a',s') is the expected reward from taking action a' in the new state s', and D is the discount factor.
Firstly, I don't understand the role of the term −Q(a,s): why are we re-subtracting the current Q-value?
Secondly, when picking actions a and a', why do these have to be random? I know that in some implementations of SARSA all possible Q(s',a') are taken into account and the highest value is picked (I believe this is epsilon-greedy?). Why not do this also to pick which Q(a,s) value to update? Or why not update all Q(a,s) for the current s?
Finally, why is SARSA limited to a one-step lookahead? Why not, say, also look into a hypothetical Q(s'',a'')?
I guess overall my questions boil down to what makes SARSA better than a breadth-first or depth-first search algorithm.
Why do we subtract Q(a,s)? r + D·Q(a',s') is the reward that we got on this run through from getting to state s by taking action a. In theory, this is the value that Q(a,s) should be set to. However, we won't always take the same action after getting to state s from action a, and the rewards associated with going to future states will change in the future. So we can't just set Q(a,s) equal to r + D·Q(a',s'). Instead, we just want to push it in the right direction so that it will eventually converge on the right value. So we look at the error in the prediction, which requires subtracting Q(a,s) from r + D·Q(a',s'). This is the amount we would need to change Q(a,s) by in order to make it perfectly match the reward that we just observed. Since we don't want to do that all at once (we don't know if this is always going to be the best option), we multiply this error term by the learning rate, L, and add this value to Q(a,s) for a more gradual convergence on the correct value.
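In code, the gradual update described above might look like this minimal sketch (Q is assumed to be a dict of state-action values; L and D are the learning rate and discount factor from the question):

def sarsa_update(Q, s, a, r, s_next, a_next, L=0.1, D=0.9):
    # Prediction error: what we just observed, r + D * Q(s', a'),
    # minus what we currently predict, Q(s, a).
    td_error = r + D * Q[(s_next, a_next)] - Q[(s, a)]
    # Move the estimate only a fraction L of the way toward the observation.
    Q[(s, a)] += L * td_error
    return td_error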
Why do we pick actions randomly? The reason not to always pick the next state or action in a deterministic way is basically that our guess about which state is best might be wrong. When we first start running SARSA, we have a table full of 0s. We put non-zero values into the table by exploring those areas of the state space and finding that there are rewards associated with them. As a result, something not terrible that we have explored will look like a better option than something that we haven't explored. Maybe it is. But maybe the thing that we haven't explored yet is actually way better than anything we've seen so far. This is called the exploration vs. exploitation problem: if we just keep doing things that we know work, we may never find the best solution. Choosing next steps randomly ensures that we see more of our options.
Why can't we just take all possible actions from a given state? This would force us to look at essentially the entire learning table on every iteration. If we're using something like SARSA to solve the problem, the table is probably too big to do this in a reasonable amount of time.
Why can SARSA only do one-step look-ahead? Good question. The idea behind SARSA is that it's propagating expected rewards backwards through the table. The discount factor, D, ensures that in the final solution you'll have a trail of gradually increasing expected rewards leading to the best reward. If you filled in the table at random, this wouldn't always be true. This doesn't necessarily break the algorithm, but I suspect it leads to inefficiencies.
Why is SARSA better than search? Again, this comes down to an efficiency thing. The fundamental reason that anyone uses learning algorithms rather than search algorithms is that search algorithms are too slow once you have too many options for states and actions. In order to know the best action to take from any other state action pair (which is what SARSA calculates), you would need to do a search of the entire graph from every node. This would take O(s*(s+a)) time. If you're trying to solve real-world problems, that's generally too long.

Resources