Dyna-Q vs "Imagination-Augmented Agents for Deep Reinforcement Learning"

What is the difference between the two?
Both use a world model, and both are based on reinforcement learning.
Is it accurate to say that in Dyna-Q the world model is only used to provide extra simulated samples to "fine-tune" the model-free agent, whereas in Imagination-Augmented Agents for Deep Reinforcement Learning the world model actually factors into the agent's decision-making process?
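For reference, the Dyna-Q side of the comparison (a model learned from real transitions, then replayed to generate extra simulated updates) can be sketched in a few lines of tabular code. This is a minimal sketch on a made-up 1-D corridor environment; all states, constants, and dynamics are illustrative:

```python
import random

# Hypothetical corridor: states 0..4, actions 0 (left) / 1 (right),
# reward 1 on reaching state 4.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPS, PLAN_STEPS = 0.1, 0.95, 0.1, 10

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0)

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}
model = {}  # learned world model: (s, a) -> (reward, next state)
random.seed(0)

for episode in range(50):
    s = 0
    while s != GOAL:
        a = random.choice((0, 1)) if random.random() < EPS else max((0, 1), key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        # Direct RL update from the real transition.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        model[(s, a)] = (r, s2)  # record the transition in the model
        # Planning: replay simulated transitions drawn from the model.
        for _ in range(PLAN_STEPS):
            (ps, pa), (pr, ps2) = random.choice(list(model.items()))
            Q[(ps, pa)] += ALPHA * (pr + GAMMA * max(Q[(ps2, 0)], Q[(ps2, 1)]) - Q[(ps, pa)])
        s = s2

# After training, moving right should dominate everywhere before the goal.
print(all(Q[(s, 1)] > Q[(s, 0)] for s in range(GOAL)))
```

Note that the model here only feeds extra updates into the same Q-table; the agent never consults the model at decision time, which is the contrast the question is drawing with I2A.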


Is a recurrent neural network a reinforcement learning or a supervised learning model?

I just started learning machine learning and some ANN basics a while ago, and I still need to figure out the big picture.
I'm still learning the basics and terminology to deepen my knowledge.
I have learned about reinforcement learning, and as I understand it (please correct me if I'm wrong) there are three groups of learning methods:
unsupervised (example: restricted Boltzmann machine)
supervised (CNN)
reinforcement (EKF, particle filter)
When I learned about recurrent nets, some said they belong to supervised learning.
But when I look at how they work, it seems more suitable to say they belong to reinforcement learning.
Can anyone clarify whether recurrent nets belong to supervised or reinforcement learning?
An RNN is usually used in supervised learning, because its core training procedure requires labelled data fed in sequentially.
Now, you will have seen RNNs in RL too, but the catch is that current deep reinforcement learning uses a supervised-style RNN as a feature extractor for the agent inside the RL ecosystem.
In simpler terms: the agent, the reward shaping, and the environment are all RL, but the way the deep network inside the agent learns is as an RNN (or CNN, or any other type of ANN, depending on the problem statement).
So in short, an RNN is trained on labelled data and hence is supervised learning, but it can be used inside an RL environment too.
Supervised learning vs. reinforcement learning: they look similar but differ in important ways. In supervised learning there is a finite set of labelled examples. Each example is self-contained, and all the examples come from the same distribution. Even if an example is a series of inputs (e.g. a sentence made of words), it is still a single example (e.g. "What a lovely day" -> positive).
With RL there are no labelled examples, and at the same time there is an infinite number of examples. What?! Yes: your agent can interact with an environment, and this generates many episodes (e.g. "Start -> Left -> 1, 2 -> Up -> 2, 4.."). It could also be a sentence (e.g. "What -> ah -> a -> go on -> lovely -> not again -> day"). And what is the label? Not clear: some reward mechanism has to be designed to communicate the desired behavior. Also note that the episodes depend on the actions the agent takes, so there is no longer a fixed "same distribution".
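To make the contrast concrete, here is how the two kinds of training data might look as plain data structures (the contents are purely illustrative, echoing the toy examples above):

```python
# Supervised learning: a fixed dataset of i.i.d. labelled examples.
supervised_data = [
    ("What a lovely day", "positive"),
    ("This is terrible", "negative"),
]

# Reinforcement learning: episodes of (state, action, reward) steps,
# generated by the agent's own interaction with an environment.
episode = [
    ("Start", "Left", 1),
    ((1, 2), "Up", 2),
    ((2, 4), "Right", 0),
]

# No label appears directly; the reward signal plays that role, and the
# distribution of episodes depends on the agent's own behaviour.
print(len(supervised_data), len(episode))
```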

What's the difference between reinforcement learning, deep learning, and deep reinforcement learning? [closed]

What's the difference between reinforcement learning, deep learning, and deep reinforcement learning? Where does Q-learning fit in?
Reinforcement learning is about teaching an agent to navigate an environment using rewards. Q-learning is one of the primary reinforcement learning methods.
Deep learning uses neural networks to achieve a certain goal, such as recognizing letters and words from images.
Deep reinforcement learning is a combination of the two, often using Q-learning as a base. But instead of storing explicit state-action values in a table, it uses a neural network; this is valuable in environments where the state-action space is so large that tabular Q-learning would take too long to converge. The network generalizes across similar state-action pairs, and this "function approximation" allows effective learning in environments with very large state-action spaces.
Deep learning is a method that uses neural networks as function approximators to solve various problems.
Ex: learning a function which takes an image as input and outputs the bounding boxes of objects in the image.
Reinforcement learning is a field in which we have an agent and we want that agent to perform a task, i.e. goal-based problems, where we use trial-and-error learning methods.
Ex: an agent learning to move from one position in a grid world to a goal position without falling into a pit in between.
Deep reinforcement learning is a way to solve goal-based problems using neural networks. This is because, when we want agents to perform tasks in the real world or in current games, the state space is very big.
It would take the agent a very long time to even visit each state once, and we cannot use look-up tables to store the value functions.
So, to tackle this problem, we use neural networks to approximate the value of states and generalize the learning process.
Ex: we use DQN to solve many Atari games.
Q-learning: a temporal-difference learning method in which we keep a Q-table to look up the best possible action in the current state based on the Q-value function.
To learn the Q-values we use the reward plus the maximum Q-value of the next state.
Q-learning falls under reinforcement learning, and its deep reinforcement learning analog is the Deep Q-Network (DQN).
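The tabular update described above can be written out directly. This is a minimal sketch on a hypothetical 3-state chain (the environment, constants, and dynamics are illustrative, not from any particular benchmark):

```python
import random

# Hypothetical chain: states 0..2, actions 0 (left) / 1 (right);
# reaching state 2 yields reward 1 and ends the episode.
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}
random.seed(1)

def step(s, a):
    s2 = min(2, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 2 else 0.0)

for _ in range(200):
    s = 0
    while s != 2:
        # epsilon-greedy action selection from the Q-table
        a = random.choice((0, 1)) if random.random() < EPS else max((0, 1), key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        # Temporal-difference update: reward plus the best next-state Q-value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
        s = s2

print(round(Q[(1, 1)], 2))  # value of stepping right into the goal, approaches 1.0
```

The single update line is the whole algorithm; DQN replaces the dictionary `Q` with a neural network trained on the same target.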
The goal of machine learning methods is to learn rules from data and make predictions and/or decisions based on them.
The learning process can be done in a supervised, semi-supervised, unsupervised, or reinforcement learning fashion.
In reinforcement learning (RL), an agent interacts with an environment and learns an optimal policy by trial and error (using reward points for successful actions and penalties for errors). It is used in sequential decision-making problems [1].
Deep learning, as a sub-field of machine learning, is a mathematical framework for learning latent rules in the data, or new representations of the data at hand. The term "deep" refers to the number of learning layers in the framework. Deep learning can be used with any of the aforementioned learning strategies, i.e., supervised, semi-supervised, unsupervised, and reinforcement learning.
A deep reinforcement learning technique is obtained when deep learning is utilized by any of the components of reinforcement learning [1]. Note that Q-learning is a component of RL used to tell an agent which action to take in which situation. Detailed information can be found in [1].
[1] Li, Yuxi. "Deep reinforcement learning: An overview." arXiv preprint arXiv:1701.07274 (2017).
Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) or maximize along a particular dimension over many steps. The basic theme behind reinforcement learning is that an agent learns from the environment by interacting with it and getting rewards for performing actions.
Deep learning uses multiple layers of nonlinear processing units to extract features and transformations.
The deep reinforcement learning approach introduces deep neural networks to solve reinforcement learning problems, hence the name "deep."
There's a bigger distinction between reinforcement learning and supervised learning, both of which can use deep neural networks, a.k.a. deep learning. In supervised learning, the training set is labeled by a human (e.g. AlphaGo's initial training on human games). In reinforcement learning (e.g. AlphaZero), the algorithm is self-taught.
To put it in simple words,
Deep learning - it uses models of neural networks (mimicking the brain's neurons) and is used in image classification, data analysis, and in reinforcement learning too.
Reinforcement learning - a branch of machine learning that revolves around an agent (e.g. a cleaning robot) taking actions (e.g. moving around searching for trash) in its environment (e.g. a home) and getting rewards (e.g. collecting trash).
Deep reinforcement learning - the family of reinforcement learning algorithms that utilize deep learning concepts.
Reinforcement learning (RL) is a type of machine learning that is mainly motivated by the feedback control of systems. RL is usually considered a type of optimal control that learns through interacting with a system/environment and getting feedback. RL usually replaces computationally expensive dynamic programming methods with single-time-step or multi-time-step learning rules. Popular temporal-difference methods in RL sit somewhere in between dynamic programming and Monte Carlo methods. Classic RL methods use tabular algorithms that are not very scalable.
Deep learning (DL) is considered a crucial part of modern machine learning (classical machine learning usually means SVMs, linear regression, etc.). DL uses deep multilayered neural networks (NNs) with backpropagation for learning. With well-designed deep NNs, complex input-output relations can be learned. Because of this ability to approximate very complex functions, DL has been extremely popular in recent years (2010-ish), especially in natural language tasks and computer vision tasks. One attractive aspect of DL is that these models can be end-to-end, meaning we do not need to do manual feature engineering. There are numerous types of DL architectures, like deep neural networks, convolutional neural networks, GRUs, LSTMs, GANs, attention, transformers, etc.
Deep RL uses deep NN architectures to replace the tabular methods for very high-dimensional problems. Informally speaking, the controller is no longer a table look-up; rather, we use a deep NN as the controller. Because of this use of deep NNs in RL, it is commonly known as deep RL.
roughly speaking:
deep learning uses deep neural networks to approximate complicated functions.
reinforcement learning is a branch of machine learning where your learner learns through interaction with an environment. It is different from supervised or unsupervised learning.
if you use deep learning to approximate functions in reinforcement learning you call it deep reinforcement learning.
Reinforcement learning is a type of artificial intelligence that aims to model human-like decision-making. It's based on the idea that humans learn from their actions, seeking out things that feel good and avoiding things that feel bad. Reinforcement learning algorithms try to replicate this process by updating value estimates in response to actions and the rewards they produce.
Deep learning is a type of machine learning model which uses multiple layers of processing to solve problems more effectively than traditional approaches. Deep learning models can be used for image recognition, speech recognition, and translation.
Deep reinforcement learning combines the two: it tries to solve sequential decision problems, improving over time by using sequences of actions called episodes and comparing results across different episodes. A well-known instance is the Deep Q-Network (DQN) of Mnih et al., which combines Q-learning with deep neural networks.
Q-learning is a particular reinforcement learning algorithm, introduced by Chris Watkins in 1989, that learns Q-values (estimates of the expected future reward for each state-action pair) directly from experience, which means it can be used without a model of the environment's dynamics and still produce useful results.

Are there examples of using reinforcement learning for text classification?

Imagine a binary classification problem like sentiment analysis. Since we have the labels, can't we use the gap between actual and predicted as the reward for RL?
I wish to try reinforcement learning for classification problems.
Interesting thought! To my knowledge it can be done.
Imitation learning - at a high level, this means observing sample trajectories performed by an agent in the environment and using them to predict the policy given a particular state configuration. I prefer probabilistic graphical models for the prediction since they give me more interpretability. I have implemented a similar algorithm from this research paper: http://homes.soic.indiana.edu/natarasr/Papers/ijcai11_imitation_learning.pdf
Inverse reinforcement learning - a related method, developed by Andrew Ng at Stanford, for finding the reward function from sample trajectories; the recovered reward function can then be used to frame the desirable actions.
http://ai.stanford.edu/~ang/papers/icml00-irl.pdf
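The idea in the question itself can also be prototyped directly as a contextual bandit: each labelled example is a one-step episode, the action is the predicted class, and the reward is 1 for a correct prediction. Below is a minimal sketch with synthetic data (the "sentiment" features, labels, and all constants are made up for illustration):

```python
import random

random.seed(0)

# Synthetic binary data: two numeric features per example,
# label = 1 when the first feature exceeds the second.
xs = [(random.random(), random.random()) for _ in range(300)]
data = [(x, int(x[0] > x[1])) for x in xs]

# Linear reward model per action (the action is the predicted class).
w = {0: [0.0, 0.0], 1: [0.0, 0.0]}
ALPHA, EPS = 0.1, 0.1

def score(a, x):
    return sum(wi * xi for wi, xi in zip(w[a], x))

for _ in range(5):  # a few passes over the data
    for x, label in data:
        # Each example is a one-step episode; epsilon-greedy action choice.
        a = random.choice((0, 1)) if random.random() < EPS else max((0, 1), key=lambda c: score(c, x))
        reward = 1.0 if a == label else 0.0  # label gives the reward signal
        # Move the chosen action's predicted reward toward the observed reward.
        err = reward - score(a, x)
        w[a] = [wi + ALPHA * err * xi for wi, xi in zip(w[a], x)]

# Evaluate the learned greedy policy as a classifier.
acc = sum(max((0, 1), key=lambda c: score(c, x)) == y for x, y in data) / len(data)
print(acc)
```

One caveat worth noting: since the labels are fully known, plain supervised training uses strictly more information per example; the bandit formulation only observes the reward for the action it actually took.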

Can the TextRank Algorithm be categorized as unsupervised machine learning?

TextRank is an approach to Automatic Text Summarization. Many categorize it as an "unsupervised" approach. I wish to know if this translates into TextRank being categorized as an Unsupervised Machine Learning technique.
TextRank is not directly related to machine learning: Machine learning involves the creation of a data model to predict future observation based on previous observations. This involves tuning model parameters to fit observed data.
On the other hand, TextRank is a graph-based ranking algorithm: it finds the summary parts based on the structure of a single document and does not use observations to learn anything. Since it's not machine learning, it can't be unsupervised machine learning, either.
The original authors of TextRank, Mihalcea and Tarau, described their work as unsupervised in a sense:
In particular, we proposed and evaluated two innovative unsupervised approaches for keyword and sentence extraction.
However that differs from unsupervised learning, i.e. finding hidden structure within unlabeled data.
Also, TextRank is not a machine learning algorithm; in other words, it does not generalize from data by "minimizing a loss function together with a regularization term or side constraints" (per Stephen Boyd et al.). Linguists might note some similarities, though that's outside the scope of this question.
Even so, some confusion might come from the fact that TextRank and related approaches get used to develop feature vectors to present to machine learning algorithms.

What is Reinforcement machine learning?

I know about supervised and unsupervised learning but still don't get how reinforcement learning works.
Can somebody help me with a proper example, and use cases showing how it works?
Reinforcement machine learning is when the machine learns from experience, where the feedback is "good" or "bad".
A classic example is training agents for games. You first start training your agent with the data you have (supervised), and when that is exhausted, you start training several agents and let them compete against each other. Those who win get "reinforced", and so on.
This was one of the "tricks" used to train AlphaGo (and previously TD-Gammon):
"...The policy networks were therefore improved by letting them play against each other, using the outcome of these games as a training signal. This is called reinforcement learning, or even deep reinforcement learning (because the networks being trained are deep)."
You mentioned supervised and unsupervised learning.
There is a slight difference among these three.
Supervised learning: you have a label for each tuple of data.
Unsupervised learning: you don't have labels for the tuples, but you want to find relations between inputs.
Reinforcement learning: you have very few labels, for sparse entries; those labels are the rewards.
Reinforcement learning is like the process by which a person learns about a new situation: take a random action, observe the behavior of the environment, and learn accordingly.
What is a reward?
A reward is positive or negative feedback from the environment. An action is responsible for all its future rewards, so the agent needs to take those actions which achieve the most positive reward in the future.
This can be achieved with the Q-learning algorithm; I encourage you to read about this topic.
I used a reinforcement learning algorithm to train Pac-Man. I hope you know the game: the goal is to take actions that avoid the ghosts while collecting all the points on the map. It trains itself over many iterations and thousands of gameplays. I also used the same approach to train a car to drive on a specific track!
Reinforcement learning can be used to train an AI to play any game, though more complex games require neural networks, and that is called deep reinforcement learning.
Reinforcement learning is a type of model that is rewarded for doing good (or penalized for doing bad) things. With supervised learning, it is up to some curator to label all the data that the model can learn from. That is the beauty of reinforcement learning: the model obtains direct feedback from its environment and adjusts its behavior automatically. It's how humans learn a lot of our simple life lessons (e.g., avoid things that hurt you; do more of the things that make you feel good).
A lot of reinforcement learning is focused on deep learning these days, and the biggest examples have been in video games. Reinforcement learning is also a powerful personalization tool. You can think of an Amazon recommender as a reinforcement learning algorithm that is rewarded when it recommends the right products by receiving a click or purchase, or a Netflix recommender that is rewarded when a user starts watching a movie.
Reinforcement learning is often used for robotics, gaming, and navigation.
With reinforcement learning, the algorithm discovers through trial and error which actions yield the greatest rewards.
This type of learning has three primary components: the agent (the learner or decision-maker), the environment (everything the agent interacts with) and actions (what the agent can do).
The objective is for the agent to choose actions that maximize the expected reward over a given amount of time.
The agent will reach the goal much faster by following a good policy. So the goal in reinforcement learning is to learn the best policy.
