How to use Reinforcement Learning for a Classification problem?

Using Python, I'm trying to predict if an individual is eligible for a loan or not. So, my output will be either a 1 or 0.
I want to use a Reinforcement Learning approach to learn and experiment. But, I haven't found any useful resources on how to use RL in classification problems.
My question is: is it suitable to use RL, or is it too complex for my problem and not used in similar real-world problems?
If the answer is yes, how can I apply RL in classification problems?

Related

When to use supervised or unsupervised learning?

What are the fundamental criteria for using supervised or unsupervised learning?
When is one better than the other?
Are there specific cases where you can only use one of them?
Thanks
If you have a labeled dataset you can use both. If you have no labels, you can only use unsupervised learning.
It's not a question of "better"; it's a question of what you want to achieve. E.g. clustering data is usually unsupervised: you want the algorithm to tell you how your data is structured. Categorizing is supervised, since you need to teach your algorithm what is what in order to make predictions on unseen data.
See 1.
On a side note: These are very broad questions. I suggest you familiarize yourself with some ML foundations.
Good podcast for example here: http://ocdevel.com/podcasts/machine-learning
Very good book / notebooks by Jake VanderPlas: http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb
It depends on your needs. If you have a set of existing data that includes the target values you wish to predict (labels), then you probably need supervised learning (e.g. is something true or false; does this data represent a fish, a cat, or a dog? Simply put, you already have examples of right answers and you are just telling the algorithm what to predict). You also need to distinguish whether you need classification or regression. Classification is when you need to sort the predicted values into given classes (e.g. is it likely that this person develops diabetes, yes or no? In other words, discrete values), and regression is when you need to predict continuous values (1, 2, 4.56, 12.99, 23, etc.). There are many supervised learning algorithms to choose from (k-nearest neighbors, naive Bayes, SVM, ridge regression, ...).
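To make the classification/regression distinction concrete, here is a minimal sketch in Python with scikit-learn. The particular estimators (k-nearest neighbors, ridge) are just illustrative picks from the list above, and the synthetic datasets stand in for real data:

from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import Ridge

# Classification: predict a discrete class (e.g. diabetes, yes/no).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier().fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))

# Regression: predict a continuous value (e.g. 4.56, 12.99, ...).
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
reg = Ridge().fit(X_tr, y_tr)
print("regression R^2:", reg.score(X_te, y_te))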
By contrast, use unsupervised learning if you don't have the labels (or target values). You're simply trying to identify clusters in the data as they come (e.g. k-means, DBSCAN, spectral clustering, ...).
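And a minimal unsupervised sketch: no labels are passed to the algorithm, and the number of clusters is an assumption you supply yourself:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)   # labels discarded
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # k is our guess
print(km.labels_[:10])        # cluster assignment per sample
print(km.cluster_centers_)    # learned cluster centers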
So it depends, and there's no exact answer, but generally speaking you need to:
Collect and inspect your data. You need to know your data, and only then decide which way to go or which algorithm will best suit your needs.
Train your algorithm. Be sure to have clean, good data, and bear in mind that in the unsupervised case you effectively skip this step, as you don't have target values; you test your algorithm right away.
Test your algorithm. Run it and see how well it behaves. In the supervised case you can use held-out data to evaluate how well your algorithm is doing.
There are many books online about machine learning and many online lectures on the topic as well.
It depends on the dataset that you have.
If you have the target feature in hand, then you should go for supervised learning. If you don't, then it is an unsupervised problem.
Supervised learning is like teaching the model with examples. Unsupervised learning is mainly used to group similar data, and it plays a major role in feature engineering.

How to improve classification accuracy for machine learning

I have used an extreme learning machine for classification and found that my classification accuracy is only around 70%, which led me to try an ensemble method: creating more classification models, with test data classified by the majority vote of the models. However, this method only increased classification accuracy by a small margin. May I ask what other methods can be used to improve the classification accuracy of a two-dimensional, linearly inseparable dataset?
Your question is very broad... There's no way to help you properly without knowing the real problem you are treating. But, generally speaking, some methods to improve classification accuracy are:
1 - Cross-validation: split your training dataset into groups, always holding one group out for evaluation, and rotate the groups across runs. Then you will know which data trains a more accurate model (a sketch follows this list).
2 - Cross-dataset: the same as cross-validation, but using different datasets.
3 - Tuning your model: basically, change the parameters you're using to train your classification model (I don't know which classification algorithm you're using, so it's hard to help more).
4 - Improve, or use (if you're not already using), a normalization process: discover which techniques (changing the geometry, colors, etc.) will give you more consistent data to train on.
5 - Understand the problem you're treating more deeply. Try to implement other methods to solve the same problem; there is almost always more than one way, and you may not be using the best approach.
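For points 1 and 3, a hedged sketch with scikit-learn: since the extreme learning machine from the question isn't in scikit-learn, an SVM stands in as a hypothetical example, and make_moons plays the role of a 2-D linearly inseparable dataset:

from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)  # 2-D, not linearly separable

# Point 1 - cross-validation: rotate which fold is held out for evaluation.
print(cross_val_score(SVC(), X, y, cv=5))

# Point 3 - tuning: grid-search the model's parameters, scored by cross-validation.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)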
Enhancing a model's performance can be challenging at times. I'm sure a lot of you would agree if you've found yourself stuck in a similar situation: you try all the strategies and algorithms that you've learnt, yet you fail to improve the accuracy of your model, and you feel helpless and stuck. This is where many data scientists give up. Let's dig deeper now and check out the proven ways to improve the accuracy of a model (a short ensemble sketch follows the list):
Add more data
Treat missing and Outlier values
Feature Engineering
Feature Selection
Multiple algorithms
Algorithm Tuning
Ensemble methods
Cross Validation
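For the ensemble point, and since the question already uses majority voting across models, here is a minimal sketch; the three base models are arbitrary illustrative choices:

from sklearn.datasets import make_moons
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.25, random_state=0)
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression()),
        ("rf", RandomForestClassifier(random_state=0)),
        ("svm", SVC()),
    ],
    voting="hard",  # hard voting = simple majority of the predicted labels
)
print(cross_val_score(ensemble, X, y, cv=5).mean())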
If you feel the information here is lacking, this link should help you learn more: https://www.analyticsvidhya.com/blog/2015/12/improve-machine-learning-results/

Machine Learning, GA + BP or GA with huge NN?

Sorry for the poor title,
I'm currently studying ML, and I want to focus on a problem using the toolset I have acquired, which excludes reinforcement learning.
I want to create a NN that takes a simple 2D game level (think of Mario in the simplest case: simple fitness function, simple controls, and easy feature selection) and outputs a key sequence.
Since we don't know the correct key sequence (KS), I see two options:
1) Find the key sequences using a genetic algorithm, then use backprop or a similar algorithm to associate levels with key sequences and predict a KS for a new level.
2) Build a huge NN and use a genetic algorithm to solve its whole internal structure.
What are the pros and cons of each approach? Why should I implement one instead of the other? Please remember that I'm fairly new to the topic and want to solve this problem with what I've learned so far; the basics, really.
What you are suggesting is, in essence, reinforcement learning: trying out "semi-random" combinations and then using the rewards to train the network. The first approach is classical reinforcement learning, and the second is reinforcement learning using a neural network.
If you want to tackle the problem like this, there are plenty of tutorials and GitHub repos available to help you; a simple Google search will turn them up.
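That said, option 2 from the question can be sketched in a few lines of plain NumPy. This is only a toy with an assumed fixed architecture; the fitness function below is a placeholder that, in the real problem, would run the level with the network's key outputs and return the game score:

import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 8, 16, 4             # level features in, key activations out
N_WEIGHTS = N_IN * N_HID + N_HID * N_OUT

def forward(weights, x):
    # Fixed two-layer architecture; only the weights are evolved.
    w1 = weights[: N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = weights[N_IN * N_HID :].reshape(N_HID, N_OUT)
    return np.tanh(np.tanh(x @ w1) @ w2)

def fitness(weights):
    # Placeholder: in the real problem, play the level and return the score.
    x = np.ones(N_IN)
    return -np.sum((forward(weights, x) - 0.5) ** 2)

pop = rng.normal(size=(50, N_WEIGHTS))    # population of weight vectors
for gen in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=40)]  # clone random parents
    children = children + rng.normal(scale=0.1, size=children.shape)  # mutate
    pop = np.vstack([parents, children])

print("best fitness:", max(fitness(ind) for ind in pop))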

What classifier to use while performing unsupervised learning

I am new to machine learning and I have this basic question. As I am weak in the math behind these algorithms, I find this difficult to understand.
When you are given the task of designing a classifier (keep it simple: a 2-class classifier) using unsupervised learning (no training samples), how do you decide what type of classifier (linear or non-linear) to use? If we do not know this, then feature selection (which indirectly means knowing what the dataset is) becomes very critical.
Am I thinking in the right direction, or is there something big that I don't know? Insight into this topic is greatly appreciated.
Classification is by definition a "supervised learning" problem: such models require examples of points within given classes to understand how to separate the classes from one another. If you are simply looking for relationships between unlabeled data points, you're solving an unsupervised problem. Look into clustering algorithms; k-means is where a lot of people start.
Hope this helps!
This is a huge problem. Yes, the term "clustering" is the best entry point for googling this, but I understand that you want to train a classifier, where "training" means optimizing an objective function with parameters. The first choice is definitely not discriminative classifiers (such as linear ones), because with them the standard maximum-likelihood (ML) objective does not work without labels. If you absolutely want to use linear classifiers, then you have to tweak the ML objective, or better, use another objective (one that approximates the classifier risk). But an easier choice is to look at generative models, such as HMMs, naive Bayes, or Latent Dirichlet Allocation, for which the ML objective works without labels.
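As a concrete instance of the generative route, here is a sketch with a Gaussian mixture model (my example; it belongs to the same generative family as those listed): the maximum-likelihood objective, optimized by EM, is fit with no labels at all, and the two learned components then play the role of the two classes:

from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=400, centers=2, random_state=0)   # labels unused
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)  # no y passed
print(gmm.predict(X[:10]))       # component assignment = predicted "class"
print(gmm.predict_proba(X[:2]))  # posterior probability per component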

How to approach a machine learning programming competition

Many machine learning competitions are held on Kaggle, where a training set with a set of features is given, along with a test set whose output labels are to be predicted using the training set.
It is pretty clear that supervised learning algorithms like decision trees, SVMs, etc. are applicable here. My question is: how should I start to approach such problems? Should I start with a decision tree, an SVM, or some other algorithm, and how will I decide?
So, I had never heard of Kaggle until reading your post; thank you so much, it looks awesome. Upon exploring their site, I found a portion that will guide you well. On the competitions page (click "all competitions"), you'll see Digit Recognizer and Facial Keypoints Detection, both of which are competitions but are there for educational purposes, with tutorials provided (a tutorial isn't available for Facial Keypoints Detection yet, as the competition is in its infancy). In addition to the general forums, each competition has its own forum, which I imagine is very helpful.
If you're interested in the mathematical foundations of machine learning and are relatively new to it, may I suggest Bayesian Reasoning and Machine Learning. It's no cakewalk, but it's much friendlier than its counterparts, without a loss of rigor.
EDIT:
I found the tutorials page on Kaggle, which seems to be a summary of all of their tutorials. Additionally, scikit-learn, a Python library, offers a ton of descriptions/explanations of machine learning algorithms.
This cheat sheet is a good starting point: http://peekaboo-vision.blogspot.pt/2013/01/machine-learning-cheat-sheet-for-scikit.html. In my experience, using several algorithms at the same time can often give better results, e.g. logistic regression and SVM, where the results of each one have a predefined weight. And test, test, test ;)
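A minimal sketch of that weighted combination, with arbitrary illustrative weights:

from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)
combo = VotingClassifier(
    estimators=[("lr", LogisticRegression()), ("svm", SVC(probability=True))],
    voting="soft",        # average the predicted probabilities...
    weights=[0.6, 0.4],   # ...with a predefined weight per model
)
print(cross_val_score(combo, X, y, cv=5).mean())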
There is No Free Lunch in data mining. You won't know which methods work best until you try lots of them.
That being said, there is also a trade-off between understandability and accuracy in data mining. Decision Trees and KNN tend to be understandable, but less accurate than SVM or Random Forests. Kaggle looks for high accuracy over understandability.
It also depends on the number of attributes. Some learners can handle many attributes, like SVM, whereas others are slow with many attributes, like neural nets.
You can shrink the number of attributes by using PCA, which has helped in several Kaggle competitions.
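A sketch of that PCA step; the number of components to keep is an assumption you would tune per problem:

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

X, y = make_classification(n_samples=300, n_features=50, random_state=0)
X_small = PCA(n_components=10).fit_transform(X)  # 50 attributes -> 10
print(X.shape, "->", X_small.shape)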
