Most tutorials and RL courses focus on teaching how to apply a model (e.g. Q-learning) to an environment (such as a gym environment) where one can input a state in order to get some output / reward.
How is it possible to use RL on historical data, where you cannot get new data? (For example, from a massive auction dataset, how can I derive the best policy using RL?)
If your dataset is formed, for example, of time series, you can set each instant of time as your state. Then, you can make your agent explore the data series to learn a policy over it.
If your dataset is already labelled with actions, you can train the agent over it to learn the policy underlying those actions.
The trick is to feed your agent each successive instant of time, as if it were exploring it in real time.
Of course, you need to model the different states from the information in each instant of time.
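For example, here is a rough sketch of batch Q-learning replayed over logged transitions; the transitions, state encoding and hyperparameters below are purely illustrative placeholders for whatever you extract from your own dataset:

```python
import numpy as np

# Hypothetical logged transitions: (state, action, reward, next_state),
# where states and actions have already been discretised from the raw data.
transitions = [
    (0, 1, 1.0, 1),
    (1, 0, 0.0, 2),
    (2, 1, 2.0, 0),
    # ... in practice, thousands of rows built from the historical dataset
]

n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95          # learning rate and discount factor

# Replay the logged data as if the agent were experiencing it in real time.
for _ in range(500):              # several passes over the dataset
    for s, a, r, s_next in transitions:
        td_target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (td_target - Q[s, a])

# Greedy policy derived from the learned Q-table.
policy = Q.argmax(axis=1)
print(policy)
```

Keep in mind that the learned policy can only be as good as the state-action coverage in the historical data.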
Related
The basic process for most supervised machine learning problems is to divide the dataset into a training set and test set and then train a model on the training set and evaluate its performance on the test set. But in many (most) settings, disease diagnosis for example, more data will be available in the future. How can I use this to improve upon the model? Do I need to retrain from scratch? When might be the appropriate time to retrain if this is the case (e.g., a specific percent of additional data points)?
Let’s take the example of predicting house prices. House prices change all the time. The data you used to train a machine learning model that predicts house prices six months ago could provide terrible predictions today. For house prices, it’s imperative that you have up-to-date information to train your models.
When designing a machine learning system it is important to understand how your data is going to change over time. A well-architected system should take this into account, and a plan should be put in place for keeping your models updated.
Manual retraining
One way to maintain models with fresh data is to train and deploy your models using the same process you used to build your models in the first place. As you can imagine this process can be time-consuming. How often do you retrain your models? Weekly? Daily? There is a balance between cost and benefit. Costs in model retraining include:
Computational Costs
Labor Costs
Implementation Costs
On the other hand, as you are manually retraining your models you may discover a new algorithm or a different set of features that provide improved accuracy.
Continuous learning
Another way to keep your models up-to-date is to have an automated system to continuously evaluate and retrain your models. This type of system is often referred to as continuous learning, and may look something like this:
Save new training data as you receive it. For example, if you are receiving updated prices of houses on the market, save that information to a database.
When you have enough new data, test your machine learning model's accuracy against it.
If you see the accuracy of your model degrading over time, use the new data, or a combination of the new data and old training data to build and deploy a new model.
The benefit of a continuous learning system is that it can be completely automated.
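As a rough sketch of such a check, assuming a scikit-learn style model, a hypothetical error threshold, and feature/target arrays supplied by your own storage layer:

```python
import joblib
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def maybe_retrain(model_path, X_new, y_new, X_old, y_old, max_error=25_000):
    """Evaluate the deployed model on newly collected data; retrain if it degrades.

    The threshold, file path and arrays are placeholders: in a real system they
    would come from your monitoring configuration and database.
    """
    model = joblib.load(model_path)

    # 1. Test the current model's accuracy against the new data.
    error = mean_absolute_error(y_new, model.predict(X_new))

    # 2. If accuracy has degraded, rebuild on old + new data and redeploy.
    if error > max_error:
        X = np.vstack([X_old, X_new])
        y = np.concatenate([y_old, y_new])
        model = LinearRegression().fit(X, y)
        joblib.dump(model, model_path)

    return model
```

A scheduler (cron, Airflow, etc.) can run this check whenever enough new data has accumulated.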
I am looking to implement machine learning for problems built on small data sets related to approvals of expenses in a specific supply chain domain. Typically, labelled data is unavailable.
I was looking to build a model on one data set where I have labelled data, and then use that model in similar contexts where the feature set is very similar, but not identical. The expectation is that this provides a starting point for recommendations while we gather labelled data in the new context.
I understand this is the essence of transfer learning. Most of the examples I read in this domain speak of image data sets. Any guidance on how this can be leveraged for small data sets using standard tree-based classification algorithms?
I can't really speak to tree-based algos; I don't know how to do transfer learning with them. But for deep learning models, the customary method for transfer learning is to load up a pretrained model, retrain its last layer on your new data, and then fine-tune the rest of the network.
If you don’t have much data to go on, you might look into creating synthetic data.
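As an illustration of the retrain-the-last-layer-then-fine-tune pattern (not specific to the expense-approval problem), here is a minimal sketch with PyTorch and torchvision, assuming an image classification task; the number of classes is a placeholder:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a model pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze every layer so the pretrained features are kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head sized for your own classes.
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Train only the new head first...
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# ...then optionally unfreeze everything for a low-learning-rate fine-tuning pass:
# for param in model.parameters():
#     param.requires_grad = True
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```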
raghu, I believe you are looking for a kernel method when you say "abstraction layer" in deep learning. There are several ML algorithms that support kernel functions. With kernel functions you might be able to do it, but using kernel functions might be more complex than solving your original problem. I would lean toward Tdoggo's suggestion of using a decision tree.
Sorry, I want to add a comment, but they won't allow me, so I posted a new answer.
OK, with tree-based algos you can do just what you said: train the tree on one dataset and apply it to another, similar dataset. All you would need to do is change the terms/nodes of the tree for the second dataset.
For instance, let’s say you have a decision tree trained for filtering expenses for a construction company. You will outright deny any reimbursements for workboots, because workers should provide those themselves.
You want to use the trained tree on your accounting firm, and so instead of workboots, you change that term to laptops, because accountants should be buying their own.
Does that make sense, and is that helpful to you?
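A rough sketch of that idea with pandas and scikit-learn, where the "term change" is done by mapping the analogous column of the new dataset onto the feature the tree was trained on; all column names and values are made up for illustration:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Expense data from the construction company (approved: 1 = reimburse, 0 = deny).
construction = pd.DataFrame({
    "amount":       [40, 120, 80, 300],
    "is_workboots": [1, 0, 0, 1],
    "approved":     [0, 1, 1, 0],
})

tree = DecisionTreeClassifier(max_depth=3)
tree.fit(construction[["amount", "is_workboots"]], construction["approved"])

# Expense data from the accounting firm: laptops play the role workboots played.
accounting = pd.DataFrame({
    "amount":    [60, 900, 45],
    "is_laptop": [0, 1, 0],
})

# "Change the term" by feeding the analogous column in place of the original one.
reused = accounting.rename(columns={"is_laptop": "is_workboots"})
print(tree.predict(reused[["amount", "is_workboots"]]))
```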
After some research, we have decided to proceed with random forest models, with the intuition that the trees in the original model that use common features will form the starting point for decisions.
As we gain more labelled data in the new context, we will start replacing the original trees with new trees that comprise (a) only new features and (b) a combination of old and new features.
This has worked to provide reasonable results in initial trials.
Scenario: I have data that does not have labels, but I can create a function to label the data based on behavior and then deploy the model so I don't have to keep labeling the data. Is this considered machine learning?
Objective: classify accounts with volume spikes into high / medium / low labels, to be deployed on big data (trillions of lines of data).
Data: the data I have includes the following attributes:
Account, Time, Date, Volume amount.
Method:
Create a new feature column called "spike" and write a pandas function to identify a spike greater than 5. Is this feature engineering?
Next, I create my label column and classify it as a low, medium or high spike.
Next, I train a machine learning classifier and deploy it to label future accounts with similar patterns in big data.
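A rough sketch of what I mean, using pandas and scikit-learn (the file name, bin edges and feature choices are placeholders; the threshold of 5 is the one mentioned above):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Columns: Account, Time, Date, Volume
df = pd.read_csv("account_volumes.csv")

# Step 1: engineered feature flagging a volume change greater than 5.
df["volume_change"] = df.groupby("Account")["Volume"].diff().fillna(0)
df["spike"] = (df["volume_change"] > 5).astype(int)

# Step 2: label column with low / medium / high spike severity
# (the bin edges here are placeholders).
df["label"] = pd.cut(df["volume_change"],
                     bins=[-float("inf"), 5, 20, float("inf")],
                     labels=["low", "medium", "high"])

# Step 3: train a classifier on the engineered features so future accounts
# can be labeled without re-running the hand-written rules.
X = df[["Volume", "volume_change", "spike"]]
y = df["label"]
clf = RandomForestClassifier().fit(X, y)
```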
Thoughts on this process? Is this approach correct for machine learning?
1st question:
If your algorithm takes the decision, that is, puts a label on a sample, based on the set of samples that you have, I'd say it's a machine learning algorithm. But if you design code that encodes your own experience regarding the data, I think it's not an ML method. In brief, ML looks at the data to get patterns and insights from it. I don't know why you're doing that, but does it really need to be an ML algorithm? Sometimes you can solve the problem in a very simple way, without using ML.
2nd question: I'm afraid not. Selecting your data attributes (e.g. Account, Time, Date, Volume amount), checking their correlations, trying to figure out whether you have a dominant one, etc. is a pre-ML process. Feature engineering then selects the best features to present to your algorithm in order to perform the classification (in your case).
3rd question: I think it's fair enough to start playing with some ML algorithms, such as KNN, SVM, neural networks, decision trees, etc.
I am working on a personal project in which I log data from my city's bike rental service into a MySQL database. A script runs every thirty minutes and logs data for every bike station and the free bikes each one has. Then, in my database, I average the availability of each station for each day at that given time, which, as of today, gives an approximate prediction based on two months of logged data.
I've read a bit about machine learning and I'd like to learn more. Would it be possible to train a model with my data and make better predictions with ML in the future?
The answer is very likely yes.
The first step is to have some data, and it sounds like you do. You have a response (free bikes) and some features on which it varies (time, location). You have already applied a basic conditional means model by averaging values over factors.
You might augment the data you know about locations with some calendar events like holiday or local event flags.
Prepare a data set with one row per observation, and benchmark the accuracy of your current forecasting process for a period of time on a metric like Mean Absolute Percentage Error (MAPE). Ensure your predictions (averages) for the validation period do not include any of the data within the validation period!
Use the data for this period to validate other models you try.
Split off part of the remaining data into a test set, and use the rest for training. If you have a lot of data, then a common training/test split is 70/30. If the data is small you might go down to 90/10.
Learn one or more machine learning models on the training set, checking performance periodically on the test set to ensure generalization performance is still improving. Many training algorithm implementations will manage this for you, and stop automatically when test performance starts to decrease due to overfitting. This is a big benefit of machine learning over your current straight average: the ability to learn what generalizes and throw away what does not.
Validate each model by predicting over the validation set, computing the MAPE, and comparing the model's MAPE to that of your original process on the same period. Good luck, and enjoy getting to know machine learning!
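A minimal sketch of that workflow with scikit-learn; the file name, column names and model choice are placeholders for your own export:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error (denominator clipped to avoid division by zero)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs((y_true - y_pred) / np.maximum(y_true, 1))) * 100

# One row per observation: station, time of day, weekday, holiday flag, free bikes.
df = pd.read_csv("bike_availability.csv")
X = df[["station_id", "hour", "weekday", "is_holiday"]]
y = df["free_bikes"]

# Hold out the most recent observations as the validation period,
# then split the rest 70/30 into training and test sets.
X_rest, X_val, y_rest, y_val = train_test_split(X, y, test_size=0.2, shuffle=False)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.3)

model = RandomForestRegressor(n_estimators=200)
model.fit(X_train, y_train)

print("Test MAPE:      ", mape(y_test, model.predict(X_test)))
print("Validation MAPE:", mape(y_val, model.predict(X_val)))   # compare to your averages
```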
I have a problem whereby our users receive the balance of an account each day, and based on the balance, perform an action.
Given the list of historical balances and resulting actions, is it possible to use machine learning to predict future actions? Preferably on the .NET platform.
Thanks.
Ark
I've never used .NET for any data analytics, but I'm sure it won't be too terribly difficult to transpose what I say here into .NET logic.
One of the things people don't like about data science is that in order to see if something IS actually possible (predicting future outcomes, in this case), you need to do a lot of exploring with the data to see whether it has enough of a pattern to be learned (by either a human or an ML algorithm).
The way to do this would be to shuffle and split the data in some way... let's say into one group with 70 percent of the data and a second with 30 percent of the data.
Once you do this, you want to train some algorithm on the first group (the training set) and use the second group (the test set) to verify the accuracy of your algorithm.
So how do you choose an algorithm? That's the trickiest part. Only you can say which is best for your particular scenario, given full access to the data. However, given that your output seems to be very discrete (let's say at most 5 actions), this is a supervised learning classification problem. I'd do some analysis using one of these algorithms (SVM, kNN and decision trees are a few popular ones), and use an evaluation metric such as the F1 score (R² would be the analogue for a regression problem) to determine how well your fitted algorithm performs on your test set.
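To make those steps concrete, here is a small sketch in Python with scikit-learn; the same flow can be transposed to .NET / ML.NET. Framing the features as recent balances and the label as the resulting action is an assumption about how the data would be laid out:

```python
import pandas as pd
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Hypothetical export: one row per user per day with the balance and the action taken.
df = pd.read_csv("balances.csv")       # columns: user, date, balance, action
df = df.sort_values(["user", "date"])

# Use the current and previous two balances as features, the action as the label.
df["balance_prev1"] = df.groupby("user")["balance"].shift(1)
df["balance_prev2"] = df.groupby("user")["balance"].shift(2)
df = df.dropna()

X = df[["balance", "balance_prev1", "balance_prev2"]]
y = df["action"]

# Shuffle and split 70/30, train, then score on the held-out 30 percent.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, shuffle=True)
clf = DecisionTreeClassifier(max_depth=5).fit(X_train, y_train)
print("F1 (macro):", f1_score(y_test, clf.predict(X_test), average="macro"))
```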
To perform supervised Machine Learning in .NET, the ML.NET Framework has been announced, and a preview is now available (as of 7th May 2018).
A good starting place for ML.NET is here.