I have a time-dependent outcome variable (website visits) and time-dependent covariates (marketing spending on different websites). I have no categorical covariates.
Which model is best for this kind of analysis? I am trying to quantify the effect of spending on each channel on website visits.
I have come across the Cox regression model; would that be appropriate?
Consider use cases like:
lending money - an ML model predicts that lending money to an individual is safe.
predictive maintenance - a machine learning model predicts that a piece of equipment will not fail.
In the above cases, it is easy to tell whether the ML model's prediction was correct, based on whether the money was paid back and whether the equipment failed.
How is a model's performance evaluated in the following scenarios? Am I correct that it is not possible to evaluate performance in these cases?
lending money - the ML model predicts that lending money to an individual is NOT safe, and the money is not lent.
predictive maintenance - a machine learning model predicts that the equipment will fail, and the equipment is therefore replaced.
In general, would I be correct in saying that some predictions can be evaluated but some can't be? For scenarios where performance can't be evaluated, how do businesses ensure that they are not losing opportunities due to incorrect predictions? I am guessing that there is no way to do this, as this problem exists in general even without the use of ML models. I am just putting my doubt/question here to validate my thought process.
If you think about it, both groups refer to the same models, just different use cases. If you take the model predicting whether it's safe to lend money and invert its prediction, you get a prediction of whether it's NOT safe to lend money.
And if you use your model to predict safe lending, you would still care about increasing recall (i.e., reducing the number of safe cases that are classified as unsafe).
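To make that concrete, here is a minimal sketch (hypothetical labels, scikit-learn assumed) showing that both framings share the same confusion matrix, and that recall for the "safe" class is just the share of truly safe applicants the model approves:

```python
from sklearn.metrics import confusion_matrix, recall_score

# Hypothetical ground truth and predictions: 1 = safe to lend, 0 = not safe
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]

# Recall for the "safe" class: the share of truly safe applicants we approve.
print("recall (safe class):", recall_score(y_true, y_pred, pos_label=1))

# Inverting the task ("is it NOT safe?") only relabels the same outcomes;
# the confusion matrix holds identical information with the classes swapped.
y_true_inv = [1 - y for y in y_true]
y_pred_inv = [1 - y for y in y_pred]
print(confusion_matrix(y_true, y_pred))
print(confusion_matrix(y_true_inv, y_pred_inv))
```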
Some predictions can't be evaluated if we act on them (if we denied lending, we can't tell whether the model was right). Another related problem is gathering a good dataset to train the model further: usually we would train the model on the data we observed, and if we deny 90% of applications based on the current model's predictions, then in the future we can only train the next model on the remaining 10% of applications.
However, there are some ways to work around this:
Turning the model off for some percentage of applications. Let's say a random 1% of applications are approved regardless of the model's prediction. This gives us an unbiased dataset on which to evaluate the model (see the sketch after this list).
Using historical data that was gathered before the model was introduced.
Finding a proxy metric that correlates with the business metric but is easier to evaluate. As an example, you could measure the percentage of applicants who made late payments (with other lenders, not us) within 1 year of their application, among the applicants who were approved vs. rejected by our model. The larger the difference in this metric between the rejected and approved groups, the better our model performs. But for this to work, you have to show that this metric correlates with the probability of our lending being unsafe.
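As a rough sketch of the first workaround, assuming a scikit-learn-style model with predict_proba and a hypothetical decide helper, a small random slice of applications bypasses the model entirely so that their outcomes can later be used for unbiased evaluation:

```python
import random

EXPLORATION_RATE = 0.01  # fraction of applications approved regardless of the model

def decide(application, model, threshold=0.5):
    """Return (approved, used_model) for a single application.

    A scikit-learn-style `model.predict_proba` is assumed; a small random
    slice of applications bypasses the model so that their outcomes later
    form an unbiased dataset for evaluation and retraining.
    """
    if random.random() < EXPLORATION_RATE:
        # Exploration: approve without consulting the model and flag the
        # record so it can be separated out for unbiased evaluation.
        return True, False
    p_safe = model.predict_proba([application])[0][1]  # probability of "safe"
    return p_safe >= threshold, True
```

The exploration rate is a business decision: a larger slice yields better evaluation data at the cost of more risky approvals.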
I have a classification problem and I'm using logistic regression (I tested it among other models and it was the best). I collect information from gaming sites and predict whether a user is a potential buyer of certain games.
The problem is that lately some of the sites from which I get this information (and from which I got the data to train the model) change weekly, and as a result part of the database I use for prediction is "partially" different from the one used for training (with different information for each user, in this case). Since these sites started to change, the model's predictive ability has dropped considerably.
To solve this, one alternative would be, of course, to retrain the model. It's something we're considering, although we would have to do it fairly often, given that the sites change considerably every couple of weeks.
Another solution we considered was to use algorithms that could adapt to these changes so that we could retrain the model less frequently.
Two options raised were neural networks for classification or adapting some genetic algorithm. However, I have read that genetic algorithms are very expensive and not a good option for classification problems, since they may not converge.
Does anyone have any suggestions for a modeling approach that we can test?
The basic process for most supervised machine learning problems is to divide the dataset into a training set and a test set, train a model on the training set, and evaluate its performance on the test set. But in many (most) settings, disease diagnosis for example, more data will become available in the future. How can I use this data to improve the model? Do I need to retrain from scratch? If so, when would be the appropriate time to retrain (e.g., after a specific percentage of additional data points)?
Let's take the example of predicting house prices. House prices change all the time. A model trained six months ago on the data available then could give terrible predictions today. For house prices, it's imperative that you have up-to-date information to train your models.
When designing a machine learning system it is important to understand how your data is going to change over time. A well-architected system should take this into account, and a plan should be put in place for keeping your models updated.
Manual retraining
One way to maintain models with fresh data is to train and deploy your models using the same process you used to build your models in the first place. As you can imagine, this process can be time-consuming. How often do you retrain your models? Weekly? Daily? There is a balance between cost and benefit. Costs of model retraining include:
Computational Costs
Labor Costs
Implementation Costs
On the other hand, as you are manually retraining your models you may discover a new algorithm or a different set of features that provide improved accuracy.
Continuous learning
Another way to keep your models up-to-date is to have an automated system to continuously evaluate and retrain your models. This type of system is often referred to as continuous learning, and may look something like this:
Save new training data as you receive it. For example, if you are receiving updated prices of houses on the market, save that information to a database.
When you have enough new data, evaluate your model's accuracy on it.
If you see the accuracy of your model degrading over time, use the new data, or a combination of the new and old training data, to build and deploy a new model.
The benefit to a continuous learning system is that it can be completely automated.
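A minimal sketch of one such evaluate-and-retrain pass, assuming hypothetical load_current_model, load_new_data, and retrain_and_deploy helpers standing in for whatever your pipeline actually provides, might look like this:

```python
from sklearn.metrics import mean_absolute_error

ACCURACY_DROP_TOLERANCE = 0.10  # retrain once error grows by more than 10%

def continuous_learning_step(load_current_model, load_new_data,
                             retrain_and_deploy, baseline_error):
    """One scheduled pass of an automated evaluate-and-retrain loop.

    The three callables are hypothetical stand-ins for loading the deployed
    model, pulling the newly saved training data, and rebuilding/deploying.
    """
    model = load_current_model()
    X_new, y_new = load_new_data()  # e.g. recently saved house prices
    current_error = mean_absolute_error(y_new, model.predict(X_new))

    # If accuracy has degraded beyond the tolerance, rebuild the model on
    # the new data (or a combination of new and old data) and deploy it.
    if current_error > baseline_error * (1 + ACCURACY_DROP_TOLERANCE):
        retrain_and_deploy(X_new, y_new)
    return current_error
```

The tolerance and the amount of new data required before each check are tuning knobs that trade retraining cost against how quickly the system reacts to drift.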
I just have a general question:
In a previous job, I was tasked with building a series of non-linear models to quantify the impact of certain factors on the number of medical claims filed. We had a set of variables we used in all models (e.g., state, year, sex, etc.). We used all of our data to build these models, meaning we never split the data into training and test sets.
If I were to go back in time to this job and split the data into training and test sets, what would the advantages of that approach be, besides assessing the prediction accuracy of our models? What is an argument for not splitting the data and then fitting the model? I never really thought about it much until now - curious as to why we didn't take that approach.
Thanks!
The sole purpose of setting aside a test set is to assess prediction accuracy. However, there is more to this than just checking the number and thinking "huh, that's how my model performs"!
Knowing how your model performs at a given moment gives you an important benchmark for potential improvements of the model. How will you know otherwise whether adding a feature increases model performance? Moreover, how do you know otherwise whether your model is at all better than mere random guessing? Sometimes, extremely simple models outperform the more complex ones.
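For example, a held-out test set is exactly what lets you compare a real model against a trivial baseline; a minimal sketch with scikit-learn (using a built-in dataset purely for illustration) could look like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A trivial "always predict the majority class" baseline...
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
# ...versus an actual model.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# The held-out test set provides the benchmark that tells the two apart.
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("model accuracy:   ", accuracy_score(y_test, model.predict(X_test)))
```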
Another thing is removal of features or observations. This depends a bit on the kind of models you use, but some models (e.g., k-Nearest-Neighbors) perform significantly better if you remove unimportant features from the data. Similarly, suppose you add more training data and suddenly your model's test performance drops significantly. Perhaps there is something wrong with the new observations? You should be aware of these things.
The only argument I can think of for not using a test set is that otherwise you'd have too little training data for the model to perform optimally.
I am working on a personal project in which I log data from my city's bike rental service in a MySQL database. A script runs every thirty minutes and logs, for every bike station, how many free bikes it has. Then, in my database, I average each station's availability for each day at that given time, which, as of today, gives an approximate prediction based on two months of logged data.
I've read a bit about machine learning and I'd like to learn more. Would it be possible to train a model with my data and make better predictions with ML in the future?
The answer is very likely yes.
The first step is to have some data, and it sounds like you do. You have a response (free bikes) and some features on which it varies (time, location). You have already applied a basic conditional means model by averaging values over factors.
You might augment the data you know about locations with some calendar events like holiday or local event flags.
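As a small illustration (the column names and event dates here are hypothetical), weekend and holiday/event flags can be derived from the timestamp with pandas:

```python
import pandas as pd

# Assume a DataFrame with one row per half-hourly observation per station,
# with a timestamp column named "observed_at" (names are hypothetical).
df = pd.DataFrame({
    "observed_at": pd.to_datetime(["2024-05-01 08:00", "2024-05-04 08:00"]),
    "station_id": [12, 12],
    "free_bikes": [3, 9],
})

# Hand-maintained set of local holidays / events for your city (example date).
local_events = {pd.Timestamp("2024-05-01").date()}

df["weekday"] = df["observed_at"].dt.dayofweek
df["is_weekend"] = df["weekday"] >= 5
df["is_holiday_or_event"] = df["observed_at"].dt.date.isin(local_events)
```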
Prepare a data set with one row per observation, and benchmark the accuracy of your current forecasting process for a period of time on a metric like Mean Absolute Percentage Error (MAPE). Ensure your predictions (averages) for the validation period do not include any of the data within the validation period!
Use the data for this period to validate other models you try.
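A minimal sketch of that benchmark, with made-up numbers standing in for your actual counts and your current per-station, per-time-of-day averages, might be:

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error (undefined when an actual value is 0)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

# Made-up numbers: observed free bikes in the validation period vs. what the
# current averaging process would have predicted for those same times.
actual_free_bikes = [4, 7, 2, 9]
average_prediction = [5, 6, 3, 8]
print(f"Current process MAPE: {mape(actual_free_bikes, average_prediction):.1f}%")
```

Note that MAPE breaks down when a station genuinely has zero free bikes, so you may want to exclude or specially handle those observations.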
Split off part of the remaining data into a test set, and use the rest for training. If you have a lot of data, then a common training/test split is 70/30. If the data is small you might go down to 90/10.
Learn one or more machine learning models on the training set, checking performance periodically on the test set to ensure generalization performance is still improving. Many training algorithm implementations will manage this for you and stop automatically when test performance starts to decrease due to overfitting. This is a big benefit of machine learning over your current straight average: the ability to learn what generalizes and throw away what does not.
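As one possible sketch, scikit-learn's gradient boosting offers this kind of built-in early stopping (synthetic data stands in for your prepared bike dataset so the example runs on its own):

```python
from sklearn.datasets import make_regression  # stand-in for your bike data
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# In practice X and y would be your prepared features and free-bike counts;
# synthetic data is used here only so the sketch is self-contained.
X, y = make_regression(n_samples=2000, n_features=8, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# n_iter_no_change tells the implementation to hold out a slice of the
# training data and stop adding trees once held-out performance stalls.
model = GradientBoostingRegressor(n_estimators=2000, validation_fraction=0.2,
                                  n_iter_no_change=20, random_state=0)
model.fit(X_train, y_train)

print("boosting rounds actually used:", model.n_estimators_)
print("test-set R^2:", model.score(X_test, y_test))
```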
Validate each model by predicting over the validation set, computing the MAPE, and comparing it to the MAPE of your original process on the same period. Good luck, and enjoy getting to know machine learning!