How to use a machine learning algorithm for different input data? - machine-learning

My title is "How to use a machine learning algorithm for different input data?".
For example, suppose we have an algorithm called "Email spam detector". How can we apply that algorithm to 1M users, each with different input data? Each user generates their own data, so instead of training 1M models, one for every single user, is there a method to train once for the different input data?
To clarify what I mean by different input data:
For example,
I have time-series data collected from a server called "Server A"; the data consists of attributes like CPU level, RAM level, protocol, and so on. I feed the data collected from Server A into "model X" to predict the server's usage.
Now I have a data center with hundreds of servers. Is there any way to train on these hundreds of servers' data once, with the same model (model X)? Of course, I don't want to train model X hundreds of times to get the predictions.

Are there any ways to train these hundreds of servers' data once with the same model (model X)?
Sure! You can use Spark Streaming to ingest your time-series data from the different servers/sources, and Spark MLlib to process the data and train a single model (model X) stored in HDFS.
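Independent of the Spark tooling, the core idea can be sketched with scikit-learn: pool every server's rows into one training set and keep the server identity as an ordinary feature, so one model X covers all servers. The feature names and numbers below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Fake telemetry pooled from 5 servers: (server_id, cpu_level, ram_level).
n = 300
server_id = rng.integers(0, 5, size=n)
cpu = rng.random(n)
ram = rng.random(n)
usage = 0.6 * cpu + 0.3 * ram + 0.02 * server_id + rng.normal(0, 0.01, n)

# One model X for all servers: the server ID is just another feature,
# so per-server quirks are learned without training hundreds of models.
X = np.column_stack([server_id, cpu, ram])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, usage)

# Predict usage for a new reading from server 3.
pred = model.predict([[3, 0.8, 0.5]])
print(pred)
```

A tree-based model can use the integer server ID directly; for a linear model you would typically one-hot encode it instead.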

Related

Multiple data from different sources on time series forecasting

I have an interesting question about time series forecasting. Suppose someone has temporal data from multiple sensors, each covering, e.g., 2010 to 2015. If one wanted to train a forecasting model using all the data from those different sensors, how should the data be organized? If the data sets were simply stacked, the result would be sensorDataset1 (2010–2015), followed by sensorDataset2 (2010–2015), with the cycle starting over for sensors 3, 4, ..., n. Is this a problem for time series data or not?
If yes, what is the proper way to handle this?
I tried stacking up all the data and training the model anyway, and the error is actually good, but I wonder whether that approach is valid.
Try resampling your individual sensor data sets to a common period.
For example, if sensor 1 has a data entry every 5 minutes and sensor 2 has an entry every 10 minutes, resample both to a common period across all sensors. Each data point you show your model will then carry better-quality data, which should improve the model's performance.
How much this affects your error depends on what you're trying to forecast and on the relationships that exist between the variables in your data.
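As a rough illustration with pandas (the sensor values are made up), resampling both series to a common 10-minute period aligns them so each timestamp carries one value per sensor:

```python
import numpy as np
import pandas as pd

# Two sensors sampled at different rates over the same hour.
idx5 = pd.date_range("2015-01-01", periods=12, freq="5min")
idx10 = pd.date_range("2015-01-01", periods=6, freq="10min")
s1 = pd.Series(np.arange(12.0), index=idx5, name="sensor1")
s2 = pd.Series(np.arange(6.0), index=idx10, name="sensor2")

# Resample both to a common 10-minute period, then align as columns,
# so every timestamp the model sees has one value per sensor.
common = pd.concat(
    [s1.resample("10min").mean(), s2.resample("10min").mean()], axis=1
)
print(common.shape)  # (6, 2)
```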

Temporal train-test split for forecasting

I know this may be a basic question, but I want to know whether I am using the train/test split correctly.
Say I have data that ends at 2019, and I want to predict values in the next 5 years.
The graph I produced is provided below:
My training data covers 1996-2014 and my test data covers 2014-2019. The model's predictions fit the test data very well. I then used this model to make predictions for 2019-2024.
Is this the correct way to do it, or should my predictions also cover 2014-2019, just like the test data?
The test/validation data is there for you to evaluate which predictor to use. Once you have decided which model to use, you should retrain it on the whole data set (1996-2019) so that you do not lose possibly valuable knowledge from 2014-2019. Take into account that when working with time series, the newer part of the series usually matters more for your prediction than the older values.
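A minimal numpy sketch of that workflow, using an invented noiseless linear trend in place of real data: split temporally for evaluation, then refit on the full 1996-2019 range before forecasting 2020-2024:

```python
import numpy as np

# Toy yearly series; a noiseless linear trend stands in for real data.
years = np.arange(1996, 2020)          # 1996-2019 inclusive
values = 2.0 * (years - 1996) + 5.0

# Temporal split: never shuffle; train on the past, test on the future.
train = years < 2014
coef = np.polyfit(years[train], values[train], deg=1)
test_pred = np.polyval(coef, years[~train])
test_err = np.mean(np.abs(test_pred - values[~train]))

# After model selection, refit on ALL of 1996-2019 before forecasting
# 2020-2024, so the most recent years are not wasted.
coef_full = np.polyfit(years, values, deg=1)
future = np.polyval(coef_full, np.arange(2020, 2025))
print(round(test_err, 6), future.round(1))
```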

Is it a bad idea to use the cluster ID from clustering text data with k-means as a feature in your supervised learning model?

I am building a model that will predict the lead time of products flowing through a pipeline.
I have a lot of different features; one is a string containing a few words about the purpose of the product (often abbreviations, the name of the application it will be part of, and so forth). I have not previously used this field at all when doing feature engineering.
I was thinking that it would be nice to do some type of clustering on this data, and then use the cluster ID as a feature for my model, perhaps the lead time is correlated with the type of info present in that field.
Here was my line of thinking:
1) Cleaning & tokenizing text.
2) TF-IDF
3) Clustering
But after thinking more about it, is it a bad idea? Because the clustering was based on the old data, new words introduced in the new data will not be captured by the clustering algorithm, and the data should perhaps be clustered differently by then. Does this mean that I would have to retrain the entire model (the k-means model and then the supervised model) whenever I want to predict new data points? Are there any best practices for this?
Are there better ways of finding clusters for text data to use as features in a supervised model?
I understand the urge to first use an unsupervised clustering algorithm to see for yourself which clusters are found. And of course you can try whether such an approach helps your task.
But since you have labeled data, you can pass the product description to your model without an intermediate clustering step. Your supervised algorithm can then learn for itself whether and how this feature helps with your task (of course, preprocessing such as stop-word removal, cleaning, tokenizing, and feature extraction still needs to be done).
Depending on your text descriptions, I could also imagine that simple sequence embeddings would work for feature extraction. An embedding is a vector of, for example, 300 dimensions that describes words in such a way that "hp office printer" and "canon ink jet" are close to each other, while "nice leather bag" is farther away from the other two phrases. fastText word embeddings, for example, come pre-trained for English. To get a single embedding for a sequence such as "hp office printer", one can take the average of the three word vectors (there are other ways to get an embedding for a whole sequence, for example doc2vec).
But in the end you need to run tests to choose your features and methods!
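For the clustering route, scikit-learn makes steps 1-3 a short pipeline, and the fitted pipeline can assign a cluster ID to new descriptions without re-clustering (words unseen at fit time are simply ignored by the vectorizer). The example strings below are invented:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Illustrative product-description strings (made up for this sketch).
docs = [
    "billing app backend service",
    "billing service api",
    "mobile ui frontend app",
    "frontend ui widget",
]

# Steps 1-3 from the question: clean/tokenize (TfidfVectorizer's default
# tokenizer), TF-IDF, then k-means clustering.
pipe = make_pipeline(
    TfidfVectorizer(),
    KMeans(n_clusters=2, n_init=10, random_state=0),
)
pipe.fit(docs)

# The fitted pipeline assigns cluster IDs to NEW descriptions directly,
# which can then be used as a feature in the supervised model.
cluster_ids = pipe.predict(["billing api service", "ui frontend"])
print(cluster_ids)
```

Note this does not remove the retraining concern from the question: if the vocabulary drifts, both the vectorizer and the k-means model eventually need refitting.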

Is it a good practice to use your full data set for predictions?

I know you're supposed to separate your training data from your testing data, but when you make predictions with your model is it OK to use the entire data set?
I assume separating your training and testing data is valuable for assessing the accuracy and prediction strength of different models, but once you've chosen a model I can't think of any downsides to using the full data set for predictions.
You can use the full data for prediction, but it is better to retain the indexes of the train and test rows. Here are the pros and cons:
Pro:
If you retain the indexes of the rows belonging to the train and test data, you only need to predict once (saving time) to get all results. You can then calculate performance indicators (R2/MAE/AUC/F1/precision/recall, etc.) for the train and test data separately by subsetting the actual and predicted values with the train and test indexes.
Cons:
If you calculate a performance indicator over the entire data set (without clearly separating train and test via the indexes), you will get overly optimistic estimates. This happens because the model (having been trained on the train data) scores well on the train data, which, depending on the train/test split percentage, yields illusorily good indicator values.
Predicting over a large data set at once may also cause a memory spike, which can lead to a crash in all-objects-in-memory languages like R.
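A small scikit-learn sketch of the index-retaining approach on made-up data: predict once over the full set, then score train and test separately via the saved indexes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.random((100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 100)

# Keep explicit index arrays for the split instead of copying the data.
idx = rng.permutation(100)
train_idx, test_idx = idx[:70], idx[70:]

model = LinearRegression().fit(X[train_idx], y[train_idx])

# Predict once over the FULL data set...
pred = model.predict(X)

# ...then score train and test separately by subsetting with the indexes,
# so the test metric is not inflated by rows the model was fitted on.
r2_train = r2_score(y[train_idx], pred[train_idx])
r2_test = r2_score(y[test_idx], pred[test_idx])
print(round(r2_train, 3), round(r2_test, 3))
```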
In general, you're right - when you've finished selecting your model and tuning the parameters, you should use all of your data to actually build the model (exception below).
The reason for dividing data into train and test is that, without out-of-bag samples, high-variance algorithms will do better than low-variance ones, almost by definition. Consequently, it's necessary to split data into train and test parts for questions such as:
deciding whether kernel-SVR is better or worse than linear regression, for your data
tuning the parameters of kernel-SVR
However, once these questions are settled, then in general, as long as your data is generated by the same process, the more of it you train on, the better your predictions will be, and you should use all of it.
An exception is the case where the data is, say, non-stationary. Suppose you're training for the stock market and you have data from 10 years ago; it is unclear whether the process has remained the same in the meantime. In this case, you might be harming your prediction by including more data.
Yes, there are techniques for doing this, e.g. k-fold cross-validation:
One of the main reasons for using cross-validation instead of using the conventional validation (e.g. partitioning the data set into two sets of 70% for training and 30% for test) is that there is not enough data available to partition it into separate training and test sets without losing significant modelling or testing capability. In these cases, a fair way to properly estimate model prediction performance is to use cross-validation as a powerful general technique.
That said, there may not be a good reason for doing so if you have plenty of data, because it means that the model you're using hasn't actually been tested on real data. You're inferring that it probably will perform well, since models trained using the same methods on less data also performed well. That's not always a safe assumption. Machine learning algorithms can be sensitive in ways you wouldn't expect a priori. Unless you're very starved for data, there's really no reason for it.
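For reference, scikit-learn's cross_val_score runs such a k-fold evaluation in one call; the data below is synthetic:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((60, 4))
y = X.sum(axis=1) + rng.normal(0, 0.05, 60)

# 5-fold CV: every row serves as held-out test data exactly once, so even
# a small data set yields an honest performance estimate.
scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5)
print(scores.shape, round(scores.mean(), 3))
```

For time series specifically, a shuffled k-fold leaks the future into the past; an ordered scheme such as scikit-learn's TimeSeriesSplit is the usual substitute.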

Logistic Regression Using Mahout

I've just read this interesting article about logistic regression using Mahout. The tutorial is clear to me... but what would a real use case look like? For instance, when a [web] application first starts, some training data needs to be processed... and the result is kept in an OnlineLogisticRegression instance. Then, to test new data, one just needs to invoke OnlineLogisticRegression.classifyFull and look at the probability (a value between 0 and 1) that the data falls into a given classification.
But what if I want to improve a model and train it with additional data while the [web] application is online? The idea would be to train the model with additional data once a week or similar in order to improve accuracy. What's the correct way to implement such a mechanism? Are there significant performance issues?
I don't know what your use case is, but I have implemented this as follows.
I used Naive Bayes; the current flow serves my model online.
Every 15 days, I add the new training data to the previous training data and generate a new model. Once the new model is created, a cron job swaps it in for the online model.
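A minimal Python sketch of that retrain-and-swap flow, substituting scikit-learn's MultinomialNB for Mahout's Naive Bayes; the data, features, and file path here are all invented:

```python
import os
import pickle
import tempfile

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def retrain(old_X, old_y, new_X, new_y, model_path):
    """Merge the new batch into the accumulated training data, fit a
    fresh model, and replace the serving model file (the step a cron
    job would run every 15 days)."""
    X = np.vstack([old_X, new_X])
    y = np.concatenate([old_y, new_y])
    model = MultinomialNB().fit(X, y)
    with open(model_path, "wb") as f:   # swap in the new model
        pickle.dump(model, f)
    return model

# Toy word-count features: class 0 is feature-0 heavy, class 1 feature-1 heavy.
old_X = np.array([[3, 0], [0, 4], [2, 1]]); old_y = np.array([0, 1, 0])
new_X = np.array([[0, 5], [4, 0]]);         new_y = np.array([1, 0])

path = os.path.join(tempfile.gettempdir(), "model.pkl")
model = retrain(old_X, old_y, new_X, new_y, path)
print(model.predict([[5, 0], [0, 6]]))
```

The web application keeps serving the previous pickle until the swap completes, so retraining cost does not block online predictions.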
