Find points to split n-dimensional time series based on labeled data - machine-learning

I'm at the starting point of a bigger project to identify common patterns in time series.
The goal is to automatically find split points that divide a time series into commonly occurring patterns. (Later I want to cut the time series at these split points and use the segments in between independently.)
One time series consists of:
n data series sampled at a fixed time interval as input
The x-axis represents the interval indices from 0 to m
The y-axis represents the values of the specific time series
For example, it could look like this:
pos_x,pos_y,pos_z,force_x,force_y,force_z,speed,is_split_point
2, 3, 4, 0.4232, 0.4432, 0, 0.6, false
2, 3, 4, 0.4232, 0.4432, 0, 0.6, false
2, 3, 4, 0.4232, 0.4432, 0, 0.6, true
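For illustration, here is a minimal sketch of how such labeled data could be loaded and windowed around the split points (pandas assumed; the file name and window size are placeholders):

import pandas as pd

# Load the labeled series; "series.csv" is a placeholder file name.
df = pd.read_csv("series.csv", true_values=["true"], false_values=["false"])

window = 25  # samples to keep on each side of a split point (arbitrary choice)
split_windows = []
for idx in df.index[df["is_split_point"]]:
    lo, hi = idx - window, idx + window + 1
    if lo >= 0 and hi <= len(df):
        # One (2*window+1, n_dims) array per labeled split point.
        split_windows.append(df.iloc[lo:hi].drop(columns="is_split_point").to_numpy())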
My best bet is to solve this problem with machine learning, because I need a general approach that detects the patterns based on what users have selected beforehand.
To that end, I have a lot of labeled data where the split points were already set manually by users.
Currently, I have two ideas to solve this problem:
Analyzing the data around the split points in the labeled data to derive correlations between the different data dimensions and use these as new features on unlabeled data.
Analyzing the patterns between two keyframes to find similar patterns in unlabeled data.
I prefer 1. because I think it's more important to find out what defines a split point.
I'm curious whether neural networks are well suited for this task.
I'm not asking to get a complete solution to the problem, I just want a second opinion on this. I'm relatively new to machine learning, which is why it's a bit overwhelming to find a good starting point for this problem. I'm very happy about any ideas, techniques and useful resources that cover this problem and could give me a good starting point.

Whooo, that is a great question.
As a matter of fact, I have some ideas to share with you; some of them have been tested and worked on different time-based problems with anomaly events that I have encountered.
First, analyzing the data is always a great approach for better understanding the problem, regardless of which solution you end up using. This way you ensure that you don't feed your models garbage. Useful tools for this analysis are peaks in a truncated past window, derivatives, etc.
Then you can plot the data using t-SNE and see whether there is some kind of separation in it.
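A minimal sketch of that kind of check, assuming scikit-learn and matplotlib; X and y below are placeholders for your feature windows and is_split_point labels:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholders: one flattened feature window per row, with its is_split_point label.
X = np.random.rand(200, 50)
y = np.random.randint(0, 2, size=200)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
plt.scatter(emb[:, 0], emb[:, 1], c=y, cmap="coolwarm", s=10)
plt.title("t-SNE of windows, colored by is_split_point")
plt.show()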
However, naively using neural networks can be problematic, since you have a small number of split points and a large number of non-split points (a strong class imbalance).
You can use LSTMs and train them in a many-to-one configuration, where you create a balanced number of positive and negative examples. The LSTMs will help you overcome the varying length of the examples and give more weight to the time domain.
Going in that direction, you can take a truncated past window and turn it into one example with is_split_point as the label, and train an i.i.d. model by drawing samples in a balanced way. DNNs also work in that configuration.
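A minimal sketch of that many-to-one setup, assuming Keras; the window length, layer sizes and the balancing step are illustrative choices, and X/y are placeholders for your truncated-past windows and labels:

import numpy as np
from tensorflow import keras

# Placeholders: (num_windows, window_len, n_features) past windows and their labels.
X = np.random.rand(1000, 50, 7).astype("float32")
y = np.random.randint(0, 2, size=1000)

# Balance positives and negatives by downsampling the larger class.
pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
n = min(len(pos), len(neg))
keep = np.concatenate([np.random.choice(pos, n, replace=False),
                       np.random.choice(neg, n, replace=False)])
X_bal, y_bal = X[keep], y[keep]

model = keras.Sequential([
    keras.Input(shape=X.shape[1:]),
    keras.layers.LSTM(32),                      # many-to-one: only the final hidden state is used
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=5, batch_size=32, validation_split=0.2)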
All of the above are approaches I have experimented with and found useful.
I hope this helps. GOOD LUCK!

Related

Classifying pattern in time series

I am dealing with a repeating pattern in time series data. My goal is to classify every pattern as 1, and anything that does not follow the pattern as 0. The pattern repeats itself between every two peaks as shown below in the image.
The patterns are not necessarily fixed in sample size but stay within an approximate sample size, say 500 samples ±10%. The heights of the peaks can change. The random signal (I call it random, but it basically means anything not following the pattern shape) can also change in value.
The data is from a sensor. Patterns are when the device is working smoothly. If the device is malfunctioning, then I will not see the patterns and will get something similar to the class 0 I have shown in the image.
What I have done so far is build a logistic regression model. Here are my steps for data preparation:
Grab the data between every two consecutive peaks, resample it to a fixed length of 100 samples, and scale it to [0, 1]. This is class 1 (see the sketch after these steps).
Repeat step 1 on the data between valleys and call it class 0.
Generate some noise and repeat step 1 on chunks of 500 samples to build extra class 0 data.
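A sketch of step 1 (numpy assumed; the target length of 100 samples follows the description above):

import numpy as np

def prepare_segment(segment, target_len=100):
    # Resample a 1-D segment to a fixed length via linear interpolation,
    # then min-max scale it to [0, 1].
    segment = np.asarray(segment, dtype=float)
    grid = np.linspace(0, len(segment) - 1, target_len)
    resampled = np.interp(grid, np.arange(len(segment)), segment)
    lo, hi = resampled.min(), resampled.max()
    return (resampled - lo) / (hi - lo) if hi > lo else np.zeros(target_len)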
The bottom figure shows my predictions on the test dataset. The prediction on the noise chunk is not great, and I am worried that on real data I may get even more false positives. Any idea how I can improve my predictions? Is there a better approach when no class 0 data is available?
I have seen a similar question here. My understanding of Hidden Markov Models is limited, but I believe they are used to predict future data. My goal is to classify a sliding window of 500 samples throughout my data.
I have some proposals that you could try out.
First, I think recurrent neural networks (e.g. LSTMs) are often used in this field. But I have also heard that some people work with tree-based methods like LightGBM (I think Aileen Nielsen uses this approach).
So if you don't want to dive into neural networks, which is probably not necessary because your signals seem relatively easy to distinguish, you can give LightGBM (or other tree ensemble methods) a chance.
If you know the maximum length of a positive sample, you can use it to define the length of your "sliding sample window", which becomes your input vector (each sample in the window becomes one input feature). I would then add an extra attribute with the number of samples since the last peak occurred before the window. Then you can decide in how many steps you let the window slide over the data; this also depends on the memory you have available.
But it might be wise to skip some of the windows around a transition between positive and negative, because those states might not be unambiguously classifiable.
In case memory becomes an issue, neural networks could be the better choice, because they do not need all training data available at once for training, so you can generate your input data in batches. With tree-based methods this possibility does not exist, or only in a very limited way.
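A rough sketch of that windowed setup with LightGBM; the signal, labels, peak positions and step size below are placeholders:

import numpy as np
import lightgbm as lgb

signal = np.random.rand(5000)            # placeholder sensor signal
labels = np.random.randint(0, 2, 5000)   # placeholder per-sample class labels
peaks = np.where(signal > 0.99)[0]       # placeholder peak positions

win, step = 500, 50
X, y = [], []
for start in range(0, len(signal) - win, step):
    end = start + win
    prev_peaks = peaks[peaks < start]
    since_peak = start - prev_peaks[-1] if len(prev_peaks) else win  # samples since last peak
    X.append(np.concatenate([signal[start:end], [since_peak]]))      # window samples + extra feature
    y.append(labels[end - 1])

clf = lgb.LGBMClassifier(n_estimators=200)
clf.fit(np.array(X), np.array(y))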
I'm not sure of what you are trying to achieve.
If you want to characterize what is a peak and what is not - which is an after-the-fact classification - then you can use a simple rule to define peaks, such as signal(t) - average(signal, t-N to t) > T, with T a certain threshold and N the number of data points to look back over.
This would qualify what is a peak (class 1) and what is not (class 0), and hence classifies the patterns.
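A minimal sketch of that rule with numpy; N and T are values you would have to tune:

import numpy as np

def classify_peaks(signal, N=50, T=0.5):
    # Flag t as class 1 when signal(t) minus the trailing N-sample average exceeds T.
    signal = np.asarray(signal, dtype=float)
    flags = np.zeros(len(signal), dtype=int)
    for t in range(N, len(signal)):
        if signal[t] - signal[t - N:t].mean() > T:
            flags[t] = 1
    return flags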
If your goal is to predict, a few time units ahead of a peak at time t, that the peak is going to happen, using, say, data from t-n1 to t-n2 as features, then logistic regression is not necessarily the best choice.
To find the right model, start by visualizing the features you have from t-n1 to t-n2 for every peak(t) and see whether there is any pattern you can find. It can be anything:
was there a peak in the n3 time units before t?
is there a trend?
was there an outlier (e.g. after transforming your data exponentially)?
In order to compare these patterns, consider normalizing them so that the n2-n1 data points range from 0 to 1, for example.
If you find a pattern visually, then you will know what kind of model is likely to work and on which features.
If you don't, then it's likely that the white noise you added will do just as well, so you might not find a good prediction model.
However, your bottom graph is not so bad; you have only 2 major false positives out of more than 15 predictions, which hints that better feature engineering could help.

How to reduce position changes after dimensionality reduction?

Disclaimer: I'm a machine learning beginner.
I'm working on visualizing high-dimensional data (text as tf-idf vectors) in 2D space. My goal is to label/modify those data points, recompute their positions after the modification, and update the 2D plot. The logic already works, but each iterative visualization is very different from the previous one, even though only 1 out of 28,000 features in 1 data point changed.
Some details about the project:
~1000 text documents/data points
~28,000 tf-idf vector features each
must compute pretty quickly (let's say < 3s) due to its interactive nature
Here are 2 images to illustrate the problem, one after step 1 and one after step 2.
I have tried several dimensionality reduction algorithms, including MDS, PCA, t-SNE, UMAP, LSI and autoencoders. The best results regarding computing time and visual representation came from UMAP, so I stuck with it for the most part.
Skimming some research papers I found this one with a similar problem (small change in high dimension resulting in big change in 2D):
https://ieeexplore.ieee.org/document/7539329
In summary, they use t-SNE and initialize each iterative step with the result of the first step.
First: How would I go about achieving this in actual code? Is this related to t-SNE's random_state?
Second: Is it possible to apply that strategy to other algorithms like UMAP? t-SNE takes much longer and wouldn't really fit the interactive use case.
Or is there some better solution I haven't thought of for this problem?
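For what it's worth, one way the initialization idea from the paper could look in code: both umap-learn and scikit-learn's TSNE accept an explicit init array, so the previous 2D layout can seed the next run (data shapes below are placeholders):

import numpy as np
import umap

X_old = np.random.rand(1000, 200).astype(np.float32)  # placeholder for the tf-idf matrix
X_new = X_old.copy()
X_new[0, 0] += 0.1                                     # one small edit to one document

# Step 1: embed the original data.
emb_old = umap.UMAP(n_components=2, random_state=42).fit_transform(X_old)

# Step 2: reuse the previous layout as the starting positions for the new run,
# so points only move as far as the changed data pushes them.
emb_new = umap.UMAP(n_components=2, init=emb_old, random_state=42).fit_transform(X_new)

# The same idea with t-SNE: sklearn.manifold.TSNE(n_components=2, init=emb_old).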

Applying machine learning to training data parameters

I'm new to machine learning. I understand that the model you attach to a certain set of inputs has parameters and choices that can be tuned/optimised, but those inputs obviously tie back to fields you generated by slicing and dicing whatever source data you had in a way that makes sense to you. But what if the way you decided to model and cut up your source data, and therefore your training data, isn't optimal? Are there ways or tools that extend the power of machine learning not only to the model, but also to the way the training data was created in the first place?
Say you're analysing the accelerometer, GPS, heart rate and surrounding topography data of someone moving. You want to try to determine where this person is likely to become exhausted and stop, assuming they'll continue moving in a straight line based on their trajectory, and that going up any hill will increase their heart rate to some point where they must stop. Whether they're running or walking obviously modifies these things.
So you cut up your data (feel free to correct how you'd do this, but it's less relevant to the main question):
Slice up the raw accelerometer data along the X, Y, Z axes for the past A seconds into B slices to try to profile it, probably applying a CNN to it, to determine whether the person is running or walking
Cut up the most recent C seconds of raw GPS data into a sequence of D (Lat, Long) pairs, each pair representing the average of E seconds of raw data
Based on the previous sequence, determine speed and trajectory, and determine the upcoming slope by slicing the next F distance (or G seconds, another option to determine) into H slices, profiling each, etc.
You get the idea. How do you effectively determine A through H, some of which would completely change the number and behaviour of the model inputs? I want to take out any bias I may have about what's right and let the process determine everything end-to-end. Are there practical solutions to this? Each time the data-creation parameters change, go back, re-generate the training data, feed it into the model, train it, tune it, over and over again until you get the best result.
What you call your bias is actually the greatest strength you have: you can include your knowledge of the system. Machine learning, including glorious deep learning, is, to put it bluntly, stupid. Although it can figure out features for you, interpreting them will be difficult.
Also, deep learning especially has a great capacity to memorise (not learn!) patterns, making it easy to overfit to the training data. Building machine learning models that generalise well in the real world is tough.
In most successful approaches (check with Master Kagglers) people create features. In your case I'd probably want to calculate the magnitude and direction of the force. Depending on the type of scenario, I might transform (Lat, Long) into the distance from a specific point (say, the point of origin/activation, or one established every minute), or maybe use a different coordinate system.
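A small sketch of that kind of hand-crafted feature (pandas/numpy assumed; the column names and the crude metres-per-degree conversion are placeholders):

import numpy as np
import pandas as pd

df = pd.DataFrame({  # placeholder accelerometer and GPS readings
    "acc_x": np.random.randn(100), "acc_y": np.random.randn(100), "acc_z": np.random.randn(100),
    "lat": 52.0 + np.random.randn(100) * 1e-4, "lon": 13.0 + np.random.randn(100) * 1e-4,
})

# Magnitude of the acceleration vector as a single feature.
df["acc_mag"] = np.sqrt(df.acc_x**2 + df.acc_y**2 + df.acc_z**2)

# Distance from the point of origin (rough equirectangular approximation).
lat0, lon0 = df.lat.iloc[0], df.lon.iloc[0]
df["dist_from_origin"] = np.hypot((df.lat - lat0) * 111_000,
                                  (df.lon - lon0) * 111_000 * np.cos(np.radians(lat0)))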
Since your data is a time series, I'd probably use something well suited to time series modelling that you can understand and troubleshoot. CNNs and the like are typically a last resort in the majority of cases.
If you really would like to automate it, check e.g. AutoKeras or Ludwig. When it comes to learning which features matter most, I'd recommend going with gradient boosting (GBDT).
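A sketch of using gradient boosting to see which engineered features matter, with scikit-learn; X, y and the feature names are placeholders:

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

X = np.random.rand(500, 6)           # placeholder engineered features
y = np.random.randint(0, 2, 500)     # placeholder "stopped / kept going" labels
names = ["acc_mag", "speed", "slope", "heart_rate", "dist_from_origin", "cadence"]

gbdt = GradientBoostingClassifier(n_estimators=200).fit(X, y)
for name, imp in sorted(zip(names, gbdt.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")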
I'd also recommend reading this article from Airbnb, which takes a deeper dive into the journey of building such systems and into feature engineering.

Input selection for neural networks

I am going to use an ANN for my work, in which I have a large dataset, let's say input [600x40] and output [600x6]. As one can see, the number of inputs (40) seems too high for an ANN; it may get trapped in a local minimum and/or increase the CPU time dramatically. Is there any way to select the most informative inputs?
As a first try, I used the following code in Matlab to find the cross-correlation between each pair of inputs:
[rho, ~] = corr(inputs, 'rows','pairwise')
However, I think this simple correlation cannot identify hidden, more complex relations between the inputs.
Any ideas?
First of all, 40 inputs is a very small space and should not be reduced. A large number of inputs is 100,000, not 40. Also, 600x40 is not a big dataset, nor one that "increases the CPU time dramatically"; if it learns slowly, then check your code, because that appears to be the problem, not your data.
Furthermore, feature selection is not a good way to go; you should use it only when gathering features is actually expensive. In any other scenario you are looking for dimensionality reduction, such as PCA, LDA, etc., although, as said before, your data should not be reduced - rather, you should consider getting more of it (new samples/new features).
Disclaimer: I'm with lejlot on this - you should get more data and more features instead of trying to remove features. Still, that doesn't answer your question, so here we go.
Try the most basic greedy approach: remove each feature in turn, retrain your ANN (several times, of course), and see whether your results get better or worse. Keep the removal that gives the best improvement. Repeat until removing features no longer yields any improvement. This will take a lot of time, so you may want to run it on a subset of your data (for example on 3 folds of a dataset split into 10 folds).
It's ugly, but sometimes it works.
I repeat what I said in the disclaimer - this is not the way to go.
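For what it's worth, a rough sketch of that greedy loop, using scikit-learn's MLPRegressor as a stand-in for the ANN and random placeholder data with the shapes from the question:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

X = np.random.rand(600, 40)   # placeholder inputs
Y = np.random.rand(600, 6)    # placeholder outputs

def score(feats):
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
    return cross_val_score(model, X[:, feats], Y, cv=3).mean()

features = list(range(X.shape[1]))
best = score(features)
while len(features) > 1:
    # Try removing each remaining feature and keep the removal that helps most.
    candidates = [(score([g for g in features if g != f]), f) for f in features]
    best_candidate, worst_feature = max(candidates)
    if best_candidate <= best:
        break  # no removal improves the score any more
    best = best_candidate
    features = [g for g in features if g != worst_feature]
print("selected features:", features)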

Clustering Baseline Comparison, KMeans

I'm working on an algorithm that makes a guess at K for k-means clustering. I'm looking for a data set that I could use as a comparison, or maybe a few data sets where the number of clusters is "known", so I could see how my algorithm is doing at guessing K.
I would first check the UCI repository for data sets:
http://archive.ics.uci.edu/ml/datasets.html?format=&task=clu&att=&area=&numAtt=&numIns=&type=&sort=nameUp&view=table
I believe there are some in there with the labels.
There are text clustering data sets that are frequently used in papers as baselines, such as 20newsgroups:
http://qwone.com/~jason/20Newsgroups/
Another great method (one that my thesis chair always advocated) is to construct your own small example data set. The best way to go about this is to start small, try something with only two or three variables that you can represent graphically, and then label the clusters yourself.
The added benefit of a small, homebrew data set is that you know the answers and it is great for debugging.
Since you are focused on k-means, have you considered using the various measures (Silhouette, Davies–Bouldin etc.) to find the optimal k?
In reality, the "optimal" k may not be a good choice. Most often, one does want to choose a much larger k, then analyze the resulting clusters / prototypes in more detail to build clusters out of multiple k-means partitions.
The Iris flower dataset, which clustering works nicely on, is a good one to start with.
Download here
