Learning from time-series data to predict time-series (not forecasting)

I have a number of datasets, each containing a number of input variables (let's say 3) as time series and an output variable, also a time series, all over the same time period.
Each of these series has the same number of data points (say 1000 * 10 = 10,000 points if 10 seconds of data were gathered at 1000 Hz).
I want to learn from this data and, given a new dataset with three input time series, predict the time series for the output variable.
I will write the problem below in some informal notation. I will avoid terms like features, samples and targets, because I haven't formulated the problem for any particular algorithm yet and don't want to speculate about what will play which role.
Datasets to learn from look like this:
dataset1:{Inputs=(timSeries1,timSeries2,timSeries3), Output=(timSeriesOut)}
dataset2:{Inputs=(timSeries1,timSeries2,timSeries3), Output=(timSeriesOut)}
dataset3:{Inputs=(timSeries1,timSeries2,timSeries3), Output=(timSeriesOut)}
...
datasetn:{Inputs=(timSeries1,timSeries2,timSeries3), Output=(timSeriesOut)}
Now, given a new (timSeries1, timSeries2, timSeries3) I want to predict (timSeriesOut)
datasetPredict:{Inputs=(timSeries1,timSeries2,timSeries3), Output = ?}
What technique should I use, and how should the problem be formulated? Should I just break it into a separate learning problem for each timestamp, with three features and one target (either for that timestamp or the next one)?
Thank you all!
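One concrete way to set up the per-timestamp formulation asked about at the end of the question is sketched below. This is only an illustration, not a recommendation from the thread: the placeholder data, the window length and scikit-learn's RandomForestRegressor are all assumptions, and any regressor could stand in.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
WINDOW = 5  # number of past timestamps used as context (illustrative choice)

# Placeholder training data: n = 8 datasets, each with inputs of shape (T, 3)
# and an output of shape (T,); replace with the real recordings.
T = 1000
train_inputs = [rng.normal(size=(T, 3)) for _ in range(8)]
train_outputs = [rng.normal(size=T) for _ in range(8)]

def to_rows(inputs, output, window=WINDOW):
    # Turn one dataset into per-timestamp rows: the window of past input
    # values becomes the features, the output at the window's end the target.
    X, y = [], []
    for t in range(window, inputs.shape[0]):
        X.append(inputs[t - window:t].ravel())  # 3 * window features
        y.append(output[t])
    return np.array(X), np.array(y)

rows = [to_rows(X, y) for X, y in zip(train_inputs, train_outputs)]
X_train = np.vstack([r[0] for r in rows])
y_train = np.concatenate([r[1] for r in rows])

model = RandomForestRegressor(n_estimators=100, n_jobs=-1)
model.fit(X_train, y_train)

# Predict the output series for a new dataset of shape (T, 3); the first
# WINDOW timestamps have no full history and are skipped here.
new_inputs = rng.normal(size=(T, 3))
X_query, _ = to_rows(new_inputs, np.zeros(T))
predicted_output = model.predict(X_query)
print(predicted_output.shape)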

Related

forecasting with TFT - forecast results are very flat

I'm using the TFT (Temporal Fusion Transformer) from pytorch-forecasting for the first time for my forecasting project, and I'm quite confused by a few things:
1. The forecasted time series is very flat. What are the possible reasons for this? Is it that my training data set is too short? Its length is between 1 and 2 times max_encoder_length, and the ratio of max_encoder_length to max_prediction_length is 4:1.
2. How does optimize_hyperparameters work? Will it update the model with the best parameters after being run, or should I manually update the model with the output of the process? I'm asking because after I run it nothing seems to have happened; the model remains unchanged.
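On the second point, my reading (worth verifying against the pytorch-forecasting documentation for your installed version) is that the search only runs an Optuna study and returns it, so re-using its output has to be done manually. A rough sketch of what that could look like; the dataloader and dataset names are assumed from a typical training setup, and the import path and parameter handling should be checked against your version.

# Sketch only: assumes optimize_hyperparameters returns an Optuna study and does
# not modify any existing model object.
from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import (
    optimize_hyperparameters,
)

# train_dataloader, val_dataloader and `training` (the TimeSeriesDataSet)
# are assumed to exist from the earlier training setup.
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_tft",  # where trial checkpoints are written
    n_trials=30,
    max_epochs=20,
)

best_params = dict(study.best_trial.params)
best_params.pop("gradient_clip_val", None)  # trainer-level setting, not a model kwarg
print(best_params)

# Rebuild the model with the winning parameters; the model trained before the
# search is left untouched by optimize_hyperparameters itself.
tft = TemporalFusionTransformer.from_dataset(training, **best_params)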

Clustering "access-time" data sequences

I have many sequences of data looking like this:
s1 = t11, t12, ..., t1m_1
s2 = t21, t22, ..., t2m_2
...
si = ti1, ti2, ..., tim_i
si denotes the i-th sequence, and tij means that sequence si was accessed at time tij.
Each sequence has a different length (m_1 may not equal m_2), and each sequence's data lists the times ti1, ti2, ..., tim_i at which si was accessed.
My goal is to cluster the similar access-time sequences.
I'm not sure whether I can translate this problem to a time-series problem.
To my understanding, in typical time-series data each point is the value observed at that time, like stock data, whereas my sequences' values are the times at which the sequence was accessed.
Even if it can be translated into a time-series problem, there is another issue: the access times are very sparse (a sequence may be accessed at 1s, 1000s, 2000s), so a dense time-series representation would be very large, and I don't think I could cluster it with an algorithm like DTW, whose time complexity would be too high.
As you pointed out, DTW would be quite slow, since comparing the first two series takes k * m_1 * m_2 operations.
To avoid this, and to more easily compare your sequences, you might somehow hammer them into the same format (thereby also losing information).
Here are some ideas:
1. Differentiate to obtain times-between-accesses, and build histograms with fixed bins across all data.
2. Count the number of accesses during each minute of the week (and divide by the number of times that minute-of-week appears in each series). Adapt to the timescales of interest.
3. Count the "number of accesses up until now". So, instead of having data points only when an access was made ("sparse"), you'd get a data point for every timestamp ("dense") showing the number of accesses up to the current one.
#3 would be similar to an "integral image" in computer vision. After this, new summarization techniques open up, like moving averages, or even direct comparison (if the recordings happen in parallel).
In order to pick a more useful representation, you need to think about what is meaningful in your application.
After you get a uniform-length representation, you can use cheaper similarity measures. A typical one is cosine similarity (but be sure to normalize first).
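To make ideas 1 and 3 and the cosine-similarity step a bit more concrete, here is a rough Python sketch; the bin edges, the time grid and the two example sequences are made-up values, not part of the original answer.

import numpy as np

def gap_histogram(access_times, bin_edges):
    # Idea 1: histogram of times-between-accesses, normalised to sum to 1.
    gaps = np.diff(np.sort(access_times))
    hist, _ = np.histogram(gaps, bins=bin_edges)
    total = hist.sum()
    return hist / total if total else hist.astype(float)

def cumulative_counts(access_times, grid):
    # Idea 3: number of accesses up to each point of a fixed, shared time grid.
    return np.searchsorted(np.sort(access_times), grid, side="right").astype(float)

def cosine_similarity(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na and nb else 0.0

# Made-up example sequences of access timestamps (in seconds).
s1 = np.array([1, 5, 9, 1000, 1004, 2000])
s2 = np.array([2, 7, 950, 1010, 1990, 2005])

bin_edges = [0, 1, 10, 100, 1000, 10000]  # fixed bins shared by all series
grid = np.linspace(0, 2100, 50)           # shared, evenly spaced time grid

print(cosine_similarity(gap_histogram(s1, bin_edges), gap_histogram(s2, bin_edges)))
print(cosine_similarity(cumulative_counts(s1, grid), cumulative_counts(s2, grid)))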

Interpreting Seasonality in Time Series

I have a discrete time series covering 49 quarters between January 2007 and March 2019, which I am trying to analyse. Before undertaking various forms of analysis I wanted to check for the existence of seasonality, and I have tried two methods for this in R. In the first I used the WO test (Webel and Ollech) from the seastests package, which informed me that the data does not display seasonality.
library(seastests)
summary(wo(tt))
Test used: WO
Test statistic: 0
P-value: 0.8174965 0.5785041 0.2495668
The WO - test does not identify seasonality
However, I wanted to double-check, so I also used the decompose function, whose output would appear to suggest a seasonal component. Can anyone advise:
1. Am I reading the decomposed data correctly?
2. Why is there such disagreement between the decompose and seastests results?
The decompose function is a simple function that basically estimates the (moving) period average. The volatility of your time series increases strongly in the last years. Thus the averages may pick up on some random increases. Also, the seasonal component that you obtain using the decompose() function will basically always look seasonal.
set.seed(1234)
x <- ts(rnorm(80), frequency = 4)  # white noise with quarterly frequency: no true seasonality
seastests::wo(x)                   # the WO test finds no seasonality
plot(decompose(x))                 # yet the decomposition still shows a seasonal-looking component
Therefore, seasonality tests are preferable for assessing whether a time series really is seasonal.
Still, if you have information that the data generating process has changed, you may want to use the test on the last few years of observations.

Are data dependencies relevant when preparing data for a neural network?

Data: I have N rows of data of the form (x, y, z), where logically f(x, y) = z, that is, z depends on x and y; in my case the triple is (setting1, setting2, signal). Different x's and y's can lead to the same z, but those z's wouldn't mean the same thing.
There are 30 unique setting1 values, 30 unique setting2 values, and one signal for each (setting1, setting2) pairing, hence 900 signal values.
Data set: These [900, 3] data points are considered one data set. I have many samples of these data sets.
I want to do classification based on these data sets, but I need to flatten the data (turn each data set into one row). If I flatten it, I will duplicate all the setting values (setting1 and setting2) 30 times each, i.e. I will have a row with 3 x 900 = 2700 columns.
Question:
Is it correct to keep all the duplicated setting1, setting2 values in the data set, or should I remove them and include the unique values only once, i.e. have a row with 30 + 30 + 900 columns? I'm worried that the logical dependency of the signal on the settings will be lost that way. Is this relevant? Or shouldn't I bother including the settings at all (e.g. due to correlations)?
If I understand correctly, you are training a NN on a sample where each observation is [900, 3].
You are flattening it and getting an input layer of size 3 * 900.
Some of those values are the result of a function of others.
It matters which function, because if it is a linear function, the NN might not work:
From here:
"If inputs are linearly dependent then you are in effect introducing
the same variable as multiple inputs. By doing so you've introduced a
new problem for the network, finding the dependency so that the
duplicated inputs are treated as a single input and a single new
dimension in the data. For some dependencies, finding appropriate
weights for the duplicate inputs is not possible."
Also, if you add dependent variables you risk the NN being biased towards said variables.
E.g. if you are running LMS on [x1, x2, x3, average(x1, x2)] to predict y, you are effectively assigning a higher weight to the x1 and x2 variables.
Unless you have a reason to believe that those variables should carry more weight, don't include a function of them as an extra input.
I was not able to find a link to support this, but my intuition is that you might want to shrink your input layer further, beyond just omitting the dependent values:
From Professor A. Ng's ML course I remember that the input should be the minimum set of values that is 'reasonable' for making the prediction.
'Reasonable' is vague, but I understand it like this: if you are predicting the price of a house, include footage, area quality and distance from a major hub; do not include the average sunspot activity during the open-house day, even though you have that data.
I would remove the duplicates. I would also look for any other data that can be omitted, and maybe run PCA over the full set of N x [900, 3] data points.
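As a rough illustration of both suggestions (keeping each unique setting value once and optionally compressing further with PCA), here is a sketch in Python; the placeholder data, the fixed ordering of the 30 x 30 setting grid and the 95% retained-variance threshold are assumptions for the example.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Placeholder data: 50 data sets, each a [900, 3] array whose rows enumerate a
# fixed 30 x 30 grid of (setting1, setting2) pairs plus the measured signal.
s1_vals = np.linspace(0.0, 1.0, 30)
s2_vals = np.linspace(0.0, 1.0, 30)
grid = np.array([(a, b) for a in s1_vals for b in s2_vals])            # (900, 2)
datasets = np.stack(
    [np.column_stack([grid, rng.normal(size=900)]) for _ in range(50)]
)                                                                      # (50, 900, 3)

def compact_row(ds):
    # Keep each unique setting value once instead of repeating it per row:
    # 30 + 30 + 900 = 960 columns rather than the fully flattened 2700.
    setting1 = np.unique(ds[:, 0])  # 30 values
    setting2 = np.unique(ds[:, 1])  # 30 values
    signal = ds[:, 2]               # 900 values, ordered by the setting grid
    return np.concatenate([setting1, setting2, signal])

X = np.stack([compact_row(ds) for ds in datasets])  # (50, 960)

# Optional further reduction before the classifier; 95% retained variance is arbitrary.
X_reduced = PCA(n_components=0.95, svd_solver="full").fit_transform(X)
print(X.shape, X_reduced.shape)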

How to predict several unlabelled attributes at once using WEKA and NaiveBayes?

I have a binary array of 96 elements; it could look something like this:
[false, true, true, false, true, true, false, false, false, true.....]
Each element represents a 15-minute time interval, starting from 00:00. The first element is 00:15, the second is 00:30, the third 00:45, etc. The boolean tells whether a house was occupied in that time interval.
I want to train a classifier so that it can predict the rest of a day when only part of the day is known. Let's say I have observations for the past 100 days, and I only know the first 20 elements of the current day.
How can I use classification to predict the rest of the day?
I tried creating an ARFF file that looks like this:
@RELATION OccupancyDetection
@ATTRIBUTE Slot1 {true, false}
@ATTRIBUTE Slot2 {true, false}
@ATTRIBUTE Slot3 {true, false}
...
@ATTRIBUTE Slot96 {true, false}
@DATA
false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,true,true,true,true,true,true,true,false,true,true,true,false,true,false,false,false,false,false,false,true,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false
false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,true,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,true,true,true,true,true,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,true,true,true,true,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false,false
.....
And I ran a Naive Bayes classification on it. The problem is that the results only show the performance for a single attribute (the last one, for instance).
A "real" sample taken on a given day might look like this:
true,true,true,true,true,true,true,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?
How can I predict all the unlabelled attributes at once?
I made this based on the WekaManual-3-7-11, and it works, but only for a single attribute:
// ... (classifier is built and trained earlier; elided in the original post)
Instances unlabeled = DataSource.read("testWEKA1.arff");
// predict only the last attribute
unlabeled.setClassIndex(unlabeled.numAttributes() - 1);
// create copy
Instances labeled = new Instances(unlabeled);
// label instances
for (int i = 0; i < unlabeled.numInstances(); i++) {
    double clsLabel = classifier.classifyInstance(unlabeled.instance(i));
    labeled.instance(i).setClassValue(clsLabel);
}
// write the labelled copy once, after all instances have been classified
DataSink.write("labeled.arff", labeled);
Sorry, but I don't believe that you can predict multiple attributes using Naive Bayes in Weka.
What you could do as an alternative, if running Weka through Java code, is loop through all of the attributes that need to be filled. This could be done by building classifiers with n attributes and filling in the next blank until all of the missing data is entered.
It also appears that what you have is time-based as well. Perhaps if the model was somewhat restructured, it may be able to all fit within a single model. For example, you could have attributes for prediction time, day of week and presence over the last few hours as well as attributes that describe historical presence in the house. It might be going over the top for your problem, but could also eliminate the need for multiple classifiers.
Hope this helps!
Update!
As per your request, I have taken a couple of minutes to think about the problem at hand. The thing about this time-based prediction is that you want to be able to predict the rest of the day, and the amount of data available for your classifier is dynamic depending on the time of day. This would mean that, given the current structure, you would need a classifier to predict values for each 15 minute time-slot, where earlier timeslots contain far less input data than the later timeslots.
If it is possible, you could instead use a different approach where you could use an equal amount of historical information for each time slot and possibly share the same classifier for all cases. One possible set of information could be as outlined below:
The Time Slot to be estimated
The Day of Week
The Previous hour or two of activity
Other Activity for the previous 24 Hours
Historical Information about the general timeslot
If you obtain your information on a daily basis, it may be possible to quantify each of these factors and then use them to predict any time slot. Then, if you wanted to predict a whole day, you could keep feeding it the previous predictions until you have completed the predictions for the day.
I have tackled a similar problem, predicting time of arrival based on similar factors (previous behaviour, public holidays, day of week, etc.), and the estimates were usually reasonable, though only as accurate as you could expect for a human process.
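To illustrate the feed-forward idea from the update outside WEKA, here is a rough Python sketch in which scikit-learn stands in for the WEKA classifier; the feature set (slot index, day of week, the previous few slots), the placeholder training data and the history length are all assumptions. One model predicts the next 15-minute slot, and its predictions are fed back in to roll out the rest of the day.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

HISTORY = 8  # number of previous 15-minute slots used as features (arbitrary choice)

def make_rows(days):
    # days: (n_days, 96) boolean occupancy matrix -> one training row per slot
    # that has a full HISTORY of earlier slots on the same day.
    X, y = [], []
    for day_idx, day in enumerate(days):
        dow = day_idx % 7  # placeholder day-of-week; use real dates if available
        for slot in range(HISTORY, 96):
            X.append([slot, dow, *day[slot - HISTORY:slot]])
            y.append(day[slot])
    return np.array(X, dtype=float), np.array(y, dtype=int)

# Placeholder training data: 100 past days of 96 boolean occupancy slots.
rng = np.random.default_rng(0)
history_days = rng.random((100, 96)) < 0.3

# Any classifier (including a Bayes variant) could be swapped in here.
clf = RandomForestClassifier(n_estimators=100).fit(*make_rows(history_days))

# Roll out the rest of the current day: the first 20 slots are observed,
# every later slot is predicted and then fed back in as context.
today = np.zeros(96, dtype=int)
today[:20] = (rng.random(20) < 0.3).astype(int)
dow_today = 3  # placeholder
for slot in range(20, 96):
    features = np.array([[slot, dow_today, *today[slot - HISTORY:slot]]], dtype=float)
    today[slot] = clf.predict(features)[0]
print(today)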
I can't tell if there's something wrong with your arff file.
However, here's one idea: you can apply a NominalToBinary unsupervised attribute filter to make sure that the attributes Slot1-Slot96 are recognized as binary.
There are two frameworks which provide multi-label learning and work on top of WEKA:
MULAN: http://mulan.sourceforge.net/
MEKA: http://meka.sourceforge.net/
I have only tried MULAN, and it works very well. To get the latest release you need to clone their git repository and build the project.
