I am new to time series analysis. I am trying to find the trend of a short (one-day) temperature time series and have tried different approximations. The sampling interval is 2 minutes. The data were collected at different stations, and I will compare the different trends to see whether they are similar or not.
I am facing three challenges in doing this:
Q1 - How can I extract the pattern?
Q2 - How can I quantify the trend, given that I will compare trends belonging to two different places?
Q3 - When can I say two trends are similar or not similar?
Q1 - How can I extract the pattern?
You would start by performing time series analysis on both your data sets. You will need a statistical library to do the tests and comparisons.
If you can use Python, pandas is a good option.
In R, the forecast package is great. Start by running ets on both data sets.
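For example, a minimal sketch in R with the forecast package, assuming temp is a numeric vector holding one day of 2-minute readings (720 values; the object names are placeholders):

library(forecast)

# One day of 2-minute readings; no within-day seasonal period is assumed here
y <- ts(temp)

# Exponential smoothing (ETS); the fitted level/slope components describe the pattern
fit <- ets(y)
plot(fit)                      # inspect the estimated components

# Alternatively, a centred moving average smooths out short-term noise
smoothed <- ma(y, order = 31)  # ~1-hour window (31 x 2-minute steps)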
Q2 - How can I quantify the trend, given that I will compare trends belonging to two different places?
The idea behind quantifying trend is to start by looking for a (linear) trend line. All stats packages can assist with this. For example, if you assume a linear trend, you fit the line that minimizes the squared deviations from your data points; the slope of that line is then a single number that quantifies the trend.
The Wikipedia article on trend estimation is quite accessible.
Also keep in mind that a trend can be linear, exponential, or damped; different trend parameters can be tried to handle these cases.
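As a concrete sketch of the simplest (linear) case in R, with temp_a and temp_b hypothetical vectors for the two stations' series and t the elapsed time in hours:

fit_a <- lm(temp_a ~ t)   # least-squares trend line for station A
fit_b <- lm(temp_b ~ t)   # and for station B
coef(fit_a)[2]            # slope in degrees per hour: a single number quantifying the trend
coef(fit_b)[2]            # compare sign and magnitude with station A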
Q3 - When can I say two trends are similar or not similar?
Run ARIMA on both data sets. The basic idea here is to see whether the same set of parameters (which make up the ARIMA model) can describe both of your temperature time series. If you run auto.arima() from the forecast package in R, it will select the parameters p, d, q for your data, which is a great convenience.
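A rough sketch of what that could look like (temp_a and temp_b are placeholder names for your two series):

library(forecast)
fit_a <- auto.arima(ts(temp_a))   # selects p, d, q automatically
fit_b <- auto.arima(ts(temp_b))
fit_a
fit_b                             # compare the selected orders and coefficients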
Another thought is to perform a 2-sample t-test of both your series and check the p-value for significance. (Caveat: I am not a statistician, so I am not sure if there is any theory against doing this for time series.)
While researching I came across the Granger causality test, where the basic idea is to see if one time series can help in forecasting another. It seems very applicable to your case.
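In R the test is available, for example, as grangertest() in the lmtest package; a minimal sketch (the lag order of 2 is an arbitrary choice here):

library(lmtest)
grangertest(temp_b ~ temp_a, order = 2)  # does station A's past help forecast station B?
grangertest(temp_a ~ temp_b, order = 2)  # and the reverse direction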
So these are just a few things to get you started. Hope that helps.
I want to see if the following problem can be solved using neural networks: I have a database containing over 1000 basketball events, where the total score has been recorded every second from minute 5 to minute 20, and where the games are all from the same league. This means the events occur over different time periods. The data are then interpolated so that the time difference between consecutive timesteps is constant, giving exactly 300 points between minute 5 and minute 20. This can be seen here:
[Plot of the time series.] The final goal is to have a model that can predict the y values from t = 15 to t = 20, using the y values between t = 5 and t = 15 as input. I want to train the model using the database containing the 1000 events. For this I tried the following network:
[Figures: input data vs. output data; neural network architecture.]
The input data used to train the neural network would have shape (1000, 200), and the output data would have shape (1000, 100).
Can someone guide me in the right direction and give some feedback on whether this is a correct approach for such a problem? I have found some previous time series problems, but all of them were based on one large time series, whereas in this situation I have 1000 different time series.
There are a couple of different ways to approach this problem. Based on the comments, this sounds like univariate, multi-step time series forecasting, albeit across many different events.
First, to clarify: most deep learning time series models/frameworks take data in the format (batch_size, n_historical_steps, n_feature_time_series) and output results in the format (batch_size, n_forecasted_steps, n_targets).
Since this is a univariate forecasting problem, n_feature_time_series would be one (unless I'm missing something). n_historical_steps is a hyperparameter we often optimize, since the entire temporal history is frequently not relevant to forecasting the next n steps; you might want to tune that as well. However, let's say you choose to use the full temporal history: then the input would look like (batch_size, 200, 1), and following this approach the output shape would be (batch_size, 100, 1). You could then use a batch_size of 1000 to feed in all the different events at once (assuming, of course, that you have separate validation/test sets), which gives an input shape of (1000, 200, 1). This is how you would likely do it if you were going to use models like DA-RNN, an LSTM, a vanilla Transformer, etc.
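To make the shapes concrete, here is a minimal sketch with the keras package in R; this is a plain LSTM, not any of the specific models named above, and x_train / y_train are hypothetical arrays shaped as described:

library(keras)

# x_train: array with dim c(1000, 200, 1); y_train: matrix with dim c(1000, 100)
model <- keras_model_sequential() %>%
  layer_lstm(units = 64, input_shape = c(200, 1)) %>%  # 200 historical steps, 1 feature
  layer_dense(units = 100)                             # 100 forecasted steps

model %>% compile(loss = "mse", optimizer = "adam")
model %>% fit(x_train, y_train, epochs = 50, batch_size = 32, validation_split = 0.2)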
There are other models, though, that create a learnable series embedding ID, such as the Convolutional Transformer or DeepAR. This is essentially a unique identifier associated with each event, and the model learns to forecast all the series in the same pass.
I have models of both varieties implemented that you could use in Flow Forecast, though I don't have any detailed tutorials on this type of problem at the moment. I will also say, in all honesty, that given you only have 1000 basketball events (each with only 300 univariate time steps) and the many variables in play in basketball, I doubt you will be able to accomplish this task with any real degree of accuracy. I would guess you probably need at least 20k+ basketball events to forecast this type of problem well, with deep learning at least.
I'm working on an idea for my Master's thesis topic.
I have a dataset with millions of records describing on-street parking sensors.
Data I have:
- vehicle present on a particular sensor (true or false)
It's normal that there are a few parking events in a row with False values and different durations.
- arrival time and departure time (month, day, hour, minute, and even second)
- duration in minutes
And a few more columns, but I don't have any idea how to represent this "continuity of time" in my analysis, and how to reflect it in predictions for a certain future time, based on when the parking space was usually free or occupied.
Any ideas?
You can take two approaches:
If you want to predict whether a particular space will be occupied or not, and you take into account the order of the events (time), this looks like a time series problem. You should start by trying simple time series methods like moving averages or ARIMA models. There are more sophisticated methods that capture both long- and short-term relationships, such as recurrent neural networks, especially LSTMs (long short-term memory networks), which have shown good performance on time series problems.
You can take all the variables into account and use them to train a clustering algorithm like k-means, or a classifier such as an SVM.
As you pointed out:
And a few more columns, but I don't have any idea how to represent this "continuity of time" in my analysis, and how to reflect it in predictions for a certain future time, based on when the parking space was usually free or occupied.
I recommend treating this as a time series problem.
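One hedged way to start in R with the forecast package; here occ is a hypothetical hourly occupancy rate (share of occupied sensors, or a single space's occupancy aggregated per hour):

library(forecast)
occ_ts <- ts(occ, frequency = 24)  # hourly data with a daily seasonal cycle
fit <- auto.arima(occ_ts)          # or start with a simple moving average: ma(occ_ts, order = 24)
fc <- forecast(fit, h = 24)        # predicted occupancy for the next 24 hours
plot(fc)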
Time series modelling will be the better option for this kind of problem. As you said, you want to predict a binary output at different time intervals, i.e., whether the parking slot will be occupied during a particular time interval or not. You can use an LSTM for this purpose.
Time series is definitely an option here. If you are really going with LSTMs, why not look into Transformers and take advantage of the attention mechanism for time series forecasting? I don't know them thoroughly yet, just have a vague idea of how they work and of their performance benefits over RNNs and LSTMs.
I have about one year of thermostat recordings, where every hour I get the mean temperature in a household. However, a lot of data is missing, because the thermostat was only installed in the middle of the year, or was taken out for a week, etc. Many of these thermostat series are really similar, and what I want to do is impute the missing data using similar time series.
So let's say house A only started recording in July, but from then on it is very similar to household B; I would then want to use the information from household B to predict what the data would have been before July in house A.
I was thinking about training a recurrent neural network to do this for me, but I am not sure what is out there, and when I search for papers they almost exclusively work on datasets spanning multiple years and impute the data using previous years. I do not have that data, so it is not an option.
Does anyone have a clue how to tackle this problem, or a reference that solves a similar problem?
As I understand it you want to impute the data using cross-sectional data rather than time series information.
There are actually quite a lot of imputation packages in R that can do this for you (if you are using R).
You'd need equally spaced data: one value per hour, and if a value is not present it needs to be NA. Ideally you then have multiple time series of equal length.
Then you merge these time series according to the time stamp / hour.
Afterwards you can apply an imputation package such as mice, missForest, or imputeR with basically one line of code. These packages will use the correlations between the different time series to estimate the missing values.
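A minimal sketch with mice, assuming wide is a hypothetical data frame with one column per household, one row per hour, and NA for missing values:

library(mice)
imp <- mice(wide, m = 5, method = "pmm")  # predictive mean matching across households
completed <- complete(imp, 1)             # one completed data set
# or, with missForest: missForest::missForest(wide)$ximp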
This question might sound trivial to some, but I just got interested in time series analysis and have been reading up on it over the last couple of days. However, I have yet to understand how to identify stationary versus non-stationary time series data. I generated some two-dimensional time series data using a tool I found. Plotting it out, I get something like the image below.
Looking at the plot, I think it shows some seasonality (with the spike in the middle), and I would say it's not stationary. However, when I run the stationarity test described on Machine Learning Mastery, it passes (the test says it is stationary). Now I'm confused; maybe I didn't understand what seasons and trends mean in time series data. Am I wrong in thinking that the spikes hint at seasonality?
Judging from the plot, your data looks like white noise, which is a type of stationary random data. A stationary time series has constant mean (in your case zero), variance, autocorrelation, etc. across time.
Seasonality refers to regular patterns that recur at fixed calendar intervals, for example quarterly, monthly, or daily patterns. Accordingly, isolated large spikes in a plot do not usually indicate seasonality.
In contrast, the following time series (using R) exhibits an upward trend, monthly seasonality, and increasing variance:
plot(AirPassengers)
In sum, the AirPassengers time series is not stationary.
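If you want to back up the visual impression with formal tests, adf.test() and kpss.test() in the tseries package are one option; note that their null hypotheses point in opposite directions:

library(tseries)
set.seed(1)
wn <- ts(rnorm(200))      # simulated white noise, similar to your plot
adf.test(wn)              # ADF: null hypothesis is a unit root (non-stationarity)
kpss.test(wn)             # KPSS: null hypothesis is stationarity
kpss.test(AirPassengers)  # strong trend: expect the stationarity null to be rejected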
I have a dataset with presence-absence species data measured at several different sites. The data were collected over a span of 10 years. At many sites, measurements were taken several times within a year. The frequency of measurements is not constant, nor were all sites measured several times; some were measured only once.
I know that a classical Detrended Correspondence Analysis is not helpful here, since it does not consider the cofactor time. Is there any way to include all sampling points, or any other correspondence analysis method that would be useful here?
Thanks a lot for any help!
If you want to estimate the time effect or partial it out, yes, but not in vegan. Canoco has detrended canonical correspondence analysis (DCCA), the constrained form of DCA, but vegan doesn't and is unlikely to ever have it.
There's nothing stopping you from throwing all samples into a DCA; you just can't remove the temporal effects.
Alternatively, choose a suitable dissimilarity coefficient and use NMDS via vegan's wrapper metaMDS(). This will give you a DCA-like analysis. If you want to account for the temporal effects, then, using the same dissimilarity, look at dbrda() as one option.
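A rough sketch in vegan, assuming spp is your site-by-species presence-absence matrix and env a data frame with (at least) a numeric year column; the object and column names are placeholders:

library(vegan)
dca  <- decorana(spp)                                    # plain DCA, ignoring time
d    <- vegdist(spp, method = "jaccard", binary = TRUE)  # presence-absence dissimilarity
nmds <- metaMDS(d, k = 2)                                # DCA-like unconstrained ordination
mod  <- dbrda(d ~ year, data = env)                      # constrain on sampling time
# To partial time out instead, add Condition(year) to the right-hand side of the formula.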