What is the best way to detect seasonality in a signal (time series) in Python? I want to provide the algorithm with the signal and the output should be a 1 indicating seasonality exists and 0 indicating it does not exist.
A simple seasonality detection function I wrote is below. I hope it helps for basic usage, but I would not suggest it for complicated problems:
import numpy as np

def check_repetition(arr, limit, index_start, index_end):
    """
    Checks for repetition (seasonality) in the data so that we can apply de-noising.
    Returns the detected period length, or 0 if none is found.
    """
    length = index_start
    # length is the period length we want to check
    # check how many periods there are with that kind of length
    for i in range(0, int(len(arr) / length)):
        # difference between two consecutive segments of the candidate period
        seg_a = np.asarray(arr[i:int(i + length)], dtype=float)
        seg_b = np.asarray(arr[int(i + length):int(i + 2 * length)], dtype=float)
        n = min(len(seg_a), len(seg_b))          # guard against a short final segment
        condition = np.sum(np.abs(seg_a[:n] - seg_b[:n]))
        # if the difference between the seasons is not smaller than the limit
        if condition >= limit:
            # check that the next length to try is still within bounds
            if length + 1 <= index_end:
                # print("Checked for length:" + str(length))
                return check_repetition(arr, limit, length + 1, index_end)
            # if not, no more computations needed
            else:
                return 0
        # if it passed the for loop for one full cycle of i, return the number of entries per cycle
        if i == int(len(arr) / length) - 2:
            return length
    # if nothing worked
    return 0
This returns the seasonality (period) length, or 0 if none is found. You can play with the search direction, starting from a period of half the array length down to a small value, or the opposite. The limit parameter also provides some noise tolerance: it bounds how much difference between repeated segments is accepted as noise.
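To get the 1/0 output asked for in the question, you can simply threshold the returned period. A minimal usage sketch (the signal and the limit value below are made up for illustration):

import numpy as np

# a noisy sine with a period of 50 samples (purely illustrative data)
signal = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.05 * np.random.randn(500)

# search periods from 2 samples up to half the signal length;
# limit is the noise budget allowed between repeated segments
period = check_repetition(signal, limit=5.0, index_start=2, index_end=len(signal) // 2)

has_seasonality = 1 if period > 0 else 0   # 1 = seasonality detected, 0 = not detected
print(period, has_seasonality)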
Given an input signal x (e.g. a voltage, sampled a thousand times per second, a couple of minutes long), I'd like to calculate e.g.
/ this is not q
y[3] = -3*x[0] - x[1] + x[2] + 3*x[3]
y[4] = -3*x[1] - x[2] + x[3] + 3*x[4]
. . .
I'm aiming for a variable window length and variable weight coefficients. How can I do it in q? I'm aware of mavg, of signal processing in q, and of the moving-sum q idiom.
In the DSP world this is called applying a filter kernel by convolution. The weight coefficients define the kernel, which makes a high- or low-pass filter. The example above calculates the slope of the last four points by fitting a straight line via the least squares method.
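As a quick cross-check of the kernel idea outside q, convolving with the same coefficients in NumPy reproduces the y values from the question (a sketch with made-up sample data):

import numpy as np

x = np.array([10.0, 10.4, 9.9, 10.7, 11.1, 10.8])   # made-up sample data
kernel = np.array([3.0, 1.0, -1.0, -3.0])           # the weight coefficients above

# 'valid' keeps only windows that fully overlap the signal,
# i.e. y[3], y[4], ... from the question
y = np.convolve(x, kernel, mode='valid')
print(y)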
Something like this would work for parameterisable coefficients:
q)x:10+sums -1+1000?2f
q)f:{sum x*til[count x]xprev\:y}
q)f[3 1 -1 -3] x
0n 0n 0n -2.385585 1.423811 2.771659 2.065391 -0.951051 -1.323334 -0.8614857 ..
Specific cases can be made a bit faster (running 0 xprev is not ideal):
q)g:{prev[deltas x]+3*x-3 xprev x}
q)g[x]~f[3 1 -1 -3]x
1b
q)\t:100000 f[3 1 -1 -3] x
4612
q)\t:100000 g x
1791
There's a kx white paper of signal processing in q if this area interests you: https://code.kx.com/q/wp/signal-processing/
This may be a bit old but I thought I'd weigh in. There is a paper I wrote last year on signal processing that may be of some value. Working purely within kdb+, depending on the signal sizes you are using, you will see much better performance with an FFT-based convolution between the kernel/window and the signal.
However, I've only written up a simple radix-2 FFT, although in my GitHub repo I do have untested work on a more flexible Bluestein algorithm, which will allow for more variable signal lengths: https://github.com/callumjbiggs/q-signals/blob/master/signal.q
If you wish to go down the path of performing a full manual convolution by a moving sum, then the best method would be to break it up into blocks equal to the kernel/window size (this is based on some work Arthur W. did many years ago):
q)vec:10000?100.0
q)weights:30?1.0
q)wsize:count weights
q)(weights$(((wsize-1)#0.0),vec)til[wsize]+) each til count vec
32.5931 75.54583 100.4159 124.0514 105.3138 117.532 179.2236 200.5387 232.168.
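For readers more comfortable outside q, the FFT-based route is easy to sanity-check in Python; this sketch (illustrative only, with random data) confirms that an FFT-based convolution matches the direct weighted moving sum:

import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
vec = rng.uniform(0, 100, 10000)
kernel = np.array([3.0, 1.0, -1.0, -3.0])

direct = np.convolve(vec, kernel, mode='valid')    # direct convolution (moving weighted sum)
via_fft = fftconvolve(vec, kernel, mode='valid')   # FFT-based convolution
print(np.allclose(direct, via_fft))                # True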
If your input list is not big, then you could use the technique mentioned here:
https://code.kx.com/q/cookbook/programming-idioms/#how-do-i-apply-a-function-to-a-sequence-sliding-window
That uses the 'scan' adverb. It creates multiple intermediate lists, which might be inefficient for big lists.
Another solution using scan is:
q)f:{sum y*next\[z;x]} / x-input list, y-weights, z-window size-1
q)f[x;-3 -1 1 3;3]
This function also creates multiple intermediate lists, so again it might not be very efficient for big lists.
Another option is to use indices to fetch the target items from the input list and perform the calculation. This operates only on the input list.
q) f:{[l;w;i]sum w*l i+til 4} / w- weights, l- input list, i- current index
q) f[x;-3 -1 1 3]@'til count x
This is a very basic function (the window size of 4 is hard-coded); you can add more parameters to it as per your requirements.
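A rough Python analogue of this index-based approach, for comparison (names and data here are illustrative):

import numpy as np

x = np.array([10.0, 10.4, 9.9, 10.7, 11.1, 10.8])
w = np.array([-3.0, -1.0, 1.0, 3.0])

# gather each length-4 window by index, then take the weighted sum
windows = np.lib.stride_tricks.sliding_window_view(x, len(w))
y = windows @ w        # y[0] here corresponds to y[3] in the original formulation
print(y)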
I am working with a data-set of patient information and trying to calculate the Propensity Score from the data using MATLAB. After removing features with many missing values, I am still left with several missing (NaN) values.
I get errors due to these missing values, as the values of my cost function and gradient vector become NaN when I try to perform logistic regression using the following MATLAB code (from Andrew Ng's Coursera Machine Learning class):
[m, n] = size(X);
X = [ones(m, 1) X];
initial_theta = ones(n+1, 1);
[cost, grad] = costFunction(initial_theta, X, y);
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, cost] = ...
    fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
Note: sigmoid and costFunction are working functions I created for overall ease of use.
The calculations can be performed smoothly if I replace all NaN values with 1 or 0. However I am not sure if that is the best way to deal with this issue, and I was also wondering what replacement value I should pick (in general) to get the best results for performing logistic regression with missing data. Are there any benefits/drawbacks to using a particular number (0 or 1 or something else) for replacing the said missing values in my data?
Note: I have also normalized all feature values to be in the range of 0-1.
Any insight on this issue will be highly appreciated. Thank you
As pointed out earlier, this is a generic problem people deal with regardless of the programming platform. It is called "missing data imputation".
Forcing all missing values to a particular number certainly has drawbacks. Depending on the distribution of your data the effect can be drastic, for example setting all missing values to 1 in sparse binary data that has more zeroes than ones.
Fortunately, MATLAB has a function called knnimpute that estimates a missing data point by its closest neighbor.
From my experience, I often found knnimpute useful. However, it may fall short when there are too many missing sites, as in your data; the neighbors of a missing site may be incomplete as well, thereby leading to inaccurate estimation. Below is a workaround I figured out: it begins by imputing the least incomplete columns, (optionally) imposing a safe predefined distance for the neighbors. I hope this helps.
function data = dnnimpute(data,distCutoff,option,distMetric)
% data = dnnimpute(data,distCutoff,option,distMetric)
%
% Distance-based nearest neighbor imputation that imposes a distance
% cutoff to determine nearest neighbors, i.e., avoids those samples
% that are more distant than the distCutoff argument.
%
% Imputes missing data coded by "NaN" starting from the covariates
% (columns) with the least number of missing data. Then it continues by
% including more (complete) covariates in the calculation of pair-wise
% distances.
%
% option,
% 'median' - Median of the nearest neighboring values
% 'weighted' - Weighted average of the nearest neighboring values
% 'mean' - Unweighted average of the nearest neighboring values (default)
%
% distMetric,
% 'euclidean' - Euclidean distance (default)
% 'seuclidean' - Standardized Euclidean distance. Each coordinate
% difference between rows in X is scaled by dividing
% by the corresponding element of the standard
% deviation S=NANSTD(X). To specify another value for
% S, use D=pdist(X,'seuclidean',S).
% 'cityblock' - City Block distance
% 'minkowski' - Minkowski distance. The default exponent is 2. To
% specify a different exponent, use
% D = pdist(X,'minkowski',P), where the exponent P is
% a scalar positive value.
% 'chebychev' - Chebychev distance (maximum coordinate difference)
% 'mahalanobis' - Mahalanobis distance, using the sample covariance
% of X as computed by NANCOV. To compute the distance
% with a different covariance, use
% D = pdist(X,'mahalanobis',C), where the matrix C
% is symmetric and positive definite.
% 'cosine' - One minus the cosine of the included angle
% between observations (treated as vectors)
% 'correlation' - One minus the sample linear correlation between
% observations (treated as sequences of values).
% 'spearman' - One minus the sample Spearman's rank correlation
% between observations (treated as sequences of values).
% 'hamming' - Hamming distance, percentage of coordinates
% that differ
% 'jaccard' - One minus the Jaccard coefficient, the
% percentage of nonzero coordinates that differ
% function - A distance function specified using @, for
% example @DISTFUN.
%
if nargin < 3
    option = 'mean';
end
if nargin < 4
    distMetric = 'euclidean';
end
nanVals = isnan(data);
nanValsPerCov = sum(nanVals,1);
noNansCov = nanValsPerCov == 0;
if isempty(find(noNansCov, 1))
    [~,leastNans] = min(nanValsPerCov);
    noNansCov(leastNans) = true;
    first = data(nanVals(:,noNansCov),:);
    nanRows = find(nanVals(:,noNansCov)==true); i = 1;
    for row = first'
        data(nanRows(i),noNansCov) = mean(row(~isnan(row)));
        i = i+1;
    end
end
nSamples = size(data,1);
if nargin < 2
    dataNoNans = data(:,noNansCov);
    distances = pdist(dataNoNans);
    distCutoff = min(distances);
end
[stdCovMissDat,idxCovMissDat] = sort(nanValsPerCov,'ascend');
imputeCols = idxCovMissDat(stdCovMissDat>0);
% Impute starting from the cols (covariates) with the least number of
% missing data.
for c = reshape(imputeCols,1,length(imputeCols))
    imputeRows = 1:nSamples;
    imputeRows = imputeRows(nanVals(:,c));
    for r = reshape(imputeRows,1,length(imputeRows))
        % Calculate distances
        distR = inf(nSamples,1);
        %
        noNansCov_r = find(isnan(data(r,:))==0);
        noNansCov_r = noNansCov_r(sum(isnan(data(nanVals(:,c)'==false,~isnan(data(r,:)))),1)==0);
        %
        for i = find(nanVals(:,c)'==false)
            distR(i) = pdist([data(r,noNansCov_r); data(i,noNansCov_r)],distMetric);
        end
        tmp = min(distR(distR>0));
        % Impute the missing data at sample r of covariate c
        switch option
            case 'weighted'
                data(r,c) = (1./distR(distR<=max(distCutoff,tmp)))' * data(distR<=max(distCutoff,tmp),c) / sum(1./distR(distR<=max(distCutoff,tmp)));
            case 'median'
                data(r,c) = median(data(distR<=max(distCutoff,tmp),c),1);
            case 'mean'
                data(r,c) = mean(data(distR<=max(distCutoff,tmp),c),1);
        end
        % The missing data in sample r is imputed. Update the sample
        % indices of c which are imputed.
        nanVals(r,c) = false;
    end
    fprintf('%u/%u of the covariates are imputed.\n',find(c==imputeCols),length(imputeCols));
end
To deal with missing data you can use one of the following three options:
If there are not many instances with missing values, you can just delete the ones with missing values.
If you have many features and it is affordable to lose some information, delete the entire feature with missing values.
The best method is to fill in some value (mean, median) in place of the missing value. You can calculate the mean of the rest of the training examples for that feature and fill all the missing values with that mean. This works out pretty well, as the mean value stays within the distribution of your data (see the sketch below).
Note: When you replace the missing values with the mean, calculate the mean using only the training set. Also, store that value and use it to replace the missing values in the test set as well.
If you use 0 or 1 to replace all the missing values, the data may get skewed, so it is better to replace the missing values with the average of all the other values.
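A minimal sketch of the mean-imputation option in Python (the arrays here are made up; the point is that the column means come from the training set only and are reused on the test set):

import numpy as np

X_train = np.array([[1.0, np.nan],
                    [2.0, 4.0],
                    [3.0, 5.0]])
X_test = np.array([[np.nan, 6.0]])

col_means = np.nanmean(X_train, axis=0)       # per-feature means, training set only

def fill_nan(X, means):
    X = X.copy()
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = means[cols]               # replace each NaN with its column mean
    return X

X_train_filled = fill_nan(X_train, col_means)
X_test_filled = fill_nan(X_test, col_means)   # reuse the stored training means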
Algorithm question:
A binary array of 0s and 1s is given.
In one operation I can flip any array[index] of the array, i.e. 0->1 or 1->0.
The aim is to minimize the maximum length of contiguous 1s or 0s using at most k flips.
E.g. if the array is 11111 and k=1, the best is to make the array 11011,
and the minimized value of the maximum contiguous run of 1s or 0s is 2.
For 111110111111 and k=3 the answer is 2.
I tried brute force (trying various positions for the flips) but it is not efficient.
I think greedy might work, but I cannot figure out exactly how.
Can you please help me with an algorithm, O(n) or similar?
A solution could be devised by reading each bit in order and recording the size of each continuous group of 1s in a list A.
Once you are done filling A, you can follow the algorithm narrated by the pseudocode below:
result = N
for i = 1 to N:
    flips_needed = 0
    for a in A:
        flips_needed += <number of flips needed to make sure the largest group remaining in a is of size at most i>
    if k >= flips_needed:
        result = i
        break
return result
N is the number of bits in the entire initial sequence.
The algorithm above works by dividing the groups of 1s into runs of size at most i; splitting a group of length a requires floor(a / (i+1)) flips. Whenever doing that requires at most k flips in total, we have the result we are looking for, since i starts from 1 and goes up (i.e. the first time we find flips_needed <= k, we know the groups of 1s are as small as they can get).
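A short Python sketch of this idea (following the answer above in only splitting the runs of 1s):

def min_max_run(bits, k):
    # collect the lengths of the runs of consecutive 1s (the list A above)
    runs, n = [], 0
    for b in bits:
        if b == 1:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)

    for i in range(1, len(bits) + 1):      # candidate maximum run length, smallest first
        # splitting a run of length g into pieces of length <= i takes g // (i + 1) flips
        flips_needed = sum(g // (i + 1) for g in runs)
        if flips_needed <= k:
            return i
    return len(bits)

print(min_max_run([1, 1, 1, 1, 1], 1))                       # 2, matching the 11011 example
print(min_max_run([1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1], 3))  # 2, matching the second example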
We have a boolean variable X which is either true or false and alternates at each time step with probability p. I.e. if p is 0.2, X alternates once every 5 time steps on average. We also have a timeline and observations of the value of this variable at various non-uniformly sampled points in time.
How would one learn, from observations, the probability that X has alternated/changed value at time t+n, where t is the time at which X was observed and n is some number of time steps in the future, given that p is unknown and we only have observations of the value of X at previous times? Note that I count changing from true to false and back to true again as changing value twice.
I'm going to approach this problem as if it were on a test.
First, let's name the variables.
Bx is the value of the boolean variable after x opportunities to flip (and B0 is the initial state). P is the chance of changing to a different value at every opportunity.
Given that each flip opportunity is independent of the other flip opportunities (there is, for example, no minimum number of opportunities between flips), the math is extremely simple; since events are not affected by past events, we can consolidate them into a single computation, which works best when considering Bx not as a boolean value but as itself a probability.
Here is the domain of the computations we will use: Bx is a probability (with a value between 0 and 1 inclusive) representing the likelihood of truth. P is a probability (with a value between 0 and 1 inclusive) representing the likelihood of flipping at any given opportunity.
The probability of falseness, 1 - Bx, and the probability of not flipping, 1 - P, are probabilistic identities which should be quite intuitive.
Assuming these simple rules, the general probability of truth of the boolean value is given by the recursive formula Bx+1 = Bx*(1-P) + (1-Bx)*P.
Code (in C++, because it's my favorite language and you didn't tag one):
int max_opportunities = 8;     // Total number of chances to flip.
float flip_chance = 0.2;       // Probability of flipping each opportunity.
float probability_true = 1.0;  // Starting probability of truth.
                               // 1.0 is "definitely true" and 0.0 is
                               // "definitely false", but you can extend this
                               // to situations where the initial value is not
                               // certain (say, 0.8 = 80% probably true) and
                               // it will work just as well.

for (int opportunities = 0; opportunities < max_opportunities; ++opportunities)
{
    probability_true = probability_true * (1 - flip_chance) +
                       (1 - probability_true) * flip_chance;
}
Here is that code on ideone (the answer for P=0.2 and B0=1 and x=8 is B8=0.508398). As you would expect, given that the value becomes less and less predictable as more and more opportunities pass, the final probability will approach Bx=0.5. You will also observe oscillations between being more and less likely to be true if your chance of flipping is high (for instance, with P=0.8, the beginning of the sequence is B={1.0, 0.2, 0.68, 0.392, 0.5648, 0.46112, ...}).
For a more complete solution that will work for more complicated scenarios, consider using a stochastic matrix (page 7 has an example).
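As a minimal illustration of the stochastic-matrix approach in Python (assuming the same P=0.2 and a certain initial state), raising the 2x2 transition matrix to the n-th power gives the same answer as the loop above:

import numpy as np

P = 0.2
T = np.array([[1 - P, P],
              [P, 1 - P]])            # rows: current state, columns: next state
state0 = np.array([1.0, 0.0])         # start definitely "true"

state8 = state0 @ np.linalg.matrix_power(T, 8)
print(state8[0])                      # ~0.508398, matching B8 from the loop above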
I know this question has been asked ad nauseam but somehow I can't make it work properly. I created a single 440 Hz sine wave with unit amplitude. Now, after the FFT, the bin at 440 Hz has a distinct peak but the value just isn't right. I'd expect to see 0 dB since I'm dealing with a unit-amplitude sine wave. Instead, the power calculated is well above 0 dB. The formula I'm using is simply
for (int i = 0; i < N/2; i++)
{
    mag = sqrt((Real[i]*Real[i] + Img[i]*Img[i])/(N*0.54)); // 0.54 correction for a Hamming window
    Mag[i] = 10 * log(mag);
}
I should probably point out that the total energy in the time domain is equal to the energy in the frequency domain (Parseval's theorem), so I know that my FFT routine is fine.
Any help is much appreciated.
I've been struggling with this again for work. It seems that a lot of software routines/books are a bit sloppy about the normalization of the FFT.
The best summary I have is: energy needs to be conserved, which is Parseval's theorem. Also, when coding this in Python, you can easily lose an element and not know it. Note that numpy array slicing is not inclusive of the last element:
a = [1, 2, 3, 4, 5, 6]
a[1:-1]   # [2, 3, 4, 5] -- the last element is excluded
a[-1]     # 6
Here's my code to normalize the FFT properly:
# FFT normalization to conserve power
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal
import scipy.fftpack   # scipy.fftpack.fft is used below
sample_rate = 500.0e6
time_step = 1/sample_rate
carrier_freq = 100.0e6
# number of digital samples to simulate
num_samples = 2**18 # 262144
t_stop = num_samples*time_step
t = np.arange(0, t_stop, time_step)
# let the signal be a voltage waveform,
# so there is no zero padding
carrier_I = np.sin(2*np.pi*carrier_freq*t)
#######################################################
# FFT using Welch method
# windows = np.ones(nfft) - no windowing
# if windows = 'hamming', etc.. this function will
# normalize to an equivalent noise bandwidth (ENBW)
#######################################################
nfft = num_samples # fft size same as signal size
f,Pxx_den = scipy.signal.welch(carrier_I, fs = 1/time_step,\
window = np.ones(nfft),\
nperseg = nfft,\
scaling='density')
#######################################################
# FFT comparison
#######################################################
integration_time = nfft*time_step
power_time_domain = sum((np.abs(carrier_I)**2)*time_step)/integration_time
print('power time domain = %f' % power_time_domain)
# Take FFT. Note that the factor of 1/nfft is sometimes omitted in some
# references and software packages.
# By proving Parseval's theorem (conservation of energy) we can find out the
# proper normalization.
signal = carrier_I
xdft = scipy.fftpack.fft(signal, nfft)/nfft
# fft coefficients need to be scaled by fft size
# equivalent to scaling over frequency bins
# total power in frequency domain should equal total power in time domain
power_freq_domain = sum(np.abs(xdft)**2)
print('power frequency domain = %f' % power_freq_domain)
# Energy is conserved
# FFT symmetry
plt.figure(0)
plt.subplot(2,1,1)
plt.plot(np.abs(xdft)) # symmetric in amplitude
plt.title('magnitude')
plt.subplot(2,1,2)
plt.plot(np.angle(xdft)) # pi phase shift out of phase
plt.title('phase')
plt.show()
xdft_short = xdft[0:nfft//2+1] # take only the positive frequency terms; the other half is identical
# xdft[0] is the DC term
# xdft[nfft//2] is the Nyquist term; note that Python slicing does NOT
# include the last element, therefore we need to use 0:nfft//2+1 to have an array
# that runs from 0 to nfft//2
# xdft[nfft//2-x] = conjugate(xdft[nfft//2+x])
Pxx = (np.abs(xdft_short))**2 # power ~ voltage squared, power in each bin.
Pxx_density = Pxx / (sample_rate/nfft) # power is energy over -fs/2 to fs/2, with nfft bins
Pxx_density[1:-1] = 2*Pxx_density[1:-1] # conserve power since we threw away 1/2 the spectrum
# note that DC (0 frequency) and the Nyquist term appear only once; we don't double those.
# Note that Python slicing is not inclusive of the last element.
Pxx_density_dB = 10*np.log10(Pxx_density)
freq = np.linspace(0, sample_rate/2, nfft//2+1)
# frequency range of the fft spans from DC (0 Hz) to
# Nyquist (Fs/2).
# the resolution of the FFT is 1/t_stop
# dft of size nfft will give nfft points at frequencies
# (1/t_stop) to (nfft/2)*(1/t_stop)
plt.figure(1)
plt.plot(freq,Pxx_density_dB,'^')
plt.figure(1)
plt.plot(f, 10.0*np.log10(Pxx_den))
plt.xlabel('Freq (Hz)')
plt.ylabel('dBm/Hz')
plt.ylim([-200, 0])
plt.show()
Many common (but not all) FFT libraries scale the FFT result of a unit-amplitude sinusoid by the length of the FFT. This maintains Parseval's equality, since a longer sinusoid represents more total energy than a shorter one of the same amplitude.
If you don't want to scale by the FFT length when using one of these libraries, then divide by the length before computing the magnitude in dB.
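A quick Python check of that scaling (a sketch assuming a bin-centred tone and no window; with a Hamming window you would additionally divide by the window's coherent gain, sum(w)/N, and use 20*log10 for an amplitude in dB):

import numpy as np

fs = 44100
N = 44100                                  # 1 s of data, so 440 Hz falls exactly on a bin
t = np.arange(N) / float(fs)
x = np.sin(2 * np.pi * 440.0 * t)          # unit-amplitude 440 Hz sine

X = np.fft.rfft(x)
amp = 2.0 * np.abs(X) / N                  # amplitude spectrum (DC and Nyquist bins aside)
print(20 * np.log10(amp.max()))            # ~0 dB at the 440 Hz bin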
Normalization can be done in many different ways - depending on window, number of samples, etc.
Common trick: take the FFT of a known signal and normalize by the value of the peak. Say in the above example your peak is 123; if you want it to be 1, then divide it (and all results obtained with this algorithm) by 123.