I have read in all books that:
Impulse(n) = 1 when n=0 , 0 otherwise
So when we state the impulse response of a signal x(n), I do not understand what its actual meaning is.
Does it mean that for n = 1, 2, 3, 4... we take the value of:
x(n) . Impulse(0)
Hence in that case, if n >= 0, we would always get the output y(n) from x(n) as:
x(n) . Impulse(0) = x(n) . 1
It is a known fact that anything multiplied by 1 stays the same.
But in many DSP problems I see that the impulse response h(n) is, for example, (1/2)^n u(n-3). I am not able to understand what the function and technical meaning of the impulse response is, then.
Please correct my understanding of the same.
The impulse response h of a system (not of a signal) is the output y of this system when it is excited by an impulse signal x (1 at t = 0, 0 otherwise).
Not diving too much into theory and considerations, this response is very important because most linear systems (filters, etc.) in signal processing can be written in the form of the following convolution:
y(n) = sum(h(u) * x(n-u), u=0, u=L-1);
The response at n=0 of such system when excited by impulse signal is then:
y(0) = h(0) * x(0) + h(1) * x(0-1) + ... h(L-1) * x(0-(L-1))
= h(0) * x(0) + 0 + 0 + ... + 0 => indeed x is zero everywhere except for t = 0
= h(0)
The response at n = 1, is also:
y(1) = h(0) * x(1-0) + h(1) * x(1-1) + ... h(L-1) * x(1-(L-1))
= 0 + h(1) * x(0) + 0 + 0 + ... + 0 => indeed x is zero everywhere except for t = 0
= h(1)
Ta-da... you see that by exciting the system with an impulse signal you obtain the values of h, so you can fully characterise your system and know how it will react to any other input signal.
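A quick sketch of this in Python/NumPy (the filter coefficients here are just made-up example values):

import numpy as np

h = np.array([0.5, 0.3, 0.2])    # some FIR system's impulse response (example values)

x = np.zeros(8)                  # unit impulse: 1 at n = 0, 0 otherwise
x[0] = 1.0

# y(n) = sum(h(u) * x(n-u)) -- the discrete convolution from above
y = np.convolve(h, x)[:len(x)]
print(y)                         # [0.5 0.3 0.2 0. 0. 0. 0. 0.] -> you read off h directly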
A system G is said to be linear and time invariant (LTI) if it is linear and its behaviour does not change with time, or in other words:
Linearity: G[k1*i1(t)+k2*i2(t)] = k1*G[i1]+k2*G[i2]
Time Invariance: if y(n) = G[i(n)], then y(n-r) = G[i(n-r)]
where time invariance means that a delay in the input corresponds to a delay in the output, and
where the i's are input functions, the k's are scalars, and y is the output function.
Now in general a lot of systems belong to, or can be well approximated by, this class.
The important fact that I think you are looking for is that these systems are completely characterised by their impulse response. The impulse is the function you wrote; the impulse response, in general, is how your system reacts to this function: you take your system, feed it the impulse, and you get the impulse response.
Now you keep the impulse response: when your system is fed with another input, you can calculate the new output by performing the convolution in time between the impulse response and your new input.
That is why the system is completely characterized by the impulse response: whatever input function you take, you can calculate the output by convolving the input with the impulse response.
Another important fact is that if you perform the Fourier Transform of the impulse response you get the behaviour of your system in the frequency domain.
Considering this, you can also calculate the output by taking the FT of your input and the FT of the impulse response, multiplying them (in the frequency domain), and then performing the inverse Fourier transform of the result: that result is the output signal of your system.
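A rough numerical check of that equivalence (NumPy only; h and x are arbitrary example values, and the FFT length has to cover the full linear convolution):

import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.5, 0.3, 0.2])          # impulse response (example values)
x = rng.standard_normal(64)            # some input signal

y_time = np.convolve(h, x)             # convolution in the time domain

n = len(h) + len(x) - 1                # length needed for linear (not circular) convolution
y_freq = np.fft.ifft(np.fft.fft(h, n) * np.fft.fft(x, n)).real

print(np.allclose(y_time, y_freq))     # True: same output either way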
We are trying an alternation optimization strategy to solve Lyapunov problems.
We break down our decision variables into two sets, Set 1 and Set 2.
We were perplexed at how it was possible that, after getting a solution for Set 1 and plugging those solved variables into the optimization over Set 2, the transferred variables would not be feasible.
The constraints that fail are those due to the SOS coefficient matching equality constraints.
Here, we print in each row the constraint that failed, together with the value of our Initial Guess. We can see that the Initial Guess is off by only a very small amount compared to the constraints.
LinearEqualityConstraint
(2 * Symmetric(97,40) + 2 * Symmetric(96,41)) == 9.50028
[9.50027496]
LinearEqualityConstraint
(2 * Symmetric(97,47) + 2 * Symmetric(96,48)) == 234.465
[234.4647013]
LinearEqualityConstraint
(2 * Symmetric(97,54) + 2 * Symmetric(96,55)) == -234.463
[-234.46336504]
LinearEqualityConstraint
(2 * Symmetric(97,61) + 2 * Symmetric(96,62)) == 12.7962
[12.79618825]
LinearEqualityConstraint
(2 * Symmetric(97,68) + 2 * Symmetric(96,69)) == -12.7964
[-12.79637068]
LinearEqualityConstraint
(2 * Symmetric(97,75) + 2 * Symmetric(96,76)) == -51.4061
[-51.40605828]
LinearEqualityConstraint
(2 * Symmetric(97,81) + 2 * Symmetric(96,82)) == 51.406
[51.40604213]
LinearEqualityConstraint
(2 * Symmetric(97,86) + 2 * Symmetric(96,87)) == 192.794
[192.79430158]
LinearEqualityConstraint
(2 * Symmetric(97,90) + 2 * Symmetric(96,91)) == -141.924
[-141.92366183]
LinearEqualityConstraint
(2 * Symmetric(97,93) + 2 * Symmetric(96,94)) == -37.6674
[-37.66740401]
InitialGuess V_sos and
Our guess for what's happening is:
When you extract the solution from one optimization using result.GetSolution(var), you lose some precision.
Or, when you set the previous solution using prog.SetInitialGuess(np_array) you lose some precision.
What's the solution here? Should we just keep feeding the solution back in even though it says infeasible?
This is a partial cookbook for when I debug SOS problems, especially when working with Lyapunov problems:
Choose the right monomial basis
The main idea is to remove the 0-th order monomial 1 from the monomial basis of the sos polynomial. Here is a quick explanation:
The mathematical problem is
Find λ(x)
−Vdot − λ(x) * (ρ − V) is sos
λ(x) is sos
Namely you want to prove that V≤ρ ⇒ Vdot ≤ 0
So first I would suggest re-writing your dynamics to make sure that 0 is the goal state (you can always shift your state).
Second, you can see that since x=0 is the equilibrium point, both V(0) = 0 and Vdot(0) = 0 (because x=0 is the global minimum of V(x), hence ∂V/∂x = 0 at x=0, which implies Vdot(0) = 0). Your sos polynomial p(x) = −Vdot − λ(x) * (ρ − V) must therefore satisfy p(0) = -λ(0) * ρ. But λ(x) >= 0 and ρ > 0, so we know λ(0) = 0.
Lemma
If a sos polynomial s(x) satisfies s(0) = 0, then its monomial basis cannot contain the 0-th order monomial (namely 1).
Proof
Remember that s(x) is a sos polynomial, namely
s(x) = m(x)ᵀQm(x)
where m(x) contains the monomial basis, and Q is a psd matrix. Now let's decompose the monomial basis m(x) into two parts, the 0-th order monomial 1 and the remaining monomials mbar(x). For example, if m(x) = [x1, x2, 1], then mbar(x) = [x1, x2]. We also decompose the psd matrix Q accordingly
s(x) = [mbar(x)]ᵀ [Q11 Q10] [mbar(x)]
[ 1] [Q10 Q00] [ 1]
Since s(0) = Q00 = 0 and Q is psd, we also know that Q10 = 0 (a psd matrix with a zero diagonal entry must have zeros in the corresponding row and column), so now we can use the smaller psd matrix Q11 rather than Q. Equivalently we write s(x) = mbar(x)ᵀ * Q11 * mbar(x), where mbar(x) is the monomial basis that doesn't contain the 0-th order monomial, QED.
So why is removing the 0-th order monomial from the monomial basis and using the smaller psd matrix Q11 a good idea when your sos polynomial s(x) satisfies s(0) = 0? The reason is that if the monomial basis contains 1, then your psd matrix Q has to be on the boundary of the psd cone, namely your SDP problem doesn't have a strict interior. This can lead to a violation of Slater's condition, which in turn breaks strong duality. As an example, if your s(x) = x², then by including the 0-th order monomial it is written as
x² = [x] [1 0] [x]
[1] [0 0] [1]
And you see that the Gram matrix [[1, 0], [0, 0]] is on the boundary of the psd cone (with one eigenvalue equal to 0). But if you remove 1 from the monomial basis, then the Gram matrix is just Q11 = 1, strictly in the interior of the psd cone.
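A quick numerical way to see that boundary/interior difference (NumPy only):

import numpy as np

Q_full = np.array([[1.0, 0.0],          # Gram matrix of x^2 over the basis [x, 1]
                   [0.0, 0.0]])
print(np.linalg.eigvalsh(Q_full))       # [0. 1.] -> an eigenvalue is 0, on the psd boundary

Q11 = np.array([[1.0]])                 # Gram matrix over the reduced basis [x]
print(np.linalg.eigvalsh(Q11))          # [1.] -> strictly positive, interior of the psd cone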
In Drake, after removing 1 from the monomial basis, you can create your sos polynomial λ(x) as
lambda_poly, lambda_gram = prog.NewSosPolynomial(monomial_basis)
where monomial_basis doesn't contain the 0-th order monomial.
Backoff during bilinear alternation
This is a typical problem in bilinear alternation. The issue is that when you solve a conic optimization problem with an objective function, the optimal solution always occurs at the boundary of the cone, namely it is very close to being infeasible. Then when you fix some variables to this solution at the cone boundary, in the next iteration the problem is very likely infeasible due to numerical roundoff error.
A typical solution is that after solving the optimization problem on variable Set 1 with an objective, now "backoff" a little bit by solving a feasibility problem on variable Set 1. This new solution is often strictly feasible (namely it is inside the strict interior of the cone), now pass this strictly feasible solution Set 1 to the next iteration and search for Set 2.
More concretely, suppose at one iteration you solve the following optimization problem
min c'*x
s.t constraint_on_x
and denote the optimal cost as p. Now solve a new feasibility problem
find x
s.t c'*x <= p + epsilon
constraint_on_x
where epsilon can be a small positive number. This new solution will be used in the next iteration to search for a different set of variables.
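Here is a rough sketch of that backoff step on a toy SDP (using cvxpy as a stand-in for the original MathematicalProgram; the problem data are made up, and how far into the interior the backoff solution lands depends on the solver):

import numpy as np
import cvxpy as cp

X = cp.Variable((2, 2), PSD=True)                  # toy psd decision variable
constraints = [cp.trace(X) == 1]

# Step 1: minimize the objective; the optimum sits on the boundary of the psd cone.
prob = cp.Problem(cp.Minimize(X[0, 0]), constraints)
prob.solve()
p = prob.value
print("before backoff:", np.linalg.eigvalsh(X.value).min())   # ~0

# Step 2: "backoff" -- keep c'*x <= p + epsilon as a constraint, solve feasibility only.
epsilon = 1e-3
backoff = cp.Problem(cp.Minimize(0), constraints + [X[0, 0] <= p + epsilon])
backoff.solve()
print("after backoff:", np.linalg.eigvalsh(X.value).min())    # typically larger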
You can check whether your solution is on the boundary of the positive semidefinite cone by checking the eigenvalues of your psd matrix. Here is the pseudo-code
for binding in prog.positive_semidefinite_constraints():
    psd_sol = result.GetSolution(binding.variables())
    psd_sol = psd_sol.reshape((binding.evaluator().matrix_rows(),
                               binding.evaluator().matrix_rows()))
    print(f"minimal eigenvalue {np.linalg.eigvalsh(psd_sol).min()}")
You should see that before doing this "backoff" some of the minimal eigenvalues are almost 0. After "backoff" the minimal eigenvalues get larger.
I am a student working with time-series data which we feed into a neural network for classification (my task is to build and train this NN).
We're told to use a band-pass filter of 10 Hz to 150 Hz since anything outside that is not interesting.
After applying the band-pass, I've also down-sampled the data to 300 samples per second (originally it was 768 Hz). My understanding of the Shannon Nyquist sampling theorem is that, after applying the band-pass, any information in the data will be perfectly preserved at this sample-rate.
However, I got into a discussion with my supervisor who claimed that 300 Hz might not be sufficient even if the signal was band-limited. She says that it is only the minimum sample rate, not necessarily the best sample rate.
My understanding of the sampling theorem makes me think the supervisor is obviously wrong, but I don't want to argue with my supervisor, especially in case I'm actually the one who has misunderstood.
Can anyone help to confirm my understanding or provide some clarification? And how should I take this up with my supervisor (if at all)?
The Nyquist-Shannon theorem states that the sampling frequency should be at least twice the bandwidth, i.e.,
fs > 2B
So, this is the minimal criterion. If the sampling frequency is less than 2B then there will be aliasing. There is no upper limit on the sampling frequency, but the higher the sampling frequency, the better the reconstruction will be.
So, I think your supervisor is right in saying that it is the minimal condition and not the best one.
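A tiny numerical illustration of the aliasing part (NumPy; the frequencies are just example values):

import numpy as np

f_true = 150.0    # Hz, a tone inside the band of interest
fs_bad = 200.0    # Hz, below 2 * 150 Hz
fs_ok = 400.0     # Hz, above 2 * 150 Hz

t_bad = np.arange(0, 1, 1 / fs_bad)
t_ok = np.arange(0, 1, 1 / fs_ok)

# At fs = 200 Hz a 150 Hz tone aliases to |150 - 200| = 50 Hz:
tone_bad = np.cos(2 * np.pi * f_true * t_bad)
alias = np.cos(2 * np.pi * 50.0 * t_bad)
print(np.allclose(tone_bad, alias))     # True -> the two tones are identical once sampled

tone_ok = np.cos(2 * np.pi * f_true * t_ok)
alias_ok = np.cos(2 * np.pi * 50.0 * t_ok)
print(np.allclose(tone_ok, alias_ok))   # False -> 400 Hz sampling keeps them distinct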
Actually, you and your supervisor are both wrong. The minimum sampling rate required to faithfully represent a real-valued time series whose spectrum lies between 10 Hz and 150 Hz is 140 Hz, not 300 Hz. I'll explain this, and then I'll explain some of the context that shows why you might want to "oversample", as it is referred to (spoiler alert: the Balian-Low Theorem). The supervisor is mixing folklore into the discussion, and when folklore is not properly contextualized, it tends to telephone-tag into fakelore. (That's a common failing even in the peer-reviewed literature, by the way.) And there's a lot of fakelore here that needs to be defogged.
For the following, I will use the following conventions.
There's no math layout on Stack Overflow (except what we already have with UTF-8), so ...
a^b denotes a raised to the power b.
∫_I (⋯x⋯) dx denotes an integral of (⋯x⋯) taken over all x ∈ I, with the default I = ℝ.
The support supp φ (or supp_x φ(x) to make the "x" explicit) of a function φ(x) is the smallest closed set containing all the x-es for which φ(x) ≠ 0. For regularly-behaving (e.g. continuously differentiable) functions that means a union of closed intervals and/or half-rays or the whole real line, itself. This figures centrally in the Shannon-Nyquist sampling theorem, as its main condition is that a spectrum have bounded support; i.e. a "finite bandwidth".
For the Fourier transform I will use the version that has the 2π up in the exponent, and for added convenience, I will use the convention 1^x = e^{2πix} = cos(2πx) + i sin(2πx) (which I refer to as the Ramanujan Convention, as it is the convention I frequently used in my previous life oops I mean which Ramanujan secretly used in his life to make the math a whole lot simpler).
The set ℤ = {⋯, -2, -1, 0, +1, +2, ⋯ } is the integers, and 1^{x+z} = 1^x for all z∈ℤ - making 1^x the archetype of a periodic function whose period is 1.
Thus, the Fourier transform f̂(ν) of a function f(t) and its inverse are given by:
f̂(ν) = ∫ f(t) 1^{-νt} dt, f(t) = ∫ f̂(ν) 1^{+νt} dν.
The spectrum of the time series given by the function f(t) is the function f̂(ν) of the cyclic frequency ν, which is what is measured in Hertz (Hz.); t, itself, being measured in seconds. A common convention is to use the angular frequency ω = 2πν, instead, but that muddies the picture.
The most important example, with respect to the issue at hand, is the Fourier transform χ̂_Ω of the interval function given by χ_Ω(t) = 1 if t ∈ [-½Ω,+½Ω] and χ_Ω(t) = 0 else:
χ̂_Ω(t) = ∫_[-½Ω,+½Ω] 1^{νt} dν
        = {1^{+½Ωt} - 1^{-½Ωt}}/{2πit}
        = {2i sin(πΩt)}/{2πit}
        = Ω sinc(Ωt)
which is where the function sinc x = (sin πx)/(πx) comes into play.
The cardinal form of the sampling theorem is that a function f(t) can be sampled over an equally-spaced sampled domain T ≡ { kΔt: k ∈ ℤ }, if its spectrum is bounded by supp f̂ ⊆ [-½Ω,+½Ω] ⊆ [-1/(2Δt),+1/(2Δt)], with the sampling given as
f(t) = ∑_{t'∈T} f(t') Ω sinc(Ω(t - t')) Δt.
So, this generally applies to [over-]sampling with redundancy factors 1/(ΩΔt) ≥ 1. In the special case where the sampling is tight with ΩΔt = 1, then it reduces to the form
f(t) = ∑_{t'∈T} f(t') sinc({t - t'}/Δt).
In your case, supp f̂ ⊆ [-150 Hz., +150 Hz.] (the band [10 Hz., 150 Hz.] plus its mirror image, for a real-valued signal), so the tightest fit of this baseband form is with 1/Δt = Ω = 300 Hz.
This generalizes to equally-spaced sampled domains of the form T ≡ { t₀ + kΔt: k ∈ ℤ } without any modification.
But it also generalizes to frequency intervals supp f̂ = [ν₋,ν₊] of width Ω = ν₊ - ν₋ and center ν₀ = ½ (ν₋ + ν₊) to the following form:
f(t) = ∑_{t'∈T} f(t') 1^{ν₀(t - t')} Ω sinc(Ω(t - t')) Δt.
In your case, you have ν₋ = 10 Hz., ν₊ = 150 Hz., Ω = 140 Hz., ν₀ = 80 Hz. with the condition Δt ≤ 1/140 second, a sampling rate of at least 140 Hz. with
f(t) = (140 Δt) ∑_{t'∈T} f(t') 1^{80(t - t')} sinc(140(t - t')).
where t and Δt are in seconds.
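Here's a rough numerical check of that interpolation formula (NumPy; it uses a complex-valued test signal whose one-sided spectrum sits inside [10 Hz, 150 Hz], and it truncates the infinite sum, so the agreement improves as the sample grid grows):

import numpy as np

nu_lo, nu_hi = 10.0, 150.0
Omega = nu_hi - nu_lo              # 140 Hz bandwidth
nu0 = 0.5 * (nu_lo + nu_hi)        # 80 Hz centre frequency
dt = 1.0 / Omega                   # critical sampling interval, 1/140 second

def f(t):                          # two in-band complex exponentials (example values)
    return np.exp(2j * np.pi * 60.0 * t) + 0.5 * np.exp(2j * np.pi * 125.0 * t)

k = np.arange(-2000, 2001)         # truncated sampling grid T = { k*dt }
tk = k * dt
samples = f(tk)

def reconstruct(t):
    # f(t) = sum f(t') 1^{nu0(t-t')} Omega sinc(Omega(t-t')) dt,
    # with np.sinc(x) = sin(pi x)/(pi x), the same sinc convention as above
    tau = t - tk
    kern = np.exp(2j * np.pi * nu0 * tau) * Omega * np.sinc(Omega * tau) * dt
    return np.sum(samples * kern)

t_test = 0.01234                   # an off-grid instant near the middle of the grid
print(abs(reconstruct(t_test) - f(t_test)))   # small, and shrinks as the grid grows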
There is a larger context to all of this. One of the main places where this can be used is for transforms devised from an overlapping set of windowed filters in the frequency domain - a typical case in point being transforms for the time-scale plane, like the S-transform or the continuous wavelet transform.
Since you want the filters to be smoothly windowed functions, without sharp corners, then in order for them to provide a complete set that adds up to a finite non-zero value over all of the frequency spectrum (so that they can all be normalized, in tandem, by dividing out by this sum), their respective supports have to overlap.
(Edit: Generalized this example to cover both equally-spaced and logarithmic-spaced intervals.)
One example of such a set would be filters that have end-point frequencies taken from the set
Π = { p₀ (α + 1)ⁿ + β {(α + 1)ⁿ - 1} / α: n ∈ {0,1,2,⋯} }
So, for interval n (counting from n = 0), you would have ν₋ = p_n and ν₊ = p_{n+1}, where the members of Π are enumerated
p_n = p₀ (α + 1)ⁿ + β {(α + 1)ⁿ - 1} / α,
Δp_n = p_{n+1} - p_n = α p_n + β = (α p₀ + β)(α + 1)ⁿ,
n ∈ {0,1,2,⋯}
The center frequency of interval n would then be ν₀ = p_n + ½ Δp₀ (α + 1)ⁿ and the width would be Ω = Δp₀ (α + 1)ⁿ, but the actual support for the filter would overlap into a good part of the neighboring intervals, so that when you add up the filters that cover a given frequency ν the sum doesn't drop down to 0 as ν approaches any of the boundary points. (In the limiting case α → 0, this produces an equally-spaced frequency domain, suitable for an equalizer, while in the case β → 0, it produces a logarithmic scale with base α + 1, where octaves are equally-spaced.)
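A small sketch of how those end-point frequencies behave in the two limiting cases (the values of p₀, α, β here are just examples):

import numpy as np

def endpoints(p0, alpha, beta, count):
    # p_n = p0*(alpha+1)**n + beta*((alpha+1)**n - 1)/alpha, n = 0, 1, 2, ...
    n = np.arange(count)
    if abs(alpha) < 1e-12:                  # limiting case alpha -> 0: equal spacing
        return p0 + beta * n
    return p0 * (alpha + 1.0) ** n + beta * ((alpha + 1.0) ** n - 1.0) / alpha

print(endpoints(20.0, 0.0, 10.0, 5))        # alpha -> 0: [20. 30. 40. 50. 60.], equalizer-style
print(endpoints(20.0, 1.0, 0.0, 5))         # beta = 0: [20. 40. 80. 160. 320.], octave-spaced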
The other main place where you may apply this is to time-frequency analysis and spectrograms. Here, the role of a function f and its Fourier transform f̂ are reversed and the role of the frequency bandwidth Ω is now played by the (reciprocal) time bandwidth 1/Ω. You want to break up a time series, given by a function f(t) into overlapping segments f̃(q,λ) = g(λ)* f(q + λ), with smooth windowing given by the functions g(λ) with bounded support supp g ⊆ [-½ 1/Ω, +½ 1/Ω], and with interval spacing Δq much larger than the time sampling Δt (the ratio Δq/Δt is called the "hop" factor). The analogous role of Δt is played, here, by the frequency interval in the spectrogram Δp = Ω, which is now constant.
Edit: (Fixed the numbers for the Audacity example)
The minimum sampling rate for both supp_λ g and supp_λ f(q,λ) is Δq = 1/Ω = 1/Δp, and the corresponding redundancy factor is 1/(ΔpΔq). Audacity, for instance, uses a redundancy factor of 2 for its spectrograms. A typical value for Δp might be 44100/2048 Hz., while the time-sampling rate is Δt = 1/(2×3×5×7)² second (corresponding to 1/Δt = 44100 Hz.). With a redundancy factor of 2, Δq would be 1024/44100 second and the hop factor would be Δq/Δt = 1024.
If you try to fit the sampling windows, in either case, to the actual support of the band-limited (or time-limited) function, then the windows won't overlap and the only way to keep their sum from dropping to 0 on the boundary points would be for the windowing functions to have sharp corners on the boundaries, which would wreak havoc on their corresponding Fourier transforms.
The Balian-Low Theorem makes the actual statement on the matter.
https://encyclopediaofmath.org/wiki/Balian-Low_theorem
And a shout-out to someone I've been talking with, recently, about DSP-related matters and his monograph, which provides an excellent introductory reference to a lot of the issues discussed here.
A Friendly Guide to Wavelets, Gerald Kaiser, Birkhäuser, 1994.
He said it's part of a trilogy, another installment of which is forthcoming.
Given an input signal x (e.g. a voltage, sampled a thousand times per second, a couple of minutes long), I'd like to calculate e.g.
/ this is not q
y[3] = -3*x[0] - x[1] + x[2] + 3*x[3]
y[4] = -3*x[1] - x[2] + x[3] + 3*x[4]
. . .
I'm aiming for variable window length and weight coefficients. How can I do it in q? I'm aware of mavg, signal processing in q, and the moving sum q idiom.
In the DSP world this is called applying a filter kernel by doing convolution. The weight coefficients define the kernel, which makes a high- or low-pass filter. The example above calculates the slope from the last four points, fitting a straight line via the least-squares method.
Something like this would work for parameterisable coefficients:
q)x:10+sums -1+1000?2f
q)f:{sum x*til[count x]xprev\:y}
q)f[3 1 -1 -3] x
0n 0n 0n -2.385585 1.423811 2.771659 2.065391 -0.951051 -1.323334 -0.8614857 ..
Specific cases can be made a bit faster (running 0 xprev is not the best thing)
q)g:{prev[deltas x]+3*x-3 xprev x}
q)g[x]~f[3 1 -1 -3]x
1b
q)\t:100000 f[3 1 -1 -3] x
4612
q)\t:100000 g x
1791
There's a kx white paper of signal processing in q if this area interests you: https://code.kx.com/q/wp/signal-processing/
This may be a bit old but I thought I'd weigh in. There is a paper I wrote last year on signal processing that may be of some value. Working purely within kdb+, depending on the signal sizes you are using, you will see much better performance with an FFT-based convolution between the kernel/window and the signal.
However, I've only written up a simple radix-2 FFT, although in my GitHub repo I do have the untested work for a more flexible Bluestein algorithm, which will allow for more variable signal lengths. https://github.com/callumjbiggs/q-signals/blob/master/signal.q
If you wish to go down the path of performing a full manual convolution by a moving sum, then the best method would be to break it up into blocks equal to the kernel/window size (which is based on some work Arthur W did many years ago):
q)vec:10000?100.0
q)weights:30?1.0
q)wsize:count weights
q)(weights$(((wsize-1)#0.0),vec)til[wsize]+) each til count vec
32.5931 75.54583 100.4159 124.0514 105.3138 117.532 179.2236 200.5387 232.168.
If your input list is not big then you could use the technique mentioned here:
https://code.kx.com/q/cookbook/programming-idioms/#how-do-i-apply-a-function-to-a-sequence-sliding-window
That uses the 'scan' adverb. That approach creates multiple lists, which might be inefficient for big lists.
Other solution using scan is:
q)f:{sum y*next\[z;x]} / x-input list, y-weights, z-window size-1
q)f[x;-3 -1 1 3;3]
This function also creates multiple lists, so again it might not be very efficient for big lists.
Another option is to use indices to fetch the target items from the input list and perform the calculation. This operates only on the input list.
q) f:{[l;w;i]sum w*l i+til 4} / w- weight, l- input list, i-current index
q) f[x;-3 -1 1 3] each til count x
This is a very basic function. You can add more variables to it as per your requirements.
I have implemented a very simple linear regression with gradient descent algorithm in JavaScript, but after consulting multiple sources and trying several things, I cannot get it to converge.
The data is absolutely linear, it's just the numbers 0 to 30 as inputs with x*3 as their correct outputs to learn.
This is the logic behind the gradient descent:
train(input, output) {
const predictedOutput = this.predict(input);
const delta = output - predictedOutput;
this.m += this.learningRate * delta * input;
this.b += this.learningRate * delta;
}
predict(x) {
return x * this.m + this.b;
}
I took the formulas from different places, including:
Exercises from Udacity's Deep Learning Foundations Nanodegree
Andrew Ng's course on Gradient Descent for Linear Regression (also here)
Stanford's CS229 Lecture Notes
this other PDF slides I found from Carnegie Mellon
I have already tried:
normalizing input and output values to the [-1, 1] range
normalizing input and output values to the [0, 1] range
normalizing input and output values to have mean = 0 and stddev = 1
reducing the learning rate (1e-7 is as low as I went)
having a linear data set with no bias at all (y = x * 3)
having a linear data set with non-zero bias (y = x * 3 + 2)
initializing the weights with random non-zero values between -1 and 1
Still, the weights (this.b and this.m) do not approach any of the data values, and they diverge to infinity.
I'm obviously doing something wrong, but I cannot figure out what it is.
Update: Here's a little bit more context that may help figure out what my problem is exactly:
I'm trying to model a simple approximation to a linear function, with online learning by a linear regression pseudo-neuron. With that, my parameters are:
weights: [this.m, this.b]
inputs: [x, 1]
activation function: identity function z(x) = x
As such, my net will be expressed by y = this.m * x + this.b * 1, simulating the data-driven function that I want to approximate (y = 3 * x).
What I want is for my network to "learn" the parameters this.m = 3 and this.b = 0, but it seems I get stuck at a local minimum.
My error function is the mean-squared error:
error(allInputs, allOutputs) {
let error = 0;
for (let i = 0; i < allInputs.length; i++) {
const x = allInputs[i];
const y = allOutputs[i];
const predictedOutput = this.predict(x);
const delta = y - predictedOutput;
error += delta * delta;
}
return error / allInputs.length;
}
My logic for updating my weights will be (according to the sources I've checked so far) wi -= alpha * dError/dwi
For the sake of simplicity, I'll call my weights this.m and this.b, so we can relate it back to my JavaScript code. I'll also call y^ the predicted value.
From here:
error = y - y^
= y - this.m * x + this.b
dError/dm = -x
dError/db = 1
And so, applying that to the weight correction logic:
this.m += alpha * x
this.b -= alpha * 1
But this doesn't seem correct at all.
I finally found what's wrong, and I'm answering my own question in hopes it will help beginners in this area too.
First, as Sascha said, I had some theoretical misunderstandings. It may seem correct for your adjustment to include the input value verbatim, but as he said, that factor should already come out of the gradient. It all depends on your choice of error function.
Your error function is the measure of how far off you were from the real value, and that measurement needs to be consistent. I was using mean squared error as the measurement tool (as you can see in my error method), but I was using a pure absolute error (y^ - y) inside the training method to compute the update. Your gradient depends on the choice of this error function, so choose only one and stick with it.
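To make that concrete, here is the same idea as a tiny sketch (in Python rather than the JavaScript above, with a made-up learning rate and epoch count): for the squared error e = (y - (m*x + b))², the gradients are de/dm = -2*delta*x and de/db = -2*delta, which is exactly the m += lr*delta*x, b += lr*delta update once the factor of 2 is folded into the learning rate.

class LinearUnit:
    def __init__(self, lr=1e-3):
        self.m, self.b, self.lr = 0.0, 0.0, lr

    def predict(self, x):
        return self.m * x + self.b

    def train(self, x, y):
        delta = y - self.predict(x)       # prediction error
        # gradient of delta**2 w.r.t. m is -2*delta*x, w.r.t. b is -2*delta;
        # step against the gradient, with the factor 2 absorbed into lr
        self.m += self.lr * delta * x
        self.b += self.lr * delta

unit = LinearUnit()
for epoch in range(2000):
    for x in range(31):                   # the data set from the question: y = 3 * x
        unit.train(x, 3 * x)
print(unit.m, unit.b)                     # approaches 3 and 0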
Second, simplify your assumptions in order to test what's wrong. In this case, I had a very good idea what the function to approximate was (y = x * 3) so I manually set the weights (this.b and this.m) to the right values and I still saw the error diverge. This means that weight initialization was not the problem in this case.
After searching some more, I found my error was somewhere else: the function that was feeding data into the network was mistakenly passing a hardcoded 3 as the predicted output (it was using a wrong index in an array). So the oscillation I saw was the network trying to approximate y = 0 * x + 3 (this.b = 3 and this.m = 0), but because of the small learning rate and the error in the error-function derivative, this.b wasn't going to get near the right value, which made this.m take wild jumps to adjust to it.
Finally, keep track of the error measurement as your network trains, so you can have some insight into what's going on. This helps a lot in telling apart simple overfitting, too-large learning rates, and plain simple mistakes.
A small program that I wrote in Octave does not yield the desired phase spectrum. The magnitude plot is perfect, though.
f = 200;
fs = 1000;
phase = pi/3;
t = 0: 1/fs: 1;
sig = sin((2*pi*f*t) + phase);
sig_fft = fft(sig);
sig_fft_phase = angle(sig_fft) * 180/pi;
sig_fft_phase(201)
sig_fft_phase(201) returns 5.998 (6 degrees) rather than 60 degrees. What am I doing wrong? Is my expectation incorrect?
In your example, if you generate the frequency axis: (sorry, I don’t have Octave here, so Python will have to do—I’m sure it’s the same in Octave):
faxis = (np.arange(0, t.size) / t.size) * fs
you’ll see that faxis[200] (Python is 0-indexed, equivalent to Octave’s 201 index) is 199.80019980019981. You think you’re asking for the phase at 200 Hz, but you’re not; you’re asking for the phase at 199.8 Hz.
(This happens because your t vector includes 1.0: that one extra sample slightly decreases the spectral spacing. I don’t think the link @Sardar_Usama posted in their comment is correct; it has nothing to do with the sinusoid not ending on a complete cycle, since this approach should work with incomplete cycles.)
The solution: zero-pad the 1001-long sig vector to 2000 samples. Then, with a new faxis frequency vector, faxis[400] (Octave’s 401st index) corresponds to exactly 200 Hz:
In [54]: sig_fft = fft.fft(sig, 2000);
In [55]: faxis = np.arange(0, sig_fft.size) / sig_fft.size * fs
In [56]: faxis[400]
Out[56]: 200.0
In [57]: np.angle(sig_fft[400]) * 180 / np.pi
Out[57]: -29.950454729683386
But oh no, what happened? This says the angle is -30°?
Well, recall that Euler’s formula says that sin(x) = (exp(i * x) - exp(-i * x)) / 2i. That i in the denominator means that the phase recovered by the FFT won’t be 60°, even though the input sine wave has phase of 60°. Instead, the FFT bin’s phase will be 60 - 90 degrees, since -90° = angle(1/i) = angle(-i). So this is actually the right answer! To recover the sine wave’s phase, you’d need to add 90° to the phase of the FFT bin.
So to summarize, you need to fix two things:
make sure you’re looking at the right frequency bin. For an N-point FFT (and no fftshift), the bins are at [0 : N - 1] / N * fs. Above, we just used an N = 2000 point FFT to ensure that 200 Hz is represented exactly.
Understand that, although you have a sine wave, as far as the FFT is concerned, it gets two complex exponentials, at +200 and -200 Hz, and with amplitudes 1/(2i) and -1/(2i). That imaginary value in the denominators shifts the phase you expect by -90° and +90° respectively.
If you happened to have used cos, a cosine wave, for sig, you wouldn’t have run into this mathematical obstacle 😆, so pay attention to the difference between sin and cos in the future 💪!
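Putting both fixes together in NumPy (same parameters as the original Octave snippet, but with the endpoint dropped from t so the 200 Hz bin is exact):

import numpy as np

f, fs, phase = 200, 1000, np.pi / 3
t = np.arange(0, 1, 1 / fs)                       # 1000 samples; excludes t = 1.0
sig = np.sin(2 * np.pi * f * t + phase)

sig_fft = np.fft.fft(sig)
k = int(f * len(sig) / fs)                        # bin that lands exactly on 200 Hz
print(np.degrees(np.angle(sig_fft[k])))           # about -30: the sine's 60 minus the 90 from 1/(2i)
print(np.degrees(np.angle(sig_fft[k])) + 90)      # about 60, the sine wave's actual phase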
change to t=0:1/fs:1-1/fs; then
sig_fft_phase(201)
ans = -30.000