Is there a function to know where the heat diffusion started on a surface, at t = 0?

I am trying to find a way, using the Fourier transform, to answer the following: if I know the temperature of each vertex of a connected graph at time T, is there a way to find out which vertex the heat diffusion started from at t = 0, leading to the current temperature signal at time T?
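Not stated in the question, but for concreteness, the standard model for this setup is the graph heat equation, where L is the graph Laplacian and x(t) is the vector of vertex temperatures; one natural (assumed, not the only) source-localization criterion then compares the observed signal against each column of the heat kernel:

```latex
% Graph heat equation: L is the graph Laplacian, x(t) the vector of vertex
% temperatures. A point source at vertex v means x(0) = \delta_v.
\frac{dx}{dt} = -Lx \quad\Rightarrow\quad x(T) = e^{-TL}\,x(0)
% Compare the observed x(T) against each column of the heat kernel e^{-TL}:
\hat{v} = \arg\min_v \,\bigl\| x(T) - e^{-TL}\,\delta_v \bigr\|_2
```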

Related

What is the meaning of a pixel's 'value' when interpreted via the inverse Fourier transform?

I'm working on an image signal project in C++ using the DFT and IDFT.
I majored in physics and have lots of experience dealing with the 1D Fourier transform.
However, the 2D DFT is really not intuitive.
I studied a lot and now have a little understanding of what the 2D DFT is.
This is what I really want to know:
In 1D, assume you have two sinusoids with frequencies 30 and 60 (ignoring units).
Then I can have a signal containing frequencies 30 and 60 (spatial domain).
When I take the DFT of that signal, I get peaks at 30 and 60 in the frequency domain.
*** If I reduce the value at frequency f = 30, then I get a lower amplitude in the spatial domain, which means that in A·sin(2π·30x) the coefficient A is reduced.
Alright then.
Now, say I have an image of 100x100 pixels and take its 2D DFT.
Then I also have a 2D frequency domain (magnitude only).
*** What happens to the pixels in the spatial domain when I reduce the value of a specific frequency?
Suppose we have two frequencies, (10,10) and (20,20), in the frequency domain (u,v).
This means the image in the spatial domain is composed of these two sinusoidal functions.
Also, the same as in 1D, when I reduce the value of a specific frequency, I think it should reduce the amplitude of the corresponding 2D sinusoid, right?
Then how can I interpret a pixel?
*** How can I interpret a pixel with regard to the sinusoidal functions?
This question arises because a colleague and I are working on a project
where we insert a specific frequency, like (30,30), into the frequency domain of an original image.
Then, after the IDFT, I get the image I want.
But my colleague is trying to insert the frequency not in the frequency domain but in the spatial domain, by manipulating pixel values directly, which I can't understand…
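For reference, this is the standard inverse 2D DFT synthesis equation (not anything specific to the project) that the reasoning above rests on:

```latex
% Inverse 2D DFT of an M x N image: each pixel f(x,y) is the sum of all the
% 2D sinusoids, weighted by the frequency-domain coefficients F(u,v).
f(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}
         F(u,v)\, e^{\,j2\pi\left(\frac{ux}{M}+\frac{vy}{N}\right)}
```

So scaling a single coefficient F(u,v) scales the amplitude of exactly one of those 2D sinusoids, and every pixel changes by that sinusoid's scaled value at its (x, y) position.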

What is the derivative of FFT outputs with respect to time?

I am quite new to digital signal processing. I am trying to implement an anti-cogging algorithm in my PMSM control algorithm, following this [documentation].
I collected velocity data as a function of angle, and I transformed the velocity data to the frequency domain with an FFT. But the last step, Acceleration-Based Waveform Analysis, calculates a derivative of the FFT outputs with respect to time. The outputs are in the frequency domain, so how can I calculate the derivative of the FFT outputs with respect to time, and why is this calculation done?
"derivative of FFT outputs with respect to time" doesn't make any sense, so even though the notation used in the paper seems to say just that, I think we can understand it better by reading the text that says "the accelerations are
found by taking the time derivative of the FFT fitted speeds".
That makes perfect sense: The FFT of the velocity array is used to interpolate between samples, providing velocity as a continuous function of position. That continuous function is then differentiated at the appropriate position (j in the paper) to find the acceleration at every position i. I didn't read closely enough to find out how i and j are related.
In implementation, every FFT output at frequency f would be multiplied by j·2πf (that is, the angular frequency times √−1, where j is the imaginary unit, not the position index i) to produce the FFT of the acceleration function, and then the FFT basis functions would be evaluated in their continuous form (using Math.sin and Math.cos) to produce an acceleration at any desired point.
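A minimal sketch of that idea, not the paper's code: the signal is assumed periodic over the record, the names are illustrative, and a naive O(N²) DFT stands in for an FFT library.

```cpp
// Differentiate a sampled, periodic velocity signal via its DFT interpolant.
#include <cmath>
#include <complex>
#include <vector>

// Velocity samples v[0..N-1], assumed to cover one full period of duration
// T seconds. Returns the time derivative of the DFT interpolant (i.e. the
// acceleration) at any continuous position x in [0, N).
double accelerationAt(const std::vector<double>& v, double x, double T) {
    const std::size_t N = v.size();
    const double twoPi = 2.0 * M_PI;
    std::complex<double> sum(0.0, 0.0);
    for (std::size_t k = 0; k < N; ++k) {
        // DFT coefficient V[k] (naive; an FFT library would be used here).
        std::complex<double> Vk(0.0, 0.0);
        for (std::size_t n = 0; n < N; ++n)
            Vk += v[n] * std::exp(std::complex<double>(0.0, -twoPi * k * n / N));
        // Map bin k to a signed frequency so the interpolant stays real.
        double kk = (k <= N / 2) ? double(k) : double(k) - double(N);
        double f = kk / T;  // cycles per second over the period
        // Differentiation in time = multiplication by j*2*pi*f.
        std::complex<double> jw(0.0, twoPi * f);
        sum += jw * Vk * std::exp(std::complex<double>(0.0, twoPi * kk * x / N));
    }
    return sum.real() / double(N);
}
```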

How to quantify how good an object detection is, in image processing

I developed Oil&Gas pipeline detector, which works fairly well. The detector outputs is a line that represents the pipe position in respect to the camera.
Although it has very low false-positive rate, I would like to quantify how reliable is my detection. So I can provide it to other components that receives the line information.
Initially I started to compute the standard deviation of the last 10 samples, which gave me a good starting point, since when a false positive is detected, the std deviation increases. However since the camera moves over time this metric not reliable, because the movement itself would increase the value.
I have camera velocities information, so I thought I could kind of fuse the "predicted" measurement with the detected, with a Kalman Filter, for instance. The filter covariances would give the estimation I want.
Edited to add more relevant information:
Single camera with known parameters and fixed focal length.
The camera is attached to a robot's body.
The camera moves with low velocities (max 0.5 m/s and 0.3 rad/s).
The detector output is the line angle and the shortest camera-to-line distance, in meters.
However, I'm not sure if a Kalman filter is the right/best technique to apply here. Does anyone have a suggestion for how I can handle this?
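A minimal sketch of the Kalman-filter idea described above, using OpenCV's cv::KalmanFilter; the state layout, dt, and all noise magnitudes are illustrative assumptions, not tuned values.

```cpp
// State: [angle, distance, angular rate, linear rate]; measure [angle, distance].
#include <opencv2/video/tracking.hpp>

int main() {
    cv::KalmanFilter kf(4, 2, 0, CV_32F);  // 4 states, 2 measurements

    const float dt = 0.05f;  // assumed detector period, seconds
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, dt, 0,
        0, 1, 0, dt,
        0, 0, 1,  0,
        0, 0, 0,  1);
    kf.measurementMatrix = (cv::Mat_<float>(2, 4) <<
        1, 0, 0, 0,
        0, 1, 0, 0);
    // Process noise: scale with the known camera velocity bounds
    // (0.3 rad/s, 0.5 m/s) so ego-motion is "expected", not "surprising".
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-3));
    // Measurement noise: detector jitter, ideally estimated from static footage.
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-2));
    cv::setIdentity(kf.errorCovPost, cv::Scalar::all(1.0));

    // Per frame: predict, then correct with the detected (angle, distance).
    cv::Mat prediction = kf.predict();
    cv::Mat measurement = (cv::Mat_<float>(2, 1) << 0.1f, 2.0f);  // example values
    kf.correct(measurement);

    // The innovation (measurement minus predicted measurement), gated by the
    // innovation covariance, is a natural per-detection reliability score:
    // a false positive shows up as a large, statistically unlikely jump.
    return 0;
}
```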

Sinusoids with frequencies that are random variables: what does the FFT impulse look like?

I'm currently working on a program in C++ in which I am computing the time-varying FFT of a WAV file. I have a question regarding plotting the results of an FFT.
Say, for example, I have a 70 Hz signal produced by some instrument with certain harmonics. Even though I say this signal is 70 Hz, it's a real signal, and I assume there will be some randomness in how that 70 Hz signal varies. Say I sample it for 1 second at a sample rate of 20 kHz. I realize the sample period probably doesn't need to be 1 second, but bear with me.
Because I now have 20000 samples, when I compute the FFT I will have 20000 (or 19999) frequency bins. Let's also assume that my sample rate, in conjunction with some windowing technique, minimizes spectral leakage.
My question, then: will the FFT still produce a relatively ideal impulse at 70 Hz? Or will there 'appear to be' spectral leakage caused by the randomness of the original signal? In other words, what does the FFT of a sinusoid whose frequency is a random variable look like?
Some of the more common modulation schemes will add sidebands that carry the information in the modulation. Depending on the amount and type of modulation with respect to the length of the FFT, the sidebands can either appear separate from the FFT peak, or just "fatten" a single peak.
Your spectrum will appear broadened, and this happens in the real world. Look, for example, at the Voigt profile, which is a Lorentzian (the result of an ideal exponential decay) convolved with a Gaussian of a certain width, the width being determined by stochastic fluctuations, e.g. the Doppler effect on molecules in a gas being probed by a narrow-band laser.
You will not get an 'ideal' frequency peak either way. The limit of the FFT's resolution is one frequency bin (the frequency resolution being given by the inverse of the time-vector length), but even that (as #xvan pointed out) is in general broadened by the window function. If your window is nonexistent, i.e. it is in fact a rectangular window the length of the time vector, then you'll get spectral peaks that are convolved with a sinc function, and thus broadened.
The best way to visualize this is to make a long vector and plot a spectrogram (often shown for audio signals) with enough resolution that you can see the individual variation. The FFT of the overall signal is then the projection of the moving peaks onto the vertical axis of the spectrogram. The FFT of a given time vector does not have any time resolution; it sums up all frequencies that occur during the time you FFT. So the spectrogram (often people simply use the STFT, the short-time Fourier transform) has, at any given time, the 'full' resolution, i.e. the narrow line shape that you expect. The FFT of the full time vector shows the algebraic sum of all your line shapes and therefore appears broadened.
To sum it up, there are two separate effects:
a) broadening from the window function (as commenters 1 and 2 pointed out), and
b) broadening from the frequency fluctuation that you are trying to simulate and that happens in real life (e.g., you sitting on a swing while receiving a radio signal).
Finally, note the significance of #xvan's comment: φ = φ(t). If the phase angle is time dependent, then it has a derivative that is not zero. dφ/dt is a frequency shift, so your instantaneous frequency becomes f0 + (1/2π)·dφ/dt.
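A minimal simulation sketch of effect (b): a fixed 70 Hz tone versus one whose frequency wanders as a random walk. All sizes and the jitter magnitude are illustrative assumptions, and a naive DFT stands in for an FFT library.

```cpp
#include <cmath>
#include <complex>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int N = 4000;          // samples (kept small: naive O(N^2) DFT)
    const double fs = 20000.0;   // sample rate, Hz; bin width fs/N = 5 Hz,
    const double f0 = 70.0;      // so 70 Hz sits exactly on bin 14

    std::mt19937 rng(42);
    std::normal_distribution<double> jitter(0.0, 0.2);  // Hz per step

    std::vector<double> fixed(N), wandering(N);
    double phase = 0.0, f = f0;
    for (int n = 0; n < N; ++n) {
        fixed[n] = std::sin(2.0 * M_PI * f0 * n / fs);
        f += jitter(rng);                // random walk of the frequency
        phase += 2.0 * M_PI * f / fs;    // integrate instantaneous frequency
        wandering[n] = std::sin(phase);
    }

    // DFT magnitudes for the first 40 bins (0..195 Hz) of both signals.
    for (int k = 0; k < 40; ++k) {
        std::complex<double> A(0, 0), B(0, 0);
        for (int n = 0; n < N; ++n) {
            std::complex<double> w =
                std::exp(std::complex<double>(0.0, -2.0 * M_PI * k * n / double(N)));
            A += fixed[n] * w;
            B += wandering[n] * w;
        }
        std::printf("%7.2f Hz  fixed %8.1f  wandering %8.1f\n",
                    k * fs / N, std::abs(A), std::abs(B));
    }
    return 0;  // expect: one narrow, bin-centered peak vs. a broadened one
}
```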

3D stereo, bad 3D coordinates

I'm using stereo vision to obtain a 3D reconstruction, with the OpenCV library.
I've implemented my code this way (a minimal sketch of these steps follows below):
1) Stereo calibration
2) Undistortion and rectification of the image pair
3) Disparity map, using SGBM
4) 3D coordinates: calculating the depth map using reprojectImageTo3D()
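A minimal sketch of steps 2-4 using the OpenCV calls named above; the SGBM parameters, function name, and the assumption that the calibration outputs (K1, D1, K2, D2, R, T) are already loaded are all illustrative.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

void reconstruct(const cv::Mat& left, const cv::Mat& right,
                 const cv::Mat& K1, const cv::Mat& D1,
                 const cv::Mat& K2, const cv::Mat& D2,
                 const cv::Mat& R, const cv::Mat& T, cv::Mat& xyz) {
    // 2) Rectification: Q is the 4x4 disparity-to-depth mapping matrix.
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, D1, K2, D2, left.size(), R, T, R1, R2, P1, P2, Q);

    cv::Mat map1x, map1y, map2x, map2y, rleft, rright;
    cv::initUndistortRectifyMap(K1, D1, R1, P1, left.size(), CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K2, D2, R2, P2, left.size(), CV_32FC1, map2x, map2y);
    cv::remap(left, rleft, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(right, rright, map2x, map2y, cv::INTER_LINEAR);

    // 3) Disparity with SGBM (parameters are illustrative, not tuned).
    auto sgbm = cv::StereoSGBM::create(0, 128, 5);
    cv::Mat disp16, disp;
    sgbm->compute(rleft, rright, disp16);
    disp16.convertTo(disp, CV_32F, 1.0 / 16.0);  // SGBM output is fixed-point

    // 4) Depth: xyz holds metric (X, Y, Z) per pixel, in the units of T.
    cv::reprojectImageTo3D(disp, xyz, Q, true);
}
```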
Results:
- Good disparity map and good 3D reconstruction.
- Bad 3D coordinate values; the distances don't correspond to reality.
The 3D distances (the distance between camera and object) have a 10 mm error that increases with distance. I've used various baselines and I always get an error.
When I compare the extrinsic parameters, the vector T output by stereoRectify, the baseline matches.
So I don't know where the problem is.
Can someone help me please? Thanks in advance.
Calibration:
http://textuploader.com/ocxl
http://textuploader.com/ocxm
A 10 mm error can be reasonable for stereo vision solutions, depending of course on the sensor sensitivity, resolution, baseline, and the distance to the object.
The increasing error with respect to the object's distance is also typical of the problem: stereo correspondence essentially performs triangulation from the two video sensors to the object, and the larger the distance, the more any error in the angle between the video sensors and the object translates to distance along the depth axis, which means a larger error. A good example is when the angle between each sensor's line of sight and the baseline is almost a right angle, which means that any small positive error in estimating it can throw the estimated depth to infinity.
The architecture you selected looks good. You can try increasing the sensor resolution, or dig into the calibration process, which has a lot of room for tuning in the OpenCV library: making sure only images taken with the chessboard static are selected, choosing a higher variety of chessboard poses, adding images until the registration error between the two images drops below the maximum error you can allow, etc.
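One way to see the growth of the error with distance is first-order error propagation through the standard pinhole-stereo relation Z = f·B/d; the numbers below are illustrative assumptions, not taken from the posted calibration files.

```cpp
#include <cstdio>

int main() {
    const double f = 800.0;    // focal length in pixels (assumed)
    const double B = 0.10;     // baseline in meters (assumed)
    const double dd = 0.25;    // disparity uncertainty in pixels (assumed)

    for (double Z = 0.5; Z <= 4.0; Z += 0.5) {
        double d = f * B / Z;                // disparity at depth Z
        double dZ = (Z * Z / (f * B)) * dd;  // |dZ/dd| * dd, first-order error
        std::printf("Z = %4.1f m  disparity = %6.1f px  depth error ~ %6.1f mm\n",
                    Z, d, dZ * 1000.0);
    }
    return 0;  // the error grows quadratically with distance, as noted above
}
```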
