How to choose a valid window of the signal in STFT - signal-processing

I have been reading a tutorial about the short-time Fourier transform, and I reached some lines in the text which I could not understand or figure out. The tutorial says the following:
This window function is first located to the very beginning of the signal. That is, the
window function is located at t=0. Let's suppose that the width of the window is "T" s. At
this time instant (t=0), the window function will overlap with the first T/2 seconds (I
will assume that all time units are in seconds). The window function and the signal are
then multiplied. By doing this, only the first T/2 seconds of the signal is being chosen,
with the appropriate weighting of the window.
What I could not understand is: "only the first T/2 seconds of the signal is being chosen, with the appropriate weighting of the window."
My question is: why will only the first T/2 seconds be chosen? Since the width of the window is T, I would think the whole portion of the signal contained within the width of the window should be chosen, and NOT only T/2 of it.
Can anyone please explain why T/2 of the width of the window is chosen instead of T?

For high-precision analysis of a time series you need to apply a window such as a Hann (Hanning) or Hamming window to each short interval of the time series before calculating the FFT, in order to avoid leakage, side lobes, or whatever you may call the effect of the rectangular window that is used implicitly if no other window is applied.
For a complete analysis this window is moved by about 50% of the window length between consecutive FFTs. (The exact optimal shift depends on the window; 50% is a good value.)
So with a window length of 100 ms you will start consecutive windows 50 ms apart: ..., 200 ms, 250 ms, 300 ms, ...
Now about the start of your time series. In order to see the very beginning of the time series in the FFTs, you need to zero-pad the signal before it begins and start the first window at -50 ms, then continue with 0 ms, 50 ms, 100 ms, and so on.
So the very first window starts half a window length before the actual time series, which means only the first T/2 seconds of real signal fall under it. (I guess this is what your text means by T/2.)
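A minimal C++ sketch of that framing step, assuming a Hann window and 50% overlap (all names are illustrative, and the FFT of each frame is left out):
#include <cmath>
#include <cstddef>
#include <vector>

// Cut a signal into Hann-windowed frames with 50% overlap. The first frame
// starts half a window before t=0 (zero-padded), so only the first T/2 seconds
// of real signal fall under it. Each frame would then be passed to an FFT.
std::vector<std::vector<double>> windowedFrames(const std::vector<double>& x,
                                                std::size_t windowLen)
{
    const double pi = 3.14159265358979323846;
    std::vector<double> hann(windowLen);
    for (std::size_t n = 0; n < windowLen; ++n)
        hann[n] = 0.5 - 0.5 * std::cos(2.0 * pi * n / (windowLen - 1));

    const long hop = static_cast<long>(windowLen / 2);   // 50% overlap
    std::vector<std::vector<double>> frames;

    // Frame start indices run -hop, 0, hop, 2*hop, ...; samples before index 0 are zero.
    for (long start = -hop; start < static_cast<long>(x.size()); start += hop)
    {
        std::vector<double> frame(windowLen, 0.0);
        for (std::size_t n = 0; n < windowLen; ++n)
        {
            const long idx = start + static_cast<long>(n);
            if (idx >= 0 && idx < static_cast<long>(x.size()))
                frame[n] = x[static_cast<std::size_t>(idx)] * hann[n];  // weight by window
        }
        frames.push_back(frame);   // FFT(frame) would go here
    }
    return frames;
}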

Related

Distribution chart not showing bars in AnyLogic

So I've set up a simulation to measure the time it would take for a ship to get from Houston to Sydney. I put a Time Measure Start and a Time Measure End around the MoveTo block and made a distribution chart using a histogram. But when I run the simulation, the bars don't show up. The numbers are still there, and the rest of the histograms are working properly, but there are no bars on this particular one. Here is a screenshot of the simulation; the histogram in question is the second one.
I can't be 100% sure why. Try copying and pasting one of the two histograms that work, then change the time measure name in the new histogram's properties.
I think most likely you have "Show PDF" unchecked; make sure it is checked.
After reviewing the model I realized that the total measured duration is over 1,000,000 seconds. There seems to be a limit on the maximum value the histogram can display, so to resolve this, change your model time units from seconds to something larger (e.g. minutes, hours, etc.).

Is there a way to program games without depending on frame rate?

I'm programming an iOS game and I use the update method for a lot of things. It is called at the game's refresh rate (for the moment 60 times per second), but the problem is that if the frame rate drops (because of a notification, for example, or any behavior in the game that lowers the fps a little when it runs), then the bugs come.
A quick example: if I have an animation of 80 pictures, 40 for the jump up and 40 for the fall, I need 1.2 seconds to run the animation, so if the jump takes 1.2 seconds everything is fine and the animation plays fully. But if my fps drops to 30, the animation gets cut off, because it now needs 2.4 seconds to run while the jump still takes 1.2 seconds. This is only a quick example; there are a lot of unexpected behaviors in the game if the frame rate drops. So my question is: do game developers really depend this much on the frame rate, or is there a way to avoid these fps bugs (another way to program, or any trick)?
Base your timing on the time, rather than the frame count. So, save a time stamp on each frame, and on the next frame, calculate how much time has elapsed, and based on your ideal frame rate, figure out how many frames of animation to advance. At full speed, you shouldn’t notice a difference, and when the frame rate drops, your animations may get jerky but the motion will never get more than 1 frame behind where it should be.
Edit: as uliwitness points out, be careful what time function you use, so you don’t encounter issues when, for example, the computer goes to sleep or the game pauses.
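A rough sketch of that approach (illustrative names only; std::chrono::steady_clock is used because it is monotonic, so the elapsed time can never go backwards):
#include <chrono>

// Advance an animation by the number of "ideal" frames that actually elapsed
// since the last update, instead of one frame per rendered frame.
struct Animation {
    double idealFps = 60.0;      // rate the frame sequence was authored for
    double frameCursor = 0.0;    // fractional position within the sequence
    std::chrono::steady_clock::time_point lastUpdate =
        std::chrono::steady_clock::now();

    // Returns the index of the image to draw; the caller clamps it to the
    // last image of the sequence.
    int advance() {
        const auto now = std::chrono::steady_clock::now();
        const double elapsed =
            std::chrono::duration<double>(now - lastUpdate).count();  // seconds
        lastUpdate = now;
        frameCursor += elapsed * idealFps;  // e.g. 1/30 s elapsed -> +2 frames at 60 fps
        return static_cast<int>(frameCursor);
    }
};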
Always use the delta value in your update method. This is platform- and engine-independent: multiply any speed or change value by the delta value (the time interval between the current and the previous frame).
In the case of the animation, one way to fix the issue is to advance the animation counter by delta multiplied by the inverse of the expected frame interval, then round this value to get the correct image for the animation.
// currentFrame is a float ivar, reset to 0 at the beginning of the animation.
// delta is the time since the previous update and 60.0 is the inverse of the
// expected frame interval (1/60 s), so currentFrame advances by the number of
// 60-fps frames that actually elapsed.
currentFrame = currentFrame + delta * 60.0;
int imageIndex = roundf(currentFrame);
However, with Sprite Kit there is a better way to do this kind of animation, as there is a premade SKAction dealing with sprite animation.
[SKAction animateWithTextures:theTextures timePerFrame:someInterval];
With this solution you don't have to deal with timing the images at all. The engine will do that for you.
There's a great discussion about FPS-based and Time-based techniques here:
Why You Should be Using Time-based Animation and How to Implement it
It's the best in my opinion: very complete, easy to follow, and it provides JSFiddle examples. I translated those examples to C++/Qt.

iOS Accurate AudioTimeStamp when rendering Audio Units

In my AudioInputRenderCallback I'm looking to capture an accurate time stamp of certain audio events. To test my code, I'm inputting a click track at 120 BPM, i.e. a click every 500 milliseconds (the click is accurate; I checked and double-checked). I first get the decibel level of every sample and check whether it's over a threshold; this works as expected. I then take the hostTime from the AudioTimeStamp and convert it to milliseconds. The first click gets assigned to that static timestamp, and the second time through I calculate the interval and then reassign the static value. I expected to see a 500 ms interval. To be able to detect the click correctly I have to be within 5 milliseconds, but the numbers bounce back and forth between 510 and 489. I understand it's not an RTOS, but can iOS be this accurate? Are there any issues with using the mach_absolute_time member of the AudioTimeStamp?
Audio Units are buffer-based. The minimum length of an iOS Audio Unit buffer seems to be around 6 ms, so if you use the time stamps of the buffer callbacks, your time resolution or time-sampling jitter will be about ±6 ms.
If you look at the actual raw PCM samples inside the Audio Unit buffer and pattern-match the "attack" transient (by threshold, autocorrelation, etc.), you might be able to get sub-millisecond resolution.
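A rough sketch of that idea (assuming the callback delivers mono float PCM; the function names here are illustrative, not part of any Apple API):
#include <AudioToolbox/AudioToolbox.h>
#include <mach/mach_time.h>
#include <cmath>

// Convert mach_absolute_time host ticks to seconds.
static double HostTicksToSeconds(UInt64 ticks)
{
    static mach_timebase_info_data_t tb = {0, 0};
    if (tb.denom == 0) mach_timebase_info(&tb);
    return (double)ticks * tb.numer / tb.denom / 1e9;   // ticks -> ns -> s
}

// Refine the click time below buffer resolution: find the first sample that
// crosses the threshold and add its offset within the buffer, in seconds,
// to the buffer's host-time stamp.
static double ClickTimeSeconds(const AudioTimeStamp *ts,
                               const float *samples, UInt32 frameCount,
                               double sampleRate, float threshold)
{
    for (UInt32 i = 0; i < frameCount; ++i) {
        if (std::fabs(samples[i]) >= threshold)
            return HostTicksToSeconds(ts->mHostTime) + (double)i / sampleRate;
    }
    return -1.0;   // no transient in this buffer
}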

How to profile on iOS?

I'm using instruments to profile the CPU activity of an iOS game.
The problem I'm having is that I'm not entirely sure what the data I'm looking at represents.
Here is the screen I see after running my game for a couple of minutes.
I can expand the call tree to see exactly which methods are using the most CPU time. I'm unsure whether this data represents CPU usage for the entire duration the profiler was running or just at that point in time.
I've tried running the slider along the timeline to see what effect that has on the numbers, and it doesn't seem to have any. So that leads me to believe the data represents CPU usage for the duration the game was running.
If this is the case, is it possible to access CPU usage at a particular point in time? There are a few spikes along the timeline, and I would like to see exactly what was happening at those times to see if there are any improvements I can make.
Thanks in advance for any responses.
To select a time range, use the "inspection range" buttons at the top of the window (to the left of the stopwatch).
First select the start of the range by clicking on the graph ruler, then press the leftmost button to set the left edge. Then select the end of the range on the graph ruler and press the rightmost button to set the right edge.

How can I ensure the correct frame rate when recording an animation using DirectShow?

I am attempting to record an animation (computer graphics, not video) to a WMV file using DirectShow. The setup is:
A Push Source that uses an in-memory bitmap holding the animation frame. Each time FillBuffer() is called, the bitmap's data is copied into the sample, and the sample is timestamped with a start time (frame number * frame length) and a duration (frame length); a sketch of this timestamping is shown below. The frame rate is set to 10 frames per second in the filter.
An ASF Writer filter. I have a custom profile file that sets the video to 10 frames per second. It's a video-only filter, so there's no audio.
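(For reference, a minimal sketch of the timestamping described above for a 10 fps push source; this is illustrative, not the actual filter code. REFERENCE_TIME is in 100-ns units, and IMediaSample::SetTime takes the start and stop times.)
#include <dshow.h>

static const REFERENCE_TIME UNITS_PER_SECOND = 10000000;           // 100-ns units
static const REFERENCE_TIME FRAME_LENGTH = UNITS_PER_SECOND / 10;  // 10 fps

// Stamp one sample: start = frame number * frame length, duration = one frame.
HRESULT StampSample(IMediaSample *pSample, LONGLONG frameNumber)
{
    REFERENCE_TIME rtStart = frameNumber * FRAME_LENGTH;
    REFERENCE_TIME rtStop  = rtStart + FRAME_LENGTH;
    return pSample->SetTime(&rtStart, &rtStop);
}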
The pins connect, and when the graph is run, a wmv file is created. But...
The problem is that DirectShow appears to be pushing data from the Push Source at a rate greater than 10 FPS. So the resulting WMV, while playable and containing the correct animation (and reporting the correct FPS), plays the animation back several times too slowly because too many frames were added to the video during recording. That is, a 10-second video at 10 FPS should only have 100 frames, but about 500 are being stuffed into the video, resulting in the video being 50 seconds long.
My initial attempt at a solution was simply to slow down the FillBuffer() call by adding a sleep() for 1/10th of a second, and that does indeed more or less work. But it seems hackish, and I question whether it would work well at higher frame rates.
So I'm wondering if there's a better way to do this. Actually, I'm assuming there's a better way and I'm just missing it. Or do I just need to smarten up the manner in which FillBuffer() in the Push Source is delayed and use a better timing mechanism?
Any suggestions would be greatly appreciated!
I do this with threads: the main thread adds bitmaps to a list and the recorder thread takes bitmaps from that list.
Main thread
Animate your graphics at time T and render the bitmap.
Add the bitmap to the render list. If the list is full (say, more than 8 frames), wait, so that you won't use too much memory.
Advance T by the delta time corresponding to the desired frame rate.
Render thread
When a frame is requested, pick and remove a bitmap from the render list. If the list is empty, wait.
You need a thread-safe structure such as TThreadList to hold the bitmaps. It's a bit tricky to get right, but your current approach is guaranteed to give you timing problems.
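A minimal C++ sketch of such a bounded, thread-safe frame queue (a stand-in for the TThreadList mentioned above; Bitmap is a placeholder for whatever frame type you use):
#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

template <typename Bitmap>
class FrameQueue {
public:
    explicit FrameQueue(std::size_t capacity) : capacity_(capacity) {}

    // Main thread: blocks while the queue is full, so memory use stays bounded.
    void push(Bitmap frame) {
        std::unique_lock<std::mutex> lock(mutex_);
        notFull_.wait(lock, [&] { return queue_.size() < capacity_; });
        queue_.push_back(std::move(frame));
        notEmpty_.notify_one();
    }

    // Recorder thread: blocks while the queue is empty.
    Bitmap pop() {
        std::unique_lock<std::mutex> lock(mutex_);
        notEmpty_.wait(lock, [&] { return !queue_.empty(); });
        Bitmap frame = std::move(queue_.front());
        queue_.pop_front();
        notFull_.notify_one();
        return frame;
    }

private:
    std::mutex mutex_;
    std::condition_variable notFull_, notEmpty_;
    std::deque<Bitmap> queue_;
    std::size_t capacity_;
};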
I do exactly this in my recorder application (www.videophill.com) for testing purposes.
I use the Sleep() method to delay the frames, but I take great care to ensure that the timestamps of the frames are correct. Also, when Sleep()ing from frame to frame, try to use 'absolute' time differences, because Sleep(100) will sleep for about 100 ms, not exactly 100 ms.
If that won't work for you, you can always go for IReferenceClock, but I think that's overkill here.
So:
DateTime start = DateTime.Now;
int frameCounter = 0;
while (wePush)
{
    FillDataBuffer(...);
    frameCounter++;
    // Compute the absolute time the next frame is due (100 ms per frame at 10 fps)
    // and sleep only for the remaining difference, so Sleep() inaccuracies don't accumulate.
    DateTime nextFrameTime = start.AddMilliseconds(frameCounter * 100);
    int delay = (int)(nextFrameTime - DateTime.Now).TotalMilliseconds;
    if (delay > 0)
        Sleep(delay);
}
EDIT:
Keep in mind: IWMWriter is time-insensitive as long as you feed it samples that are properly time-stamped.
