Is there a way to work around the limits of the TTimer's interval so it can be more precise? For example, instead of only integer values like 1000 ms, could it use 1000.5 ms? And if not, which component can I use instead that will give me a more precise interval?
You are trying to keep track of time to a reasonable degree of accuracy. However, the standard system timer cannot be used for that purpose. All that the system timer guarantees is that it will fire no sooner than the interval which you specify. And you can get the message late if you are tardy in pumping your message queue. Quite simply, the system timer is not designed to be used as a stopwatch and to attempt to do so is inappropriate.
Instead you need to use the high resolution performance counter which you can get hold of by calling QueryPerformanceCounter.
If you are using Delphi 2010 or later then you can use Diagnostics.TStopwatch which provides a very convenient wrapper to the high performance timer.
You can still use a system timer to give your app a regular tick or pulse, but make sure that you keep track of time with the high resolution timer.
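For illustration, a minimal sketch of the TStopwatch approach (assuming Delphi XE2 or later; in Delphi 2010 the unit is plain Diagnostics):

program StopwatchDemo;
{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Diagnostics;

var
  SW: TStopwatch;
begin
  SW := TStopwatch.StartNew;
  Sleep(1000);  // stand-in for the work being timed
  // ElapsedTicks / Frequency gives sub-millisecond resolution, backed by
  // QueryPerformanceCounter where the hardware supports it
  WriteLn(Format('%.4f ms elapsed',
    [SW.ElapsedTicks * 1000.0 / TStopwatch.Frequency]));
end.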
Having said all of that, I'm not sure that you will ever be able to achieve what you are hoping to do. If you want to keep reasonably accurate time then what I say above is true. Trying to maintain lock-step synchronisation with code running on another machine somewhere remote over the net sounds pretty much intractable to me.
1) The TTimer class is not accurate enough for your task, period! (then again, neither would the web-site timer be, either)
2) If you increase the timer resolution using the TimeBeginPeriod() API call, this will get you closer, but still nowhere near close enough
3) If you adjust the TTimer interval on each tick based on a constant start time (synchronised with the PC clock), you can average out to the requested number of milliseconds per timer event, as measured against the PC clock (see the sketch after this list)
4) I don't know if the TTimer class handles 3) correctly, but I have a TTimer equivalent that does
5) To account for PC clock drift you will need to synchronise the PC clock periodically with an NTP server
6) I have a system that keeps the PC clock on a good machine to within +/- 5 milliseconds of a reference time permanently (I adjust every minute) and a timer with a resolution of +/- 2 milliseconds as long as the machine is not overloaded (Windows is not a real-time OS)
7) It took me a long time to get to this point - is this what you really need, or are you asking the wrong question?
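To illustrate point 3, here is a minimal sketch of a self-adjusting timer event (hypothetical FStart/FTickCount/FPeriodMs fields and DoWork routine; MilliSecondsBetween and Max come from System.DateUtils and System.Math):

procedure TForm1.Timer1Timer(Sender: TObject);
var
  ElapsedMs: Int64;
begin
  Inc(FTickCount);                                // ticks fired so far (Int64)
  ElapsedMs := MilliSecondsBetween(Now, FStart);  // FStart captured at startup
  // re-aim the next tick at the ideal schedule rather than adding a fixed
  // interval, so a late tick is corrected instead of accumulating as drift
  Timer1.Interval := Max(1, (FTickCount + 1) * FPeriodMs - ElapsedMs);
  DoWork;
end;

Averaged over many ticks this holds the requested rate, even though any individual tick may still land a few milliseconds late.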
I'm coding an emulator in XNA. The emulator's accuracy depends greatly on the real fps; the fps must be exactly 60 per second. While Microsoft claims that with the fixed timestep setup the Update function should be called exactly 60 times per second, in my emulator (or in a blank XNA 4.0 project) I've collected these values so far, using different processors: an AMD Athlon X2 6000+ (one of the unsynced ones that needs an extra optimisation update from AMD to run correctly) gives 60-61 tps; an Intel i5 (2.8 GHz, I think) gives 55 tps; an old Lenovo laptop with a slow Centrino gives 59-60 tps (the desired rate), and the same goes for an old Intel P4 with Hyper-Threading (exactly 60 tps); on a Dell machine (I don't remember the CPU) it was 52-53 tps.
So what should I do? Timer accuracy on separate threads is horrible too. Besides, the Update function will also be called in parallel (using almost 100% of the CPU).
Make a different profile for each CPU? (Assume that the tps [times per second] counting is correct.)
The Update method calls are not 100% fixed. On a PC the framework will try to call Update 60 times per second, but that depends on the machine and on the logic of your game.
You can check whether the system is running slower than it should by checking the GameTime.IsRunningSlowly property in your game.
Also, you can modify the ticks it makes every second; for example, a Windows Phone project starts configured at 30 ticks per second (one tick every ~33 ms), against the 60 of a normal Windows Game project.
It would be wise to check Shawn Hargreaves' blog post about GameTime; it won't fix your problem, but it will give you a better understanding of how the Update method works and maybe give you an idea of how to fix it.
I quote part of his post here:
By default XNA runs in fixed timestep mode, with a TargetElapsedTime of 60 frames per second. This provides a simple guarantee:
• We will call your Update method exactly 60 times per second
• We will call your Draw method whenever we feel like it
Digging into exactly what that means, you will realize there are several possible scenarios depending on how long your Update and Draw methods take to execute.
The simplest situation is that the total time you spend in Update + Draw is exactly 1/60 of a second. In this case we will call Update, then call Draw, then look at the clock and notice it is time for another Update, then Draw, and so on. Simple!
What if your Update + Draw takes less than 1/60 of a second? Also simple. Here we call Update, then call Draw, then look at the clock, notice we have some time left over, so wait around twiddling our thumbs until it is time to call Update again.
What if Update + Draw takes longer than 1/60 of a second? This is where things get complicated. There are many reasons why this could happen:
1. The computer might be slightly too slow to run the game at the desired speed.
2. Or the computer might be way too slow to run the game at the desired speed!
3. The computer might be basically fast enough, but this particular frame might have taken an unusually long time for some reason. Perhaps there were too many explosions on screen, or the game had to load a new texture, or there was a garbage collection.
4. You could have paused the program in the debugger.
We do the same thing in response to all four causes of slowness:
• Set GameTime.IsRunningSlowly to true.
• Call Update extra times (without calling Draw) until we catch up.
• If things are getting ridiculous and we are too far behind, we just give up.
If this doesn't help you, then posting some code showing what you think may be making the process slower would help.
Basically, for my team's app, we need to be able to synchronize music across multiple iOS devices. The first way we did this was by having the music on all the devices already and just sending a play command to all the devices. Some would get it later than others, so that method did not work. There was an idea mentioned to calculate the latency between all the devices and send the commands at the appropriate times based on the latency.
The second way proposed would be to stream the music. If we were to implement streaming, how should we go about doing it? Should Audio Units be used, OpenAL, etc.? Also, if streaming were used, how would we go about making sure that each device's stream was in sync?
Basically, the audio has to be in sync so that the person hearing it cannot differentiate between the devices. A few milliseconds off should not be a problem (unless the listener has super-human hearing).
You'd be amazed at how good the human ear is at spotting audio anomalies...
Sync the time of day
Effectively you're trying to meet a real-time requirement with a whole load of very variable things in the way (WiFi, etc.). I strongly suspect the only way you're going to get close to doing this is to issue a 'play' instruction that includes a particular time to start playing. Of course, that relies on all the clocks being accurately set.
NTP
I don't know how iPhones get their time of day. If they use (or could use) NTP then you'll be getting close. NTP is designed to convey accurate time of day information over a network despite variable network delays. I've had a quick look and it seems that most NTP clients for iOS are the simple ones, not the full NTP that measures and tunes out network delays, clock drifts, etc.
GPS
Alternatively GPS is also a very good source of time information. Again I don't know if iPhones can or do use GPS for setting their clock but if it could be done then that would likely be pretty good. On Solaris (and I think Linux too) the 1 pulse per second that most GPS chips generate from the GPS signal can be used to condition the internal OS clock, making it very accurate indeed (sub microsecond accuracy).
I fear that iPhones don't do either of these things natively; both involve using a fair bit of electricity, so I wouldn't be surprised if they did something else less sophisticated.
Cell Time Service
Some Cell networks provide a time service too, but I don't think it's designed for accurate time setting. Also it tends not to be available everywhere. You often find it at major airports so that recent arrivals get their phones set to something close to local time.
Play at time X
So if one of those could be used to ensure that all the iPhones are set to exactly the same time of day then all you have to do is write your software to start playing at a specific time. That will probably involve polling the clock in a very tight loop waiting for it to tick over; most OSes don't provide a means of sleeping until a specific time. They do at least allow for sleeping for a period of time, which can be used to sleep until close to the appointed time. You'd then start polling the clock until the right time is reached.
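For illustration, a minimal sketch of that sleep-then-poll pattern (shown in Delphi because it is compact; the structure is the same on any platform, substituting whatever synchronised clock you settle on; Now, Sleep and MSecsPerDay come from System.SysUtils):

procedure WaitUntil(const Target: TDateTime);
begin
  // sleep in coarse steps while still well short of the target time...
  while Now < Target - (10 / MSecsPerDay) do
    Sleep(5);
  // ...then busy-poll the clock for the last few milliseconds
  while Now < Target do
    ;  // spin
end;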
Delay Measurement and Standard Deviation
Your first method is doomed I think. You might be able to measure average delays and so forth but that doesn't mean that every message has exactly the same latency. The standard deviation in the latency will tell you what you can expect to achieve, and I don't think that's going to be particularly small. If so then the message has got to include a timestamp.
NTP can work because it's only interested in the average delay measured over a period of time (hours sometimes), whereas you're interested in instantaneous delay.
Streaming with RTP
Your second method may work if you can time sync the devices as discussed above. The RTP protocol was designed for use in these circumstances; it doesn't help with achieving sync, but it does help a lot with the streaming. It tells you whereabouts in the stream any one piece of received data fits, allowing you to play it at the right time.
Clock Drift
Another problem to deal with is how long you're playing for. If it's a long time then you may discover that the 44kHz (or whatever) audio clock rate on each device isn't quite the same. So, whilst you might find a way of starting to play all at the same time, the separate devices will then start diverging ever so slightly. Over a long period of time they may be noticeably out.
Bluetooth
It might be possible to do something with Bluetooth. It has many weird and wonderful profiles, and it might be that one of those could serve to send an accurate 'start now' message.
Audio Trigger
You might also use sound as a means of conveying a start signal. One device can play a particular sound whilst your software in the others is listening with the mic. When a particular feature is detected in the sound, that's the time for everyone to start playing. Sort of a computerised "1, 2, a 1 2 3 4".
Camera Flash
Should be easy to spot in software...
I think your first way would work if you expand it a little bit. Assuming all the clocks on the devices are in sync you could include a timestamp in your play command. Then each device would calculate the time between the timestamp and when it received the command. You would then play the music and offset it by the time difference.
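A minimal sketch of that calculation (hypothetical names; it assumes the device clocks are already synchronised and that times are exchanged in milliseconds):

function PlaybackStartOffsetMs(const SentAtMs, ReceivedAtMs: Int64): Int64;
begin
  // how late the command reached this device; start playback this far
  // into the track so every device ends up at the same point in the music
  Result := ReceivedAtMs - SentAtMs;
end;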
I have a game with a visible running timer, but can only achieve 2 digits of accuracy (#.##) beyond the decimal. Is this a limit of the framework, or is there a workaround? I am trying to achieve 4-6 digits of accuracy (#.######) on this timer.
A timer on iOS runs at a maximum frequency of 60 Hz, so that's why you only get 2-digit accuracy.
You could work around this by taking the time at the start of your event, taking the time again at the end of the event, and calculating the difference. This won't take into account things like frame-rate drops, pausing, and moving into the background, though.
This is a limitation of the underlying system. iOS is not a realtime system, and timers get scheduled on the so-called run loop, which dispatches timers once they are due. However, for a timer to be dispatched accurately, the run loop has to run often enough and check the timer on every iteration. The run loop also runs other things: the whole event mechanism, messages and networking all run on it, so your timers aren't checked every few nanoseconds. (The run loop also isn't run continuously, but gives some time back to the system as well.)
I have created an iOS 5/iOS 6 app with a display that responds to changes in the musical pitch performed by the user. It uses the record function in the sample SpeakHere code but does not actually save a file because it is designed to respond in real time.
I would now like to extend this app to respond simultaneously to the pitch itself and the duration that the same pitch is sustained (for example, changing the color when the same pitch is held steadily for a minimum period of time). I have been reading about NSTimer and NSDate functions, which seem straightforward, as well as AudioTimeStamp functions, which are apparently C based and which I find very confusing. Based on other posts, it seems like NSTimer and NSDate checks might cause the display's real-time response to an actual musical performance to lag. How about dispatchAfter? Could I expect the block to execute at the scheduled time?
My question is, what approach is most likely to yield the desired result of allowing me to measure duration of a particular pitch in the AudioQueue and update my display continuously in real time? Do I need to be saving to a file for this to work?
I am self-taught and have only been programming for a few months, so no matter what I will have to do a lot of learning of APIs/C language features that are new to me. I'm hoping someone can point me in a fruitful direction. Thanks!
You're definitely getting into pretty advanced stuff here. Here are a few thoughts:
Your audio processing seems to be the most intensive operation. Because this processing needs to be continuous, you're probably going to have to do this processing in another thread. By processing, I mean examining the audio to determine pitch.
Once you've identified the pitch, you should store the time for which it began.
Then, in the main thread, set up an NSTimer that repeats continuously, and in the NSTimer's fire method, subtract the pitch's start date from the current date to get the elapsed time as an NSTimeInterval.
Send the NSTimeInterval to your display logic so that you can update the color on screen.
Some things to check out:
Beginner's tutorial on multi-threading and Grand Central Dispatch on iOS
NSTimer
Using NSTimers
Hope that helps you out!
I am writing a Delphi application that will run on my PC 24/7 in the background and will check whether it has to perform some actions or not, wait 30 minutes, check again, and so on.
How can I make sure the application will not overload the CPU or memory as a result of running all the time?
Create a timer to run every 30 minutes, and call your checks/actions from there. Then your application can just sit idle when there is nothing to do.
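A minimal sketch of that setup (Timer1 dropped on the form; DoChecks is a hypothetical routine holding the actual work):

procedure TForm1.FormCreate(Sender: TObject);
begin
  Timer1.Interval := 30 * 60 * 1000;  // Interval is in milliseconds
  Timer1.Enabled := True;
end;

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  DoChecks;  // the app sits idle between ticks
end;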
Alternatively you could create a Scheduled Task that just runs periodically to do this.
The answers about timers are good solutions, and I add this:
Make sure that the timer event, or the subsequent procedure called, checks for busy; i.e. when you wake up, make sure that the last batch is done before starting a new batch. This is easy to miss when things are flowing well, and then you get a situation where things are backed up and the whole system logjams at 8 in the morning because something bad happened at midnight and there are now 16 calls stacked up (or threads, processes, etc.).
So write your timer event like this:
procedure TForm1.MyTimerTimer(Sender: TObject);
begin
  MyTimer.Enabled := False;   // block re-entry while the work is running
  try
    DoSomethingForALongTime;  // usually runs in 45 seconds, but sometimes takes 45 minutes!
  finally
    MyTimer.Enabled := True;  // and (SomeAbortRequest = False) and (SomeHorribleErrorCount = 0);
  end;
end;
The answers about timers are pretty much exactly what you're looking for. As for your question about not overloading the CPU or memory, take a look at your program in the Task Manager. When it's not doing anything, it should sit at a "steady state" of memory, not allocating any more, and using 1% or less of CPU time.
It's safe to let most programs idle for long periods. The VCL knows how to handle the idling without hogging CPU resources for you, and you just need to use a timer to make sure it wakes up and activates its event at the right time.
Most programming languages have a "sleep" function that you can call to make the program stop doing stuff.
You are in control of the memory usage; you can deallocate your memory before going to sleep and reallocate it when coming awake, but...
It might be better to just set up some recurring job. I don't know how to do this, but I suspect there's a way with Windows Scripting Host to just launch your application on whatever schedule you want.
If you want to enforce an absolute state of inactivity, I guess you could use the "sleep" function. Though I'm not sure how it would behave on a reboot. (I guess Windows would report the application as being unresponsive.)
If the application has no main form and just sits in the tray (or is totally invisible), it won't do much. The main message loop will handle all the messages it receives from the OS, but it shouldn't receive many, and the few it does receive it should process (shutdown messages, system parameter change notifications, etc.).
So, I think you can just set up a timer and forget about writing code to force your program to stay idle.
If you really want to keep the process's activity to a minimum, you could set the thread priority when you enter/leave the timer's event: set the priority to "normal" when you enter the event, and set it to "low" when leaving it.
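A minimal sketch of that enter/leave pattern (DoChecks is hypothetical; SetThreadPriority and the priority constants come from Winapi.Windows):

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  // raise the priority while real work is being done...
  SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_NORMAL);
  try
    DoChecks;
  finally
    // ...and drop it again while idling until the next tick
    SetThreadPriority(GetCurrentThread, THREAD_PRIORITY_LOWEST);
  end;
end;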
You didn't say, but if your application uses more than one thread, this could add to the amount of CPU time the OS spends on your process (read up on time slices and thread switches, for example).
The OS may also swap out some of your memory pages, so using less memory and/or reducing memory accesses during the "wait time" helps too.
So, if you use only one thread and have no additional message loops either, just calling Sleep() could be a good way, as that signals the OS that you don't need a time slice at all for a long while to come.
Avoid YieldThread()/SwitchToThread() and your own time-keeping (using Now(), for example), as that would mean lots of thread switching is taking place, to do... nothing!
Another method could be to use MsgWaitForMultipleObjects with a large timeout; unlike a plain WaitForMultipleObjects, it also returns when a message arrives, so your application can still respond to messages.
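A minimal sketch of that wait loop (FWakeEvent is a hypothetical TEvent that other code can signal to wake the app early; DoChecks is the hypothetical worker):

procedure TForm1.WaitForNextCycle;
var
  H: THandle;
begin
  H := FWakeEvent.Handle;
  case MsgWaitForMultipleObjects(1, H, False, 30 * 60 * 1000, QS_ALLINPUT) of
    WAIT_OBJECT_0:     DoChecks;                    // woken early by the event
    WAIT_OBJECT_0 + 1: Application.ProcessMessages; // a message arrived
    WAIT_TIMEOUT:      DoChecks;                    // the 30 minutes elapsed
  end;
end;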
I would suggest creating a service application (with its startup type set to automatic) and using CreateTimerQueueTimer as your timer.
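A minimal sketch of the timer-queue approach, shown as a console program for brevity (the import is declared inline in case your Delphi version doesn't ship it; DoChecks stands in for the real work):

program TimerQueueSketch;
{$APPTYPE CONSOLE}

uses
  Winapi.Windows, System.SysUtils;

type
  TWaitOrTimerCallback = procedure(Context: Pointer; TimerFired: Boolean); stdcall;

function CreateTimerQueueTimer(out phNewTimer: THandle; TimerQueue: THandle;
  Callback: TWaitOrTimerCallback; Parameter: Pointer;
  DueTime, Period: DWORD; Flags: ULONG): BOOL; stdcall;
  external kernel32 name 'CreateTimerQueueTimer';

procedure DoChecks;
begin
  WriteLn('checking...');  // hypothetical: the actual periodic work
end;

procedure TimerCallback(Context: Pointer; TimerFired: Boolean); stdcall;
begin
  DoChecks;  // runs on a thread-pool thread, so keep it thread-safe
end;

var
  TimerHandle: THandle;
begin
  // fire once immediately (DueTime = 0), then every 30 minutes
  if not CreateTimerQueueTimer(TimerHandle, 0, TimerCallback, nil,
      0, 30 * 60 * 1000, 0) then
    RaiseLastOSError;
  ReadLn;  // keep the process alive; a real service runs its own loop
end.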
Memory usage can be mitigated by reserving your memory requirements up front and by pooling classes rather than repeatedly allocating and freeing them.