XNA Update() function update rate varies greatly on different machines

I'm coding an emulator in XNA. The emulator's accuracy depends greatly on the real frame rate, which must be exactly 60 fps. Microsoft claims that with a fixed time step their Update function is called exactly 60 times per second, yet in my emulator (and even in a blank XNA 4.0 project) I've measured these rates on different processors:

AMD Athlon X2 6000 (the unsynced cores that need AMD's optimized update to keep time correctly): 60-61 tps
Intel i5 (2.8 GHz, I think): 55 tps
An old Lenovo laptop with a slow Centrino: 59-60 tps (the desired rate)
An old Intel P4 with Hyper-Threading: exactly 60 tps
A Dell machine (I don't remember the CPU): 52-53 tps
So what should I do? Timer accuracy on separate threads is horrible too, and besides, the Update function would then also be running in parallel (using almost 100% of the CPU).
Should I make a different profile for each CPU? (Assume that the tps [times per second] counting itself is correct.)

The Update method calls are not 100% fixed. On a PC, XNA will try to call Update 60 times per second, but whether it succeeds depends on the machine and on the logic of your game.
You can check whether the system is running slower than it should via the GameTime.IsRunningSlowly property on your game.
Also, you can modify how many ticks it makes per second through the game's TargetElapsedTime; for example, a phone project starts configured at 30 ticks per second (one tick every ~33 ms), against the 60 of a normal Windows Game project.
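A minimal sketch of that configuration in a Game subclass (the class name is just illustrative; IsFixedTimeStep and TargetElapsedTime are the real XNA Game properties):

using System;

public class EmulatorGame : Microsoft.Xna.Framework.Game
{
    public EmulatorGame()
    {
        IsFixedTimeStep = true; // ask XNA to call Update at a fixed rate
        // 60 updates per second; a phone template would use TicksPerSecond / 30.
        TargetElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 60);
    }
}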
It would be wise to check Shawn Hargreaves' blog post about GameTime. It won't fix your problem by itself, but it will give you a better understanding of how the Update method works, and maybe an idea of how to fix the problem.
I quote part of his post here:
By default XNA runs in fixed timestep mode, with a TargetElapsedTime of 60 frames per second. This provides a simple guarantee:

• We will call your Update method exactly 60 times per second
• We will call your Draw method whenever we feel like it

Digging into exactly what that means, you will realize there are several possible scenarios depending on how long your Update and Draw methods take to execute.

The simplest situation is that the total time you spend in Update + Draw is exactly 1/60 of a second. In this case we will call Update, then call Draw, then look at the clock and notice it is time for another Update, then Draw, and so on. Simple!

What if your Update + Draw takes less than 1/60 of a second? Also simple. Here we call Update, then call Draw, then look at the clock, notice we have some time left over, so wait around twiddling our thumbs until it is time to call Update again.

What if Update + Draw takes longer than 1/60 of a second? This is where things get complicated. There are many reasons why this could happen:

1. The computer might be slightly too slow to run the game at the desired speed.
2. Or the computer might be way too slow to run the game at the desired speed!
3. The computer might be basically fast enough, but this particular frame might have taken an unusually long time for some reason. Perhaps there were too many explosions on screen, or the game had to load a new texture, or there was a garbage collection.
4. You could have paused the program in the debugger.

We do the same thing in response to all four causes of slowness:

• Set GameTime.IsRunningSlowly to true.
• Call Update extra times (without calling Draw) until we catch up.
• If things are getting ridiculous and we are too far behind, we just give up.
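To make the catch-up behaviour concrete, here is a rough sketch of the loop described above in plain C#. This is only an illustration of the algorithm, not XNA's actual source, and the give-up limit is a made-up number:

using System;

class FixedStepLoop
{
    static readonly TimeSpan Target = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 60);
    const int MaxCatchUpSteps = 5; // hypothetical "too far behind, give up" limit
    TimeSpan accumulator = TimeSpan.Zero;

    public void Tick(TimeSpan elapsedRealTime)
    {
        accumulator += elapsedRealTime;
        bool runningSlowly = accumulator >= Target + Target; // falling behind
        int steps = 0;

        while (accumulator >= Target && steps < MaxCatchUpSteps)
        {
            Update(runningSlowly); // extra Updates, with no Draw, until we catch up
            accumulator -= Target;
            steps++;
        }

        if (steps == MaxCatchUpSteps)
            accumulator = TimeSpan.Zero; // ridiculous backlog: just give up

        Draw();
    }

    void Update(bool isRunningSlowly) { /* game logic */ }
    void Draw() { /* rendering */ }
}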
If this doesn't help you, then posting some of the code that you think may be slowing the process down would help.

Related

iOS Instruments: timer's time does not match the sum of running times in the call tree

I am analysing an app's slow performance using iOS Instruments. It takes around 25 seconds to load a login page. In Instruments, the timer shows 25 seconds to load the page, but when I sum the running times in the call tree, it comes to only around 4 seconds. I want to know where the slowness is occurring. Is there any way to force Instruments to show all of the time taken in the call tree?
Note: I tried the Xamarin Profiler as well; it shows the maximum time taken by any call as 1E-06 ms. Is there any way to know the time taken by the whole method?
Have you considered using the Stopwatch class? It is supported in Project Core Libraries and can be used in a high-resolution mode for higher accuracy. It would allow you to time the execution of a particular method (which sounds like what you are attempting to accomplish). You can find Microsoft documentation and examples here.
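For example, a minimal C# sketch of timing one method with Stopwatch (LoadLoginPage is a hypothetical stand-in for whatever you want to measure):

using System;
using System.Diagnostics;

class Timing
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        LoadLoginPage(); // stand-in for the method under measurement
        sw.Stop();
        Console.WriteLine("Elapsed: {0} ms (high resolution: {1})",
                          sw.Elapsed.TotalMilliseconds, Stopwatch.IsHighResolution);
    }

    static void LoadLoginPage() { /* ... the code being timed ... */ }
}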

iOS game gets out of sync while analysing the process in Instruments - how can I avoid that?

I am building an iOS game and I notice that the game performs fine while running normally in the Xcode debugger. However, when I run it from within Instruments (Product -> Profile, to trace leaks), the game freezes while Instruments displays 'Analyzing Process' in the left sidebar. After that the game's state is messed up, since some parts of the game froze during analysis while other parts kept going.
Is this something I can/need to fix or is it sufficient to make sure the game runs in release?
If a fix is needed, what do I need to do to make it work?
Update 1:
So we found the issue - the same problem repros even if we play the game, press the home button, then tap the game icon and continue playing.
The issue is that most of our work is done in the update method, and it relies on the value of the (ccTime)dt parameter. The value of dt is usually < 0.1 seconds, and occasionally somewhere up to 0.5 seconds. When we pause (either by clicking the home button, or when Instruments pauses the game to take a snapshot) and resume playing, the value of dt is several seconds! And that throws all our calculations out of range.
We tried a temporary (but ugly) workaround that fixes the issue: at the beginning of the update method, we add this:
if(dt > 1)
return;
And it now works as expected - it doesn't go out of sync. However, this is not a permanent solution, since sometimes the value of dt is legitimately close to 1 second, and in resource-crunched situations this may lead to stutter (or worse).
We also considered another (equally ugly) solution of storing the previous value of dt, and then checking in the update method:
if(dt > 10 * prevDt)
{
return;
}
We tried calling unscheduleUpdate in AppDelegate.m's applicationDidEnterBackground, and called scheduleUpdate in the applicationWillEnterForeground method, but that approach did not work.
What is the best way to deal with updates with erratic time values due to external pauses?
Thanks
Anand
I don't know much about how cocos2D works, but if you've run out of options, I would simply clamp the delta time to an upper bound. In whatever update methods use delta time, I would do the following:
double delta = (dt > 0.5) ? 0.5 : dt;
Use delta instead of dt from that point onward. The result is that any frame with a delta of over half a second will be limited to a delta of half a second. You don't want to skip frames, because then you could potentially skip many frames in a row which would be bad for the user; they'd probably think the app crashed or something. However, you don't want to run a frame with a large delta because then delta-dependent objects and forces will get multiplied many times for that frame, preventing you from catching collisions and stuff. So, we prevent the game from getting completely screwed over while not skipping frames by simply clamping the value downwards.
The negative is that the app will appear to move more slowly when your frame rate drops to, say, 2 frames per second. You lose one benefit of a time-based update method: the ability to make the game appear to run at a well-defined speed even when the engine is overburdened.
To be honest, if you ever run at 2 frames per second for an extended period of time, you've got bigger problems to worry about. I don't think the user will notice if objects move slightly more slowly per second if the game is only rendering once every who-knows-when anyways.
Unfortunately this is a problem for which there is no sure answer; at least not without access to your system to run a whole variety of checks.
The failure under the profiler may be because your game runs tight loops whose timing gets upset in unpredictable ways, and the game then breaks down due to timing or resource issues (issues that don't crop up under the debugger in the same way). If that's the case, there is probably not much you can do about it. Or it may be because there is a problem in your code. The trouble is that it can be very difficult to figure out which of these is the case. It's probably best to assume the problem is in your code, though, and do some further investigation.
The first thing to do, if you haven't done it already, is run the static analysis tool (Analyse from the Product menu in Xcode). Consider each of the raised errors carefully, and work to remove all of them. Sometimes they might seem obvious and you think you can ignore them, but some prodding reveals they are a symptom of a deeper problem.
If you haven't tried already, run the instrument that checks for zombies. There's a high chance this will fail too if the allocation instrument is failing, but if there are stale references to deallocated objects hanging around, they could be causing the problem you are experiencing. Another instrument you can try is the performance analyser, to check where your app is spending most of its time. There may be some really significant over-allocation of resources you are not aware of. If you can't run the memory profiler it will be difficult to see whether this is the case, but with the performance analyser it might be possible to see if your app is getting hung up for too long somewhere it shouldn't be.
Lastly, if all else fails (this may be a sledgehammer to crack a nut, and may not provide the solution in any case): if you aren't using ARC, consider how long it would take to convert your app to it (definitely create a branch first before doing it, though). Apple's algorithms for object allocation/deallocation are very efficient, and there is a very good chance that if you have subtle memory management errors, they will be eliminated by Automatic Reference Counting.

Interval of a timer - from integer to float/double - Delphi

Is there a way to work around the limits of the TTimer's interval so it can be more precise? For example, instead of only integer values like 1000 ms, could I use 1000.5 ms? And if not, which component can I use instead that will give me a more precise interval?
You are trying to keep track of time to a reasonable degree of accuracy. However, the standard system timer cannot be used for that purpose. All that the system timer guarantees is that it will fire no sooner than the interval which you specify. And you can get the message late if you are tardy in pumping your message queue. Quite simply, the system timer is not designed to be used as a stopwatch and to attempt to do so is inappropriate.
Instead you need to use the high resolution performance counter which you can get hold of by calling QueryPerformanceCounter.
If you are using Delphi 2010 or later then you can use Diagnostics.TStopwatch which provides a very convenient wrapper to the high performance timer.
You can still use a system timer to give your app a regular tick or pulse, but make sure that you keep track of time with the high resolution timer.
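A sketch of that pattern, shown in C# for illustration since the idea is language-agnostic (Delphi's TStopwatch wraps the same QueryPerformanceCounter API):

using System;
using System.Diagnostics;

class PulseClock
{
    static readonly Stopwatch Clock = Stopwatch.StartNew(); // high-resolution source of truth

    // Called by an ordinary, coarse system timer (the "pulse").
    static void OnTimerTick()
    {
        // Drive your logic from the measured elapsed time, never by counting
        // timer ticks, so the coarse timer's jitter does not accumulate.
        double elapsedMs = Clock.Elapsed.TotalMilliseconds;
        Console.WriteLine("t = {0:F3} ms", elapsedMs);
    }
}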
Having said all of that, I'm not sure that you will ever be able to achieve what you are hoping to do. If you want to keep reasonably accurate time then what I say above is true. Trying to maintain lock-step synchronisation with code running on another machine somewhere remote over the net sounds pretty much intractable to me.
1) The TTimer class is not accurate enough for your task, period! (then again, neither would the web-site timer be, either)
2) If you increase the timer resolution using the TimeBeginPeriod() API call, this will get you closer, but still nowhere near close enough
3) If you adjust the TTimer interval each time based on a constant start time (and synchronised with the PC clock), you can average a set number of milliseconds per timer event compared to the PC clock (see the sketch after this list)
4) I don't know if the TTimer class handles 3) correctly, but I have a TTimer equivalent that does
5) To account for PC clock drift you will need to synchronise the PC clock periodically with an NTP server
6) I have a system that keeps the PC clock on a good machine to within +/- 5 milliseconds of a reference time permanently (I adjust every minute), and a timer with a resolution of +/- 2 milliseconds as long as the machine is not overloaded (Windows is not a real-time OS)
7) It took me a long time to get to this point - is this what you really need, or are you asking the wrong question?
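Regarding point 3, here is a sketch of the interval-adjustment idea (in C# for illustration; the names are made up). Scheduling each tick against a fixed start time stops rounding errors from accumulating, and it is also how a fractional interval like the asker's 1000.5 ms can be honoured on average:

using System;
using System.Diagnostics;

class DriftFreeScheduler
{
    static readonly Stopwatch Clock = Stopwatch.StartNew();
    const double IntervalMs = 1000.5; // fractional intervals work out on average
    static long tickCount;

    // Returns the delay to arm a one-shot timer with for the next tick.
    static double NextDelayMs()
    {
        tickCount++;
        double due = tickCount * IntervalMs; // ideal schedule measured from t = 0
        return Math.Max(0, due - Clock.Elapsed.TotalMilliseconds); // absorbs drift
    }
}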

I'm making a multiplayer game and I need to verify that players aren't speed hacking

For security reasons I have a feeling that this testing should be done server side. Nonetheless, that would be rather taxing on the server, right? Given the gear and buffs a player is wearing, they will have a higher movement speed, so each time they move I would need to calculate that new constant and see whether their movement is legitimate (I'm using TCP, so I don't need to worry about lost or unordered packets). I realize I could instead just save the last movement speed and only recalculate it when they change something affecting their speed, but even then that's another check.
Another idea I had is for the server to randomly pick data the client sends, verify it, and give each client a trust rating. A low enough trust rating would mean every message from that client gets inspected and all of their actions are logged more verbosely. I would then know they're hacking by inspecting the logs, and could ban/suspend them as well as undo any benefits they may have spread around through hacking.
Any advice is appreciated, thank you.
Edit: I just realized there's also the case where a hacker could send tiny movements (each within the capability of their regular speed) in very high succession. Each individual movement alone would be legitimate, but the cumulative effect would be speed hacking. What are some ways around this?
The standard way to deal with this problem is to have the server calculate all movement. The only thing that the clients should send to the server are commands, e.g. "move left" and the server should then calculate how fast the player moves etc., then finally send the updated position back to the client.
If you leave any calculation at all on the client, the chances are that sooner or later someone will find a way to cheat.
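A minimal sketch of that split (the names - Command, PlayerState and so on - are illustrative, not from the question): the client sends only an input command, and the server derives speed from stats it already knows.

using System;

enum Command { MoveLeft, MoveRight, Stop }

class PlayerState
{
    public double X;
    public double BaseSpeed = 3.0;      // gear/buff stats live server-side,
    public double BuffMultiplier = 1.0; // never taken from the client
}

class MovementServer
{
    // Runs on the server, each tick, for every queued client command.
    public void Apply(PlayerState p, Command c, double dtSeconds)
    {
        double speed = p.BaseSpeed * p.BuffMultiplier;
        switch (c)
        {
            case Command.MoveLeft:  p.X -= speed * dtSeconds; break;
            case Command.MoveRight: p.X += speed * dtSeconds; break;
            case Command.Stop:      break;
        }
        // Only the resulting position is sent back to the clients.
    }
}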
[...] testing should be done server side. Nonetheless, that would be rather taxing on the server, right?
Nope. This is the way to do it. It's the only way to do it. All talk of checking trust or whatever is inherently flawed, one way or another.
If you're letting the player send positions (see the sketch after this list):
Check where someone claims they are.
Compare that to where they were a short while ago. Allow a tiny bit of deviation to account for network lag.
If they're moving too quickly, reposition them somewhere more reasonable. Small errors may be due to long periods of lag, so clients should use interpolation to smooth out these corrections.
If they're moving far too quickly, disconnect them. And check for bugs in your code.
Don't forget to handle legitimate traversals over long distance, eg. teleports.
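A sketch of those checks (names and thresholds are illustrative, not a definitive implementation). Accumulating distance over a short sliding window, rather than checking each message in isolation, also covers the "many tiny individually-legal moves" case from the question's edit:

using System;

class MoveValidator
{
    const double LagSlack = 1.1;   // 10% allowance for network jitter (tunable)
    const double KickFactor = 3.0; // "far too fast" threshold (tunable)

    public double X, Y;            // server-authoritative position
    double windowDistance, windowSeconds;

    // Returns false if the client should be disconnected.
    // Remember to whitelist legitimate long jumps (teleports) before this.
    public bool Validate(double reportedX, double reportedY,
                         double dtSeconds, double maxSpeed)
    {
        double moved = Math.Sqrt(Math.Pow(reportedX - X, 2) +
                                 Math.Pow(reportedY - Y, 2));
        double allowed = maxSpeed * dtSeconds * LagSlack;

        if (moved > allowed * KickFactor)
            return false; // far too fast: disconnect (and check your own code)

        // Sliding budget: many individually-legal tiny moves still cannot
        // exceed the speed budget over a one-second window.
        windowDistance += moved;
        windowSeconds += dtSeconds;
        if (windowSeconds >= 1.0)
        {
            if (windowDistance > maxSpeed * windowSeconds * LagSlack)
                return false; // cumulative speed hack
            windowDistance = windowSeconds = 0;
        }

        if (moved > allowed)
            return true; // suspicious, maybe lag: keep old X/Y (client snaps back)

        X = reportedX; Y = reportedY; // plausible: accept
        return true;
    }
}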
The way around this is that all action is done on the server. Never trust any information that comes from the client. If anybody actually plays your game, somebody will reverse-engineer the communication to the server and figure out how to take advantage of it.
You can't assign a random trust rating, because cautious cheaters will cheat only when they really need to. That gives them a considerable advantage with a low chance of being spotted cheating.
And, yes, this means you can't get by with a low-grade server, but there's really no other method of preventing client-side cheating.
If you are developing in a language that has access to Windows API function calls, I have found from my own studies in speed hacking that you can easily identify a speed hacker by calling two functions and comparing results.
TimeGetTime
and...
GetTickCount
Both functions return the number of milliseconds since the system started. However, TimeGetTime is much more accurate than GetTickCount: TimeGetTime is accurate to about ~1 ms, whereas GetTickCount is only accurate to around ~50 ms.
Even though there is a small lag between these two functions, if you turn on a speed hacking application (pick your poison), you should see a very large difference between the two result sets, sometimes even up to a couple of seconds. The difference is very noticeable.
Write a simple application that returns the GetTickCount and TimeGetTime results, without the speed hacking application running, and leave it running. Compare the results and display the difference -- you should see a very small difference between the two. Then, with your application still running, turn on the speed hacking application and you will see the very large difference in the two result sets.
The trick is figuring out what threshold constitutes suspicious activity.
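A sketch of that comparison from C# via P/Invoke (timeGetTime lives in winmm.dll, GetTickCount in kernel32.dll; the one-second polling interval is arbitrary):

using System;
using System.Runtime.InteropServices;
using System.Threading;

class ClockSkewMonitor
{
    [DllImport("winmm.dll")]
    static extern uint timeGetTime();   // ~1 ms resolution

    [DllImport("kernel32.dll")]
    static extern uint GetTickCount();  // coarser resolution

    static void Main()
    {
        uint tgt0 = timeGetTime();
        uint gtc0 = GetTickCount();
        while (true)
        {
            // Both counters advance in real milliseconds; a speed hack that
            // manipulates one of them makes this difference grow quickly.
            long drift = (long)(timeGetTime() - tgt0) - (long)(GetTickCount() - gtc0);
            Console.WriteLine("drift: {0} ms", drift);
            Thread.Sleep(1000);
        }
    }
}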

What's a reasonable time for generating a web page?

I'm working on a web app (Rails 3 based), and I really don't like the time it takes to generate a page - depending on the displayed data, it takes up to 2.5 and even 4 seconds.
So I was just wondering what the average reasonable page generation time is in your apps. Say you check the generation time and it's 750 ms; you think, "OK, that should be fine even without caching." Or you see 1.5 s and think, "Oh my God, the user won't wait that long and will leave the site."
There's a huge amount of research data regarding the time from query to rendering and the user's experience. I'd recommend reading this useit.com article. After all, Google integrated page speed into its rankings for a reason ;)
The 3 response-time limits are the same today as when I wrote about them in 1993 (based on 40-year-old research by human factors pioneers):

0.1 seconds gives the feeling of instantaneous response — that is, the outcome feels like it was caused by the user, not the computer. This level of responsiveness is essential to support the feeling of direct manipulation (direct manipulation is one of the key GUI techniques to increase user engagement and control — for more about it, see our Principles of Interface Design seminar).

1 second keeps the user's flow of thought seamless. Users can sense a delay, and thus know the computer is generating the outcome, but they still feel in control of the overall experience and that they're moving freely rather than waiting on the computer. This degree of responsiveness is needed for good navigation.

10 seconds keeps the user's attention. From 1–10 seconds, users definitely feel at the mercy of the computer and wish it was faster, but they can handle it. After 10 seconds, they start thinking about other things, making it harder to get their brains back on track once the computer finally does respond.

A 10-second delay will often make users leave a site immediately. And even if they stay, it's harder for them to understand what's going on, making it less likely that they'll succeed in any difficult tasks.
As a rule of thumb, always aim for a balance of optimization time vs. time gained. Don't spend days optimizing the hell out of one routine while your images aren't compressed correctly or your scripts/CSS aren't combined. Yes, faster is better, but a 90% gain in page generation from setting up a smart cache beats a 10% gain after a week of tweaking an algorithm.
Also, don't read too much into the first-render time, when the framework has to load everything; instead use stress testing, cached or not, to simulate various situations.
Now, some data: some of the latest sites I worked on used DotNetNuke, a huge open-source CMS, and ASP.NET MVC, where you're nearer to the metal. Average page time with average DB queries was 600-700 milliseconds for DotNetNuke. For ASP.NET MVC, it was 70-100 milliseconds... Users really like the second one :)
There's no 'right' answer to this - the faster the better. Personally I normally aim for < 200ms, although I know from experience that it can be quite difficult to achieve this in Rails on anything but simple apps. Try and figure out where your bottlenecks are and cache what you can.
Edit: There seems to be some confusion between page generation time and page render time. Obviously a quick page render is the goal, and on most sites doing things like reducing HTTP requests, gzipping CSS/JS are where you can get most of your quick wins. But if the page itself can take 4-5 seconds to generate, then you're probably right that your app is where you should start.
It depends on whether nothing is displayed for 2.5-4 seconds, or that the user already sees (a part of) the page from the start, and it finishes loading completely after 2.5-4 seconds. In that case the user doesn't experience a 2.5-4 second load. Take the http://www.nytimes.com/ website; I see most of it right away, but according to the Web Inspector it takes 1.94 seconds for it to be loaded completely.
And keep in mind that the speed will also depend on the browser, computer, internet connection. What's fast for you might be slower for others.
Measure your Apdex score and see how the app is performing. That will give you a rough indication. From there, you can decide how you want to increase performance.
It also depends on what your site is: a system application for a business, or software as a service (SaaS)? If it's a system application, the users are forced to use it, so performance can be negotiated. If it's SaaS, then the lower your Apdex score, the more chance you have of losing your users' interest.
There are a few gems out there that measure performance and report what your Apdex is.
Here's a little more info: http://apdex.org/blog/?p=630
My personal rule: no page should take more than 0.05 seconds, or you are in trouble.
As long as you write proper code, you don't need to spend much time on optimization to stay under 0.05 s.
If you stick to giant frameworks, then you are out of luck.
