Any existing in-game MonoGame/XNA performance stats / diagnostics info?

I'm in the process of creating a game using MonoGame and a fork-of-a-fork of Farseer Physics (https://github.com/alundgren04/Aether.Physics2D), and I'm trying to push the physics to handle a very large world. In doing so, the engine's on-screen debug statistics have been invaluable.
Many of these came with the physics engine, and others I had to add. I'm wondering if there's something similar for MonoGame: something which would show, each frame, how many polygons were rendered, how many sprites, etc., and how long each took. This would be analogous to the physics info, where it lists both the number of "Bodies," "Fixtures," "Joints," etc., and the time it took to update each of them.
See screen grab here: https://i.imgur.com/5RdOlay.png
I see the total physics update time is around 3-5 ms, yet the game only appears to be rendering perhaps once a second (about 1 fps). This points to rendering being the performance bottleneck, and I would like thorough diagnostics before beginning the optimization effort. I could build this myself, and may end up doing so, but I'm hoping there's a built-in solution I can at least use as a foundation.
Thanks!

GraphicsDevice.Metrics returns rendering information (draw calls, sprites, primitives, textures, and so on) for the current frame; it's reset whenever Present is called. While it doesn't report how long each stage took, it still contains information that should help with debugging your issue.
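As a rough sketch of how you could surface those counters on screen (not a ready-made component), the snippet below reads GraphicsDevice.Metrics at the end of Draw and measures the CPU-side draw time with a Stopwatch. The spriteBatch and debugFont fields are assumed to already exist in your Game subclass; the properties shown (DrawCount, SpriteCount, PrimitiveCount, TextureCount) are among those MonoGame's GraphicsMetrics exposes.

// A minimal sketch, assuming a standard MonoGame Game subclass with a SpriteBatch
// field named spriteBatch and a loaded SpriteFont named debugFont.
protected override void Draw(GameTime gameTime)
{
    var drawTimer = System.Diagnostics.Stopwatch.StartNew();

    GraphicsDevice.Clear(Color.CornflowerBlue);

    // ... draw your world here ...

    drawTimer.Stop(); // CPU-side submission time only, not GPU time

    // Snapshot the counters accumulated since the last Present.
    var metrics = GraphicsDevice.Metrics;
    string stats =
        $"draw calls: {metrics.DrawCount}\n" +
        $"sprites:    {metrics.SpriteCount}\n" +
        $"primitives: {metrics.PrimitiveCount}\n" +
        $"textures:   {metrics.TextureCount}\n" +
        $"draw (CPU): {drawTimer.Elapsed.TotalMilliseconds:0.00} ms";

    // The overlay itself is drawn after the snapshot, so it isn't counted above.
    spriteBatch.Begin();
    spriteBatch.DrawString(debugFont, stats, new Vector2(10, 10), Color.White);
    spriteBatch.End();

    base.Draw(gameTime);
}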


Getting FPS and frame-time info from a GPU

I am a mathematician, not a programmer. I have a notion of the basics of programming and am a fairly advanced power user on both Linux and Windows.
I know some C and some Python, but not much.
I would like to make an overlay so that when I start a game it can get info from AMD and NVIDIA GPUs, like frame time and FPS. I am quite certain the current system of benchmarks used to compare two GPUs is flawed: small instances and scenes that bump up the FPS momentarily (but are totally irrelevant in terms of user experience) result in a higher average FPS number and mislead the market, intentionally or not. (For example, in one game, I can't remember the name, probably a COD title, there was a highly tessellated entity on the map that wasn't even visible to the player, which made AMD GPUs seemingly underperform when roaming through that area, leading to a lower average FPS count.)
I have an idea of how to calculate GPU performance in theory, but I don't know how to harvest the data from the GPU. Could you refer me to API manuals or references that would help me make such an overlay possible?
I would like to study as little as possible (by that I mean I want to learn only what I absolutely have to in order to get the job done; I don't intend to become a coder).
I thank you in advance.
This is generally what the Vulkan layer system is for: it allows you to intercept API commands and inject your own. But it is nontrivial to code yourself. Here are some pre-existing open-source options:
To get timing info and draw your custom overlay, you can use (and modify) a tool like OCAT. It supports Direct3D 11, Direct3D 12, and Vulkan apps.
To just get the timing (and other interesting info) as CSV, you can use a command-line tool like PresentMon. It works with D3D apps, and I have been using it with Vulkan apps too; it seems to accept them.

ARKit with multiplayer experience to share same planes [duplicate]

What is the best way, if any, to use Apple's new ARKit with multiple users/devices?
It seems that each device gets its own scene understanding individually. My best guess so far is to use raw feature point positions and try to match them across devices to glue together the different points of view, since ARKit doesn't offer any absolute frame of reference.
=== Edit 1: Things I've tried ===
1) Feature points
I've played around with the exposed raw feature points, and I'm now convinced that in their current state they are a dead end:
they are not raw feature points: they only expose positions, with none of the attributes typically found in tracked feature points
their instantiation doesn't carry over from frame to frame, nor are the positions exactly the same
the reported feature points often change a lot even when the camera input is barely changing, with many appearing or disappearing.
So overall I think it's unreasonable to try to use them in any meaningful way: I can't get good point matching even within one device, let alone across several.
An alternative would be to implement my own feature point detection and matching, but that would be more like replacing ARKit than leveraging it.
2) QR code
As @Rickster suggested, I've also tried identifying an easily recognizable object like a QR code and deriving the relative reference change from that fixed point (see this question). It's a bit involved and required me to use some OpenCV to estimate the camera pose. More importantly, it's very limiting.
As some newer answers have added, multiuser AR is a headline feature of ARKit 2 (aka ARKit on iOS 12). The WWDC18 talk on ARKit 2 has a nice overview, and Apple has two developer sample code projects to help you get started: a basic example that just gets 2+ devices into a shared experience, and SwiftShot, a real multiplayer game built for AR.
The major points:
ARWorldMap wraps up everything ARKit knows about the local environment into a serializable object, so you can save it for later or send it to another device. In the latter case, "relocalizing" to a world map saved by another device in the same local environment gives both devices the same frame of reference (world coordinate system).
Use the networking technology of your choice to send the ARWorldMap between devices: AirDrop, cloud shares, carrier pigeon, etc. all work, but Apple's Multipeer Connectivity framework is one good, easy, and secure option, so it's what Apple uses in their example projects.
All of this gives you only the basis for creating a shared experience: multiple copies of your app on multiple devices, all using a world coordinate system that lines up with the same real-world environment. That's all you need to get multiple users experiencing the same static AR content, but if you want them to interact in AR, you'll need to use your favorite networking technology some more.
Apple's basic multiuser AR demo shows encoding an ARAnchor and sending it to peers, so that one user can tap to place a 3D model in the world and all others can see it. The SwiftShot game example builds a whole networking protocol so that all users get the same gameplay actions (like firing slingshots at each other) and synchronized physics results (like blocks falling down after being struck). Both use Multipeer Connectivity.
(BTW, the second and third points above are where the "2 to 6" figure in @andy's answer comes from: there's no limit on the ARKit side, because ARKit has no idea how many people may have received the world map you saved. However, Multipeer Connectivity has an 8-peer limit. And whatever game/app/experience you build on top of this may have latency or performance scaling issues as you add more peers, but that depends on your technology and design.)
Original answer below for historical interest...
This seems to be an area of active research in the iOS developer community — I met several teams trying to figure it out at WWDC last week, and nobody had even begun to crack it yet. So I'm not sure there's a "best way" yet, if even a feasible way at all.
Feature points are positioned relative to the session, and aren't individually identified, so I'd imagine correlating them between multiple users would be tricky.
The session alignment mode gravityAndHeading might prove helpful: that fixes all the directions to a (presumed/estimated to be) absolute reference frame, but positions are still relative to where the device was when the session started. If you could find a way to relate that position to something absolute — a lat/long, or an iBeacon maybe — and do so reliably, with enough precision... Well, then you'd not only have a reference frame that could be shared by multiple users, you'd also have the main ingredients for location based AR. (You know, like a floating virtual arrow that says turn right there to get to Gate A113 at the airport, or whatever.)
Another avenue I've heard discussed is image analysis. If you could place some real markers — easily machine recognizable things like QR codes — in view of multiple users, you could maybe use some form of object recognition or tracking (a ML model, perhaps?) to precisely identify the markers' positions and orientations relative to each user, and work back from there to calculate a shared frame of reference. Dunno how feasible that might be. (But if you go that route, or similar, note that ARKit exposes a pixel buffer for each captured camera frame.)
Good luck!
Now, after the release of ARKit 2.0 at WWDC 2018, it's possible to make games for 2 to 6 users.
For this, you need to use the ARWorldMap class. By saving world maps and using them to start new sessions, your iOS application gains new augmented reality capabilities: multiuser and persistent AR experiences.
AR Multiuser experiences. You can now create a shared frame of reference by sending archived ARWorldMap objects to a nearby iPhone or iPad. With several devices simultaneously tracking the same world map, you can build an experience where all users (up to 6) can share and see the same virtual 3D content (use Pixar's USDZ file format for 3D in Xcode 10+ and iOS 12+).
// Capture ARKit's current world map on the device that is sharing it.
session.getCurrentWorldMap { worldMap, error in
    guard let worldMap = worldMap else {
        showAlert(error)
        return
    }
    // Archive `worldMap` here and send it to the other devices
    // (for example, via Multipeer Connectivity).
}

// On the receiving device, relocalize to the map it was sent
// (`receivedWorldMap` stands for the unarchived ARWorldMap).
let configuration = ARWorldTrackingConfiguration()
configuration.initialWorldMap = receivedWorldMap
session.run(configuration)
AR Persistent experiences. If you save a world map and your iOS application then becomes inactive, you can easily restore it on the next launch of the app, in the same physical environment. You can use ARAnchors from the resumed world map to place the same virtual 3D content (in USDZ or DAE format) at the same positions as in the previously saved session.
These are not bulletproof answers, more like workarounds, but maybe you'll find them helpful.
All assume the players are in the same place.
DIY: ARKit sets up its world coordinate system quickly after the AR session has been started. So if you can have all players, one after another, put and align their devices at the same physical location and start the session there, there you go. Imagine the inside edges of an L-square ruler fixed to whatever is available, or any flat surface with a hole: hold the phone against the surface looking through the hole with the camera, then (re)initialize the session.
Medium: Spare the player from aligning the phone manually; instead, detect a real-world marker with image analysis, just as @Rickster described.
Involved: Train a Core ML model to recognize iPhones and iPads and their camera location, like it's done with human faces and eyes. Aggregate the data on a server, then turn off the ML to save power. Note: make sure your model is cover-proof. :)
I'm in the process of updating my game controller framework (https://github.com/robreuss/VirtualGameController) to support a shared controller capability, so all devices would receive input from the control elements on the screens of all devices. The purpose of this enhancement is to support ARKit-based multiplayer functionality. I'm assuming developers will use the first approach mentioned by diviaki, where the general positioning of the virtual space is defined by starting the session on each device from a common point in physical space (a shared reference); specifically, I have in mind players being on opposite sides of a table. All the devices would launch the game at the same time and use a common coordinate space relative to physical size, and, using the inputs from all the controllers, the game would in theory remain in sync on all devices. Still testing. The obvious potential problem is that latency or disruption in the network breaks the sync, and it would be difficult to recover except by restarting the game. The approach and framework may work fairly well for some types of games, such as straightforward arcade-style games, but certainly not for many others, such as any game with significant randomness that cannot be coordinated across devices.
This is a hugely difficult problem - the most prominent startup that is working on it is 6D.ai.
"Multiplayer AR" is the same problem as persistent SLAM, where you need to position yourself in a map that you may not have built yourself. It is the problem that most self driving car companies are actively working on.

where is the actual code for the XNA "game loop"?

I am a beginner trying to learn C# and XNA, and I am trying to get a deeper understanding of the XNA game loop.
Although there are plenty of articles explaining what the game loop is, I can't seem to find the actual loop implementation anywhere.
The closest I think I have come to finding the loop is this member of the Game class in the MSDN documentation:
public void Tick ()
If this is the correct one, where can I find the inner implementation of this method to see how it calls the Update and Draw methods, or is that not possible?
MonoGame is an open-source replica of XNA based on modern rendering pipelines, and SharpDX.Toolkit implements a very XNA-like interface for DX11 (MonoGame actually uses SharpDX under the hood for DirectX integration)... you can probably find the game loop in the source code of either of those projects, and it will likely be close to, if not identical to, what MS XNA actually uses.
That being said, the game loop actually doesn't do much in simple demo applications (they tend to pile everything into a single method to update/render a simple scene), though in full games with component-based scene graphs it can get a little complicated. For single-threaded engines, the general idea is to:
1. update the state of any inputs,
2. process any physics simulation if needed,
3. call update on all the updatable objects in your scene graph (the engine I'm working on uses interfaces to avoid making wasteful calls on empty stub methods),
4. clear the working viewport/depth buffers,
5. call render on each renderable object to actually get the GPU drawing on the back buffer,
6. swap the render buffers, putting everything just drawn to the back buffer up on screen and making the old viewport buffer the new back buffer to be cleared and drawn on the next rendering pass,
7. start over at step 1 :)
That pretty much covers the basics and should generally apply no matter what underlying API or engine you are working with.
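As a rough illustration of those seven steps (not any particular engine's code), a single-threaded loop body tends to look something like the sketch below; every type and method name here is a hypothetical placeholder for your own subsystems, not an XNA/MonoGame API.

// Hypothetical single-threaded loop illustrating steps 1-7 above.
// None of these names come from XNA/MonoGame; they stand in for your own engine code.
while (running)
{
    input.Poll();                            // 1. update the state of any inputs
    physics.Step(fixedDeltaTime);            // 2. advance the physics simulation
    foreach (var obj in updatables)          // 3. update objects in the scene graph
        obj.Update(fixedDeltaTime);

    graphics.ClearBackBuffer();              // 4. clear the viewport/depth buffers
    foreach (var obj in renderables)         // 5. issue draw calls into the back buffer
        obj.Render(graphics);

    graphics.Present();                      // 6. swap buffers to show the finished frame
}                                            // 7. loop back to step 1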
Since XNA is closed-source, it is not possible to see the code for the game loop as defined in the Game class. The Tick() you have referenced is not explained or expanded upon in the XNA documentation (although it may be what drives things behind the scenes; I do not know).
As @Ascendion has pointed out, MonoGame has an equivalent class named Game (see here). While this may not reflect the XNA code exactly, it is the best compatible source available to us, the public (in all tests performed between the two, they return the same values).
The main caveat of the MonoGame implementation is the platform-independent code, which may be hard to follow, since some of the implementation is located in additional platform-specific files. Still, it is not hard to trace the sources back to the expected platform code.
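For intuition about how a Tick-style method dispatches Update and Draw, here is a heavily simplified, self-contained fixed-timestep sketch. It is not XNA's or MonoGame's actual source (MonoGame's real implementation lives in its Game.cs); GameStub and its members are made up purely to show the pattern.

// Illustrative only: the general fixed-timestep pattern behind a Game.Run()/Tick() pair.
// This is NOT the real XNA/MonoGame implementation.
using System;
using System.Diagnostics;

class GameStub
{
    static readonly TimeSpan TargetElapsedTime = TimeSpan.FromSeconds(1.0 / 60.0);
    readonly Stopwatch clock = Stopwatch.StartNew();
    TimeSpan lastTime = TimeSpan.Zero;
    TimeSpan accumulated = TimeSpan.Zero;
    protected bool ExitRequested;

    public void Run()
    {
        while (!ExitRequested)
            Tick();
    }

    public void Tick()
    {
        TimeSpan now = clock.Elapsed;
        accumulated += now - lastTime;
        lastTime = now;

        // Run Update in fixed steps until the simulation has caught up with real time.
        while (accumulated >= TargetElapsedTime)
        {
            Update(TargetElapsedTime);
            accumulated -= TargetElapsedTime;
        }

        // Draw once per tick, however many updates ran.
        Draw();
    }

    protected virtual void Update(TimeSpan elapsed) { /* game logic goes here */ }
    protected virtual void Draw() { /* rendering goes here */ }
}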
This post is to serve as a clarification to all who may stumble upon this later.

AI for enemy in SpriteKit

I have been making a game in SpriteKit for some time now. I have added enemies and am wondering how I can control them around the map using AI (just like in any other game).
What I want is for the enemy to wander around the TMX map, turning corners depending on a random number. I have tried to do this but have run into many problems. Does anyone know of any articles that would help me with this? I have done some research: "pathfinding" and "A*" come up, but with no explanation or sample code on how to do it. Any help would be greatly appreciated.
Welcome to SO. Let me start by saying that I too am currently searching for exactly the same thing. Unfortunately the pickings have been kinda weak, at least from what I have found so far.
I did find a couple of very interesting reads:
An article on the ghosts' behavior in Pac-Man. Simple but very effective.
Amit's Game Programming Information. A more general discussion about making games.
Gamasutra. An excellent resource for all things game design.
Designing AI Algorithms For Turn-Based Strategy Games. A Gamasutra article which is extremely useful in explaining turn-based AI in plain English.
All these are very useful in their own ways and make you think. However, nothing I have come across provides a plainly worded explanation of a bad guy's logic (the Pac-Man article came close). Searching Amazon for game AI books yields a bunch of books which are extremely expensive and read more like advanced quantum theory.
I ended up deciding on a simple approach for my game: I have my bad guy decide between two possible states, Patrol Mode and Attack Mode.
In Patrol Mode he sits idle for a few seconds, walks left or right until he hits a wall or other object, runs (same rules as walking), climbs up and down ladders, and does the occasional jump. I use arc4random() to decide what he does next when his current action is completed. This provides a truly random behavior and makes the bad guy completely unpredictable.
Attack Mode happens when the player is within X distance of the bad guy. The bad guy now has a different set of actions to choose from: run towards the player, swing a sword at the player, jump up, and so on. Again I use the random function to make the bad guy unpredictable. The results so far have been really excellent. My bad guy behaves like one of those hard-to-beat 12-year-old kids playing Halo.
The bad guy will continue to do battle until he dies, the player dies, or the player runs away and is no longer within the X distance required for Attack Mode. You can of course fine-tune the bad guy's behavior by limiting jumping, the time between attacks, and so on.
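The two-state idea above is engine-agnostic. As a minimal sketch of the decision logic (written in C# here rather than SpriteKit code, with every helper method a made-up placeholder for your own movement and attack routines), it boils down to something like this:

// Engine-agnostic sketch of the two-state enemy described above.
// Rng, AttackRange, and all the action methods are placeholders for your own game code.
using System;

enum EnemyState { Patrol, Attack }

class Enemy
{
    const float AttackRange = 200f;          // the "X distance" that flips Patrol into Attack
    static readonly Random Rng = new Random();
    EnemyState state = EnemyState.Patrol;

    public void Update(float distanceToPlayer)
    {
        state = distanceToPlayer <= AttackRange ? EnemyState.Attack : EnemyState.Patrol;

        if (!CurrentActionFinished())
            return;                          // let the running walk/jump/attack play out first

        switch (state)
        {
            case EnemyState.Patrol:
                // Pick the next idle/walk/run/jump action at random.
                switch (Rng.Next(4))
                {
                    case 0: Idle(); break;
                    case 1: Walk(); break;
                    case 2: RunUntilBlocked(); break;
                    default: Jump(); break;
                }
                break;

            case EnemyState.Attack:
                // Pick the next hostile action at random.
                switch (Rng.Next(3))
                {
                    case 0: RunTowardsPlayer(); break;
                    case 1: SwingSword(); break;
                    default: Jump(); break;
                }
                break;
        }
    }

    // Placeholder hooks; implement these with your sprite/physics code.
    bool CurrentActionFinished() => true;
    void Idle() { }
    void Walk() { }
    void RunUntilBlocked() { }
    void Jump() { }
    void RunTowardsPlayer() { }
    void SwingSword() { }
}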
Hope this helps and gets your creative juices flowing.
The Wikipedia page for the A* search algorithm here contains a pseudocode example. All you really need is an understanding of how the algorithm works; then you should be able to implement it. Googling for iOS and A* brings up several tutorials.
You have a tile map, so it should be relatively easy to do what you want without A*, like this bit of pseudocode:
// An Update function for your Enemy
function Update
    paths = GetAllPathways()
    if paths.length == 0
        TurnAround()
    else
        randomPath = paths.Random()
        TurnTowards(randomPath)
    end if
    MoveForward()
end function

function GetAllPathways()
    paths = new Array()
    if CanGoForward()
        paths.push(forward)
    end if
    if CanGoLeft()
        paths.push(left)
    end if
    if CanGoRight()
        paths.push(right)
    end if
    return paths
end function
An algorithm like A* is really for more complex situations (not random decision making), where you have a complex or dynamic map and want your enemies to target something, or the player, dynamically. That is where A* comes in: determining a route through the world to find the path to its target.
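If you do end up needing A*, a compact grid-based version is not much code. The sketch below is a straightforward transcription of the standard algorithm onto a bool[,] walkability grid with 4-way movement and a Manhattan heuristic; those choices, and the sorted open list (a priority queue would be faster), are simplifying assumptions for illustration, not requirements of A*.

// Minimal grid A*: finds a path of (x, y) cells from start to goal, or returns null.
using System;
using System.Collections.Generic;

static class AStar
{
    public static List<(int x, int y)> FindPath(bool[,] walkable, (int x, int y) start, (int x, int y) goal)
    {
        int width = walkable.GetLength(0), height = walkable.GetLength(1);
        var open = new List<(int x, int y)> { start };
        var cameFrom = new Dictionary<(int x, int y), (int x, int y)>();
        var gScore = new Dictionary<(int x, int y), int> { [start] = 0 };

        // Manhattan distance: admissible for 4-way movement with unit step cost.
        int Heuristic((int x, int y) a) => Math.Abs(a.x - goal.x) + Math.Abs(a.y - goal.y);

        while (open.Count > 0)
        {
            // Take the open node with the lowest f = g + h.
            open.Sort((a, b) => (gScore[a] + Heuristic(a)).CompareTo(gScore[b] + Heuristic(b)));
            var current = open[0];
            open.RemoveAt(0);

            if (current.x == goal.x && current.y == goal.y)
            {
                // Rebuild the path by walking the cameFrom chain backwards.
                var path = new List<(int x, int y)> { current };
                while (cameFrom.TryGetValue(current, out var previous))
                {
                    current = previous;
                    path.Add(current);
                }
                path.Reverse();
                return path;
            }

            foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                var next = (x: current.x + dx, y: current.y + dy);
                if (next.x < 0 || next.y < 0 || next.x >= width || next.y >= height || !walkable[next.x, next.y])
                    continue;

                int tentative = gScore[current] + 1;          // uniform step cost of 1
                if (!gScore.TryGetValue(next, out int best) || tentative < best)
                {
                    gScore[next] = tentative;                 // found a better route to `next`
                    cameFrom[next] = current;
                    if (!open.Contains(next))
                        open.Add(next);
                }
            }
        }
        return null; // goal unreachable
    }
}

Feed it the walkability of your TMX tile layer plus the enemy's and target's tile coordinates, and step the enemy along the returned list of cells.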

Hardware/Software rasterizer vs Ray-tracing

I saw the presentation "High-Performance Software Rasterization on GPUs" at High-Performance Graphics, and I was very impressed by the work/analysis/comparison.
http://www.highperformancegraphics.org/previous/www_2011/media/Papers/HPG2011_Papers_Laine.pdf
http://research.nvidia.com/sites/default/files/publications/laine2011hpg_paper.pdf
My background is CUDA; I started learning OpenGL two years ago to develop the 3D interface of EMM-Check, a field-of-view analysis program to check whether a vehicle fulfills a specific standard or not. Essentially, you load a vehicle (or different parts of it), then you can move it completely or separately, add mirrors/cameras, analyze the point of view and shadows from the driver's point of view, etc.
We are dealing with some transparent elements (mainly the fields of view, but the vehicles themselves might be transparent too), so I wrote a rough algorithm to sort the elements to be rendered on the fly (at the primitive level, a kind of painter's algorithm), but of course there are cases in which it easily fails, although for most cases it is enough.
For this reason I started googling and found many techniques, like (dual) depth peeling, A/R/K/F-buffers, etc.
But it looks like all of them suffer at high resolutions and/or with large numbers of triangles.
Since we also deal with millions of triangles (up to 10 million, more or less), I was looking for something else, and I ended up at software renderers: compared to the hardware ones, they offer full programmability, but they are slower.
So I wonder if it might be possible to implement something hybrid, that is, use the hardware renderer for the opaque elements and the software one (CUDA/OpenCL) for the transparent elements, and then combine the two results.
Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color, simple lighting and proper transparency) might be much simpler from this point of view and also give us a lot of freedom/flexibility in the future?
I did not find anything on the net regarding this... maybe there is some particular obstacle?
I would like to hear every single thought/tip/idea/suggestion that you have regarding this.
PS: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems far faster:
http://webstaff.itn.liu.se/~jonun/web/teaching/2009-TNCG13/Siggraph09/content/talks/062-liu.pdf
Might I suggest that you look at OpenRL, which will give you hardware-accelerated ray tracing?
