Where is the actual code for the XNA "game loop"?

I am beginner trying to learn C# and XNA. I am trying to get a deeper understanding of the XNA game loop.
Although there are plenty of articles explaining what the game loop is, I can't seem to find the actual loop implementation anywhere.
The closest I think I have gone to finding the loop is this member of the game class in the MSDN documentation:
public void Tick ()
If this is the correct one, where can I find the inner implementation of this method to see how it calls the Update and Draw methods, or is that not possible?

MonoGame is an open-source replica of XNA built on modern rendering pipelines, and SharpDX.Toolkit implements a very XNA-like interface for DX11 (MonoGame actually uses SharpDX under the hood for DirectX integration). You can probably find the game loop in the source code of either of those projects, and it will likely be close to, if not identical to, what Microsoft's XNA actually uses.
That being said, the game loop actually doesn't do much for simple demo applications (they tend to pile everything into a single method to update and render a simple scene), though in full games with component-based scene graphs it can get a little complicated. For single-threaded engines, the general idea is to:
1. update the state of any inputs,
2. process any physics simulation if needed,
3. call update on all the updatable objects in your scene graph (the engine I'm working on uses interfaces to avoid making wasteful calls on empty stub methods),
4. clear the working viewport/depth buffers,
5. call render on each renderable object to actually get the GPU drawing on the back buffer,
6. swap the render buffers, putting everything just drawn to the back buffer up on screen and making the old viewport buffer the new back buffer to be cleared and drawn on the next rendering pass,
7. start over at step 1 :)
That pretty much covers the basics and should generally apply no matter what underlying API or engine you are working with.
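The steps above can be sketched as a minimal single-threaded loop. This is only an illustration in Python: the class and method names are mine, not XNA's or MonoGame's actual API.

```python
class Game:
    """Minimal single-threaded game loop sketch.

    All names here are illustrative; this is not the real XNA/MonoGame API.
    """

    def __init__(self, scene):
        self.scene = scene        # objects that may have update()/draw()
        self.frames = 0

    def poll_input(self):         # 1. update the state of any inputs
        return {}                 # stub: would read keyboard/mouse/pad here

    def step_physics(self):       # 2. process physics simulation if needed
        pass

    def clear_buffers(self):      # 4. clear viewport/depth buffers
        pass

    def present(self):            # 6. swap buffers, show the new frame
        self.frames += 1

    def tick(self):
        inputs = self.poll_input()
        self.step_physics()
        for obj in self.scene:                # 3. update updatable objects
            if hasattr(obj, "update"):        # skip objects with no logic
                obj.update(inputs)
        self.clear_buffers()
        for obj in self.scene:                # 5. draw renderable objects
            if hasattr(obj, "draw"):
                obj.draw()
        self.present()                        # 7. caller loops back to 1


class Spinner:
    """Toy scene object that both updates and draws."""

    def __init__(self):
        self.angle = 0
        self.drawn = 0

    def update(self, inputs):
        self.angle += 1

    def draw(self):
        self.drawn += 1
```

Driving `tick()` from a `while` loop (plus frame timing, which real engines add on top) is the whole "game loop".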

Since XNA is closed-source, it is not possible to see the code for the game loop as defined in the Game class. The Tick() method you have referenced is neither exposed nor referenced in any XNA documentation (although it may be used behind the scenes; I do not know).
As @Ascendion has pointed out, MonoGame has an equivalent class named Game (see here). While this may not reflect the XNA code exactly, it is the best compatible source we, the public, have available (in all tests performed between the two, they return the same values).
One side effect of the MonoGame implementation is its platform-independent design, which may be harder to follow since some of the implementation is located in additional platform-specific files. Still, it is not hard to trace from the shared sources back to the expected platform code.
This post is to serve as a clarification to all who may stumble upon this later.

Related

Is there a programmatic way to see what graphics API a game is using?

For games like DOTA 2, which can run on different graphics APIs such as DX9, DX11, or Vulkan, I have not been able to come up with a viable way of checking which API it is currently using. I want to do this in order to correctly inject a DLL to display images over the game.
I have looked into manually checking which DLLs the game has loaded, using this tool for example: https://learn.microsoft.com/en-us/sysinternals/downloads/listdlls
However, in the case of DOTA, it loads both the d3d9.dll and d3d11.dll libraries if no API is specified in the launch options on Steam. Does anyone have any other ideas on how to determine the graphics API in use?
In Vulkan, a clean approach would be to implement a Vulkan layer that does the overlay. It is slightly cleaner than outright injecting DLLs, and it could work on multiple platforms.
In DirectX, screen-capture software typically does this; some of it adds an FPS counter and similar overlays. There are open-source projects with similar goals, e.g. https://github.com/GPUOpen-Tools/OCAT. I believe the conventional method is to intercept (i.e. "hook", in Win32 API terminology) all the appropriate API calls.
As for simple detection: if the game calls D3D12CreateDevice, it is likely using Direct3D 12. Then again, an app could create devices for all the APIs and proceed not to use them. But API detection is probably not that important if you only want an overlay; as long as you intercept all the present calls, you can draw your content on top.
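The loaded-DLL heuristic from the question can still narrow things down if you treat multiple matches as "ambiguous" rather than picking one. Here is a sketch of that logic; the DLL file names are the real Direct3D/Vulkan/OpenGL runtime names, but the function is illustrative, and how you obtain the module list (ListDLLs, EnumProcessModules, etc.) is up to you.

```python
# Map of well-known graphics runtime DLLs to the API they indicate.
API_DLLS = {
    "d3d9.dll": "Direct3D 9",
    "d3d11.dll": "Direct3D 11",
    "d3d12.dll": "Direct3D 12",
    "vulkan-1.dll": "Vulkan",
    "opengl32.dll": "OpenGL",
}


def guess_graphics_apis(loaded_modules):
    """Return the set of candidate APIs for a process.

    loaded_modules: iterable of module paths/names as reported by a tool
    like ListDLLs. More than one result means detection is ambiguous
    (e.g. DOTA 2 loading both d3d9.dll and d3d11.dll).
    """
    names = {m.lower().rsplit("\\", 1)[-1] for m in loaded_modules}
    return {api for dll, api in API_DLLS.items() if dll in names}
```

When the result set has more than one entry, you would fall back to hooking the per-API device-creation or present calls to see which one actually gets used.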

WebGL Constructive Solid Geometry to Static Vertices

I'm super new to 3D graphics (I've been trying to learn actual WebGL instead of using a framework) and I'm now in the constructive solid geometry phase. I know sites like tinkercad.com use CSG with WebGL, but they recalculate your design every time you load the page instead of doing the subtraction, addition, and intersection of primitive objects once and then storing the resulting vertices for later use. I'm curious whether anybody knows why they do it that way (maybe just to conserve resources on the server?) and whether there is some straightforward way of extracting those vertices right before the draw call. Maybe a built-in function of WebGL? I haven't found anything so far: when I try logging the object data from gl.bufferData(), I get multiple Float32Arrays (one for each object that was unioned together) instead of one complete set of vertices.
By the way, the only GitHub project I've found with CSG for WebGL is https://github.com/evanw/csg.js/, and it's pretty straightforward; however, it uses a framework, and I was curious whether you know of any other CSG WebGL code out there that doesn't rely on one. I'd like to write it myself either way, but being able to see what others have done would be nice.
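One way to "bake" a CSG result once is simply to concatenate the per-object vertex arrays into a single flat array and serialize that, so later page loads upload one buffer instead of re-running the CSG. A sketch of the merge step, in Python with the stdlib `array('f')` standing in for a WebGL `Float32Array` (the function name is mine):

```python
from array import array  # array('f') stands in for Float32Array here


def bake_vertices(per_object_vertices):
    """Concatenate per-object vertex data into one flat float32 array.

    per_object_vertices: iterable of float sequences (x, y, z, x, y, z, ...),
    one per object produced by the CSG union/subtract/intersect step.
    The result could be written out once (e.g. to a binary file or JSON)
    and handed straight to gl.bufferData() on subsequent loads.
    """
    baked = array("f")
    for verts in per_object_vertices:
        baked.extend(verts)
    return baked
```

In JavaScript the equivalent is allocating one `Float32Array` of the combined length and copying each piece in with `set()` at the running offset.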

DMX software to control lights with programmable interface

I find myself in need of software to control lights through a programmable interface. Basically, I want to control the lights automatically, using criteria implemented in my own program. My program would then drive the lights through the software I'm searching for, which would of course need a programmable interface I can pass lighting commands to.
I've been searching for software like that for the last couple of days without success. All I have found are packages with GUIs for end users, with no documentation whatsoever about programming the light behavior instead of manipulating it by hand.
There's some really good information & code samples (including a working class that I wrote) here: Lighting USB OpenDMX FTD2XX DMXking
Ultimately, you end up setting byte values (between 0 and 255 [0xFF], the brightest) in a byte array.
It's fairly trivial to implement simple effects such as fades or chases.
If you haven't got that far yet (e.g. up to the code) you'll need to get ahold of a USB DMX controller.
There are a number of them out there, but the thread above has sample code for two different flavours.
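To illustrate how trivial a fade is once you are just writing bytes: it is linear interpolation between two channel arrays over a number of steps. A hedged sketch (the function name and frame layout are mine; the thread's sample code is in C#, but the idea is identical):

```python
def fade_frames(start, end, steps):
    """Yield successive DMX channel frames fading from start to end.

    start, end: bytearrays of channel levels (0-255, where 255 is
    brightest), one byte per DMX channel. Each yielded frame is the
    byte array you would hand to the USB DMX controller, paced at
    whatever refresh interval your hardware expects.
    """
    for i in range(1, steps + 1):
        t = i / steps  # interpolation factor, ends exactly at 1.0
        yield bytearray(
            round(s + (e - s) * t) for s, e in zip(start, end)
        )
```

A chase is the same trick applied channel-by-channel with a time offset per fixture.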
I also wanted an environment where I could quickly write code that would create interesting effects for my DMX effect lights and lasers, and ended up creating it myself. I just announced the first public release of Afterglow, my free, open-source live-coding environment for light shows. You can find it at https://github.com/brunchboy/afterglow
I needed precise control of individual multi-channel (RGBAW) DMX512 lights and wanted to write code in C++ for Windows. I adapted the C# example from Enttec's website for OpenUSB and released the code:
https://github.com/chloelle/DMX_CPP

3D library recommendations for interactive spatial data visualisation?

Our software produces a lot of data that is georeferenced and recorded over time. We are considering ways to improve the visualisation, and showing the (processed) data in a 3D view, given it's georeferenced, seems a good idea.
I am looking for SO's recommendations for what 3D libraries are best to use as a base when building these kind of visualisations in a Delphi- / C++Builder-based Windows application. I'll add a bounty when I can.
The data
Is recorded over time (hours to days) and is GPS-tagged. So, we have a lot of data following a path over time.
Is spatial: it represents real 3D elements of the earth, such as the land, or 3D elements of objects around the earth.
Is high volume: we could have a point cloud, say, of hundreds of thousands to millions of points. Processed data may display as surfaces created from these point clouds.
From that, you can see that an interactive, spatially-based 3D visualisation seems a good approach. I'm envisaging something where you can easily and quickly navigate around in space, and data will load or be generated on the fly depending on what you're looking at. I would prefer we don't try to write our own 3D library from scratch - for something like this, there have to be good existing libraries we can work from.
So, I'm hoping for a library which supports:
good navigation (is the library based on Euler rotations only, for example? Can you 'pick' objects to rotate around or move with easily?);
modern GPUs (shader-only rendering is ok; being able to hook into the pipeline to write shaders that map values to colours and change dynamically would be great - think data values given a colour through a colour lookup table);
dynamic data / objects (data can be added as it's recorded; and if the data volume is too high, we should be able to page things in and out or recreate them, and only show a sensible subset so that whatever the user's viewport is looking at is there onscreen, but other data can be loaded/regenerated, preferably asynchronously, or at least quickly as the user navigates. Obviously data creation is dependent on us, but a library that has hooks for this kind of thing would be great.)
and technologically, works with Delphi / C++Builder and the VCL.
Libraries
There are two main libraries I've considered so far - I'm looking for knowledgeable opinions about these, or for other libraries I haven't considered.
1. FireMonkey
This is Embarcadero's new UI library, which is only available in XE2 and above. Our app is based on the VCL and we'd want to host this in a VCL window; that seems to be officially unsupported but unofficially works fine, or is available through third-parties.
The mix of UI framework and 3D framework with shaders etc sounds great. But I don't know how complex the library is, what support it has for data that's not a simple object like a cube or sphere, and how well-designed it is. That last link has major criticisms of the 3D side of the library - severe enough I am not sure it's worthwhile in its current state at the time of writing for a non-trivial 3D app.
Is it worth trying to write a new visualisation window in our VCL app using FireMonkey?
2. GLScene
GLScene is a well-known 3D OpenGL framework for Delphi. I have never used it myself so have no experience about how it works or is designed. However, I believe it integrates well into VCL windows and supports shaders and modern GPUs. I do not know how its scene graph or navigation work or how well dynamic data can be implemented.
Its feature list specifically mentions some things I'm interested in, such as easy rotation/movement, procedural objects (implying dynamic data is easy to implement), and helper functions for picking. It seems shaders are Cg only (not GLSL or another non-vendor-specific language.) It also supports "polymorphic image support for texturing (allows many formats as well as procedural textures), easily extendable" - that may just mean many image formats, or it may indicate something where the texture can be dynamically changed, such as for dynamic colour mapping.
Where to from here?
These are the only two major 3D libraries I know of for Delphi or C++Builder. Have I missed any? Are there pros and cons I'm not aware of? Do you have any experience using either of these for this kind of purpose, and what pitfalls should we be wary of or features should we know about and use?
We currently use Embarcadero RAD Studio 2010 and most of our software is written in C++. We have small amounts of Delphi and may consider upgrading IDEs, but we are most likely to wait until the 64-bit C++ compiler is released. For that reason, a library that works in RS2010 might be best.
Thanks for your input :) I'm after high-quality answers, so I'll add a bounty when I can!
I have used GLScene in my 3D geomapping software, and although I haven't used it to the extent you're looking for, I can vouch that it seems the most appropriate for what you're trying to do.
GLScene supports terrain rendering and adding customizable objects to the scene. Objects can be interacted with and you can create complex 3D models of objects using the various building blocks of GLScene.
Unfortunately I cannot say how it will perform with millions of points, but I do know that it is quite optimized and performs well on minimal hardware. That said, the target PCs I deployed to required a dedicated graphics card supporting OpenGL 2.1 extensions or higher (I ran into small issues with integrated graphics cards).
The other library I looked at was DXScene, which appears quite similar to GLScene, albeit using DirectX instead of OpenGL. From memory, this was a commercial product, whereas GLScene was licensed under the GPL. (EDIT: the page seems to be down at the moment: http://www.ksdev.com/index.html)
GLScene is still in active development and provides a fairly comprehensive library of functions, base objects and texturing etc. Things like rotation, translation, pitch, roll, turn, ray casting - to name a few - are all provided for you. Visibility culling is provided for each base object as well as viewing cameras, lighting and meshes. Base objects include cubes, spheres, pipes, tetrahedrons, cones, terrain, grids, 3d text, arrows to name a few.
Objects can be picked with the mouse and moved along 1,2 or 3 axes. Helper functions are included to automatically calculate the top-most object the mouse is under. Complex 3D shapes can be built by attaching base objects to other base objects in a hierarchical manner. So, for example, a car could be built using a rectangle as the base object and attaching four cylinders to it for the wheels - then you can manipulate the 'car' as a whole - since the four cylinders are attached to the base rectangle.
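The hierarchical-attachment idea above (the car built from a rectangle with four attached cylinders) boils down to children storing their position relative to a parent, so moving the parent implicitly moves everything attached to it. A toy sketch of that pattern, with translation only; the class and method names are mine, not GLScene's:

```python
class SceneObject:
    """Toy scene-graph node: children inherit the parent's translation,
    mirroring GLScene's attach-objects-to-objects pattern (names are
    illustrative, not GLScene's API)."""

    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.pos = (x, y, z)       # position RELATIVE to the parent
        self.children = []

    def attach(self, child):
        """Parent another object under this one; returns the child."""
        self.children.append(child)
        return child

    def move(self, dx, dy, dz):
        # Moving the parent moves every attached child for free,
        # because children are stored relative to it.
        x, y, z = self.pos
        self.pos = (x + dx, y + dy, z + dz)

    def world_pos(self, origin=(0.0, 0.0, 0.0)):
        """Absolute position: parent's world position plus local offset."""
        return tuple(o + p for o, p in zip(origin, self.pos))
```

Real scene graphs accumulate full transform matrices (rotation and scale too) the same way, parent to child.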
The only downside I could bring to your attention is the sometimes limited help/support available to you. Yes, there is a reference manual and a number of demo applications to show you how to do things such as select objects and move them around, however the reference manual is not complete and there is potential to get 'stuck' on how to accomplish a certain task. Forum support is somewhat limited/sparse. If you have a sound knowledge of 3D basics and concepts I'm sure you could nut it out.
As for Firemonkey - I have had no experience with this so I can't comment. I believe this is more targeted at mobile applications with lower hardware requirements so you may have issues with larger data sets.
Here are some other links that you may consider - I have no experience with them:
http://www.truevision3d.com/
http://www.3impact.com/
Game Development in Delphi
The last one is targeted at game development - but may provide useful information.
Have you tried glData? http://gldata.sourceforge.net/
It is old (~2004, Delphi 7), and I have not personally used the library, but some of the output is amazing.
You can use GLScene or OpenGL; both are good for 3D rendering and very easy to use.
Since you are already using georeferenced data, maybe you should consider embedding GoogleEarth in your Delphi application like this? Then you can add data to it as points, paths, or objects.

Can you prewarm a shader on a background thread with its own context?

I am developing a large game that streams in level data (including shaders) as you move through the game world. I do not want to have hitches in my frame rate as shaders are compiled/linked or on the first time they are used.
I have my shader compilation and linking working on a separate thread with its own OpenGL context. But I have not been able to get prewarming of the shaders to work on that separate thread (so that there is no performance hit when a shader is first used).
Prewarming is really not mentioned anywhere in the iOS or OpenGL docs. It is, however, mentioned in the OpenGL ES Analyzer (one of the instruments available when profiling from Xcode). In this tool I get a "Shader Compiled Outside of Prewarming Phase" warning each time something is rendered with a shader that has not been used to render anything before. The "Extended detail" says this:
"OpenGL ES Analyzer detected a shader compilation that is not part of an initial prewarming phase. Shader compilation can be a time consuming operation. To avoid them, prewarm all shaders used for rendering. To do this, make a prewarming pass when your application launches and execute a drawing call with each of the shader programs to be used, using any gl state settings the shader program will be used in conjunction with. States such as blending, color mask, logic ops, multisampling, texture formats, and point primitive state can all affect shader compilation."
The term "compilation" is a little confusing here. The vertex and fragment shaders have already been compiled and the program has been linked. But the first time something is rendered with a given OpenGL state it does some more work on the shader to optimize it for that state I guess.
I have code to prewarm the shaders by rendering a zero-sized triangle before each one's first use.
If I compile, link, and prewarm the shaders on the main thread with the same OpenGL context as the normal rendering, then it works. However, if I do it on the background thread with its separate OpenGL context, it does not work (I still get the Analyzer warning on first use).
So... it could be that prewarming a shader on a separate context has no effect on other contexts. Or it could be that I don't have all the same state set up in the separate context. There is a lot of potential OpenGL state that might need to be set up. I'm using an offscreen render buffer on the background thread, so that could be considered part of the state.
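Structurally, the prewarming pass the Analyzer asks for looks roughly like the sketch below: one dummy draw per (program, GL-state) combination, issued on the same context that will do the real rendering. This is Python with placeholder callables standing in for the actual GL calls, purely to show the bookkeeping, not real OpenGL code.

```python
def prewarm(programs, state_sets, use_program, apply_state, draw_dummy):
    """Dummy-draw every program under every GL state it will later be
    used with, so any state-specific driver recompilation happens up
    front instead of mid-game.

    programs:    iterable of (already compiled and linked) program handles.
    state_sets:  dict mapping program -> list of state dicts (blending,
                 color mask, texture formats, ...) it will be used with.
    use_program, apply_state, draw_dummy: callables wrapping the real GL
    calls (e.g. glUseProgram, state setters, a zero-sized-triangle draw).
    """
    warmed = []
    for program in programs:
        # Programs with no recorded states still get one default pass.
        for state in state_sets.get(program, [{}]):
            use_program(program)
            apply_state(state)
            draw_dummy()  # e.g. a zero-sized triangle
            warmed.append((program, tuple(sorted(state.items()))))
    return warmed
```

The open question above is whether `use_program`/`draw_dummy` issued on a second shared context warms the driver's cache for the first one.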
Has anyone succeeded in getting prewarming working on a background thread?
To be honest with you, I was quite ignorant on this matter until yesterday, though I have been working on my engine's optimization for a while. So, first of all, thank you for the tip :).
I have since studied the shader-warming topic and have not found much around.
I did find a mention in the official AMD documentation, in a document titled "ATI OpenGL Programming and Optimization Guide":
http://developer.amd.com/media/gpu_assets/ATI_OpenGL_Programming_and_Optimization_Guide.pdf
This is an excerpt that refers to warming the shaders:
Quote:
While the R500 natively supports flow control in the fragment shading unit, the R300 and R400 ASICs do not. Static flow control for the R300 and R400 is emulated by the driver compiling out unused conditionals and unrolling loops based on the set constants. Even though the R500 ASIC family natively supports flow control, the driver will still attempt to compile out static flow conditions, enabling it to reorganize shader instructions for better instruction scheduling. The driver will also try to cache away the compiled shader for a specific static flow condition set in anticipation of its reuse. So when writing a fragment program that uses static flow control, it is recommended to "warm" the shader cache by rendering a dummy triangle on the very first frame that uses the common static conditional permutations relevant for the life of the shader.
The best explanation I have found around is the following:
http://fgiesen.wordpress.com/2011/07/01/a-trip-through-the-graphics-pipeline-2011-part-1/
Quote:
Incidentally, this is also the reason why you’ll often see a delay the first time you use a new shader or resource; a lot of the creation/compilation work is deferred by the driver and only executed when it’s actually necessary (you wouldn’t believe how much unused crap some apps create!). Graphics programmers know the other side of the story – if you want to make sure something is actually created (as opposed to just having memory reserved), you need to issue a dummy draw call that uses it to “warm it up”. Ugly and annoying, but this has been the case since I first started using 3D hardware in 1999 – meaning, it’s pretty much a fact of life by this point, so get used to it. :)
This presentation mentions how Crytek did it in the Far Cry engine, though it is mostly related to DirectX.
http://www.powershow.com/view/11f2b1-MzUxN/Far_Cry_and_DirectX_flash_ppt_presentation
I hope these links help in some way.
