XNA draw: using one SpriteBatch for the whole game - xna

I am developing an XNA game. This time I am being careful about the architecture. Until today, I have always implemented my own Draw method this way:
public void Draw(SpriteBatch sb, GameTime gameTime)
{
    sb.Begin();
    // ... to draw ...
    sb.End();
}
I was digging into DrawableGameComponent and saw that its Draw method comes with this signature:
public void Draw(GameTime gameTime)
{
    // ...
}
First of all, I know that SpriteBatch can collect many Texture2D draws between Begin and End, so it can sort them or draw them with the same Effect.
My question is about the performance cost of passing the SpriteBatch around. In a DrawableGameComponent it is possible to use the game object's SpriteBatch, provided that field is public.
What is the suggested approach - what should an XNA game programmer do?
Thanks in advance.

One of the serious disadvantages of DrawableGameComponent is that you're locked into its provided method signature. While there's nothing "wrong", per se, with DrawableGameComponent, do not think of it as the "one true architecture". You're better off thinking of it as an example of a possible architecture.
If you find yourself needing to pass a SpriteBatch (or anything else) to the draw method of a "game component" - the best way is to pass it as an argument. Anything else is convoluted.
Obviously this means that you can't use XNA's provided GameComponent system, and you have to make your own alternative. But this is almost trivial: At its most basic level, it's just a list of some base type that has appropriate virtual methods.
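As a minimal sketch of such a hand-rolled alternative (the MyComponent name and shape are mine, not an XNA API):

public abstract class MyComponent
{
    public virtual void Update(GameTime gameTime) { }
    public virtual void Draw(SpriteBatch sb, GameTime gameTime) { }
}

// In your Game subclass:
List<MyComponent> components = new List<MyComponent>();

protected override void Draw(GameTime gameTime)
{
    spriteBatch.Begin();
    foreach (MyComponent component in components)
        component.Draw(spriteBatch, gameTime); // the batch is just an argument
    spriteBatch.End();
    base.Draw(gameTime);
}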
Of course, if you must use GameComponent - or your game is so simple (eg: a prototype) that you don't really care about the architecture - then you can use basically any method you like to get a SpriteBatch to your draw method. They all have disadvantages.
Probably the next-most architecturally robust method is to pass your SpriteBatch instance into the constructor of each of your components. This keeps your components decoupled from your game class.
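A sketch of what that looks like (HudComponent is an invented name):

public class HudComponent : DrawableGameComponent
{
    private readonly SpriteBatch spriteBatch;

    public HudComponent(Game game, SpriteBatch spriteBatch)
        : base(game)
    {
        // Depend on the batch itself, not on a concrete Game subclass
        this.spriteBatch = spriteBatch;
    }

    public override void Draw(GameTime gameTime)
    {
        spriteBatch.Begin();
        // ... draw HUD elements ...
        spriteBatch.End();
        base.Draw(gameTime);
    }
}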
On the other hand, if you're throwing architecture to the wind, I'd suggest making your MyGame.spriteBatch field public static. This is the simplest way to allow it to be accessed anywhere. It's easy to implement and easy to clean up later when/if you need to.
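That quick-and-dirty version is just this (texture and position stand in for whatever you're drawing):

public class MyGame : Game
{
    public static SpriteBatch spriteBatch;

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }
}

// ...then from anywhere in the game:
MyGame.spriteBatch.Draw(texture, position, Color.White);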
To answer your question about performance: anything to do with passing a SpriteBatch around will have an almost negligible effect on performance (provided the order of calls to Draw/Begin/End stays the same). Don't worry about it.
(If you see SpriteBatch whatever in code, that variable is a reference. A reference is a 32-bit value (in a 32-bit program, which all XNA games are). That's the same size as an int or a float. It's very small and very cheap to pass around, access, etc.)

If you stumbled upon this question, you were probably looking for a nice and generic solution. I would suggest you have a look at this:
https://gamedev.stackexchange.com/questions/14217/several-classes-need-to-access-the-same-data-where-should-the-data-be-declared/14232#14232
I felt the need to add a correction here, because "the best way is to pass it as an argument. Anything else is convoluted." is simply not correct.
Personally, I am now doing it this way in my GameBase class:
protected override void LoadContent()
{
    // Create a new SpriteBatch, which can be used to draw textures.
    SpriteBatch = new SpriteBatch(GraphicsDevice);
    this.Services.AddService(typeof(SpriteBatch), SpriteBatch);
}
Now, since you're adding DrawableGameComponents in the Initialize method of your Game class, you will be able to call

var spriteBatch = (SpriteBatch)this.Game.Services.GetService(typeof(SpriteBatch));

(note that GetService returns object, so you need the cast).
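For example, a component might grab the shared batch in its LoadContent - a sketch, with ExampleComponent being a made-up name:

public class ExampleComponent : DrawableGameComponent
{
    private SpriteBatch spriteBatch;

    public ExampleComponent(Game game) : base(game) { }

    protected override void LoadContent()
    {
        // Fetch the shared SpriteBatch registered by GameBase.LoadContent above
        spriteBatch = (SpriteBatch)Game.Services.GetService(typeof(SpriteBatch));
        base.LoadContent();
    }
}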
I'd say that's the cleanest approach to solve the problem.

If the DrawableGameComponent should share the parent's SpriteBatch, then just pass it in via the constructor and store it in a private member (or expose it as a property if you wish).
You could also expose the SpriteBatch as a property of your Game class if you wanted, like you suggested, but every time you referenced this.Game, you would need to cast it to your specific Game class type.
((MyGame)this.Game).SpriteBatch.Draw(...)
Or you can just have the DrawableGameComponent create a new SpriteBatch. There's nothing wrong with that (as far as I've ever seen). I suppose it depends how many DrawableGameComponents you'll be creating and how often.
Also browse through the search results for DrawableGameComponent - there's a lot of good advice there.

Related

Is making WebGL context object a global/semi-global variable a bad idea?

So, my idea is to do something like that (the code is simplified of course):
var gl;
function Renderer(canvas) {
    gl = this.gl = canvas.getContext('experimental-webgl');
}
function Object() {
}
Object.prototype.render = function() {
    ...
    gl.drawElements(...);
}
The gl variable itself can be placed into a namespace for better consistency; it can also be encapsulated by wrapping all the code in an anonymous function to make sure it won't clash with anything.
I can see one obvious tradeoff here: problems with running multiple WebGL canvases on the same page. But I'm totally fine with it.
Why do that? Because otherwise it's more painful to call any WebGL function: you have to pass your renderer as a parameter here and there. That's actually the thing I don't like about Three.js: all the graphics stuff is handled inside a Renderer object, which makes the whole Renderer object huge and complicated.
With a globally visible context, you don't have to bother about OpenGL constants, you don't have to worry about your renderer object's visibility, and so on.
So, my question is: should I expect any traps with this approach? Aside from potential emptiness of the gl variable, of course.
Define bad
Lots of WebGL programs do this. OpenGL does this by default, since its functions are global in scope. In normal OpenGL you have to call eglMakeCurrent (or the equivalent) to switch contexts, which is effectively just doing a hidden gl = contextToMakeCurrent under the hood.
So, basically it's up to you. If you think someday you're going to need multiple WebGL contexts then it might be wise to not have your contexts use global variables. But you can always fallback to the eglMakeCurrent style of coding. Both have their pluses and minuses.

Integrating Ninject with XNA 4.0

I am trying to integrate Ninject with XNA; however, I am having a bit of pain getting it all hooked up.
The problem is I am trying to do it as I think it should be done, so I am decoupling things as much as possible to make it more modular, and XNA isn't happy with me doing this. An example of this is where the Game object news up a lot of things internally, and a lot of those things could do with being passed around to my objects. So I decided to add these as constructor arguments to the Game class, so I can set up all the dependencies and then use Ninject to give me the Game instance with all its dependencies resolved.
Unfortunately, in this example XNA doesn't normally new up its SpriteBatch until it has gotten to the Initialize stage, which I assume is because it needs the GraphicsDeviceManager to get the window handle to create the GraphicsDevice before it can be exposed for anything to use...
So because of this they cannot be injected into the game, because it needs to do its stuff first, and I cannot really push them in after the game is made, because I need other things like the World component to be injected with things as well and pushed into the game.
At this point I thought maybe I could create my own GraphicsDevice, but as I need the handle I would need to make my own window too, which I later realised XNA would dispose and recreate anyway. So it leaves me with a horrible taste in my mouth... as I don't fancy re-writing the Game and GraphicsDeviceManager classes, which would require me to re-write lots of other functionality as they are all internal to the XNA assembly...
So has anyone managed to setup their dependency injection without having to re-write large chunks of XNA?
Here is an example of my current module:
public class XnaModule : NinjectModule
{
    public override void Load()
    {
        Kernel.Bind<IServiceProvider>().To<NinjectServiceProvider>().InSingletonScope();
        Kernel.Bind<IGraphicsDeviceService>().To<GraphicsDeviceManager>().InSingletonScope();
        Kernel.Bind<SpriteBatch>().ToSelf().InSingletonScope();

        var contentManager = new ContentManager(Kernel.Get<IServiceProvider>());
        contentManager.RootDirectory = "Content";
        Kernel.Bind<ContentManager>().ToConstant(contentManager).InSingletonScope();

        var game = new MyGame(Kernel.Get<IGraphicsDeviceService>() as GraphicsDeviceManager,
                              Kernel.Get<SpriteBatch>(), contentManager);
        Kernel.Bind<Game>().ToConstant(game).InSingletonScope();
    }
}
Then the game constructor:
public class MyGame : Game
{
    public SpriteBatch SpriteBatch { get; private set; }
    public GraphicsDeviceManager Graphics { get; private set; }

    public MyGame(GraphicsDeviceManager graphics, SpriteBatch spriteBatch, ContentManager contentManager)
    {
        Graphics = graphics;
        SpriteBatch = spriteBatch;
        Content = contentManager;
    }

    // Other stuffs
}
Both are just examples so you can see the sort of approach I am taking. The module bypasses the circular dependency issue because I new the game up myself. I have excluded the GameModule, which would contain the dependencies for my actual game objects (factories, GUI components, etc.); the problem is they ultimately need to be injected into the Game too, and then we have a catch-22.
They cannot be injected until the dependencies are resolved, but you cannot resolve the dependencies until you have started the game... it seems a lose/lose situation. So can anyone tell me how they got round this issue?
I did find http://steveproxna01di.codeplex.com/ but was looking to see if there are any more examples, as this is more of a service-locator approach rather than an injection approach.
As no one has answered this, I will put in that I basically turned my Game instance into a wrapper and bootstrapper for a custom IGame instance, which is the root injection point. This way I am able to inject whatever I want.
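In sketch form, that approach could look like this. The IGame interface, GameBootstrapper, and MyRealGame are my names for illustration; only StandardKernel, Bind/ToConstant/ToSelf/Get are actual Ninject calls:

public interface IGame
{
    void Initialize();
    void Update(GameTime gameTime);
    void Draw(GameTime gameTime);
}

// The XNA Game subclass becomes a thin bootstrapper around the real game.
public class GameBootstrapper : Game
{
    private readonly GraphicsDeviceManager graphics;
    private IKernel kernel;
    private IGame game;

    public GameBootstrapper()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void Initialize()
    {
        // By now XNA has created the GraphicsDevice, so everything
        // that depends on it can be resolved without the catch-22.
        kernel = new StandardKernel();
        kernel.Bind<Game>().ToConstant(this);
        kernel.Bind<GraphicsDevice>().ToConstant(GraphicsDevice);
        kernel.Bind<ContentManager>().ToConstant(Content);
        kernel.Bind<SpriteBatch>().ToSelf().InSingletonScope();
        kernel.Bind<IGame>().To<MyRealGame>().InSingletonScope(); // your root object

        game = kernel.Get<IGame>();
        game.Initialize();
        base.Initialize();
    }

    protected override void Update(GameTime gameTime)
    {
        game.Update(gameTime);
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        game.Draw(gameTime);
        base.Draw(gameTime);
    }
}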

XNA/C#: Entity Factories and typeof(T) performance

In our game (targeted at mobile) we have a few different entity types and I'm writing a factory/repository to handle instantiation of new entities. Each concrete entity type has its own factory implementation and these factories are managed by an EntityRepository.
I'd like to implement the repository as such:
class Repository
{
    private Dictionary<System.Type, IEntityFactory<IEntity>> factoryDict;

    // 'class' added to the constraint so the 'as T' cast compiles
    public T CreateEntity<T>(params) where T : class, IEntity
    {
        return factoryDict[typeof(T)].CreateEntity() as T;
    }
}
Usage example:
var enemy = repo.CreateEntity<Enemy>();
but I am concerned about performance, specifically related to the typeof(T) operation in the above. It is my understanding that the compiler will not be able to determine T's type, and that it will have to be determined at runtime via reflection - is this correct? One alternative is:
class Repository
{
    private Dictionary<System.Type, IEntityFactory> factoryDict;

    public IEntity CreateEntity(System.Type type, params)
    {
        return factoryDict[type].CreateEntity();
    }
}
which will be used as
var enemy = (Enemy)repo.CreateEntity(typeof(Enemy), params);
in this case whenever typeof() is called the type is on hand and can be determined by the compiler (right?), and performance should be better. Will there be a noticeable difference? Any other considerations? I know I can also just have a method such as CreateEnemy in the repository (we only have a few entity types), which would be faster, but I would prefer to keep the repository as entity-unaware as possible.
EDIT:
I know that this is most likely not a bottleneck; my concern is just that it is such a waste to use up time on reflection when there is a slightly less sugared alternative available. And I think it's an interesting question :)
I did some benchmarking, which proved quite interesting (and which seems to confirm my initial suspicions).
Using the performance measurement tool I found at
http://blogs.msdn.com/b/vancem/archive/2006/09/21/765648.aspx
(which runs a test method several times and displays metrics such as average time etc) I conducted a basic test, testing:
private static T GenFunc<T>() where T : class
{
    return dict[typeof(T)] as T;
}
against
private static Object ParamFunc(System.Type type)
{
    var d = dict[type];
    return d;
}
called as
str = GenFunc<string>();
vs
str = (String)ParamFunc(typeof(String));
respectively. ParamFunc shows a remarkable improvement in performance (it executes on average in 60-70% of the time it takes GenFunc), but the test is quite rudimentary and I might be missing a few things - specifically how the casting is performed in the generic function.
An interesting aside is that there is little (negligible) performance gained by 'caching' the type in a variable and passing it to ParamFunc vs using typeof() every time.
Generics in C# don't use or need reflection.
Internally types are passed around as RuntimeTypeHandle values. And the typeof operator maps to Type.GetTypeFromHandle (MSDN). Without looking at Rotor or Mono to check, I would expect GetTypeFromHandle to be O(1) and very fast (eg: an array lookup).
So in the generic (<T>) case you're essentially passing a RuntimeTypeHandle into your method and calling GetTypeFromHandle in your method. In your non-generic case you're calling GetTypeFromHandle first and then passing the resultant Type into your method. Performance should be near identical - and massively outweighed by other factors, like any places you're allocating memory (eg: if you're using the params keyword).
But it's a factory method anyway. Surely it won't be called more than a couple of times per second? Is it even worth optimising?
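For what it's worth, you can sanity-check the "near identical" claim with a plain Stopwatch harness along these lines (a sketch; the dictionary payload, names, and iteration count are arbitrary):

using System;
using System.Collections.Generic;
using System.Diagnostics;

static class TypeLookupBenchmark
{
    private static readonly Dictionary<Type, object> dict =
        new Dictionary<Type, object> { { typeof(string), "payload" } };

    private static T GenFunc<T>() where T : class
    {
        return dict[typeof(T)] as T;
    }

    private static object ParamFunc(Type type)
    {
        return dict[type];
    }

    static void Main()
    {
        const int iterations = 10000000;
        int checksum = 0;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            checksum += GenFunc<string>().Length; // use the result so nothing is optimised away
        Console.WriteLine("GenFunc:   {0} ms", sw.ElapsedMilliseconds);

        sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            checksum += ((string)ParamFunc(typeof(string))).Length;
        Console.WriteLine("ParamFunc: {0} ms (checksum {1})", sw.ElapsedMilliseconds, checksum);
    }
}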
You always hear how slow reflection is, but in C# there is actually fast reflection and slow reflection. typeof is fast reflection - it has basically the overhead of a method call, which is nearly infinitesimal.
I would bet a steak and lobster dinner that this isn't going to be a performance bottleneck in your application, so it's not even worth your (or our) time in trying to optimize it. It's been said a million times before, but it's worth saying again: "Premature optimization is the root of all evil."
So, finish writing the application, then profile to determine where your bottlenecks are. If this turns out to be one of them, then and only then spend time optimizing it. And let me know where you'd like to have dinner.
Also, my comment above is worth repeating, so you don't spend any more time reinventing the wheel: Any decent IoC container (such as AutoFac) can [create factory methods] automatically. If you use one of those, there is no need to write your own repository, or to write your own CreateEntity() methods, or even to call the CreateEntity() method yourself - the library does all of this for you.

A pragmatic view on private vs public

I've always wondered about the topic of public, protected and private properties. I can easily recall times when I had to hack on somebody's code, and having the hacked-upon class's variables declared as private was always upsetting.
Also, there were (more) times I wrote a class myself and never recognized any potential gain from making a property private. I should note here that using public vars is not a habit of mine: I adhere to the principles of OOP by utilizing getters and setters.
So, what's the whole point in these restrictions?
The use of private and public is called Encapsulation. It is the simple insight that a software package (class or module) needs an inside and an outside.
The outside (public) is your contract with the rest of the world. You should try to keep it simple, coherent, obvious, foolproof and, very important, stable.
If you are interested in good software design, the rule simply is: make all data private, and make methods public only when they need to be.
The principle behind hiding the data is that the sum of all fields in a class defines the object's state. For a well-written class, each object should be responsible for keeping its state valid. If part of the state is public, the class can never give such guarantees.
A small example, suppose we have:
class MyDate
{
public int y, m, d;
public void AdvanceDays(int n) { ... } // complicated month/year overflow
// other utility methods
};
You cannot prevent a user of the class from ignoring AdvanceDays() and simply doing:

date.d = date.d + 1; // next day
But if you make y, m, d private and test all your MyDate methods, you can guarantee that there will only be valid dates in the system.
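For contrast, here's one possible encapsulated version (the SetDate method and its validation details are my invention, not from the original answer):

class MyDate
{
    private int y = 2000, m = 1, d = 1;

    public void AdvanceDays(int n)
    {
        // All month/year overflow handled in one audited place.
        DateTime t = new DateTime(y, m, d).AddDays(n);
        y = t.Year; m = t.Month; d = t.Day;
    }

    public void SetDate(int year, int month, int day)
    {
        // Reject invalid state at the boundary instead of trusting callers.
        if (month < 1 || month > 12 ||
            day < 1 || day > DateTime.DaysInMonth(year, month))
            throw new ArgumentException("Not a valid date.");
        y = year; m = month; d = day;
    }
}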
The whole point is to use private and protected to prevent exposing internal details of your class, so that other classes only have access to the public "interfaces" provided by your class. This can be worthwhile if done properly.
I agree that private can be a real pain, especially if you are extending classes from a library. A while back I had to extend various classes from the Piccolo.NET framework, and it was refreshing that they had declared everything I needed as protected instead of private, so I was able to extend everything I needed without having to copy their code and/or modify the library. An important take-away lesson from that: if you are writing code for a library or other "re-usable" component, you really should think twice before declaring anything private.
The keyword private shouldn't be used to privatize a property that you want to expose, but to protect the internal code of your class. I find it very helpful, because it helps you separate the portions of your code that must be hidden from those that can be accessible to everyone.
One example that comes to mind is when you need to do some sort of adjustment or check before setting/getting the value of a private member. Therefore you'd create a public setter/getter with some logic (a null check or some other calculation) instead of accessing the private variable directly and always having to handle that logic in your code. It helps with code contracts and what is expected.
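A quick sketch of that kind of guarded setter (the Player/Health names and the 0-100 range are illustrative):

class Player
{
    private int health;

    public int Health
    {
        get { return health; }
        set
        {
            // Clamp rather than trusting every caller to stay in range;
            // also a convenient single place to set a breakpoint.
            if (value < 0) value = 0;
            if (value > 100) value = 100;
            health = value;
        }
    }
}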
Another example is helper functions. You might break some of your bigger logic down into smaller functions, but that doesn't mean you want everyone to see and use these helper functions; you only want them to access your main API functions.
In other words, you want to hide some of the internals in your code from the interface.
See some videos on APIs, such as this Google talk.
Having recently had the extreme luxury of being able to design and implement an object system from scratch, I took the policy of forcing all variables to be (equivalent to) protected. My goal was to encourage users to always treat the variables as part of the implementation and not the specification. OTOH, I also left in hooks to allow code to break this restriction as there remain reasons to not follow it (e.g., the object serialization engine cannot follow the rules).
Note that my classes did not need to enforce security; the language had other mechanisms for that.
In my opinion, the most important reason to use private members is hiding implementation, so that it can be changed in the future without changing descendants.
Some languages - Smalltalk, for instance - don't have visibility modifiers at all.
In Smalltalk's case, all instance variables are always private and all methods are always public. A developer indicates that a method's "private" - something that might change, or a helper method that doesn't make much sense on its own - by putting the method in the "private" protocol.
Users of a class can then see that they should think twice about sending a message marked private to that class, but still have the freedom to make use of the method.
(Note: "properties" in Smalltalk are simply getter and setter methods.)
I personally rarely make use of protected members. I usually favor composition, the decorator pattern or the strategy pattern. There are very few cases in which I trust a subclass(ing programmer) to handle protected variables correctly. Sometimes I have protected methods to explicitly offer an interface specifically for subclasses, but these cases are actually rare.
Most of the time I have an abstract base class with only public pure virtuals (talking C++ now), and implementing classes implement these. Sometimes they add some special initialization methods or other specific features, but the rest is private.
First of all, 'properties' could refer to different things in different languages. For example, in Java you would mean instance variables, whilst C# has a distinction between the two.
I'm going to assume you mean instance variables since you mention getters/setters.
The reason as others have mentioned is Encapsulation. And what does Encapsulation buy us?
Flexibility
When things have to change (and they usually do), we are much less likely to break the build by properly encapsulating properties.
For example, we may decide to make a change like:

int getFoo()
{
    return foo;
}

becoming:

int getFoo()
{
    return bar + baz;
}
If we had not encapsulated 'foo' to begin with, then we'd have much more code to change. (than this one line)
Another reason to encapsulate a property, is to provide a way of bullet-proofing our code:
void setFoo(int val)
{
    if (val < 0)
        throw MyException(); // or silently ignore
    foo = val;
}
This is also handy as we can set a breakpoint in the mutator, so that we can break whenever something tries to modify our data.
If our property was public, then we could not do any of this!

Elegantly reducing the number of dependencies in ASP.NET MVC controllers

We are developing what is becoming a sizable ASP.NET MVC project and a code smell is starting to raise its head.
Every controller has 5 or more dependencies, some of these dependencies are only used for 1 of the action methods on the controller but obviously are created for every instance of the controller.
I'm struggling to think of a good way to reduce the number of objects that are created needlessly for 90% of calls.
Here are a few ideas I'm toying around with:
Splitting the controllers down into smaller, more targeted ones.
Currently we have roughly one controller per domain entity; this has led to nice-looking URLs which we would like to emulate, meaning we would end up with a much more complicated routing scheme.
Passing in an interface wrapping the IoC container.
This would mean the objects would only be created when they were explicitly required. However, this just seems like putting lipstick on a pig.
Extending the framework in some way to achieve some crazy combination of the two.
I feel that others must have come across this same problem; so how did you solve this or did you just live with it because it isn't really that big a problem in your eyes?
I've been pondering solutions to this very problem, and this is what I've come up with:
Inject your dependencies into your controller actions directly, instead of into the controller constructor. This way you are only injecting what you need to.
I've literally just whipped this up, so it's slightly naive and not tested in anger, but I intend to implement it ASAP to try it out. Suggestions welcome!
It's of course StructureMap-specific, but you could easily use a different container.
In Global.asax:

protected void Application_Start()
{
    ControllerBuilder.Current.SetControllerFactory(
        new StructureMapControllerFactory());
}
Here is the StructureMapControllerFactory:
public class StructureMapControllerFactory : DefaultControllerFactory
{
    protected override IController GetControllerInstance(Type controllerType)
    {
        try
        {
            var controller =
                ObjectFactory.GetInstance(controllerType) as Controller;
            controller.ActionInvoker =
                new StructureMapControllerActionInvoker();
            return controller;
        }
        catch (StructureMapException)
        {
            System.Diagnostics.Debug.WriteLine(ObjectFactory.WhatDoIHave());
            throw;
        }
    }
}
And the StructureMapControllerActionInvoker (which could do with being a bit more intelligent):
public class StructureMapControllerActionInvoker : ControllerActionInvoker
{
    protected override object GetParameterValue(
        ControllerContext controllerContext,
        ParameterDescriptor parameterDescriptor)
    {
        object parameterValue;
        try
        {
            parameterValue = base.GetParameterValue(
                controllerContext, parameterDescriptor);
        }
        catch (Exception)
        {
            // Fall back to the container for action parameters the
            // default binder can't supply.
            parameterValue =
                ObjectFactory.TryGetInstance(
                    parameterDescriptor.ParameterType);
            if (parameterValue == null)
                throw; // rethrow, preserving the stack trace
        }
        return parameterValue;
    }
}
There is the concept of a "service locator" that has been added to frameworks like Prism. It has the advantage of reducing that overhead.
But, as you say, it's just hiding things under the carpet. The dependencies do not go away, and you just made them less visible, which goes against one of the goals of using DI (clearly stating what you depend on), so I'd be careful not to overuse it.
Maybe you'd be better served by delegating some of the work. If there is some way you were intending to re-partition your controller, you might just want to create that class and make your controller obtain an instance of it through DI.
It will not reduce the creation cost, since the dependencies would still be resolved at creation time, but at least you'd isolate those dependencies by functionality and keep your routing scheme simple.
I would consider separately the problem of dependencies and creation of dependent objects. The dependency is simply the fact that the controller source code references a certain type. This has a cost in code complexity, but no runtime cost to speak of. The instantiation of the object, on the other hand, has a runtime cost.
The only way to reduce the number of code dependencies is to break up the controllers. Anything else is just making the dependencies a bit prettier, as you say. But making the dependencies (as opposed to instantiation of the dependent objects, which I'll cover in a second) prettier may well be enough of a solution that you don't need to break up the controllers. So IoC is a decent solution for this, I think.
Re: creating the objects, you write, "...some of these dependencies are only used for 1 of the action methods on the controller but obviously are created for every instance of the controller." This strikes me as the real problem, rather than the dependency, per se. Because it's only going to get worse as your project expands. You can fix this problem by changing the instantiation of the objects so that it does not happen until they are needed. One way would be to use properties with lazy instantiation. Another way would be to use arguments to your action methods with model binders which instantiate the objects you need. Yet another way would be to write functions which return the instances you need. It's hard to say which way is best without knowing the purpose of the objects you're using.
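For instance, the lazy-property option might be sketched like this. The controller, IReportService, and the Func&lt;T&gt; auto-factory are all illustrative; the latter assumes your container can resolve Func&lt;T&gt; factories, as several can:

public class ProductsController : Controller
{
    private readonly Func<IReportService> reportServiceFactory;
    private IReportService reportService;

    // The container injects a cheap factory delegate, not the heavy service.
    public ProductsController(Func<IReportService> reportServiceFactory)
    {
        this.reportServiceFactory = reportServiceFactory;
    }

    // The expensive dependency is only created when an action actually uses it.
    private IReportService ReportService
    {
        get
        {
            if (reportService == null)
                reportService = reportServiceFactory();
            return reportService;
        }
    }

    public ActionResult Report()
    {
        return View(ReportService.BuildReport());
    }
}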
Your controllers may be becoming too "fat". I suggest creating an application tier which sits beneath your controllers. The application tier can encapsulate a lot of the orchestration going on inside your controller actions and is much more testable. It will also help you organize your code without the constraints of designated action methods.
Using ServiceLocation will also help (and yes, I'm essentially reiterating Denis Troller's answer- which probably isn't good but I did vote his answer up).
