I was reading Real-Time Rendering (Third Edition). In Section 7.9 Combining Lights and Materials, it says that "If dynamic branches are not used, then a different shader needs to be generated for each possible combination of lights and material type".
I think static branching is well suited to this problem. Is that right?
Presumably your 'engine' code determines which sorts of lights and materials are in your scene, and that is (usually) dynamic. To communicate this to your shader, you need to pass the information in some way - through uniforms, textures or buffers. That data is not static, and thus any conditional expression based on the information contained in it is a dynamic branch.
Some graphics APIs/drivers do an optimization under the hood, essentially recompiling your shaders with multiple branches predicted, and call this 'static branching'. For example, the shader documentation for D3D9 states that:
Static branching allows blocks of shader code to be switched on or off
based on a Boolean shader constant.
In other APIs such as OpenGL, it's not well advertised exactly when these sorts of optimizations occur, although presumably they do happen. It may (and will) also differ from driver to driver. However, it's far more reliable to write the combinations yourself; that way you know what type of branching is being used (none!).
If your scene has unchanging lights and materials, then you could "hardcode" the light and material types in your shader (say as constants). However, this is essentially what the quote is saying - you can use static branches, but a different shader needs to be generated for every combination of lights and materials.
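To make that concrete, here is a minimal sketch of how an engine typically generates those combinations up front, written as WebGL-flavoured TypeScript. The function, the define names (NUM_LIGHTS, MATERIAL_TYPE) and the assumption that the base GLSL source carries no #version line are illustrative, not anything the book prescribes:

```typescript
// Compile one shader variant per light-count / material-type combination by
// prepending #define lines; the GLSL preprocessor resolves them at compile
// time, so the compiled shader contains no branch at all.
function compileVariant(
  gl: WebGLRenderingContext,
  baseFragmentSource: string, // shader body guarded by #if NUM_LIGHTS / MATERIAL_TYPE
  numLights: number,
  materialType: number
): WebGLShader {
  const source =
    `#define NUM_LIGHTS ${numLights}\n` +
    `#define MATERIAL_TYPE ${materialType}\n` +
    baseFragmentSource;

  const shader = gl.createShader(gl.FRAGMENT_SHADER)!;
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}
```

Because NUM_LIGHTS and MATERIAL_TYPE are compile-time constants, any #if or loop bound that uses them is resolved by the preprocessor, at the cost of one compiled program per combination you actually use (typically cached, e.g. keyed by `${numLights}:${materialType}`).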
I am a newbie in WebGL here. :)
I understand vertex data and textures should not be updated very often, BUT when they do change, which one is preferred:
- Destroy the previous buffer (created with STATIC_DRAW) by calling gl.deleteBuffer and create a new one.
- Reuse the same one (DYNAMIC_DRAW to begin with).
(No, I am not using any library, just WebGL directly.)
Does the same rule apply to textures? Thanks
Interestingly enough, I cannot find existing discussions... or maybe I just missed them.
Let's first look at when we would want to delete a resource:
Since OpenGL is a C-style API, it's assumed that the user is tasked with managing memory, and part of that is also managing GPU memory. Due to encapsulation and other design principles one is not always able to share and reuse resources, so the delete* functions exist to free up the allocated resources. In JavaScript the garbage collector makes sure that WebGL resources that go out of scope are flagged for deletion, just like calling delete* does¹, which makes the delete* functions rather superfluous in the context of WebGL.
With that cleared up: you'd generally want to reuse and update your existing resources where possible, since otherwise you'd have to redo all the setup you've already done on them, e.g. setting vertex attribute pointers, or filter and wrapping modes for textures.
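To make that concrete, here is a minimal sketch of the reuse path, assuming a buffer and a texture that were created and configured earlier (the names positionBuffer and spriteTexture are illustrative, and the new image is assumed to match the texture's existing dimensions):

```typescript
// Re-specifying the data store keeps the WebGL object -- and all state
// already attached to it -- alive, so nothing needs to be re-bound to
// vertex attributes or re-configured with filter/wrap parameters.

function updateVertexData(
  gl: WebGLRenderingContext,
  positionBuffer: WebGLBuffer,
  newData: Float32Array
): void {
  gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
  // Same size: overwrite in place.
  gl.bufferSubData(gl.ARRAY_BUFFER, 0, newData);
  // If the size changed, reallocate the data store instead:
  // gl.bufferData(gl.ARRAY_BUFFER, newData, gl.DYNAMIC_DRAW);
}

function updateTexture(
  gl: WebGLRenderingContext,
  spriteTexture: WebGLTexture,
  image: HTMLImageElement // or ImageBitmap / canvas
): void {
  gl.bindTexture(gl.TEXTURE_2D, spriteTexture);
  // Overwrites the pixel data; the filtering and wrapping modes set earlier
  // on this texture object remain in effect.
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, image);
}
```

Deleting and recreating would instead force you to redo that setup (attribute pointers, texture parameters) every time, which is exactly the cost mentioned above.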
My team uses a lot of aggregators (custom counters) for many of the Dataflow pipelines we use for monitoring and analysis purposes.
We mostly write DoFn classes to do so, but we sometimes use Combine.perKey(), by writing our own combine class that implements SerializableFunction<Iterable<T>, S> (usually in our case, T and S are the same). Some of the jobs we run have a small fraction of very hot keys, and we are looking to utilize some of the features offered by Combine (such as hot key fanout), but there is one issue with this approach.
It appears that aggregators are only available within a DoFn, and I am wondering if there is a way around this, or whether this is a feature likely to be added in the future. Mostly, we use a bunch of custom counters to count the number of certain events/objects of different types for analysis and monitoring purposes. In some cases, we can probably apply another DoFn after the Combine step to do this, but in other cases we really need to count things during the combine process -- for instance, we want to know the distribution of objects over keys, to understand how many hot keys we have and what draws the line between hot keys and very hot keys. There are a few other cases that seem tricky to us.
I searched around, but I couldn't find much material on how one can use aggregators during the Combine step, so any help will be really appreciated!
If needed, I can perhaps describe what kind of Combine step we use and what we are trying to count, but it'll take some time and I'd like to have a general solution around this.
This is not currently possible. In the future (as part of Apache Beam) it is likely to be possible to define metrics (which are like aggregators) within a CombineFn which should address this.
In the meantime, for your use case you can do as you describe. You can have a Combine.perKey(), and then have multiple steps consuming the result -- one for your actual processing and others to report various metrics.
You could also look at the methods in CombineFns which allow creating a composed CombineFn. For instance, you could use your CombineFn and a simple Count, so that the reporting DoFn can report the number of elements in each key (consuming the Count) and the actual processing DoFn can consume the result of your CombineFn.
I am just starting to create the models for my newest game, which will be my first game in full 3D.
I have read in a couple of unreliable places that it is far better to create each 3D object as just one mesh and apply different materials to the mesh with weight painting, etc., rather than creating multiple meshes and parenting them to the same armature for animations.
According to these sources, this is because of UV mapping.
Is this true?
At the stage that I am at in creating my models, it would be far easier for me to make each individual part (arms, legs, knees) out of individual meshes and link them all to the same armature. If I do this (not merging vertices together, simply leaving each piece separate and overlapping, while linking them to the same armature), 2 questions:
Will the animations work in a Game Engine, moving all pieces and keeping them all attached where they should be?
If so, will it slow my game's performance to a significant (noticeable) degree because the characters are made of 7 or 8 separate meshes?
NOTE: I am, at this point, at least, planning on using the OpenGL game engine to run my game.
What you can do is create your model from separate meshes, UV unwrap each one, texture it, apply its material, and then, when you're done and only then, merge the meshes into one and link that one mesh to your armature.
I think for Blender it's Ctrl-J for joining.
I know this doesn't necessarily answer your question but it's just what I would do.
What you should do depends on what you need to do with the model. If your character needs to have its equipment or appearance be customizable in some way, or if you need special effects that change the in-game character's appearance, keeping the body parts separate may be the better solution. Team Fortress 2 does this with its customization system (or else you wouldn't be able to do things such as replacing the Heavy's gloves with something else or giving the characters bird heads), while the Super Smash Brothers games take heavy advantage of it to make the character animations look good.
And then there are cases where a character's inherent design would make their animations very difficult to manage if all of their joints were within a single mesh. Examples of such characters include the Goombas from Mario and Kirby from the Kirby games.
And then there's the artistic choice of it. For example, there's no reason to make Pikachu's ears separate from the rest of the model, but the model used from Pokémon Gen VI onwards (including Pokémon Go) does so anyway. If you look really closely, you can even see the seam where the ear model ends and the head begins.
To answer your questions as to the technical details: since you're using a low-level API such as OpenGL for your game, whether or not you'd be able to do this depends entirely on your skills. As to the performance cost, it depends only on the number of vertices you have and on what your shaders do. All that OpenGL ever sees is a group of vertices with faces defined over them; it passes that information to whatever shaders you have written, which then process the data you sent.
Granted, I'm no expert on the API, but even those just starting out with modern OpenGL know this.
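For what it's worth, here is a minimal sketch of what "several meshes, one armature" looks like at the API level, in WebGL2-flavoured TypeScript. The types, the uniform name u_boneMatrices and the idea of one VAO per body part are assumptions for illustration, not a prescription:

```typescript
interface MeshPart {
  vao: WebGLVertexArrayObject; // vertex data + skin weights/indices already bound
  vertexCount: number;
}

// Draw every body part with the same shader program and the same armature
// (bone-matrix) uniforms, so the pieces move together during animation.
function drawCharacter(
  gl: WebGL2RenderingContext,
  program: WebGLProgram,
  parts: MeshPart[],         // e.g. arms, legs, torso -- 7 or 8 separate meshes
  boneMatrices: Float32Array // one shared armature for all parts
): void {
  gl.useProgram(program);

  // Upload the shared bone matrices once per character, not per part.
  const boneLoc = gl.getUniformLocation(program, "u_boneMatrices[0]");
  gl.uniformMatrix4fv(boneLoc, false, boneMatrices);

  // One draw call per part; the vertex shader skins every part with the
  // same matrices.
  for (const part of parts) {
    gl.bindVertexArray(part.vao);
    gl.drawArrays(gl.TRIANGLES, 0, part.vertexCount);
  }
}
```

Since every part is skinned by the same matrices, the pieces stay attached; the overhead compared with a single merged mesh is essentially a handful of extra draw calls and buffer binds per character.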
We are using Jasper Reports to generate reports from our application, based on Oracle DBMS.
The thing works fine, but it is likely we're going to use different paper formats, languages and orientations for the same document, or to add columns and other elements, or have the elements' contents change size.
Doing this in iReport/Jasper isn't easy AFAIU.
If something doesn't work you have to move or resize elements by hand, checking that they're of appropriate size and position.
When I was a student I would use LaTeX for typesetting and it could easily handle "reshaping" well. Isn't there something like that?
I heard BIRT doesn't follow the "pixel position" paradigm of Jasper and Pentaho, and as such it strives to handle positioning, and possibly sizing, on its own, after the user has specified the document's abstract structure, i.e. what elements are there and their relative position.
EDIT
Forgot to mention: we are looking for a solution that involves as little code as possible. The reasons are manifold, but the most important are:
first: avoid learning another library (we managed to stay away from Jasper's and liked it).
second: providing a tool that even people who aren't programmers, or aren't hardcore ones, could manage.
The lower the entry barrier the better.
For example I know people in the humanities that can pick up LaTeX decently. They could even digest iReport. I don't know of anyone who can do the same with real-world Java.
Greetings,
I am working on a distributed pub-sub system expected to have minimal latency. I now have to choose between using serialization/deserialization and raw data buffers. I prefer the raw data approach, as there's almost no overhead, which keeps the latency low. But my colleague argues that I should use marshalling, as the parser will be less complex and less buggy. I stated my concern about latency, but he said it will be worth it in the long run, and there are even FPGA devices to accelerate it.
What are your opinions on this?
TIA!
Using a 'raw data' approach, if hardcoded in one language for one platform, causes a few problems when you try to write code on another platform in another language (or sometimes even in the same language/platform with a different compiler, if your fields don't have natural alignment).
I recommend using an IDL to describe your message formats. If you pick one that is reducible to 'raw data' in your language of choice, then the generated code to access message fields in that language won't do anything but access variables in a structure memory-overlaid on your data buffer, yet it still represents the metadata in a platform- and language-neutral way, so the code generator can produce more involved accessor code for other platforms.
The downside of picking something that is reducible to a C structure overlay is that it doesn't handle optional fields, doesn't handle variable-length arrays, and may not handle backwards-compatible extensions in the future (unless you just tack them onto the end of the structure). I'd suggest you read about Google's Protocol Buffers if you haven't yet.
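To illustrate the trade-off, here is a minimal sketch of the 'raw data overlay' style in TypeScript, using a hypothetical fixed-layout message (the field names, offsets and sizes are invented for the example):

```typescript
// A hand-rolled fixed-layout message: 4-byte message id, 8-byte timestamp,
// 4-byte float payload, all little-endian. Offsets are hypothetical.
const MSG_ID_OFFSET = 0;
const TIMESTAMP_OFFSET = 4;
const VALUE_OFFSET = 12;
const MESSAGE_SIZE = 16;

interface TickMessage {
  msgId: number;
  timestampUs: bigint;
  value: number;
}

// Reading is just offset arithmetic over the received bytes -- no parsing
// pass, which is the latency appeal of the raw approach.
function readTick(buffer: ArrayBuffer, byteOffset = 0): TickMessage {
  const view = new DataView(buffer, byteOffset, MESSAGE_SIZE);
  return {
    msgId: view.getUint32(MSG_ID_OFFSET, /* littleEndian */ true),
    timestampUs: view.getBigUint64(TIMESTAMP_OFFSET, true),
    value: view.getFloat32(VALUE_OFFSET, true),
  };
}
```

This is about as fast as decoding gets, but every producer and consumer in every language has to agree on those offsets, sizes and endianness by convention alone; an IDL moves that agreement into generated code instead.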