What is the purpose of the "bake" option in the SceneKit editor? Does it have an impact on performance?
Type offers two options: Ambient Occlusion and Light Map.
Destination offers two options: Texture and Vertex.
For me, it crashes Xcode. It's supposed to render lighting (specifically shadows) into the textures on objects so you don't need static lights.
This should, theoretically, mean that all you need in your scene are the lights used to create dynamic lighting on objects that move, and you can save all the calculations required to fill the scene with static lights on static geometry.
In terms of performance, yes, baking in the lighting can create a HUGE jump in performance, because it saves you all the complex calculations that produce ambient light, occlusion, direct shadows and soft shadows.
If you're using ambient occlusion and soft shadows in real-time you'll be seeing VERY low frame rates.
And the quality possible with baking is far beyond what you can achieve with a super computer in real time, particularly in terms of global illumination.
What's odd is that Scene Kit has a bake button. It has never worked for me, always crashing Xcode. But the thing is... to get the most from baking, you need to be a 3D artist, in which case you'll be much more inclined to do the baking in a 3D design app.
And 3D design apps have lighting solutions that are orders of magnitude better than the best Scene Kit lighting possible. I can't imagine that there's really a need for baking in Scene Kit. It's a strange thing for the development team to have spent time on as it simply could never come close to the quality afforded by even the cheapest 3D design app.
What I remember from college days:
Baking is a standard process in 3D rendering and texturing. There are two kinds of baking: texture baking and physics baking.
Texture baking:
You pre-calculate some data (lighting, ambient occlusion, and so on) and save it to a texture, then use that texture on your material. This reduces rendering time: without baking, everything is recalculated every single frame, and if you have animations that is a lot of time wasted.
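For example, in SceneKit you might apply a lightmap baked in a 3D tool roughly like this (a sketch; the scene, node and texture names are hypothetical):

import SceneKit
import UIKit

// Sketch: use a pre-baked lightmap instead of real-time lights for static geometry.
// "level.scn", "wall" and "baked_lightmap.png" are hypothetical asset names.
let scene = SCNScene(named: "level.scn")!
if let wall = scene.rootNode.childNode(withName: "wall", recursively: true),
   let material = wall.geometry?.firstMaterial {
    material.lightingModel = .constant                                 // skip dynamic lighting for this material
    material.multiply.contents = UIImage(named: "baked_lightmap.png")  // darken with the baked shading
}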
Physics baking:
You can pre-calculate a physics simulation in exactly the same way and reuse that data, for example for a rigid body.
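A rough illustration of the idea in SceneKit terms (the recording/playback split and all names are my own assumptions, not a built-in API):

import SceneKit
import QuartzCore

// Rough sketch of "baking" a physics simulation: record the simulated positions once,
// then replay them as a keyframe animation so the solver no longer runs at render time.
var recordedPositions: [NSValue] = []

// Call this once per frame (e.g. from the renderer delegate) while the simulation runs:
func record(_ node: SCNNode) {
    recordedPositions.append(NSValue(scnVector3: node.presentation.position))
}

// Later, replay the recorded motion without any physics body attached:
func playback(on node: SCNNode, duration: TimeInterval) {
    let animation = CAKeyframeAnimation(keyPath: "position")
    animation.values = recordedPositions
    animation.duration = duration
    node.addAnimation(animation, forKey: "bakedPhysics")
}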
I have just delved into the world of Metal, and I thought that I'd got the hang of it! But then it occurred to me that if I wanted to make a game, then static objects moving around a screen wouldn't suffice. So my question is, 'Is it possible to create animations for models with Metal?'
I have looked at using other APIs such as SpriteKit and SceneKit, but I found that they do not support shaders and are not as powerful as Metal.
The only way I can think of to go about this is to create 60 different models and then load them one after the other to give a 'stop-motion' kind of effect, but I think that would be incredibly inefficient, and I was hoping there was an easier answer?
Thanks a lot!
Yes, there are other, more efficient ways to do animation. But before getting into that, a warning: it really looks like you're barking up the wrong tree here.
Metal is a (conceptually) very low-level interface. You use Metal to talk (almost) directly to the GPU, so to work with it you need to think (sort of) like a GPU: in terms of data buffers, vertex transformations, etc. You seem to be working at a much higher conceptual level, so you're probably better served by one of the high-level game engines: SpriteKit for 2D or SceneKit for 3D. (Or a third party engine like Cocos or Unity.) Metal, on the other hand, is better suited for building those game engines.
SpriteKit and SceneKit do support shaders. Look at SKShader and SCNShadable in the docs (and be sure to click the "More" links to read the full overviews). SceneKit also supports character animations (aka skeletal animation aka skinning): typically one designs and rigs a model for animation in an external authoring tool (Maya, Blender, etc), then uses SceneKit to work with the animations at run time.
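As a tiny illustration of the SceneKit side, a shader modifier can be attached to a material in a couple of lines (this sketch just darkens the output; the material itself is a placeholder):

import SceneKit

// Minimal sketch: SceneKit materials accept small shader "modifier" snippets,
// so you don't need to drop down to raw Metal just to customize shading.
// This fragment modifier simply darkens the final color.
let material = SCNMaterial()
material.shaderModifiers = [
    .fragment: "_output.color.rgb *= 0.5;"
]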
It is possible to do things like GPU-based skeletal animation in Metal. But I haven't seen any tutorials or similar written about it yet, probably because Metal is such a new technology. Fundamentally, though, it'd be based on the same sorts of techniques you'd use for skeletal animation in OpenGL or Direct3D — and much has been written about animation for those technologies. If you're willing to invest the time and energy to work at a low level, adapting the subject matter from GL/D3D tutorials is relatively easy.
You can do skeletal animation in Metal; SceneKit uses the GPU to deform the mesh as well. But to do it in Metal you need to pass the skin weights, along with bone matrices for the bind pose and the transforms of the bones as they animate, then calculate the new vertex positions from these. In fact, I think you need the inverse of the bind pose matrices. Each mesh vertex is then transformed by a weighted sum of bone transforms dictated by the skin weights.
I tried it but screwed it up somehow; it didn't deform properly. I don't know whether I'd obtained the wrong matrices from my custom script that grabbed the animation data from Blender, whether there was a bug in my shader maths, or whether the weights were wrong.
It was probably close, but with all the possible things I may have got wrong in the process it was difficult to fix, so I abandoned it in the end.
It's probably easier to stick with SceneKit and let Apple take care of the rest, or use an existing game engine such as Unity.
Then again, if you want a challenge, I'm sure it's possible, just a little tricky. You could try it on the CPU first to make sure the maths is OK, then port it to the GPU to make it faster.
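To illustrate the maths, here is a rough CPU-side sketch of linear blend skinning along those lines (up to four bone influences per vertex; the data layout and names are assumptions):

import simd

// Rough CPU sketch of linear blend skinning. For each vertex:
// skinned = sum_i( weight_i * bonePose_i * inverseBind_i * restPosition )
struct SkinnedVertex {
    var position: SIMD3<Float>       // rest-pose position
    var boneIndices: SIMD4<Int32>    // up to four influencing bones
    var boneWeights: SIMD4<Float>    // weights, expected to sum to 1
}

func skin(_ vertex: SkinnedVertex,
          bonePose: [simd_float4x4],      // current animated bone transforms
          inverseBind: [simd_float4x4]) -> SIMD3<Float> {
    let rest = SIMD4<Float>(vertex.position.x, vertex.position.y, vertex.position.z, 1)
    var result = SIMD4<Float>(repeating: 0)
    for i in 0..<4 {
        let bone = Int(vertex.boneIndices[i])
        let w = vertex.boneWeights[i]
        // Move the vertex into the bone's local space, then out to its animated pose.
        result += w * (bonePose[bone] * inverseBind[bone] * rest)
    }
    return SIMD3<Float>(result.x, result.y, result.z)
}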
SceneKit does support shaders. And the object that manages the relationship between skeletal animations and the nodes and geometries they animate is SCNSkinner.
Typically, you need to create a skinned model using, for example, Autodesk Maya, save it along with animations that use the skeleton, in a scene file. You load the model from the scene file and pose or animate it in your app, either by using animation objects also loaded from the scene file or by directly manipulating the nodes in the skeleton. That's it.
Watch this 7-part video series about Blender's skeletal system and how to use it in SceneKit.
The SCNSkinner initializer shows what data is involved:
convenience init(baseGeometry: SCNGeometry?,            // the character geometry to be deformed
                 bones: [SCNNode],                       // the nodes that make up the skeleton
                 boneInverseBindTransforms: [NSValue]?,  // inverse bind transforms, one SCNMatrix4 per bone
                 boneWeights: SCNGeometrySource,         // per-vertex bone weights (influence on the geometry)
                 boneIndices: SCNGeometrySource)         // per-vertex indices mapping into the bones array
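To make the workflow concrete, here is a minimal sketch of loading and posing a rigged model; the file and node names are hypothetical:

import SceneKit

// Sketch: load a rigged character exported from a DCC tool (Maya, Blender, ...).
// "character.scn" and the node names below are hypothetical.
let scene = SCNScene(named: "character.scn")!
let character = scene.rootNode.childNode(withName: "character", recursively: true)!

// The authoring tool set up the SCNSkinner; you rarely build one by hand.
if let skinner = character.skinner {
    print("Skeleton root:", skinner.skeleton?.name ?? "none")
}

// Pose the skeleton directly by rotating one of its bone nodes...
if let forearm = character.childNode(withName: "forearm", recursively: true) {
    forearm.eulerAngles.x = .pi / 4
}
// ...or load a CAAnimation from the scene file and play it with addAnimation(_:forKey:).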
I have a very simple terrain map made of tiles. All the tiles are the same size, just with different heights (z values).
I can render them OK, but there are thousands of tiles, and not all of them are on screen at once, only a portion (those ahead of the view). So I'm doing batch rendering: I collect only the tiles that appear on screen, then render them all in one call.
I tried using D3DXVec3Project to project each vertex from world space to screen space and then detect which triangles are on screen, but this is very slow; calling it for the whole map takes up to 7 ms (about 250x250 calls).
Right now I'm using an isometric view (D3DXMatrixOrthoLH); there is no camera or eye. When I want to move around the map, I just translate the world.
I think this is a very common problem that every engine must optimize for, but I can't work out what to search for. Is it visibility detection, culling, or clipping?
Thanks! Should I just render all the tiles and let DirectX clip them automatically? (If I remember correctly, the last time I tried rendering them all it was still very slow.)
Image: http://i1335.photobucket.com/albums/w666/greenpig83/terrain2_zps24b77283.png
Yes, in complex scenes we typically must cull invisible geometry to achieve interactive frame rates. How much it helps depends greatly on the scene itself, the capabilities of the API, and the target hardware.
Here are the first steps of a good terrain renderer (in order of complexity):
Frustum culling - test for intersection between the camera's frustum (the visible volume) and your objects (meshes, terrain tiles). No intersection means the object is invisible. It's based on standard collision detection algorithms, and you will need a camera (view and projection matrices) and a good math library for it (see the sketch after this list).
Spatial partitioning (e.g. a quad tree in the case of terrain) - grouping objects into data structures that let you skip intersection tests which are known in advance to be impossible. This speeds up frustum culling enormously; for example, you don't need to test every single tile that lies behind the camera.
Level of Detail (LOD) - a family of techniques that render objects far from the camera with less detail, reducing resource consumption. This is what makes amazing, realistic, detailed scenes with huge terrains possible.
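For the frustum-culling step, here is a minimal sketch (written in Swift with simd to match the other snippets in this thread; the same logic ports directly to D3DX types, and plane extraction is omitted):

import simd

// Minimal frustum-culling sketch. A plane is stored as (nx, ny, nz, d) with the
// convention dot(n, p) + d >= 0 for points on the visible side. The six planes
// are usually extracted from the view-projection matrix (Gribb/Hartmann method);
// that step is omitted here.
struct BoundingSphere {
    var center: SIMD3<Float>
    var radius: Float
}

func isVisible(_ sphere: BoundingSphere, planes: [SIMD4<Float>]) -> Bool {
    for plane in planes {
        let normal = SIMD3<Float>(plane.x, plane.y, plane.z)
        let distance = simd_dot(normal, sphere.center) + plane.w
        // Entirely behind a single plane means outside the frustum.
        if distance < -sphere.radius {
            return false
        }
    }
    return true
}

// Usage: give each terrain tile a bounding sphere and draw only the tiles for which
// isVisible(...) returns true. A quad tree then lets you reject whole groups of
// tiles with a single test on the group's bounding sphere.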
Now you know what to ask Google for ;) but I'll still add some links.
For beginners:
braynzarsoft's tutorials - you'll probably be most interested in the latest ones, about terrain and collision detection
rastertek terrain tutorials
Advanced:
vterrain.org - source of infinite knowledge about terrain rendering (articles, papers, links to implementations)
Mr. Hoppe's papers on progressive meshes
Hope it helps =)
Hi, I'm using FireMonkey because of its cross-platform capabilities. I want to render a particle system. At the moment I'm using a TMesh, which works well enough to display the particles quickly. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity. With my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) by TMesh. It also doesn't seem easy to change this behaviour simply by inheriting from TMesh; it looks like one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would even be possible to control the transparency of a triangle that way. E.g. can I set a transparent colour (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in FireMonkey?
I admit that I don't know much about that particular framework, but normally you can't change transparency directly through the vertex positions of a 3D model; those are just x, y, z coordinates. The vertex data does, however, affect how the sprites are lit if you are using a lighting system, and you can use per-vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with different degrees of alpha that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, that's usually better, since the nuclear option is to make a bunch of copies of a sprite and then apply effects to them individually. If you are using Gouraud shading it gets easier, since Gouraud uses code to fill in the texture information.
Now, are you using light particles? Some graphics APIs actually have built-in support for light particles.
Edit: I just remembered vertex shaders, which could also do this.
I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing what object should do what during setup, then rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView: each node holds a transform relative to its parent, and one kind of node is able to display a model.
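A rough sketch of that split between shared per-type resources and lightweight per-instance objects (all names are hypothetical):

import GLKit

// Shared, loaded once per *kind* of brick (analogous to UIImage).
final class BrickType {
    let effect = GLKBaseEffect()      // shared shading state
    var vertexBuffer: GLuint = 0      // shared geometry on the GPU
    var texture: GLKTextureInfo?      // shared texture
}

// Lightweight, one per brick on screen (analogous to UIImageView).
final class BrickInstance {
    let type: BrickType                    // which shared resources to draw with
    var modelMatrix = GLKMatrix4Identity   // transform relative to the parent node
    weak var parent: BrickInstance?        // simple UIView-like hierarchy

    init(type: BrickType) { self.type = type }
}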
From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion: that is, you don't derive the matrix or quaternion (plus location) from some other description of the camera; rather, the matrix or quaternion itself is the storage for the camera.
Both of those types have built-in functions to apply rotations, and GLKMatrix4 can handle translations directly, so you can map the relevant gestures straight onto those functions.
The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
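In code that might look something like the following sketch (assuming you keep the camera as a GLKMatrix4 and feed a GLKBaseEffect; the function names are made up):

import GLKit

// Camera stored directly as a matrix describing where the camera *is*.
var cameraTransform = GLKMatrix4Identity

// Gesture handlers mutate that matrix directly.
func pan(byX x: Float, z: Float) {
    cameraTransform = GLKMatrix4Translate(cameraTransform, x, 0, z)
}

func rotate(byRadians angle: Float) {
    cameraTransform = GLKMatrix4Rotate(cameraTransform, angle, 0, 1, 0)  // around the up axis
}

// When drawing, OpenGL needs the *inverse*: the matrix that brings the world into the
// camera's frame, so an object at the camera's position ends up at the origin.
func viewMatrix() -> GLKMatrix4 {
    var invertible = false
    let view = GLKMatrix4Invert(cameraTransform, &invertible)
    return invertible ? view : GLKMatrix4Identity
}

// e.g. effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix(), brickModelMatrix)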
I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.
In my app I have a bunch of CCSprites, and I want collision detection that triggers only when the non-transparent pixels of the CCSprites overlap. I don't want to be restricted to a particular colour for the colliding sprites; I think that's what the 'Pixel Perfect Collision Detection' thread on the Cocos2D forum does, but I want the real collision to work with any colour. This collision detection would run in my game loop, so it can't be too expensive. Anyway, does anyone have any ideas on how I can do this?
I am willing to use Cocos2D, Box2D, Chipmunk, or even UIKit if it can do it.
Thanks!
When talking about hardware-rendered graphics, "I want pixel-perfect collisions" and "I don't want them to be too expensive" are pretty much mutually exclusive.
Either write a simpler renderer that doesn't allow such complex transformations, anti-aliasing or sub-pixel placement, or use the actual GPU to render some sort of collision mask. The problem with doing that on the GPU is that it's fast to send data to the GPU and expensive to read it back, which is one reason this technique is quite uncommon.
Chipmunk Pro's auto-geometry feature supports turning images of various kinds into collision shapes, but it isn't complete yet.
It's impossible to do this without losing performance. Try a collision system based on circles; that is a better way to handle collisions here.
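If you go the circle route, the per-pair test is very cheap. A small sketch (the collider struct is hypothetical; with Cocos2D you'd feed it each CCSprite's position and a hand-tuned radius):

import CoreGraphics

// Cheap circle-vs-circle test: two sprites collide when the distance between
// their centers is less than the sum of their collision radii.
struct CircleCollider {
    var center: CGPoint   // e.g. the CCSprite's position
    var radius: CGFloat   // tuned to roughly cover the opaque part of the sprite
}

func collides(_ a: CircleCollider, _ b: CircleCollider) -> Bool {
    let dx = a.center.x - b.center.x
    let dy = a.center.y - b.center.y
    // Compare squared distances to avoid the square root in the game loop.
    let distanceSquared = dx * dx + dy * dy
    let radii = a.radius + b.radius
    return distanceSquared < radii * radii
}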