I've been confronted with the extremely bad drawing performance of Quartz/Core Graphics.
I don't believe it's bad in every scenario, but in my case, where I need to redraw something like 3000 short lines frequently, it performs very badly.
Since the Model (of MVC) is fixed, I cannot change how it spits out the data (if I could, I would have followed the advice to only draw the changes, so the lines don't have to be redrawn every frame).
So, in conclusion, I am considering using OpenGL for that purpose, and before starting to work into that topic I would like to ask you (experienced) guys for an estimate of how well it could work, as OpenGL seems by far more difficult than Quartz.
You will almost certainly see a performance lift from OpenGL over Quartz; however, remember that whereas Quartz uses point-to-point drawing, OpenGL is based on vertices (essentially coordinates). You may find you need to do some mid-weight parsing work on your existing data source to rework it into this vertex format.
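For example, that rework can be as simple as flattening each line into two vertices. A sketch, with Segment standing in for however your data source hands you each line:

    // Hypothetical stand-in for one line from the model.
    struct Segment {
        var x0, y0, x1, y1: Float
    }

    // Flatten ~3000 segments into one vertex array: two vertices per line,
    // two floats per vertex. This is the format GL_LINES consumes directly.
    func vertices(for segments: [Segment]) -> [Float] {
        var verts: [Float] = []
        verts.reserveCapacity(segments.count * 4)
        for s in segments {
            verts += [s.x0, s.y0, s.x1, s.y1]
        }
        return verts  // upload with glBufferData, draw with glDrawArrays(GL_LINES, ...)
    }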
Also keep in mind that drawing text on top of an OpenGL ES object is a tricky task - it can be done (ironically) by using Quartz to generate an image, and then using this image as a texture.
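A rough sketch of that trick (the function name, font, and bitmap size are my own placeholders; it assumes a current EAGLContext):

    import UIKit
    import OpenGLES

    // Draw a string into a Core Graphics bitmap, then upload those pixels
    // as a GL texture. Note the bitmap comes out vertically flipped
    // relative to UIKit; flip the context or your texture coordinates.
    func makeTextTexture(_ text: NSString) -> GLuint {
        let width = 256, height = 64
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        pixels.withUnsafeMutableBytes { buffer in
            let ctx = CGContext(data: buffer.baseAddress, width: width, height: height,
                                bitsPerComponent: 8, bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
            UIGraphicsPushContext(ctx)  // so NSString.draw targets our bitmap
            text.draw(at: .zero, withAttributes: [.font: UIFont.systemFont(ofSize: 32),
                                                  .foregroundColor: UIColor.white])
            UIGraphicsPopContext()
        }

        var tex: GLuint = 0
        glGenTextures(1, &tex)
        glBindTexture(GLenum(GL_TEXTURE_2D), tex)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                     GLsizei(width), GLsizei(height), 0,
                     GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)
        return tex
    }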
I'd definitely recommend using GLKit, as it will make life a bit easier for you as a beginner to OpenGL. Ray Wenderlich has an excellent starting-point tutorial here:
http://www.raywenderlich.com/5223/beginning-opengl-es-2-0-with-glkit-part-1
I have just delved into the world of Metal, and I thought that I'd got the hang of it! But then it occurred to me that if I wanted to make a game, then static objects moving around a screen wouldn't suffice. So my question is, 'Is it possible to create animations for models with Metal?'
I have looked at using other APIs, such as SpriteKit, and SceneKit, but I found that they do not support shaders, and are not as powerful as Metal.
The only way I can think of going about this is by creating 60 different models and then loading each one after the other, to give a 'stop-motion' kind of effect. But I think this would probably be incredibly inefficient, and I was hoping that there is an easier answer?
Thanks a lot!
Yes, there are other, more efficient ways to do animation. But before getting into that, a warning: it really looks like you're barking up the wrong tree here.
Metal is a (conceptually) very low-level interface. You use Metal to talk (almost) directly to the GPU, so to work with it you need to think (sort of) like a GPU: in terms of data buffers, vertex transformations, etc. You seem to be working at a much higher conceptual level, so you're probably better served by one of the high-level game engines: SpriteKit for 2D or SceneKit for 3D. (Or a third party engine like Cocos or Unity.) Metal, on the other hand, is better suited for building those game engines.
SpriteKit and SceneKit do support shaders. Look at SKShader and SCNShadable in the docs (and be sure to click the "More" links to read the full overviews). SceneKit also supports character animations (aka skeletal animation aka skinning): typically one designs and rigs a model for animation in an external authoring tool (Maya, Blender, etc), then uses SceneKit to work with the animations at run time.
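For instance, attaching a custom fragment shader to a sprite takes only a few lines. A minimal sketch (the asset name is a placeholder; u_texture and v_tex_coord are the inputs SpriteKit passes to sprite shaders):

    import SpriteKit

    // Tint a sprite red with a custom fragment shader.
    let sprite = SKSpriteNode(imageNamed: "ship")  // hypothetical asset
    sprite.shader = SKShader(source: """
        void main() {
            vec4 color = texture2D(u_texture, v_tex_coord);
            gl_FragColor = vec4(color.rgb * vec3(1.0, 0.4, 0.4), color.a);
        }
        """)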
It is possible to do things like GPU-based skeletal animation in Metal. But I haven't seen any tutorials or similar written about it yet, probably because Metal is such a new technology. Fundamentally, though, it'd be based on the same sorts of techniques you'd use for skeletal animation in OpenGL or Direct3D — and much has been written about animation for those technologies. If you're willing to invest the time and energy to work at a low level, adapting the subject matter from GL/D3D tutorials is relatively easy.
You can do skeletal animation in Metal; SceneKit uses the GPU to deform the mesh as well. But to do it in Metal you would need to pass the skin weights, along with bone matrices for the bind pose and the transformations of the bones as they animate, then calculate the new vertex positions based on these. In fact, I think you need the inverse of the bind-pose matrices. Each mesh vertex is then transformed by a weighted sum of transformations dictated by the skin weights.
I tried it, but I screwed it up somehow: it didn't deform properly. I don't know whether I'd obtained the wrong matrices from my custom script that grabs animation data from Blender, or had a bug in my shader maths, or in the weights.
It was probably close, but with all the possible things I may have got wrong in the process it was difficult to fix, so I abandoned it in the end.
It is probably easier to stick with SceneKit and let Apple take care of the rest, or to use an existing game engine such as Unity.
Then again, if you want a challenge, I'm sure it's possible, just a little tricky. You could try it on the CPU first to make sure the maths is OK, then port it to the GPU to make it faster.
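As a minimal CPU-side sketch of that weighted sum (all names are stand-ins; it assumes up to four influences per vertex and one matrix per bone that already combines the animated transform with the inverse bind pose):

    import simd

    // Linear-blend skinning for one vertex: transform the position by each
    // influencing bone's matrix and blend the results by the skin weights.
    func skinVertex(position: SIMD3<Float>,
                    boneIndices: SIMD4<Int32>,
                    boneWeights: SIMD4<Float>,   // should sum to 1.0
                    boneMatrices: [float4x4]) -> SIMD3<Float> {
        let p = SIMD4<Float>(position.x, position.y, position.z, 1)
        var blended = SIMD4<Float>(repeating: 0)
        for i in 0..<4 {
            let m = boneMatrices[Int(boneIndices[i])]
            blended += boneWeights[i] * (m * p)
        }
        return SIMD3<Float>(blended.x, blended.y, blended.z)
    }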
SceneKit does support shaders. And the object that manages the relationship between skeletal animations and the nodes and geometries they animate is SCNSkinner from SceneKit.
Typically, you create a skinned model using, for example, Autodesk Maya, and save it, along with animations that use the skeleton, in a scene file. You load the model from the scene file and pose or animate it in your app, either by using animation objects also loaded from the scene file or by directly manipulating the nodes in the skeleton. That's it.
Watch this 7-part video series about Blender's skeletal system and how to use it in SceneKit.
    convenience init(baseGeometry: SCNGeometry?,                // the geometry to deform (the character)
                     bones: [SCNNode],                          // array of bone nodes
                     boneInverseBindTransforms: [NSValue]?,     // inverse bind-pose transforms (SCNMatrix4 wrapped in NSValue)
                     boneWeights: SCNGeometrySource,            // per-vertex bone influence weights
                     boneIndices: SCNGeometrySource)            // per-vertex bone index mapping
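A hedged sketch of that workflow (the file name, node name, and view are placeholders for your own setup):

    import SceneKit

    // Load a rigged, animated character from a scene file and play its
    // baked animations. SceneKit builds the SCNSkinner for you on import.
    let sceneView = SCNView(frame: .zero)   // stands in for your existing SCNView
    sceneView.scene = SCNScene()

    let scene = SCNScene(named: "character.dae")!
    let character = scene.rootNode.childNode(withName: "character", recursively: true)!
    sceneView.scene!.rootNode.addChildNode(character)

    // Animations loaded from the file are keyed on the node; playing them
    // drives the skeleton, which in turn deforms the skinned mesh.
    for key in character.animationKeys {
        character.animationPlayer(forKey: key)?.play()
    }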
I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to textures and then rendering these textures on top of each other in the framebuffer, but I don't clearly understand what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes the task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example, how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (hopefully with code examples of each step because I haven't understood spotty instructions that expect me to just know what is needed for each step)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader, then render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL, not anything else. Apple doesn't provide its own OpenGL documentation because it's an open standard, so the specs are freely published, and Apple assumes you're already familiar with them.
OpenGL ES 2.0 spec
manual pages
I think you are having trouble because you don't have an understanding of the GL-specific terms. The spec describes them very well and clearly, so please read the spec; that will save you a LOT of time. Otherwise you will keep running into trouble.
Also, I'd like to introduce a site which has a very nice conceptual description of the OpenGL pipeline:
http://www.songho.ca/opengl/
This site targets desktop GL, and some of the API may differ a little, so please focus on the conceptual understanding.
For more tutorials, google a proper keyword like OpenGL ES 2.0 tutorial (or how-to); there are many of them, and they would be helpful. If the spec is too boring, it's also good to have some fun with tutorials.
Update
I'd like to say one more thing. IMO, OpenGL is all about drawing triangles. Everything is ultimately converted into triangles in 3D space to represent some shape; everything else exists only for optimization. And in most cases GL chooses batch processing as its major optimization strategy, because the overhead of each draw call is not affordable for most games.
It's hard to start with OpenGL ES because it's an optimized version of desktop GL, so all the convenient or easy drawing features are stripped off. The same is true even of recent versions of desktop GL.
So there's no such drawOneTriangle function. Instead, GL works something like this (sketched in code after the list):
make a buffer,
put a list of many triangles there,
select the buffer for the next drawing,
draw all triangles in the current buffer at once,
delete the buffer.
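In GL ES 2.0 calls, that list looks roughly like this; attribute location 0 is assumed to be the shader's position attribute:

    import OpenGLES

    // One triangle, but the same five steps handle thousands at once.
    let triangle: [GLfloat] = [ 0.0,  0.5,    // x, y of vertex 0
                               -0.5, -0.5,    // vertex 1
                                0.5, -0.5]    // vertex 2

    var vbo: GLuint = 0
    glGenBuffers(1, &vbo)                                    // make a buffer
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vbo)               // select it
    glBufferData(GLenum(GL_ARRAY_BUFFER),                    // put the triangles there
                 triangle.count * MemoryLayout<GLfloat>.size,
                 triangle, GLenum(GL_STATIC_DRAW))
    glEnableVertexAttribArray(0)
    glVertexAttribPointer(0, 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0, nil)
    glDrawArrays(GLenum(GL_TRIANGLES), 0, 3)                 // draw all at once
    glDeleteBuffers(1, &vbo)                                 // delete the buffer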
By using a buffer, you don't need to dispatch duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there is no drawOneTriangleWithTexture function to use textures either. Instead, you have to (again, sketched in code after the list):
make a buffer,
put a list of many pixels there (a bitmap),
select the buffer for the next drawing,
draw all triangles with the texture pixel data in the current buffers,
delete the buffer.
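The texture version of the pattern, sketched the same way (in ES 2.0 the pixel container is a texture object rather than a literal buffer; the solid-white bitmap is just filler):

    import OpenGLES

    let width: GLsizei = 64, height: GLsizei = 64
    var texels = [GLubyte](repeating: 255, count: Int(width * height) * 4)  // RGBA

    var tex: GLuint = 0
    glGenTextures(1, &tex)                                   // make the object
    glBindTexture(GLenum(GL_TEXTURE_2D), tex)                // select it
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,          // put the pixels there
                 width, height, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), texels)
    // ... keep it bound while drawing textured triangles ...
    glDeleteTextures(1, &tex)                                // delete when done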
All the overly complex stuff in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature. (Well, actually, you already mentioned this…)
You can use a rendered image as a texture source. To do this,
you need to bind a framebuffer with a texture object, rather than a render-buffer object, using a function like glFramebufferTexture2D.
Once you have rendered to the texture, switch the framebuffer to the final buffer, bind the texture you drew (and any others), and perform the final drawing. You need two framebuffers: one for render-to-texture, and one for the final output.
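Putting that into calls, a minimal ES 2.0 sketch; the draw functions and defaultFramebuffer are hypothetical placeholders for your own scene code and your view's onscreen framebuffer:

    import OpenGLES

    let w: GLsizei = 512, h: GLsizei = 512

    // The texture that will receive the layer's rendering (no data yet).
    var layerTex: GLuint = 0
    glGenTextures(1, &layerTex)
    glBindTexture(GLenum(GL_TEXTURE_2D), layerTex)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, w, h, 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

    // "Attach texture": make it the color target of an offscreen framebuffer.
    var fbo: GLuint = 0
    glGenFramebuffers(1, &fbo)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                           GLenum(GL_TEXTURE_2D), layerTex, 0)

    // 1. Draw the layer; it lands in layerTex instead of on screen.
    // drawSpheres()                                     // hypothetical

    // 2. Switch to the onscreen framebuffer, bind the layer texture
    //    ("bind texture 1, then draw it"), and run the blending shader
    //    over a full-screen quad.
    // glBindFramebuffer(GLenum(GL_FRAMEBUFFER), defaultFramebuffer)  // hypothetical
    // glBindTexture(GLenum(GL_TEXTURE_2D), layerTex)
    // drawFullScreenQuad()                              // hypothetical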
I understand that cocos2d is a really simple API and that I can use it to make simple and huge 2D, or even sometimes 3D, games/applications. I also understand that OpenGL is more complicated: it's a lower-level API, etc.
Question: What is better for implementing 2D/3D games? Why do we need to learn OpenGL if we have simple frameworks like cocos2d? What can you do with OpenGL that you can't do with cocos2d?
Thanks in advance!
What is better for implementing 2D/3D games?
Hard to tell, but a higher-level API is always there to make things easier for you. For example, say you are writing a 2D shoot-em-up. You will likely use a game loop, you will want to use sprites and make those move on the screen, and you may want animations like explosions taking place. You'll end up writing your own higher-level API to do those things. Cocos2D has solved those problems for you already; any other framework should have solved them too.
Why do we need to learn OpenGL if we have simple frameworks like cocos2d?
In case you'd like to customize the standard behaviour of a framework, especially the drawing part, you should get into OpenGL. If there is something you'd like to have which doesn't come out of the box, you may find yourself reimplementing a base framework class. For example, look at the shaders used in Cocos2D 2.0. If you'd like some special blending mode, like a tinting effect, you won't get it for free. There is a colour attribute on CCSprite, but this may not be the result you're expecting, so you'll have to write your own shader and plug it into the sprite you'd like to display in a different way.
What you can do with OpenGL that you can't do with cocos2d?
This comparison doesn't really work out, since cocos2d uses OpenGL for the drawing part to build up that higher-level API and make your life easier as a game developer.
Cocos2d is a wrapper around the 2D features of OpenGL (as per http://www.cocos2d-iphone.org/about). Under the hood it uses OpenGL ES to implement its features. This is good because it means there is minimal performance overhead, so you can start using its simpler API without having to immerse yourself in the definitely longer learning path of OpenGL.
It has, however, only strong 2D support, and if you plan to write 3D games later you lose all the benefits of Cocos2d: why would you rebuild a 3D rendering engine with a 2D framework that under the hood uses a very strong 3D engine? You would lose performance on a lot of unnecessary work.
So the simple answer is: for 2D, Cocos2d; for 3D, OpenGL.
If you want to start OpenGL ES, this is a very good tutorial for beginners: http://iphonedevelopment.blogspot.it/2009/05/opengl-es-from-ground-up-table-of.html
Greetings each and all.
I've been struggling with OpenGL ES 2.0 and a particular problem for the last few days now. I'm looking to implement a Geometry Wars clone, for the iPhone, for fun and to learn this technology. My background in 3D programming is fairly good, although mainly concentrated around vector mathematics rather than draw calls towards the graphics API, as I've been working with DirectX on and off for the last couple of years. The problem, however, is that I've mainly been working with big meshes, loading, translating and transforming them in several ways, and now I find myself in a position where I want to handle small meshes, and lots of them.
The objects are triangles, rectangles, hexagons etc., and I want the ability to modify them all separately (e.g. making their edges wavy or pulsating). When I've worked with multiple big meshes I've made separate draw calls for them, easily attaching shaders and their respective parameters, but in this case I would like to render it all in one call, and that's where my knowledge fails me.
So, to clarify my question: how do you modify small meshes, preferably stored in one vertex array, individually, and render them all at once using shaders with OpenGL ES 2.0?
Although code examples are more than welcome, a "simple" explanation would be enough to get me started. I assume I'm missing something trivial here, and any help is greatly appreciated.
Thanks in advance,
Karl
Sounds like instancing (and instanced arrays) could be an answer to your problem, although it might be a bit too advanced for iOS or ES in general to be supported. This way you can render many copies of the same geometry with per-instance data (like a specific texture index or sub-texture, or shader parameters). But of course, you cannot render different objects with completely different shaders in one draw call.
Otherwise, the much simpler (and maybe much less optimized) glMultiDrawArrays/Elements renders multiple completely different geometries in one call, but you cannot tell which triangle belongs to which object in the shader, and I also doubt that it gives much of a performance boost.
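Since neither of those is guaranteed to be available on ES 2.0, a common workaround is "shader instancing": batch all the small meshes into one vertex buffer, store an object index with every vertex, and let the vertex shader pick that object's transform from a uniform array. An illustrative sketch of such a vertex shader, kept as a Swift string for glShaderSource (the array size of 64 is arbitrary and bounded by the device's uniform limits):

    // a_objectIndex carries the same value for every vertex of a given
    // small mesh, so each mesh can move independently while the whole
    // batch is drawn with a single glDrawArrays call.
    let vertexShaderSource = """
    attribute vec2 a_position;
    attribute float a_objectIndex;
    uniform mat4 u_transforms[64];
    void main() {
        gl_Position = u_transforms[int(a_objectIndex)] * vec4(a_position, 0.0, 1.0);
    }
    """

Per-object effects like the wavy or pulsating edges can be driven the same way, e.g. by a uniform array of phase values indexed by a_objectIndex.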
I'm really new to OpenGL, which is a really bad thing to me :|
I need to draw a star (sort of) with OpenGL, but I'm not really sure where I should start.
The results should be something like this:
Is there an easy way to do this?
The easiest way would be to draw a texture-mapped quad with a "star" texture. You can read a tutorial on texture mapping here: http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=06
That tutorial teaches how to draw a cube using textures.
You just have to draw a single face, instead of all six.
The tutorial is written in C++, but near the end you can download the source of a Delphi version.
There are other effects you might want to add later, such as transparency. You can also read about that on the NeHe site; it has a lot of useful tutorials and is a great place to learn OpenGL.
If you're new to OpenGL and if you're using Delphi, then most probably what you need is GLScene. Mature, alive, very good quality of code and, of course, free.
Why not write an algorithm to generate a texture procedurally in code, using a 2D GLubyte array as in the "checker.c" example in the Red Book? Instead of following a perfect checkerboard pattern, figure out how to make a 2D texture of that star, and map it onto a quad using glTexImage2D(...).
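A sketch of that idea (a plain checkerboard, as in checker.c; swap the pattern logic for something that rasterizes your star):

    import OpenGLES

    // Fill a 64x64 RGBA byte array procedurally, then upload it as the
    // texture to be mapped onto the quad.
    let size = 64
    var image = [GLubyte](repeating: 0, count: size * size * 4)
    for y in 0..<size {
        for x in 0..<size {
            let c: GLubyte = ((x & 8) ^ (y & 8)) != 0 ? 255 : 0  // checker cells
            let i = (y * size + x) * 4
            image[i] = c; image[i + 1] = c; image[i + 2] = c; image[i + 3] = 255
        }
    }

    var tex: GLuint = 0
    glGenTextures(1, &tex)
    glBindTexture(GLenum(GL_TEXTURE_2D), tex)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_NEAREST)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, GLsizei(size), GLsizei(size),
                 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), image)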