What's the simplest way to render the "inside" of a polyhedron in WebGL?
In other words, I want an octagonal prism (like a cylinder) that's "hollow" and I want to be "standing" inside of it. That's all.
I am guessing at your problem, but typically polygons are drawn single-sided -- that is, they don't render when you see them from behind. This is more efficient in most cases.
If the polygons are disappearing when the camera is inside the solid prism, you have a few options: render them two-sided (if you want to render both the inside and the outside at different times), reverse the winding (vertex order) of your polygons, or flip OpenGL's one-sided culling state (back/front) so it shows you the backs rather than the fronts.
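In WebGL terms, those options look roughly like the following (a hedged sketch assuming a standard gl context; pick whichever fits):
// Option 1: render both sides by turning culling off entirely
gl.disable(gl.CULL_FACE);
// Option 2: instead of rewinding the vertex data, declare clockwise triangles
// to be the front faces (same effect as reversing the winding)
gl.frontFace(gl.CW);
// Option 3: keep culling on, but cull front faces so the backs get drawn
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.FRONT);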
Most WebGL frameworks turn on culling, so WebGL/OpenGL culls back-facing triangles.
You can make sure culling is off with
gl.disable(gl.CULL_FACE);
You can enable culling with
gl.enable(gl.CULL_FACE);
If culling is on, you can choose which faces WebGL culls. By default a back-facing triangle is one whose vertices go in clockwise order in screen space. For most 3D models those are the faces you see when viewing the model from the inside. To cull those back-facing triangles you can use
gl.cullFace(gl.BACK);
Or you can tell WebGL to cull front-facing triangles with
gl.cullFace(gl.FRONT);
This article has a section on culling.
Depending on the framework you use, the steps look something like this (a three.js sketch follows the list):
1. Draw the object.
2. Ensure culling is disabled (gl.CULL_FACE). In three.js you can set the material's side property to THREE.DoubleSide.
3. Translate the viewer position (or the object) so that the viewpoint is inside the object, for example by calling the mesh's translate methods in three.js.
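As a rough three.js sketch of those steps (assuming an existing scene, camera and renderer; a CylinderGeometry with 8 radial segments gives the octagonal prism from the original question):
// A cylinder with 8 radial segments is an octagonal prism
const geometry = new THREE.CylinderGeometry(5, 5, 10, 8);
// DoubleSide makes three.js draw the inward-facing triangles too
const material = new THREE.MeshBasicMaterial({ color: 0x8899aa, side: THREE.DoubleSide });
const prism = new THREE.Mesh(geometry, material);
scene.add(prism);
// Put the viewpoint inside the prism
camera.position.set(0, 0, 0);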
Related
I'm using Python Kivy to render meshes with OpenGL onto a canvas. I want to return vertex data from the fragment shader so I can build a collider (to use with my CPU event listeners after doing the projection and model-view transforms). I can replicate the matrix multiplications on the CPU (I guess that's the easy way out), but then I would have to do the same calculations twice (not good).
The only way I can think of doing this (after some browsing) is to imprint an object ID onto my rendered mesh's alpha channel (it wouldn't affect much if I kept the encoded data near an alpha value of 1), and then create some kind of 'color picker' on the CPU side to decode it (I'm guessing that's not hard to do using Kivy).
Does anyone have a better idea for dealing with this, or a better approach?
The first criterion here is: do you need collision for picking or for physics simulation?
If it is for physics: you almost never want the same mesh for rendering and for physics collisions. Typically, you use a very rough approximation for the physics shape, nearly always a convex shape, or a union of convex shapes. (Colliding arbitrary concave meshes is something that no physics engine can do well, and if they attempt it at all, performance will be poor.)
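For example, the crudest useful physics proxy is a bounding box computed once from the render mesh. A hedged JavaScript sketch, assuming positions is a flat [x, y, z, x, y, z, ...] array:
// Compute an axis-aligned bounding box to stand in for the detailed mesh
function computeAabb(positions) {
  const min = [Infinity, Infinity, Infinity];
  const max = [-Infinity, -Infinity, -Infinity];
  for (let i = 0; i < positions.length; i += 3) {
    for (let axis = 0; axis < 3; axis++) {
      min[axis] = Math.min(min[axis], positions[i + axis]);
      max[axis] = Math.max(max[axis], positions[i + axis]);
    }
  }
  return { min, max };  // hand this to the physics engine instead of the mesh
}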
If it is for picking an object with a mouse click, you can go two different ways:
1. Replicate the geometry on the CPU, and use the mouse location plus the camera view to create a ray that intersects this geometry, to see what is hit first.
2. After rendering your scene, read back a single pixel from the depth buffer (the pixel that your mouse is over). With the depth value you get back, plus the camera info, you can reconstruct the corresponding 3D position in your world (sketched below). Once you have a 3D location, you can query your world to see which object is closest to that point, and you have your hit.
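A hedged sketch of the reconstruction step in the second approach, written as WebGL-style JavaScript (the question is about Kivy/Python, but the math is the same; invViewProj is assumed to be the inverse of projection * view as a 16-element column-major array, and depth is the value read back from the depth buffer in [0, 1]):
// Multiply a column-major 4x4 matrix by a 4-component vector
function transformVec4(m, v) {
  const out = [0, 0, 0, 0];
  for (let i = 0; i < 4; i++) {
    out[i] = m[i] * v[0] + m[i + 4] * v[1] + m[i + 8] * v[2] + m[i + 12] * v[3];
  }
  return out;
}
// Reconstruct the world-space point under the mouse from its depth value
function unproject(mouseX, mouseY, depth, viewportW, viewportH, invViewProj) {
  const ndc = [
    (mouseX / viewportW) * 2 - 1,
    1 - (mouseY / viewportH) * 2,   // window Y usually grows downward
    depth * 2 - 1,
    1,
  ];
  const world = transformVec4(invViewProj, ndc);
  // Perspective divide gives the 3D position
  return [world[0] / world[3], world[1] / world[3], world[2] / world[3]];
}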
Hi, I'm using Firemonkey because of its cross-platform capabilities. I want to render a particle system. Right now I'm using a TMesh, which works well enough to display the particles quickly. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity. With my current approach I cannot set the transparency individually for each triangle (or even each vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also doesn't seem easy to change this behavior by inheriting from TMesh; it seems one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would be possible to control the transparency of a triangle that way. E.g. can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in Firemonkey?
I admit that I don't know much about that particular framework, but you shouldn't be able to change transparency via the vertex points of a 3D model; the points are usually just x, y, z coordinates. The vertex points would, however, affect how the sprites are lit if you are using a lighting system, and you can also use the vertex information to apply different transparency effects.
Now, there are probably a dozen different ways to do this. Usually you have a texture with varying alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, it's usually better since the nuclear way is to make a bunch of different copies of a sprite and then apply effects to them individually. If you are using Gouraud Shading, then it gets easier since Gouraud uses code to fill in texture information.
Now, are you using light particles? Some graphics APIs actually have code that makes light particles.
Edit: I just remembered vertex shaders, which could handle this.
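I can't speak to the Firemonkey specifics, but as an illustration of the vertex-shader route (written as WebGL/GLSL rather than Firemonkey code, with hypothetical attribute names), a per-vertex alpha can be passed through to the fragment stage and multiplied into the texture color:
const vertexShaderSource = `
  attribute vec3 aPosition;
  attribute vec2 aTexCoord;
  attribute float aAlpha;        // per-vertex opacity written by the particle system
  uniform mat4 uModelViewProjection;
  varying vec2 vTexCoord;
  varying float vAlpha;
  void main() {
    vTexCoord = aTexCoord;
    vAlpha = aAlpha;
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }`;
const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D uTexture;
  varying vec2 vTexCoord;
  varying float vAlpha;
  void main() {
    vec4 color = texture2D(uTexture, vTexCoord);
    gl_FragColor = vec4(color.rgb, color.a * vAlpha);  // fade each particle individually
  }`;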
Basically it's like a skybox, but a plane that is perfectly flat and in front of the screen. My idea is to have a big texture and, depending on how you rotate the camera, render different parts of the texture on the plane, as if you were moving relative to the "sky" drawn on the plane; when you reach the edge it renders that part plus the part from the other side (I'll use a seamless texture, so it won't look seamed). I have figured out the formulas to do it, but I'm not sure what method to use. I mean, I'm not sure if I should do it in C++ or if it's supposed to be done in a shader in some .fx file, directly on the GPU?
All you need to do is draw a full-screen quad behind the rest of your scene, or use depth states to push it out to infinity.
If you want a simple plane, then you create the four necessary verts, bind your texture, and draw (disable writing to depth), then go about drawing the rest of the scene. This is done with a simple draw call in your C++, although you can use vertex buffers and such if necessary.
If you need something more complex, like layers or parallax, you'll need to use multiple planes and shift them, or a shader to composite multiple textures.
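A hedged sketch of the simple-plane case, written in WebGL terms since that's what the earlier snippets use (the question is about Direct3D/.fx, but the steps are the same; positionLoc and the sky shader are assumed to exist already):
// One-time setup: a triangle strip covering the whole screen in clip space
const quadVerts = new Float32Array([-1, -1,  1, -1,  -1, 1,  1, 1]);
const quadBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.bufferData(gl.ARRAY_BUFFER, quadVerts, gl.STATIC_DRAW);
// Every frame, before the rest of the scene:
gl.depthMask(false);                      // don't write depth, so the scene draws over it
gl.bindBuffer(gl.ARRAY_BUFFER, quadBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);   // the shader samples the sky texture here
gl.depthMask(true);
// ...then draw the rest of the scene...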
I have a very loose grasp of the OpenGL environment, so this is something I'm trying to understand. Could someone explain in lay terms what the difference is between these two styles of rendering? Primarily I would like to understand what it means to render to a texture, and when it would be appropriate to choose to do that.
If you render to a texture then the image you've rendered is processed in such a manner as to make it immediately usable as a texture, which in practice usually means with negligible or zero effort from the CPU. You normally render to texture as part of a more complicated rendering pipeline.
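In concrete terms (a WebGL/GLES-flavored sketch, since that's the API family in question; the size and filter settings are just example values), rendering to a texture means attaching a texture to a framebuffer object and drawing into that instead of the screen:
// Create the texture that will receive the rendered image
const targetWidth = 512, targetHeight = 512;
const colorTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, targetWidth, targetHeight, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
// Attach it to a framebuffer object
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);
// Draw the offscreen pass into the texture
gl.viewport(0, 0, targetWidth, targetHeight);
// ...draw calls...
// Back to the default framebuffer; colorTexture can now be sampled like any other texture
gl.bindFramebuffer(gl.FRAMEBUFFER, null);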
Shadow buffers are the most obvious example — they're one way of rendering shadows. You position the camera where the light source would be and render the scene from there so that the final depth information ends up in a texture. You don't show that to the user. For each pixel you do intend to show to the user you work out its distance from the light and where it would appear in the depth map, then check if it is closer or further from the light than whatever was left in the depth map. Hence, with some effort expended on precision issues, you check whether each pixel is 'visible' from the light and hence whether it is lit.
Rendering to a CAEAGLLayer-backed view is a way of producing OpenGL output that UIKit knows how to composite to the screen. So it's the means by which iOS allows you to present your final OpenGL output to the user, within the hierarchy of a normal Cocoa Touch display.
I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing what object should do what during setup, then rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView, with each node containing some transform relative to its parent and one sort of node being able to display a model.
From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion: you don't derive the matrix or quaternion (plus location) from some other description of the camera; rather, the matrix or quaternion itself is the storage for the camera.
Both of those classes have methods built in to apply rotations, and GLKMatrix4 can directly handle translations. So you can directly map the relevant gestures to those functions.
The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
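To make the inverse relationship concrete, here is a sketch using the gl-matrix JavaScript library rather than GLKit (since that's what the earlier WebGL snippets use; the GLKit equivalents are GLKMatrix4Invert and GLKMatrix4Multiply):
import { mat4 } from 'gl-matrix';
// The camera's own transform: where it sits and how it is oriented in the world
const cameraWorld = mat4.create();
mat4.translate(cameraWorld, cameraWorld, [0, 2, 5]);
// The view matrix you actually load is the inverse of that transform
const view = mat4.create();
mat4.invert(view, cameraWorld);
// An object placed exactly where the camera is ends up at the origin:
const model = mat4.create();
mat4.translate(model, model, [0, 2, 5]);
const modelView = mat4.create();
mat4.multiply(modelView, view, model);    // identity, as described above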
I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.