I'm using Ray Wenderlich's tutorials to make a simple OpenGL ES 2 app using GLKit, and I've come across some problems.
I changed the sample code to display two cubes by adding vertex and index data to the existing vertex and index structs. It works, and draws two cubes to the screen.
The problem is that when the new cube is behind the old one, it shows through. However, when the old cube is behind the new one, it doesn't show through.
Perhaps my depth testing is messed up?
I can't post images because of my reputation :(
Here's a link to the source code though:
https://www.dropbox.com/s/4xrq3gmnmbcz02m/EthanGillCubeSnap.zip
Any help is much appreciated!
On line 279 of HelloGLKitViewController.m I added the line below and it rendered correctly:
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
You need to make sure to set the depth buffer format on your GLKView, or else no depth buffer will be created at all, which is what was happening to you before.
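For completeness, here is a minimal sketch of that setup (written in Swift for brevity, even though the linked project is Objective-C; the class and variable names are just placeholders):
import GLKit
import OpenGLES

class CubesViewController: GLKViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let glkView = self.view as! GLKView
        glkView.context = EAGLContext(api: .openGLES2)!
        // Without this line no depth buffer is allocated, so the depth test has nothing to work with
        glkView.drawableDepthFormat = .format24
        EAGLContext.setCurrent(glkView.context)
        glEnable(GLenum(GL_DEPTH_TEST))
        // In the draw callback, also clear GL_DEPTH_BUFFER_BIT along with GL_COLOR_BUFFER_BIT
    }
}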
I have a simple program that renders a couple of 3D objects, using DirectX 3D 9 and HLSL. I'm just starting off with HLSL, I have no experience with 3D rendering.
I am able to change the texture and color of the models and fade between two textures without problems, but I was wondering what the best way would be to simply fade a 3D object out (blend it with the background). I assume it wouldn't be done by fading between two textures (using lerp), since I want the object to fade toward the entire background, and there could be many different textures behind it.
I'm using the LPD3DXEFFECT as my effect class, DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way I could think of would be to get the color of the pixel before applying any changes, and then do calculations on it together with the color of the model's texture to get a faded pixel. However, after looking around the internet, it does not appear to be possible to read the color of a destination pixel from an HLSL pixel shader before writing to it.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of taking care of the background fade in the HLSL pixel shader, there is another way to do it, using a method called Frame Buffer Alpha Blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending a pixel that is about to be rendered with the pixel already on the screen (in the frame buffer). The formula is: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, where each of these "variables" is a group of 4 float values in the format (R, G, B, A).
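As a quick illustrative example (the numbers are made up), with SourceBlendFactor = alpha and DestinationBlendFactor = 1 - alpha: an object pixel of (1.0, 0.0, 0.0) drawn with alpha = 0.3 over a background pixel of (0.0, 0.0, 1.0) gives FinalColor = (1.0, 0.0, 0.0) * 0.3 + (0.0, 0.0, 1.0) * 0.7 = (0.3, 0.0, 0.7), i.e. 30% object and 70% background.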
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I have to pass a few flags to my render device (I used LPDIRECT3DDEVICE9 as my device class). I had to set render states for both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above. These are the factors that multiply the object and background pixel colors. There are many possible values that can be assigned to D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is in the MS Docs link above), but to achieve what I wanted (simply a way to fade an object into the background with an alpha value going from 0 to 1), I figured out the flags should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting those flags, before running my shaders and rendering, I just needed to set one more: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was able to toggle this between TRUE and FALSE without changing anything else and without rendering problems (although my project was very simple; on larger projects it will probably matter). You can then pass any arguments you want, such as the alpha value, to the HLSL shader as a global variable (I did it using SetValue()).
Going back to my HLSL shader: after these changes, returning a float4 color taken from tex2D() in my pixel shader with an alpha value between 0 and 1 yielded the correct fade, provided there aren't other issues (another issue I had, but hadn't realized at the time, was that my transparent object was actually being rendered before the background, so I can only recommend checking the rendering order when working on rendering projects).
I'm sure there is probably a better way of implementing this with more recent versions of DirectX, but my compiler only supports Shader Model 3 and lower.
I am trying to develop my own mini game engine in Apple Metal on a Mac, and I am stuck at the point where I want to render text on the GPU. I do not have much graphics programming experience, so I am not sure how to do it. I stumbled upon an article written by Warren Moore about rendering text using signed distance fields, but I do not know how it works and I am unable to understand it completely (due to my lack of graphics knowledge), so I can't implement it myself. The blog post has a code sample written in Objective-C, which unfortunately I do not know. Is there a Swift version of it? Or can someone explain / give pointers on how to render text in Metal?
I have been down this road before. I think you might find SceneKit useful if you are after 3D text.
If you are OK with using SceneKit to drive your rendering: SCNText with a SCNView.
If you have your own command buffer, and you can get away with blending your text on TOP of the rest of your graphics, you can still use SCNText: use the render() method of an SCNRenderer to encode a scene's render commands onto that command buffer.
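A rough sketch of that approach, assuming you already have a command buffer and render pass descriptor from your own pipeline (textScene, commandBuffer, passDescriptor and the viewport size below are placeholders):
import SceneKit
import Metal
import QuartzCore

let device = MTLCreateSystemDefaultDevice()!
let scnRenderer = SCNRenderer(device: device, options: nil)
scnRenderer.scene = textScene // a scene containing your SCNText node

// Encode SceneKit's rendering onto your existing command buffer, on top of whatever you drew before
scnRenderer.render(atTime: CACurrentMediaTime(),
                   viewport: CGRect(x: 0, y: 0, width: 1024, height: 768), // match your drawable size
                   commandBuffer: commandBuffer,   // your own MTLCommandBuffer
                   passDescriptor: passDescriptor) // your own MTLRenderPassDescriptor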
If you want to avoid SceneKit's rendering process, I would recommend doing this: create a SCNText in a SCNTransaction like so:
import SceneKit
SCNTransaction.begin()
let sceneText = SCNText(string: text, extrusionDepth: extrusionDepth)
SCNTransaction.commit()
let mdlMesh = MDLMesh(scnGeometry: sceneText, bufferAllocator: yourBufferAllocator)
let mesh = try MTKMesh(mesh: mdlMesh, device: MTLCreateSystemDefaultDevice()!)
This MTKMesh will have three vertex buffers; the first one (0) is a list of positions in packed_float3 format, the second (1) a list of normals in packed_float3 format, the third (2) a list of texture coordinates in packed_float2 format. Just make sure to reflect that in your vertex shader. It will have 1-5 submeshes with their own index buffers, corresponding I believe to front, back, front chamfer, back chamfer, and extrusion side.
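For example, a vertex descriptor matching that layout might look roughly like this (my own sketch, assuming the buffers end up at indices 0, 1 and 2 as described; you can also derive it automatically with MTKMetalVertexDescriptorFromModelIO(mdlMesh.vertexDescriptor)):
import MetalKit

let vertexDescriptor = MTLVertexDescriptor()

// Buffer 0: positions, packed_float3 (12 bytes per vertex)
vertexDescriptor.attributes[0].format = .float3
vertexDescriptor.attributes[0].offset = 0
vertexDescriptor.attributes[0].bufferIndex = 0
vertexDescriptor.layouts[0].stride = 12

// Buffer 1: normals, packed_float3
vertexDescriptor.attributes[1].format = .float3
vertexDescriptor.attributes[1].offset = 0
vertexDescriptor.attributes[1].bufferIndex = 1
vertexDescriptor.layouts[1].stride = 12

// Buffer 2: texture coordinates, packed_float2 (8 bytes per vertex)
vertexDescriptor.attributes[2].format = .float2
vertexDescriptor.attributes[2].offset = 0
vertexDescriptor.attributes[2].bufferIndex = 2
vertexDescriptor.layouts[2].stride = 8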
Now, if you are after 2D text, you can either use this method above with an extrusionDepth close to zero, or you can harness CoreText directly to do font metrics and render textured quads with a font atlas texture like the commenter suggested.
The ability to understand Objective-C is certainly useful as well, but you may not need it for this problem specifically. I tried to be brief on my explanations since I don't know what your exact goal is with this problem, but I can provide more detail on any of those methods upon request.
I'm trying to put together a quick demo using iOS GLKit to render a retail store map in OpenGL from the source CAD files. I was able to render the walls and aisles in 2D, then programmatically add some artificial depth to create a series of cubes. All of this looks fine when looking top-down, but I noticed that when I turned on the floor (with a z-value well below the aisles and walls), some of those objects are actually rendered under the floor:
...however if you rotate the model you can see that nothing is actually below the floor and some of the aisles are rendering outside of the wall:
You can view the code at StoreMapGLKitViewController.m, it all seems pretty simple to me, but I'm sure I'm making some kind of OpenGL rookie mistake.
So when you are messing with the Z values and z = 0 for everything, I'd imagine you'd still be able to see some of your walls and aisles, but they would also hang out of the bottom a bit. As long as you don't care about that (it's a demo, right?), that should be fine for now, I would think.
Turns out the depth buffer wasn't set up correctly, so the depth test wasn't doing anything. Adding the code below fixed it.
GLKView *view = (GLKView *)self.view;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
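If things still look wrong after that, two related things are worth checking (shown here as a generic Swift-style GLKit snippet, not taken from this particular project): the depth test has to be enabled, and the depth buffer has to be cleared every frame along with the color buffer.
glEnable(GLenum(GL_DEPTH_TEST))
glClear(GLbitfield(GL_COLOR_BUFFER_BIT) | GLbitfield(GL_DEPTH_BUFFER_BIT))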
I'm using SceneKit on iOS and I have a geometry I want to render as a wireframe. So basically I want to draw only the lines, so no textures.
I figured out that I could use the shaderModifiers property of the used SCNMaterial to accomplish this. Example of a shader modifier:
material.shaderModifiers = [
SCNShaderModifierEntryPointFragment: "_output.color.rgb = vec3(1.0) - _output.color.rgb;"
]
This example apparently simply inverts the output colors. I know nothing about this 'GLSL' language I have to use for the shader fragment.
Can anybody tell me what code I should use as the shader fragment to only draw near the edges, to make the geometry look like a wireframe?
Or maybe there is a whole other approach to render a geometry as a wireframe. I would love to hear it.
Try setting the material fillMode to .lines (iOS 11+, and macOS 10.13+):
sphereNode.geometry?.firstMaterial?.fillMode = .lines
Now it is possible (at least in Cocoa) with:
gameView.debugOptions.insert(SCNDebugOptions.showWireframe)
or you can toggle it interactively by enabling the statistics overlay with:
gameView.showsStatistics = true
(gameView is an instance of SCNView)
This is not (quite) an answer, because this is a question without an easy answer.
Doing wireframe rendering entirely in shader code is a lot more difficult than it seems like it should be, especially on mobile where you don't have a geometry shader. The problem is that the vertex shader (and subsequently the fragment shader) just doesn't have the information needed to know where polygon edges are.
I know nothing about this 'GLSL' language I have to use for the shader fragment.
If you really want to tackle this problem, you'll need to learn some more about GLSL (the OpenGL Shading Language). There are loads of books and tutorials out there for that.
Once you've got some GLSL under your belt, take a look at some of the questions (like this one pulled from the Related sidebar) and other things people have written about the problem. (Note that WebGL on the desktop has essentially the same limitations as OpenGL ES on mobile, so WebGL-oriented material applies here too.)
With SceneKit, you have the additional wrinkle that you probably don't have a barycentric-coordinates vertex attribute (aka SCNGeometrySource) for the geometry you're working with, and you probably don't want to do the hard work of generating one. In OS X, you can use an SCNProgram with a geometryShader to add barycentric coordinates before the vertex/fragment shaders run — but then you have to do your own shading (i.e. you can't piggyback on the SceneKit shading like you can with shader modifiers). And that isn't available in iOS — the hardware there doesn't do geometry shaders. You might be able to fake it using texture coordinates if those happen to be lined up right in your geometry.
It might be easier to just draw the object using lines — try making a new SCNGeometry from the sources and elements of your original (solid) geometry, but when recreating the SCNGeometryElement, use SCNPrimitiveTypeLine.
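A rough sketch of that idea, assuming the original geometry uses triangle elements with 16-bit indices (the helper name is mine):
import SceneKit

func wireframeGeometry(from geometry: SCNGeometry) -> SCNGeometry {
    var lineElements: [SCNGeometryElement] = []
    for element in geometry.elements where element.primitiveType == .triangles {
        // Read the triangle indices (assumes 16-bit indices; use UInt32 if bytesPerIndex is 4)
        let indices = element.data.withUnsafeBytes { raw in
            Array(raw.bindMemory(to: UInt16.self))
        }
        // Turn each triangle (a, b, c) into the lines a-b, b-c, c-a
        var lineIndices: [UInt16] = []
        for triangle in 0..<element.primitiveCount {
            let a = indices[triangle * 3]
            let b = indices[triangle * 3 + 1]
            let c = indices[triangle * 3 + 2]
            lineIndices += [a, b, b, c, c, a]
        }
        lineElements.append(SCNGeometryElement(indices: lineIndices, primitiveType: .line))
    }
    // Reuse the original vertex data (positions, normals, texture coordinates) unchanged
    return SCNGeometry(sources: geometry.sources, elements: lineElements)
}
You would then assign the result back to the node in place of the original solid geometry.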
We are working on a Three.js based WebGL project and are having trouble understanding how transparency is handled in WebGL. The image shows a double-sided surface drawn with alpha = 0.7, which behaves correctly on its right side. However, closer to the middle, strange artifacts appear, and on the left side the transparency does not seem to work at all.
http://emilaxelsson.se/sandbox/vis1/alpha.png
The problem can also be seen here:
http://emilaxelsson.se/sandbox/vis1/
Has anyone seen anything similar before? What could the reason be?
Your problem is that transparent objects need to be sorted and rendered in back-to-front order (if you change the opacity of your mesh from 0.7 (transparent) to 1.0 (opaque), you can see that the z-buffer works just fine).
See:
http://www.opengl.org/wiki/Transparency_Sorting
http://www.opengl.org/archives/resources/faq/technical/transparency.htm (15.050)
In your case it might be less trivial to solve, since I assume that you only have one mesh.
Edit: Just to summarize the discussion below. It is possible to achieve correct rendering of such a double-sided transparent mesh. To do this, you need to create 6 versions of the mesh, corresponding to 6 sides of a cube. Each version needs to be sorted in a back-to-front order based on the 'side of the cube' (front, back, left, right, top, bottom).
When rendering choose the correct mesh (based on the camera viewing direction) and render that single mesh.
The easy solution for your case (based on the picture you attached), without resorting to expensive sorting and multiple meshes, is to disable the depth test and enable face culling. That produces acceptable results if you do not have any opaque objects in front of the mesh.