I've recently been trying to write some 3D rendering code in Silverlight 5 with XNA. Unfortunately I've had trouble getting anything (using my custom shader) to work.
A BasicEffect applied to a cube, using only VertexPositionColor information, renders fine, but when I switch to a custom shader nothing seems to render (or it renders off-screen).
To try to help myself with this issue I even got hold of the BasicEffect HLSL code, but it doesn't do anything I'm not already doing.
The code takes in world, view and projection matrices and multiplies the position by each one in the following order:
float4 pos_ws = mul(position, World);
float4 pos_vs = mul(pos_ws, View);
float4 pos_ps = mul(pos_vs, Projection);
I changed my code to do the same thing (instead of passing in a single WorldViewProjection matrix); my shader uses this to calculate a position and then just applies a color to the pixel. Yet nothing renders.
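On the C# side, the upload looks roughly like this (a sketch using the XNA-style Effect.Parameters API; Silverlight 5's shader-constant plumbing may differ, and the parameter names are just the ones I declared in my shader):
// Sketch: set the three matrices on parameters declared in the HLSL above.
// Assumes an XNA-style Effect; "World", "View" and "Projection" are my own names.
effect.Parameters["World"].SetValue(world);
effect.Parameters["View"].SetValue(view);
effect.Parameters["Projection"].SetValue(projection);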
I'm pretty stuck on this. I'm passable at basic 3D, but passable doesn't seem to cut it! :)
So it turns out the issue is fairly simple!
I actually deleted this question initially because I knew the issue was likely my matrices and so it was unlikely I'd get much help!
After some stumbling around on Google, and more coffee than I'd like to admit to, I found the answer.
XNA transposes its matrices on the sly and doesn't tell you! I had tried transposing the view and projection matrices in the vain hope that I knew what I was doing, but it wasn't helping.
Instead I now pass in a single WorldViewProjection_Transposed matrix, calculated as follows:
Matrix worldViewProjection_Transpose = Matrix.Transpose(world * view * projection);
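The matrix then gets uploaded like any other shader parameter (again a sketch assuming an XNA-style Effect; the parameter name is just what my shader declares):
// Upload the pre-transposed matrix; the shader multiplies the position by it directly.
effect.Parameters["WorldViewProjection_Transposed"].SetValue(worldViewProjection_Transpose);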
This seems to work at the moment and I am hoping it is this simple.
I am sure I will come across a million more problems as the models I need to render become more complex, but I decided to leave this up in case anyone in a similar situation (and at a similar experience level) is struggling :)
I have a simple program that renders a couple of 3D objects using Direct3D 9 and HLSL. I'm just starting off with HLSL and have no experience with 3D rendering.
I am able to change the texture and color of the models and fade between two textures without problems, but I was wondering what the best way would be to simply fade a 3D object out (blend it with the background). I assume it can't be done by lerping between two textures, since I want the object to fade into the entire background, and there could be many different textures behind it.
I'm using LPD3DXEFFECT as my effect class and DrawIndexedPrimitive as the drawing function, with only a single pass. I'm also using Shader Model 3, as this is an older project.
The only way I could think of was to read the color of the pixel before applying any changes, then combine it with the color of the model's texture to get a faded pixel. However, from what I've found online, it doesn't appear to be possible in HLSL to read a pixel's current color before writing to it.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a very good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of handling the background fade in the HLSL pixel shader, there is another way to do it: frame-buffer alpha blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to provide a simple way of blending each pixel that is about to be rendered with the pixel already on the screen. It follows the formula: FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, where each of these "variables" is a group of 4 float values in the format (R, G, B, A).
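To make the formula concrete, here is the SRCALPHA/INVSRCALPHA case (the one used below) written out as plain code. This is illustrative only - it is the math, not D3D API code:
// Illustrative only: the blend equation with SRCALPHA / INVSRCALPHA factors.
// src and dst are (R, G, B, A) colors with components in [0, 1].
static float[] Blend(float[] src, float[] dst)
{
    float a = src[3]; // the source (object) alpha drives the fade
    return new float[]
    {
        src[0] * a + dst[0] * (1 - a), // R
        src[1] * a + dst[1] * (1 - a), // G
        src[2] * a + dst[2] * (1 - a), // B
        src[3] * a + dst[3] * (1 - a)  // A
    };
}
With an alpha of 0.25, for example, the final pixel is 25% object color and 75% background color.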
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I had to set a few render states on my render device (I used LPDIRECT3DDEVICE9 as my device class): D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor in the formula above. These are the factors that multiply my object and background pixel colors respectively. There are many possible values for D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is in the MS Docs link above), but to achieve what I wanted (simply a way to fade an object into the background with an alpha number going from 0 to 1), I figured out the states should be:
SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
After setting those states, before running my shaders and rendering, I just needed to set one more:
SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
I was also able to toggle this between TRUE and FALSE without changing anything else and without rendering problems (although my project was very simple, so it may cause issues in larger projects). You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
Going back to my HLSL shader: after these changes, returning a float4 color taken from tex2D() in my pixel shader, with an alpha value between 0 and 1, yielded the correct fade, provided there weren't other issues. (One issue I had, but hadn't realized at the time, was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects.)
I'm sure there is probably a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.
I'm working with Delphi / FireMonkey XE8. I've had some decent luck with it recently, although you have to hack the heck out of it to get it to do what you want. My current project is to evaluate its low-level 3D capabilities to see if I can use them as a starting point for a game project. I also know Unity3D quite well, and am considering using it instead, but I figure that Delphi / FireMonkey might give me some added flexibility in my game design because it is so minimal.
So I decided to dig into an Embarcadero-supplied sample... specifically the LowLevel3D sample. This is the cross-platform sample that shows you how to do nothing other than draw a rotating square on the screen with some custom shaders of your choice and have it look the same on all platforms (although it actually doesn't work AT ALL the same on all platforms... but we won't get into that).
Embarcadero does not supply the original uncompiled shaders for the project (which, I might add, is really lame), and requires you to supply your own compatible shaders (some compiled, some not) for the various platforms you're targeting (also lame). So my first job has been to create a shader that works with their existing sample but does something OTHER than what the sample already does. Specifically, since I'm creating a 2D game, I wanted to make sure that I could do sprites with alpha transparency - basic stuff. If I can get this working, I'll probably never have to write another shader for the whole game.
After pulling my hair out for many hours, I came up with this little shader that works with the same parameters as the demo.
Texture2D mytex0: register(t0);
Texture2D mytex1: register(t1);
float4 cccc : register(v0);
struct PixelShaderInput
{
float4 Pos: COLOR;
float2 Tex: TEXCOORDS;
};
SamplerState g_samLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
RasterizerState MyCull {
FrontCounterClockwise = FALSE;
};
float4 main(PixelShaderInput pIn): SV_TARGET
{
float4 cc,c;
float4 ci = mytex1.Sample(g_samLinear, pIn.Tex.xy);
c = ci;
c.a = 0;//<----- DOES NOT actually SET ALPHA TO ZERO ... THIS IS A PROBLEM
cc = c;
return cc;
}
Never mind that it doesn't actually do much with the parameters, but check out the line where I set the output's ALPHA to 0. Well... I found that this actually HAS NO EFFECT!
But it gets spookier than this. I found that turning on CULLING in the Delphi app FIXED this issue. So I figured... no big deal then, I'll just manually draw both sides of the sprite, right? Nope! When I manually drew a double-sided sprite, the problem came back!
Check this image: shader is ignoring alpha=0 when double-sided
In the above picture, alpha is clearly SOMEWHAT obeyed, because the clouds are not surrounded by a black box; however, the cloud itself is super-saturated. (I find that if I multiply rgb by a, the colors come out approximately right, but I'm not going to do that in real life for obvious reasons.)
I'm new to the concept of writing custom shaders. Any insight is appreciated.
I'm using SceneKit on iOS and I have a geometry I want to render as a wireframe. So basically I want to draw only the lines, so no textures.
I figured out that I could use the shaderModifiers property of the SCNMaterial in use to accomplish this. An example of a shader modifier:
material.shaderModifiers = [
SCNShaderModifierEntryPointFragment: "_output.color.rgb = vec3(1.0) - _output.color.rgb;"
]
This example apparently simply inverts the output colors. I know nothing about this 'GLSL' language I have to use for the shader fragment.
Can anybody tell me what code I should use as the shader fragment to only draw near the edges, to make the geometry look like a wireframe?
Or maybe there is a whole other approach to render a geometry as a wireframe. I would love to hear it.
Try setting the material's fillMode to .lines (iOS 11+ and macOS 10.13+):
sphereNode.geometry?.firstMaterial?.fillMode = .lines
Now it is possible (at least in Cocoa) with:
gameView.debugOptions.insert(SCNDebugOptions.showWireframe)
or you can do it interactively if enabling the statistics with:
gameView.showsStatistics = true
(gameView is an instance of SCNView)
This is not (quite) an answer, because this is a question without an easy answer.
Doing wireframe rendering entirely in shader code is a lot more difficult than it seems like it should be, especially on mobile where you don't have a geometry shader. The problem is that the vertex shader (and subsequently the fragment shader) just doesn't have the information needed to know where polygon edges are.
I know nothing about this 'GLSL' language I have to use for the shader fragment.
If you really want to tackle this problem, you'll need to learn some more about GLSL (the OpenGL Shading Language). There are loads of books and tutorials out there for that.
Once you've got some GLSL under your belt, take a look at some of the questions (like this one pulled from the Related sidebar) and other stuff people have written about the problem. (Note that when you're looking for mobile-specific limitations, OpenGL ES has the same limitations as WebGL on the desktop.)
With SceneKit, you have the additional wrinkle that you probably don't have a barycentric-coordinates vertex attribute (aka SCNGeometrySource) for the geometry you're working with, and you probably don't want to do the hard work of generating one. In OS X, you can use an SCNProgram with a geometryShader to add barycentric coordinates before the vertex/fragment shaders run — but then you have to do your own shading (i.e. you can't piggyback on the SceneKit shading like you can with shader modifiers). And that isn't available in iOS — the hardware there doesn't do geometry shaders. You might be able to fake it using texture coordinates if those happen to be lined up right in your geometry.
It might be easier to just draw the object using lines — try making a new SCNGeometry from the sources and elements of your original (solid) geometry, but when recreating the SCNGeometryElement, use SCNPrimitiveTypeLine.
I'm using Ray Wenderlich's tutorials to make a simple OpenGL ES 2 app using GLKit, and I've come across some problems.
I changed the sample code to display two cubes by adding vertex and indices data to the existing vertex and indices data structs. It works, and draws two cubes to the screen.
The problem is that when the new cube is behind the old one, it shows through. However, when the old cube is behind the new one, it doesn't show through.
Perhaps my depth testing is messed up?
I can't post images because of my reputation :(
Here's a link to the source code though:
https://www.dropbox.com/s/4xrq3gmnmbcz02m/EthanGillCubeSnap.zip
Any help is much appreciated!
On line 279 of HelloGLKitViewController.m I added the line below and it rendered correctly:
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
You need to make sure to set the drawable depth format on your GLKView, or else no depth buffer will be created - which is what was happening to you before.
I have converted a Depth of Field shader from XNA 3.1 to 4.0. The problem is, it is completely draining my colours and not rendering anything else. You can see the problem here:
Project Vanquish - Depth of Field issue
I'd be very grateful for any ideas.
EDIT
I thought it best to add the Render method too:
public void PostProcess(GraphicsDevice device)
{
// Gaussian Blur Horizontal
device.SetRenderTarget(this.GaussianHRT);
device.Clear(Color.White);
device.SetRenderTarget(null);
this.SetBlurEffectParameters(1.0f / this.device.Viewport.Width, 0);
this.DrawQuad(this.resolveTarget, this.gaussianBlur.Effect);
// Gaussian Blur Vertical
device.SetRenderTarget(this.GaussianVRT);
device.Clear(Color.White);
device.SetRenderTarget(null);
this.SetBlurEffectParameters(0, 1.0f / this.device.Viewport.Height);
this.DrawQuad(this.GaussianHRT, this.gaussianBlur.Effect);
// Render result
device.Textures[0] = this.resolveTarget;
device.Textures[1] = this.GaussianVRT;
device.Textures[2] = this.depthRT;
this.DrawQuad(this.resolveTarget, this.combine.Effect);
// Reset RenderStates
this.ResetRenderStates();
}
By outputting the stages to separate RenderTargets, I can see some potential issues, but I can't understand why I get a blank RenderTarget even though I set the resolveTarget RenderTarget before rendering the whole scene. The post-process is then run after the scene has been rendered.
Are you saying that you ported it line for line to XNA 4 and it now exhibits this behavior, or did you make code changes? It looks to me like the pixel shader is just returning invalid color values.
Can you give us some details about what changed in your shader from 3.1 to 4.0?
Edit: To go down a different line of thought... another reason you'd see black rendering like this is lighting. I would double-check that all of your light parameters are being passed into the shader and that they are calculated correctly.
You call the DrawQuad function before setting the textures; this would result in invalid textures being bound to the Combine shader's inputs.
There are a number of other things that could go wrong here, but this is the most likely suspect.
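Also note that in XNA 4.0 a render target only receives what you draw while it is bound; anything drawn after SetRenderTarget(null) goes to the back buffer. A minimal sketch of the usual flow, using your DrawQuad helper as a stand-in:
// Usual XNA 4.0 flow: bind the target, draw into it, then unbind it.
device.SetRenderTarget(this.GaussianHRT);   // bind
device.Clear(Color.White);
this.DrawQuad(this.resolveTarget, this.gaussianBlur.Effect); // draws into GaussianHRT
device.SetRenderTarget(null);               // unbind; GaussianHRT is now readable as a texture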
The problem existed in a function before this one, where I set the DepthBuffer. Thanks for the assistance.