I have converted a Depth of Field shader from XNA 3.1 to 4.0. The problem is that it completely drains my colours and nothing else is rendered. You can see the problem here:
Project Vanquish - Depth of Field issue
I would be very grateful for any ideas.
EDIT
I thought it best to add the Render method too:
public void PostProcess(GraphicsDevice device)
{
    // Gaussian Blur Horizontal
    device.SetRenderTarget(this.GaussianHRT);
    device.Clear(Color.White);
    device.SetRenderTarget(null);
    this.SetBlurEffectParameters(1.0f / this.device.Viewport.Width, 0);
    this.DrawQuad(this.resolveTarget, this.gaussianBlur.Effect);

    // Gaussian Blur Vertical
    device.SetRenderTarget(this.GaussianVRT);
    device.Clear(Color.White);
    device.SetRenderTarget(null);
    this.SetBlurEffectParameters(0, 1.0f / this.device.Viewport.Height);
    this.DrawQuad(this.GaussianHRT, this.gaussianBlur.Effect);

    // Render result
    device.Textures[0] = this.resolveTarget;
    device.Textures[1] = this.GaussianVRT;
    device.Textures[2] = this.depthRT;
    this.DrawQuad(this.resolveTarget, this.combine.Effect);

    // Reset RenderStates
    this.ResetRenderStates();
}
By outputting to separate render targets, I can see some potential issues, but I can't understand why I get a blank render target even though I set the resolveTarget render target before rendering the scene. This post-process is then run after the scene has been rendered.
Are you saying that you ported it line for line to XNA 4 and it now exhibits this behaviour, or did you make code changes? It looks to me like the pixel shader is just returning invalid colour values.
Can you give us some details about what changed in your shader from 3.1 to 4.0?
Edit: To go down a different line of thought: another reason you'd see black rendering like this is lighting. I would double-check that all of your light parameters are being passed into the shader and that they are being calculated correctly.
You call the DrawQuad function before setting the textures; this leaves invalid textures bound to the Combine shader's inputs.
There are a number of other things that can go wrong here, but this is the most likely suspect.
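To make that concrete, the combine pass should bind every input before the draw call, along these lines (DrawQuad and the field names are taken from the question; this shows the ordering to verify, not a drop-in fix):

// A texture assigned after DrawQuad has no effect on that draw,
// so bind all three inputs first.
device.Textures[0] = this.resolveTarget;   // scene colour
device.Textures[1] = this.GaussianVRT;     // blurred scene
device.Textures[2] = this.depthRT;         // depth
this.DrawQuad(this.resolveTarget, this.combine.Effect);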
The problem existed in a function before this one, where I set the depth buffer. Thanks for the assistance.
I have a simple program that renders a couple of 3D objects using Direct3D 9 and HLSL. I'm just starting out with HLSL and have no prior experience with 3D rendering.
I am able to change the texture and color of the models and fade between two textures without problems. However, I was wondering what the best way would be to simply fade a 3D object into the background. I assume it can't be done by lerping between two textures, since I want the object to fade into the entire background, and there could be many different textures behind it.
I'm using LPD3DXEFFECT as my effect class and DrawIndexedPrimitive as the drawing function in each pass, and I only have a single pass. I'm also using Shader Model 3, as this is an older project.
The only way I could think of was to get the color of the pixel before any changes are applied, and then blend it with the color of the model's texture to produce a faded pixel. However, after looking around the internet, it does not appear to be possible to read a pixel's existing color from within an HLSL pixel shader.
Is it even possible to do something like this using HLSL? Am I missing something that could assist me here?
Any help is appreciated!
Forgive me if I'm misunderstanding, but it sounds like you're trying to simulate transparency instead of using built-in transparency.
If you're trying to get the color of the pixels behind the object and want to avoid using transparency, I'd start by trying to use the last rendered frame as a texture, then reference that texture in your current shader. There may be some way to do it within the same frame - to force all other rendering to go first, then handle the one object - but I don't know it.
After a long grind, I finally found a good workaround for my problem, and I will try to explain my understanding of it for anyone else who has a similar issue. Thanks to Alexander Stewart for suggesting that there may be a built-in way to do it.
Method Description
Instead of handling the background fade in the HLSL pixel shader, there is another way to do it: a technique called frame buffer alpha blending (full MS Docs documentation: https://learn.microsoft.com/en-us/windows/win32/direct3d9/frame-buffer-alpha).
The basic idea behind this method is to blend the pixel being rendered with the pixel already on the screen, following the formula FinalColor = ObjectPixelColor * SourceBlendFactor + BackgroundPixelColor * DestinationBlendFactor, where each of these "variables" is a group of four float values in the format (R, G, B, A). For example, with the blend factors chosen below and an object alpha of 0.7, the final pixel is 0.7 times the object color plus 0.3 times the background color.
How I Implemented it
Before doing anything with the actual shaders, in my Visual Studio C++ file I had to set a few render states on my device (I used LPDIRECT3DDEVICE9 as my device class). I had to set both D3DRS_SRCBLEND and D3DRS_DESTBLEND, which correspond to SourceBlendFactor and DestinationBlendFactor respectively in the formula above; these are the factors that multiply the object and background pixel colors. There are many possible values for D3DRS_SRCBLEND and D3DRS_DESTBLEND (the full list is available in the MS Docs link above), but to achieve what I wanted (simply a way to fade an object into the background with an alpha number going from 0 to 1), I figured out the states should be: SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA); SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);.
After setting these states, before running my shaders and rendering, I just needed to set one more: SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);. I was also able to toggle this between TRUE and FALSE without changing anything else and saw no rendering problems, although my project was very simple, so it will probably cause issues on larger projects. You can then pass any arguments you want, such as the alpha number, to the HLSL shader as a global variable (I did it using SetValue()).
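Putting the three render states together (a sketch; d3dDevice stands for the LPDIRECT3DDEVICE9 pointer):

// Enable frame buffer alpha blending and pick the blend factors:
// FinalColor = ObjectColor * srcAlpha + BackgroundColor * (1 - srcAlpha)
d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
d3dDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3dDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);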
Going back to my HLSL shader: after these changes, returning a float4 color taken from tex2D() in my pixel shader with an alpha value between 0 and 1 yielded the correct fade, provided there are no other issues. (Another issue I had, but hadn't realized at the time, was that my transparent object was actually rendering before the background, so I can only recommend checking the rendering order when working on rendering projects.)
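A minimal pixel shader sketch of the idea (the names are mine, not from the project; fadeAmount would be the global set from C++ via SetValue()):

// Hypothetical Shader Model 3 pixel shader: the render states above do the
// actual blending; the shader only has to output the desired alpha.
float fadeAmount;              // 0 = fully faded, 1 = fully visible
sampler2D modelTexture;

float4 FadePixelShader(float2 uv : TEXCOORD0) : COLOR0
{
    float4 color = tex2D(modelTexture, uv);
    color.a *= fadeAmount;     // consumed by D3DBLEND_SRCALPHA
    return color;
}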
I'm sure there is probably a better way of implementing this with the latest DirectX, but my compiler only supports Shader Model 3 and lower.
UPDATE:
I have posted the code online to demonstrate the issue:
http://cutama.github.io/
To see the problem, position the mouse over the red rectangle and zoom in and out with the mouse wheel. After a while you will see the triangles flickering; then use the left mouse button to rotate.
Controls:
left mouse click & drag: orbit; middle mouse click & drag: pan; mouse wheel: zoom
END UPDATE
I have encountered a strange rendering problem with WebGL. Whenever I move the camera around, some triangles appear to go missing at random. See the pictures below.
I have been digging around but could not find the cause. Any ideas what might be causing this?
This is the normal rendering of the geometry:
Missing triangles:
More missing triangles:
I did some debugging with the WebGL Inspector.
GL Trace:
Clicking on the missing pixel shows that it is being depth culled, but nothing is in front of it... so why is it culled?
Comparison with normal unculled pixel:
Vertex data inside the buffer. The triangles are very small. Is this causing the problem?
I don't know what causes this bug, but here is what you might need to consider:
Check that you specified the correct primitive topology (it should be gl.TRIANGLES in most cases).
Check whether you render with gl.drawElements or gl.drawArrays.
If you are using an index buffer, you should use gl.drawElements.
Check that you are using the correct culling configuration.
Check that you are using the correct depth comparison function (if you are not using the depth test, you don't need to care about this).
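For reference, an indexed draw with those states set explicitly might look like this (a sketch; the buffer and count variables are placeholders):

// Explicit depth and culling state, followed by an indexed draw.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS);                  // the usual comparison function
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK);                   // must match the winding of your data
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);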
Found the issue: it is caused by incorrect triangle indices in the model. Still not sure why that would cause flickering.
I've recently been trying to write some 3D rendering code in Silverlight 5 with XNA. Unfortunately I have been having trouble getting anything (using my custom shader) to work.
The BasicEffect works on a cube using only VertexPositionColor information, but when I switch to my custom shader nothing seems to render (or it renders off-screen).
To try to help myself with this issue I even got hold of the BasicEffect HLSL code, but it doesn't do anything I am not doing.
The code takes in a world, view and projection matrix and multiplies each one by a position in the following order:
float4 pos_ws = mul(position, World);      // object space -> world space
float4 pos_vs = mul(pos_ws, View);         // world space -> view space
float4 pos_ps = mul(pos_vs, Projection);   // view space -> clip space
I changed my code to do the same thing (instead of passing in a single WorldViewProjection matrix), and my shader uses this to calculate a position and then just applies a color to the pixel. Yet nothing is rendering.
I'm pretty stuck on this, I'm passing ok at basic 3D but passing ok doesn't seem to cut it! :)
So it turns out the issue is fairly simple!
I actually deleted this question initially because I knew the issue was likely my matrices and so it was unlikely I'd get much help!
After some stumbling around on Google, and more coffee than I'd like to admit to, I found the answer.
XNA transposes its matrices on the sly and doesn't tell you! I had tried transposing the view and projection matrices in the vain hope that I knew what I was doing, but it wasn't helping.
Instead, I now pass in a single WorldViewProjection_Transposed matrix, which is calculated as follows.
Matrix worldViewProjection_Transpose = Matrix.Transpose(world * view * projection);
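For completeness, a sketch of the whole pattern (effect.Parameters here stands in for however your effect wrapper uploads a matrix constant to the shader; the parameter name is a placeholder):

// HLSL packs matrix constants column-major by default, while XNA matrices
// are row-major, so pre-transpose once on the CPU.
Matrix worldViewProjectionTranspose = Matrix.Transpose(world * view * projection);
effect.Parameters["WorldViewProjection"].SetValue(worldViewProjectionTranspose);
// In the .fx file the usual row-vector order still applies:
//     output.Position = mul(input.Position, WorldViewProjection);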
This seems to work at the moment and I am hoping it is this simple.
I am sure I will come across a million more problems as the models I need to render become more complex, but I decided to leave this up in case anyone in a similar situation (and at a similar experience level) is struggling :)
We are working on a Three.js based WebGL project and have trouble understanding how transparency is handled in WebGL. The image shows a double-sided surface drawn with alpha = 0.7, which behaves correctly on its right side. However, closer to the middle strange artifacts appear, and on the left side the transparency does not seem to work at all.
http://emilaxelsson.se/sandbox/vis1/alpha.png
The problem can also be seen here:
http://emilaxelsson.se/sandbox/vis1/
Has anyone seen anything similar before? What could the reason be?
Your problem is that transparent objects need to be sorted and rendered in back-to-front order. (If you change the opacity of your mesh from 0.7 (transparent) to 1.0 (opaque), you can see that the z-buffer works just fine.)
See:
http://www.opengl.org/wiki/Transparency_Sorting
http://www.opengl.org/archives/resources/faq/technical/transparency.htm (15.050)
In your case it might be less trivial to solve, since I assume that you only have one mesh.
Edit: Just to summarize the discussion below: it is possible to achieve correct rendering of such a double-sided transparent mesh. To do this, you need to create six versions of the mesh, corresponding to the six sides of a cube. Each version needs to be pre-sorted in back-to-front order for its side of the cube (front, back, left, right, top, bottom).
When rendering, choose the correct mesh based on the camera viewing direction and render that single mesh.
The easy solution in your case (based on the picture you attached), without resorting to expensive sorting and multiple meshes, is to disable the depth test and enable face culling. That produces acceptable results if you do not have any opaque objects in front of the mesh.
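In Three.js that cheap route is just two material settings (a sketch using the standard material properties):

// Sketch: accept approximate blending instead of sorting the mesh.
var material = new THREE.MeshLambertMaterial({
    transparent: true,
    opacity: 0.7,
    depthTest: false,        // skip the depth test for this material
    side: THREE.FrontSide    // draw front faces only, i.e. cull back faces
});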
I have written a 2D Jump&Run engine that produces a 320x224 (320x240) image. To maintain the old-school "pixely" feel, I would like to scale the resulting image by 2, 3 or 4, according to the user's resolution.
I don't want to scale each and every sprite, but the resulting image!
Thanks in advance :)
Bob's answer is correct about changing the filtering mode to TextureFilter.Point to keep things nice and pixelated.
But a possibly better method than scaling each sprite (as you'd also have to scale the position of each sprite) is to just pass a matrix to SpriteBatch.Begin, like so:
sb.Begin(/* first three parameters */, Matrix.CreateScale(4f));
That will give you the scaling you want without having to modify all your draw calls.
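In XNA 4.0 terms, the full call might look like this (a sketch; keep whichever blend state your game already uses):

// Scale the whole scene and keep point sampling in one Begin call.
spriteBatch.Begin(SpriteSortMode.Deferred,
                  BlendState.AlphaBlend,
                  SamplerState.PointClamp,   // crisp, unfiltered pixels
                  null,                      // default depth-stencil state
                  null,                      // default rasterizer state
                  null,                      // no custom effect
                  Matrix.CreateScale(4f));   // 4x whole-scene scale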
However it is worth noting that, if you use floating-point offsets in your game, you will end up with things not aligned to pixel boundaries after you scale up (with either method).
There are two solutions to this. The first is to have a function like this:
public static Vector2 Floor(Vector2 v)
{
    return new Vector2((float)Math.Floor(v.X), (float)Math.Floor(v.Y));
}
Then pass your position through that function every time you draw a sprite. This might not work if your sprites use rotation or offsets, though, and again you'd be back to modifying every single draw call.
The "correct" way to do this, if you want a plain point-wise scale-up of your whole scene, is to draw your scene to a render target at the original size. And then draw your render target to screen, scaled up (with TextureFilter.Point).
The function you want to look at is GraphicsDevice.SetRenderTarget. This MSDN article might be worth reading. If you're on or moving to XNA 4.0, this might be worth reading.
I couldn't quickly find a simpler XNA sample for this, but the Bloom postprocess sample uses a render target that it then applies a blur shader to. You could simply ignore the shader entirely and just do the scale-up.
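A minimal sketch of the render-target approach in XNA 4.0 (the sizes, the DrawScene placeholder and the 4x factor are illustrative):

// Created once, at the game's native resolution.
RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice, 320, 224);

// Each frame: draw the scene at native size...
GraphicsDevice.SetRenderTarget(sceneTarget);
GraphicsDevice.Clear(Color.Black);
DrawScene();                               // your existing drawing code
GraphicsDevice.SetRenderTarget(null);

// ...then draw the result scaled up with point sampling.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                  SamplerState.PointClamp, null, null);
spriteBatch.Draw(sceneTarget, new Rectangle(0, 0, 320 * 4, 224 * 4), Color.White);
spriteBatch.End();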
You could use a pixelation effect: draw to a RenderTarget2D, then draw the result to the screen using a pixel shader. There's a tool called Shazzam Shader Editor that lets you try out pixel shaders, and it includes one that does pixelation:
http://shazzam-tool.com/
This may not be what you wanted, but it could be good for allowing a high-resolution mode and for having the same effect no matter what resolution was used...
I'm not exactly sure what you mean by "resulting in ... an image", but if you mean your end result is a texture, then you can draw it to the screen with a scale:
spriteBatch.Draw(texture, position, source, color, rotation, origin, scale, effects, depth);
Just replace the scale with whatever number you want (2, 3, or 4). I do something similar, but I scale per sprite rather than the resulting image. If you mean something else, let me know and I'll try to help.
XNA defaults to linear filtering, which smooths the scaled image. If you want to retain the pixelated goodness, you'll need to draw in immediate sort mode and set some additional parameters:
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None);
GraphicsDevice.SamplerStates[0].MagFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MinFilter = TextureFilter.Point;
GraphicsDevice.SamplerStates[0].MipFilter = TextureFilter.Point;
It's either the Point or the None TextureFilter. I'm at work so I'm trying to remember off the top of my head. I'll confirm one way or the other later today.