Occasional missing polygons drawing a sky sphere (at far plane) - directx

I am drawing a sky sphere as the background for a 3D view. Occasionally, when navigating around the view, there is a visual glitch that pops in:
Example of the glitch: a black shape where rendering has apparently not placed fragments onscreen
Black is the colour the device is cleared to at the beginning of each frame.
The shape of the black area is different each time, and is sometimes visibly made up of many polygons. They are always centred on a common point, usually close to the centre of the screen.
Repainting without changing the navigation (eye position and look) doesn't make the glitch vanish, i.e. it does seem to be dependent on specific navigation
The moment navigation is changed, even an infinitesimal amount, it vanishes and the sky draws solidly. The vast majority of painting is correct. Eventually as you move around you will spot another glitch.
Changing the radius of the sphere (to, say, 0.9 of the near/far plane distance) doesn't seem to remove the glitches
Changing Z-buffer writing or the Z test in the effect technique makes no difference
There is no DX debug output (when running with the debug version of the runtime, maximum validation, and shader debugging enabled.)
What could be the cause of these glitches?
I am using Direct3D9 (June 2010 SDK), shaders compiled to SM3, and the glitch has been observed on ATI cards and VMWare Fusion virtual cards on Windows 7 and XP.
Example code
The sky is being drawn as a sphere (error-checking etc. removed in the code below):
To create
const float fRadius = GetScene().GetFarPlane() - GetScene().GetNearPlane()*2;
D3DXCreateSphere(GetScene().GetDevicePtr(), fRadius, 64, 64, &m_poSphere, 0);
Changing the radius doesn't seem to affect the presence of glitches.
Vertex shader
OutputVS ColorVS(float3 posL : POSITION0, float4 c : COLOR0) {
    OutputVS outVS = (OutputVS)0;
    // Center around the eye
    posL += g_vecEyePos;
    // Transform to homogeneous clip space.
    outVS.posH = mul(float4(posL, 1.0f), g_mWorldViewProj).xyzw; // Always on the far plane
    return outVS;
}
Pixel shader
It doesn't matter; even one outputting a solid colour will glitch:
float4 ColorPS(float altitude : COLOR0) : COLOR {
    return float4(1.0, 0.0, 0.0, 1.0);
}
The same image with a solid-colour pixel shader, to be certain the PS isn't the cause of the problem
Technique
technique BackgroundTech {
    pass P0 {
        // Specify the vertex and pixel shader associated with this pass.
        vertexShader = compile vs_3_0 ColorVS();
        pixelShader = compile ps_3_0 ColorPS();
        // sky is visible from inside - cull mode is inverted (clockwise)
        CullMode = CW;
    }
}
I tried adding in state settings affecting the depth, such as ZWriteEnabled = false. None made any difference.

The problem is certainly caused by far plane clipping. If changing the sphere's radius a bit doesn't help, then the sphere's position may be wrong...
Make sure you're properly initializing the g_vecEyePos constant (maybe you've misspelled its name in one of the DirectX shader-constant-setting calls?).
Also, if you've included the translation to the eye's position in the world matrix of g_mWorldViewProj, you shouldn't do posL += g_vecEyePos; in your VS, because each vertex would then be translated by the eye position twice.
In other words you should choose one of these options:
g_mWorldViewProj = mCamView * mCamProj; and posL += g_vecEyePos;
g_mWorldViewProj = MatrixTranslation(g_vecEyePos) * mCamView * mCamProj;

Related

Dissolve SKShader works as expected on simulator, strange behaviour on actual device

I encountered weird behaviour when trying to create a dissolve shader for iOS SpriteKit. I have this basic shader that, for now, only changes the alpha of a texture depending on the black value of a noise texture:
let shader = SKShader(source: """
void main() {\
vec4 colour = texture2D(u_texture, v_tex_coord);\
float noise = texture2D(noise_tex, v_tex_coord).r;\
gl_FragColor = colour * noise;\
}
""", uniforms: [
SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in spriteSheet preload callback.
On the simulator this consistently gives the expected result, i.e. a texture with different alpha values all over the place. On an actual device (iOS 14.5.1) it varies:
1. Applied directly to an SKSpriteNode - it makes the whole texture semi-transparent with a single value
2. Applied to an SKEffectNode with the SKSpriteNode as its child - I see a miniaturized part of the whole spritesheet
3. Same as 2, but the texture is created from an image outside the spritesheet - it works as on the simulator (and as expected)
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried 3 won't work everywhere. So I'd like to understand why it happens and, ideally, find a sure way to force 1 (or at least 2) to work on all devices.
After some more testing I finally figured out what is happening. On devices, the textures seen by the shader are whole spritesheets instead of separate textures, so the coordinates go all over the place (which actually makes more sense than the simulator behaviour, now that I think about it).
So depending on whether I want 1 or 2, I need to apply different maths. 2 is easier, since the displayed texture is first rendered onto a buffer, so v_tex_coord spans the full [0.0, 1.0] range; all I need is the noise texture's rect to do the appropriate transform. For 1, I additionally need to provide the sprite's texture rect, remap v_tex_coord into [0.0, 1.0] myself, and then apply that to the noise coordinates.
This will work whether spritesheets or separate images are loaded into the shader; in the latter case it just does some unnecessary calculations.

Back face culling in SceneKit

I am currently trying to set up a rotating ball in SceneKit. I have created the ball and applied a texture to it.
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.doubleSided = true
ballGeometry.materials = [ballMaterial]
The current ballTexture is a semi-transparent texture as I am hoping to see the back face roll around.
However I get some strange culling where only half of the back facing polygons are shown even though the doubleSided property is set to true.
Any help would be appreciated, thanks.
This happens because the effects of transparency are draw-order dependent. SceneKit doesn't know to draw the back-facing polygons of the sphere before the front-facing ones. (In fact, it can't really do that without reorganizing the vertex buffers on the GPU for every frame, which would be a huge drag on render performance.)
The vertex layout for an SCNSphere has it set up like the lat/long grid on a globe: the triangles render in order along the meridians from 0° to 360°, so depending on how the sphere is oriented with respect to the camera, some of the faces on the far side of the sphere will render before the nearer ones.
To fix this, you need to force the rendering order — either directly, or through the depth buffer. Here's one way to do that, using a separate material for the inside surface to illustrate the difference.
// add two balls, one a child of the other
let node = SCNNode(geometry: SCNSphere(radius: 1))
let node2 = SCNNode(geometry: SCNSphere(radius: 1))
scene.rootNode.addChildNode(node)
node.addChildNode(node2)
// cull back-facing polygons on the first ball
// so we only see the outside
let mat1 = node.geometry!.firstMaterial!
mat1.cullMode = .Back
mat1.transparent.contents = bwCheckers
// my "bwCheckers" uses black for transparent, white for opaque
mat1.transparencyMode = .RGBZero
// cull front-facing polygons on the second ball
// so we only see the inside
let mat2 = node2.geometry!.firstMaterial!
mat2.cullMode = .Front
mat2.diffuse.contents = rgCheckers
// sphere normals face outward, so to make the inside respond
// to lighting, we need to invert them
let shader = "_geometry.normal *= -1.0;"
mat2.shaderModifiers = [SCNShaderModifierEntryPointGeometry: shader]
(The shader modifier bit at the end isn't required — it just makes the inside material get diffuse shading. You could just as well use a material property that doesn't involve normals or lighting, like emission, depending on the look you want.)
You can also do this using a single node with a double-sided material by disabling writesToDepthBuffer, but that could also lead to undesirable interactions with the rest of your scene content — you might also need to mess with renderingOrder in that case.
macOS 10.13 and iOS 11 added SCNTransparencyMode.dualLayer which as far as I can tell doesn't even require setting isDoubleSided to true (the documentation doesn't provide any information at all). So a simple solution that's working for me would be:
ballMaterial.diffuse.contents = UIImage(named: ballTexture)
ballMaterial.transparencyMode = .dualLayer
ballGeometry.materials = [ballMaterial]

How to draw only the models "in front of the camera", fully or partially displayed - XNA

I am developing a small Minecraft-style game in XNA.
As there are a lot of cubes to draw, I created a function that draws only the cubes in front of the camera. But the problem is that if a cube is not completely inside my field of vision, it will not be drawn.
As you can see in the screenshot below, cubes located on the edges are not drawn.
How can I draw the cubes that are fully or partially in front of the camera, and not only those that are entirely visible?
Thanks a lot
Here is my code to check whether the frustum contains the model:
// Initialize the frustum
private void GenerateFrustum()
{
    Matrix viewProjection = View * Projection;
    Frustum = new BoundingFrustum(viewProjection);
}

// Update the frustum
private void UpdateFrustum()
{
    Matrix viewProjection = View * Projection;
    Frustum.Matrix = viewProjection;
}

// Adds an instanced model's transform only if the model is in the field of view
private void udpateTransformModelInstancied()
{
    for (int i = 0; i < ListInstance.Count; i++)
    {
        if (camera.Frustum.Contains(ListInstance[i].Transform.Translation) != ContainmentType.Disjoint)
        {
            instanceTransforms.Add(ListInstance[i].Transform);
        }
    }
    .......
}
Screenshot:
You're checking the position of the cubes. This means that you're not taking the cubes' physical size into account; you're treating them as a point, and if that point is out of view, then you won't render it. What you need to do is check whether any part of the cube is visible. The two simplest ways to do this are to work out a bounding shape and use that to check, or to check whether your view contains any of the cube's corner points, rather than just its position point.
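A minimal sketch of the bounding-shape variant, assuming 1x1x1 cubes whose Translation is the cube's centre (adjust the extents to your actual cube size and origin convention):

// Hypothetical per-cube check: build a box that encloses the whole cube,
// then test the box against the frustum instead of a single point.
Vector3 center = ListInstance[i].Transform.Translation;  // assumed to be the cube's centre
Vector3 halfExtents = new Vector3(0.5f);                  // assumed 1x1x1 cubes
BoundingBox cubeBounds = new BoundingBox(center - halfExtents, center + halfExtents);

// Disjoint means "completely outside"; Intersects and Contains both mean "draw it".
if (camera.Frustum.Contains(cubeBounds) != ContainmentType.Disjoint)
{
    instanceTransforms.Add(ListInstance[i].Transform);
}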
Testing bounding structures for your cubes instead of just their positions would work, but it adds complexity: you have to manage a bunch of bounding structures, plus the maths of testing a structure rather than a point. If you need the bounding structures for other things, go that route. If not, I would simply take the cube's position, determine a point one cube-width to its left and another to its right, and test those points. If either is 'in', draw the cube.
Vector3 leftRightVector = Matrix.Transpose(view).Right; // calc once per frame (or only when the camera rotates)
Vector3 testPoint1 = cubePosition + (leftRightVector * maxCubeWidth); // calc for each cube
Vector3 testPoint2 = cubePosition + (leftRightVector * -maxCubeWidth);
if (frustum.Contains(testPoint1) != ContainmentType.Disjoint ||
    frustum.Contains(testPoint2) != ContainmentType.Disjoint)
{
    //draw
}
As far as I can see, you are just checking the position of each cube. The most effective fix would be to make a BoundingSphere that fully encompasses one of your cubes, translate it to the cube's location, and do the Frustum.Contains test with that sphere instead of the position :)
Also: make the sphere just a tad larger than needed, to account for the edges of the frustum. And remember: if you want to make a Minecraft clone, you will need some sort of batch-rendering technique. I recommend using an instance buffer, as there will be less data to send to the GPU, and it is easier to implement.
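A minimal sketch of that sphere test, assuming 1x1x1 cubes whose Translation is the cube's centre (the sphere enclosing a unit cube has radius √3/2 ≈ 0.87; the extra factor is the suggested padding):

// Hypothetical per-cube check using an enclosing sphere with a little slack.
float cubeSize = 1f;                                      // assumed cube edge length
float radius = 0.87f * cubeSize * 1.05f;                  // enclosing radius plus ~5% margin
Vector3 center = ListInstance[i].Transform.Translation;   // assumed to be the cube's centre

BoundingSphere cubeSphere = new BoundingSphere(center, radius);
if (camera.Frustum.Contains(cubeSphere) != ContainmentType.Disjoint)
{
    instanceTransforms.Add(ListInstance[i].Transform);
}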

Scaling entire screen in XNA

Using XNA, I'm trying to make an adventure game engine that lets you make games that look like they fell out of the early 90s, like Day of the Tentacle and Sam & Max Hit the Road. Thus, I want the game to actually run at 320x240 (I know, it should probably be 320x200, but shh), but it should scale up depending on user settings.
It works kind of okay right now, but I'm running into some problems where I actually want it to look more pixellated than it currently does.
Here's what I'm doing right now:
In the game initialization:
public Game() {
    graphics = new GraphicsDeviceManager(this);
    graphics.PreferredBackBufferWidth = 640;
    graphics.PreferredBackBufferHeight = 480;
    graphics.PreferMultiSampling = false;
    Scale = graphics.PreferredBackBufferWidth / 320;
}
Scale is a public static variable that I can check anytime to see how much I should scale my game relative to 320x240.
In my drawing function:
spriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.NonPremultiplied, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullNone, null, Matrix.CreateScale(Game.Scale));
This way, everything is drawn at 320x240 and blown up to fit the current resolution (640x480 by default). And of course I do math to convert the actual coordinates of the mouse into 320x240 coordinates, and so forth.
Now, this is great and all, but now I'm getting to the point where I want to start scaling my sprites, to have them walk into the distance and so forth.
Look at the images below. The upper-left image is a piece of a screenshot from when the game is running at 640x480. The image to the right of it is how it "should" look, at 320x240. The bottom row of images is just the top row blown up to 300% (in Photoshop, not in-engine) so you can see what I'm talking about.
In the 640x480 image, you can see different "line thicknesses;" the thicker lines are how it should really look (one pixel = 2x2, because this is running at 640x480), but the thinner lines (1x1 pixel) also appear, when they shouldn't, due to scaling (see the images on the right).
Basically I'm trying to emulate a 320x240 display but blown up to any resolution using XNA, and matrix transformations aren't doing the trick. Is there any way I could go about doing this?
Render everything in the native resolution to a RenderTarget instead of the back buffer:
SpriteBatch targetBatch = new SpriteBatch(GraphicsDevice);
RenderTarget2D target = new RenderTarget2D(GraphicsDevice, 320, 240);
GraphicsDevice.SetRenderTarget(target);
//perform draw calls
Then render this target (your whole screen) to the back buffer:
//set rendering back to the back buffer
GraphicsDevice.SetRenderTarget(null);
//render target to back buffer
targetBatch.Begin();
targetBatch.Draw(target, new Rectangle(0, 0, GraphicsDevice.DisplayMode.Width, GraphicsDevice.DisplayMode.Height), Color.White);
targetBatch.End();
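One detail worth adding: with SpriteBatch's default states the upscale is filtered bilinearly, which blurs the 320x240 image. A minimal sketch of the final draw using point sampling and the actual back-buffer size (the names match the snippet above; the specific Begin states are my suggestion, adjust as needed):

// Draw the 320x240 target to the back buffer with nearest-neighbour sampling,
// so each low-res pixel stays a sharp block instead of being smoothed.
GraphicsDevice.SetRenderTarget(null);
targetBatch.Begin(SpriteSortMode.Deferred, BlendState.Opaque,
                  SamplerState.PointClamp, DepthStencilState.None, RasterizerState.CullNone);
targetBatch.Draw(target,
                 new Rectangle(0, 0,
                               GraphicsDevice.PresentationParameters.BackBufferWidth,
                               GraphicsDevice.PresentationParameters.BackBufferHeight),
                 Color.White);
targetBatch.End();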

Making parts of Texture2D transparent in XNA

I'm just starting game development and I thought a game like Tank wars or Worms would be nice.
The hardest part I can think of so far is making the terrain destructible and I want to know how it's done before doing the easy parts.
I thought that each explosion could have a mask texture, which could be scaled for different weapons. Then, using that mask, I would make the underlying terrain transparent (and optionally draw a dark border).
(source: mikakolari.fi)
How do I achieve that?
Do I have to change the alpha value pixel by pixel or can I use some kind of masking technique? Drawing a blue circle on top of the terrain isn't an option.
I have versions 3.1 and 4.0 of XNA.
This tutorial is what you are searching for:
http://www.riemers.net/eng/Tutorials/XNA/Csharp/series2d.php
Chapter 20: Adding explosion craters
In short:
You have two textures: a Color Texture (visible) and a Collision Texture (invisible).
You subtract the explosion image from your collision texture.
To get the dark border: expand the explosion texture and darken the color in that area.
Then you generate a new Color Texture (old color - collision = new color).
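A minimal sketch of carving a crater on the CPU, assuming the terrain is a Texture2D whose pixels you can edit directly; the helper name, the circular mask and the parameters are my own illustration rather than the tutorial's exact code:

// Hypothetical helper: carve a circular crater of radius r, centred at (cx, cy),
// out of a terrain texture by clearing the affected pixels to transparent.
static void CarveCrater(Texture2D terrain, int cx, int cy, int r)
{
    Color[] bits = new Color[terrain.Width * terrain.Height];
    terrain.GetData(bits);
    for (int y = -r; y <= r; y++)
    {
        for (int x = -r; x <= r; x++)
        {
            int px = cx + x, py = cy + y;
            bool inCircle = x * x + y * y <= r * r;
            bool onTexture = px >= 0 && px < terrain.Width && py >= 0 && py < terrain.Height;
            if (inCircle && onTexture)
            {
                bits[px + py * terrain.Width] = Color.Transparent; // "subtract" this pixel
            }
        }
    }
    terrain.SetData(bits);
}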
This is a difficult question to answer - because there are many ways you could do it. And there are pros and cons to each method. I'll just give an overview:
As an overall design, you need to keep track of: the original texture, the "darkness" applied, and the "transparency" applied. One thing I can say almost for sure is you want to "accumulate" the results of the explosions somewhere - what you don't want to be doing is maintaining a list of all explosions that have ever happened.
So you have surfaces for texture, darkness and transparency. You could probably merge darkness and transparency into a single surface with a single channel that stores "normal", "dark" (or a level of darkness) and "transparent".
Because you probably don't want the dark rings to get progressively darker where they intersect, apply each explosion to your darkness layer with the max function (Math.Max in C#).
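A minimal sketch of that accumulation, assuming the darkness layer and the rasterised explosion darkness are same-sized byte arrays (0 = untouched, 255 = fully dark); the array names are illustrative:

// Fold one explosion's darkness into the accumulated layer.
// Math.Max keeps overlapping rings at the darkest value reached
// instead of letting them stack up and get ever darker.
for (int i = 0; i < darknessLayer.Length; i++)
{
    darknessLayer[i] = Math.Max(darknessLayer[i], explosionDarkness[i]);
}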
To produce your final texture you could just write from the darkness/transparency texture to your original texture or a copy of it (you only need to update the area that each explosion touches).
Or you could use a pixel shader to combine them - the details of which are beyond the scope of this question. (Also a pixel shader won't work on XNA 4.0 on Windows Phone 7.)
You should make a new Texture2D with the alpha of the desired pixels set to 0.
Color[] bits = new Color[texture.Width * texture.Height];
texture.GetData(bits);
foreach (Vector2 pixel in overlapedArea)
{
    int x = (int)(pixel.X);
    int y = (int)(pixel.Y);
    bits[x + y * texture.Width] = Color.FromNonPremultiplied(0, 0, 0, 0);
}
Texture2D newTexture = new Texture2D(texture.GraphicsDevice, texture.Width, texture.Height);
newTexture.SetData(bits);
Now replace the old texture with the new Texture2D and you're good to go!
For more code about collision, or about changing texture pixel colors, see this page:
http://www.codeproject.com/Articles/328894/XNA-Sprite-Class-with-useful-methods
