XNA 4.0 and an unsolvable (by me) curious depth rendering problem - xna

Big headache in XNA 4.0 concerning a depth problem:
I've already found many answers to similar problems, but none of them work for me...
The device is set like this:
xnaPanel1.Device.BlendState = BlendState.Opaque;
xnaPanel1.Device.DepthStencilState = DepthStencilState.Default;
xnaPanel1.Device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth24Stencil8;
[...]
Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, 4.0f / 3.0f, 0.1f, 1000f);
As a brute-force problem solver, I have tried most DepthStencilFormat and DepthStencilState possibilities... None of them works the way I want.
Concerning the projection matrix, I've tried many near-clip and far-clip values too (cube width: 10f), but can't get the correct result.
I've tested this with many different textures, all opaque.
I don't use a BasicEffect but a custom effect using a texture + normal map; could that be the source of the problem?
CubeEffect.fx
[...]
sampler2D colorMap = sampler_state
{
Texture = <colorMapTexture>;
MagFilter = Linear;
MinFilter = Anisotropic;
MipFilter = Linear;
MaxAnisotropy = 16;
};
sampler2D normalMap = sampler_state
{
Texture = <normalMapTexture>;
MagFilter = Linear;
MinFilter = Anisotropic;
MipFilter = Linear;
MaxAnisotropy = 16;
};
[...]
Edit: I tried with a BasicEffect and the problem is the same...
So... Thanks for any help ;)

OK, that's it:
pp.DepthStencilFormat = DepthFormat.Depth24Stencil8;
has to be set before the device creation call.
So I don't know at this time why this:
Device.PresentationParameters.DepthStencilFormat = DepthFormat.Depth24Stencil8;
previously called in my main Draw function, doesn't work...
Conclusions?
PresentationParameters pp = new PresentationParameters();
pp.IsFullScreen = false;
pp.BackBufferHeight = this.renderControl.Height;
pp.BackBufferWidth = this.renderControl.Width;
pp.DeviceWindowHandle = renderControl.Handle;
pp.DepthStencilFormat = DepthFormat.Depth24Stencil8;
this.graphicsDevice = new GraphicsDevice(GraphicsAdapter.DefaultAdapter, GraphicsProfile.HiDef, pp);
Now working fine!

PresentationParameters is a structure that defines how the device is created. You've already seen that when you create the graphics device you need to pass the structure in, which is only used for initial configuration.
The device stores the presentation parameters on it, but changing it does nothing unless you call Reset on the device, which will reinitialize the device to use whatever parameters you've changed. This is an expensive operation (so you won't want to do it very often).
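For illustration, a minimal sketch of what that looks like, assuming you keep a reference to the device you created and that a full reset is acceptable:
// Sketch only: the changed parameters only take effect when the device is reset with them
PresentationParameters pp = graphicsDevice.PresentationParameters.Clone();
pp.DepthStencilFormat = DepthFormat.Depth24Stencil8;
graphicsDevice.Reset(pp); // expensive: reinitializes the device with the new parameters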

Basically GraphicsDevice.PresentationParameters is an output - writing to it doesn't actually change the device state. It gets updated whenever the device is set up or reset.
Generally you will be setting up the GraphicsDevice using GraphicsDeviceManager - it handles setting up, resetting, and tearing down the device for you. It is part of the default XNA Game template project.
The correct way to modify states is to set the desired values on GraphicsDeviceManager. In your case, you can simply set PreferredDepthStencilFormat.
Once you do this, you need to either set up the device (i.e. specify your settings in your game constructor and XNA will do the rest) or reset the device by calling GraphicsDeviceManager.ApplyChanges - which you should usually only do in response to user input (and obviously not every frame). See this answer for details.
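A minimal sketch of that, assuming the default Game template where graphics is your GraphicsDeviceManager:
// In the Game constructor: preferred settings are picked up when the device is created
graphics = new GraphicsDeviceManager(this);
graphics.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;
// Later, only in response to a settings change (never every frame); illustrative values:
graphics.PreferredBackBufferWidth = 1280;
graphics.PreferredBackBufferHeight = 720;
graphics.ApplyChanges();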
You'll note that there are some presentation parameters that are not directly settable on the GraphicsDeviceManager. To change these you would have to attach an event handler to GraphicsDeviceManager.PreparingDeviceSettings. The event argument will give you access to a version of the presentation parameters you can usefully modify (e.GraphicsDeviceInformation.PresentationParameters) - the settings in there are what gets used when the graphics device is created.
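For example, a sketch of hooking that event (again assuming graphics is your GraphicsDeviceManager):
graphics.PreparingDeviceSettings += (sender, e) =>
{
    // These are the parameters the device will actually be created with,
    // so changes made here take effect.
    PresentationParameters pp = e.GraphicsDeviceInformation.PresentationParameters;
    pp.DepthStencilFormat = DepthFormat.Depth24Stencil8;
};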

Related

Metal Shader Debugging - Capture GPU Frame

I want to debug my Metal shader, but the "Capture GPU Frame" button is not visible or available in the debug menu.
My scheme was initially set up like this:
Capture GPU Frame: Automatically Enabled
Metal API Validation: Enabled
However, when I change the Capture GPU Frame option to Metal, I do see the capture button, but my app crashes when I'm trying to make the render command encoder:
commandBuffer.makeRenderCommandEncoder(descriptor: ...)
validateRenderPassDescriptor:644: failed assertion `Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)'
Question one: Why do I need to specify the usage? (It works in Automatically Enabled mode)
Question two: How do I specify the MTLTextureUsageRenderTarget?
Running betas; Xcode 10 and iOS 12.
With newer versions of Xcode you need to explicitly set MTLTextureDescriptor.usage; for my case (a render target) that looks like this:
textureDescriptor.usage = MTLTextureUsageRenderTarget|MTLTextureUsageShaderRead;
The above setting indicates that a texture can be used as a render target and that it could also be read afterwards by another shader. As the comment above mentioned, you may also want to set the framebufferOnly property; here is how I do that:
if (isCaptureRenderedTextureEnabled) {
mtkView.framebufferOnly = false;
}
Note that framebufferOnly is left at its default value of true in the optimized case (isCaptureRenderedTextureEnabled = false). Setting it to false makes it easy to inspect the data that will be rendered in the view (the output of the shader).
Specify usage purpose
textureDescriptor.usage = [.renderTarget , .shaderRead]
or
textureDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue)

Mixed topology (quad/tri) with ModelIO

I'm importing some simple OBJ assets using ModelIO like so:
let mdlAsset = MDLAsset(url: url, vertexDescriptor: nil, bufferAllocator: nil, preserveTopology: true, error: nil)
... and then adding them to a SceneKit SCN file. But whenever I have meshes that contain both quads and tris (often the case, for example with eyeball meshes), the resulting mesh is jumbled:
Incorrect mesh topology
Re-topologizing isn't a good option since I sometimes have low-poly meshes with very specific topology, so I can't just set preserveTopology to false... I need a result with variable topology (i.e. MDLGeometryType.variableTopology).
How do I import these files correctly preserving their original topology?
I reported this as a bug at Apple Bug Reporter on 25th of November, bug id: 35687088
Summary: SCNSceneSourceLoadingOptionPreserveOriginalTopology does not actually preserve the original topology. Instead, it converts the geometry to all quads, messing up the 3D model badly. Based on its name it should behave exactly like preserveTopology of Model IO asset loading.
Steps to Reproduce: Load an OBJ file that has both triangles and polygons using SCNSceneSourceLoadingOptionPreserveOriginalTopology and load the same file into an MDLMesh using preserveTopology of ModelIO. Notice how it only works properly for the latter. Even when you create a new SCNGeometry based on the MDLMesh, it will "quadify" the mesh again to contain only quads (while it should support 3-gons and up).
On December 13th I received a reply with a request for sample code and assets, which I supplied 2 days later. I have not received a reply since (hopefully just because they are busy catching up after the holiday season...).
As I mentioned in my bug report's summary, loading the asset with Model I/O does work properly, but then when you create a SCNNode based on that MDLMesh it ends up messing up the geometry again.
In my case the OBJ files I load have a known format, as they are always files also exported by my app (no normals, colors, or UVs). So what I do is load the information from the MDLMesh (buffers, face topology, etc.) manually into arrays, from which I then create an SCNGeometry manually. I don't have a complete, separate piece of code for that, as there is a lot of it and it's mixed with code specific to my app, and it's in Objective-C. But to illustrate:
NSError *scnsrcError;
MDLAsset *asset = [[MDLAsset alloc] initWithURL:objURL vertexDescriptor:nil bufferAllocator:nil preserveTopology:YES error:&scnsrcError];
NSLog(@"%@", scnsrcError.localizedDescription);
MDLMesh * newMesh = (MDLMesh *)[asset objectAtIndex:0];
for (MDLSubmesh *faces in newMesh.submeshes) {
//MDLSubmesh *faces = newMesh.submeshes.firstObject;
MDLMeshBufferData *topo = faces.topology.faceTopology;
MDLMeshBufferData *vertIx = faces.indexBuffer;
MDLMeshBufferData *verts = newMesh.vertexBuffers.firstObject;
int faceCount = (int)faces.topology.faceCount;
int8_t *faceIndexValues = malloc(faceCount * sizeof(int8_t));
memcpy(faceIndexValues, topo.data.bytes, faceCount * sizeof(int8_t));
int32_t *vertIndexValues = malloc(faces.indexCount * sizeof(int32_t));
memcpy(vertIndexValues, vertIx.data.bytes, faces.indexCount * sizeof(int32_t));
SCNVector3 *vertValues = malloc(newMesh.vertexCount * sizeof(SCNVector3));
memcpy(vertValues, verts.data.bytes, newMesh.vertexCount * sizeof(SCNVector3));
....
....
}
In short, the preserveTopology option in SceneKit isn't working properly. To get from the working version in Model I/O to SceneKit I basically had to write my own converter.

webgl replace program shader

I'm trying to swap the fragment shader used in a program. The fragment shaders all have the same variables, just different calculations. I am trying to provide alternative shaders for lower-level hardware.
I end up getting single-color output (instead of a texture); does anyone have an idea what I could be doing wrong? I know the shaders are being used, because the color changes accordingly.
//if I don't do this:
//WebGL: INVALID_OPERATION: attachShader: shader attachment already has shader
gl.detachShader(program, _.attachedFS);
//select a random shader, all using the same parameters
attachedFS = fragmentShaders[~~(Math.random()*fragmentShaders.length)];
//attach the new shader
gl.attachShader(program, attachedFS);
//if I don't do this nothing happens
gl.linkProgram(program);
//if I don't add this line:
//globject.js:313 WebGL: INVALID_OPERATION: uniform2f:
//location not for current program
updateLocations();
I am assuming you have called gl.compileShader(fragmentShader);
Have you tried to test the code on a different browser and see if you get the same behavior? (it could be standards implementation specific)
Have you tried deleting the fragment shader (gl.deleteShader(attachedFS);) right after detaching it? The previous shader may still be referenced in memory.
If this does not let you move forward, you may have to detach both shaders (vertex & frag) and reattach them, or even recreate the program from scratch.
I found the issue after trying just about everything else without result. It also explains why I was seeing the shader change but only getting a flat color: I was not updating some of the attributes. (After relinking, attribute locations can change just like uniform locations, so they need to be looked up and set up again.)

CVPixelBufferRef as a GPU Texture

I have one (or possibly two) CVPixelBufferRef objects I am processing on the CPU, and then placing the results onto a final CVPixelBufferRef. I would like to do this processing on the GPU using GLSL instead, because the CPU can barely keep up (these are frames of live video). I know this is possible "directly" (i.e. writing my own OpenGL code), but from the (absolutely impenetrable) sample code I've looked at, it's an insane amount of work.
Two options seem to be:
1) GPUImage: This is an awesome library, but I'm a little unclear on whether I can do what I want easily. The first thing I tried was requesting OpenGL ES-compatible pixel buffers using this code:
@{ (NSString *)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
(NSString *)kCVPixelBufferOpenGLESCompatibilityKey : [NSNumber numberWithBool:YES] };
Then transferring data from the CVPixelBufferRef to GPUImageRawDataInput as follows:
// setup:
_foreground = [[GPUImageRawDataInput alloc] initWithBytes:nil size:CGSizeMake(0,0) pixelFormat:GPUPixelFormatBGRA type:GPUPixelTypeUByte];
// call for each frame:
[_foreground updateDataFromBytes:CVPixelBufferGetBaseAddress(foregroundPixelBuffer)
size:CGSizeMake(CVPixelBufferGetWidth(foregroundPixelBuffer), CVPixelBufferGetHeight(foregroundPixelBuffer))];
However, my CPU usage goes from 7% to 27% on an iPhone 5S just with that line (no processing or anything). This suggests there's some copying going on on the CPU, or something else is wrong. Am I missing something?
2) OpenFrameworks: OF is commonly used for this type of thing, and OF projects can easily be set up to use GLSL. However, two questions remain about this solution: 1. Can I use openFrameworks as a library, or do I have to rejigger my whole app just to use the OpenGL features? I don't see any tutorials or docs that show how I might do this without starting from scratch and creating an OF app. 2. Is it possible to use a CVPixelBufferRef as a texture?
I am targeting iOS 7+.
I was able to get this to work using the GPUImageMovie class. If you look inside this class, you'll see that there's a private method called:
- (void)processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime
This method takes a CVPixelBufferRef as input.
To access this method, declare a class extension that exposes it inside your class:
@interface GPUImageMovie ()
-(void) processMovieFrame:(CVPixelBufferRef)movieFrame withSampleTime:(CMTime)currentSampleTime;
@end
Then initialize the class, set up the filter, and pass it your video frame:
GPUImageMovie *gpuMovie = [[GPUImageMovie alloc] initWithAsset:nil]; // <- call initWithAsset even though there's no asset
// to initialize internal data structures
// connect filters...
// Call the method we exposed
[gpuMovie processMovieFrame:myCVPixelBufferRef withSampleTime:kCMTimeZero];
One thing: you need to request your pixel buffers with kCVPixelFormatType_420YpCbCr8BiPlanarFullRange in order to match what the library expects.

How to debug WebGL uncaught type error

I'm getting
Uncaught TypeError: Type error
When I have the WebGL Inspector enabled (in Chrome), this error originates in a file that starts with 'VM' and ends in a sequence of digits (not sure what code owns that -- is it core browser behavior or the WebGL Inspector?). This is the line:
// Call real function
var result = originalFunction.apply(context.rawgl, arguments);
I enabled the debug context and am logging all WebGL calls. This is the call that breaks:
uniform1i(3, 0)
In the WebGL inspector, I see that the uniform at index 3 is my uniform sampler2D uSampler in my fragment shader. The API documentation says that this is a GLint, so the type is correct. I also tried setting some other uniforms first and they also fail with the same error.
I'm reworking some existing code I wrote after following tutorials, and one of the things I'm adding is interleaved vertex data. I'm sure that that is the root cause; however, this is the third time I've come across an error like this, and my only recourse has been to massage the code until it goes away. It feels random and it's frustrating.
Are there any more debugging techniques? I assume it's an error in the shaders. Is there some way to get a stack trace from them?
uniform1i(3, 0)
is not valid WebGL. The uniform functions require a WebGLUniformLocation object, which can only be obtained by calling gl.getUniformLocation.
This is different from OpenGL. The reason is that you are not allowed to do math on uniform locations. In OpenGL, developers often make that mistake. They'll do something like this:
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location = glGetUniformLocation(program, "arrayOfFloats");
glUniform1f(location, 123.0f);
glUniform1f(location + 1, 456.0f); // BAD!!!
That second line is not valid OpenGL but it might work depending on the driver.
In WebGL they wanted to make that type of mistake impossible, because web pages need to work everywhere, whereas OpenGL programs only need to work on the platform they are compiled on. To make it work everywhere, they made gl.getUniformLocation return an object so you can't do math on the result.
The correct way to write the code above in OpenGL is
--in shader--
uniform float arrayOfFloats[4];
--in code--
GLint location0 = glGetUniformLocation(program, "arrayOfFloats[0]");
GLint location1 = glGetUniformLocation(program, "arrayOfFloats[1]");
glUniform1f(location0, 123.0f);
glUniform1f(location1, 456.0f);
And in WebGL is
--in shader--
uniform float arrayOfFloats[4];
--in code--
var location0 = gl.getUniformLocation(program, "arrayOfFloats[0]");
var location1 = gl.getUniformLocation(program, "arrayOfFloats[1]");
gl.uniform1f(location0, 123.0);
gl.uniform1f(location1, 456.0);
