WebGL Inspector does not display uniform values

I am new to shader programming and WebGL, and am trying to use the WebGL Inspector to debug a problem I am having. However, when I open the UI and select the shader program I wish to debug, no values show up for any of the uniforms, although the name, type, and size show up fine for every uniform.
Does anyone know what may be wrong? I tried to view the values both while my program was frozen and while it was running.
Thanks!

The only way to see the uniform values is to capture a frame and look in the trace tab. A capture covers an entire frame, so you may have to do some digging to find the right draw call. Click the "i" to the right of the draw call instructions to get a better summary of what was set for that particular draw call.
BTW: you cannot step through a shader program, unfortunately.

Related

Stage3D iOS Antialiasing on AIR 24

With the AIR 24 release we are now able to set anti-aliasing on Stage3D, but there are some issues with it. Can anybody explain how to use it correctly without changing the entire project's code?
The issue I have is that anti-aliasing itself works great, with no more jagged edges, but there are rendering problems: I suspect some texture normals are being inverted, and when using Occlusion Material there are some jagged material shadows.
The next thing I noticed is when drawing a wireframe globe with line segments: the lines are visible on the globe at all times, regardless of whether an object is placed in front of them.
So intersecting line segments with other materials doesn't work at all, and the lines stay on screen forever.
Please help if you know any trick for fixing these issues.
Thanks
Just to add some more information: the issue seems to happen when shareContext = true. Without Starling there is antialiasing, and the line segments are rendered at the correct depth. It would be interesting to see whether it works with a shared context other than Starling's, to isolate the issue. If I find an answer I will come back and post it; it would be nice to get this working. Any idea what the performance hit on mobile would be of running a second instance of Away3D? Layering that way might be a dirty workaround.
EDIT: Anti-aliasing on the line segments only occurs with a shared context. The View3D class does not seem to have its antiAlias value set anywhere, and when I forced it to a value of 2, all hell broke loose.
EDIT 2: Meshes appear above line segments; Sprite3Ds do not.

Strange rendering behavior with transparent texture in WebGL

I've been writing a little planet generator using Haxe + Away3D, deploying to HTML5/WebGL, but I'm having a strange issue when rendering my clouds. I have the planet mesh, and then a slightly larger cloud mesh at the same position.
I'm using a Perlin noise function to generate the planetary features and the cloud formations, writing them to a bitmap and applying the bitmap as the texture. Strangely, when I deploy this to iOS or C++/OSX, it renders exactly how I wanted it to:
Now, when I deploy to WebGL, it generates an identical diffuse map, but renders as:
(The above was at a much lower resolution, due to how often I was reloading the page. The problem persisted at higher resolutions.)
The clouds are there, and the edges look alright, wispy and translucent, but the inside is opaque and seems to be rendered differently (each pixel is the same color; only the alpha channel varies).
I realize this likely has something to do with how the code is ultimately compiled/generated by Haxe, but I'm hoping it's something simple like a render setting or blending mode I'm not setting. Since I'm not even sure exactly what is happening, I don't know where to look.
Here's the diffuse map being produced. I overlaid it on red so the clouds would be viewable.
BitmapData.perlinNoise does not work on the HTML5 target.
You will have to implement it yourself, or use a pre-rendered image. The current OpenFL implementation is just a stub:
public function perlinNoise (baseX:Float, baseY:Float, numOctaves:UInt, randomSeed:Int, stitch:Bool, fractalNoise:Bool, channelOptions:UInt = 7, grayScale:Bool = false, offsets:Array<Point> = null):Void {
    openfl.Lib.notImplemented ("BitmapData.perlinNoise");
}
https://github.com/openfl/openfl/blob/c072a98a3c6699f4d334dacd783be947db9cf63a/openfl/display/BitmapData.hx
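If you need a stand-in, here is a minimal sketch of classic 2D gradient ("Perlin-style") noise, written in C++ for clarity but straightforward to port to Haxe. It mirrors the randomSeed and numOctaves parameters, but makes no attempt to reproduce Flash's exact output, channel options, or stitching, so treat it as an assumption-laden sketch rather than a drop-in replacement.
#include <algorithm>
#include <cmath>
#include <random>

static int perm[512];

//Build a seeded permutation table, mirroring the randomSeed parameter
static void initPerm(unsigned seed) {
    int p[256];
    for (int i = 0; i < 256; ++i) p[i] = i;
    std::mt19937 rng(seed);
    std::shuffle(p, p + 256, rng);
    for (int i = 0; i < 512; ++i) perm[i] = p[i & 255];
}

static double fade(double t) { return t * t * t * (t * (t * 6 - 15) + 10); }
static double lerp(double t, double a, double b) { return a + t * (b - a); }

//Gradient directions (+-1, +-1), selected by the low two bits of the hash
static double grad(int hash, double x, double y) {
    return ((hash & 1) ? -x : x) + ((hash & 2) ? -y : y);
}

//Single octave of 2D gradient noise, output roughly in [-1, 1]
static double noise2d(double x, double y) {
    int X = (int)std::floor(x) & 255;
    int Y = (int)std::floor(y) & 255;
    x -= std::floor(x);
    y -= std::floor(y);
    double u = fade(x), v = fade(y);
    int a = perm[X] + Y, b = perm[X + 1] + Y;
    return lerp(v,
                lerp(u, grad(perm[a], x, y), grad(perm[b], x - 1, y)),
                lerp(u, grad(perm[a + 1], x, y - 1),
                        grad(perm[b + 1], x - 1, y - 1)));
}

//Sum octaves the way the numOctaves parameter does, halving the amplitude
//and doubling the frequency each octave
double fractalNoise2d(double x, double y, int numOctaves) {
    double sum = 0, amp = 1, freq = 1, norm = 0;
    for (int i = 0; i < numOctaves; ++i) {
        sum  += amp * noise2d(x * freq, y * freq);
        norm += amp;
        amp  *= 0.5;
        freq *= 2.0;
    }
    return sum / norm; //normalized back to roughly [-1, 1]
}
To fill a bitmap, call initPerm(randomSeed) once, then map fractalNoise2d(px / baseX, py / baseY, numOctaves) from roughly [-1, 1] into [0, 255] for each channel you need.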
Also, WebGL-Inspector is very useful for debugging WebGL apps. Have you used it?
http://benvanik.github.io/WebGL-Inspector/
Well then, did you upload that image from a ByteArray?
Lime once allowed accessing a ByteArray with the array index operator, even though that doesn't work on the JS target. This is fixed in the latest version of Lime to avoid mistakes.
I used the __get and __set methods instead of [] to access byte arrays.
Away3D itself might also be the cause of this issue, because the backend code is generated from different source files depending on the target.
For example, the byteArrayOffset parameter of Texture.uploadFromByteArray is supported on HTML5, but not on native targets.
If Away3D is the cause, I'm not sure yet which part of its code is responsible.
EDIT: I've also experienced a problem with OpenFL's latest WebGL backend (I think legacy OpenFL doesn't have this problem): OpenFL's sprite renderer was changing colorMask (and possibly other OpenGL render states) without my knowledge. This happened because my code and OpenFL's sprite renderer were actually using the same OpenGL context. I got rid of the problem by manually disabling OpenFL's sprite renderer.

Remember state of pixel in HLSL / DirectX 10

I have a little problem and I want to know whether there is a good way to resolve it.
I change many pixel colors in my application (a cellular automaton) on the GPU.
I swap render targets to get the current back buffer and then feed it to my pixel shader; the operation repeats each frame.
My problem is that I want to know whether a pixel changed in the last frame.
I know I can solve it by using one more render target (three in total) and storing my per-pixel data there, but I think that could cause a performance issue. Maybe there is some other way to do it? I am using DirectX 10.
Thanks for any help.
One simple, common way (I'm not sure whether it applies in your case): if you only use three channels for color, you can store this information in the alpha channel.
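For concreteness, here is a hedged sketch of what that could look like in the automaton's pixel shader, with the HLSL held in a C++ string as you might pass to D3DX10CompileFromMemory. StepAutomaton is a hypothetical placeholder for your actual update rule:
//Sketch: pack the "changed this frame" flag into the alpha channel of the
//ping-ponged render target, so no third render target is needed
const char* g_psSource = R"hlsl(
Texture2D    PrevState : register(t0);
SamplerState PointSamp : register(s0);

//Placeholder for your actual cellular-automaton rule
float3 StepAutomaton(float3 prev, float2 uv) { return prev; }

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 prev = PrevState.Sample(PointSamp, uv);
    float3 next = StepAutomaton(prev.rgb, uv);

    //prev.a already tells this pass whether the cell changed in the
    //previous frame; write the flag for the current frame into alpha
    float changed = any(abs(next - prev.rgb) > 1.0 / 512.0) ? 1.0 : 0.0;
    return float4(next, changed);
}
)hlsl";
Next frame, after the ping-pong swap, the shader reads the flag back out of prev.a; the trade-off is that this only works if the automaton doesn't need the alpha channel for anything else.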

Clear single viewport in DirectX 10

I am preparing to start on a C++ DirectX 10 application that will consist of multiple "panels" to display different types of information. I have had some success experimenting with multiple viewports on one RenderTargetView. However, I cannot find a definitive answer regarding how to clear a single viewport at a time. These panels (viewports) in my application will overlap in some areas, so I would like to be able to draw them from "bottom to top", clearing each viewport as I go so the drawing from lower panels doesn't show through on the higher ones. In DirectX 9, it seems that there was a Clear() method of the device object that would clear only the currently set viewport. DirectX 10 uses ClearRenderTargetView(), which clears the entire drawing area, and I cannot find any other option that is equivalent to the way DirectX 9 did it.
Is there a way in DirectX 10 to clear only a viewport/rectangle within the drawing area? One person speculated that the only way may be to draw a quad in that space. It seems that another possibility would be to have a separate RenderTargetView for each panel, but I would like to avoid that as it requires other redundant resources, such as a separate depth/stencil buffer for each (unless that is a misunderstanding on my part).
Any help will be greatly appreciated! Thanks!
I would recommend using one render target per "viewport", and compositing them together using quads for the final view. I know of no way to scissor a clear in DX 10.
Also, according to the article here, "An array of render-target views may be passed into ID3D10Device::OMSetRenderTargets, however all of those render-target views will correspond to a single depth stencil view."
Hope this helps.
Could you not just create a shader, together with the appropriate blend-state settings and a square mesh (or other shape of mesh), and use it to clear the area you want cleared? I haven't tried this, but I think it can be done.
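A hedged sketch of how that quad-based "clear" could be set up in D3D10; DrawColorQuad is a hypothetical helper that draws a full clip-space quad at the given depth in a constant color (it needs the usual shader and vertex-buffer boilerplate behind it):
//"Clear" one viewport of a shared render target by drawing a
//full-viewport quad
void ClearViewport(ID3D10Device* dev, const D3D10_VIEWPORT& vp,
                   const float color[4], float clearDepth)
{
    //Restrict rasterization to the panel being cleared
    dev->RSSetViewports(1, &vp);

    //Depth: always pass and write, so the depth buffer in this rectangle
    //is overwritten along with the color
    D3D10_DEPTH_STENCIL_DESC dsd = {};
    dsd.DepthEnable    = TRUE;
    dsd.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
    dsd.DepthFunc      = D3D10_COMPARISON_ALWAYS;
    ID3D10DepthStencilState* dss = NULL;
    dev->CreateDepthStencilState(&dsd, &dss); //cache this in real code
    dev->OMSetDepthStencilState(dss, 0);

    //Blending off: the quad's color replaces whatever was there
    dev->OMSetBlendState(NULL, NULL, 0xffffffff);

    //Full clip-space quad at z = clearDepth, in the clear color
    DrawColorQuad(dev, color, clearDepth); //hypothetical helper
    dss->Release();
}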

Antialiasing/Multisampling in D3D9

I'm writing a 3D modeling application in D3D9 that I'd like to make as broadly compatible as possible. This means avoiding hardware-dependent features such as multisampling. However, while the realtime render doesn't need to be flawless, I do need to provide nice-looking screen captures, which, without multisampling, look quite aliased and poor.
To produce my screen captures, I create a temporary surface in memory, render the scene to it once, then save it to a file. My first thought for achieving an antialiased capture was to create my off-screen depth/stencil surface as multisampled, but of course DX wouldn't allow that, since the device itself had been initialized with D3DMULTISAMPLE_NONE.
To start off, here's a sample of exactly how I create the screen capture. I know it'd be simpler to just save the backbuffer of an already-rendered frame; however, I need the ability to save images with dimensions different from the actual render window, which is why I do it this way. Error checking, code for restoring state, and resource releases are omitted here for brevity. m_d3ddev is my LPDIRECT3DDEVICE9.
//Get the current pp
LPDIRECT3DSWAPCHAIN9 sc;
D3DPRESENT_PARAMETERS pp;
m_d3ddev->GetSwapChain(0, &sc);
sc->GetPresentParameters(&pp);
//Create a new surface to which we'll render
LPDIRECT3DSURFACE9 ScreenShotSurface = NULL;
LPDIRECT3DSURFACE9 newDepthStencil = NULL;
LPDIRECT3DTEXTURE9 pRenderTexture = NULL;
m_d3ddev->CreateDepthStencilSurface(_Width, _Height, pp.AutoDepthStencilFormat, pp.MultiSampleType, pp.MultiSampleQuality, FALSE, &newDepthStencil, NULL );
m_d3ddev->SetDepthStencilSurface( newDepthStencil );
m_d3ddev->CreateTexture(_Width, _Height, 1, D3DUSAGE_RENDERTARGET, pp.BackBufferFormat, D3DPOOL_DEFAULT, &pRenderTexture, NULL);
pRenderTexture->GetSurfaceLevel(0,&ScreenShotSurface);
//Render the scene to the new surface
m_d3ddev->SetRenderTarget(0, ScreenShotSurface);
RenderFrame();
//Save the surface to a file
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, ScreenShotSurface, NULL, NULL);
You can see the call to CreateDepthStencilSurface(), which is where I was hoping I could replace pp.MultiSampleType with, e.g., D3DMULTISAMPLE_4_SAMPLES, but that didn't work.
My next thought was to create an entirely different LPDIRECT3DDEVICE9 as a D3DDEVTYPE_REF device, which always supports D3DMULTISAMPLE_4_SAMPLES (regardless of the video card). However, all of my resources (meshes, textures) have been loaded into m_d3ddev, my HAL device, so I couldn't use them for rendering the scene under the REF device. Note that resources can be shared between devices under Direct3D9Ex (Vista), but I'm working on XP. Since there are quite a lot of resources, reloading everything to render this one frame and then unloading it all is too time-inefficient for my application.
I looked at other options for antialiasing the image post-capture (e.g. a 3x3 blur filter), but they all generated pretty poor results, so I'd really like to get an antialiased scene straight out of D3D if possible...
Any wisdom or pointers would be GREATLY appreciated...
Thanks!
Supersampling by either rendering to a larger buffer and scaling down or combining jittered buffers is probably your best bet. Combining multiple jittered buffers should give you the best quality for a given number of samples (better than the regular grid from simply rendering an equivalent number of samples at a multiple of the resolution and scaling down) but has the extra overhead of multiple rendering passes. It has the advantage of not being limited by the maximum supported size of your render target though and allows you to choose pretty much an arbitrary level of AA (though you'll have to watch out for precision issues if combining many jittered buffers).
The article "Antialiasing with Accumulation Buffer" at opengl.org describes how to modify your projection matrix for jittered sampling (OpenGL but the math is basically the same). The paper "Interleaved Sampling" by Alexander Keller and Wolfgang Heidrich talks about an extension of the technique that gives you a better sampling pattern at the expense of even more rendering passes. Sorry about not providing links - as a new user I can only post one link per answer. Google should find them for you.
If you want to go the route of rendering to a larger buffer and down sampling but don't want to be limited by the maximum allowed render target size then you can generate a tiled image using off center projection matrices as described here.
You could always render to a texture that is twice the width and height (i.e. 4x the size) and then supersample it down.
Admittedly you'd still get problems if the card can't create a texture 4x the size of the back buffer ...
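A hedged sketch of that approach, reusing the names from the capture code in the question (error checking and Release() calls omitted there too; the 2x target also needs a matching 2x depth/stencil surface, created as in the original snippet):
//Render the scene at 2x size, then downsample with a filtered StretchRect
LPDIRECT3DTEXTURE9 bigTex = NULL, outTex = NULL;
LPDIRECT3DSURFACE9 bigSurf = NULL, outSurf = NULL;
m_d3ddev->CreateTexture(_Width * 2, _Height * 2, 1, D3DUSAGE_RENDERTARGET,
                        pp.BackBufferFormat, D3DPOOL_DEFAULT, &bigTex, NULL);
bigTex->GetSurfaceLevel(0, &bigSurf);
m_d3ddev->CreateTexture(_Width, _Height, 1, D3DUSAGE_RENDERTARGET,
                        pp.BackBufferFormat, D3DPOOL_DEFAULT, &outTex, NULL);
outTex->GetSurfaceLevel(0, &outSurf);
m_d3ddev->SetRenderTarget(0, bigSurf);
RenderFrame();
//Filtered downsample; linear filtering for RT-to-RT stretches is
//hardware-dependent, so check StretchRectFilterCaps before relying on it
m_d3ddev->StretchRect(bigSurf, NULL, outSurf, NULL, D3DTEXF_LINEAR);
D3DXSaveSurfaceToFile(_OutFile, D3DXIFF_JPG, outSurf, NULL, NULL);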
Edit: There is another way that comes to mind.
If you render the frame n times with tiny sub-pixel jitters applied to the projection matrix, you can generate as many images as you like, then average them together afterwards to form a very highly anti-aliased image. The bonus is that it will work on any machine that can render the image. It is, obviously, slower. Still, 256x AA really does look good when you do this!
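A hedged sketch of that idea against the capture code from the question. Assumptions: baseProj is your normal perspective projection, the pipeline picks the projection up from SetTransform (with shaders you would patch your projection constant instead), rtSurf is the render-target surface being drawn to, sysSurf is a same-size D3DPOOL_SYSTEMMEM offscreen plain surface for readback, and the format is 32-bit (4 bytes per pixel):
const int GRID = 4; //4x4 = 16 jittered sub-pixel samples
std::vector<float> accum(_Width * _Height * 4, 0.0f);
for (int s = 0; s < GRID * GRID; ++s) {
    //sub-pixel offsets in (-0.5, 0.5)
    float jx = ((s % GRID) + 0.5f) / GRID - 0.5f;
    float jy = ((s / GRID) + 0.5f) / GRID - 0.5f;
    //For a standard perspective projection (w == view-space z), adding to
    //_31/_32 shifts the image by a constant NDC amount after the divide;
    //2/width of NDC is one pixel
    D3DXMATRIX proj = baseProj;
    proj._31 += 2.0f * jx / _Width;
    proj._32 += 2.0f * jy / _Height;
    m_d3ddev->SetTransform(D3DTS_PROJECTION, &proj);
    RenderFrame();
    //Read the jittered frame back and accumulate it on the CPU
    m_d3ddev->GetRenderTargetData(rtSurf, sysSurf);
    D3DLOCKED_RECT lr;
    sysSurf->LockRect(&lr, NULL, D3DLOCK_READONLY);
    const BYTE* src = (const BYTE*)lr.pBits;
    for (int y = 0; y < _Height; ++y)
        for (int x = 0; x < _Width * 4; ++x)
            accum[y * _Width * 4 + x] += src[y * lr.Pitch + x];
    sysSurf->UnlockRect();
}
//Divide accum by GRID * GRID and write the result out to save the image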
This article http://msdn.microsoft.com/en-us/library/bb172266(VS.85).aspx seems to imply that you can use the render state flag D3DRS_MULTISAMPLEANTIALIAS to control this. Could you create your device with antialiasing enabled, but turn it off for on-screen rendering and back on for your offscreen rendering using this render state flag?
I've not tried this myself though.
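For reference, the toggle itself would be a one-liner each way; a hedged, untested sketch (the device and swap chain must have been created with a multisample type for it to have any effect):
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, FALSE); //realtime view
//...normal on-screen rendering...
m_d3ddev->SetRenderState(D3DRS_MULTISAMPLEANTIALIAS, TRUE);  //capture pass
//...render to the multisampled target; note a multisampled surface must be
//resolved (e.g. via StretchRect) before D3DXSaveSurfaceToFile can read it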
