Constant parameters sfactor and dfactor of blendFunc - WebGL

Why is the following particular case not possible with the parameters of blendFunc(sfactor, dfactor)?
"If a constant color and a constant alpha value are used together as source and destination factors, a gl.INVALID_OPERATION error is thrown."
I wanted to do this particular case:
gl.blendColor(0.2, 0.5, 0.7, 0.8);
gl.blendFunc(gl.CONSTANT_ALPHA, gl.CONSTANT_COLOR);
// color(RGBA) = (sourceColor * CONSTANT_ALPHA) + (destination * CONSTANT_COLOR)
I think that means:
color(RGBA) = sourceColor * (0.8, 0.8, 0.8, 0.8) + destination * (0.2, 0.5, 0.7, 0.8)
But if this formula is correct, why does it generate an error?

Why does it generate an error message?
The "easy" answer is that "the spec requires it." Section 6.15 of the WebGL 1.0 spec says:
In the WebGL API, constant color and constant alpha cannot be used together as source and destination factors in the blend function. A call to blendFunc will generate an INVALID_OPERATION error if one of the two factors is set to CONSTANT_COLOR or ONE_MINUS_CONSTANT_COLOR and the other to CONSTANT_ALPHA or ONE_MINUS_CONSTANT_ALPHA.
So, even though you could mathematically define what would happen for this combination of blend factors, WebGL forbids it from being used. Now as to why that's the case: this GitHub issue (https://github.com/KhronosGroup/WebGL/issues/2938) explains that it's due to a difference between Direct3D and OpenGL. D3D doesn't allow constant alpha (only constant color), so ANGLE simulates constant alpha by using a color of (a,a,a,a). But if you are using the color slots to hold the alpha, you don't have any slots left over for the color. Hence the restriction on having both in use at the same time.
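To make the restriction concrete, here is a minimal sketch (assuming a valid WebGL context in gl) of which factor combinations pass and which one throws:
gl.blendColor(0.2, 0.5, 0.7, 0.8);
gl.blendFunc(gl.CONSTANT_COLOR, gl.ONE_MINUS_CONSTANT_COLOR); // OK: constant-color family only
gl.blendFunc(gl.CONSTANT_ALPHA, gl.ONE_MINUS_CONSTANT_ALPHA); // OK: constant-alpha family only
gl.blendFunc(gl.CONSTANT_COLOR, gl.ONE);                      // OK: constant + ordinary factor
gl.blendFunc(gl.CONSTANT_ALPHA, gl.CONSTANT_COLOR);           // INVALID_OPERATION: families mixed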

Events changing visual geometries

I'm trying to visualize collisions and other events, and am searching for the best way to update color or other visual properties after registration with RegisterVisualGeometry.
I've found the GeometryInstance class, which seems like a promising entry point for changing mutable illustration properties, but I have yet to find an example where an instance is retrieved from the plant (from a GeometryId obtained via something like GetVisualGeometriesForBody?) and its properties are changed.
As a basic example, I want to change the color of a box's visual geometry when two seconds have passed. I register the geometry pre-finalize with
// box : Body added to plant
// X_WA : Identity transform
// FLAGS_box_l : box side length
geometry::GeometryId box_visual_id = plant.RegisterVisualGeometry(
    box, X_WA,
    geometry::Box(FLAGS_box_l, FLAGS_box_l, FLAGS_box_l),
    "BoxVisualGeometry",
    Eigen::Vector4d(0.7, 0.5, 0, 1));
Then, I have a while loop to create a timed event at two seconds, where I would like the box to change its color.
double current_time = 0.0;
const double time_delta = 0.008;
bool changed = false;
while (current_time < FLAGS_duration) {
  if (current_time > 2.0 && !changed) {
    std::cout << "Change color for id " << box_visual_id.get_value() << "\n";
    // Change color of box using its GeometryId
    changed = true;
  }
  simulator.StepTo(current_time + time_delta);
  current_time = simulator_context.get_time();
}
Eventually I'd like to call something like this with a more specific trigger like proximity to another object, or velocity, but for now I'm not sure how I would register a simple visual geometry change.
Thanks for the details. This is sufficient for me to provide a meaningful answer about the current state of affairs as well as the future (both near- and far-term plans).
Taking your question as a representative example, changing a visual geometry's color can mean one of two things:
1. The color of the object changes in an "attached" visualizer (drake_visualizer being the prime example).
2. The color of the object changes in a simulated RGB camera (what is currently dev::RgbdCamera, but imminently RgbdSensor).
Depending on what other properties you might want to change mid simulation, there might be additional subtleties/nuances. But using the springboard above, here are the details:
A. Up until recently (drake PR 11796), changing properties after registration wasn't possible at all.
B. PR 11796 was the first step in enabling that. However, it only enables changing ProximityProperties. (ProximityProperties are associated with the role geometry plays in proximity queries -- contact, signed distance, etc.) A sketch of what this looks like follows this list.
C. Changing PerceptionProperties is a TODO in that PR and will follow in the next few months (a single-digit number, unless a more pressing need arises to bump it up in priority). (PerceptionProperties are associated with the properties geometry has in simulated sensors -- how it appears, etc.)
D. Changing IllustrationProperties is not supported and it is not clear what the best/right way to do so may be. (IllustrationProperties are what get fed to an external visualizer like drake_visualizer.) This is the trickiest, due to the way the LCM communication is currently articulated.
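For item B, here is a minimal sketch of what replacing ProximityProperties might look like. Treat it as an assumption-laden illustration, not code from the PR: names like source_id and box_collision_id are hypothetical, and the AssignRole overload and property names may differ across Drake versions.
// Hypothetical: swap in new ProximityProperties for an already-registered
// collision geometry; RoleAssign::kReplace is the mechanism that enables this.
geometry::ProximityProperties new_props;
new_props.AddProperty("material", "coulomb_friction",  // group/name assumed
                      multibody::CoulombFriction<double>(0.9, 0.8));
scene_graph.AssignRole(source_id, box_collision_id, new_props,
                       geometry::RoleAssign::kReplace);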
So, when we compare the possible meanings of changing an object's color (1 or 2, above) with the current and near-term state of the art (C and D, above), we draw the following conclusions:
In the near future, you should be able to change it in a synthesized RGB image.
No real plan for changing it in an external visualizer.
(Sorry, it seems the answer is more along the lines of "oops...you can't do that".)

What exactly is a constant buffer (cbuffer) used for in HLSL?

Currently I have this code in my vertex shader class:
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};
I don't know why I need to wrap those variables in a cbuffer. If I delete the buffer, my code works as well. I would really appreciate it if someone could give me a brief explanation of why cbuffers are necessary.
The reason it works either way is the legacy way constants were handled in Direct3D 8/Direct3D 9. Back then, there was only a single shared array of constants for the entire shader (one for the VS and one for the PS), which meant you had to update that constant array every single time you called Draw.
In Direct3D 10, constants were reorganized into one or more Constant Buffers to make it easier to update some constants while leaving others alone, and thus sending less data to the GPU.
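For example, a common split (illustrative, not from the original question) groups constants by how often they change, so that a per-object update only re-uploads the small buffer:
cbuffer PerFrame : register(b0)   // uploaded once per frame
{
    matrix viewMatrix;
    matrix projectionMatrix;
};

cbuffer PerObject : register(b1)  // re-uploaded for each object drawn
{
    matrix worldMatrix;
};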
See the classic presentation Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games for a lot of details on the impacts of constant update.
The upshot here is that if you don't specify cbuffer, all the constants get put into a single implicit constant buffer bound to register b0, emulating the old 'one constants array' behavior.
There are compiler flags to control the acceptance of legacy constructs: /Gec for backwards compatibility mode to support old Direct3D 8/9 intrinsics, and /Ges to enable a more strict compilation to weed out older constructs. That said, the HLSL compiler will pretty much always accept global constants without cbuffer and stick them into a single implicit constant buffer because this pattern is extremely common in shader code.

iOS Import .obj file to Model I/O without duplicating vertices

I'm trying to import a .obj file for use in SceneKit via the Model I/O framework. I initially used the simple MDLAsset initWithURL: initializer, but after transferring the mesh to an SCNGeometry, I realized this function was triangulating the mesh, such that each face had 3 unique vertices, and there were separate vertices at the same location for bordering faces. This was causing some major problems with my other functions, so I tried to fix it by instead using initWithURL:vertexDescriptor:bufferAllocator:preserveTopology: with preserveTopology set to YES and the descriptor/allocator left as the default (nil). Preserving topology fixed my problem of duplicated vertices, so the faces/edges were all good, but in the process I lost the normals data.
By "lost the normals," I don't mean multiple indexing; I mean that after setting preserveTopology to YES, the buffer did not contain any normals values at all. Whereas before it was v1/n1/v2/n2... with a stride of 24 bytes (3 dimensions * 4 bytes/float * 2 attributes), now the first half of the buffer is v1/v2/... with a stride of 12, and the entire second half of the buffer is just 0.0 floats.
Also, something weird with this: when you look at the SCNGeometrySources of the geometry, there are two sources, one with semantic kGeometrySourceSemanticVertex and one with semantic kGeometrySourceSemanticNormal. You would think that the vertex source would contain the position data and the normal source would contain the normal data. However, that is not the case. No matter what you set preserveTopology to, they are buffers sized to contain both position and normal data, with identical values. So when I said before there was no normal data, I mean both of these buffers, semantic vertex AND semantic normal, went from being v1/n1/v2/n2... to v1/v2/.../(0.0, 0.0, 0.0)/(0.0, 0.0, 0.0)/... I went into the MDLMesh's buffer (before the transfer to SceneKit) and found the same problem, so the problem must be with the initWithURL, not with the Model I/O to SceneKit bridge.
So I figured there must be something wrong with the default vertex descriptor and buffer allocator (since I was passing nil) and went about trying to create my own that matched these two possible data formats. Alas, after much trying, I was unable to get something that worked.
Any ideas on how I should do this? How to give MDLAsset the proper vertexDescriptor and bufferAllocator (I feel like nil should be ok here) for importing a .obj file? Thanks
An obj file with vertices and normals has vertices, indicated by v lines, normals, indicated by vn lines, and faces, indicated by f lines.
The v and vn lines will just be the floating point values you expect, and the f line will be of the form -
f v0//n0 v1//n1 etc
Since OpenGL and Metal don't allow multiple indexing, you'll see the first effect of vertices being duplicated. For example,
f 0//0 1//2 2//0
can't work as a vertex buffer because it would require different indices per vertex. So typical OBJ parsers have to create new vertices that allow the face to become
f 0//0 1//1 2//2
The preserveTopology option doesn't help you here. It preserves the connectivity and shape of the mesh (no triangulation occurs, shared edges remain shared), but it still enforces a single index stream rather than separate indices for positions and normals.
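For illustration, here is roughly the de-duplication a single-indexing importer has to perform on v//n pairs (a Swift sketch with made-up names, not the actual Model I/O implementation):
// Each unique (position index, normal index) pair becomes one output vertex.
struct VertexKey: Hashable { let v: Int; let n: Int }
var remap: [VertexKey: UInt32] = [:]
var outPositions: [SIMD3<Float>] = []
var outNormals: [SIMD3<Float>] = []
var outIndices: [UInt32] = []

func appendCorner(v: Int, n: Int, positions: [SIMD3<Float>], normals: [SIMD3<Float>]) {
    let key = VertexKey(v: v, n: n)
    if let existing = remap[key] {
        outIndices.append(existing)        // same position AND normal: reuse
    } else {
        let fresh = UInt32(outPositions.count)
        remap[key] = fresh
        outPositions.append(positions[v])  // position is duplicated whenever the normal differs
        outNormals.append(normals[n])
        outIndices.append(fresh)
    }
}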
One solution would be to make sure that your tool that is outputting the OBJ files uses single indexing during export, if that is an option.
Another option, though it won't solve the problem immediately, would be to file a request that multiple indexing be supported at the Model I/O level. SceneKit would still have to uniquely index, because it has to be able to render.
Another option would be to use a format like PLY that doesn't have multiple indexing.

How do I choose a pixel format when creating a new Texture2D?

I'm using the SharpDX Toolkit, and I'm trying to create a Texture2D programmatically, so I can manually specify all the pixel values. And I'm not sure what pixel format to create it with.
SharpDX doesn't even document the toolkit's PixelFormat type (they have documentation for another PixelFormat class but it's for WIC, not the toolkit). I did find the DirectX enum it wraps, DXGI_FORMAT, but its documentation doesn't give any useful guidance on how I would choose a format.
I'm used to plain old 32-bit bitmap formats with 8 bits per color channel plus 8-bit alpha, which is plenty good enough for me. So I'm guessing the simplest choices will be R8G8B8A8 or B8G8R8A8. Does it matter which I choose? Will they both be fully supported on all hardware?
And even once I've chosen one of those, I then need to further specify whether it's SInt, SNorm, Typeless, UInt, UNorm, or UNormSRgb. I don't need the sRGB colorspace. I don't understand what Typeless is supposed to be for. UInt seems like the simplest -- just a plain old unsigned byte -- but it turns out it doesn't work; I don't get an error, but my texture won't draw anything to the screen. UNorm works, but there's nothing in the documentation that explains why UInt doesn't. So now I'm paranoid that UNorm might not work on some other video card.
Here's the code I've got, if anyone wants to see it. Download the SharpDX full package, open the SharpDXToolkitSamples project, go to the SpriteBatchAndFont.WinRTXaml project, open the SpriteBatchAndFontGame class, and add code where indicated:
// Add new field to the class:
private Texture2D _newTexture;
// Add at the end of the LoadContent method:
_newTexture = Texture2D.New(GraphicsDevice, 8, 8, PixelFormat.R8G8B8A8.UNorm);
var colorData = new Color[_newTexture.Width * _newTexture.Height];
_newTexture.GetData(colorData);
for (var i = 0; i < colorData.Length; ++i)
    colorData[i] = (i % 3 == 0) ? Color.Red : Color.Transparent;
_newTexture.SetData(colorData);
// Add inside the Draw method, just before the call to spriteBatch.End():
spriteBatch.Draw(_newTexture, new Vector2(0, 0), Color.White);
This draws a small rectangle with diagonal lines in the top left of the screen. It works on the laptop I'm testing it on, but I have no idea how to know whether that means it's going to work everywhere, nor do I have any idea whether it's going to be the most performant.
What pixel format should I use to make sure my app will work on all hardware, and to get the best performance?
The formats in the SharpDX Toolkit map to the underlying DirectX/DXGI formats, so you can, as usual with Microsoft products, get your info from the MSDN:
DXGI_FORMAT enumeration (Windows)
32-bit textures are a common choice for most texture scenarios and have good performance even on older hardware. UNorm means, as already answered in the comments, "unsigned, normalized to the range of 0.0 .. 1.0" and is, again, a common way to access color data in textures.
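That distinction is also the likely reason the UInt variant drew nothing: UINT formats deliver raw integers to the shader and cannot be used with a filtering sampler, while the toolkit's sprite shader expects a float4 texture in the 0.0 .. 1.0 range. An illustrative HLSL comparison (hypothetical registers and entry point):
Texture2D<float4> texUnorm : register(t0);  // R8G8B8A8_UNorm: bytes arrive as 0.0 .. 1.0
Texture2D<uint4>  texUint  : register(t1);  // R8G8B8A8_UInt: raw 0 .. 255 integers
SamplerState linearSampler : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 a = texUnorm.Sample(linearSampler, uv);  // filtering allowed
    uint4 b = texUint.Load(int3(int2(pos.xy), 0));  // Load only, no filtering
    return a + (float4)b / 255.0;
}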
If you look at the Hardware Support for Direct3D 10Level9 Formats (Windows) page, you will see that DXGI_FORMAT_R8G8B8A8_UNORM as well as DXGI_FORMAT_B8G8R8A8_UNORM are supported on DirectX 9 hardware. You will not run into compatibility problems with either of them.
Performance depends on how your Device is initialized (RGBA/BGRA?) and on what hardware (i.e., supported DX feature level) and OS your software is running. You will have to run your own tests to find out (though with such common and similar formats the difference should be a single-digit percentage at most).

Sampling BC5_SNORM texture yields incorrect value range

I'm working with Direct3D 11, and I've come across something strange. I have taken a normal map and encoded it to a DDS file twice: once with R8G8B8A8_SNORM encoding, and once with BC5_SNORM.
Next I load each texture using D3DX11CreateShaderResourceViewFromFile in conjunction with D3DX11GetImageInfoFromFile. When I sample these textures in my pixel shader I find that the R8G8B8A8_SNORM texture is returning values in the range [-1,1], which is what I would expect for a SNORM texture. However, the BC5_SNORM texture is returning values in the range [0,1], which doesn't make any sense to me.
I double and triple checked with my debugger and PIX. The format of the texture is correct (BC5_SNORM), so I am at a loss as to why it's not returning signed values.
I managed to reproduce the same issue, and I also got the same behaviour when converting an R8G8B8A8_SNORM texture (with -1 to +1 values) to BC5_SNORM (producing only 0 to 1 values) through D3DX11LoadTextureFromTexture. There does appear to be a fault in D3DX11, at least regarding BC5_SNORM: regardless of the input format, the BC5_SNORM output is always in the 0 to 1 range.
As suggested by @chuckwalbourn, I can confirm that the DirectXTex utilities, which supersede the now-deprecated D3DX11, respect and correctly handle signed values for BC5_SNORM outputs.
You can either have your program write out a temporary .dds (using D3DX11SaveTextureToFile with a R8G8B8A8_SNORM texture) and then invoke the standalone DirectXTex texconv.exe utility to convert to BC5_SNORM, or wrangle the DirectXTex library into your program and use its conversion functions appropriately.
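A minimal sketch of the in-program DirectXTex route (for block-compressed targets such as BC5, the relevant DirectXTex call is Compress(); file names here are placeholders):
#include "DirectXTex.h"
using namespace DirectX;

ScratchImage src, bc5;
// Load the uncompressed R8G8B8A8_SNORM source.
HRESULT hr = LoadFromDDSFile(L"normals_snorm.dds", DDS_FLAGS_NONE, nullptr, src);
// Compress to BC5_SNORM; DirectXTex preserves the signed range here.
if (SUCCEEDED(hr))
    hr = Compress(src.GetImages(), src.GetImageCount(), src.GetMetadata(),
                  DXGI_FORMAT_BC5_SNORM, TEX_COMPRESS_DEFAULT,
                  TEX_THRESHOLD_DEFAULT, bc5);
if (SUCCEEDED(hr))
    hr = SaveToDDSFile(bc5.GetImages(), bc5.GetImageCount(), bc5.GetMetadata(),
                       DDS_FLAGS_NONE, L"normals_bc5_snorm.dds");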
