getShaderPrecisionFormat return values - WebGL

I'm confused about the method getShaderPrecisionFormat: what it's used for and what it's telling me, because for me it always returns the exact same precision for all arguments; the only differences are between INT and FLOAT.
To be clear:
Calls with gl.FRAGMENT_SHADER and gl.VERTEX_SHADER in combination with gl.LOW_FLOAT, gl.MEDIUM_FLOAT and gl.HIGH_FLOAT always return
WebGLShaderPrecisionFormat { precision: 23, rangeMax: 127, rangeMin: 127 }
Calls with gl.FRAGMENT_SHADER and gl.VERTEX_SHADER in combination with gl.LOW_INT, gl.MEDIUM_INT and gl.HIGH_INT always return
WebGLShaderPrecisionFormat { precision: 0, rangeMax: 24, rangeMin: 24 }
I also experimented with supplying two additional arguments, "range" and "precision", but was unable to get any different results. I assume I made a mistake, but from the docs I'm unable to figure out on my own how to use it correctly.

It looks like you're using these calls correctly.
If you're running on a desktop/laptop, the result is not surprising. I would expect WebGL to be layered on top of a full OpenGL implementation on such systems. Even if these systems support ES 2.0, which mostly matches the WebGL feature level, that's most likely just a reduced API that ends up using the same underlying driver/GPU features as the full OpenGL implementation.
Full OpenGL does not really support precision qualifiers. It does have the keywords in GLSL, but that's just for source-code compatibility with OpenGL ES. In the words of the GLSL 4.50 spec:
Precision qualifiers are added for code portability with OpenGL ES, not for functionality. They have the same syntax as in OpenGL ES, as described below, but they have no semantic meaning, which includes no effect on the precision used to store or operate on variables.
It then goes on to define the use of IEEE 32-bit floats, which have the 23 bits of precision you are seeing from your calls.
You would most likely get a different result if you try the same thing on a mobile device, like a phone or tablet. Many mobile GPUs support 16-bit floats (aka "half floats"), and take advantage of them. Some of them can operate on half floats faster than they can on floats, and the reduced memory usage and bandwidth is beneficial even if the operations themselves are not faster. Reducing memory/bandwidth usage is critical on mobile devices to improve performance, as well as power efficiency.
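As for the two extra "range" and "precision" arguments mentioned in the question: those come from the underlying native API, where they are output parameters; the WebGL method takes only the shader type and precision type and packs the three values into the returned WebGLShaderPrecisionFormat object. A minimal C sketch of the native query (assuming an existing OpenGL ES 2.0 or desktop GL 4.1+ context and headers):
#include <stdio.h>
#include <GLES2/gl2.h>  /* or your platform's GL headers on a 4.1+ desktop context */
/* Prints the same three values WebGL returns as a WebGLShaderPrecisionFormat. */
static void printPrecision(GLenum shaderType, GLenum precisionType)
{
    GLint range[2];  /* log2 of the minimum and maximum representable magnitudes */
    GLint precision; /* bits of precision; 0 for the integer formats */
    glGetShaderPrecisionFormat(shaderType, precisionType, range, &precision);
    printf("rangeMin=%d rangeMax=%d precision=%d\n", range[0], range[1], precision);
}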

Related

What exactly is a constant buffer (cbuffer) used for in HLSL?

Currently I have this code in my vertex shader class:
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};
I don't know why I need to wrap those variables in a cbuffer. If I delete the buffer declaration, my code works just as well. I would really appreciate it if someone could give me a brief explanation of why using cbuffers is necessary.
The reason it works either way is due to the legacy way constants were handled in Direct3D 8/Direct3D 9. Back then, there was only a single shared array of constants for the entire shader (one for the VS and one for the PS). This meant you had to update the constant array every single time you called Draw.
In Direct3D 10, constants were reorganized into one or more Constant Buffers to make it easier to update some constants while leaving others alone, and thus sending less data to the GPU.
See the classic presentation Windows to Reality: Getting the Most out of Direct3D 10 Graphics in Your Games for a lot of detail on the cost of constant updates.
The upshot here is that if you don't specify cbuffer, all the constants get put into a single implicit constant buffer bound to register b0 to emulate the old 'one constants array' behavior.
There are compiler flags to control the acceptance of legacy constructs: /Gec for backwards compatibility mode to support old Direct3D 8/9 intrinsics, and /Ges to enable a more strict compilation to weed out older constructs. That said, the HLSL compiler will pretty much always accept global constants without cbuffer and stick them into a single implicit constant buffer because this pattern is extremely common in shader code.
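To illustrate the "update some constants while leaving others alone" idea, a common layout splits constants by update frequency (the buffer names and register slots below are just an example, not something HLSL requires):
// Updated once per frame.
cbuffer PerFrame : register(b0)
{
    matrix viewMatrix;
    matrix projectionMatrix;
};
// Updated for every object that is drawn.
cbuffer PerObject : register(b1)
{
    matrix worldMatrix;
};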

Fast way to swap endianness using OpenCL

I'm reading and writing lots of FITS and DNG images which may contain data of an endianness different from that of my platform and/or OpenCL device.
Currently I swap the byte order in the host's memory if necessary, which is very slow and requires an extra step.
Is there a fast way to pass a buffer of int/float/short with the wrong endianness to an OpenCL kernel?
Using an extra kernel run just for fixing the endianness would be OK; an overhead-free auto-fixing read/write operation would be perfect.
I know about the variable attribute __attribute__((endian(host))) / __attribute__((endian(device))), but this doesn't help with a big-endian FITS file on a little-endian platform using a little-endian device.
I thought about a solution like this one (neither implemented nor tested yet):
uchar4 mask = (uchar4)(3, 2, 1, 0);  // shuffle's mask must match the element size of the source vector
uchar4 swappedEndianness = shuffle(originalEndianness, mask);
// to be applied to a float/int buffer somehow
Hoping there's a better solution out there.
Thanks in advance,
runtimeterror
Sure. Since you have a uchar4, you can simply swizzle the components and write them back.
output[tid] = input[tid].wzyx;
Swizzling is also very cheap on SIMD architectures, so you should be able to combine it with other operations in your kernel.
Hope this helps!
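Wrapped in a complete kernel, that one-liner could look roughly like this (the kernel and buffer names are made up; each work-item reverses the bytes of one 32-bit element viewed as a uchar4):
__kernel void swap_endianness(__global const uchar4 *input,
                              __global uchar4 *output)
{
    size_t tid = get_global_id(0);
    output[tid] = input[tid].wzyx;  // 0xAABBCCDD -> 0xDDCCBBAA per element
}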
Most processor architectures perform best with operations that fit their register width, for example 32/64 bits. When a CPU/GPU performs a byte-wise operation such as the .wzyx subscript on a uchar4, it has to use a mask to extract each byte from the integer, shift the byte, and then merge the result with an integer add or or. For the endianness swap, the processor needs to perform this and/shift/or sequence 4 times, because there are 4 bytes.
The most efficient way is as follows:
#define EndianSwap(n) (rotate((n) & 0x00FF00FF, 24U) | (rotate((n), 8U) & 0x00FF00FF))
n can be of any 32-bit gentype, for example a uint4 variable. Because OpenCL C does not allow C++-style function overloading, a macro is the most practical choice.
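Applied to a buffer of 32-bit words, the macro might be used like this (the kernel and buffer names are made up; the macro is repeated so the snippet stands alone):
#define EndianSwap(n) (rotate((n) & 0x00FF00FF, 24U) | (rotate((n), 8U) & 0x00FF00FF))
__kernel void swap_endianness_rotate(__global uint *data)
{
    size_t tid = get_global_id(0);
    data[tid] = EndianSwap(data[tid]);  // in-place byte-order reversal
}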

False autovectorization in Intel C compiler (icc)

I need to vectorize some huge loops in a program with SSE. In order to save time I decided to let ICC deal with it. For that purpose, I prepare the data properly, taking alignment into account, and I make use of the compiler directives #pragma simd, #pragma vector aligned, and #pragma ivdep. When compiling with the various -vec-report options, the compiler tells me that the loops were vectorized. A quick look at the assembly generated by the compiler seems to confirm that, since you can find plenty of vector instructions there that work with packed single-precision operands (all operations in the serial code handle float operands).
The problem is that when I collect hardware counters with PAPI, the number of FP operations I get (PAPI_FP_INS and PAPI_FP_OPS) is pretty much the same in the auto-vectorized code and in the original one, when one would expect it to be significantly lower in the auto-vectorized code. What's more, I vectorized a simplified version of the problem by hand, and in that case I do get roughly 3 times fewer FP operations.
Has anyone experienced something similar with this?
Register spills may destroy the advantage of vectorization, so 64-bit mode may gain significantly over 32-bit mode, since it has more registers available. Also, icc may generate several versions of a loop, and you may be hitting a scalar version at runtime even though a vector version is present. icc versions issued in the last year or two have fixed some problems in this area.
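For reference, the kind of annotated loop the question describes looks roughly like this (the function and array names are illustrative; built as C99 with icc, with -vec-report then reporting whether the loop was vectorized):
void scale(float *restrict dst, const float *restrict src, float k, int n)
{
    #pragma vector aligned  /* promise to icc: dst and src are suitably aligned */
    #pragma ivdep           /* promise: no loop-carried dependencies            */
    #pragma simd            /* ask icc to vectorize this loop                   */
    for (int i = 0; i < n; ++i)
        dst[i] = k * src[i];
}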

Float comparison (equality) in CoreGraphics

Apple CoreGraphics.framework, CGGeometry.h:
CG_INLINE bool __CGSizeEqualToSize(CGSize size1, CGSize size2)
{
    return size1.width == size2.width && size1.height == size2.height;
}
#define CGSizeEqualToSize __CGSizeEqualToSize
Why do they (Apple) compare floats with ==? I can't believe this is a mistake. So can you explain it to me?
(I expected something like fabs(size1.width - size2.width) < 0.001.)
Floating-point comparisons are performed at native width on all OS X and iOS architectures.
For float, that comes to:
i386, x86_64: a 32-bit XMM register (or memory for the second operand), using instructions in the ucomiss family
ARM: a 32-bit register, using instructions in the vcmp family
Some of the floating-point comparison issues have been removed by restricting storage of these types to 32/64 bits. Other platforms may often use the native 80-bit FPU (x87 extended precision, for example). On OS X, SSE instructions are favored, and they use natural widths. That reduces many of the floating-point comparison issues.
But there is still room for error, or times when you will favor approximation. One hidden detail about CGGeometry types' values is that they may be rounded to a nearby integer (you may want to do this yourself in some cases).
Given the range of CGFloat (float on 32-bit, double on x86_64) and typical values, it's reasonable to assume the rounded values will generally be represented accurately enough that the results are suitably comparable in the majority of cases. Therefore, it's "pretty safe", "pretty accurate", and "pretty fast" within those confines.
There are still times when you may prefer approximate comparisons in geometry calculations, but Apple's implementation is what I'd consider the closest to a reference implementation for a general-purpose solution in this context.
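If you do want the tolerance-based comparison the question suggests, a sketch could look like this (the helper name and the explicit epsilon parameter are mine; the right tolerance depends entirely on your use case):
#include <math.h>
#include <stdbool.h>
#include <CoreGraphics/CGGeometry.h>
/* Hypothetical helper: approximate equality within a caller-chosen tolerance. */
static bool CGSizeNearlyEqualToSize(CGSize a, CGSize b, CGFloat epsilon)
{
    return fabs(a.width - b.width) <= epsilon &&
           fabs(a.height - b.height) <= epsilon;
}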

What's the deal with 17- and 40-bit math in TI DSPs?

The TMS320C55x has a 17-bit MAC unit and a 40-bit accumulator. Why the non-power-of-2-width units?
The 40-bit accumulator is common in a number of TI DSPs. The idea is basically that you can accumulate up to 256 arbitrary 32-bit products without overflow: the accumulator has 8 guard bits beyond a 32-bit product, and 2^8 = 256. (In C, by contrast, a running sum of 32-bit products can overflow fairly quickly unless you resort to 64-bit integers.)
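A rough portable-C picture of that headroom (the function name and fixed length are illustrative; on the DSP the extra 8 bits live in the 40-bit accumulator register rather than in a 64-bit variable):
#include <stdint.h>
/* Accumulate 256 products of 16-bit samples: each product fits in 32 bits,
   so the running sum needs at most 32 + 8 = 40 bits. */
int64_t mac256(const int16_t *a, const int16_t *b)
{
    int64_t acc = 0;
    for (int i = 0; i < 256; ++i)
        acc += (int32_t)a[i] * b[i];
    return acc;
}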
The only way to access these features is through assembly code or special compiler intrinsics. If you use regular C/C++ code, the accumulator is invisible; you can't get a pointer to it.
So there's no real need to adhere to a power-of-2 scheme; DSP cores have been carefully optimized for power/performance tradeoffs.
I may be talking through my hat here, but I'd expect to see the 17-bit stuff used to avoid the need for a separate carry bit when adding/subtracting 16-bit samples.
