In the "OpenGL ES Programming Guide for iOS" documentation included with XCode, in the "Best Practices for Working with Vertex Data" section, under the heading "Use the Smallest Acceptable Types for Attributes", it says
Specify texture coordinates using 2 or 4 unsigned bytes (GL_UNSIGNED_BYTE) or unsigned short (GL_UNSIGNED_SHORT).
I'm a bit puzzled. I thought that texture coordinates would be < 1 most of the time, and so would require a float to represent fractional values. How do you use unsigned bytes or unsigned shorts? Do you divide by 255 in the shader?
If you use GL_UNSIGNED_BYTE you'll end up passing a normalized parameter of GL_TRUE to (for example) glVertexAttribPointer, indicating that the values you're passing are not already between 0 and 1, but should be normalized from the type's full range (e.g. 0 to 255) to the normalized range of 0 to 1 before being passed to your shader. (See Section 2.1.2 of the OpenGL ES 2.0 spec for more details.)
In other words, when you're passing integer types like unsigned byte, use a "normalized" value of GL_TRUE and use the full range of that type (such as 0 to 255), so a value of 127 would be approximately equivalent to floating point 0.5.
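For example, here is a minimal sketch (not from the Apple guide, just an illustration) of setting such an attribute up with glVertexAttribPointer; the struct layout, header path and attribute index are placeholders for whatever your code actually uses:

#include <OpenGLES/ES2/gl.h>  // iOS OpenGL ES 2.0 header
#include <stddef.h>           // offsetof

typedef struct {
    GLubyte texCoord[2];      // u, v stored as unsigned bytes (0-255)
    GLfloat position[3];
} Vertex;

void setupTexCoordAttrib(GLuint texCoordAttribIndex)
{
    // GL_TRUE for the "normalized" parameter tells GL to map 0..255 to 0.0..1.0
    // before the values reach the vertex shader.
    glVertexAttribPointer(texCoordAttribIndex,
                          2,                   // two components: u, v
                          GL_UNSIGNED_BYTE,    // one byte per component
                          GL_TRUE,             // normalize to [0, 1]
                          sizeof(Vertex),      // stride between consecutive vertices
                          (const GLvoid*)offsetof(Vertex, texCoord));
    glEnableVertexAttribArray(texCoordAttribIndex);
}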
Alright, so this has been bugging me for a while now, and I could not find anything on MSDN that goes into the specifics that I need.
This is more of a 3 part question, so here it goes:
1-) When creating the swap chain, applications specify a backbuffer pixel format, most often either B8G8R8A8 or R8G8B8A8. That gives 8 bits per color channel, so a total of 4 bytes is used per pixel... so why does the pixel shader have to return a color as a float4 when a float4 is actually 16 bytes?
2-) When binding textures to the pixel shader, my textures are in DXGI_FORMAT_B8G8R8A8_UNORM format, but why does the sampler need a float4 per pixel to work?
3-) Am I missing something here? Am I overthinking this, or what?
Please provide links to support your claim. Preferably from MSDN!!!!
GPUs are designed to perform calculations on 32bit floating point data, at least if they want to support D3D11. As of D3D10 you can also perform 32bit signed and unsigned integer operations. There's no requirement or language support for types smaller than 4 bytes in HLSL, so there's no "byte/char" or "short" for 1 and 2 byte integers or lower precision floating point.
Any DXGI formats that use the "FLOAT", "UNORM" or "SNORM" suffix are non-integer formats, while "UINT" and "SINT" are unsigned and signed integer. Any reads performed by the shader on the first three types will be provided to the shader as 32 bit floating point irrespective of whether the original format was 8 bit UNORM/SNORM or 10/11/16/32 bit floating point. Data in vertices is usually stored at a lower precision than full-fat 32bit floating point to save memory, but by the time it reaches the shader it has already been converted to 32bit float.
On output (to UAVs or Render Targets) the GPU compresses the "float" or "uint" data to whatever format the target was created with. If you try outputting float4(4.4, 5.5, 6.6, 10.1) to a target that is 8-bit normalised then it'll simply be clamped to (1.0, 1.0, 1.0, 1.0) and only consume 4 bytes per pixel.
So to answer your questions:
1) Because shaders only operate on 32 bit types, but the GPU will compress/truncate your output as necessary to be stored in the resource you currently have bound according to its type. It would be madness to have special keywords and types for every format that the GPU supported.
2) The "sampler" doesn't "need a float4 per pixel to work". I think you're mixing your terminology. The declaration that the texture is a Texture2D<float4> is really just stating that this texture has four components and is of a format that is not an integer format. "float" doesn't necessarily mean the source data is 32 bit float (or actually even floating point) but merely that the data has a fractional component to it (eg 0.54, 1.32). Equally, declaring a texture as Texture2D<uint4> doesn't mean that the source data is 32 bit unsigned necessarily, but more that it contains four components of unsigned integer data. However, the data will be returned to you and converted to 32 bit float or 32 bit integer for use inside the shader.
3) You're missing the fact that the GPU decompresses textures / vertex data on reads and compresses it again on writes. The amount of storage used for your vertices/texture data is only as much as the format that you create the resource in, and has nothing to do with the fact that the shader is operating on 32 bit floats / integers.
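To make that concrete, here's a rough C++ sketch (not from your code; the device pointer and sizes are assumed): the resource is created at 4 bytes per pixel, yet the shader still reads it as float4.

#include <d3d11.h>

void createSmallTexture(ID3D11Device* device)
{
    // Storage format: 8-bit UNORM per channel, i.e. 4 bytes per pixel in memory.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width = 256;
    desc.Height = 256;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* texture = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr, &texture);
    // In HLSL this is still declared as Texture2D<float4>: Sample() hands the shader
    // 32-bit floats in [0, 1], converted from the 8-bit UNORM storage on read.
}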
When I develop image processing programs using OpenCV, I usually see 'IPL_DEPTH_8U' or 'IPL_DEPTH_16U'.
But I don't know what that means.
What is the meaning of 'Depth' in the context of Image Processing?
Depth is the "precision" of each pixel. Typically it can be 8/24/32 bit for displaying, but any precision for computations.
Instead of precision you can also call it the data type of the pixel. The more bits per element, the more distinct colors or intensities can be represented.
Your examples mean:
8U : 8 bits per element (or 8 bits per channel if there are multiple channels) of unsigned integer type. So you can typically access elements as unsigned char values, because that's the 8-bit unsigned type.
16U : 16 bits per element => unsigned short is typically the 16-bit unsigned integer type on your system.
In OpenCV you typically have those types:
8UC3 : 8 bit unsigned and 3 channels => 24 bit per pixel in total.
8UC1 : 8 bit unsigned with a single channel
32S: 32 bit integer type => int
32F: 32 bit floating point => float
64F: 64 bit floating point => double
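To make those concrete, here's a small illustrative snippet using the C++ API (the image sizes are arbitrary):

#include <opencv2/core/core.hpp>

cv::Mat gray(480, 640, CV_8UC1);     // 8-bit unsigned, 1 channel  => 1 byte per pixel
cv::Mat color(480, 640, CV_8UC3);    // 8-bit unsigned, 3 channels => 3 bytes per pixel
cv::Mat floats(480, 640, CV_32FC1);  // 32-bit float, 1 channel    => 4 bytes per pixel

// The element type you read back has to match the depth:
unsigned char g = gray.at<unsigned char>(0, 0);  // 8U maps to unsigned char
cv::Vec3b     c = color.at<cv::Vec3b>(0, 0);     // three 8U channels
float         f = floats.at<float>(0, 0);        // 32F maps to float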
hope this helps
I code on XNA and only have access to shader model 3, hence no bitshift operators. I need to pack two arbitrary 16-bit floating point variables (meaning NOT in the range [0,1], but ANY FLOAT VALUE) into two 8-bit variables. There is no way to normalize them.
I thought about doing bitshifting manually but I can't find a good article on how to convert a random decimal float (not [0,1]) into binary and back.
Thanks
This is not really a good idea - a 16-bit float already has very limited range and precision. Remember that 8 bits leaves you with just 256 possible values!
Getting an 8-bit value into a shader is trivial. Passing it in as a colour is one method: you can use each channel as a normalised range from 0 to 1.
Of course, you say you don't want to normalise your values. So I assume you want to maintain the nice floating-point property of a wide range with better precision closer to zero.
(Now would be a good time to read some background info on floating-point. Especially about half-precision floating-point and minifloats and microfloats.)
One way to do that would be to encode your values using a logarithm and an exponent (to encode and decode, respectively). This is basically exactly what the floating-point format itself does. The exact maths will depend on the precision and the range that you desire (which 256 values will you represent?), so I will leave it as an exercise.
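As a rough illustration only (the range below is an arbitrary assumption, and negative values and zero are not handled), here is the idea in C++; the same maths can be ported to shader code:

#include <algorithm>
#include <cmath>

// Assumed representable range for positive values; pick whatever suits your data.
static const float kMin = 1.0f / 1024.0f;
static const float kMax = 1024.0f;

// Map [kMin, kMax] onto one byte using a logarithmic scale, so smaller values
// keep relatively more precision - the same trick floating point itself uses.
unsigned char encodeLog(float v)
{
    v = std::min(std::max(v, kMin), kMax);                 // clamp into range
    float t = std::log(v / kMin) / std::log(kMax / kMin);  // 0..1 on a log scale
    return (unsigned char)(t * 255.0f + 0.5f);             // quantise to 256 steps
}

float decodeLog(unsigned char b)
{
    float t = b / 255.0f;
    return kMin * std::pow(kMax / kMin, t);                // invert the mapping
}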
I have a 2D texture formatted as DXGI_FORMAT_R32_FLOAT. In my pixel shader I sample from it thusly:
float sample = texture.Sample(sampler, coordinates);
This results in the following compiler warning:
warning X3206: implicit truncation of vector type
I'm confused by this. Shouldn't Sample return a single channel, and therefore a scalar value, as opposed to a vector?
I'm using shader model 4 level 9_1.
Either declare your texture as having one channel, or specify which channel you want. Without the <float> bit, the compiler assumes it's a four-channel texture, so Sample will return a float4.
Texture2D<float> texture;
or
float sample = texture.Sample(sampler, coordinates).r;
The OpenCV SURF implementation returns a descriptor of 64 or 128 32-bit float values for each feature point found in the image. Is there a way to normalize these float values and take them to an integer scale (for example, [0, 255])? That would save significant space (1 or 2 bytes per value instead of 4). Besides, the conversion should ensure that the descriptors remain meaningful for other uses, such as clustering.
Thanks!
There are other feature extractors than SURF. The BRIEF extractor uses only 32 bytes per descriptor, with 32 unsigned bytes [0-255] as its elements. You can create one like this: Ptr<DescriptorExtractor> ptrExtractor = DescriptorExtractor::create("BRIEF");
Be aware that a lot of image processing routines in OpenCV need or assume that the data is stored as floating-point numbers.
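For example, a small sketch using the OpenCV 2.x feature2d API; `image` and `keypoints` are assumed to come from your existing detection step:

#include <opencv2/features2d/features2d.hpp>
#include <vector>

void computeBrief(const cv::Mat& image, std::vector<cv::KeyPoint>& keypoints, cv::Mat& descriptors)
{
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("BRIEF");
    extractor->compute(image, keypoints, descriptors);
    // descriptors is now a CV_8U matrix with 32 bytes per row (one row per keypoint).
}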
You can treat the float descriptors as an ordinary image (Mat or CvMat) and then use cv::normalize(). Another option is to find the range of the descriptor values (e.g. with cv::minMaxLoc()) and then use Mat::convertTo() to convert to CV_8U. Look up the OpenCV documentation for these functions.
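A rough sketch of both options, assuming `descriptors` is the CV_32F matrix produced by the SURF extractor:

#include <opencv2/core/core.hpp>

void quantizeDescriptors(const cv::Mat& descriptors, cv::Mat& descriptors8u)
{
    // Option 1: cv::normalize maps the min/max range to [0, 255] and changes the type.
    cv::normalize(descriptors, descriptors8u, 0, 255, cv::NORM_MINMAX, CV_8U);

    // Option 2 (equivalent alternative): find the range yourself and scale with convertTo.
    double minVal, maxVal;
    cv::minMaxLoc(descriptors, &minVal, &maxVal);
    descriptors.convertTo(descriptors8u, CV_8U,
                          255.0 / (maxVal - minVal),             // scale
                          -minVal * 255.0 / (maxVal - minVal));  // offset
}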
The descriptors returned by cv::SurfFeatureDetector are already normalized. You can verify this by taking the L2 norm of each row of the returned cv::Mat, or refer to the paper.
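A quick check, assuming `descriptors` is the CV_32F matrix of SURF descriptors (one row per keypoint):

#include <opencv2/core/core.hpp>

// Each SURF descriptor is L2-normalised, so the norm of any row should be close to 1.0.
double length = cv::norm(descriptors.row(0), cv::NORM_L2);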