WebGL texture with transparency mipmappable? Turns opaque when mipmapping turned on

The simple question is - is there any difference between gl.LINEAR_MIPMAP_NEAREST and gl.NEAREST_MIPMAP_LINEAR? I've used the first, with bad results (see below) and found the second on the web. Interestingly, both are defined (in Chrome), and I wonder what their difference is.
The real question is - If I have a texture atlas with transparency (containing glyphs), can I use mipmapping? When zooming to small sizes, the glyphs flicker, which I want to eliminate by mipmapping.
But when I turn on mipmapping (only changing the TEXTURE_MIN_FILTER from LINEAR to LINEAR_MIPMAP_NEAREST, and calling generateMipmap() afterwards), the transparency is completely gone and the entire texture turns black.
I understand that mipmapping may cause the black ink to bleed into the transparent areas, but not that it would fill the entire texture at all mipmap levels (including the original size).
What scrap of knowledge am I missing?

From the docs
GL_NEAREST
Returns the value of the texture element that is nearest (in Manhattan distance) to the center of the pixel being textured.
GL_LINEAR
Returns the weighted average of the four texture elements that are closest to the center of the pixel being textured.
GL_NEAREST_MIPMAP_NEAREST
Chooses the mipmap that most closely matches the size of the pixel being textured and uses the GL_NEAREST criterion (the texture element nearest to the center of the pixel) to produce a texture value.
GL_LINEAR_MIPMAP_NEAREST
Chooses the mipmap that most closely matches the size of the pixel being textured and uses the GL_LINEAR criterion (a weighted average of the four texture elements that are closest to the center of the pixel) to produce a texture value.
GL_NEAREST_MIPMAP_LINEAR
Chooses the two mipmaps that most closely match the size of the pixel being textured and uses the GL_NEAREST criterion (the texture element nearest to the center of the pixel) to produce a texture value from each mipmap. The final texture value is a weighted average of those two values.
GL_LINEAR_MIPMAP_LINEAR
Chooses the two mipmaps that most closely match the size of the pixel being textured and uses the GL_LINEAR criterion (a weighted average of the four texture elements that are closest to the center of the pixel) to produce a texture value from each mipmap. The final texture value is a weighted average of those two values.
As for why your stuff turns black, have you checked the JavaScript console for errors? The most likely reason is that your texture is not a power of 2 in both dimensions. If that's the case, trying to use mips by switching from gl.LINEAR to gl.LINEAR_MIPMAP_NEAREST will not work, because in WebGL mips are not supported for textures that are not a power of 2 in both dimensions.
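For reference, here is a minimal sketch (TypeScript) of the kind of setup described above, falling back to non-mipmapped filtering when the atlas is not a power of two; the image and context variables are assumed to come from your own loading code:

    function isPowerOf2(value: number): boolean {
      return value > 0 && (value & (value - 1)) === 0;
    }

    function setupGlyphTexture(gl: WebGLRenderingContext, image: HTMLImageElement): WebGLTexture {
      const texture = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, texture);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
      if (isPowerOf2(image.width) && isPowerOf2(image.height)) {
        // Power-of-two atlas: mipmapping is allowed.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_NEAREST);
        gl.generateMipmap(gl.TEXTURE_2D);
      } else {
        // Non-power-of-two texture: no mips, and wrapping must be CLAMP_TO_EDGE.
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      }
      return texture;
    }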

Related

What does the equalizer do in MeshLab's QualityMapperDialog?

In MeshLab, you can use the quality mapper to map qualities (scalar values) on your mesh points to specific colors. The QualityMapperDialog offers an equalizer function that has three bars and affects the histogram of the quality. What is this effect, and how does adjusting the three bars affect it?
MeshLab calls "quality" a scalar value you can attach to each vertex or face of the mesh. Some examples of qualities are:
Area of each triangle.
ShapeFactor of each triangle (how it differs from an equilateral triangle).
Euclidean distance from each vertex to a given point.
Geodesic distance from each vertex to a given point.
Distance from each vertex to nearest border.
In general, any measure you can compute per vertex/face. For example, you can create a vertex quality using the relative luminance formula (0.2126*R + 0.7152*G + 0.0722*B), as sketched below.
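A minimal sketch of that idea (TypeScript; the vertex type and field names are assumptions for illustration, not MeshLab's API):

    // Attach a per-vertex quality computed from vertex color via relative luminance.
    interface Vertex { r: number; g: number; b: number; quality?: number; }

    function luminanceQuality(vertices: Vertex[]): void {
      for (const v of vertices) {
        v.quality = 0.2126 * v.r + 0.7152 * v.g + 0.0722 * v.b;
      }
    }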
You can tell whether your mesh has a quality by checking if the info panel shows the flags "VQ" (Vertex Quality) or "FQ" (Face Quality). To represent all those quality values, MeshLab offers you several options:
A histogram, available in the option Render->Show Quality Histogram
Contour lines, available in the option Render->Show Quality Contour
A mapping from scalar quality values to RGB values on vertices/faces.
For example, this is the Pythagoras mesh with "Distance to border" quality represented both as histogram and RGB values on vertex.
The Edit->Quality Mapper dialog allows the user to control the RGB color assigned to each scalar value. There are two panels there:
The "Transfer Function" panel lets you choose the color (R,G,B values) assigned to each value inside your quality window of interest.
The "Equalizer" panel lets you define that quality window of interest. The three bars let you set the lower and upper quality values you are interested in, and also the main value of interest inside that interval on which you want to focus.
Here is the default window of interest, which maps all the quality values to the complete RGB ramp.
Here we use a window of interest for the qualities, so the ramp will map only the interval (25.43 .. 45.57).
Here we use the same window of interest, but focus on values around quality 43.00. Half of the ramp will be below value 43 and half of the ramp will be above 43. It may be easier to understand if you look at the "Gamma correction" graph... the input values are adapted to follow that red curve, which deforms the qualities so that 25.43 -> 0%, 45.57 -> 100% and 43.00 -> 50%.
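A minimal sketch of that remapping (TypeScript). MeshLab's exact equalizer curve is not documented here, so the power-curve form below is an assumption chosen only because it satisfies the three constraints (min -> 0, max -> 1, focus value -> 0.5):

    // Map a quality value into [0, 1] for the color ramp so that
    // minQ -> 0, maxQ -> 1 and midQ -> 0.5 (the three equalizer bars).
    function equalize(q: number, minQ: number, maxQ: number, midQ: number): number {
      const t = Math.min(Math.max((q - minQ) / (maxQ - minQ), 0), 1); // clamp to the window
      const tMid = (midQ - minQ) / (maxQ - minQ);
      const gamma = Math.log(0.5) / Math.log(tMid); // exponent chosen so tMid maps to 0.5
      return Math.pow(t, gamma);
    }

    // With the example's window (25.43 .. 45.57) and focus value 43.00:
    console.log(equalize(25.43, 25.43, 45.57, 43.0)); // 0
    console.log(equalize(43.00, 25.43, 45.57, 43.0)); // 0.5
    console.log(equalize(45.57, 25.43, 45.57, 43.0)); // 1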

Determining pixel coordinates across display resolutions

If a program displays a pixel at X,Y on a display with resolution A, can I precisely predict at what coordinates the same pixel will display at resolution B?
MORE INFORMATION
The 2 display resolutions are:
A-->1366 x 768
B-->1600 x 900
Dividing the max resolutions in each direction yields:
X-direction scaling factor = 1600/1366 = 1.171303075
Y-direction scaling factor = 900/768 = 1.171875
Say for example that the only red pixel on display A occurs at pixel (1,1). If I merely scale up using these factors, then on display B, that red pixel will be displayed at pixel (1.171303075, 1.171875). I'm not sure how to interpret that, as I'm used to thinking of pixels as integer values. It might help if I knew the exact geometry of pixel coordinates/placement on a screen. e.g., do pixel coordinates (1,1) mean that the center of the pixel is at (1,1)? Or a particular corner of the pixel is at (1,1)? I'm sure diagrams would assist in visualizing this--if anyone can post a link to helpful resources, I'd appreciate it. And finally, I may be approaching this all wrong.
Thanks in advance.
I think your problem is related to the field of scaling/resampling images. Bitmap, or raster, images are the form digital photographs take, so they are the most common way to represent natural images that are rich in detail. The term bitmap refers to how a given pattern (bits in a pixel) maps to a specific color. A bitmap image takes the form of an array, where the value of each element, called a pixel (picture element), corresponds to the color of that region of the image.
Sampling
When measuring the value for a pixel, one takes the average color of an area around the location of the pixel. A simplistic model is sampling a square; a more accurate measurement is to calculate a weighted Gaussian average. When perceiving a bitmap image, the human eye blends the pixel values together, recreating an illusion of the continuous image it represents.
Raster dimensions
The number of horizontal and vertical samples in the pixel grid is called the raster dimensions; it is specified as width x height.
Resolution
Resolution is a measurement of sampling density; the resolution of a bitmap image relates its pixel dimensions to its physical dimensions. The most commonly used measurement is ppi, pixels per inch.
Scaling / Resampling
Image scaling is the name of the process of creating an image with dimensions different from those of the one we have. A different name for scaling is resampling. When resampling, algorithms try to reconstruct the original continuous image and create a new sample grid. There are two kinds of scaling: up and down.
Scaling image down
The process of reducing the raster dimensions is called decimation; this can be done by averaging the values of the source pixels contributing to each output pixel, as in the sketch below.
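A minimal sketch of that averaging (TypeScript, assuming a grayscale image stored row-major in a Float32Array; the names are illustrative only):

    // Downscale by an integer factor, averaging each factor x factor block (box filter).
    function downscale(src: Float32Array, srcW: number, srcH: number, factor: number): Float32Array {
      const dstW = Math.floor(srcW / factor);
      const dstH = Math.floor(srcH / factor);
      const dst = new Float32Array(dstW * dstH);
      for (let y = 0; y < dstH; y++) {
        for (let x = 0; x < dstW; x++) {
          let sum = 0;
          for (let sy = 0; sy < factor; sy++) {
            for (let sx = 0; sx < factor; sx++) {
              sum += src[(y * factor + sy) * srcW + (x * factor + sx)];
            }
          }
          dst[y * dstW + x] = sum / (factor * factor); // average of the contributing source pixels
        }
      }
      return dst;
    }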
Scaling image up
When we increase the image size, we actually want to create sample points between the original sample points of the source raster; this is done by interpolating the values in the sample grid, effectively guessing the values of the unknown pixels. The interpolation can be nearest-neighbor, bilinear, bicubic, etc. The scaled image must, of course, also be represented on a discrete grid.
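A minimal sketch of bilinear upscaling (TypeScript, same grayscale layout as above). The pixel-center convention used here, where output pixel x is sampled at source position (x + 0.5) * srcW / dstW - 0.5, is a common choice rather than a universal rule:

    // Map each output pixel to a source position and bilinearly interpolate its value.
    function upscaleBilinear(src: Float32Array, srcW: number, srcH: number,
                             dstW: number, dstH: number): Float32Array {
      const dst = new Float32Array(dstW * dstH);
      for (let y = 0; y < dstH; y++) {
        for (let x = 0; x < dstW; x++) {
          let fx = (x + 0.5) * srcW / dstW - 0.5; // source position of this output pixel's center
          let fy = (y + 0.5) * srcH / dstH - 0.5;
          fx = Math.min(Math.max(fx, 0), srcW - 1); // clamp to the valid source range
          fy = Math.min(Math.max(fy, 0), srcH - 1);
          const x0 = Math.floor(fx), x1 = Math.min(srcW - 1, x0 + 1);
          const y0 = Math.floor(fy), y1 = Math.min(srcH - 1, y0 + 1);
          const tx = fx - x0, ty = fy - y0; // fractional weights
          const top = src[y0 * srcW + x0] * (1 - tx) + src[y0 * srcW + x1] * tx;
          const bottom = src[y1 * srcW + x0] * (1 - tx) + src[y1 * srcW + x1] * tx;
          dst[y * dstW + x] = top * (1 - ty) + bottom * ty;
        }
      }
      return dst;
    }

With the resolutions from the question, output pixel (0,0) at 1600x900 maps to roughly source position (-0.07, -0.07) at 1366x768, i.e. essentially the center of source pixel (0,0), which is one way to interpret the fractional coordinates you computed.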

Difference between Texture2D and Texture2DMS in DirectX11

I'm using SharpDX and I want to do antialiasing in the Depth buffer. I need to store the Depth Buffer as a texture to use it later. So is it a good idea if this texture is a Texture2DMS? Or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf)
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
For the second phase, the paper calls for doing multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them. Fortunately, there's a hardware-accelerated way to do this (called PCF) using SampleCmp with a COMPARISON sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add some epsilon value (e.g. 1e-5)), and returns the averaged result. Do 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of the PCF's smooth filtering behavior, as lines become visible, they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.
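For illustration only, here is a language-agnostic sketch of the averaging that the comparison sampling performs, written in TypeScript rather than HLSL (so it shows the math, not the actual SampleCmp intrinsic; the depth layout and names are assumptions, and the comparison direction assumes smaller depth means closer):

    // Emulate PCF-style averaging for one final pixel: compare each high-resolution depth
    // sample from the pre-pass against this pixel's depth (+ epsilon) and average the 0/1 results.
    function depthCoverage(hiResDepth: Float32Array, hiResWidth: number,
                           px: number, py: number, scale: number, // scale = 2N
                           pixelDepth: number, epsilon: number = 1e-5): number {
      let sum = 0;
      for (let sy = 0; sy < scale; sy++) {
        for (let sx = 0; sx < scale; sx++) {
          const d = hiResDepth[(py * scale + sy) * hiResWidth + (px * scale + sx)];
          sum += (pixelDepth <= d + epsilon) ? 1 : 0; // 1 = this sample passes the depth comparison
        }
      }
      return sum / (scale * scale); // fraction in [0, 1], analogous to the averaged SampleCmp results
    }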

Math behind Tex2D in HLSL

Could someone explain the math behind the function Tex2D in HLSL?
One of the examples is: given a quad with 4 vertices, the texture coordinates on it are (0,0) (0,1) (1,0) (1,1), and the texture's width and height are 640 and 480. How does the shader determine how many times to sample the texture? If it maps texels to pixels directly, does that mean the shader needs to perform 640*480 sampling operations, with the texture coordinates increasing in some kind of gradient? Also, I would appreciate it if you could provide more references and articles on this topic.
Thanks.
After the vertex shader, the rasterizer "converts" triangles into pixels. Each pixel is associated with a screen position, and the vertex attributes of the triangle (e.g. texture coordinates) are interpolated across the triangle; the interpolated value is stored in each pixel according to the pixel position.
The pixel shader runs once per pixel (in most cases).
The number of times the texture is sampled per pixel depends on the sampler used. If you use a point sampler the texture is sampled once, 4 times if you use a bilinear sampler and a few more if you use more complex samplers.
So if you're drawing a fullscreen quad, the texture you're sampling is the same size as the render target, and you're using a point sampler, the texture will be sampled width*height times (once per pixel).
You can think of a texture as a 2-dimensional array of texels. tex2D simply returns the texel at the requested position, performing some kind of interpolation depending on the sampler used (texture coordinates are usually relative to the texture size, so the hardware converts them to absolute texel coordinates).
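A rough sketch of that coordinate conversion for a point sampler (TypeScript pseudocode of the math, not actual HLSL; with the example's texture, width would be 640 and height 480). A bilinear sampler would instead blend the four texels nearest to this position, weighted by the fractional part:

    // Point (nearest) sampling: one texel read per tex2D call.
    function tex2DPoint(texels: Float32Array, width: number, height: number,
                        u: number, v: number): number {
      // Convert normalized coordinates (0..1) to absolute texel coordinates.
      const x = Math.min(Math.max(Math.floor(u * width), 0), width - 1);
      const y = Math.min(Math.max(Math.floor(v * height), 0), height - 1);
      return texels[y * width + x];
    }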
This link might be useful: Rasterization

How to deal with texture distortion caused by "squaring" textures, and the interactions with mipmapping?

Suppose I have a texture which is naturally not square (for example, a photographic texture of something with a 4:1 aspect ratio). And suppose that I want to use PVRTC compression to display this texture on an iOS device, which requires that the texture be square. If I scale up the texture so that it is square during compression, the result is a very blurry image when the texture is viewed from a distance.
I believe that this is caused by mipmapping. Since the mipmap filter sees the new, larger stretched dimension, it uses that to choose a low mip level, which is actually not correct, since those pixels were just stretched to that size. If it looked at the other dimension, it would choose a higher-resolution mip level.
This theory is confirmed (somewhat) by the observation that if I leave the texture in a format that doesn't have to be square, the mipmap versions look just dandy.
There is a LOD bias parameter, but the docs say it is applied to both dimensions. It seems like what is called for is a way to bias the LOD in only one dimension (that is, to bias it toward more resolution in the dimension of the texture which was scaled up).
Other than chopping up the geometry to allow the use of square subsets of the original texture (which is infeasible, given our production pipeline), does anyone have any clever hacks they've used to deal with this issue?
It seems to me that you have a few options, depending on what you can do with, say, the vertex UVs.
[Hmm Just realised that in the following I'm assuming that the V coordinates run from the top to the bottom... you'll need to allow for me being old school :-) ]
The first thing that comes to mind is to take your 4N*N (X*Y) source texture and repeat it 4x vertically to give a 4N*4N texture, and then adjust the V coordinates on the model to be 1/4 of their current values. This won't save you much in terms of memory (since it effectively means a 4bpp PVRTC becomes 4x larger) but it will still save bandwidth and cache space, since the other parts of the texture won't be accessed. MIP mapping will also work all the way down to 1x1 textures.
Alternatively, if you want to save a bit of space and you have a pair of 4N*N textures, you could try packing them together into a "sort of" 4N*4N atlas. Put the first texture in the top N rows, then follow it with the top N/2 rows of that first texture. Then pack the bottom N/2 rows of the 2nd texture, followed by the second texture itself, and then its top N/2 rows. Finally, add the bottom N/2 rows of the first texture. For the UVs that access the first texture, do the same divide by 4 for the V parameter. For the second texture, you'll need to divide by 4 and add 0.5.
This should work fine until the MIP map level is so small that the two textures are being blended together... but I doubt that will really be an issue.
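A tiny sketch of that V remapping (TypeScript; the atlas row layout is the one described above, and the function name is illustrative):

    // Remap a model's original V coordinate (0..1 within its own 4N*N texture)
    // into the packed 4N*4N atlas: texture 0 occupies the first quarter,
    // texture 1 the third quarter (offset by 0.5).
    function remapV(v: number, textureIndex: 0 | 1): number {
      return v / 4 + (textureIndex === 1 ? 0.5 : 0);
    }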
