Interpolation in texture3D in OpenGL ES 3.0 on iOS

I am passing a GL_TEXTURE_3D to the fragment shader in an iOS application, and I am using GL_LINEAR for both the minification and magnification filters of this texture. However, the resulting texture rendered in the app shows blocks instead of a smooth transition of colors, which implies that it is using GL_NEAREST interpolation.
Here are the screenshots of the expected vs. received output image.
PS: If I instead pass in the 3D texture as a flattened 2D texture and do the interpolation manually in the shader, it works fine and I get the expected output.
Here is the code for setting up GL_LINEAR:
GLenum target, minificationFilter, magnificationFilter;
target = GL_TEXTURE_3D;
minificationFilter = GL_LINEAR;
magnificationFilter = GL_LINEAR;
glTexParameteri(target, GL_TEXTURE_MIN_FILTER, minificationFilter);
glTexParameteri(target, GL_TEXTURE_MAG_FILTER, magnificationFilter);

Linear filtering of textures with internal format GL_RGBA32F is not supported in ES 3.0.
You can see which formats support linear filtering in table 3.13 of the ES 3.0 spec document, on pages 130-132. The last column, with header "Texture-filterable", indicates which formats support filtering. RGBA32F does not have a checkmark in that column.
If you need linear filtering for float textures, you're limited to 16-bit component floats. RGBA16F in the same table has the checkmark in the last column.
This limitation is still in place in the latest ES 3.2 spec.
There is an extension to lift this limitation: OES_texture_float_linear. However, this extension is not listed under the supported extensions on iOS.
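If half-float precision is acceptable, a minimal sketch of an ES 3.0 setup that is allowed to filter linearly might look like this (width, height, depth and halfFloatData are placeholders; the client data must already be converted to 16-bit half floats):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_3D, tex);

// Immutable storage, one mip level, filterable half-float format.
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA16F, width, height, depth);
glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, width, height, depth,
                GL_RGBA, GL_HALF_FLOAT, halfFloatData);

// Linear filtering is legal for GL_RGBA16F, unlike GL_RGBA32F.
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);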

If you switch from the single precision float format GL_RGBA32F to the half-float format GL_RGBA16F then GL_LINEAR magnification works fine.
I can't find any documentation to suggest why this shouldn't work, and the only limitation on single precision float textures seems to be when used as render targets, so I guess this is a bug to be filed under "GL_RGBA32F ignores GL_LINEAR magnification on iOS 9".
If it genuinely is a bug, then be understanding - I imagine an OpenGLES 3 implementation to be one of the largest, most awful switch-statement-driven pieces of code that one could possibly have the misfortune to work on. If you consider that whatever glamour the job might have entailed previously has since been sucked out by the release of the faster, sexier and legacy-free Metal then you're probably talking about a very unloved codebase, maintained by some very unhappy people. You're lucky flat shaded triangles even work.
P.S. When using GL_TEXTURE_3D, don't forget to clamp the third coordinate as well (GL_TEXTURE_WRAP_R).
P.P.S. Test this on a device; neither GL_RGBA32F nor GL_RGBA16F seems to work with GL_LINEAR on the simulator.
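Following the P.S. above, clamping all three coordinates of the bound 3D texture is one line per axis (a sketch; the texture is assumed to already be bound to GL_TEXTURE_3D):
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE); // the third coordinate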

Related

WebGL: Upload Texture Data to the GPU Without a Draw Call

I'm using WebGL to do YUV-to-RGB conversion for a custom video codec.
The video has to play at 30 fps. In order to make this happen I'm doing all my math every other requestAnimationFrame.
This works great, but I noticed when profiling that uploading the textures to the GPU takes the longest amount of time.
So I uploaded the "Y" texture and the "UV" texture separately.
Now the first "requestAnimationFrame" will upload the "Y" texture like this:
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0, gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);
The second "requestAnimationFrame" will upload the "UV" texture in the same way, and make a draw call to the fragment shader doing the math between them.
But this doesn't change anything in the profiler. I still see nearly 0 GPU time on the frame that uploads the "Y" texture, and the same amount of time as before on the frame that uploads the "UV" texture.
However, if I add a draw call to my "Y" texture upload function, then the profiler shows the expected results: every frame has nearly half the GPU time.
From this I'm guessing the Y texture isn't really uploaded to the GPU by the texImage2D call alone.
However, I don't really want to draw the Y texture on the screen, as it doesn't have the correct UV texture to do anything with until a frame later. So is there any way to force the GPU to upload this texture without performing a draw call?
Update
I misunderstood the question.
It really depends on the driver. The problem is OpenGL/OpenGL ES/WebGL's texture API really sucks. Sucks is a technical term for 'has unintended consequences'.
The issue is that the driver can't really fully upload the data until you draw, because it doesn't know what you're going to change. You could change any of the mip levels, in any order and to any size, and then fix them all up later, so until you draw the driver has no idea which other functions you're going to call to manipulate the texture.
Consider that you create a 4x4 level 0 mip:
gl.texImage2D(
   gl.TEXTURE_2D,
   0,        // mip level
   gl.RGBA,  // internal format
   4,        // width
   4,        // height
   ...);
What memory should it allocate? 4 (width) * 4 (height) * 4 (rgba)? But what if you call gl.generateMipmap? Now it needs 4*4*4 + 2*2*4 + 1*1*4. Ok, but now you allocate an 8x8 mip on level 3. You intend to then replace levels 0 to 2 with 64x64, 32x32, and 16x16 respectively, but you did level 3 first. What should it do when you replace level 3 before replacing the levels above it? You then add in level 4 as 4x4, level 5 as 2x2, and level 6 as 1x1.
As you can see the API lets you change mips in any order. In fact I could allocate level 7 as 723x234 and then fix it later. The API is designed to not care until draw time when all the mips must be the correct size at which point they can finally allocate memory on the GPU and copy the mips in.
You can see a demonstration and test of this issue here. The test uploads mips out of order to verify that WebGL implementations correctly fail when they are not all the correct size and correctly start working once they are the correct sizes.
You can see this was arguably a bad API design.
They added gl.texStorage2D to fix it, but gl.texStorage2D is not available in WebGL1, only in WebGL2. gl.texStorage2D has new issues of its own though :(
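For reference, a minimal WebGL2 sketch of the gl.texStorage2D path, where the size, format and mip count are fixed up front so the driver knows exactly what to allocate (reusing yTextureRef, textureWidth, textureHeight and yData from the question):
// WebGL2 only: allocate immutable storage, one mip level, up front.
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.R8, textureWidth, textureHeight);

// The size and format can no longer change, only the contents.
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                 gl.RED, gl.UNSIGNED_BYTE, yData);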
TLDR; textures get uploaded to the driver when you call gl.texImage2D but the driver can't upload to the GPU until draw time.
Possible solution: use gl.texSubImage2D. Since it does not allocate memory, it's possible the driver could upload sooner. I suspect most drivers don't, because you can still use gl.texSubImage2D before drawing, but it's worth a try.
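A sketch of that idea for WebGL1, again with the names from the question: allocate the full texture once with a null upload, then only use gl.texSubImage2D per frame:
// One-time setup: allocate the full size with no data.
gl.bindTexture(gl.TEXTURE_2D, yTextureRef);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, textureWidth, textureHeight, 0,
              gl.LUMINANCE, gl.UNSIGNED_BYTE, null);

// Per frame: replace the contents only; no reallocation is implied.
gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, textureWidth, textureHeight,
                 gl.LUMINANCE, gl.UNSIGNED_BYTE, yData);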
Let me also add that gl.LUMINANCE might be a bottleneck as well. IIRC DirectX doesn't have a corresponding format and neither does OpenGL Core Profile. Both support a RED-only format but WebGL1 does not, so LUMINANCE has to be emulated by expanding the data on upload.
Old Answer
Unfortunately there is no way to upload video to WebGL except via texImage2D and texSubImage2D.
Some browsers try to make that happen faster. I notice you're using gl.LUMINANCE. You might try using gl.RGB or gl.RGBA and see if things speed up. It's possible browsers only optimize for the more common case. On the other hand it's possible they don't optimize at all.
Two extensions that would allow using video without a copy have been proposed, but AFAIK no browser has ever implemented them:
WEBGL_video_texture
WEBGL_texture_source_iframe
It's actually a much harder problem than it sounds like.
Video data can be in various formats. You mentioned YUV but there are others. Should the browser tell the app the format or should the browser convert to a standard format?
The problem with telling is lots of devs will get it wrong then a user will provide a video that is in a format they don't support
The WEBGL_video_texture extension converts to a standard format by rewriting your shaders. You tell it uniform samplerVideoWEBGL video and then it knows it can rewrite your color = texture2D(video, uv) to color = convertFromVideoFormatToRGB(texture(video, uv)). It also means the browser would have to rewrite shaders on the fly if you play videos in different formats.
Synchronization
It sounds great to get the video data to WebGL but now you have the issue that by the time you get the data and render it to the screen you've added a few frames of latency so the audio is no longer in sync.
How to deal with that is out of the scope of WebGL as WebGL doesn't have anything to do with audio but it does point out that it's not as simple as just giving WebGL the data. Once you make the data available then people will ask for more APIs to get the audio and more info so they can delay one or both and keep them in sync.
TLDR; there is no way to upload video to WebGL except via texImage2D and texSubImage2D

Max number of textures in WebGL?

I know that there is a limit of 8 textures in WebGL.
My question is that, is 8 the limit globally, or per shader/program wise?
If it's per shader/program wise limit, does that mean, once I load the textures to uniforms of one shader, I can start reusing these slots for other shaders? Say I used TEXTURE0 for one shape, can I use TEXTURE0 in another shape?
The limit is per draw call. When you make a draw call, and invoke a particular shader program, you are constrained by the limit, but your next draw call can use completely different textures in the same animation frame.
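For example, a sketch of reusing texture unit 0 for two different shapes in the same frame (textureA, textureB, samplerLocation and the vertex counts are placeholders; program and buffer setup are omitted):
// Draw the first shape with textureA bound to unit 0.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, textureA);
gl.uniform1i(samplerLocation, 0);            // the sampler reads from unit 0
gl.drawArrays(gl.TRIANGLES, 0, firstCount);

// Rebind unit 0 to textureB and draw the second shape.
gl.bindTexture(gl.TEXTURE_2D, textureB);
gl.drawArrays(gl.TRIANGLES, 0, secondCount);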
Also, 8 is just the minimum guarantee. Systems are required to support at least eight to be considered WebGL conformant. But nicer graphics cards support more than eight. You can query the max number of image textures for the platform you're on like this:
var maxTextures = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
You can also look for vertex textures:
gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS)
Or a combination of the two:
gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS)
You can also use a site like WebGL Report (Disclaimer, I'm a contributor) to look up this stat for the platform you're on (under Fragment Shader -> Max Texture Units).
EDIT: When this answer was first written, there was another useful site called "WebGL Stats" that would show aggregate data for WebGL support in a variety of browsers. Sadly, that site disappeared a couple years ago without warning. But even back then, most devices supported at least 16 textures.

WARNING: Output of vertex shader 'v_gradient' not read by fragment shader

When I run my app on iOS 10 using Xcode 8, I get the following messages in the debug console, and the UI freezes. Does anyone know why this is happening?
ERROR /BuildRoot/Library/Caches/com.apple.xbs/Sources/VectorKit/VectorKit-1228.30.7.17.9/GeoGL/GeoGL/GLCoreContext.cpp 1763: InfoLog SolidRibbonShader:
ERROR /BuildRoot/Library/Caches/com.apple.xbs/Sources/VectorKit/VectorKit-1228.30.7.17.9/GeoGL/GeoGL/GLCoreContext.cpp 1764: WARNING: Output of vertex shader 'v_gradient' not read by fragment shader
Answer
One of the situations where you might get this warning in Xcode is when running an app that uses shaders, such as the Maps app or anything with an MKMapView. You'll find that the map view works as expected, without that warning, on a real device with real hardware and the native OS.
In the simulator, the SolidRibbonShader fragment shader is not able to read the output of the v_gradient vertex shader, probably because the simulator is in beta or there is an incompatibility between the Xcode version and the simulator version. However, the shaders are recognized on a real device.
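To illustrate what the warning itself means, here is a minimal, hypothetical GLSL ES pair with the same shape of problem: the vertex shader writes a varying that the fragment shader never reads, which is exactly the mismatch being reported for v_gradient:
// Vertex shader: writes v_gradient.
attribute vec4 a_position;
varying vec4 v_gradient;
void main() {
    v_gradient = a_position * 0.5 + 0.5;
    gl_Position = a_position;
}

// Fragment shader: declares v_gradient but never reads it, so the linker warns.
precision mediump float;
varying vec4 v_gradient;
void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}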
Explanation
Those shaders belong to the OpenGL Rendering Pipeline. The Rendering Pipeline is the sequence of steps that OpenGL takes when rendering objects.
The rendering pipeline is responsible for things like applying textures, converting the vertices to the right coordinate system, and displaying the model on the screen, etc.
There are six stages in this pipeline.
Per-Vertex Operation
Primitive Assembly
Primitive Processing
Rasterization
Fragment Processing
Per-Fragment Operation
Finally, an image appears on the screen of your device. These six stages are called the OpenGL Rendering Pipeline and all data used for rendering must go through it.
What is a shader?
A shader is a small program developed by you that lives in the GPU. A shader is written in a special graphics language called OpenGL Shading Language(GLSL).
A shader takes the place of two important stages in the OpenGL Rendering Pipeline: the Per-Vertex Operation and the Per-Fragment Operation. There is one shader for each of these two stages.
The ultimate goal of the Vertex Shader is to provide the final transformation of the mesh vertices to the rendering pipeline. The goal of the Fragment shader is to provide Coloring and Texture data to each pixel heading to the framebuffer.
Vertex shaders transform the vertices of a triangle from a local model coordinate system to the screen position. Fragment shaders compute the color of a pixel within a triangle rasterized on screen.
Separate Shader Objects Speed Compilation and Linking
Many OpenGL ES apps use several vertex and fragment shaders, and it is often useful to reuse the same fragment shader with different vertex shaders or vice versa. Because the core OpenGL ES specification requires a vertex and fragment shader to be linked together in a single shader program, mixing and matching shaders results in a large number of programs, increasing the total shader compile and link time when you initialize your app.
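As a rough sketch of why that adds up: with the core API every vertex/fragment pairing needs its own glLinkProgram, so N vertex shaders mixed with M fragment shaders means N*M linked programs (vertexShader and fragmentShader below are assumed to be already-compiled shader objects):
// Each vertex/fragment pairing is a separate program that must be linked.
GLuint makeProgram(GLuint vertexShader, GLuint fragmentShader) {
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glLinkProgram(program);   // link cost paid once per combination
    return program;
}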
Update: the issue seems to be gone now on Xcode9/iOS11.
Firstly, the freezing problem happens only when run from Xcode 8 and only on iOS 10 (currently 10.0.2), whether in debug or release mode. MKMapView though seems fine when the app is distributed via App Store or 3rd party ad hoc distribution systems. The warnings you are seeing may or may not be related to the problem, I don't know.
What I've found is that the offending code is in MKMapView's destructor, and it doesn't matter what you do with the map view object or how you configure it, i.e. merely calling
[MKMapView new];
anywhere in your code will freeze the app. The main thread hangs on a semaphore and it's not clear why.
One of the things I've tried was to destroy the map view object in a separate thread but that didn't help. Eventually I decided to retain my map objects at least in DEBUG builds.
NOTE: this is a really sh*tty workaround but at least it will help you to debug your app without freezing. Retaining these objects means your memory usage will grow by about 45-50MB every time you create a view controller with a map.
So, let's say if you have a property mapView, then you can do this in your view controller's dealloc:
- (void)dealloc
{
#if DEBUG
    // Xcode8/iOS10 MKMapView bug workaround
    static NSMutableArray* unusedObjects;
    if (!unusedObjects)
        unusedObjects = [NSMutableArray new];
    [unusedObjects addObject:_mapView];
#endif
}

Anyone using glGenerateMipmap in OpenGL ES 2.0 on iOS getting blurry unusable textures?

I am able to generate mipmaps using glGenerateMipmap, and I am using GL_LINEAR_MIPMAP_LINEAR as the minification filter.
Without mipmaps the texture looks fine when displayed around the same size as the actual texture size (512x512) (as expected) but shows aliasing effects when I zoom out (as expected).
With mipmaps the texture looks fine when displayed around the same size as the actual texture size (512x512) (as expected) and does not show aliasing effects when I zoom out (as expected). HOWEVER, I get ugly blurry textures that make the mipmapped version unusable, to the point that I may as well put up with the aliasing.
Any idea what I may be doing wrong here? I do not know whether the generated mipmaps end up looking like that, or whether a mipmap that is too small is being selected when it should be choosing a larger one. Has anyone actually got good results using glGenerateMipmap on OpenGL ES 2.0 on iOS?
Try setting the hint for generating nicer mipmaps; this could affect which filter is used to generate them, but there is no guarantee, since the effect is implementation-dependent:
glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST);
You can also affect the resulting blurriness by applying a LOD bias in your texture sampling code.
To get the best results you can use anisotropic filtering if the EXT_texture_filter_anisotropic extension is supported. Set it alongside the magnification and minification filters:
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, 2.0);
Instead of 2.0 you can query the maximum anisotropy level and set anything below it:
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
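Putting the pieces together, a sketch of the whole mipmapped-texture setup on ES 2.0 (assuming the texture is bound, its level 0 image is already uploaded, and the anisotropy extension has been checked for in the extension string):
// Ask for the nicest mipmap generation the driver offers (implementation-dependent).
glHint(GL_GENERATE_MIPMAP_HINT, GL_NICEST);
glGenerateMipmap(GL_TEXTURE_2D);

// Trilinear filtering between mip levels.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// With EXT_texture_filter_anisotropic, sharpen oblique views up to the max level.
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);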

What format for 'greyscale' render targets in DirectX?

I have a DirectX 9 game engine that creates its normal adapter with this format:
D3DFMT_X8R8G8B8
I have a system where I render some objects to an offscreen render target, as lightmaps. I then use that lightmap data to composite back to the back buffer where they act as a full screen 'mask' and let me get the effect of torches or other light sources on a dark scene.
Everything works just great.
The problem is, I'm aware that my big offscreen lightmap render targets are 16MB each, at a large res, and I only really need 8 bits of data (greyscale) from them, so 75% of the 32 bit render target memory is a waste. (I'm targeting low spec cards).
I tried creating the render targets as
D3DFMT_A8
But DirectX silently fails on that (if I add a CheckDeviceFormat() call I can see it happen) and creates a 32-bit target anyway. I use the D3DXCreateTexture function.
My question is, what format is best for creating these offscreen buffers?
Thank you for your help; I'm not good at render-target-related stuff :)
D3DFMT_L8 is 8-bit luminance. I believe it's supported on the GeForce 3 (i.e. the first consumer card with shader 1.1!), so it must be available everywhere. I think the colour is read as L, L, L, 1, i.e. rgb = luminance value, alpha = 1.
Edit: this tool is useful for finding caps:
http://zp.lo3.wroc.pl/cdragan/wizard.php
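Since CheckDeviceFormat() already came up in the question, here is a hedged sketch of probing whether D3DFMT_L8 (or any other candidate format) is usable as a render target before falling back to 32-bit; d3d is assumed to be the IDirect3D9 interface the engine used to create its device:
// Returns true if 'fmt' can be used as a render-target texture
// on an adapter whose display format is D3DFMT_X8R8G8B8.
bool SupportsRenderTargetFormat(IDirect3D9* d3d, D3DFORMAT fmt)
{
    HRESULT hr = d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT,
        D3DDEVTYPE_HAL,
        D3DFMT_X8R8G8B8,        // adapter/display format from the question
        D3DUSAGE_RENDERTARGET,  // we want to render into it
        D3DRTYPE_TEXTURE,
        fmt);                   // e.g. D3DFMT_L8 or D3DFMT_A8
    return SUCCEEDED(hr);
}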
Ontopic: If you are targeting lower spec cards, you are very likely to be running on systems where 8-bit single channel render targets are not supported at all.
If you are using shaders to do the rendering and compositing, it should be possible to use the rgba channels for 4 alternating pixels of your lightmap, packing your information. Perhaps you can tell us a little bit more about your current rendering setup?
Offtopic: AWESOME to have you here on StackOverflow, big fan of your work!
