WARNING: Output of vertex shader 'v_gradient' not read by fragment shader - ios

When I run my app on iOS 10 using Xcode 8, I get the following message in the debug console, and the UI freezes. Does anyone know why this is happening?
ERROR /BuildRoot/Library/Caches/com.apple.xbs/Sources/VectorKit/VectorKit-1228.30.7.17.9/GeoGL/GeoGL/GLCoreContext.cpp 1763: InfoLog SolidRibbonShader:
ERROR /BuildRoot/Library/Caches/com.apple.xbs/Sources/VectorKit/VectorKit-1228.30.7.17.9/GeoGL/GeoGL/GLCoreContext.cpp 1764: WARNING: Output of vertex shader 'v_gradient' not read by fragment shader

Answer
One of the situations where you might get this warning in Xcode is when running an app that uses shaders, such as the Maps app with an MKMapView. You'll find that the map view works as expected, without the warning, on a real device.
In the Simulator, the SolidRibbonShader fragment shader is unable to read the vertex shader output 'v_gradient', probably because the Simulator is in beta or because of an incompatibility between the Xcode and Simulator versions. On a real device, however, the shaders are recognized.
Explanation
Those shaders belong to the OpenGL Rendering Pipeline: the sequence of steps that OpenGL takes when rendering objects.
The rendering pipeline is responsible for things like applying textures, converting vertices to the right coordinate system, and displaying the object on the screen.
There are six stages in this pipeline.
Per-Vertex Operation
Primitive Assembly
Primitive Processing
Rasterization
Fragment Processing
Per-Fragment Operation
Finally, an image appears on the screen of your device. These six stages are called the OpenGL Rendering Pipeline and all data used for rendering must go through it.
What is a shader?
A shader is a small program, written by you, that runs on the GPU. Shaders are written in a special graphics language called the OpenGL Shading Language (GLSL).
A shader takes the place of two important stages in the OpenGL Rendering Pipeline: the Per-Vertex Processing and Per-Fragment Processing stages. There is one shader for each of these two stages.
The ultimate goal of the vertex shader is to provide the final transformation of the mesh vertices to the rendering pipeline. The goal of the fragment shader is to provide color and texture data for each pixel heading to the framebuffer.
Vertex shaders transform the vertices of a triangle from a local model coordinate system to their screen positions. Fragment shaders compute the color of each pixel within a triangle rasterized on screen.
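To make the warning concrete, here is a minimal, hypothetical GLSL ES shader pair of the kind that produces it: the vertex shader writes a varying called v_gradient that the fragment shader never reads, so the driver's info log flags the unused output. (This is only a sketch; Apple's actual SolidRibbonShader source is not public.)

// Hypothetical shader sources, embedded as C strings for glShaderSource().
static const char *kVertexSrc =
    "attribute vec4 a_position;               \n"
    "varying   vec4 v_gradient;               \n"
    "void main() {                            \n"
    "    v_gradient = a_position * 0.5 + 0.5; \n" // output written here...
    "    gl_Position = a_position;            \n"
    "}                                        \n";

static const char *kFragmentSrc =
    "precision mediump float;                     \n"
    "void main() {                                \n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); \n" // ...but never read here
    "}                                            \n";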
Separate Shader Objects Speed Compilation and Linking
Many OpenGL ES apps use several vertex and fragment shaders, and it is often useful to reuse the same fragment shader with different vertex shaders or vice versa. Because the core OpenGL ES specification requires a vertex and fragment shader to be linked together in a single shader program, mixing and matching shaders results in a large number of programs, increasing the total shader compile and link time when you initialize your app.
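On iOS the remedy is the EXT_separate_shader_objects extension, which lets you build each stage as its own program and mix and match them through a program pipeline object. A rough sketch, assuming vertexSrc and fragmentSrc point to valid GLSL source strings and a suitable EAGL context is current:

// Build one single-stage program per shader instead of one monolithic program.
GLuint vertProg = glCreateShaderProgramvEXT(GL_VERTEX_SHADER, 1, &vertexSrc);
GLuint fragProg = glCreateShaderProgramvEXT(GL_FRAGMENT_SHADER, 1, &fragmentSrc);

GLuint pipeline;
glGenProgramPipelinesEXT(1, &pipeline);
glBindProgramPipelineEXT(pipeline);

// Stages can now be swapped independently, with no combinatorial relinking.
glUseProgramStagesEXT(pipeline, GL_VERTEX_SHADER_BIT_EXT, vertProg);
glUseProgramStagesEXT(pipeline, GL_FRAGMENT_SHADER_BIT_EXT, fragProg);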

Update: the issue seems to be gone now on Xcode 9/iOS 11.
Firstly, the freezing problem happens only when run from Xcode 8 and only on iOS 10 (currently 10.0.2), whether in debug or release mode. MKMapView though seems fine when the app is distributed via App Store or 3rd party ad hoc distribution systems. The warnings you are seeing may or may not be related to the problem, I don't know.
What I've found is that the offending code is in MKMapView's destructor, and it doesn't matter what you do with the map view object or how you configure it, i.e. merely calling
[MKMapView new];
anywhere in your code will freeze the app. The main thread hangs on a semaphore and it's not clear why.
One of the things I've tried was to destroy the map view object in a separate thread but that didn't help. Eventually I decided to retain my map objects at least in DEBUG builds.
NOTE: this is a really sh*tty workaround but at least it will help you to debug your app without freezing. Retaining these objects means your memory usage will grow by about 45-50MB every time you create a view controller with a map.
So, let's say you have a property mapView; then you can do this in your view controller's dealloc:
- (void)dealloc
{
#if DEBUG
    // Xcode 8/iOS 10 MKMapView bug workaround: keep the map view alive forever
    // so that its destructor, which hangs the main thread, never runs.
    static NSMutableArray *unusedObjects;
    if (!unusedObjects)
        unusedObjects = [NSMutableArray new];
    if (_mapView) // addObject: throws on nil, so guard against an unset property
        [unusedObjects addObject:_mapView];
#endif
}

Related

How to compile fragment shader ahead of time?

The documentation for SKShader says:
Compiling a shader and the uniform data associated with it can be expensive. Because of this, you should initialize shader objects when your game launches, not while the game is running.
From that I assumed that the shader would compile when it is initialized, but that's not the case (probably a bug?); it seems to compile the moment it makes its first appearance, so I get a short app-wide lag the first time this happens, along with a shader compilation succeeded message in the console.
I can fix this by adding a zero-size node to the scene and cycling through any needed shaders 🥴.
let node = SKSpriteNode()
node.shader = someShader
scene.addChild(node)
Kind of a hack... Is there a better way to compile a shader ahead of time?

Wanting to ditch MTKView.currentRenderPassDescriptor

I have an occasional issue with my MTKView renderer stalling for 1.0 s on obtaining a currentRenderPassDescriptor. According to the docs, this is due either to the view's device not being set (it is) or to there being no drawables available.
If there are no drawables available, I don't see a means of just immediately bailing or skipping that video frame. The render loop will stall for 1.0s.
Is there a workaround for this? Any help would be appreciated.
My workflow is a bunch of kernel shader work then one final vertex shader. I could do the drawing of the final shader onto my own texture (instead of using the currentPassDescriptor), then hoodwink that texture into the view's currentDrawable -- but in the obtaining of that drawable we're back to the same stalling situation.
Should I get rid of MTKView entirely and fall back to using a CAMetalLayer instead? Again, I suspect the same stalling issues will arise. Is there a way to set the maximumDrawableCount on an MTKView like there is on CAMetalLayer?
I'm a little baffled because, according to the Metal System Trace, my work is invariably completed in under 5.0 ms per frame on an iMac 2015 R9 M395.

Max number of textures in WebGL?

I know that there is a limit of 8 textures in WebGL.
My question is: is 8 the limit globally, or per shader/program?
If it's a per-shader/program limit, does that mean that once I bind textures to the uniforms of one shader, I can start reusing those slots for other shaders? Say I used TEXTURE0 for one shape; can I use TEXTURE0 for another shape?
The limit is per draw call. When you make a draw call, and invoke a particular shader program, you are constrained by the limit, but your next draw call can use completely different textures in the same animation frame.
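Since WebGL 1.0 mirrors the OpenGL ES 2.0 API, the reuse pattern can be sketched in GL ES C (texture and program names hypothetical); the WebGL calls are the same apart from the gl. prefix:

// Draw shape A: its texture occupies unit 0 for this draw call only.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureA);      // hypothetical texture id
glUseProgram(programA);                      // hypothetical program
glUniform1i(glGetUniformLocation(programA, "u_sampler"), 0); // sampler reads unit 0
glDrawArrays(GL_TRIANGLES, 0, vertexCountA);

// Same frame, next draw call: unit 0 is rebound for shape B with no conflict.
glBindTexture(GL_TEXTURE_2D, textureB);
glUseProgram(programB);
glUniform1i(glGetUniformLocation(programB, "u_sampler"), 0);
glDrawArrays(GL_TRIANGLES, 0, vertexCountB);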
Also, 8 is just the minimum guarantee. Systems are required to support at least eight to be considered WebGL conformant. But nicer graphics cards support more than eight. You can query the max number of image textures for the platform you're on like this:
var maxTextures = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
You can also look for vertex textures:
gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS)
Or a combination of the two:
gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS)
You can also use a site like WebGL Report (Disclaimer, I'm a contributor) to look up this stat for the platform you're on (under Fragment Shader -> Max Texture Units).
EDIT: When this answer was first written, there was another useful site called "WebGL Stats" that would show aggregate data for WebGL support in a variety of browsers. Sadly, that site disappeared a couple years ago without warning. But even back then, most devices supported at least 16 textures.

Interpolation in texture3D in OpenGL ES 3.0 on iOS

I am passing a GL_TEXTURE_3D to the fragment shader in an iOS application, with both the minification and magnification filters for this texture set to GL_LINEAR. However, the resulting texture rendered in the app has visible blocks instead of a smooth transition of colors, which implies that it is using GL_NEAREST interpolation.
Here are the screenshots of the expected vs received output image
PS: If I use a 2D texture instead, pass in the 3D texture as a flattened 2D texture and do the interpolation manually in the shader, it works all fine and I get the expected output.
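For reference, a hedged sketch of that manual approach (atlas layout and names hypothetical): the hardware does the bilinear filtering within each depth slice, and the blend between adjacent slices is done by hand with mix().

// Hypothetical fragment shader for a volume flattened into a 2D atlas of
// depth slices laid out left to right, embedded as a C string.
static const char *kFlattenedVolumeFS =
    "precision highp float;                                            \n"
    "uniform sampler2D u_atlas;   // depth slices packed side by side  \n"
    "uniform float u_slices;      // number of depth slices            \n"
    "varying vec3 v_texCoord;     // volume coordinate in [0,1]^3      \n"
    "vec4 sampleSlice(float s, vec2 uv) {                              \n"
    "    return texture2D(u_atlas, vec2((s + uv.x) / u_slices, uv.y)); \n"
    "}                                                                 \n"
    "void main() {                                                     \n"
    "    float z  = v_texCoord.z * (u_slices - 1.0);                   \n"
    "    float lo = floor(z);                                          \n"
    "    float hi = min(lo + 1.0, u_slices - 1.0);                     \n"
    "    gl_FragColor = mix(sampleSlice(lo, v_texCoord.xy),            \n"
    "                       sampleSlice(hi, v_texCoord.xy),            \n"
    "                       z - lo);                                   \n"
    "}                                                                 \n";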
Here is the code for setting up GL_LINEAR:
GLenum target, minificationFilter, magnificationFilter;
target = GL_TEXTURE_3D;
minificationFilter = GL_LINEAR;
magnificationFilter = GL_LINEAR;
glTexParameteri(target, GL_TEXTURE_MIN_FILTER, minificationFilter);
glTexParameteri(target, GL_TEXTURE_MAG_FILTER, magnificationFilter);
Linear filtering of textures with internal format GL_RGBA32F is not supported in ES 3.0.
You can see which formats support linear filtering in table 3.13 of the ES 3.0 spec document, on pages 130-132. The last column, with header "Texture-filterable", indicates which formats support filtering. RGBA32F does not have a checkmark in that column.
If you need linear filtering for float textures, you're limited to 16-bit component floats. RGBA16F in the same table has the checkmark in the last column.
This limitation is still in place in the latest ES 3.2 spec.
There is an extension to lift this limitation: OES_texture_float_linear. However, this extension is not listed under the supported extensions on iOS.
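A hedged sketch of allocating the volume in the filterable half-float format instead (texture id, dimensions, and data pointer hypothetical):

// ES 3.0 marks GL_RGBA16F as texture-filterable, so GL_LINEAR works on it;
// GL_RGBA32F is not filterable without OES_texture_float_linear.
glBindTexture(GL_TEXTURE_3D, volumeTexture);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA16F,
             width, height, depth, 0,
             GL_RGBA, GL_HALF_FLOAT, voxelData); // voxelData: packed 16-bit floats
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);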
If you switch from the single precision float format GL_RGBA32F to the half-float format GL_RGBA16F then GL_LINEAR magnification works fine.
I can't find any documentation to suggest why this shouldn't work, and the only limitation on single precision float textures seems to be when used as render targets, so I guess this is a bug to be filed under "GL_RGBA32F ignores GL_LINEAR magnification on iOS 9".
If it genuinely is a bug, then be understanding - I imagine an OpenGLES 3 implementation to be one of the largest, most awful switch-statement-driven pieces of code that one could possibly have the misfortune to work on. If you consider that whatever glamour the job might have entailed previously has since been sucked out by the release of the faster, sexier and legacy-free Metal then you're probably talking about a very unloved codebase, maintained by some very unhappy people. You're lucky flat shaded triangles even work.
p.s. when using GL_TEXTURE_3D don't forget to clamp in the third coordinate (GL_TEXTURE_WRAP_R)
p.p.s. Test this on a device: neither GL_RGBA32F nor GL_RGBA16F seems to work with GL_LINEAR on the simulator.

glDrawArrays takes long on first call using OpenGL ES on iOS

I'm trying to use multiple GLSL fragment shaders with OpenGL ES on iOS 7 and upwards. The shaders themselves run fine after the first call to glDrawArrays. Nevertheless, the very first call to glDrawArrays after the shaders and their program have been compiled and linked takes ages to complete. Afterwards some pipeline seems to have been warmed up and everything runs smoothly. Any ideas what causes this and how to prevent it?
The most likely cause is that your shaders may not be fully compiled until you use them the first time. They might have been translated to some kind of intermediate form when you call glCompileShader(), which would be enough for the driver to provide a compile status and to act as if the shaders had been compiled. But the full compilation and optimization could well be deferred until the first draw call that uses the shader program.
A commonly used technique for games is to render a few frames without actually displaying them while some kind of intro screen is still shown to the user. This prevents the user from seeing stuttering that could otherwise result from all kinds of possible deferred initialization or data loading during the first few frames.
You could also look into using binary shaders to reduce slowdowns from shader compilation. See glShaderBinary() in the ES 2.0 documentation.
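A hedged sketch of the warm-up idea (the program and its attribute state are assumed to be set up elsewhere): issue one throwaway draw into a 1x1 offscreen framebuffer right after linking, so any deferred compilation happens before the first visible frame.

// Create a tiny offscreen render target so the dummy draw is never displayed.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

glUseProgram(program);            // 'program' assumed compiled and linked
glDrawArrays(GL_TRIANGLES, 0, 3); // throwaway draw forces the real compile
glFinish();                       // block until the driver has actually finished

glBindFramebuffer(GL_FRAMEBUFFER, 0); // restore the default framebuffer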
What actually helped speed up the first draw call was the following (which is fine in my use case, since I'm rendering video and no depth testing is needed):
glDisable(GL_DEPTH_TEST);
