GLSL code makes app crash on iPhone 6 Plus - iOS

I've found that a particular GLSL construct makes the app crash on an iPhone 6 Plus without any log.
For example, if you write GLSL like the code below, it crashes at glLinkProgram.
float testFun(float co) {
    return co;
}

float a = testFun(0.1);

void main()
{
    // your code here
}
But if you move the definition of "a" into a function, it works correctly.
This doesn't happen on the iPhone 5 or 5s.
You can reproduce this bug by downloading the sample project at
http://www.raywenderlich.com/3664/opengl-tutorial-for-ios-opengl-es-2-0
then replace SimpleFragment.glsl with
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut; // New
uniform sampler2D Texture; // New
precision highp float;

float testFun(float co) {
    return co;
}

float a = testFun(0.1);

void main()
{
    gl_FragColor = vec4(0.7, 0.5, 0.3, 1.0);
}
and run it on your iPhone 6 Plus. It crashes immediately.

First of all, these 3 iPhones you mentioned have 3 different GPUs:
iPhone 5 -> SGX543
iPhone 5s -> A7
iPhone 6/Plus -> A8
That means they probably have different drivers in iOS, and the GLSL shader compiler implementation may also differ, but no one actually knows except Apple's engineers. On your side, it means you really need to run and debug your app on real devices, not the simulator.
On the other hand, your iPhone 5/5s/6 Plus are all on the same iOS version, right? [I assume yes. ;)]
Coming back to your question: I think you should not use a global variable like a in your GLSL shader, since there is no stack/heap storage layout in a shader; most variables live in registers.
That means your float a; will occupy a register, and registers are a limited resource on the GPU! Using global variables is not recommended in GLSL, or more generally, in most programming languages, I think.
You can check the status of your shader with calls like the ones below to get more detail about the link failure:
GLint link_status = 0, length = 0;
glGetProgramiv(program, GL_LINK_STATUS, &link_status);
glGetProgramiv(program, GL_INFO_LOG_LENGTH, &length);
GLchar *log = malloc(length);
glGetProgramInfoLog(program, length, NULL, log);
Hope it helps.

Your shader code contains an error. This line is invalid:
float a = testFun(0.1);
In the ES 2.0 GLSL spec, section "4.3 Storage Qualifiers" on page 29 says (emphasis added):
Declarations of globals without a storage qualifier, or with just the const qualifier, may include initializers, in which case they will be initialized before the first line of main() is executed. Such initializers must be a constant expression.
Now the question becomes whether testFun(0.1) is a constant expression. Section "5.10 Constant Expressions" on page 49 clarifies that:
The following may not be used in constant expressions:
User-defined functions
The fact that the shader compiler crashes looks like an Apple bug. File it with them.
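A minimal workaround for the shader in the question (a sketch: it keeps the global declaration but defers the non-constant initialization to main(), which the quoted spec wording permits):

```glsl
precision highp float;

float testFun(float co) {
    return co;
}

float a; // uninitialized global declaration: legal

void main()
{
    a = testFun(0.1); // runtime initialization inside main(): legal
    gl_FragColor = vec4(0.7, 0.5, 0.3, 1.0);
}
```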

Related

How to bind a variable number of textures to Metal shader?

On the CPU I'm gathering an array of MTLTexture objects that I want to send to the fragment shader. There can be any number of these textures at any given moment. How can I send a variable-length array of MTLTextures to a fragment shader?
Example:
CPU:
var txrs: [MTLTexture] = []
for ... {
    txrs.append(...)
}
// Send array of textures to fragment shader.
GPU:
fragment half4 my_fragment(Vertex v [[stage_in]], <array of textures>, ...) {
    ...
    for(int i = 0; i < num_textures; i++) {
        texture2d<half> txr = array_of_textures[i];
    }
    ...
}
The plain array another answer suggested won't work, because the textures will take up all the bind points (there are only 31), at which point you will run out.
Instead, you need to use argument buffers.
So, for this to work, you need tier 2 argument buffer support. You can check for it with the argumentBuffersSupport property on an MTLDevice.
You can read more about argument buffers here or watch this talk about bindless rendering.
The basic idea is to use MTLArgumentEncoder to encode the textures you need into an argument buffer. Unfortunately, I don't think there's a direct way to encode just a bunch of MTLTextures, so instead you create a struct in your shaders like this:
struct SingleTexture
{
    texture2d<half> texture;
};
The texture in this struct has an implicit id of 0. To learn more about id, read the Argument Buffers section in the spec; it's basically a unique index for each entry in the argument buffer.
Then, change your function signature to
fragment half4 my_fragment(Vertex v [[stage_in]], device ushort& textureCount [[ buffer(0) ]], device SingleTexture* textures [[ buffer(1) ]])
You will then need to bind the count (use uint16_t rather than uint32_t in most cases) as a 2-byte (or 4-byte) buffer. (You can use the set<...>Bytes methods on an encoder for that.)
Then, you will need to compile that function to an MTLFunction and, from it, create an MTLArgumentEncoder using the newArgumentEncoderWithBufferIndex: method. You will use buffer index 1 in this case, because that's where your argument buffer is bound in the function.
From the MTLArgumentEncoder you can get encodedLength, which is basically the size of one SingleTexture struct in the argument buffer. Multiply it by the number of textures to get the proper size for the buffer you will encode your argument buffer into.
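A sketch of that setup in Objective-C; fragmentFunction, device, and textureCount are placeholder names for this example, not required API:

```objectivec
// Create the encoder from the function's buffer(1) argument.
id<MTLArgumentEncoder> argumentEncoder =
    [fragmentFunction newArgumentEncoderWithBufferIndex:1];

// One encodedLength per SingleTexture entry in the argument buffer.
NSUInteger argumentBufferLength = argumentEncoder.encodedLength * textureCount;
id<MTLBuffer> argumentBuffer =
    [device newBufferWithLength:argumentBufferLength
                        options:MTLResourceStorageModeShared];
```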
After that, in your setup code, you can just do this:
for(size_t i = 0; i < textureCount; i++)
{
    // We basically just offset into an array of SingleTexture
    [argumentEncoder setArgumentBuffer:<your buffer you just created> offset:argumentEncoder.encodedLength * i];
    [argumentEncoder setTexture:textures[i] atIndex:0];
}
And then, when you are done encoding the buffer, you can hold on to it until your texture array changes (you don't need to re-encode it every frame).
Then, you need to bind the argument buffer to buffer binding point 1, just as you would bind any other buffer.
The last thing you need to do is make sure all the resources referenced indirectly are resident on the GPU. Since you encoded your textures into the argument buffer, the driver has no way of knowing whether you used them, because you are not binding them directly.
To do that, use the useResource: or useResources: variants on the encoder you are using, like this:
[encoder useResources:&textures[0] count:textureCount usage:MTLResourceUsageRead];
This is kind of a mouthful, but it is the proper way to bind anything you want to your shaders.

MSL - How to specify uniform array parameters in Metal shader?

I'm trying to pass a uniform array into a Metal shader, e.g.:
fragment vec4 fragment_func(constant float4& colors[3] [[buffer(0)]], ...) {...}
I'm getting the error:
"NSLocalizedDescription" : "Compilation failed:
program_source:2:1917: error: 'colors' declared as array of references of type 'const constant float4 &'
program_source:2:1923: error: 'buffer' attribute cannot be applied to types
program_source:2:1961:
I understand that the 'buffer' attribute can only be applied to pointers and references. In that case, what's the correct way to pass in uniform arrays in MSL?
Edit:
MSL specs state that "Arrays of buffer types" are supported for buffer attributes. I must be doing something syntactically wrong?
Arrays of references aren't permitted in C++, nor does MSL support them as an extension.
However, you can take a pointer to the type contained in the array:
fragment vec4 fragment_func(constant float4 *colors [[buffer(0)]], ...) {...}
If necessary, you can pass the size of the array as another buffer parameter, or you can just ensure that your shader function doesn't read more elements than are present in the buffer.
Accessing elements is then as simple as an ordinary dereference:
float4 color0 = *colors; // or, more likely:
float4 color2 = colors[2];
You may also use:
fragment vec4 fragment_func(constant float4 colors [[buffer(0)]][3], ...) {...}
This is an unfortunate side-effect of how the attribute syntax works in C++. The advantage of doing it this way is that it retains the type on colors more directly.
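On the CPU side, a small fixed-size array like this doesn't even need a persistent MTLBuffer; a sketch in Objective-C (renderEncoder and the color values are illustrative) using setFragmentBytes:length:atIndex:, which copies the data for you:

```objectivec
#include <simd/simd.h>

// Three colors matching the constant float4 *colors parameter at buffer(0).
simd_float4 colors[3] = {
    {1.0f, 0.0f, 0.0f, 1.0f},
    {0.0f, 1.0f, 0.0f, 1.0f},
    {0.0f, 0.0f, 1.0f, 1.0f},
};
[renderEncoder setFragmentBytes:colors
                         length:sizeof(colors)
                        atIndex:0];
```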

Using the [[clip_distance]] attribute in a Metal shader?

In a Metal shader, I am trying to make use of the [[clip_distance]] attribute in the output structure of a vertex shader function as follows:
struct vtx_out
{
    float4 gl_Position [[position]];
    float gl_ClipDistance[1] [[clip_distance]];
};
However, this results in the following shader compilation error:
<program source>:86:32: error: 'clip_distance' attribute cannot be applied to types
float gl_ClipDistance[1] [[clip_distance]];
^
I am trying to compile this for running on a Mac running OS X El Capitan.
Why am I getting this error, and how can I make use of the [[clip_distance]] attribute?
Use this:
struct vtx_out
{
    float4 gl_Position [[position]];
    float gl_ClipDistance [[clip_distance]] [1];
};
In the Metal shading language clip_distance is a declaration attribute.
The C++ spec [dcl.array] states:
In a declaration T D where D has the form
D1 [ constant-expression_opt ] attribute-specifier-seq_opt
... The optional attribute-specifier-seq appertains to the array.
This is why placing the attribute at the end makes Clang treat it as a type attribute, which produces the error that you see.

iOS 9, iPhone 6s Plus, Metal, unable to link to function frexp

I'm new to iOS Metal and I am trying to write a kernel. My function needs to link to the frexp function. Unfortunately, a kernel referencing frexp will not compile.
float exponent = 0.0;
float mantissa = frexp(value, exponent);
The Metal documentation lists the function protocol as:
T frexp(T x, Ti &exp)
I am able to compile to other similar math functions such as exp, exp2, exp10, ldexp.
Has anyone been able to link to Metal's frexp function? Or know how I can view the metal_math include file to see the frexp protocol the compiler is referencing?
Thanks!
After rereading my own question I found my mistake; the corrected code looks as follows.
int exponent = 0;
float mantissa = frexp(value, exponent);

iPad missing OpenGL extension string GL_APPLE_texture_2D_limited_npot

In my iOS game, I want to use the GL_APPLE_texture_2D_limited_npot extension when available to save memory (the game has NPOT textures, and in my current implementation I add padding to make them power-of-two).
I am testing on my iPad (first generation). Everything I have read so far says that all iOS devices that support OpenGL ES 2 (including the iPad) also support GL_APPLE_texture_2D_limited_npot (which is very good, since my game uses OpenGL ES 2). I have tested on my iPad, and it does support it (I removed the padding and the images work if I set wrap to GL_CLAMP_TO_EDGE), but the extension does not show up when I call glGetString(GL_EXTENSIONS). The code:
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
std::cout << extensions << "\n";
Results in:
GL_OES_depth_texture GL_OES_depth24 GL_OES_element_index_uint GL_OES_fbo_render_mipmap GL_OES_mapbuffer GL_OES_packed_depth_stencil GL_OES_rgb8_rgba8 GL_OES_standard_derivatives GL_OES_texture_float GL_OES_texture_half_float GL_OES_vertex_array_object GL_EXT_blend_minmax GL_EXT_debug_label GL_EXT_debug_marker GL_EXT_discard_framebuffer GL_EXT_read_format_bgra GL_EXT_separate_shader_objects GL_EXT_shader_texture_lod GL_EXT_texture_filter_anisotropic GL_APPLE_framebuffer_multisample GL_APPLE_rgb_422 GL_APPLE_texture_format_BGRA8888 GL_APPLE_texture_max_level GL_IMG_read_format GL_IMG_texture_compression_pvrtc
Why does this extension not show up with glGetString(GL_EXTENSIONS)? What is the proper way to check for it? Do all OpenGL ES 2 iOS devices really support it?
OpenGL ES 2.0 supports non-power-of-two textures in the core specification. There is no need for the extension. Here is the spec: http://www.khronos.org/registry/gles/specs/2.0/es_full_spec_2.0.25.pdf (page 69):
If wt and ht are the specified image width and height, and if either wt or ht are less than zero, then the error INVALID_VALUE is generated.
The maximum allowable width and height of a two-dimensional texture image must be at least 2^(k-lod) for image arrays of level zero through k, where k is the log base 2 of MAX_TEXTURE_SIZE, and lod is the level-of-detail of the image array. It may be zero for image arrays of any level-of-detail greater than k. The error INVALID_VALUE is generated if the specified image is too large to be stored under any conditions.
Not a word about a power-of-two restriction (that is in the OpenGL ES 1.x standard).
And if you read the specification of the extension - http://www.khronos.org/registry/gles/extensions/APPLE/APPLE_texture_2D_limited_npot.txt - you'll notice that it is written against the OpenGL ES 1.1 spec.
