Unity application crashes on iOS due to shader not compiling

I am trying to build my Unity 5.4.2f2 application for iOS. The build completes with no compile errors, but when I try to run the application using Xcode 8.0, it immediately crashes and the debugger reports the following error:
Initialize engine version: 5.4.2f2 (b7e030c65c9b)
-------- Shader compilation failed
#version 100
#extension GL_EXT_frag_depth : enable
precision highp float;
uniform highp vec4 _ProjectionParams;
uniform highp vec4 _ZBufferParams;
uniform highp mat4 unity_CameraToWorld;
uniform highp mat4 _NonJitteredVP;
uniform highp mat4 _PreviousVP;
uniform highp sampler2D _CameraDepthTexture;
varying highp vec2 xlv_TEXCOORD0;
varying highp vec3 xlv_TEXCOORD1;
void main ()
{
highp vec4 tmpvar_1;
tmpvar_1 = texture2D (_CameraDepthTexture, xlv_TEXCOORD0);
mediump vec2 tmpvar_2;
highp vec4 tmpvar_3;
tmpvar_3.w = 1.0;
tmpvar_3.xyz = ((xlv_TEXCOORD1 * (_ProjectionParams.z / xlv_TEXCOORD1.z)) * (1.0/((
(_ZBufferParams.x * tmpvar_1.x)
+ _ZBufferParams.y))));
highp vec4 tmpvar_4;
tmpvar_4 = (unity_CameraToWorld * tmpvar_3);
highp vec4 tmpvar_5;
tmpvar_5 = (_PreviousVP * tmpvar_4);
highp vec4 tmpvar_6;
tmpvar_6 = (_NonJitteredVP * tmpvar_4);
highp vec2 tmpvar_7;
tmpvar_7 = (((tmpvar_5.xy / tmpvar_5.w) + 1.0) / 2.0);
highp vec2 tmpvar_8;
tmpvar_8 = (((tmpvar_6.xy / tmpvar_6.w) + 1.0) / 2.0);
tmpvar_2 = (tmpvar_8 - tmpvar_7);
mediump vec4 tmpvar_9;
tmpvar_9.zw = vec2(0.0, 1.0);
tmpvar_9.xy = tmpvar_2;
gl_FragDepthEXT = tmpvar_1.x;
gl_FragData[0] = tmpvar_9;
}
failed compiling:
fragment evaluation shader
WARNING: 0:4: extension 'GL_EXT_frag_depth' is not supported
ERROR: 0:38: Use of undeclared identifier 'gl_FragDepthEXT'
Note: Creation of internal variant of shader 'Hidden/Internal-MotionVectors' failed.
WARNING: Shader Unsupported: 'Hidden/Internal-MotionVectors' - Pass '' has no vertex shader
WARNING: Shader Unsupported: 'Hidden/Internal-MotionVectors' - Setting to default shader.
Xcode 8.0 ships with OpenGL ES 2.0.
On the Unity forums people say this should be fine for Unity 5.4, but it's not working for me. On Android devices my application runs fine.

Open Unity -> Edit -> Project Settings -> Graphics.
Then find Depth Normals under Built-in Shader Settings and choose the No Support option.

In Edit/Project Settings/Graphics you can see the Always Included Shaders list; check whether the shader is there.
Or, if you have 3D objects in the scene, disable Motion Vectors on all of the mesh renderers.
You can search the Hierarchy to find all of them with: t:MeshRenderer
For me it was the "Motion Vectors" setting (also under Edit/Project Settings/Graphics).
Reference:
https://forum.unity3d.com/threads/hidden-shader-motionvectors.431470/

This Blit shader crash is mostly caused by texture compression: iOS does not support DDS-format textures. If you are using DDS textures, replace them with JPEG or another supported format and the build will run safely on iOS. This worked for me after long research.

Related

Is there a way to test the GLSL-ES version in the shader?

In web tools such as Shadertoy, my fragment shader source is included in a main() I don't control or see. It would be the same if I were distributing a GLSL library.
My problem is ensuring compatibility between WebGL2 and WebGL1: I would like to write GLSL fallbacks to emulate missing WebGL2 built-ins if the browser or OS is only WebGL1 capable.
Is there a way to test the current WebGL/GLSL version from within the shader, as we can for the availability of an extension?
(BTW, testing extensions is getting complicated now that some are included in the language: e.g., GL_EXT_shader_texture_lod is undefined in WebGL2 even though the function is there. So being able to test the GLSL version is crucial.)
AFAICT there's no good way to test. The spec says the preprocessor macro __VERSION__ will be set to the version as an integer, 300 for GLSL version 3.00, so
#if __VERSION__ == 300
// use 300 es stuff
#else
// use 100 es stuff
#endif
The problem is for WebGL2 when using 300 es shaders the very first line of the shader must be
#version 300 es
So you can't do this
#if IMAGINARY_WEBGL2_FLAG
#version 300 es // BAD!! This has to be the first line
...
#else
...
#endif
So, given that you already have to update the first line, why not just have two shaders, one for WebGL1 and another for WebGL2 (a minimal pair is sketched below)? Otherwise, all major engines generate their shaders, so it should be pretty trivial to generate WebGL1 or WebGL2 code yourself if you want to go down that path.
In the first place, there's no reason to use WebGL2 shader features if you can get by with WebGL1, and if you are using WebGL2 features then they're not really the same shader anymore, are they? They'd need different setup, different inputs, etc...
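As a sketch of that two-shader approach, here is the same trivial textured fragment shader written once per GLSL ES version (the names u_tex and v_texcoord are illustrative, not from the question):
// GLSL ES 1.00 version (WebGL1)
precision mediump float;
varying vec2 v_texcoord;
uniform sampler2D u_tex;
void main() {
    gl_FragColor = texture2D(u_tex, v_texcoord);
}
// GLSL ES 3.00 version (WebGL2) -- note the mandatory first line
#version 300 es
precision mediump float;
in vec2 v_texcoord;
uniform sampler2D u_tex;
out vec4 fragColor;
void main() {
    fragColor = texture(u_tex, v_texcoord);
}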
Let's pretend we could do it all in GLSL though, what would you want it to look like?
// IMAGINARY WHAT IF EXAMPLE ....
#if WEBGL2
#version 300 es
#define texture2D texture
#define textureCube texture
#else
#define in varying
#define out varying
#endif
in vec4 position;
in vec2 texcoord;
out vec2 v_texcoord;
uniform sampler2D u_tex;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * (position + texture2D(u_tex, texcoord));
v_texcoord = texcoord;
}
Let's assume you wanted to do that. You could do it in JavaScript (not suggesting this is the way, just showing an example):
const webgl2Header = `#version 300 es
#define texture2D texture
#define textureCube texture
`;
const webglHeader = `
#define in varying
#define out varying
`;
function prepShader(gl, source) {
const isWebGL2 = gl.texImage3D !== undefined;  // texImage3D only exists on WebGL2 contexts
const header = isWebGL2 ? webgl2Header : webglHeader;
return header + source;
}
const vs = `
in vec4 position;
in vec2 texcoord;
out vec2 v_texcoord;
uniform sampler2D u_tex;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * (position + texture2D(u_tex, texcoord));
v_texcoord = texcoord;
}
`;
const vsSrc = prepShader(gl, vs);
You can make your JavaScript substitutions as complicated as you want; there are libraries that do this kind of preprocessing for you.

GLSL ES equivalent to OpenGL GLSL 'out' keyword?

I have a vertex shader which works fine on Windows with OpenGL. I want to use the same shader on an iPad, which supports OpenGL ES 2.0.
Compilation of the shader fails with:
Invalid storage qualifiers 'out' in global variable context
From what I have read, the 'out' keyword requires GLSL 1.5, which the iPad won't support. Is there an equivalent keyword to 'out' that I can use to pass the color into my fragment shader?
attribute vec4 vPosition;
attribute vec4 vColor;
uniform mat4 MVP;
out vec4 pass_Color;
void main()
{
gl_Position = MVP * vPosition;
pass_Color = vColor;
}
I use this vertex shader to create gradient blends, so I'm assigning a color to each vertex of a triangle and the fragment shader then interpolates the color between the vertices. That's why I'm not passing a single color directly into the fragment shader.
Solved! In the GLSL ES 1.0 that I'm using, I need to use 'varying' instead of 'in' and 'out'. Here's the working shader:
attribute vec4 vPosition;
attribute vec4 vColor;
uniform mat4 MVP;
varying vec4 pass_Color;
void main()
{
gl_Position = MVP * vPosition;
pass_Color = vColor;
}
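For completeness, a minimal matching fragment shader under the same GLSL ES 1.0 rules just reads the interpolated varying (a sketch; only pass_Color comes from the shader above):
precision mediump float;
varying vec4 pass_Color;
void main()
{
    gl_FragColor = pass_Color;
}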

Sampler3D in iOS

I have just included OpenGL ES 3.0 in my iOS app and it is working fine.
I have a working shader below:
#version 300 es
precision mediump float;
uniform sampler2D texSampler;
uniform float fExposure;
in vec2 fTexCoord;
in vec3 fColor;
out vec4 fragmentColor;
void main()
{
fragmentColor = texture(texSampler, fTexCoord) * vec4(fColor, 1.0) * fExposure;
}
Now, I want to use a sampler3D so I have:
#version 300 es
precision mediump float;
uniform sampler3D texSampler;
uniform float fExposure;
in vec3 fTexCoord;
in vec3 fColor;
out vec4 fragmentColor;
void main()
{
fragmentColor = texture(texSampler, fTexCoord) * vec4(fColor, 1.0) * fExposure;
}
and it doesn't compile. Also, I changed the vec2 texCoord to vec3 texCoord in the vertex shader.
Actually, sampler3D is not recognized, but as far as I know it exists in OpenGL ES 3.0.
Any ideas?
Similar to float, sampler3D does not have a default precision. Add this at the start of your fragment shader, where you also specify the default float precision:
precision mediump sampler3D;
Of course you can use lowp instead if that gives you sufficient precision.
The only sampler types that have a default precision in ES 3.0/3.1 are sampler2D and samplerCube (both default to lowp). For all others, the precision has to be specified either as a default precision, or as part of the variable declaration.
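Applied to the fragment shader above, a minimal sketch of the fix looks like this (unchanged apart from the added precision statement):
#version 300 es
precision mediump float;
precision mediump sampler3D; // sampler3D has no default precision in ES 3.0
uniform sampler3D texSampler;
uniform float fExposure;
in vec3 fTexCoord;
in vec3 fColor;
out vec4 fragmentColor;
void main()
{
    fragmentColor = texture(texSampler, fTexCoord) * vec4(fColor, 1.0) * fExposure;
}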

OpenGL ES 2.0 draw Fullscreen Quad very slow

When I render my content into an FBO with a texture bound to it and then render that texture to a fullscreen quad using a basic shader, the performance drops ridiculously.
For example:
Render to the screen directly (with the basic shader): [framerate screenshot omitted]
Render to a texture first, then draw that texture with a fullscreen quad (same basic shader; normally this would be something like blur or bloom): [framerate screenshot omitted]
Does anyone have an idea how to speed this up? The current performance is not usable. I'm also using GLKit for the basic OpenGL setup.
You need to use the appropriate precision wherever it's needed:
lowp - for colors, texture coordinates, normals, etc.
highp - for matrices and vertices/positions
See the quick reference; check the range of the precision qualifiers on page 3 under "Qualifiers".
// BasicShader.vsh
precision mediump float;
attribute highp vec2 position;
attribute lowp vec2 texCoord;
attribute lowp vec4 color;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform highp mat4 projectionMat;
uniform highp mat4 worldMat;
void main() {
highp mat4 worldProj = worldMat * projectionMat;
gl_Position = worldProj * vec4(position, 0.0, 1.0);
textureCoord = texCoord;
textureColor = color;
}
// BasicShader.fsh
precision mediump float;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform sampler2D sampler;
void main() {
lowp vec4 Color = texture2D(sampler, textureCoord);
gl_FragColor = Color * textureColor;
}
This is very likely caused by ill-performing OpenGL ES API calls.
You should attach a real device and do an OpenGL ES frame capture. (It really does need a real device; the frame capture option is not available with a simulator.)
The frame capture will show memory and other warnings, along with suggestions to fix them, alongside each API call. Step through these and fix each one; the performance should improve considerably.
Here are a couple of references on how to do this:
Debugging an OpenGL ES frame
Xcode tools overview

OpenGL ES performance 2.0 vs 1.1 (iPad)

In my simple 2D game I see a 2x framerate drop when using the ES 2.0 implementation for drawing. Is that expected, or should 2.0 be faster if used properly?
P.S. If you are interested in the details, I use very simple shaders:
vertex program:
uniform vec2 u_xyscale;
uniform vec2 u_st_to_uv;
attribute vec2 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_diffuse;
varying vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
v_diffuse = a_diffuse;
// convert texture coordinates from ST space to UV.
v_texcoord = a_texcoord * u_st_to_uv;
// transform XY coordinates from screen space to clip space.
gl_Position.xy = a_vertex * u_xyscale + vec2( -1.0, 1.0 );
gl_Position.zw = vec2( 0.0, 1.0 );
}
fragment program:
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
vec4 color = texture2D( t_bitmap, v_texcoord );
gl_FragColor = v_diffuse * color;
}
"color" is a mediump variable, due to the default precision that you specified. This forces the implementation to convert the lowp sampled result to mediump. It also requires that diffuse be converted to mediump to perform the multiplication.
You can fix this by declaring "color" as lowp.
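A minimal sketch of that change applied to the fragment program above (only the declaration of color changes):
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
    // lowp matches the lowp sampled result, so no mediump conversion is needed
    lowp vec4 color = texture2D( t_bitmap, v_texcoord );
    gl_FragColor = v_diffuse * color;
}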
