SpriteKit shaders on iOS 16 - Y-axis is inverted

My SpriteKit app, which uses several GLSL fragment shaders, shows different shader behaviour on iOS 16 than on iOS 15 and earlier. On iOS 15 the y-axis increased toward the bottom of the screen, but iOS 16 appears to have inverted this: the y-axis now increases toward the top of the screen.
The fact that this change occurs only in my fragment shaders, while SpriteKit node positioning remains unchanged between iOS 15 and 16, leads me to believe that this might be a change made in Metal 3.
Is there an elegant way to achieve consistent behaviour across iOS versions here? I would prefer not to detect the user's iOS version and supply a shader uniform to invert the y-axis manually on a per-shader basis.

Update 9.15.2022
Based on the comments, it looks like this is indeed a bug (95579020) and an Apple Engineer confirmed that a fix has been identified (https://developer.apple.com/forums/thread/713945).
However, the SKShader issue doesn't seem to happen when building on an Apple silicon laptop with Xcode 14.0. Not sure about the CIFilter issue.
Original
Are you using v_tex_coord? I have tested this, and I don't see a difference between iOS 15 and iOS 16 when using v_tex_coord. There must be something else going on.
Note: I have tested this on the iOS 16.0 simulator, not on a real device.
The documentation also still mentions the bottom-left corner:
vec2 v_tex_coord; (varying) The coordinates used to access the texture. These coordinates are normalized so that the point (0.0, 0.0) is in the bottom-left corner of the texture.
The results are the same on both iOS versions using the SKShader code below and v_tex_coord.y:
void main() {
    vec3 red = vec3(1.0, 0.0, 0.0);
    vec3 blue = vec3(0.0, 0.0, 1.0);
    vec3 col = mix(red, blue, v_tex_coord.y);
    gl_FragColor = vec4(col, 1.0);
}
[Screenshots: the red-to-blue gradient renders identically on iOS 15.0 and iOS 16.0]

Related

Dissolve SKShader works as expected on simulator, strange behaviour on actual device

I encountered weird behaviour when trying to create a dissolve shader for iOS SpriteKit. I have this basic shader that, for now, only changes the alpha of a texture depending on the black value of a noise texture:
let shader = SKShader(source: """
    void main() {
        vec4 colour = texture2D(u_texture, v_tex_coord);
        float noise = texture2D(noise_tex, v_tex_coord).r;
        gl_FragColor = colour * noise;
    }
    """, uniforms: [
        SKUniform(name: "noise_tex", texture: spriteSheet.textureNamed("dissolve_noise"))
])
Note that this code is called in the spriteSheet preload callback.
On the simulator this consistently gives the expected result, i.e. a texture with different alpha values all over the place. On an actual device running 14.5.1, it varies:
1. Applied directly to an SKSpriteNode - it makes the whole texture semi-transparent with a single value
2. Applied to an SKEffectNode with the SKSpriteNode as its child - I see a miniaturized part of the whole spritesheet
3. Same as 2, but the texture is created from an image outside the spritesheet - it works as on the simulator (and as expected)
Why does it behave like this? Considering this needs to work on iOS 9 devices, I'm worried 3 won't work everywhere. So I'd like to understand why it happens, and ideally find a sure way to force 1, or at least 2, to work on all devices.
After some more testing I finally figured out what is happening: on devices, the textures in the shader are whole spritesheets instead of separate textures, so the coordinates go all over the place (which actually makes more sense than the simulator behaviour, now that I think about it).
So depending on whether I want 1 or 2, I need to apply different maths. 2 is easier: since the display texture is first rendered onto a buffer, v_tex_coord will span the full [0.0, 1.0] range, so all I need is the noise texture's rect to do the appropriate transform. For 1, I additionally need the display texture's rect, to first map its coordinates into [0.0, 1.0] myself and then apply that to the noise coordinates.
This will work both with spritesheets loaded into the shader and with separate images; in the latter case it just does some unnecessary calculations.
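Both remaps are single affine transforms of a normalized sub-rectangle. A minimal C sketch of the maths, assuming a rect like the one `SKTexture.textureRect()` reports (the `Rect` type and function names are illustrative, not any real API):

```c
#include <assert.h>
#include <math.h>

/* Normalized sub-rectangle of a sprite inside its atlas. */
typedef struct { float x, y, w, h; } Rect;

/* Case 1: v_tex_coord spans the whole spritesheet, so first map the
   atlas-space coordinate back into the sprite's own [0, 1] range. */
static void atlas_to_local(Rect r, float u, float v, float *lu, float *lv) {
    *lu = (u - r.x) / r.w;
    *lv = (v - r.y) / r.h;
}

/* Case 2: v_tex_coord is already in [0, 1]; map it into the noise
   texture's sub-rectangle before sampling the noise atlas. */
static void local_to_atlas(Rect r, float lu, float lv, float *u, float *v) {
    *u = r.x + lu * r.w;
    *v = r.y + lv * r.h;
}
```

For case 1 the two functions compose: atlas_to_local with the display texture's rect, then local_to_atlas with the noise texture's rect.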

What happened to glDepthRange in OpenGL ES 2.0 for iOS?

I used glDepthRange(1.0, 0.0) in a Mac OS X program to give myself a right-handed coordinate system. Apparently I don't have that option with iOS using OpenGL ES 2.0. Is there a quick fix so that higher z-values show up in front, or do I have to rework all of my math?
Well, you can try glDepthFunc. The default value is GL_LESS; if you use GL_GREATER, pixels with higher z values will be rendered. (Remember to also clear the depth buffer to 0.0 rather than the default 1.0, e.g. with glClearDepthf(0.0f), or nothing will pass the test.)
glDepthFunc(GL_GREATER);
Alternatively, you can add this line to your vertex shader:
gl_Position.z = -gl_Position.z;

Occasional missing polygons drawing a sky sphere (at far plane)

I am drawing a sky sphere as the background for a 3D view. Occasionally, when navigating around the view, there is a visual glitch that pops in:
Example of the glitch: a black shape where rendering has apparently not placed fragments onscreen
Black is the colour the device is cleared to at the beginning of each frame.
The shape of the black area is different each time, and is sometimes visibly many polygons. They are always centred around a common point, usually close to the centre of the screen
Repainting without changing the navigation (eye position and look) doesn't make the glitch vanish, i.e. it does seem to be dependent on specific navigation
The moment navigation is changed, even an infinitesimal amount, it vanishes and the sky draws solidly. The vast majority of painting is correct. Eventually as you move around you will spot another glitch.
Changing the radius of the sphere (to, say, 0.9 of the near/far plane distance) doesn't seem to remove the glitches
Changing Z-buffer writing or the Z test in the effect technique makes no difference
There is no DX debug output (when running with the debug version of the runtime, maximum validation, and shader debugging enabled.)
What could be the cause of these glitches?
I am using Direct3D9 (June 2010 SDK), shaders compiled to SM3, and the glitch has been observed on ATI cards and VMWare Fusion virtual cards on Windows 7 and XP.
Example code
The sky is being drawn as a sphere (error-checking etc. removed from the code below):
To create
const float fRadius = GetScene().GetFarPlane() - GetScene().GetNearPlane()*2;
D3DXCreateSphere(GetScene().GetDevicePtr(), fRadius, 64, 64, &m_poSphere, 0);
Changing the radius doesn't seem to affect the presence of glitches.
Vertex shader
OutputVS ColorVS(float3 posL : POSITION0, float4 c : COLOR0) {
    OutputVS outVS = (OutputVS)0;
    // Center around the eye
    posL += g_vecEyePos;
    // Transform to homogeneous clip space.
    outVS.posH = mul(float4(posL, 1.0f), g_mWorldViewProj).xyzw; // Always on the far plane
    return outVS;
}
Pixel shader
Doesn't matter; even one outputting a solid colour will glitch:
float4 ColorPS(float altitude : COLOR0) : COLOR {
    return float4(1.0, 0.0, 0.0, 1.0);
}
The same image with a solid-colour pixel shader, to be certain the PS isn't the cause of the problem
Technique
technique BackgroundTech {
    pass P0 {
        // Specify the vertex and pixel shader associated with this pass.
        vertexShader = compile vs_3_0 ColorVS();
        pixelShader = compile ps_3_0 ColorPS();
        // The sky is visible from inside, so the cull mode is inverted (clockwise).
        CullMode = CW;
    }
}
I tried adding in state settings affecting the depth, such as ZWriteEnabled = false. None made any difference.
The problem is almost certainly caused by far-plane clipping. If changing the sphere's radius a bit doesn't help, then the sphere's position may be wrong.
Make sure you're properly initializing the g_vecEyePos constant (maybe you've misspelled it in one of the DirectX SetShaderConstant calls?).
Also, if you've included the translation to the eye's position in the world matrix of g_mWorldViewProj, you shouldn't also do posL += g_vecEyePos; in your vertex shader, because that moves each vertex by twice the eye's position.
In other words, you should choose one of these options:
1. g_mWorldViewProj = mCamView * mCamProj; and keep posL += g_vecEyePos; in the shader
2. g_mWorldViewProj = MatrixTranslation(g_vecEyePos) * mCamView * mCamProj; and drop the addition from the shader
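The double-translation bug is easy to convince yourself of with plain vector maths; a minimal C sketch (function names are illustrative, not DirectX API):

```c
#include <assert.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 add(Vec3 a, Vec3 b) {
    return (Vec3){ a.x + b.x, a.y + b.y, a.z + b.z };
}

/* Option 1: only the vertex shader adds the eye position. */
static Vec3 shader_only(Vec3 posL, Vec3 eye) {
    return add(posL, eye);
}

/* The bug: the shader adds the eye position AND the world matrix
   translates by it, so the vertex ends up at posL + 2 * eye. */
static Vec3 shader_and_matrix(Vec3 posL, Vec3 eye) {
    return add(add(posL, eye), eye);
}
```

With the eye far from the origin, the doubled offset pushes part of the sphere past the far plane, which is consistent with the clipped-polygon glitches.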

Profiling OpenGL ES app on iOS

I'm looking at a game I'm working on in the "OpenGL ES Driver" template in Instruments. The sampler shows that I'm spending nearly all my time in a function called gfxIODataGetNewSurface, with a call tree that looks like this:
gfxIODataGetNewSurface
  gliGetNewIOSurfaceES
    _ZL29native_window_begin_iosurfaceP23_EAGLNativeWindowObject
      usleep
        __semwait_signal
The game is only getting about 40 FPS (on an iPhone 4) under what I don't believe is a heavy workload, which makes me think I'm doing something pathological in my OpenGL code.
Does anyone know what gliGetNewIOSurfaceES/gfxIODataGetNewSurface is doing, and what it indicates is happening in my app? Is it constantly creating new renderbuffers or something?
EDIT: New info...
I've discovered that with the following pixel shader:
varying vec2 texcoord;
uniform sampler2D sampler;
const vec4 color = vec4(...);

void main()
{
    gl_FragColor = color * texture2D(sampler, texcoord);
}
If I change the const 'color' to a #define, the Renderer Utilization drops from 75% to 35% when drawing a full-screen (960x640) sprite to the screen. Really I want this color to be an interpolated 'varying' quantity from the vertex shader, but if making it a global constant kills performance I can't imagine there's any hope that the 'varying' version would be any better.

iOS Simulator GL_OES_standard_derivatives

On iOS 4, GL_OES_standard_derivatives is only supported on the device (from what I see when I output the extensions). Is there a way to:
Detect in the fragment shader whether the extension is supported or not
If not supported, find code for dFdx and dFdy? I can't seem to find anything on Google.
TIA!
I had the same issue antialiasing SDF fonts. You can calculate a similar dFdx/dFdy by transforming two 2D vectors using the current transform matrix:
vec2 p1(0,0);
vec2 p2(1,1);
p1 = TransformUsingCurrentMatrix(p1);
p2 = TransformUsingCurrentMatrix(p2);
float magic = 35; // you'll need to play with this - it's linked to screen size I think :P
float dFdx = (p2.x - p1.x) / magic;
float dFdy = (p2.y - p1.y) / magic;
then send dFdx/dFdy to your shader as uniforms, and simply multiply with your parameter to get the same functionality, i.e.:
dFdx(myval) becomes dFdx * myval
dFdy(myval) becomes dFdy * myval
fwidth(myval) becomes abs(dFdx * myval) + abs(dFdy * myval)
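This approximation can be sanity-checked on the CPU; a minimal C sketch with an identity transform standing in for TransformUsingCurrentMatrix (the magic scale is the answer's own fudge factor, not a derived constant):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y; } Vec2;

/* Stand-in for TransformUsingCurrentMatrix; a real app would apply
   the current modelview matrix here. */
static Vec2 transform_identity(Vec2 p) { return p; }

/* Constant per-frame derivative estimates, as described above. */
static void approx_derivatives(float magic, float *dFdx, float *dFdy) {
    Vec2 p1 = transform_identity((Vec2){ 0.0f, 0.0f });
    Vec2 p2 = transform_identity((Vec2){ 1.0f, 1.0f });
    *dFdx = (p2.x - p1.x) / magic;
    *dFdy = (p2.y - p1.y) / magic;
}

/* fwidth(v) is then approximated as |dFdx * v| + |dFdy * v|. */
static float approx_fwidth(float dFdx, float dFdy, float v) {
    return fabsf(dFdx * v) + fabsf(dFdy * v);
}
```

Note the trade-off: unlike the real dFdx/dFdy, these values are constant across the frame, so the approximation is only good when the parameter varies roughly linearly in screen space.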