My macOS version: macOS Mojave, Version 10.14.4
My iPhone device: iPhone X, iOS 11.2.2
My Xcode version: Xcode 10.2.1
In my scheme -> Options:
1. GPU Frame Capture: Automatically Enabled
2. Metal API Validation: Enabled
When I try to capture a GPU frame to debug a Metal shader, the shader debugger icon is grayed out, with a message saying that the shader debugger is not supported and that I should update my operating system to support it.
I want to know whether Xcode supports the shader debugger on an iPhone. If it does, how do I use it?
Thank you!
Running React Native with (Expo's) GLView has very poor performance in the iOS Simulator, making it practically unusable for developing your application.
My setup:
"expo": "~47.0.8",
"expo-gl": "~12.0.0",
"expo-three": "^7.0.0",
"react": "18.1.0",
"react-native": "0.70.5",
"three": "~0.145.0"
Running a simple rotating cube scene renders at only a few FPS on an M1 Mac in the iOS Simulator (iPhone 14 Pro, iOS 16.1).
The same project runs flawlessly in the Android emulator (Android 13 SDK, API 33, Pixel device).
What can be done to speed up the rendering?
TL;DR:
Disable the antialiasing that is enabled by default for iOS devices in the GLView component.
When creating the GLView, pass the msaaSamples property and set it to 0 (the default is 4):
<GLView
  msaaSamples={0}
  onContextCreate={onContextCreate}
/>
This brings the performance close to the Android emulator.
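For reference, here is a minimal self-contained sketch of a component using this (the component name, clear color, and clear-only render loop are illustrative assumptions, not part of the original project):

import * as React from 'react';
import { GLView } from 'expo-gl';
import type { ExpoWebGLRenderingContext } from 'expo-gl';

export default function CubeScreen() {
  const onContextCreate = (gl: ExpoWebGLRenderingContext) => {
    // Minimal render loop: clear the screen every frame.
    const frame = () => {
      gl.clearColor(0.1, 0.1, 0.1, 1);
      gl.clear(gl.COLOR_BUFFER_BIT);
      gl.endFrameEXP(); // expo-gl call that presents the finished frame
      requestAnimationFrame(frame);
    };
    frame();
  };

  return (
    // msaaSamples={0} disables the 4x MSAA that iOS gets by default
    <GLView
      style={{ flex: 1 }}
      msaaSamples={0}
      onContextCreate={onContextCreate}
    />
  );
}

In a real scene you would replace the clear-only loop with your three.js / expo-three renderer; the msaaSamples prop is passed the same way.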
Why does this help?
I'm just guessing that it's a memory issue. Mobile Retina displays already push resolutions in the 2K range on their own. Adding 4x multisample anti-aliasing (MSAA) quadruples the amount of video memory needed to store one frame.
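As a rough back-of-the-envelope illustration (the resolution and bytes-per-pixel figures below are assumptions for a plain RGBA8 color buffer, not measured values):

// Illustrative math only, not measured numbers.
const width = 1179;              // iPhone 14 Pro native width in pixels
const height = 2556;             // iPhone 14 Pro native height in pixels
const bytesPerPixel = 4;         // RGBA8
const baseMB = (width * height * bytesPerPixel) / (1024 * 1024); // ~11.5 MB per frame
const msaa4xMB = baseMB * 4;                                     // ~46 MB with 4x MSAA
console.log(baseMB.toFixed(1), msaa4xMB.toFixed(1));

Whatever the exact mechanism in the Simulator, dropping that 4x multiplier removes a lot of per-frame work.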
Note: Feel free to add comments on how to squeeze more performance out of it, and I can incorporate them into this answer.
SceneKit uses OpenGL for SCNView when simulating an iPhone 6 in the Simulator,
which causes the lighting model to fall back from PBR to Phong:
Error: Physically based lighting model is not supported by the OpenGL
renderer, using Phong instead
Is there a way to fix it?
Newer iPhones render the model correctly in SceneKit with PBR.
SceneKit uses Metal in the Simulator since Xcode 11 running on macOS Catalina. On earlier versions of the system, only OpenGL was supported.
In Xcode 11, Simulator adds support for Metal development. You can write iOS and tvOS apps that use Metal and test them in the Simulator, gaining the benefits of hardware acceleration on the Mac during development of your app. If you use frameworks built on top of Metal, such as SceneKit, Core Animation, and UIKit, you'll also see better performance when testing your apps in Simulator.
(https://developer.apple.com/documentation/metal/developing_metal_apps_that_run_in_simulator)
As per the title, the Simulator is super slow, so my question is: is there any way to improve OpenGL performance in the iOS Simulator?
The Simulator supports GPU acceleration for Metal starting with iOS 13 on macOS Catalina. OpenGL ES is still software rendered and is likely to remain that way.
I strongly recommend moving to Metal.
The Xcode -> GPU frame capture -> GPU shader profiler doesn't work. I run a frame capture and open the Metal compute shader source code in the GPU shader profiler, but I can't see a per-line performance profile on the shader code like the one shown in the "GPU shader profiler" screenshot.
Set your project's "iOS Deployment Target" to a higher version. I had this issue, and in my case it was set to 9.x. When I set it to 11.4, I could see the percentages on the source lines.
Screenshot on iPhone 6s:
Unity3D version: 5.3.5f1
Texture format: TrueColor (not compressed)
iOS device: iPhone 6s, iOS 9.3.4
This issue has only occurred on our trunk branch recently; other branches do not have it.
What we are sure about:
Shaders are correct.
Meshes and texture coordinates (UVs) of each vertex are correct.
What the Unity Profiler also confirms:
Memory in use is not too large (compared with other branches)
By using GPU Frame Capture, I found that the texture is incorrect in the captured frame:
GPU Frame Capture:
Has anyone ever solved this issue?