iOS crash: MTLRenderPassDescriptor null after rotation

I'm writing an iOS app using Metal. At some point during the MTKViewDelegate draw call, I create a render pass descriptor and render things on screen:
let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
encoder.setViewport(camera.viewport)
encoder.setScissorRect(camera.scissorRect)
At the beginning of my draw function I have a semaphore (the same code as in the Metal game template in Xcode), and then a check to verify that the view hasn't changed size. If it has, I recreate my buffers:
let w = _gBuffer?.width ?? 0
let h = _gBuffer?.height ?? 0
if let metalLayer = view.layer as? CAMetalLayer {
    let size = metalLayer.drawableSize
    if w != Int(size.width) || h != Int(size.height) {
        _gBuffer = GBuffer(device: device, size: size)
    }
}
Everything works fine, and rotation was working fine on my iPhone 6. However, when I tried on an iPad Pro, it always generates a SIGABRT when I rotate the device. The debugger tells me the encoder is null. I also get this exception in the console:
MTLDebugRenderCommandEncoder.mm:2028: failed assertion `(rect.x(1024) + rect.width(1024))(2048) must be <= 1536'
The exception must occur because I'm updating "camera" inside mtkView:
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    camera.setBounds(view.bounds)
}
When I run without the debugger attached, it doesn't crash.
I guess mtkView is called asynchronously and I should somehow stop the rendering midway through when mtkView is called, but shouldn't that mutex be in the library, not in my code? Although both draw and mtkView are being called from the same thread (Thread 1 in the debugger)... If I step through with breakpoints in draw and mtkView, I'm effectively syncing them manually and it doesn't crash. I'm a bit lost...
The full source code is here: https://github.com/endavid/VidEngine
Any ideas?

The exception message was the hint. I got distracted by the encoder being null. I guess it becomes null once the exception is thrown, but the problem wasn't in the encoder.
The code in camera.setBounds(view.bounds) wasn't updating the scissorRect...
I have a CADisplayLink that updates the CPU objects at a different rate, and the scissorRect was being updated there when it detected a change.
I've added a call to the full camera update inside mtkView() and the crash is gone now :)
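Roughly, the change looks like this; setBounds comes from the code above, and the extra call is only a stand-in for whatever the full camera update is named in VidEngine, so treat it as illustrative:
func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
    // Update everything that draw() will read, not just the bounds,
    // so the scissor rect stays consistent with the new drawable size.
    camera.setBounds(view.bounds)
    camera.updateScissorRect(for: size) // illustrative stand-in for the full camera update
}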

I was able to work around this by unchecking "Debug executable" in the scheme; that only silences the assertion from Metal's validation layer, though, and doesn't fix the underlying scissor-rect problem.
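If you want a defensive guard in addition to fixing the camera update, a clamp along these lines keeps the scissor rect inside the render target (encoder, camera and view are the names from the question; this is only a sketch, not code from the project):
var scissor = camera.scissorRect
let maxW = Int(view.drawableSize.width)
let maxH = Int(view.drawableSize.height)
// Clamp so the rect never extends past the drawable size,
// which is exactly what the validation assertion checks.
scissor.width = min(scissor.width, maxW > scissor.x ? maxW - scissor.x : 0)
scissor.height = min(scissor.height, maxH > scissor.y ? maxH - scissor.y : 0)
encoder.setScissorRect(scissor)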

Related

SceneKit physics contact method crashes with EXC_BAD_ACCESS

In a SceneKit project, the following method is intermittently (but consistently) crashing with EXC_BAD_ACCESS. Specifically, it says Thread 1: EXC_BAD_ACCESS (code=1, address=0x0).
contactTestBetween(_:_:options:)
The method is called from inside SceneKit's SCNSceneRendererDelegate method. It's also being run on the main thread because otherwise, this code crashes even more often. So, here's the greater context:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    var ball = self.scene?.rootNode.childNode(withName: "ball", recursively: true)
    var ballToFloorContact: [SCNPhysicsContact]?
    let theNodes: [SCNNode]? = self.scene?.rootNode.childNodes.filter({ $0.name?.contains("floor") == true })
    let optionalWorld: SCNPhysicsWorld? = self.scene?.physicsWorld
    DispatchQueue.main.async {
        if let allNodes = theNodes {
            for i in 0..<allNodes.count {
                let n = allNodes[i]
                if let b = n.physicsBody, let s = ball?.physicsBody {
                    ballToFloorContact = optionalWorld?.contactTestBetween(b, s)
                }
            }
        }
    }
}
The SCNSceneRendererDelegate is set in viewDidLoad:
scnView.delegate = scnView
Additional info:
When the crash occurs, optionalWorld, b, and s are all properly defined.
I originally had the call to filter located inside the DispatchQueue, but it was causing a crash that seemed identical to this one. Moving that line outside the DispatchQueue solved that problem.
Question: Any idea what might be causing this crash, and how I might avoid it? Am I doing something wrong, here?
Thanks!
UPDATE: I tried adding the following guard statement to protect against a situation where the contactTestBetween method is, itself, nil (after all, that seems to be what Xcode is telling me):
guard let optionalContactMethod = optionalWorld?.contactTestBetween else {
    return
}
However, after some additional testing time, contactTestBetween eventually crashed once again with EXC_BAD_ACCESS, this time on the guard line itself. I truly do not understand how that could be, but there it is. Note that I tried this guard approach both with and without the DispatchQueue.main.async call, with the same result.
I did two things here:
1. I added Accelerometer and Gyroscope to the UIRequiredDeviceCapabilities key in my Info.plist file. I did this because my game uses Core Motion, but I had neglected to include the necessary values.
2. On a hunch, I replaced the SCNSceneRendererDelegate method renderer(_: SCNSceneRenderer, updateAtTime: TimeInterval) with the alternative method renderer(_: SCNSceneRenderer, didRenderScene: SCNScene, atTime: TimeInterval), as sketched below.
Since doing these things, I haven't been able to reproduce a crash.
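For reference, here is a rough sketch of the second change; it just mirrors the contact test from the question inside the other delegate method, so treat it as illustrative rather than the project's exact code:
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    // Run the contact test after SceneKit has rendered the frame.
    guard let ballBody = scene.rootNode.childNode(withName: "ball", recursively: true)?.physicsBody else {
        return
    }
    let floorNodes = scene.rootNode.childNodes.filter { $0.name?.contains("floor") == true }
    for node in floorNodes {
        if let floorBody = node.physicsBody {
            let contacts = scene.physicsWorld.contactTestBetween(floorBody, ballBody, options: nil)
            if !contacts.isEmpty {
                // react to the ball touching this floor node
            }
        }
    }
}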
An alternative unsafe fix is
try! await Task.sleep(nanoseconds: 1)
somewhere before invoking contactTestBetween().
This answer sits here as a warning to anyone who might want to use async/await as a fix, or who is inadvertently using some await (any kind of await works) to break out of the run loop. It makes debugging the EXC_BAD_ACCESS nearly impossible, because the crash then triggers rarely enough to be hard to reproduce.
I know from experience that if the above appears to fix a race, the race will still happen, especially with the CPU running at 100%; in fact I got contactTestBetween to crash once after many tries even with this workaround.
I have no clue what's going on behind the scenes or how to synchronize with physicsWorld.
West1's solution is still the best if it works as advertised.

iOS CoreMotion.MotionThread EXC_BAD_ACCESS is thrown after stopDeviceMotionUpdates() is called

I have a view controller that uses CoreMotion to monitor the device's Attitude.
Here is the handler that is used in the call to startDeviceMotionUpdates():
/**
 * Receives device motion updates from Core Motion and uses the attitude of the device to update
 * the position of the attitude tracker inside the bubble level view.
 */
private func handleAttitude(deviceMotion: CMDeviceMotion?, error: Error?) {
    guard let attitude = deviceMotion?.attitude else {
        GLog.Error(message: "Could not get device attitude.")
        return
    }
    // Calculate the current attitude vector
    let roll = attitude.roll
    let pitch = attitude.pitch - optimalAngle
    let magnitude = sqrt(roll * roll + pitch * pitch)
    // Drawing can only happen on the main thread
    DispatchQueue.main.async { [weak self] in
        guard let weakSelf = self else {
            GLog.Log("could not get weak self")
            return
        }
        // Move the bubble in the attitude tracker to match the current attitude
        weakSelf.bubblePosX.constant = CGFloat(roll * weakSelf.attitudeScalar)
        weakSelf.bubblePosY.constant = CGFloat(pitch * weakSelf.attitudeScalar)
        // Set the border color based on the current attitude.
        if magnitude < weakSelf.yellowThreshold {
            weakSelf.attitudeView.layer.borderColor = weakSelf.green.cgColor
        } else if magnitude < weakSelf.redThreshold {
            weakSelf.attitudeView.layer.borderColor = weakSelf.yellow.cgColor
        } else {
            weakSelf.attitudeView.layer.borderColor = weakSelf.red.cgColor
        }
        // Do the actual drawing
        weakSelf.view.layoutIfNeeded()
    }
}
I added [weak self] to see if it would fix things, but it has not. This crash is not easy to reproduce.
When I am done with the VC that uses CoreMotion, I call stopDeviceMotionUpdates() in the VC's viewWillDisappear() method. This VC is the only class in the app that imports CoreMotion.
However, when I arrive in the next VC, occasionally I see EXC_BAD_ACCESS getting thrown on a com.apple.CoreMotion.MotionThread.
Anybody know why CoreMotion would spawn a thread after the VC that used it has been dismissed? I've verified that the VC is no longer in memory when the crash happens. And yes, the two VCs I'm dealing with are presented modally.
I've examined the memory graph, and when the crash happens, several CoreMotion objects are still being reported in memory.
I don't know if those objects should still be in memory after the instance of the CoreMotionManager has been deallocated or not. According to the memory graph, there is no instance of CoreMotionManager in memory.
The VC that imports CoreMotion also imports ARKit. Not sure if some crazy interaction between CoreMotion and ARKit is the problem.
There does seem to be something going on between the main thread and the MotionThread (thread 14), judging from their stack traces.
I'm not sure what to make of the main thread stack trace.
Sometimes when the CoreMotion VC is dismissed, I've noticed that there is a lag in the memory it uses getting released. The release always happens eventually.
Thanks to anybody who can help!
We have an ARSCNView member. We were not calling sceneView.session.pause() when we dismissed the VC that used the sceneView member. One line of code. That was it.
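In code, the fix in the dismissed VC looked roughly like this (sceneView is the ARSCNView member mentioned above; motionManager is an assumed name for the CMMotionManager instance, so treat the exact names as illustrative):
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    motionManager.stopDeviceMotionUpdates() // what we were already doing
    sceneView.session.pause()               // the one missing line
}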
Are you passing the function handleAttitude directly to startDeviceMotionUpdates,
as in startDeviceMotionUpdates(to: someQueue, withHandler: handleAttitude)?
That will set up a retain cycle between your VC and the CMMotionManager.
Try
singletonMM.startDeviceMotionUpdates(to: someQueue) { [weak self] motion, error in
    self?.handleAttitude(deviceMotion: motion, error: error)
}
to prevent a strong reference to your VC.

Affdex AFDXDetector delegate functions are never called when using the camera?

I’m having some trouble getting the Affdex iOS SDK to work with streaming input from the onboard camera. I’m using Xcode 7.1.1 and an iPhone 5S. Here’s my initialization code:
let detector = AFDXDetector.init(delegate: self, usingCamera: AFDX_CAMERA_FRONT, maximumFaces: 1)
detector.setDetectAllEmotions(true)
detector.setDetectAllExpressions(true)
detector.maxProcessRate = 5.0
detector.licensePath = NSBundle.mainBundle().pathForResource("sdk_kevin#sideapps.com", ofType: "license")
if let error = detector.start() {
    log.warning("\(error)")
}
No error is produced by detector.start() and the app requests access to the camera the first time it is called, as expected. However, none of the delegate functions are ever called. I have tested with both AFDX_CAMERA_FRONT and AFDX_CAMERA_BACK.
I am able to process single images captured by the onboard camera as expected using the following:
let detector = AFDXDetector(delegate: self, discreteImages: true, maximumFaces: 1)
detector.setDetectAllEmotions(true)
detector.setDetectAllExpressions(true)
detector.licensePath = NSBundle.mainBundle().pathForResource("sdk_kevin#sideapps.com", ofType: "license")
if let error = detector.start() {
    log.warning("\(error)")
}
detector.processImage(image)
Am I missing something obvious?
The issue appears to be the declaration of the detector variable. If you declare it inside a function, its lifetime is scoped to that function, and it is deallocated when the function exits.
Make it an instance variable of the class instead; that ties its lifetime to the object that owns it, and the delegate functions should then be called.
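A minimal sketch of that change, reusing the calls from the question (the property and the optional chaining are illustrative; only the AFDXDetector calls come from the original post):
// Declared at class scope so it outlives the function that configures it.
var detector: AFDXDetector?

// Inside the setup function:
detector = AFDXDetector.init(delegate: self, usingCamera: AFDX_CAMERA_FRONT, maximumFaces: 1)
detector?.setDetectAllEmotions(true)
detector?.setDetectAllExpressions(true)
detector?.maxProcessRate = 5.0
detector?.licensePath = NSBundle.mainBundle().pathForResource("sdk_kevin#sideapps.com", ofType: "license")
if let error = detector?.start() {
    log.warning("\(error)")
}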

iDevice crashes due to animated sprite

I'm trying to have one of my sprites play an animation when my hero hits it, but my app crashes on my iDevice, and Xcode seems to blame the animated SKAction that I made.
This is my code:
var fire: SKSpriteNode! = SKSpriteNode(imageNamed: "1") // I've set this as a universal variable
var anime = SKAction.animateWithTextures([SKTexture(imageNamed: "1"), // apparently this line causes the crash
                                           SKTexture(imageNamed: "2"),
                                           SKTexture(imageNamed: "3"),
                                           SKTexture(imageNamed: "4"),
                                           SKTexture(imageNamed: "5"),
                                           SKTexture(imageNamed: "6"), // this line is also a problem apparently
                                           SKTexture(imageNamed: "7")], timePerFrame: 0.1)
fire.runAction(SKAction.repeatActionForever(anime))
fire.size = CGSize(width: targetSprite.size.width * 1.3, height: self.frame.size.height * 0.9)
fire.position = targetSprite.position
fire.runAction(SKAction.sequence([fall, kill]))
addChild(fire)
return fire
After the crash, Xcode points to one of the two lines I've marked and says EXC_BAD_ACCESS (code=1), with no crash message.

PrintCanvas3D won't work

I'm having some trouble trying to print graphics from Java3D. Some computers (with Intel-based graphics cards) crash completely when printing. I get this exception:
javax.media.j3d.IllegalRenderingStateException: GL_VERSION
at javax.media.j3d.NativePipeline.createNewContext(Native Method)
at javax.media.j3d.NativePipeline.createNewContext(NativePipeline.java:2736)
at javax.media.j3d.Canvas3D.createNewContext(Canvas3D.java:4895)
at javax.media.j3d.Canvas3D.createNewContext(Canvas3D.java:2421)
at javax.media.j3d.Renderer.doWork(Renderer.java:895)
at javax.media.j3d.J3dThread.run(J3dThread.java:256)
DefaultRenderingErrorListener.errorOccurred:
CONTEXT_CREATION_ERROR: Renderer: Error creating Canvas3D graphics context
graphicsDevice = Win32GraphicsDevice[screen=0]
canvas = visualization.show3D.show.print.OffScreenCanvas3D[canvas0,0,0,3000x2167,invalid]
Java 3D ERROR : OpenGL 1.2 or better is required (GL_VERSION=1.1)
Java Result: 1
I know it says I have to upgrade to OpenGL 1.2, but after checking, I already have 1.5 installed (the error message is not accurate). I checked with:
String glVersion = (String)getCanvas3D().queryProperties().get("native.version");
I tried to catch IllegalRenderingStateException, but it doesn't work; the JVM just crashes anyway.
Does anyone know how to get the printing function to work on Intel-based graphics cards?
I found out the cause of my problem: some computers don't have the off-screen rendering support needed by PrintCanvas3D.java.
So I used java.awt.Robot to create a screen capture:
public BufferedImage canvasCapture(Dimension size, Point locationOnScreen) {
    Rectangle bounds = new Rectangle(locationOnScreen.x, locationOnScreen.y, size.width, size.height);
    try {
        Robot robot = new Robot(this.getGraphicsConfiguration().getDevice());
        return robot.createScreenCapture(bounds);
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
The last tricky part was to detect when to switch from the proper printing method to the screen-capture method (since catching the raised exception doesn't work). After some searching, I found out that queryProperties() could give me this information.
Here is the code in my Frame3D to choose the proper method:
Boolean OffScreenRenderingSupport = (Boolean) getCanvas3D().queryProperties().get("textureLodOffsetAvailable");
if (OffScreenRenderingSupport) {
    bImage = getOffScreenCanvas3D().doRender(dim.width, dim.height);
} else {
    bImage = getOffScreenCanvas3D().canvasCapture(getCanvas3D().getSize(), getCanvas3D().getLocationOnScreen());
}
If anyone can find a better way to handle this, please let me know ;)
