I have a renderer class in my Metal Swift (iOS/macOS) project that is an MTKViewDelegate. I obtain the MTLDevice using MTLCreateSystemDefaultDevice(), but after init it appears to become nil. I wonder if I've missed a quirk of Swift or Metal here. This is roughly how the code goes:
class Renderer: NSObject, MTKViewDelegate {
    var device: MTLDevice!
    init(metalView: MTKView) {
        guard let device = MTLCreateSystemDefaultDevice() else {
            fatalError("GPU not available")
        }
        metalView.device = device
        if device != nil {
            print("device not nil")
        }
    }
    func draw(in view: MTKView) {
        if device == nil {
            print("device is nil here")
        }
    }
}
In my ViewController I do:
guard let metalView = view as? MTKView else {
    fatalError("Metal View not setup")
}
renderer = Renderer(metalView: metalView)
What I see happen is:
device not nil
device is nil here
device is nil here
device is nil here
device is nil here
at 60 Hz, on every draw call.
EDIT: Edited code to make it clear that the device actually is being assigned to a variable in the global scope (metalView).
As per your code, you are not assigning the local device to the implicitly unwrapped device property; the guard let only creates a local constant inside init. Assign the local device to the property to fix the issue:
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("GPU not available")
}
self.device = device
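To make the fix concrete, here is a minimal sketch of the corrected initializer (my sketch, assuming the rest of the class is unchanged; the metalView.delegate wiring is my assumption about the surrounding setup):
init(metalView: MTKView) {
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("GPU not available")
    }
    metalView.device = device
    self.device = device // store on the instance so draw(in:) sees it
    super.init()
    metalView.delegate = self // assumption: the delegate is wired up here
}
With this change, draw(in:) reads the instance property instead of a long-gone local, so the "device is nil here" prints stop.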
I am using OpenVidu for calls in my iOS app and it works fine. But when I close a call, the app does not release the microphone; an orange dot keeps showing in the status bar. To close the call I am using the same code the OpenVidu team provided, but it did not work.
This is the code provided by the OpenVidu team:
override func dispose() {
    super.dispose()
    if (videoTrack != nil) {
        self.videoTrack?.remove(self.renderer!)
    }
    self.videoTrack = nil
}
In the super.dispose() method they close the peer connection:
func dispose() { // This method is called from the previous method as super.dispose()
    peerConnection?.close()
    peerConnection = nil
}
With this code neither the camera nor the microphone was released, so I added code to stop capture in the dispose method. That started releasing the camera, but the microphone issue remained.
override func dispose() {
    super.dispose()
    if let videoCapturer = self.videoCapturer as? RTCCameraVideoCapturer {
        videoCapturer.stopCapture()
    }
    if (videoTrack != nil) {
        self.videoTrack?.remove(self.renderer!)
    }
    self.videoTrack = nil
}
To resolve the microphone issue I tried several approaches and one of them worked. But the problem is that after releasing the microphone, the next call cannot get the microphone and has no audio. The code I used is:
override func dispose() {
    super.dispose()
    if let videoCapturer = self.videoCapturer as? RTCCameraVideoCapturer {
        videoCapturer.stopCapture()
    }
    self.videoCapturer = nil
    if (videoTrack != nil && self.renderer != nil) {
        self.videoTrack?.remove(self.renderer!)
    }
    self.renderer = nil
    self.videoTrack = nil
    self.audioTrack = nil
    // This code releases the microphone, but next time the microphone cannot be used
    self.rtcAudioSession.lockForConfiguration()
    do {
        try self.rtcAudioSession.setActive(false)
    } catch let error {
        debugPrint("Error deactivating AVAudioSession: \(error)")
    }
    self.rtcAudioSession.unlockForConfiguration()
}
Any help would be appreciated.
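Not a verified fix, but one direction that may be worth trying (a sketch assuming the stock WebRTC iOS SDK that OpenVidu wraps): RTCAudioSession supports manual audio management via useManualAudio/isAudioEnabled, so instead of only deactivating the session on dispose, disable WebRTC audio on teardown and re-activate the session before the next call:
// Once, at call-stack setup (assumption: stock WebRTC iOS SDK):
// RTCAudioSession.sharedInstance().useManualAudio = true

func releaseAudio() { // call from dispose()
    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    session.isAudioEnabled = false // stop WebRTC's audio unit first
    do {
        try session.setActive(false) // releases the mic; the orange dot should go away
    } catch let error {
        debugPrint("Error deactivating AVAudioSession: \(error)")
    }
    session.unlockForConfiguration()
}

func reacquireAudio() { // call before starting the next call
    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    do {
        try session.setActive(true) // re-activate so the next call can use the mic
    } catch let error {
        debugPrint("Error activating AVAudioSession: \(error)")
    }
    session.isAudioEnabled = true
    session.unlockForConfiguration()
}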
I am able to detect when an external screen is connected, associate it with the appropriate window scene and add a view to it. Slightly hacky but approximately working (code for disconnection not included here), thanks to this SO question:
class ExternalViewController: UIViewController {
    weak var mainVC: ViewController? // referenced below; without this property, vc.mainVC = self won't compile
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .cyan
        print("external frame \(view.frame.width)x\(view.frame.height)")
    }
}
class ViewController: UIViewController {
    var additionalWindows: [UIWindow] = []
    override func viewDidLoad() {
        super.viewDidLoad()
        // nb, Apple documentation seems out of date.
        // https://stackoverflow.com/questions/61191134/setter-for-screen-was-deprecated-in-ios-13-0
        NotificationCenter.default.addObserver(forName: UIScreen.didConnectNotification, object: nil, queue: nil) { [weak self] notification in
            guard let self = self else { return }
            guard let newScreen = notification.object as? UIScreen else { return }
            // Give the system time to update the connected scenes
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.05) {
                // Find the matching UIWindowScene
                let matchingWindowScene = UIApplication.shared.connectedScenes.first {
                    guard let windowScene = $0 as? UIWindowScene else { return false }
                    return windowScene.screen == newScreen
                } as? UIWindowScene
                guard let connectedWindowScene = matchingWindowScene else {
                    NSLog("--- Connected scene was not found ---")
                    return
                    //fatalError("Connected scene was not found") // You might want to retry here after some time
                }
                let screenDimensions = newScreen.bounds
                let newWindow = UIWindow(frame: screenDimensions)
                NSLog("newWindow \(screenDimensions.width)x\(screenDimensions.height)")
                newWindow.windowScene = connectedWindowScene
                let vc = ExternalViewController()
                vc.mainVC = self
                newWindow.rootViewController = vc
                newWindow.isHidden = false
                self.additionalWindows.append(newWindow)
            }
        }
    }
}
When I do this in the iOS simulator, I see my graphics fill the screen as intended, but when running on my actual device, it appears with a substantial black border around all sides.
Note that this is not the usual border seen with the default display-mirroring behaviour: the 16:9 aspect ratio is preserved, and I do see different graphics as expected (a flat cyan colour in my example code; normally I'm doing some Metal rendering that has slight anomalies that are out of scope here, although digging into them might turn up different clues).
The print messages report the expected 1920x1080 dimensions. I don't know UIKit very well and haven't been doing much active Apple development (I'm dusting off a couple of old side projects in the hope of using them to project visuals at a gig in the near future), so I don't know if there's something to do with sizing constraints etc. that I might be missing; even so, it's hard to see why it would behave differently in the simulator.
Other apps I have installed from the App Store do show fullscreen graphics on the external display: Netflix shows fullscreen video as you would expect, and Concepts shows a different representation of the document from the one you see on the device.
So, in this instance the issue is to do with Overscan Compensation. Thanks to Jerrot on Discord for pointing me in the right direction.
In the context of my app, it is sufficient to add newScreen.overscanCompensation = .none in the connection notification handler (actually, in the part that is delayed a few ms after it; it doesn't work if applied directly in the connection notification). In the question linked above, there is further discussion of other aspects that may be important in a different context.
This is my ViewController modified to achieve the desired result:
class ViewController: UIViewController {
    var additionalWindows: [UIWindow] = []
    override func viewDidLoad() {
        super.viewDidLoad()
        // nb, Apple documentation seems out of date.
        // https://stackoverflow.com/questions/61191134/setter-for-screen-was-deprecated-in-ios-13-0
        NotificationCenter.default.addObserver(forName: UIScreen.didConnectNotification, object: nil, queue: nil) { [weak self] notification in
            guard let self = self else { return }
            guard let newScreen = notification.object as? UIScreen else { return }
            // Give the system time to update the connected scenes
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.05) {
                // Find the matching UIWindowScene
                let matchingWindowScene = UIApplication.shared.connectedScenes.first {
                    guard let windowScene = $0 as? UIWindowScene else { return false }
                    return windowScene.screen == newScreen
                } as? UIWindowScene
                guard let connectedWindowScene = matchingWindowScene else {
                    NSLog("--- Connected scene was not found ---")
                    return
                    //fatalError("Connected scene was not found") // You might want to retry here after some time
                }
                let screenDimensions = newScreen.bounds
                ////// new code here --->
                newScreen.overscanCompensation = .none
                //////
                let newWindow = UIWindow(frame: screenDimensions)
                NSLog("newWindow \(screenDimensions.width)x\(screenDimensions.height)")
                newWindow.windowScene = connectedWindowScene
                let vc = ExternalViewController()
                vc.mainVC = self
                newWindow.rootViewController = vc
                newWindow.isHidden = false
                self.additionalWindows.append(newWindow)
            }
        }
    }
}
In this day and age, I find it pretty peculiar that overscan compensation is enabled by default.
I have the following protocols:
protocol Device {
}
protocol ActiveDevice: Device {
}
protocol NoActive: Device {
}
ViewController:
class ViewController: UIViewController {
    var device: Device! // was `let device: Device`, which cannot be assigned after init
}
Setting the device for the ViewController, where currentDevice is an object that conforms to the Device protocol:
vc.device = currentDevice as! ActiveDevice
Checking if it conforms to the protocol:
if let currentDevice = device as? NoActive {
    print("Its not active device")
} else if let currentDevice = device as? ActiveDevice {
    print("Its active device")
} else {
    print("Its just a device")
}
It always prints "Its not active device", whereas I would expect it to print "Its active device" in this case.
Please check the following code and let me know if this helps.
protocol Device {
}
protocol ActiveDevice: Device {
}
protocol NoActive: Device {
}
// class TestDevice: Device {
// class TestDevice: ActiveDevice {
class TestDevice: NoActive {
}
let currentDevice = TestDevice()
// let device: Device = currentDevice as! ActiveDevice
(It threw an error: "Could not cast value of type '__lldb_expr_9.TestDevice' (0x11a2f9090) to '__lldb_expr_9.ActiveDevice' (0x11a6d0628)".) We cannot do this, because TestDevice does not conform to ActiveDevice.
let device: Device = currentDevice
if device is NoActive {
    print("Its not active device")
} else if device is ActiveDevice {
    print("Its active device")
} else {
    print("Its just a device")
}
Now the output is "Its not active device". After changing TestDevice to conform to ActiveDevice instead, it printed "Its active device", and so on: the branch that runs simply reflects which protocol the concrete type actually adopts.
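A side note beyond the original answer: is/as? tests the concrete type's conformances at runtime, and nothing stops a type from adopting both protocols; in that case the order of the branches decides which message prints. A small self-contained sketch (AmbiguousDevice is a hypothetical name of mine):
protocol Device {}
protocol ActiveDevice: Device {}
protocol NoActive: Device {}

// A type may adopt both protocols.
class AmbiguousDevice: NoActive, ActiveDevice {}

let device: Device = AmbiguousDevice()
// The first matching branch wins, so this prints "Its not active device"
// even though the type also conforms to ActiveDevice.
if device is NoActive {
    print("Its not active device")
} else if device is ActiveDevice {
    print("Its active device")
} else {
    print("Its just a device")
}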
I am having an issue creating a capture session in a custom UIView. I set the delegate like this:
class Camera: UIView, AVCaptureFileOutputRecordingDelegate, AVAudioRecorderDelegate {
}
and then I set everything up and set the delegate like this:
self.recordingDelegate? = self
captureSession.sessionPreset = AVCaptureSessionPresetHigh
let devices = AVCaptureDevice.devices()
for device in devices {
    if (device.hasMediaType(AVMediaTypeVideo)) {
        if (device.position == AVCaptureDevicePosition.Back) {
            captureDevice = device as? AVCaptureDevice
            if captureDevice != nil {
                beginSession()
            }
        }
    }
}
and all goes well. However, in the beginSession function:
func beginSession() {
    let err: NSError? = nil
    do {
        self.captureSession.addInput(try AVCaptureDeviceInput(device: self.captureDevice!))
    }
    catch {
        print("dang")
    }
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    ...
The catch is thrown when I try to add the capture device input, therefore it is not being added, and I cannot figure out why.
All of the code I am currently using was working fine when I had it inside a UIViewController, but when I switched it over to a subclass of UIView it stopped working. Any help would be appreciated; if more code is needed, let me know. Thank you!
I figured it out: the iOS device I was using did not have the camera enabled for some reason, therefore the input could not be added, which made the preview layer unable to capture any data.
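For anyone else hitting that catch: logging the actual error instead of a placeholder usually reveals the reason (camera restricted, missing permission, and so on). A minimal sketch using the throwing initializer; the helper name and the canAddInput check are my additions, not from the original code:
import AVFoundation

func addCameraInput(_ captureDevice: AVCaptureDevice, to captureSession: AVCaptureSession) {
    do {
        let input = try AVCaptureDeviceInput(device: captureDevice)
        if captureSession.canAddInput(input) {
            captureSession.addInput(input)
        } else {
            print("capture session refused the input")
        }
    } catch {
        // Log the real reason rather than "dang"
        print("error creating device input: \(error.localizedDescription)")
    }
}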
Fatal error: unexpectedly found nil while unwrapping an Optional value (lldb)
This message appears because my variable is set to nil while the code expects it not to be nil. But I don't have a solution; when I remove the question mark from the casting and assignment, other errors happen.
Thread 1: a fatal error with a green highlighted line at if deviceInput == nil!, and another error with a green highlighted line at the beginSession() call.
The app starts and the camera torch gets turned on automatically as per my code, but then the app crashes there. The app stays stuck on the launch screen with the torch still on.
Could you please look at how the camera session is set up and tell me what's wrong? Thanks.
import UIKit
import Foundation
import AVFoundation
import CoreMedia
import CoreVideo

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    let captureSession = AVCaptureSession()
    var captureDevice: AVCaptureDevice?
    var validFrameCounter: Int = 0

    // for sampling from the camera
    enum CurrentState {
        case statePaused
        case stateSampling
    }
    var currentState = CurrentState.statePaused

    override func viewDidLoad() {
        super.viewDidLoad()
        captureSession.sessionPreset = AVCaptureSessionPresetHigh
        let devices = AVCaptureDevice.devices()
        // Loop through all the capture devices on this phone
        for device in devices {
            // Make sure this particular device supports video
            if (device.hasMediaType(AVMediaTypeVideo)) {
                // Finally check the position and confirm we've got the back camera
                if (device.position == AVCaptureDevicePosition.Back) {
                    captureDevice = device as? AVCaptureDevice
                    if captureDevice != nil {
                        //println("Capture device found")
                        beginSession() // fatal error
                    }
                }
            }
        }
    }

    // configure device for camera and focus mode
    // start capturing frames
    func beginSession() {
        // Create the AVCapture Session
        var err: NSError? = nil
        captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &err))
        if err != nil {
            println("error: \(err?.localizedDescription)")
        }
        // Automatic Switch ON torch mode
        if captureDevice!.hasTorch {
            // lock your device for configuration
            captureDevice!.lockForConfiguration(nil)
            // check if your torchMode is on or off. If on turns it off otherwise turns it on
            captureDevice!.torchMode = captureDevice!.torchActive ? AVCaptureTorchMode.Off : AVCaptureTorchMode.On
            // sets the torch intensity to 100%
            captureDevice!.setTorchModeOnWithLevel(1.0, error: nil)
            // unlock your device
            captureDevice!.unlockForConfiguration()
        }
        // Create a AVCaptureInput with the camera device
        var deviceInput: AVCaptureInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as! AVCaptureInput
        if deviceInput == nil! { // fatal error: unexpectedly found nil while unwrapping an Optional value (lldb)
            println("error: \(err?.localizedDescription)")
        }
        // Set the output
        var videoOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
        // create a queue to run the capture on
        var captureQueue: dispatch_queue_t = dispatch_queue_create("captureQueue", nil)
        // set ourselves up as the capture delegate
        videoOutput.setSampleBufferDelegate(self, queue: captureQueue)
        // configure the pixel format
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]
        // kCVPixelBufferPixelFormatTypeKey is a CFString btw.
        // set the minimum acceptable frame rate to 10 fps
        captureDevice!.activeVideoMinFrameDuration = CMTimeMake(1, 10)
        // and the size of the frames we want - we'll use the smallest frame size available
        captureSession.sessionPreset = AVCaptureSessionPresetLow
        // Add the input and output
        captureSession.addInput(deviceInput)
        captureSession.addOutput(videoOutput)
        // Start the session
        captureSession.startRunning()
        // sampling from the camera
        currentState = CurrentState.stateSampling
        // stop the app from sleeping
        UIApplication.sharedApplication().idleTimerDisabled = true
        // update our UI on a timer every 0.1 seconds
        NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: Selector("update"), userInfo: nil, repeats: true)
    }

    func setState(state: CurrentState) {
        switch state {
        case .statePaused:
            // what goes here? Something like this?
            UIApplication.sharedApplication().idleTimerDisabled = false
        case .stateSampling:
            // what goes here? Something like this?
            UIApplication.sharedApplication().idleTimerDisabled = true
        }
    }

    func stopCameraCapture() {
        captureSession.stopRunning()
    }

    // pragma mark Pause and Resume of detection
    func pause() {
        if currentState == CurrentState.statePaused {
            return
        }
        // switch off the torch
        if captureDevice!.isTorchModeSupported(AVCaptureTorchMode.On) {
            captureDevice!.lockForConfiguration(nil)
            captureDevice!.torchMode = AVCaptureTorchMode.Off
            captureDevice!.unlockForConfiguration()
        }
        currentState = CurrentState.statePaused
        // let the application go to sleep if the phone is idle
        UIApplication.sharedApplication().idleTimerDisabled = false
    }

    func resume() {
        if currentState != CurrentState.statePaused {
            return
        }
        // switch on the torch
        if captureDevice!.isTorchModeSupported(AVCaptureTorchMode.On) {
            captureDevice!.lockForConfiguration(nil)
            captureDevice!.torchMode = AVCaptureTorchMode.On
            captureDevice!.unlockForConfiguration()
        }
        currentState = CurrentState.stateSampling
        // stop the app from sleeping
        UIApplication.sharedApplication().idleTimerDisabled = true
    }
}
Looking at your code, you should really try to get out of the habit of force-unwrapping optionals using ! at any opportunity, especially just to "make it compile". Generally speaking, if you ever find yourself writing if something != nil, there's probably a better way to write what you want. Try looking at the examples in this answer for some idioms to copy. You might also find this answer useful for a high-level explanation of why optionals are useful.
AVCaptureDeviceInput.deviceInputWithDevice returns an AnyObject, and you are force-casting it to an AVCaptureInput with this line:
var deviceInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as! AVCaptureInput
(you don’t need to state the type of deviceInput by the way, Swift can deduce it from the value on the right-hand side)
When you write as!, you are telling the compiler “don’t argue with me, force the result to be of type AVCaptureInput, no questions asked”. If it turns out what is returned is something of a different type, your app will crash with an error.
But then on the next line, you write:
if deviceInput == nil! {
I'm actually quite astonished this compiles at all! But it turns out it does, and it's not surprising it crashes. Force-unwrapping a value that is nil will crash, and you are doing this in its purest form: force-unwrapping a nil literal :)
The problem is, you've already stated that deviceInput is of the non-optional type AVCaptureInput. Force-casting the result is probably not the right thing to do. As the docs for deviceInputWithDevice state,
If the device cannot be opened because it is no longer available or because it is in use, for example, this method returns nil, and the optional outError parameter points to an NSError describing the problem.
The right way to handle this is to check whether the result is nil, and act appropriately. So you want to do something like:
if let deviceInput = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &err) as? AVCaptureInput {
    // use deviceInput
}
else {
    println("error: \(err?.localizedDescription)")
}