Can anybody please explain to me why I am getting an error in this piece of code?
My code never gets to the print statement that says, "Video Preview Layer added as sublayer", and I am not sure why. The video preview layer is clearly created just beforehand and then added as a sublayer of the camera view. My application is a page-based application, and this is my second view. My root view controller is blank for now; I am just trying to make sure the transition to my camera is seamless. Everything works when the camera is the first page in my application, or the only view controller. But for some reason it now tells me it unexpectedly found nil while unwrapping an Optional value.
/* Start The Capture Session */
func startSession() {
    println("Now starting the session")
    println("About to add session inputs...")
    var error: NSError? = nil
    let videoCapture = AVCaptureDeviceInput(device: self.cameraCaptureDevice, error: &error)
    if error != nil {
        println("Error, failed to add camera Capture Device: \(error?.description)")
    }

    // add video input
    if self.session.canAddInput(videoCapture) {
        self.session.addInput(videoCapture)
    }

    println("Start configuring the capture")

    // config capture session
    if !session.running {
        // set JPEG output
        self.stillImageOutput = AVCaptureStillImageOutput()
        let outputSettings = [ AVVideoCodecKey : AVVideoCodecJPEG ]
        self.stillImageOutput!.outputSettings = outputSettings
        println("Successfully configured stillImageOutput")

        // add output to session
        println("Adding still image output to capture session")
        if self.session.canAddOutput(stillImageOutput) {
            self.session.addOutput(stillImageOutput)
        }
        println("Successfully added still image output")
        println("Displaying camera in UI")

        // display camera in UI
        videoPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        println("Video Preview Layer set")
        cameraView.layer.addSublayer(videoPreviewLayer)
        println("Video Preview Layer added as sublayer")
        videoPreviewLayer!.frame = cameraView.layer.frame
        println("Video Preview frame set")
        videoPreviewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
        println("Camera can successfully display")

        // start camera
        self.session.startRunning()
        println("Capture Session initiated")
    }
}
The problem is that you are running this code at a time when cameraView is nil. You need to ask yourself why that is. If cameraView is an @IBOutlet, this can happen, for example, if you call startSession from outside this view controller while the view controller is being created but has not yet loaded its view (outlet connections are not made until the view has been loaded, which happens just before viewDidLoad is called).
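One way to avoid that timing problem, sketched here under the assumption that cameraView is an @IBOutlet and startSession is the method from the question, is to defer the call until the view (and therefore the outlet) has been loaded:
// Minimal sketch (assumes cameraView is an @IBOutlet on this view controller):
// start the session only once the view has loaded, so the outlet is non-nil.
override func viewDidLoad() {
    super.viewDidLoad()
    startSession()   // cameraView is connected by this point
}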
I am using iOS 11 and Swift 4, and I am capturing a picture with AVFoundation. I have a custom preview as shown, and my settings are as suggested. The problem is that when I capture and save the CMSampleBuffer, it is oriented landscapeLeft. I tried to change the CapturePhotoOutput orientation, but it resists the change (changing photoOutputConnection.videoOrientation changes nothing):
if let photoOutputConnection = capturePhotoOutput.connection(with: AVMediaType.video) {
    if photoOutputConnection.isVideoOrientationSupported {
        print("video orientation = \(photoOutputConnection.videoOrientation)")
    } else {
        print("video orientation is not supported ?!")
    }
}
Here is the preview (phone screen):
and here is the capture output taken from the Xcode debug Quick Look:
Here is session configuration :
self.capturePhotoOutput = AVCapturePhotoOutput()
capturePhotoOutput.isHighResolutionCaptureEnabled = true
// A Live Photo captures both a still image and a short movie centered on the moment of capture,
// which are presented together in user interfaces such as the Photos app.
capturePhotoOutput.isLivePhotoCaptureEnabled = capturePhotoOutput.isLivePhotoCaptureSupported
guard self.captureSession.canAddOutput(capturePhotoOutput) else { return }
// The sessionPreset property of the capture session defines the resolution and quality level of the video output.
// For most photo capture purposes, it is best set to AVCaptureSessionPresetPhoto to deliver high resolution photo quality output.
self.captureSession.sessionPreset = AVCaptureSession.Preset.photo
self.captureSession.addOutput(capturePhotoOutput)
self.captureSession.commitConfiguration()
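For reference, the orientation is normally applied to the photo output connection immediately before each capture; the following is a minimal sketch of that pattern, where the helper name and the device-to-video orientation mapping are my own assumptions rather than code from this question:
// Hypothetical helper: update the photo output connection's orientation just
// before calling capturePhoto(with:delegate:), based on the device orientation.
func updatePhotoOrientation(for photoOutput: AVCapturePhotoOutput) {
    guard let connection = photoOutput.connection(with: .video),
          connection.isVideoOrientationSupported else { return }
    switch UIDevice.current.orientation {
    case .landscapeLeft:      connection.videoOrientation = .landscapeRight
    case .landscapeRight:     connection.videoOrientation = .landscapeLeft
    case .portraitUpsideDown: connection.videoOrientation = .portraitUpsideDown
    default:                  connection.videoOrientation = .portrait
    }
}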
This is a continuation of the discussion here.
I'm building a voice recorder app for iOS in Swift, and I have a custom waveform graphic that I feed with data from an AKFFTTap object. I had a problem where the FFT started generating all zeros after a while. In order to diagnose and solve this, I'm trying to re-initialize all the nodes and taps whenever the user starts recording (assuming that would solve the issue). Previously, AudioKit was initialized and started when the view was loaded, and that was it.
So now I re-allocate everything on each recording, and it works, except that on every re-recording (not the first one, but each one after it), the FFT stops working again. This time it's consistent and reproducible.
So, here's what I'm doing, and if anyone can show me where I'm going wrong, I'll be very grateful:
When recording starts, I'm doing:
mic = AKMicrophone() //needs to be started
fft = AKFFTTap.init(mic) //will start when mic starts
//now, let's define a mixer, and add the mic node to it, and initialize the recorder to it
micMixer = AKMixer(mic)
recorder = try AKNodeRecorder(node: micMixer)
micBooster = AKBooster(micMixer, gain: 0)
AudioKit.output = micBooster
try AudioKit.start()
mic.start()
micBooster.start()
try recorder.record()
When recording stops:
//now go back deallocating stuff
recorder.stop()
micBooster.stop()
micMixer.stop()
mic.stop()
//now set player file to recorder file, since I want to play it later
do {
    if let file = recorder.audioFile {
        player = try AKAudioPlayer(file: file, looping: false, lazyBuffering: false, completionHandler: playingEnded)
        try AudioKit.stop()
    } else {
        // handle no-file error
    }
} catch {
    // handle error
}
So, can anyone please help me figure out why the FFT doesn't work the second time around?
Thanks!
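One teardown detail that might be worth checking, purely as an assumption on my part rather than anything stated in the post: AKFFTTap installs a tap on the mic's underlying AVAudioNode, and if that tap is never removed, installing a fresh one on the next recording may not deliver data. A minimal sketch of removing it during the stop sequence (assuming fft is an Optional property):
// Hypothetical extra teardown step (assumption: AKFFTTap taps bus 0 of the
// mic's underlying AVAudioNode and does not remove that tap on its own).
mic.avAudioNode.removeTap(onBus: 0)
fft = nil   // drop the old tap before creating a new AKFFTTap(mic)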
I am working on a function in my app to write images from my sample buffer to an AVAssetWriter. Curiously, this works fine on a 10.5" iPad Pro, but causes a crash on a 7.9" iPad Mini 2. I can't fathom how the same code could be problematic on two different devices. But here's my code:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Setup the pixel buffer image
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    // Setup the format description
    let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer)!
    // Setup the current video dimensions
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDescription)
    // Setup the current sample time
    self.currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(sampleBuffer)
    // Handle record
    if self.isCapturing {
        // Setup autorelease pool
        autoreleasepool {
            // Setup the output image
            let outputImage = CIImage(cvPixelBuffer: pixelBuffer)
            // Ensure the video writer is ready for more data
            if self.videoWriter?.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
                // Setup the new pixel buffer (THIS IS WHERE THE ERROR OCCURS)
                var newPixelBuffer: CVPixelBuffer? = nil
                // Setup the pixel buffer pool
                CVPixelBufferPoolCreatePixelBuffer(nil, (self.videoWriter?.assetWriterPixelBufferInput!.pixelBufferPool!)!, &newPixelBuffer)
                // Render the image to context
                self.context.render(outputImage, to: newPixelBuffer!, bounds: outputImage.extent, colorSpace: nil)
                // Setup a success case
                let success = self.videoWriter?.assetWriterPixelBufferInput?.append(newPixelBuffer!, withPresentationTime: self.currentSampleTime!)
                // Ensure the success case exists
                guard let mySuccess = success else { return }
                // If unsuccessful, log
                if !mySuccess {
                    print("Error with the sample buffer. Check for dropped frames.")
                }
            }
        }
    }
}
I receive an error that newPixelBuffer is nil, but again, only on a 7.9" iPad. The iPad Pro functions without any errors. Any thoughts? Thanks!
I eventually resolved this issue by tracing the problem back to my chosen codec in my Asset Writer's video output settings. I had my codec set to:
let codec: AVVideoCodecType = AVVideoCodecType.hevc
In doing some research, I found this article, which indicates that only certain devices can capture media in HEVC. As my first device was a 10.5" iPad Pro, it captured media with no problem. My second device was an iPad Mini, which resulted in the original problem occurring each time I tried to capture.
I have since changed my codec choice to:
let codec: AVVideoCodecType = AVVideoCodecType.h264
and the issue has now disappeared.
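If the same code has to run on devices with and without HEVC capture support, one possible runtime check, sketched here as my own heuristic rather than something from the original answer (assumes iOS 11+), is to fall back to H.264 whenever the device does not advertise an HEVC export preset:
import AVFoundation

// Minimal sketch: prefer HEVC only when the device exposes the HEVC export
// preset, otherwise fall back to H.264.
func preferredVideoCodec() -> AVVideoCodecType {
    if AVAssetExportSession.allExportPresets().contains(AVAssetExportPresetHEVCHighestQuality) {
        return .hevc
    }
    return .h264
}
The returned value would then replace the hard-coded codec constant in the asset writer's video output settings.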
I am setting up a microphone on an AVCaptureSession and I need a switch for the mic. How should I proceed with this?
Do I really need to call captureSession?.removeInput(microphone), or is there an easier way?
let microphone = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
do {
    let micInput = try AVCaptureDeviceInput(device: microphone)
    if captureSession.canAddInput(micInput) {
        captureSession.addInput(micInput)
    }
} catch {
    print("Error setting device audio input: \(error)")
    return false
}
You can always just leave the mic input attached and then use your switch to decide what to do with the audio buffer. If the switch is off, don't process the audio data. I found an objc.io article that talks about how to set up separate audio and video buffers before writing the data with an AVAssetWriter.
By default, all AVCaptureAudioChannel objects exposed by a connection are enabled. You may set enabled to false to stop the flow of data for a particular channel.
https://developer.apple.com/documentation/avfoundation/avcaptureaudiochannel/1388574-isenabled
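Building on that, here is a minimal sketch of such a switch, assuming Swift 4 naming and that the session feeds an AVCaptureAudioDataOutput (the function and parameter names are mine, not from the question):
// Hypothetical mic switch: keep the input attached and just enable/disable the
// audio channels on the output's connection.
func setMicEnabled(_ enabled: Bool, on audioOutput: AVCaptureAudioDataOutput) {
    guard let connection = audioOutput.connection(with: .audio) else { return }
    for channel in connection.audioChannels {
        channel.isEnabled = enabled
    }
}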
I would like to use ARKit to calculate the amount of ambient light in the current video frame. However, after creating an ARSCNView object, when I retrieve the current frame it returns null.
What am I doing wrong?
public class EyeAlignmentUICameraPreview : UIView, IAVCaptureVideoDataOutputSampleBufferDelegate
{
    void Initialize()
    {
        CaptureSession = new CaptureSession();
        PreviewLayer = new AVCaptureVideoPreviewLayer(CaptureSession)
        {
            Frame = Bounds,
            VideoGravity = AVLayerVideoGravity.ResizeAspectFill
        };
        var device = AVCaptureDevice.GetDefaultDevice(AVCaptureDeviceType.BuiltInTelephotoCamera, AVMediaType.Video, AVCaptureDevicePosition.Back);
        ARSCNView SceneView = new ARSCNView();
        // frame is null after this line is executed
        var frame = SceneView.Session.CurrentFrame;
    }
}
Updating my comment to an answer with more details.
ARFrame
A video image and position tracking information captured as part of an AR session.
currentFrame
The video frame image, with associated AR scene information, most recently captured by the session.
According to the Apple ARKit documentation above, currentFrame only has a value once the ARSession has captured video and the associated AR scene information. So we have to run the session first.
To run the ARSession, we need a session configuration:
Running a session requires a session configuration: an instance of the ARConfiguration class, or its subclass ARWorldTrackingConfiguration. These classes determine how ARKit tracks a device's position and motion relative to the real world, and thus affect the kinds of AR experiences you can create.
Thus, the code snippet for running the ARSession looks like this:
public override void ViewWillAppear(bool animated)
{
    base.ViewWillAppear(animated);

    // Create a session configuration
    var configuration = new ARWorldTrackingConfiguration
    {
        PlaneDetection = ARPlaneDetection.Horizontal,
        LightEstimationEnabled = true
    };

    // Run the view's session
    SceneView.Session.Run(configuration, ARSessionRunOptions.ResetTracking);
}
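Once the session is running with LightEstimationEnabled, the ambient light the question is after can be read from the current frame's light estimate. A minimal sketch using the native Swift/ARKit names, which the Xamarin.iOS bindings mirror (assuming an ARSCNView property named sceneView; this snippet is illustrative, not taken from the question's project):
// Minimal sketch: read the ambient light estimate from the running session.
if let estimate = sceneView.session.currentFrame?.lightEstimate {
    print("Ambient intensity: \(estimate.ambientIntensity)")          // roughly 1000 in a well-lit scene
    print("Color temperature: \(estimate.ambientColorTemperature) K")
}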