AVCaptureSession captures black/dark frames after changing preset (iOS)

I'm developing an app that supports both still image and video capture with AVFoundation. Capturing them requires different AVCaptureSession presets. I check canSetSessionPreset(), begin the change with beginConfiguration(), set the required preset via sessionPreset, and finish with commitConfiguration().
I found that if I capture a still image with AVCaptureStillImageOutput immediately after changing the preset, it returns no errors, but the resulting image is sometimes black or very dark.
If I start capturing video with AVCaptureMovieFileOutput immediately after changing the preset, the first several frames of the resulting file are likewise sometimes black or very dark.
Right after the preset change the screen flickers, likely because the camera is re-adjusting exposure. So it looks like the camera starts metering from a very fast shutter speed immediately after the change, which produces the black/dark frames.
Both problems go away if I insert a 0.1-second delay between changing the preset and starting capture, but that's ugly, and nothing guarantees it will work every time on every device.
Is there a clean solution to this problem?
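For reference, here is a minimal sketch of the flow the question describes, replacing the fixed delay with a wait on AVCaptureDevice's KVO-observable adjustingExposure flag. This is an assumption about a cleaner approach, not a confirmed fix; PresetSwitcher and onReady are hypothetical names.
```swift
import AVFoundation

// Sketch: change the preset as the question describes, then wait for exposure
// to settle instead of sleeping 0.1 s. `PresetSwitcher` and `onReady` are
// hypothetical names; the KVO race noted below is a real caveat.
final class PresetSwitcher: NSObject {
    private var observation: NSKeyValueObservation?

    func switchPreset(_ preset: AVCaptureSession.Preset,
                      session: AVCaptureSession,
                      device: AVCaptureDevice,
                      onReady: @escaping () -> Void) {
        guard session.canSetSessionPreset(preset) else { return }
        session.beginConfiguration()
        session.sessionPreset = preset
        session.commitConfiguration()

        // Wait until the camera reports it is done re-metering exposure.
        // Caveat: the flag may still read false for an instant right after
        // commitConfiguration(), so in practice you may want to see it go
        // true first, or pair this with a timeout.
        observation = device.observe(\.isAdjustingExposure, options: [.new]) { [weak self] _, change in
            guard change.newValue == false else { return }
            self?.observation?.invalidate()
            self?.observation = nil
            DispatchQueue.main.async { onReady() }
        }
    }
}
```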

This is for future users: it was happening for me because I set sessionPreset to high and then, as soon as recording started, changed the video output connection and set the focus. Once I moved those changes into the camera setup code instead, it worked!

Related

Result of AVCaptureMovieFileOutput is different from what's seen in AVCaptureVideoPreviewLayer when zoomed in with stabilization turned on

I'm working on an app that records and saves video to the user's camera roll. My setup: I'm using an AVCaptureSession with an AVCaptureVideoPreviewLayer for the preview and AVCaptureMovieFileOutput for the saved file.
Under most circumstances, what is seen in the preview layer matches up with the saved video asset, but if I turn on stabilization (either AVCaptureVideoStabilizationMode.standard or AVCaptureVideoStabilizationMode.cinematic), and set the zoom factor very high (around 130), then the preview and the output become noticeably offset from each other.
The output is consistently above and slightly to the right of what's shown in the preview. I suspect this happens at smaller zoom factors as well, but the effect is minimal enough to only be noticeable at higher zoom factors.
Part of the reason for turning stabilization on in the first place is to more easily line objects up when zoomed in, so merely limiting zoom or turning off stabilization isn't really an option.
I'm curious to know why the preview and output aren't in sync, but ultimately I'm looking for a possible solution that lets me keep (1) zoom, (2) stabilization, and (3) an accurate preview.
UPDATE: Dec. 12
After experimenting with this some more, it seems the issue sometimes doesn't occur with cinematic stabilization, and my new theory is that it depends on the specific combination of AVCaptureDeviceFormat and stabilization settings.
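If it helps anyone probe that theory, here is a small diagnostic sketch (my own assumption about how to test it, not a confirmed diagnosis) that lists, for each format on a device, its maximum zoom factor and which stabilization modes it supports:
```swift
import AVFoundation

// Diagnostic sketch: dump each format of the given device together with the
// stabilization modes it supports and its maximum zoom factor, to look for a
// pattern. `device` is assumed to be the active back camera.
func dumpStabilizationSupport(for device: AVCaptureDevice) {
    let modes: [(String, AVCaptureVideoStabilizationMode)] = [
        ("standard", .standard),
        ("cinematic", .cinematic),
    ]
    for format in device.formats {
        let supported = modes
            .filter { format.isVideoStabilizationModeSupported($0.1) }
            .map { $0.0 }
        print(format,
              "maxZoom:", format.videoMaxZoomFactor,
              "stabilization:", supported.joined(separator: ","))
    }
}
```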

How to capture video in a specific part of the screen rather than full screen in iOS

I am capturing video in my iOS app using AVFoundation. I am able to record the video and play it back.
My problem is that I show the camera preview in a view that is only about 200 points tall, so I expected the video to be recorded at those dimensions. But when I play it back, the whole screen appears to have been recorded.
So I want to know whether there is any way to record only the part of the camera preview that was visible to the user. Any help would be appreciated.
You cannot think of video resolution in terms of UIView dimensions (or even screen size, for that matter). The camera is going to record at a certain resolution depending on how you set up the AVCaptureSession. For instance, you can change the video quality by setting the session preset via:
[self.captureSession setSessionPreset:AVCaptureSessionPreset640x480];
(It is already set by default to the highest setting.)
Now, when you play the video back, you do have a bit of control over how it is presented. For instance, if you want to play it in a smaller view (whose layer is of type AVPlayerLayer), you can set the video gravity; the capture preview layer exposes the same videoGravity property:
AVCaptureVideoPreviewLayer *previewLayer = (AVCaptureVideoPreviewLayer*)self.previewView.layer;
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
And depending on what you pass for the gravity parameter, you will get different results.
Hopefully this helps. Your question is a little unclear, as it seems like you want the camera to record only a portion of its input, but you'd have to put your hand over part of the lens to accomplish that ;)
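If the goal is a saved file that matches the on-screen crop, one workaround (not part of the answer above, just a common post-processing approach) is to record the full frame and then crop on export with AVMutableVideoComposition. A minimal sketch, assuming cropRect is given in the video track's pixel coordinates and ignoring the track's preferredTransform for simplicity:
```swift
import AVFoundation

// Sketch: re-export `asset` so that only `cropRect` remains in the output.
// Real code should also account for the track's preferredTransform.
func exportCropped(asset: AVAsset, cropRect: CGRect, outputURL: URL,
                   completion: @escaping (Bool) -> Void) {
    guard let track = asset.tracks(withMediaType: .video).first else {
        completion(false); return
    }

    let composition = AVMutableVideoComposition()
    composition.renderSize = cropRect.size
    composition.frameDuration = CMTime(value: 1, timescale: 30)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)

    // Shift the frame so cropRect's origin lands at (0, 0) of the render rect.
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layerInstruction.setTransform(
        CGAffineTransform(translationX: -cropRect.origin.x, y: -cropRect.origin.y),
        at: .zero)
    instruction.layerInstructions = [layerInstruction]
    composition.instructions = [instruction]

    guard let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else {
        completion(false); return
    }
    export.videoComposition = composition
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        completion(export.status == .completed)
    }
}
```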

Capturing GPUImageVideoCamera or AVCaptureSession frames in circular or ring buffer for instant playback

I am using GPUImage's GPUImageVideoCamera initWithSessionPreset:cameraPosition: to display video from the rear-facing camera on an iOS device (targeting iOS 7). The video is filtered and displayed on a GPUImageView, and the preset will not exceed AVCaptureSessionPreset640x480.
At any given moment in the app, I need to recall the past 5 seconds of unfiltered video captured from the rear-facing camera and instantly play this back on another (or the same) GPUImageView.
I can access the CMSampleBufferRef via GPUImageVideoCamera's willOutputSampleBuffer: callback (passed through from the underlying capture output), but I'm not sure how to keep the most recent frames in memory efficiently so that they can be played back instantly and seamlessly.
I believe the solution is a circular buffer, using something like TPCircularBuffer, but I'm not sure that will work with a video stream. I also want to reference the unanswered questions "Buffering CMSampleBufferRef into a CFArray" and "Hold multiple Frames in Memory before sending them to AVAssetWriter", as they closely resemble my original plan of attack before I started researching this.
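For what it's worth, here is a minimal sketch of the kind of ring buffer described, holding recent CMSampleBuffers and evicting by presentation time. SampleBufferRing is a hypothetical name, and note the caveat in the comments: the capture pipeline draws pixel buffers from a fixed-size pool, so retaining many frames may require deep-copying them.
```swift
import AVFoundation

// Sketch: keep roughly the last 5 seconds of sample buffers in memory.
// Caveat: capture outputs recycle pixel buffers from a fixed-size pool, so
// holding this many frames can starve the camera; deep copies may be needed.
final class SampleBufferRing {
    private var buffers: [CMSampleBuffer] = []
    private let queue = DispatchQueue(label: "ring.buffer")
    private let window = CMTime(seconds: 5, preferredTimescale: 600)

    func append(_ sampleBuffer: CMSampleBuffer) {
        queue.sync {
            buffers.append(sampleBuffer)
            let newest = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            // Drop frames that have aged out of the retention window.
            while let oldest = buffers.first,
                  CMTimeSubtract(newest, CMSampleBufferGetPresentationTimeStamp(oldest)) > window {
                buffers.removeFirst()
            }
        }
    }

    /// The buffered frames, oldest first, ready to feed a playback path.
    func snapshot() -> [CMSampleBuffer] {
        queue.sync { buffers }
    }
}
```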

Still pin image capture response time

I have a problem with the response time when capturing an image from the still pin.
My system runs a video stream at 320×480 from the capture pin, and pushing a button snaps a 720p image from the still pin at the same time. But the response time is too long, around 6 seconds, which is not acceptable.
I have tried other video capture software that supports snapping a picture while streaming video; the response time is similar.
I am wondering whether this is a hardware problem or a software problem, and how still pin capture actually works: is the high-resolution image produced by interpolation, or does the hardware switch resolutions?
For example, the camera starts at one resolution setting, keeps sensing, and pushes the data to the buffer over USB. Is it possible for it to immediately change to another resolution setting and snap an image? Is this why the system takes the picture so slowly?
Or is there a way to keep the video streaming at a high frame rate and snap a high-resolution image immediately, without interpolation?
I am working on a project that also snaps an image from a video stream, using DirectShow, and my response time is not as long as yours. In my experience, the response time has nothing to do with the streaming frame rate.
Usually a camera has its own default resolution; it is impossible for it to immediately change to another resolution setting and snap an image, so that is not the reason.
Could you please show me some code, and your camera's model?

iPhone: Toggling front/back AVCaptureDeviceInput camera when processing individual frames via setSampleBufferDelegate

I've run into an interesting issue when I attempt to switch from the front camera to the back camera while processing individual frames via the AVCaptureVideoDataOutput setSampleBufferDelegate: selector. The camera swap works and the preview screen I'm displaying looks great; it's just that the frames I capture are no longer in portrait mode, they are in landscape. Swapping from the front camera and then back to the back camera also leaves the back camera capturing landscape frames.
I suspect something is getting screwed up when I swap out the input - it's not the input itself that's incorrect. I verified this theory by starting the AVCaptureSession with the front-facing camera: the frames passed to the buffer delegate are correctly in portrait mode. I've also tried explicitly stopping the AVCaptureSession while the device input is being swapped, with no difference in results.
I cribbed from the AVCam demo for inspiration. The suspicious difference between that code and mine is that it records to an AVCaptureMovieFileOutput rather than processing individual frames.
Any ideas? Why would the orientation of the frames sent to my processor change when I swap out the device input?
Thanks for any response!
Ah ha! I figured it out. For some reason, after switching the device input, my video output's AVCaptureConnection was getting its orientation reset to landscape-right. To solve the problem, after swapping the input I explicitly set the video output's AVCaptureConnection orientation back to portrait.
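A minimal sketch of that fix, assuming `session`, the two AVCaptureDeviceInputs, and the AVCaptureVideoDataOutput already exist (the function name is hypothetical):
```swift
import AVFoundation

// Sketch: swap camera inputs, then re-assert portrait orientation on the
// video data output's connection, which the swap resets.
func swapCamera(on session: AVCaptureSession,
                removing oldInput: AVCaptureDeviceInput,
                adding newInput: AVCaptureDeviceInput,
                videoOutput: AVCaptureVideoDataOutput) {
    session.beginConfiguration()
    session.removeInput(oldInput)
    if session.canAddInput(newInput) {
        session.addInput(newInput)
    }
    // Swapping the input recreated the connection in landscape; fix it here.
    if let connection = videoOutput.connection(with: .video),
       connection.isVideoOrientationSupported {
        connection.videoOrientation = .portrait
    }
    session.commitConfiguration()
}
```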
