I am facing a very weird issue while switching between cameras. When the user switches from the front to the rear camera, a red status bar appears for a second and then disappears automatically with a slide-up animation. I searched a lot on Google and Stack Overflow but had no luck. I found this question, but it's related to audio recording. Here is my code:
-(void)toggleCameraIsFront:(BOOL)isFront
{
    AVCaptureDevicePosition desiredPosition;
    if (isFront) {
        desiredPosition = AVCaptureDevicePositionFront;
        self.videoDeviceType = VideoDeviceTypeFrontCamera;
    }
    else {
        desiredPosition = AVCaptureDevicePositionBack;
        self.videoDeviceType = VideoDeviceTypeRearCamera;
    }

    for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
        if ([d position] == desiredPosition)
        {
            AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
            [self.session beginConfiguration];
            [self.session removeInput:self.videoInput];
            if ([self.session canAddInput:videoDeviceInput])
            {
                [self.session addInput:videoDeviceInput];
                [self setVideoInput:videoDeviceInput];
            }
            else
            {
                [self.session addInput:self.videoInput];
            }
            [self.session commitConfiguration];
            break;
        }
    }
}
Also, after the camera is switched and I try to record a video, the following method from AVCaptureVideoDataOutputSampleBufferDelegate is not getting called:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Any kind of help is highly appreciated. Thanks.
This red status bar appears due to audio recording; the question you've mentioned also describes that it is caused by audio recording.
In order to avoid this, you need to remove the audio input from the AVCaptureSession:
[self.captureSession removeInput:audioInput];
where audioInput is an AVCaptureDeviceInput object.
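For completeness, a minimal sketch of the idea: keep the audio input out of the session while merely previewing, and add it back only when recording actually starts (self.audioInput is assumed to hold the microphone's AVCaptureDeviceInput):

// Sketch: drop the audio input while only previewing, so the red
// "recording" status bar never appears; self.audioInput is assumed
// to be the AVCaptureDeviceInput for the microphone.
[self.captureSession beginConfiguration];
[self.captureSession removeInput:self.audioInput];
[self.captureSession commitConfiguration];

// Later, right before recording starts, add it back:
[self.captureSession beginConfiguration];
if ([self.captureSession canAddInput:self.audioInput]) {
    [self.captureSession addInput:self.audioInput];
}
[self.captureSession commitConfiguration];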
Please check @bruno's answer for more clarification.
How can you use the new Vision framework in iOS 11 to track eyes in a video while the head or camera is moving? (using the front camera).
I've found VNDetectFaceLandmarksRequest to be very slow on my iPad: landmark requests complete roughly once every 1-2 seconds. I feel like I'm doing something wrong, but there is not much documentation on Apple's site.
I've already watched the WWDC 2017 video on Vision:
https://developer.apple.com/videos/play/wwdc2017/506/
as well as read this guide:
https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision
My code looks roughly like this right now (sorry, it's Objective-C):
// Capture session setup
- (BOOL)setUpCaptureSession {
    AVCaptureDevice *captureDevice =
        [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                           mediaType:AVMediaTypeVideo
                                            position:AVCaptureDevicePositionFront];
    NSError *error;
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (error != nil) {
        NSLog(@"Failed to initialize video input: %@", error);
        return NO;
    }
    self.captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    [captureOutput setSampleBufferDelegate:self queue:self.captureOutputQueue];
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    return YES;
}
// Capture output delegate:
- (void)captureOutput:(AVCaptureOutput *)output
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
    if (!self.detectionStarted) {
        return;
    }
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == nil) {
        return;
    }
    NSMutableDictionary<VNImageOption, id> *requestOptions = [NSMutableDictionary dictionary];
    CFTypeRef cameraIntrinsicData = CMGetAttachment(sampleBuffer,
                                                    kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                                    nil);
    requestOptions[VNImageOptionCameraIntrinsics] = (__bridge id)(cameraIntrinsicData);

    // TODO: Detect device orientation
    static const CGImagePropertyOrientation orientation = kCGImagePropertyOrientationRight;

    VNDetectFaceLandmarksRequest *landmarksRequest =
        [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest *request, NSError *error) {
            if (error != nil) {
                NSLog(@"Error while detecting face landmarks: %@", error);
            } else {
                dispatch_async(dispatch_get_main_queue(), ^{
                    // Draw eyes in two corresponding CAShapeLayers
                });
            }
        }];
    VNImageRequestHandler *requestHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                                                                     orientation:orientation
                                                                                         options:requestOptions];
    NSError *error;
    if (![requestHandler performRequests:@[landmarksRequest] error:&error]) {
        NSLog(@"Error performing landmarks request: %@", error);
        return;
    }
}
Is it right to call -performRequests:.. on the same queue as the video output? Based on my experiments this method seems to call the request's completion handler synchronously. Should I not call this method on every frame?
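One way to avoid running the request on every frame is to skip frames while a request is still in flight, and to do the Vision work on a queue separate from the capture queue. A rough sketch (self.visionQueue and the atomic self.visionBusy property are hypothetical additions):

// Sketch: drop frames while a Vision request is in flight, and keep the
// capture queue free by performing the request on a separate serial queue.
if (self.visionBusy) {
    return; // a previous frame is still being processed
}
self.visionBusy = YES;
CFRetain(sampleBuffer); // keep the buffer alive across the async hop
dispatch_async(self.visionQueue, ^{
    // ... build the VNImageRequestHandler and perform the request as above ...
    CFRelease(sampleBuffer);
    self.visionBusy = NO;
});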
To speed things up I've also tried using VNTrackObjectRequest to track each eye separately after landmarks were detected on the video (by constructing a bounding box from landmarks' region points), but that didn't work very well (still trying to figure it out).
What is the best strategy for tracking eyes on a video? Should I track a face rectangle and then execute a landmarks request inside its area (will it be faster)?
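For reference, if tracking a face rectangle first does turn out to be faster, feeding a known rectangle into the landmarks request would look roughly like this (a sketch; trackedRect and completionHandler are placeholders, the former a normalized bounding box from VNTrackObjectRequest or a previous detection, the latter the same block as before):

// Sketch: reuse a known face rectangle so the landmarks request can skip
// its internal face detection; trackedRect is a hypothetical CGRect in
// normalized coordinates carried over from earlier frames.
VNFaceObservation *knownFace = [VNFaceObservation observationWithBoundingBox:trackedRect];
VNDetectFaceLandmarksRequest *landmarksRequest =
    [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:completionHandler];
landmarksRequest.inputFaceObservations = @[knownFace];
// Perform with a VNImageRequestHandler exactly as before; the request now
// only computes landmarks inside the supplied rectangle.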
The iOS documentation says you can add and remove inputs while a session is running, for example to switch between front and back cameras.
However, when I try this, my session stops. I'm wrapping the change in beginConfiguration and commitConfiguration calls as follows:
- (void)switchCamera:(UIButton *)sender {
    dispatch_async([self sessionQueue], ^{
        AVCaptureSession *session = self.captureSession;
        [session beginConfiguration];
        AVCaptureInput *currentInput = self.currentCameraIsBack ? self.videoDeviceInputBack : self.videoDeviceInputFront;
        AVCaptureInput *newInput = self.currentCameraIsBack ? self.videoDeviceInputFront : self.videoDeviceInputBack;
        [session removeInput:currentInput];
        [session addInput:newInput];
        self.currentCameraIsBack = !self.currentCameraIsBack;
        [session setSessionPreset:AVCaptureSessionPresetMedium];
        [self setCameraOutputProperties];
        [session commitConfiguration];
    });
}
I am outputting to an AVCaptureMovieFileOutput. Is there anything I need to do to configure this session so it is switchable?
(Note that the OP in this question is trying to add a new input without removing the old one, which isn't the problem here)
Turns out that to do this you need to use an AVAssetWriter and call appendSampleBuffer: yourself with the resulting CMSampleBuffer like so:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (self.recording && self.videoWriterInput.isReadyForMoreMediaData) {
        [self.videoWriterInput appendSampleBuffer:sampleBuffer];
    }
}
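For context, the writer behind self.videoWriterInput might be set up roughly like this (a sketch; outputURL, the dimensions, and the property names are assumptions):

// Sketch of the AVAssetWriter setup backing the delegate method above.
NSError *error = nil;
self.assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];
NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @1280,
                                 AVVideoHeightKey : @720 };
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                           outputSettings:videoSettings];
self.videoWriterInput.expectsMediaDataInRealTime = YES;
[self.assetWriter addInput:self.videoWriterInput];

// Start once, using the timestamp of the first sample buffer you keep:
[self.assetWriter startWriting];
[self.assetWriter startSessionAtSourceTime:firstBufferTimestamp];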
I'm capturing audio from an external Bluetooth microphone, but I can't record anything.
The following method is only called once, at the beginning of the current AVCaptureSession:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
After that, this method is never called again to process the audio.
To set up the capture session, I do this:
self.captureSession.usesApplicationAudioSession = true;
self.captureSession.automaticallyConfiguresApplicationAudioSession = true;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                 withOptions:AVAudioSessionCategoryOptionAllowBluetooth
                                       error:nil];

/* Audio */
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
audioIn = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:nil];
if ( [_captureSession canAddInput:audioIn] ) {
    [_captureSession addInput:audioIn];
}
[audioIn release];

audioOut = [[AVCaptureAudioDataOutput alloc] init];
// Put audio on its own queue to ensure that our video processing doesn't cause us to drop audio
dispatch_queue_t audioCaptureQueue = dispatch_queue_create( "com.apple.sample.capturepipeline.audio", DISPATCH_QUEUE_SERIAL );
[audioOut setSampleBufferDelegate:self queue:audioCaptureQueue];
[audioCaptureQueue release];

if ( [self.captureSession canAddOutput:audioOut] ) {
    [self.captureSession addOutput:audioOut];
}
_audioConnection = [audioOut connectionWithMediaType:AVMediaTypeAudio];
[audioOut release];
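As a side note, it is possible to inspect which input route the audio session actually chose, and to request the Bluetooth HFP port explicitly; a diagnostic sketch using standard AVAudioSession API:

// Sketch: log the available inputs and explicitly prefer the Bluetooth
// HFP port, in case the session routed to a different input.
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
for (AVAudioSessionPortDescription *port in audioSession.availableInputs) {
    NSLog(@"input: %@ (%@)", port.portName, port.portType);
    if ([port.portType isEqualToString:AVAudioSessionPortBluetoothHFP]) {
        NSError *routeError = nil;
        [audioSession setPreferredInput:port error:&routeError];
        if (routeError) {
            NSLog(@"setPreferredInput failed: %@", routeError);
        }
    }
}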
With another Bluetooth device it always works, just not with this one. I thought the device might be faulty, but it actually works for recording audio in other apps. It's a really strange problem. Does anyone know what could be happening?
Thanks!
I want to create a custom keyboard that acts as a barcode scanner.
I have already done all the coding, but the output is not as expected: I am asked for camera permission (the first time), but the camera sends no video to the view.
I suspect there might be restrictions on keyboard extensions for security reasons.
1.) Turn on the torch
-(void) turnFlashOn
{
    AVCaptureDevice *flashLight = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([flashLight isTorchAvailable] && [flashLight isTorchModeSupported:AVCaptureTorchModeOn])
    {
        BOOL success = [flashLight lockForConfiguration:nil];
        if (success) {
            NSError *error;
            [flashLight setTorchMode:AVCaptureTorchModeOn];
            [flashLight setTorchModeOnWithLevel:1.0 error:&error];
            NSLog(@"Error: %@", error);
            [flashLight unlockForConfiguration];
            NSLog(@"flash turned on -> OK");
        }
        else
        {
            NSLog(@"flash turn on -> ERROR");
        }
    }
}
This gives me this log output, but nothing happens with the flash:
Error: (null)
flash turned on -> OK
2.) Scan the barcode (part of viewDidLoad)
// SCANNER PART
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
if (videoInput)
    [self.captureSession addInput:videoInput];
else
    NSLog(@"Error: %@", error);

AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[self.captureSession addOutput:metadataOutput];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code]];

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
camView = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
previewLayer.frame = camView.layer.bounds;
[camView.layer addSublayer:previewLayer];

self.keyboard.barcodeView.clipsToBounds = YES;
camView.center = CGPointMake(self.keyboard.barcodeView.frame.size.width/2,
                             self.keyboard.barcodeView.frame.size.height/2);
[self.keyboard.barcodeView addSubview:camView];
And when I press a special key on my keyboard, this is called:
-(void)scanBarcodeNow {
    AudioServicesPlaySystemSound(systemSoundTock);
    NSLog(@"Start scanning...");
    self.keyboard.barcodeView.hidden = false;
    [self.keyboard.barcodeView addSubview:camView];
    [self.keyboard.barcodeView setBackgroundColor:[UIColor redColor]];
    [self.captureSession startRunning];
}
The only thing that happens is that the keyboard.barcodeView changes its background color to red. I added this to verify that all the wiring I've done is OK. But no video from the camera is shown.
Can anyone help me out?
The reason you're getting back null is that you don't have access to the camera from a keyboard extension. It's actually not a bug: according to Apple's guidelines, certain APIs, including access to the camera, are not available to iOS 8 extensions.
It sucks, but I always encourage people to read up on new features and check whether what they want to do is possible before diving into an idea (it saves a lot of time). Definitely check out the App Extension Programming Guide for more information.
I am using AVFoundation and I have set up a button that should toggle between the front-facing and rear-facing camera when tapped.
If I run the app and tap the button, the console tells me that I have successfully changed the device to the front-facing camera, but the screen still shows what the rear camera is seeing.
I need to update the screen so that it really shows, and really captures, what the front camera sees.
Here is my IBAction code that is hooked up to my toggle button:
-(IBAction)toggleCamera {
    _inputDevice = [self frontCamera];
}
And here is the method implementation that is powering "frontCamera":
- (AVCaptureDevice *)frontCamera {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if ([device position] == AVCaptureDevicePositionFront) {
            NSLog(@"Device position during frontCamera method: %ld", (long)[device position]);
            return device;
        }
    }
    return nil;
}
You need to do more than just set the input device. Do you have an AVCaptureSession set up? If so, each time you swap the camera you need to (1) begin the session configuration, (2) remove the current input, (3) add the new input, and (4) commit the session configuration:
// Define a flag in your header file to keep track of the current camera input
BOOL isUsingFrontFacingCamera;

-(IBAction)toggleCamera
{
    // determine the mode to switch to
    AVCaptureDevicePosition desiredPosition;
    if (isUsingFrontFacingCamera)
    {
        desiredPosition = AVCaptureDevicePositionBack;
    }
    else
    {
        desiredPosition = AVCaptureDevicePositionFront;
    }

    for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
        if ([d position] == desiredPosition)
        {
            // configure the session
            [session beginConfiguration];

            // find and clear all the existing inputs
            for (AVCaptureInput *oldInput in [session inputs])
            {
                [session removeInput:oldInput];
            }

            // add the new input to the session
            AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
            [session addInput:input];

            // commit the configuration
            [session commitConfiguration];
            break;
        }
    }

    // update the flag to reflect the current mode
    isUsingFrontFacingCamera = !isUsingFrontFacingCamera;
}
Answer adapted from this post
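A side note on the asker's symptom: an AVCaptureVideoPreviewLayer created with the same session picks up the new input automatically once the configuration is committed; at most, the preview's mirroring may need adjusting for the front camera. A sketch, assuming a previewLayer reference:

// Sketch: mirror the preview when the front camera becomes active.
// self.previewLayer is an assumed reference to the AVCaptureVideoPreviewLayer.
AVCaptureConnection *previewConnection = self.previewLayer.connection;
if (previewConnection.isVideoMirroringSupported) {
    previewConnection.automaticallyAdjustsVideoMirroring = NO;
    previewConnection.videoMirrored = isUsingFrontFacingCamera;
}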