iOS Custom Keyboard - camera not working

I want to create a custom keyboard that acts as a barcode scanner.
I have already written all the code, but the output is not what I expect: I am asked for camera permission (the first time), but the camera delivers no video to the view.
I suspect there may be restrictions on keyboard extensions for security reasons.
1.) Turn on the torch
-(void) turnFlashOn
{
    AVCaptureDevice *flashLight = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if ([flashLight isTorchAvailable] && [flashLight isTorchModeSupported:AVCaptureTorchModeOn])
    {
        BOOL success = [flashLight lockForConfiguration:nil];
        if (success) {
            NSError *error;
            [flashLight setTorchMode:AVCaptureTorchModeOn];
            [flashLight setTorchModeOnWithLevel:1.0 error:&error];
            NSLog(@"Error: %@", error);
            [flashLight unlockForConfiguration];
            NSLog(@"flash turned on -> OK");
        }
        else
        {
            NSLog(@"flash turn on -> ERROR");
        }
    }
}
This gives me this log output, but nothing happens with the flash:
Error: (null)
flash turned on -> OK
2.) Scan the barcode (part of viewDidLoad)
// SCANNER PART
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
if (videoInput)
    [self.captureSession addInput:videoInput];
else
    NSLog(@"Error: %@", error);

AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[self.captureSession addOutput:metadataOutput];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code]];

AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
camView = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
previewLayer.frame = camView.layer.bounds;
[camView.layer addSublayer:previewLayer];

self.keyboard.barcodeView.clipsToBounds = YES;
camView.center = CGPointMake(self.keyboard.barcodeView.frame.size.width / 2,
                             self.keyboard.barcodeView.frame.size.height / 2);
[self.keyboard.barcodeView addSubview:camView];
And when I press a dedicated key on my keyboard, this method is called:
-(void)scanBarcodeNow{
    AudioServicesPlaySystemSound(systemSoundTock);
    NSLog(@"Start scanning...");
    self.keyboard.barcodeView.hidden = false;
    [self.keyboard.barcodeView addSubview:camView];
    [self.keyboard.barcodeView setBackgroundColor:[UIColor redColor]];
    [self.captureSession startRunning];
}
The only thing that happens is that keyboard.barcodeView changes its background color to red. I added that to verify that all the wiring I've done is OK, but no video from the camera is shown.
Can anyone help me out?

The reason you're getting back null is that you don't have access to the camera. It's actually not a bug: according to Apple's guidelines, certain APIs, camera capture among them, are not available to iOS 8 extensions (see the list of unavailable APIs in the App Extension Programming Guide).
It's unfortunate, but I always encourage people to read up on new features and check whether what they want to do is possible before diving into an idea (it saves a lot of time). Definitely check out the App Extension Programming Guide for more information.
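As an aside, the question never shows the delegate callback that would actually receive the scanned value. For completeness, here is a minimal sketch of the AVCaptureMetadataOutputObjectsDelegate method; it will never fire inside a keyboard extension for the reason above, but it is the missing piece in a regular app:

// AVCaptureMetadataOutputObjectsDelegate - called on the queue passed to
// setMetadataObjectsDelegate:queue: whenever one of the requested code types is recognised.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    for (AVMetadataObject *object in metadataObjects) {
        if ([object isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
            NSString *value = [(AVMetadataMachineReadableCodeObject *)object stringValue];
            NSLog(@"Scanned code: %@", value);
            [self.captureSession stopRunning];
            break;
        }
    }
}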

Related

AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo returns nil on ipad

- (void)setupScanningSession {
    // Initialising the capture session before doing any video capture/scanning.
    NSError *error;
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    // Set the capture device to the default camera and the media type to video.
    AVCaptureDevice *captureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Set up the video capture input; if there is a problem initialising the camera, this gives an error.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (!input) {
        NSLog(@"Error connecting camera: %@", [error localizedDescription]);
        return;
    }
    // Add the input source (i.e. the camera) to the capture session.
    [self.captureSession addInput:input];
}
The captureDevice is always nil when I run the app on the real device. The iPad is an A1474 running iOS 12.
The same piece of code runs perfectly on my iPhone 8.
After pulling my hair out over this, I finally found the cause: I was working with an iPad managed by my company, and the camera was disabled by the management profile. After deleting this profile, the built-in iOS Camera app reappeared as well.
The call to defaultDeviceWithMediaType: no longer returned nil.
Maybe this is helpful for someone else who finds this.
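If anyone wants to detect this situation in code, here is a small diagnostic sketch (the helper name is made up); a profile that disables the camera may surface as a restricted authorization status:

// Hypothetical helper: figure out why the default video device came back nil.
// A management (MDM) or parental-restrictions profile that disables the camera
// can make defaultDeviceWithMediaType: return nil and also hide the Camera app.
- (AVCaptureDevice *)videoDeviceOrLogReason {
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    if (device != nil) {
        return device;
    }
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    if (status == AVAuthorizationStatusRestricted) {
        NSLog(@"Camera access is restricted (e.g. by a management or parental-controls profile).");
    } else {
        NSLog(@"No video capture device is available (authorization status %ld).", (long)status);
    }
    return nil;
}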

Tracking eyes with Vision framework

How can you use the new Vision framework in iOS 11 to track eyes in a video while the head or camera is moving? (using the front camera).
I've found VNDetectFaceLandmarksRequest to be very slow on my iPad - landmarks requests are performed roughly once every 1-2 seconds. I feel like I'm doing something wrong, but there is not much documentation on Apple's site.
I've already watched the WWDC 2017 video on Vision:
https://developer.apple.com/videos/play/wwdc2017/506/
as well as read this guide:
https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision
My code looks roughly like this right now (sorry, it's Objective-C):
// Capture session setup
- (BOOL)setUpCaptureSession {
    AVCaptureDevice *captureDevice =
        [AVCaptureDevice defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
                                            mediaType:AVMediaTypeVideo
                                             position:AVCaptureDevicePositionFront];
    NSError *error;
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
    if (error != nil) {
        NSLog(@"Failed to initialize video input: %@", error);
        return NO;
    }

    self.captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    captureOutput.alwaysDiscardsLateVideoFrames = YES;
    [captureOutput setSampleBufferDelegate:self queue:self.captureOutputQueue];

    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];
    return YES;
}
// Capture output delegate:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (!self.detectionStarted) {
        return;
    }
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == nil) {
        return;
    }

    NSMutableDictionary<VNImageOption, id> *requestOptions = [NSMutableDictionary dictionary];
    CFTypeRef cameraIntrinsicData = CMGetAttachment(sampleBuffer,
                                                    kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                                    nil);
    requestOptions[VNImageOptionCameraIntrinsics] = (__bridge id)(cameraIntrinsicData);

    // TODO: Detect device orientation
    static const CGImagePropertyOrientation orientation = kCGImagePropertyOrientationRight;

    VNDetectFaceLandmarksRequest *landmarksRequest =
        [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest *request, NSError *error) {
            if (error != nil) {
                NSLog(@"Error while detecting face landmarks: %@", error);
            } else {
                dispatch_async(dispatch_get_main_queue(), ^{
                    // Draw eyes in two corresponding CAShapeLayers
                });
            }
        }];

    VNImageRequestHandler *requestHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                                                                      orientation:orientation
                                                                                          options:requestOptions];
    NSError *error;
    if (![requestHandler performRequests:@[landmarksRequest] error:&error]) {
        NSLog(@"Error performing landmarks request: %@", error);
        return;
    }
}
Is it right to call -performRequests:.. on the same queue as the video output? Based on my experiments this method seems to call the request's completion handler synchronously. Should I not call this method on every frame?
To speed things up I've also tried using VNTrackObjectRequest to track each eye separately after landmarks were detected on the video (by constructing a bounding box from landmarks' region points), but that didn't work very well (still trying to figure it out).
What is the best strategy for tracking eyes on a video? Should I track a face rectangle and then execute a landmarks request inside its area (will it be faster)?
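On the last point: VNDetectFaceLandmarksRequest adopts VNFaceObservationAccepting, so you can hand it the face rectangles from a cheaper VNDetectFaceRectanglesRequest (or from your tracker) via inputFaceObservations instead of letting it search the whole frame. Below is a rough, untested sketch of that idea, reusing the pixelBuffer and orientation variables from the delegate above:

// Sketch: run the cheap face-rectangle detector first, then restrict the
// expensive landmarks request to the faces it found via inputFaceObservations.
VNDetectFaceRectanglesRequest *rectsRequest = [[VNDetectFaceRectanglesRequest alloc] init];
VNDetectFaceLandmarksRequest *landmarksRequest = [[VNDetectFaceLandmarksRequest alloc] init];

VNImageRequestHandler *handler =
    [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                             orientation:orientation
                                                 options:@{}];
NSError *error = nil;
if ([handler performRequests:@[rectsRequest] error:&error]) {
    // Reuse the detected rectangles so the landmarks request skips the face search.
    landmarksRequest.inputFaceObservations = (NSArray<VNFaceObservation *> *)rectsRequest.results;
    if ([handler performRequests:@[landmarksRequest] error:&error]) {
        for (VNFaceObservation *face in landmarksRequest.results) {
            VNFaceLandmarkRegion2D *leftEye = face.landmarks.leftEye;
            VNFaceLandmarkRegion2D *rightEye = face.landmarks.rightEye;
            // leftEye.normalizedPoints / rightEye.normalizedPoints hold the eye contours.
        }
    }
}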

Red status bar shown while switching camera using AVFoundation

I am facing a very weird issue while switching between cameras. When the user switches from the front to the rear camera, a red status bar appears for a second and then disappears automatically with a slide-up animation. I searched a lot on Google & Stack Overflow but had no luck. I found this question, but it's related to audio recording. Here is my code:
-(void)toggleCameraIsFront:(BOOL)isFront
{
    AVCaptureDevicePosition desiredPosition;
    if (isFront) {
        desiredPosition = AVCaptureDevicePositionFront;
        self.videoDeviceType = VideoDeviceTypeFrontCamera;
    }
    else {
        desiredPosition = AVCaptureDevicePositionBack;
        self.videoDeviceType = VideoDeviceTypeRearCamera;
    }

    for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
        if ([d position] == desiredPosition)
        {
            AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
            [self.session beginConfiguration];
            [self.session removeInput:self.videoInput];
            if ([self.session canAddInput:videoDeviceInput])
            {
                [self.session addInput:videoDeviceInput];
                [self setVideoInput:videoDeviceInput];
            }
            else
            {
                [self.session addInput:self.videoInput];
            }
            [self.session commitConfiguration];
            break;
        }
    }
}
Also, after the camera is switched, if I try to record video, the method below from AVCaptureVideoDataOutputSampleBufferDelegate is no longer called:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Any kind of help is highly appreciated. Thanks.
The red status bar appears because of audio recording, as the question you've linked also describes.
To avoid it, you need to remove the audio input from the AVCaptureSession:
[self.captureSession removeInput:audioInput];
where audioInput is an AVCaptureDeviceInput object.
Please check @bruno's answer for more clarification.
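Applied to the question's code, that removal could look roughly like this (the method name is made up; session follows the question's property name):

// Sketch: drop any audio input before/while switching cameras so iOS does not
// show the red "microphone in use" status bar. Re-add the input later if the
// app still needs to record audio.
- (void)removeAudioInputs {
    [self.session beginConfiguration];
    for (AVCaptureInput *anInput in [self.session.inputs copy]) {
        if ([anInput isKindOfClass:[AVCaptureDeviceInput class]] &&
            [[(AVCaptureDeviceInput *)anInput device] hasMediaType:AVMediaTypeAudio]) {
            [self.session removeInput:anInput];
        }
    }
    [self.session commitConfiguration];
}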

iPhone still shows rear camera's view even after successfully switching to front camera

I am using AVFoundation and have set up a button that I want to toggle between the front-facing and rear-facing cameras when tapped.
If I run the app and tap the button, the console tells me that I have successfully changed the device to the front-facing camera, but the screen still shows what the rear camera is seeing.
I need to update the screen so that it really shows, and really captures, what the front camera sees.
Here is my IBAction code that is hooked up to my toggle button:
-(IBAction)toggleCamera {
    _inputDevice = [self frontCamera];
}
And here is the method implementation that is powering "frontCamera":
- (AVCaptureDevice *)frontCamera {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if ([device position] == AVCaptureDevicePositionFront) {
            NSLog(@"Device position during frontCamera method: %ld", (long)[device position]);
            return device;
        }
    }
    return nil;
}
You need to do more than just set the input device. Do you have an AVCaptureSession set up? If so, each time you swap the camera you need to (1) begin the session configuration, (2) remove the current input, (3) add the new input, and (4) commit the session configuration:
// Define a flag in your header file to keep track of the current camera input
BOOL isUsingFrontFacingCamera;

-(IBAction)toggleCamera
{
    // determine the mode to switch to
    AVCaptureDevicePosition desiredPosition;
    if (isUsingFrontFacingCamera)
    {
        desiredPosition = AVCaptureDevicePositionBack;
    }
    else
    {
        desiredPosition = AVCaptureDevicePositionFront;
    }

    for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
        if ([d position] == desiredPosition)
        {
            // configure the session
            [session beginConfiguration];
            // find and clear all the existing inputs
            for (AVCaptureInput *oldInput in [session inputs])
            {
                [session removeInput:oldInput];
            }
            // add the new input to the session
            AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
            [session addInput:input];
            // commit the configuration
            [session commitConfiguration];
            break;
        }
    }

    // update the flag to reflect the current mode
    isUsingFrontFacingCamera = !isUsingFrontFacingCamera;
}
Answer adapted from this post

AVCaptureSession in modalviewcontroller on iOS5 with ARC

I'm going insane trying to get an AVCaptureSession (in a view controller) to be presented and dismissed in my project. I'm currently on iOS 5.1 and have ARC enabled.
I can get it to work fine the first time I present the view controller and start the session, but when I dismiss and present a second time, the session will not start. I subscribed to the "AVCaptureSessionRuntimeErrorNotification" notification and receive the following error:
"Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo=0x1a4020 {NSLocalizedRecoverySuggestion=Try again later., NSLocalizedDescription=Cannot Complete Action}"
I'm assuming that something is not being properly released in my session, but with ARC there are no explicit releases, so instead I set everything that would have been released to nil.
My viewDidLoad method basically just triggers initCamera.
initCamera method:
AVCaptureSession *tmpSession = [[AVCaptureSession alloc] init];
session = tmpSession;
session.sessionPreset = AVCaptureSessionPresetMedium;

captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
[self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];

rearCamera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
input = [AVCaptureDeviceInput deviceInputWithDevice:rearCamera error:&error];
if (!input) {
    // Handle the error appropriately.
    NSLog(@"ERROR: trying to open camera: %@", error);
}
[session addInput:input];

videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];
[videoDataOutput setVideoSettings:outputSettings];
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];

queue = dispatch_queue_create("cameraQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[session addOutput:videoDataOutput];

NSNotificationCenter *notify = [NSNotificationCenter defaultCenter];
[notify addObserver:self
           selector:@selector(onVideoError:)
               name:AVCaptureSessionRuntimeErrorNotification
             object:session];

[session startRunning];

[rearCamera lockForConfiguration:nil];
rearCamera.whiteBalanceMode = AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance;
rearCamera.exposureMode = AVCaptureExposureModeContinuousAutoExposure;
rearCamera.focusMode = AVCaptureFocusModeContinuousAutoFocus;
[rearCamera unlockForConfiguration];
The method
captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
gets called no problem the first time I present the modal viewcontroller, but on the second attempt this method stops getting called (because the session does not start)
For clean up I'm calling stopSession from my parent viewcontroller before dismissing and that does the following:
if ([session isRunning]) {
    [session removeInput:input];
    [session stopRunning];

    [vImagePreview removeFromSuperview];
    vImagePreview = nil;

    input = nil;
    videoDataOutput = nil;
    captureVideoPreviewLayer = nil;
    session = nil;
    queue = nil;
}
I feel like I've tried all sorts of things, such as performing a dispatch_sync(queue, ^{}) on the queue to wait for it to be flushed, but that doesn't seem to make a difference (when calling the dispatch_sync I removed the dispatch_release call in my init camera method). I've also tried the dispatch_set_finalizer_f(queue, capture_cleanup) approach suggested in another question, but I don't know what actually needs to go in the capture_cleanup method, because all of the examples I find are non-ARC code where they call release on a pointer to self. I've also combed through all of the sample code I can find from Apple (SquareCam and AVCam), but these are also non-ARC. Any help would be greatly appreciated.
I realized that I was performing a setFocusPointOfInterest on my rear camera and for some reason it was corrupting the session on relaunch. I don't understand why this caused the issue but I will be looking into that.
You might try converting the SquareCam project to ARC before using its source in your program. I was able to do so by using a __bridge cast in the places where the converter complained, and by replacing the "bail:" gotos with simple if statements.
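On the capture_cleanup sub-question from the original post: under ARC there is usually nothing special to put in such a finalizer; what tends to matter is detaching the sample buffer delegate and stopping the session in a defined order before dropping the last strong references. A teardown sketch along those lines, using the ivar names from the question (this is not a confirmed fix for the -11819 error):

// Teardown sketch: detach the delegate and stop the session before releasing
// the strong references, so no callbacks land on a half-torn-down object.
- (void)teardownCamera {
    [videoDataOutput setSampleBufferDelegate:nil queue:NULL];
    if ([session isRunning]) {
        [session stopRunning];
    }
    for (AVCaptureInput *anInput in [[session inputs] copy]) {
        [session removeInput:anInput];
    }
    for (AVCaptureOutput *anOutput in [[session outputs] copy]) {
        [session removeOutput:anOutput];
    }
    [captureVideoPreviewLayer removeFromSuperlayer];
    captureVideoPreviewLayer = nil;
    videoDataOutput = nil;
    input = nil;
    session = nil;
}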
