Tracking eyes with Vision framework - iOS

How can you use the new Vision framework in iOS 11 to track eyes in a video while the head or camera is moving? (using the front camera).
I've found VNDetectFaceLandmarksRequest to be very slow on my iPad - landmarks requests are performed roughly once every 1-2 seconds. I feel like I'm doing something wrong, but there is not much documentation on Apple's site.
I've already watched the WWDC 2017 video on Vision:
https://developer.apple.com/videos/play/wwdc2017/506/
as well as read this guide:
https://github.com/jeffreybergier/Blog-Getting-Started-with-Vision
My code looks roughly like this right now (sorry, it's Objective-C):
// Capture session setup
- (BOOL)setUpCaptureSession {
AVCaptureDevice *captureDevice = [AVCaptureDevice
defaultDeviceWithDeviceType:AVCaptureDeviceTypeBuiltInWideAngleCamera
mediaType:AVMediaTypeVideo
position:AVCaptureDevicePositionFront];
NSError *error;
AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (error != nil) {
NSLog(@"Failed to initialize video input: %@", error);
return NO;
}
self.captureOutputQueue = dispatch_queue_create("CaptureOutputQueue",
DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];
captureOutput.alwaysDiscardsLateVideoFrames = YES;
[captureOutput setSampleBufferDelegate:self queue:self.captureOutputQueue];
self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPreset1280x720;
[self.captureSession addInput:captureInput];
[self.captureSession addOutput:captureOutput];
return YES;
}
// Capture output delegate:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
if (!self.detectionStarted) {
return;
}
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (pixelBuffer == nil) {
return;
}
NSMutableDictionary<VNImageOption, id> *requestOptions = [NSMutableDictionary dictionary];
CFTypeRef cameraIntrinsicData = CMGetAttachment(sampleBuffer,
kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
nil);
requestOptions[VNImageOptionCameraIntrinsics] = (__bridge id)(cameraIntrinsicData);
// TODO: Detect device orientation
static const CGImagePropertyOrientation orientation = kCGImagePropertyOrientationRight;
VNDetectFaceLandmarksRequest *landmarksRequest =
[[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest *request, NSError *error) {
if (error != nil) {
NSLog(@"Error while detecting face landmarks: %@", error);
} else {
dispatch_async(dispatch_get_main_queue(), ^{
// Draw eyes in two corresponding CAShapeLayers
});
}
}];
VNImageRequestHandler *requestHandler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
orientation:orientation
options:requestOptions];
NSError *error;
if (![requestHandler performRequests:@[landmarksRequest] error:&error]) {
NSLog(@"Error performing landmarks request: %@", error);
return;
}
}
Is it right to call -performRequests:.. on the same queue as the video output? Based on my experiments this method seems to call the request's completion handler synchronously. Should I not call this method on every frame?
To speed things up I've also tried using VNTrackObjectRequest to track each eye separately after landmarks were detected on the video (by constructing a bounding box from landmarks' region points), but that didn't work very well (still trying to figure it out).
What is the best strategy for tracking eyes on a video? Should I track a face rectangle and then execute a landmarks request inside its area (will it be faster)?
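Not an authoritative answer, but for illustration, here is a minimal sketch of the face-rectangle-first idea. The lastFace property (a VNFaceObservation kept from the previous result) is an assumption, not part of the code above. Reusing a known face observation via inputFaceObservations lets Vision skip the full face search, and you can additionally throttle how often this runs (e.g. every Nth frame) instead of issuing a request for every sample buffer:
// Sketch only - self.lastFace (VNFaceObservation) is a hypothetical property used for reuse.
- (void)detectLandmarksInPixelBuffer:(CVPixelBufferRef)pixelBuffer
                         orientation:(CGImagePropertyOrientation)orientation {
    VNDetectFaceLandmarksRequest *landmarksRequest =
        [[VNDetectFaceLandmarksRequest alloc] initWithCompletionHandler:^(VNRequest *request, NSError *error) {
            VNFaceObservation *face = (VNFaceObservation *)request.results.firstObject;
            if (error == nil && face != nil) {
                self.lastFace = face; // keep the refined rectangle for the next frame
                dispatch_async(dispatch_get_main_queue(), ^{
                    // Update the eye CAShapeLayers from face.landmarks.leftEye / face.landmarks.rightEye
                });
            }
        }];
    if (self.lastFace != nil) {
        // Feeding a known face rectangle makes this a landmarks-only pass.
        landmarksRequest.inputFaceObservations = @[self.lastFace];
    }
    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCVPixelBuffer:pixelBuffer
                                                                              orientation:orientation
                                                                                  options:@{}];
    NSError *performError = nil;
    if (![handler performRequests:@[landmarksRequest] error:&performError]) {
        NSLog(@"Landmarks request failed: %@", performError);
    }
}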

Related

Why does this audio session fail to recognise an interruption?

My app synthesises audio from a lookup table. It plays audio successfully but crashes the moment I try to stop playing. Audio playback only needs to exit without restarting, so the requirements for handling the interruption are basic. I reread Apple's Audio Session Programming Guide, including the section Responding to Interruptions. However, the method handleAudioSessionInterruption does not seem to register an interruption, so I'm obviously missing something.
EDIT See my answer. When I began work on this I knew next to nothing about NSNotificationCenter so I welcome any suggestion for improvement.
Two methods set up the audio session to play in the foreground.
- (void)setUpAudio
{
if (_playQueue == NULL)
{
if ([self setUpAudioSession] == TRUE)
{
[self setUpPlayQueue];
[self setUpPlayQueueBuffers];
}
}
}
- (BOOL)setUpAudioSession
{
BOOL success = NO;
NSError *audioSessionError = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
// Set up notifications
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(handleAudioSessionInterruption:)
name:AVAudioSessionInterruptionNotification
object:session];
// Set category
success = [session setCategory:AVAudioSessionCategoryPlayback
error:&audioSessionError];
if (!success)
{
NSLog(@"%@ Error setting category: %@",
NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
// Exit early
return success;
}
// Set mode
success = [session setMode:AVAudioSessionModeDefault
error:&audioSessionError];
if (!success)
{
NSLog(@"%@ Error setting mode: %@",
NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
// Exit early
return success;
}
// Set some preferred values
NSTimeInterval bufferDuration = .005; // I would prefer a 5ms buffer duration
success = [session setPreferredIOBufferDuration:bufferDuration
error:&audioSessionError];
if (audioSessionError)
{
NSLog(@"Error %ld, %@ %i", (long)audioSessionError.code, audioSessionError.localizedDescription, success);
}
double sampleRate = _audioFormat.mSampleRate; // I would prefer a sample rate of 44.1kHz
success = [session setPreferredSampleRate:sampleRate
error:&audioSessionError];
if (audioSessionError)
{
NSLog(@"Error %ld, %@ %i", (long)audioSessionError.code, audioSessionError.localizedDescription, success);
}
success = [session setActive:YES
error:&audioSessionError];
if (!success)
{
NSLog(@"%@ Error activating %@",
NSStringFromSelector(_cmd), [audioSessionError localizedDescription]);
}
// Get current values
sampleRate = session.sampleRate;
bufferDuration = session.IOBufferDuration;
NSLog(@"Sample Rate:%0.0fHz I/O Buffer Duration:%f", sampleRate, bufferDuration);
return success;
}
And here is the method that handles the interruption when I press the stop button. However it does not respond.
EDIT The correct method needs a block, not a selector. See my answer.
- (void)handleAudioSessionInterruption:(NSNotification*)notification
{
if (_playQueue)
{
NSNumber *interruptionType = [[notification userInfo] objectForKey:AVAudioSessionInterruptionTypeKey];
NSNumber *interruptionOption = [[notification userInfo] objectForKey:AVAudioSessionInterruptionOptionKey];
NSLog(@"in-app Audio playback will be stopped by %@ %lu", notification.name, (unsigned long)interruptionType.unsignedIntegerValue);
switch (interruptionType.unsignedIntegerValue)
{
case AVAudioSessionInterruptionTypeBegan:
{
if (interruptionOption.unsignedIntegerValue == AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation)
{
NSLog(@"notify other apps that audio is now available");
}
}
break;
default:
break;
}
}
}
Answer My method to handle the AVAudioSession interruption did not subscribe the observer correctly with NSNotificationCenter. This has been fixed by adding the observer using a block, not a selector.
The solution replaces deprecated AVAudioSession delegate methods in AudioBufferPlayer, an extremely fit-for-purpose audio player initially developed for direct audio synthesis by Matthias Hollejmans. Several deprecated functions, including InterruptionListenerCallback, were later upgraded by Mario Diana. The solution (below) uses NSNotification, allowing the user to exit the AVAudioSession gracefully by pressing a button.
Here is the relevant code.
PlayViewController.m
The UIButton action performs an orderly shutdown of the synth, invalidates the timer, and posts the notification that will exit the AVAudioSession.
- (void)fromEscButton:(UIButton*)button
{
[self stopConcertClock];
... // code for Exit PlayViewController not shown
}
- (void)stopConcertClock
{
[_synthLock lock];
[_synth stopAllNotes];
[_synthLock unlock];
[timer invalidate];
timer = nil;
[self postAVAudioSessionInterruptionNotification];
NSLog(@"Esc button pressed or sequence ended. Exit PlayViewController ");
}
- (void) postAVAudioSessionInterruptionNotification
{
[[NSNotificationCenter defaultCenter]
postNotificationName:@"AVAudioSessionInterruptionNotification"
object:self];
}
Initialising the AVAudioSession includes subscribing for a single interruption notification before calling startAudioPlayer in AudioBufferPlayer.
- (id)init
{
if (self = [super init])
{
NSLog(@"PlayViewController starts MotionListener and AudioSession");
[self startAudioSession];
}
return self;
}
- (void)startAudioSession
{
// Synth and the AudioBufferPlayer must use the same sample rate.
_synthLock = [[NSLock alloc] init];
float sampleRate = 44100.0f;
// Initialise synth to fill the audio buffer with audio samples.
_synth = [[Synth alloc] initWithSampleRate:sampleRate];
// Initialise the audio buffer.
_player = [[AudioBufferPlayer alloc] initWithSampleRate:sampleRate
channels:1
bitsPerChannel:16
packetsPerBuffer:1024];
_player.gain = 0.9f;
__block __weak PlayViewController *weakSelf = self;
_player.block = ^(AudioQueueBufferRef buffer, AudioStreamBasicDescription audioFormat)
{
PlayViewController *blockSelf = weakSelf;
if (blockSelf != nil)
{
// Lock access to the synth. This callback runs on an internal Audio Queue thread and we don't
// want another thread to change the Synth's state while we're still filling up the audio buffer.
[blockSelf -> _synthLock lock];
// Calculate how many packets fit into this buffer. Remember that a packet equals one frame
// because we are dealing with uncompressed audio; a frame is a set of left+right samples
// for stereo sound, or a single sample for mono sound. Each sample consists of one or more
// bytes. So for 16-bit mono sound, each packet is 2 bytes. For stereo it would be 4 bytes.
int packetsPerBuffer = buffer -> mAudioDataBytesCapacity / audioFormat.mBytesPerPacket;
// Let the Synth write into the buffer. The Synth just knows how to fill up buffers
// in a particular format and does not care where they come from.
int packetsWritten = [blockSelf -> _synth fillBuffer:buffer->mAudioData frames:packetsPerBuffer];
// We have to tell the buffer how many bytes we wrote into it.
buffer -> mAudioDataByteSize = packetsWritten * audioFormat.mBytesPerPacket;
[blockSelf -> _synthLock unlock];
}
};
// Set up notifications
[self subscribeForBlockNotification];
[_player startAudioPlayer];
}
- (void)subscribeForBlockNotification
{
NSNotificationCenter * __weak center = [NSNotificationCenter defaultCenter];
id __block token = [center addObserverForName:@"AVAudioSessionInterruptionNotification"
object:nil
queue:[NSOperationQueue mainQueue]
usingBlock:^(NSNotification *note) {
NSLog(@"Received the notification!");
[_player stopAudioPlayer];
[center removeObserver:token];
}];
}
PlayViewController.h
These are relevant interface settings
@interface PlayViewController : UIViewController <EscButtonDelegate>
{
...
// Initialisation of audio player and synth
AudioBufferPlayer* player;
Synth* synth;
NSLock* synthLock;
}
...
- (AudioBufferPlayer*)player;
- (Synth*)synth;
@end
AudioBufferPlayer.m
- (void)stopAudioPlayer
{
[self stopPlayQueue];
[self tearDownPlayQueue];
[self tearDownAudioSession];
}
- (void)stopPlayQueue
{
if (_audioPlaybackQueue != NULL)
{
AudioQueuePause(_audioPlaybackQueue);
AudioQueueReset(_audioPlaybackQueue);
_playing = NO;
}
}
- (void)tearDownPlayQueue
{
AudioQueueDispose(_audioPlaybackQueue, NO);
_audioPlaybackQueue = NULL;
}
- (BOOL)tearDownAudioSession
{
NSError *deactivationError = nil;
BOOL success = [[AVAudioSession sharedInstance] setActive:NO
withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
error:&deactivationError];
if (!success)
{
NSLog(@"%s AVAudioSession Error: %@", __FUNCTION__, deactivationError);
}
return success;
}

Method captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection only called a few times

I'm capturing audio from an external Bluetooth microphone, but I can't record anything.
This method is only called once, at the beginning of the current AVCaptureSession:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
After that, this method is never called again to process the audio.
To set up the capture session I do this:
self.captureSession.usesApplicationAudioSession = true;
self.captureSession.automaticallyConfiguresApplicationAudioSession = true;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionAllowBluetooth error:nil];
/* Audio */
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
audioIn = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:nil];
if ( [_captureSession canAddInput:audioIn] ) {
[_captureSession addInput:audioIn];
}
[audioIn release];
audioOut = [[AVCaptureAudioDataOutput alloc] init];
// Put audio on its own queue to ensure that our video processing doesn't cause us to drop audio
dispatch_queue_t audioCaptureQueue = dispatch_queue_create( "com.apple.sample.capturepipeline.audio", DISPATCH_QUEUE_SERIAL );
[audioOut setSampleBufferDelegate:self queue:audioCaptureQueue];
[audioCaptureQueue release];
if ( [self.captureSession canAddOutput:audioOut] ) {
[self.captureSession addOutput:audioOut];
}
_audioConnection = [audioOut connectionWithMediaType:AVMediaTypeAudio];
[audioOut release];
If I use another Bluetooth device it always works, but not with this one.
I thought this device could be faulty, but it actually works for recording audio in other apps.
The problem is really strange. Does anyone know what could be happening?
Thanks!
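Not a definitive answer, but one configuration that is often worth checking with Bluetooth HFP microphones is taking over the audio session yourself and explicitly preferring the Bluetooth input before starting the capture session. A rough sketch follows; the explicit port selection is an assumption worth trying, not something taken from the code above:
// Sketch only: configure the session manually and prefer the Bluetooth HFP input if present.
self.captureSession.usesApplicationAudioSession = YES;
self.captureSession.automaticallyConfiguresApplicationAudioSession = NO;
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError *sessionError = nil;
[session setCategory:AVAudioSessionCategoryPlayAndRecord
         withOptions:AVAudioSessionCategoryOptionAllowBluetooth
               error:&sessionError];
for (AVAudioSessionPortDescription *input in session.availableInputs) {
    if ([input.portType isEqualToString:AVAudioSessionPortBluetoothHFP]) {
        [session setPreferredInput:input error:&sessionError]; // route capture to the headset mic
        break;
    }
}
[session setActive:YES error:&sessionError];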

iOS Custom Keyboard - camera not working

I want to create a custom keyboard that acts as a barcode scanner.
I've already done all the coding, but the output is not as expected: I am asked for camera permission (the first time), but the camera sends no video to the view.
I think there might be some restrictions on using the camera from keyboards for security reasons?
1.) Turn on the torch
-(void) turnFlashOn
{
AVCaptureDevice *flashLight = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
if([flashLight isTorchAvailable] && [flashLight
isTorchModeSupported:AVCaptureTorchModeOn])
{
BOOL success = [flashLight lockForConfiguration:nil];
if(success){
NSError *error;
[flashLight setTorchMode:AVCaptureTorchModeOn];
[flashLight setTorchModeOnWithLevel:1.0 error:&error];
NSLog(@"Error: %@", error);
[flashLight unlockForConfiguration];
NSLog(@"flash turned on -> OK");
}
else
{
NSLog(@"flash turn on -> ERROR");
}
}
}
This gives me this log output, but nothing happens with the flash:
Error: (null)
flash turned on -> OK
2.) Scan the barcode (part of viewDidLoad)
// SCANNER PART
self.captureSession = [[AVCaptureSession alloc] init];
AVCaptureDevice *videoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:&error];
if(videoInput)
[self.captureSession addInput:videoInput];
else
NSLog(@"Error: %@", error);
AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
[self.captureSession addOutput:metadataOutput];
[metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
[metadataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code]];
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
camView = [[UIView alloc] initWithFrame: [[UIScreen mainScreen] bounds]];
previewLayer.frame = camView.layer.bounds;
[camView.layer addSublayer:previewLayer];
self.keyboard.barcodeView.clipsToBounds=YES;
camView.center = CGPointMake(self.keyboard.barcodeView.frame.size.width/2, self.keyboard.barcodeView.frame.size.height/2);
[self.keyboard.barcodeView addSubview:camView];
And if I press a special key on my keyboard this one is called:
-(void)scanBarcodeNow{
AudioServicesPlaySystemSound(systemSoundTock);
NSLog(#"Start scanning...");
self.keyboard.barcodeView.hidden=false;
[self.keyboard.barcodeView addSubview:camView];
[self.keyboard.barcodeView setBackgroundColor:[UIColor redColor]];
[self.captureSession startRunning];
}
The only thing that happens is that the keyboard.barcodeView changes its background color to red. I did this to check that all the wiring I've done is OK. But no video from the camera is shown...
Can anyone help me out?
The reason you're getting back null is that you don't have access to it. It's actually not a bug. According to Apple's guidelines, certain APIs are not available to iOS 8 extensions (see bullet #3 below).
It sucks, but I always encourage people to read up on new features and see if what they want to do is possible before diving into an idea (it saves a lot of time). Definitely check out the App Extension Programming Guide for more information.

Red status bar shown while switching camera using AVFoundation

I am facing a very weird issue while switching between cameras. When the user switches the camera from front to rear, they see the red status bar for a second, which then disappears automatically with a slide-up animation. I searched a lot on Google & Stack Overflow but had no luck. I found this question, but it's related to audio recording. Here is my code:
-(void)toggleCameraIsFront:(BOOL)isFront
{
AVCaptureDevicePosition desiredPosition;
if (isFront) {
desiredPosition = AVCaptureDevicePositionFront;
self.videoDeviceType = VideoDeviceTypeFrontCamera;
}
else {
desiredPosition = AVCaptureDevicePositionBack;
self.videoDeviceType = VideoDeviceTypeRearCamera;
}
for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType: AVMediaTypeVideo])
{
if ([d position] == desiredPosition)
{
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:d error:nil];
[self.session beginConfiguration];
[self.session removeInput:self.videoInput];
if ([self.session canAddInput:videoDeviceInput])
{
[self.session addInput:videoDeviceInput];
[self setVideoInput:videoDeviceInput];
}
else
{
[self.session addInput:self.videoInput];
}
[self.session commitConfiguration];
break;
}
}
}
Also, after the camera is switched and I try to record video, the below method from AVCaptureVideoDataOutputSampleBufferDelegate does not get called:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Any kind of help is highly appreciated. Thanks.
This red status bar appears due to audio recording, as described in the question you mentioned.
In order to avoid this you need to remove the audio input from the AVCaptureSession:
[self.captureSession removeInput:audioInput];
where audioInput is AVCaptureDeviceInput object.
Please check @bruno's answer for more clarification.
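As an illustration of that suggestion (the session and audioInput names are carried over from the question and are assumptions about the setup), the removal and re-adding can be wrapped in a configuration block so the microphone is only part of the session while a recording is actually in progress:
// Sketch only: keep the audio input out of the session except while recording.
- (void)setAudioCaptureEnabled:(BOOL)enabled {
    [self.session beginConfiguration];
    if (enabled) {
        if (self.audioInput != nil && [self.session canAddInput:self.audioInput]) {
            [self.session addInput:self.audioInput];
        }
    } else if (self.audioInput != nil) {
        // With no microphone input attached, the recording indicator should not be triggered.
        [self.session removeInput:self.audioInput];
    }
    [self.session commitConfiguration];
}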

How to define a constant method

I have a fairly lengthy method for a stop-motion app that is slightly different for each of the various options pressed: timers, self timers, etc.
Can I define the main body of the method:
// initiate a still image capture, return immediately
// the completionHandler is called when a sample buffer has been captured
AVCaptureConnection *stillImageConnection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *__strong error) {
// set up the AVAssetWriter using the format description from the first sample buffer captured
if ( !assetWriter ) {
outputURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/%llu.mov", NSTemporaryDirectory(), mach_absolute_time()]];
//NSLog(@"Writing movie to \"%@\"", outputURL);
CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(imageDataSampleBuffer);
if ( NO == [self setupAssetWriterForURL:outputURL formatDescription:formatDescription] )
return;
}
// re-time the sample buffer - in this sample frameDuration is set to 5 fps
CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
timingInfo.duration = frameDuration;
timingInfo.presentationTimeStamp = nextPTS;
CMSampleBufferRef sbufWithNewTiming = NULL;
OSStatus err = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
imageDataSampleBuffer,
1, // numSampleTimingEntries
&timingInfo,
&sbufWithNewTiming);
if (err)
return;
// append the sample buffer if we can and increment presentation time
if ( [assetWriterInput isReadyForMoreMediaData] ) {
if ([assetWriterInput appendSampleBuffer:sbufWithNewTiming]) {
nextPTS = CMTimeAdd(frameDuration, nextPTS);
}
else {
NSError *error = [assetWriter error];
NSLog(@"failed to append sbuf: %@", error);
}
}
// release the copy of the sample buffer we made
CFRelease(sbufWithNewTiming);
}];
and just make variations of the method with the timers etc.?
First I tried making a singleton, but although I got the method called I had other issues with saving and writing to file. Can I make a MACRO out of a method?
I researched on SO here: iOS create macro
Am I on the right track? Can I define a method rather than an image as in that example?
Making a macro out of a method, while possible, is a terrible idea for a variety of reasons.
Why not just make it a class method? You won't have to worry about management of a class instance, and it won't muddy up the global namespace.
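For illustration only, a sketch of that class-method idea (the StillFrameCapture name and signature are hypothetical, not taken from the question): the shared capture body lives behind one class method, and each timer/self-timer variant supplies only its own completion handling.
// Sketch only: hypothetical helper that owns the shared still-image capture body.
@interface StillFrameCapture : NSObject
+ (void)captureFrameFromOutput:(AVCaptureStillImageOutput *)stillImageOutput
                    completion:(void (^)(CMSampleBufferRef sampleBuffer, NSError *error))completion;
@end
@implementation StillFrameCapture
+ (void)captureFrameFromOutput:(AVCaptureStillImageOutput *)stillImageOutput
                    completion:(void (^)(CMSampleBufferRef, NSError *))completion {
    AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                   completionHandler:^(CMSampleBufferRef buffer, NSError *error) {
        // The re-timing and asset-writer appending from the question would go here,
        // while each variant (timer, self timer, ...) passes its own follow-up in `completion`.
        completion(buffer, error);
    }];
}
@end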
