cv::Mat doesn't match UIImageView width?

I am using AVFoundation to capture video frames, process them with OpenCV, and display the result in a UIImageView on the new iPad. The OpenCV processing does the following ("inImg" is the video frame):
cv::Mat testROI = inImg.rowRange(0,100);
testROI = testROI.colRange(0,10);
testROI.setTo(255); // this is a BGRA frame.
However, instead of getting a vertical white bar (100 rows x 10 cols) in the top left corner of the frame, I got 100 stair-like horizontal lines from the top right corner to the bottom left, each 10 pixels long.
After some investigation, I realized that the width of the displayed frame seems to be 8 pixels wider than the cv::Mat (i.e. the 9th pixel of the 2nd row is right below the 1st pixel of the 1st row).
The video frame itself is shown correctly (no displacement between rows).
The problem appears when the AVCaptureSession.sessionPreset is AVCaptureSessionPresetMedium (frame rows=480, cols=360) but does not appear when it is AVCaptureSessionPresetHigh (frame rows=640, cols=480).
There are 360 cols shown in full screen. (I tried traversing and modifying the cv::Mat pixel by pixel. Pixels 1-360 were shown correctly, 361-368 disappeared, and 369 was shown right under pixel 1.)
I tried combinations of imageview.contentMode (UIViewContentModeScaleAspectFill and UIViewContentModeScaleAspectFit) and imageview.clipsToBounds (YES/NO) but no luck.
What could be the problem?
Thank you very much.
I use the following code to create the AVCaptureSession:
NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
if ([devices count] == 0) {
NSLog(#"No video capture devices found");
return NO;
}
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
_captureDevice = device;
}
}
NSError* error_exp = nil;
if ([_captureDevice lockForConfiguration:&error_exp]) {
[_captureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
[_captureDevice unlockForConfiguration];
}
// Create the capture session
_captureSession = [[AVCaptureSession alloc] init];
_captureSession.sessionPreset = AVCaptureSessionPresetMedium;
// Create device input
NSError *error = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_captureDevice error:&error];
// Create and configure device output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
[_videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
_videoOutput.alwaysDiscardsLateVideoFrames = YES;
OSType format = kCVPixelFormatType_32BGRA;
_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Connect up inputs and outputs
if ([_captureSession canAddInput:input]) {
[_captureSession addInput:input];
}
if ([_captureSession canAddOutput:_videoOutput]) {
[_captureSession addOutput:_videoOutput];
}
AVCaptureConnection * captureConnection = [_videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (captureConnection.isVideoMinFrameDurationSupported)
captureConnection.videoMinFrameDuration = CMTimeMake(1, 60);
if (captureConnection.isVideoMaxFrameDurationSupported)
captureConnection.videoMaxFrameDuration = CMTimeMake(1, 60);
if (captureConnection.supportsVideoMirroring)
[captureConnection setVideoMirrored:NO];
[captureConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
When a frame is received, the following is called:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
AVCaptureConnection *currentConnection = [[_videoOutput connections] objectAtIndex:0];
AVCaptureVideoOrientation videoOrientation = [currentConnection videoOrientation];
CGImageRef quartzImage;
// For color mode a 4-channel cv::Mat is created from the BGRA data
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);
if ([self doFrame]) { // a flag to switch processing ON/OFF
[self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation]; // "processFrame" is the opencv function shown above
}
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
quartzImage = [self.context createCGImage:ciImage fromRect:ciImage.extent];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationUp];
CGImageRelease(quartzImage);
[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
}
}

I assume you're using the constructor Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP) and that AUTO_STEP is 0 and assumes that the row stride is width*bytesPerPixel.
This is generally wrong — it's very common to align rows to some larger boundary. In this case, 360 is not a multiple of 16 but 368 is; which strongly suggests that it's aligning to 16-pixel boundaries (perhaps to assist algorithms that process in 16×16 blocks?).
Try
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, CVPixelBufferGetBytesPerRow(pixelBuffer));
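More generally, a small helper that wraps a BGRA pixel buffer using the row stride Core Video reports (and clones the data if you need it after unlocking) could look like the sketch below. This is only an illustration, not code from the question:
// Sketch: wrap a BGRA CVPixelBuffer in a cv::Mat using its real row stride.
// clone() copies into a contiguous buffer so the Mat stays valid after unlocking.
static cv::Mat MatFromBGRAPixelBuffer(CVPixelBufferRef pixelBuffer)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t stride = CVPixelBufferGetBytesPerRow(pixelBuffer); // may be larger than width * 4
    cv::Mat wrapped((int)height, (int)width, CV_8UC4, base, stride);
    cv::Mat copy = wrapped.clone(); // contiguous, independent copy
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    return copy;
}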

Related

Why is Yp Cb Cr image buffer all shuffled in iOS 13?

I have a computer vision app that takes grayscale images from a sensor and processes them. The image acquisition for iOS is written in Obj-C and the image processing is performed in C++ using OpenCV. As I only need the luminance data, I acquire the image in YUV (or Yp Cb Cr) 4:2:0 bi-planar full range format and just assign the buffer's data to an OpenCV Mat object (see acquisition code below). This worked great so far, until the brand new iOS 13 came out... For some reason, on iOS 13 the image I obtain is misaligned, resulting in diagonal stripes. Looking at the image I obtain, I suspect this is the consequence of a change in the ordering of the buffer's Y, Cb, and Cr components or a change in the buffer's stride. Does anyone know if iOS 13 introduces this kind of change and how I could update my code to avoid it, preferably in a backward-compatible manner?
Here is my image acquisition code:
//capture config
- (void)initialize {
AVCaptureDevice *frontCameraDevice;
NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
for (AVCaptureDevice *device in devices) {
if (device.position == AVCaptureDevicePositionFront) {
frontCameraDevice = device;
}
}
if (frontCameraDevice == nil) {
NSLog(#"Front camera device not found");
return;
}
_session = [[AVCaptureSession alloc] init];
_session.sessionPreset = AVCaptureSessionPreset640x480;
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:frontCameraDevice error: &error];
if (error != nil) {
NSLog(#"Error getting front camera device input: %#", error);
}
if ([_session canAddInput:input]) {
[_session addInput:input];
} else {
NSLog(#"Could not add front camera device input to session");
}
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// This is the default, but making it explicit
videoOutput.alwaysDiscardsLateVideoFrames = YES;
if ([videoOutput.availableVideoCVPixelFormatTypes containsObject:
[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]]) {
OSType format = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
} else {
NSLog(#"YUV format not available");
}
[videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("extrapage.camera.capture.sample.buffer.delegate", DISPATCH_QUEUE_SERIAL)];
if ([_session canAddOutput:videoOutput]) {
[_session addOutput:videoOutput];
} else {
NSLog(#"Could not add video output to session");
}
AVCaptureConnection *captureConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
captureConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
}
//acquisition code
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
if (_listener != nil) {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
NSAssert(format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, @"Only YUV is supported");
// The first plane / channel (at index 0) is the grayscale plane
// See more information about the YUV format
// http://en.wikipedia.org/wiki/YUV
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
CGFloat width = CVPixelBufferGetWidth(pixelBuffer);
CGFloat height = CVPixelBufferGetHeight(pixelBuffer);
cv::Mat frame(height, width, CV_8UC1, baseaddress, 0);
[_listener onNewFrame:frame];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
}
I found the solution to this problem. It was a row stride issue: apparently, in iOS 13, the row stride of the Yp Cb Cr 4:2:0 8-bit bi-planar buffer was changed, maybe so that it is always a power of 2. Therefore, in some cases, the row stride is no longer the same as the width. That was the case for me. The fix is easy: just get the row stride from the buffer's info and pass it to the OpenCV Mat constructor, as shown below.
void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
cv::Mat frame(height, width, CV_8UC1, baseaddress, bytesPerRow);
Note that I also changed how I get the width and height, by using the dimensions of the plane instead of those of the buffer. For the Y plane it should always be the same; I am not sure whether this makes a difference.
Also be careful: after the Xcode update to support the iOS 13 SDK, I had to uninstall my app from the test device because otherwise, Xcode kept running the old version instead of the newly compiled one.
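If later OpenCV code assumes contiguous rows (for example, pointer arithmetic of the form data + y * width), it can also help to clone the wrapped plane while the buffer is still locked. A minimal sketch, not part of the fix above:
// Sketch: clone the wrapped Y plane so the Mat owns contiguous data and
// remains valid after CVPixelBufferUnlockBaseAddress.
cv::Mat wrapped(height, width, CV_8UC1, baseaddress, bytesPerRow);
cv::Mat contiguous = wrapped.clone(); // contiguous.isContinuous() == true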
This is not an answer, but we have a similar problem. I tried different resolutions for photo capture; only one resolution (2592 x 1936) does not work, the other resolutions do. I think changing the resolution, to 1440x1920 for example, may be a workaround for your problem.

iPhone 6+ (Wrong scale?)

I have an iOS app using the camera to take pictures.
It uses a path(CGPath) drawn on the screen (for example a rectangle), and it takes a photo within that path. The app supports only portrait orientation.
For that to happen I use: AVCaptureSession, AVCaptureStillImageOutput, AVCaptureDevice, AVCaptureVideoPreviewLayer
(I guess all familiar to developers making this kind of apps).
My code uses UIScreen.mainScreen().bounds and UIScreen.mainScreen().scale to adapt to various devices and do its job.
It all goes fine (on iPhone 5, iPhone 6), until I try the app on an iPhone 6+ (running iOS 9.3.1) and see that something is wrong.
The picture taken is no longer laid out in the right place.
I had someone try on an iPhone 6+, and by putting up an appropriate message I was able to confirm that (UIScreen.mainScreen().scale) is what it should be: 3.0.
I have put the proper-size launch images (640 × 960, 640 × 1136, 750 × 1334, 1242 × 2208) in the project.
So what could be the problem?
I use the code below in an app, it works on 6+.
The code starts a AVCaptureSession, pulling video input from the device's camera.
As it does so, it continuously updates the runImage var, from the captureOutput delegate function.
When the user wants to take a picture, the takePhoto method is called. This method creates a temporary UIImageview and feeds the runImage into it. This temp UIImageView is then used to draw another variable called currentImage to the scale of the device.
The currentImage, in my case, is square, matching the previewHolder frame, but I suppose you can make anything you want.
Declare these:
AVCaptureDevice * device;
AVCaptureDeviceInput * input;
AVCaptureVideoDataOutput * output;
AVCaptureSession * session;
AVCaptureVideoPreviewLayer * preview;
AVCaptureConnection * connection;
UIImage * runImage;
Load scanner:
-(void)loadScanner
{
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
output = [AVCaptureVideoDataOutput new];
session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
[session addInput:input];
[session addOutput:output];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[output setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
preview.frame = previewHolder.bounds;
connection = preview.connection;
[connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[previewHolder.layer insertSublayer:preview atIndex:0];
}
Ongoing image capture, updates runImage var.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
runImage = [self imageForBuffer:sampleBuffer];
}
Related to above.
-(UIImage *)imageForBuffer:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
UIImage * rotated = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0 orientation:UIImageOrientationRight];
return rotated;
}
On take photo:
-(void)takePhoto
{
UIImageView * temp = [UIImageView new];
temp.frame = previewHolder.frame;
temp.image = runImage;
temp.contentMode = UIViewContentModeScaleAspectFill;
temp.clipsToBounds = true;
[self.view addSubview:temp];
UIGraphicsBeginImageContextWithOptions(temp.bounds.size, NO, [UIScreen mainScreen].scale);
[temp drawViewHierarchyInRect:temp.bounds afterScreenUpdates:YES];
currentImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[temp removeFromSuperview];
//further code...
}
In case someone else has the same issue, here is what made things go wrong for me:
I was naming a file xyz@2x.png.
When UIScreen.mainScreen().scale == 3.0 (the case on an iPhone 6+),
it has to be named xyz@3x.png.

How to focus the near by objects in iOS 8?

I'm developing a QR code reader. My codes are 1 cm long and wide. I'm using AVFoundation metadata to capture the machine-readable codes and it works fine. But at the same time I need to take a picture of the QR code with the logo (which is located in the middle of the QR code). So I'm using AVCaptureVideoDataOutput and didOutputSampleBuffer to get the image stills. The problem is the clarity of the image: it always looks blurry at the edges of the codes and the logo. So I did some research on manual controls and made some code changes for manual focusing, but no luck so far.
How do I focus on nearby objects (which are tiny and about 10 cm away from the camera)?
Is there any other way of getting the image after a successful scan from the metadata?
What is the difference between setFocusModeLockedWithLensPosition and focusPointOfInterest?
Here is my code (part of it):
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
_session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. We'll specify medium quality for the
// chosen device.
_session.sessionPreset = AVCaptureSessionPresetHigh;
// Find a suitable AVCaptureDevice
_device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([_device lockForConfiguration:&error]) {
[_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNone];
[_device setFocusModeLockedWithLensPosition:0.5 completionHandler:nil];
//[device setFocusMode:AVCaptureFocusModeAutoFocus];
// _device.focusPointOfInterest = CGPointMake(0.5,0.5);
// device.videoZoomFactor = 1.0 + 10;
[_device unlockForConfiguration];
}
// if ([_device isSmoothAutoFocusEnabled])
// {
// _device.smoothAutoFocusEnabled = NO;
// }
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device
error:&error];
if (!input) {
// Handling the error appropriately.
}
[_session addInput:input];
// For scanning QR code
AVCaptureMetadataOutput *metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
// Have to add the output before setting metadata types
[_session addOutput:metaDataOutput];
[metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
[metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
//For saving the image to camera roll
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[_stillImageOutput setOutputSettings:outputSettings];
[_session addOutput:_stillImageOutput];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:_session];
// Assign session to an ivar.
[self setSession:_session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
NSLog(#"Adding video preview layer");
[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under our view controller's existing controls
NSLog(@"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
[self.previewLayer setAffineTransform:CGAffineTransformMakeScale(3.5, 3.5)];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
[self switchONFlashLight];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
[connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
if (metadataObjects != nil && [metadataObjects count] > 0) {
AVMetadataMachineReadableCodeObject *metadataObj = [metadataObjects objectAtIndex:0];
// if ([_device lockForConfiguration:nil]){
// [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
// _device.focusPointOfInterest = CGPointMake(metadataObj.bounds.origin.x, metadataObj.bounds.origin.y);
// [_device unlockForConfiguration];
// }
if ([[metadataObj type] isEqualToString:AVMetadataObjectTypeQRCode]) {
[_lblStatus performSelectorOnMainThread:@selector(setText:) withObject:[metadataObj stringValue] waitUntilDone:NO];
}
}
}
Aiming for iOS 8 and latest iPhones only.
After doing a lot of research and getting input from photographers, I'm sharing my answers for future readers.
As of iOS 8, Apple provides only three focus modes, which are:
typedef NS_ENUM(NSInteger, AVCaptureFocusMode) {
AVCaptureFocusModeLocked = 0,
AVCaptureFocusModeAutoFocus = 1,
AVCaptureFocusModeContinuousAutoFocus = 2,
} NS_AVAILABLE(10_7, 4_0);
To focus on an object very near the lens we can use AVCaptureAutoFocusRangeRestrictionNear,
but in my case, due to the minimum focus distance of the iPhone cameras, it is not possible to get a clear image of my codes.
AFAIK there is no way to get image data from the metadata output; my question itself was wrong. However, you can get the image buffers from the video frames. Check out Capturing Video Frames as UIImage Objects for more info.
setFocusModeLockedWithLensPosition locks the focus mode and lets us set a specific lens position, ranging from 0.0 to 1.0.
focusPointOfInterest doesn't change the focus mode; it just sets the point to focus on. The best example is tap-to-focus.
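For reference, a minimal sketch that combines the near range restriction with continuous autofocus on a point of interest (assuming _device is the AVCaptureDevice from the question) could look like this:
NSError *error = nil;
if ([_device lockForConfiguration:&error]) {
    // Restrict autofocus to the near range when the device supports it.
    if (_device.isAutoFocusRangeRestrictionSupported) {
        _device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
    }
    // Continuously refocus on the center of the frame.
    if (_device.isFocusPointOfInterestSupported &&
        [_device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        _device.focusPointOfInterest = CGPointMake(0.5, 0.5);
        _device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    [_device unlockForConfiguration];
}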

Capturing a still image from camera in OpenCV Mat format

I am developing an iOS application and trying to get a still image snapshot from the camera using capture session but I'm unable to convert it successfully to an OpenCV Mat.
The still image output is created using this code:
- (void)createStillImageOutput;
{
// setup still image output with jpeg codec
self.stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
[self.stillImageOutput setOutputSettings:outputSettings];
[self.captureSession addOutput:self.stillImageOutput];
for (AVCaptureConnection *connection in self.stillImageOutput.connections) {
for (AVCaptureInputPort *port in [connection inputPorts]) {
if ([port.mediaType isEqual:AVMediaTypeVideo]) {
self.videoCaptureConnection = connection;
break;
}
}
if (self.videoCaptureConnection) {
break;
}
}
NSLog(#"[Camera] still image output created");
}
And then attempting to capture a still image using this code:
[self.stillImageOutput captureStillImageAsynchronouslyFromConnection:self.videoCaptureConnection
completionHandler:
^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
if (error == nil && imageSampleBuffer != NULL)
{
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
}
}];
I need a way to create an OpenCV Mat based on the pixel data in the buffer.
I have tried creating a Mat using this code, which I have taken from the OpenCV CvVideoCamera class:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void* bufferAddress;
size_t width;
size_t height;
size_t bytesPerRow;
CGColorSpaceRef colorSpace;
CGContextRef context;
int format_opencv;
OSType format = CVPixelBufferGetPixelFormatType(imageBuffer);
if (format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
format_opencv = CV_8UC1;
bufferAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
width = CVPixelBufferGetWidthOfPlane(imageBuffer, 0);
height = CVPixelBufferGetHeightOfPlane(imageBuffer, 0);
bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0);
} else { // expect kCVPixelFormatType_32BGRA
format_opencv = CV_8UC4;
bufferAddress = CVPixelBufferGetBaseAddress(imageBuffer);
width = CVPixelBufferGetWidth(imageBuffer);
height = CVPixelBufferGetHeight(imageBuffer);
bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
}
cv::Mat image(height, width, format_opencv, bufferAddress, bytesPerRow);
but it fails to get the actual height and width of the image from the CVPixelBufferGetWidth and CVPixelBufferGetHeight calls, and so the creation of the Mat fails.
I'm aware that I can create a UIImage based on the pixel data using this code:
UIImage* newImage = [UIImage imageWithData:jpegData];
But I would prefer to construct a cv::Mat directly, as is done in the OpenCV CvVideoCamera class, because I want to handle the image in OpenCV only and I don't want to spend time converting again, risk losing quality, or run into orientation issues (besides, the UIImage-to-Mat conversion function provided by OpenCV is causing memory leaks for me and not freeing the memory).
Please advise how I can get the image as an OpenCV Mat.
Thanks in advance.
I have found the solution to my problem.
The solution is to override the "createVideoPreviewLayer" method in the OpenCV camera class.
It should look like this:
- (void)createVideoPreviewLayer;
{
self.parentView.layer.sublayers = nil;
if (captureVideoPreviewLayer == nil) {
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc]
initWithSession:self.captureSession];
}
if (self.parentView != nil) {
captureVideoPreviewLayer.frame = self.parentView.bounds;
captureVideoPreviewLayer.videoGravity =
AVLayerVideoGravityResizeAspectFill;
[self.parentView.layer addSublayer:captureVideoPreviewLayer];
}
NSLog(#"[Camera] created AVCaptureVideoPreviewLayer");
}
Adding this line to the "createVideoPreviewLayer" method is what solves the problem:
self.parentView.layer.sublayers = nil;
You also need to use the pause method instead of the stop method.
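As a side note: if the goal is only to get a cv::Mat out of the JPEG data produced in the question's completion handler, one alternative (not the approach described above) is to let OpenCV decode it with cv::imdecode. A minimal sketch:
// Sketch: wrap the NSData bytes (no copy) and decode the JPEG into a BGR Mat.
// cv::imdecode lives in imgcodecs (highgui in OpenCV 2.x).
cv::Mat rawData(1, (int)jpegData.length, CV_8UC1, (void *)jpegData.bytes);
cv::Mat bgrImage = cv::imdecode(rawData, cv::IMREAD_COLOR); // 3-channel, owns its data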

Why is my image not updating when I call it from the capture output protocol?

I am trying to do something very simple. I want to display the video layer in full screen, and once every second update a UIImage with the CMSampleBufferRef I got at that time. However I am running into two different problems. The first one is that changing:
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
will also modify the video preview layer. I thought it would only modify the rate at which AVFoundation sends information to the delegate, but it seems to affect the entire session (which, in hindsight, makes sense). So this makes my video preview update only once per second. I guess I could omit those lines and simply add a timer in the delegate so that every second it sends the CMSampleBufferRef to another method for processing, but I don't know if that is the right approach.
My second problem is that the UIImageView is NOT updating, or sometimes it updates once and doesn't change afterwards. I am using this method to update it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
//NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer] ;
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
[imageView setImage:image];
// Add your code here that uses the image.
NSLog(#"update");
}
which I took from the Apple examples. The method is being called correctly every second, which I checked by watching for the "update" log message, but the image is not changing at all. Also, is the sampleBuffer automatically destroyed, or do I have to release it?
These are the other two important methods:
View Did Load:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
session.sessionPreset = AVCaptureSessionPreset640x480;
}
else {
// Handle the failure.
NSLog(#"Cannot set session preset to 640x480");
}
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"Could create input: %#", error);
}
if ([session canAddInput:input]) {
[session addInput:input];
}
else {
// Handle the failure.
NSLog(#"Could not add input");
}
// DATA OUTPUT
dataOutput = [[AVCaptureVideoDataOutput alloc] init];
if ([session canAddOutput:dataOutput]) {
[session addOutput:dataOutput];
dataOutput.videoSettings =
[NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey: (id)kCVPixelBufferPixelFormatTypeKey];
//dataOutput.minFrameDuration = CMTimeMake(1, 15);
//dataOutput.minFrameDuration = CMTimeMake(1, 1);
AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
}
else {
// Handle the failure.
NSLog(#"Could not add output");
}
// DATA OUTPUT END
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[dataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
[captureVideoPreviewLayer setPosition:videoLayer.layer.position];
[videoLayer.layer addSublayer:captureVideoPreviewLayer];
[session startRunning];
}
Convert the CMSampleBufferRef to a UIImage:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the base address of the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Thanks in advance for any help you can give me.
From the documentation for the captureOutput:didOutputSampleBuffer:fromConnection: method:
This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.
This means that if you need to update the UI using the buffer in this method, you need to do that on the main queue, like this:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer: (CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^{
[imageView setImage:image];
});
}
EDIT: About your first question:
I'm not sure I understand the problem, but if you want to update the image only once every second, you can also keep a "lastImageUpdateTime" value to compare against in the "didOutputSampleBuffer" method: check whether enough time has passed, update the image only then, and otherwise ignore the sample buffer.
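A minimal sketch of that idea (using a hypothetical lastImageUpdateTime ivar of type NSTimeInterval, which is not in your code) could look like this:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    if (now - lastImageUpdateTime < 1.0) {
        return; // less than a second since the last UI update, drop this frame
    }
    lastImageUpdateTime = now;
    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}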
