I have an iOS app using the camera to take pictures.
It uses a path (CGPath) drawn on the screen (for example, a rectangle), and it takes a photo within that path. The app supports only portrait orientation.
For that to happen I use: AVCaptureSession, AVCaptureStillImageOutput, AVCaptureDevice, AVCaptureVideoPreviewLayer
(I guess these are all familiar to developers making this kind of app.)
My code uses UIScreen.mainScreen().bounds and UIScreen.mainScreen().scale to adapt to various devices and do its job.
It all goes fine (on iPhone 5, iPhone 6), until I try the app on an iPhone 6+ (running iOS 9.3.1) and see that something is wrong.
The picture taken is no longer laid out in the right place.
I had someone try it on an iPhone 6+, and by adding an appropriate message I was able to confirm that UIScreen.mainScreen().scale is what it should be: 3.0.
I have put the proper-size launch images (640 × 960, 640 × 1136, 750 × 1334, 1242 × 2208) in the project.
So what could be the problem?
I use the code below in an app, and it works on the 6+.
The code starts an AVCaptureSession, pulling video input from the device's camera.
As it does so, it continuously updates the runImage variable from the captureOutput: delegate method.
When the user wants to take a picture, the takePhoto method is called. This method creates a temporary UIImageView and feeds the runImage into it. That temporary UIImageView is then used to draw another variable, called currentImage, at the scale of the device.
The currentImage, in my case, is square, matching the previewHolder frame, but I suppose you can make anything you want.
Declare these:
AVCaptureDevice * device;
AVCaptureDeviceInput * input;
AVCaptureVideoDataOutput * output;
AVCaptureSession * session;
AVCaptureVideoPreviewLayer * preview;
AVCaptureConnection * connection;
UIImage * runImage;
Load scanner:
-(void)loadScanner
{
device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
output = [AVCaptureVideoDataOutput new];
session = [AVCaptureSession new];
[session setSessionPreset:AVCaptureSessionPresetPhoto];
[session addInput:input];
[session addOutput:output];
[output setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
[output setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
preview.frame = previewHolder.bounds;
connection = preview.connection;
[connection setVideoOrientation:AVCaptureVideoOrientationPortrait];
[previewHolder.layer insertSublayer:preview atIndex:0];
}
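One thing this snippet does not show is starting the session, so presumably that happens elsewhere in the original project. A minimal sketch of the assumed call site (e.g. in viewDidLoad), just to make the flow explicit:
[self loadScanner];
// Nothing arrives at the delegate or the preview until the session is running.
[session startRunning];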
Ongoing image capture, updates runImage var.
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
runImage = [self imageForBuffer:sampleBuffer];
}
Related to above.
-(UIImage *)imageForBuffer:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
UIImage * rotated = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0 orientation:UIImageOrientationRight];
return rotated;
}
On take photo:
-(void)takePhoto
{
UIImageView * temp = [UIImageView new];
temp.frame = previewHolder.frame;
temp.image = runImage;
temp.contentMode = UIViewContentModeScaleAspectFill;
temp.clipsToBounds = true;
[self.view addSubview:temp];
UIGraphicsBeginImageContextWithOptions(temp.bounds.size, NO, [UIScreen mainScreen].scale);
[temp drawViewHierarchyInRect:temp.bounds afterScreenUpdates:YES];
currentImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[temp removeFromSuperview];
//further code...
}
In case someone else has the same issue, here is what made things go wrong for me:
I was naming a file xyz@2x.png.
When UIScreen.mainScreen().scale == 3.0 (the case on an iPhone 6+), it has to be named xyz@3x.png.
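For context, UIKit resolves the scale-suffixed file automatically when you load by base name, so the suffix has to match the device's scale. A small illustration (the name xyz is just a placeholder):
// On a 2x device this loads xyz@2x.png; on a 3x device such as the iPhone 6+ it looks for xyz@3x.png.
// If only the @2x file exists, the image comes back at the wrong scale and pixel-based layout math can end up off.
UIImage *image = [UIImage imageNamed:@"xyz"];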
I'm developing a QR code reader. My codes are 1 cm in length and width. I'm using AVFoundation metadata to capture the machine-readable codes, and it works fine. But at the same time I need to take a picture of the QR code with the logo (which is located in the middle of the QR code). So I'm using AVCaptureVideoDataOutput and didOutputSampleBuffer to get the image stills. The problem is the clarity of the image: the edges of the codes and the logo always look blurry. So I did some research on manual camera controls and made some code changes for manual focusing, but no luck so far.
How do I focus on nearby objects (tiny, and about 10 cm away from the camera)?
Is there any other way of getting the image from the metadata after a successful scan?
What is the difference between setFocusModeLockedWithLensPosition: and focusPointOfInterest?
Here is my code (part of it):
// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
NSError *error = nil;
// Create the session
_session = [[AVCaptureSession alloc] init];
// Configure the session to produce lower resolution video frames, if your
// processing algorithm can cope. Here the high-quality preset is chosen.
_session.sessionPreset = AVCaptureSessionPresetHigh;
// Find a suitable AVCaptureDevice
_device = [AVCaptureDevice
defaultDeviceWithMediaType:AVMediaTypeVideo];
if ([_device lockForConfiguration:&error]) {
[_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNone];
[_device setFocusModeLockedWithLensPosition:0.5 completionHandler:nil];
//[device setFocusMode:AVCaptureFocusModeAutoFocus];
// _device.focusPointOfInterest = CGPointMake(0.5,0.5);
// device.videoZoomFactor = 1.0 + 10;
[_device unlockForConfiguration];
}
// if ([_device isSmoothAutoFocusEnabled])
// {
// _device.smoothAutoFocusEnabled = NO;
// }
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_device
error:&error];
if (!input) {
// Handle the error appropriately.
}
[_session addInput:input];
// For scanning QR code
AVCaptureMetadataOutput *metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
// Have to add the output before setting metadata types
[_session addOutput:metaDataOutput];
[metaDataOutput setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];
[metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
//For saving the image to camera roll
_stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
[_stillImageOutput setOutputSettings:outputSettings];
[_session addOutput:_stillImageOutput];
// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[_session addOutput:output];
// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
// Specify the pixel format
output.videoSettings =
[NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Start the session running to start the flow of data
[self startCapturingWithSession:_session];
// Assign session to an ivar.
[self setSession:_session];
}
- (void)startCapturingWithSession: (AVCaptureSession *) captureSession
{
NSLog(#"Adding video preview layer");
[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession]];
[self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
//----- DISPLAY THE PREVIEW LAYER -----
//Display it full screen under out view controller existing controls
NSLog(#"Display the preview layer");
CGRect layerRect = [[[self view] layer] bounds];
[self.previewLayer setBounds:layerRect];
[self.previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect),
CGRectGetMidY(layerRect))];
[self.previewLayer setAffineTransform:CGAffineTransformMakeScale(3.5, 3.5)];
//[[[self view] layer] addSublayer:[[self CaptureManager] self.previewLayer]];
//We use this instead so it goes on a layer behind our UI controls (avoids us having to manually bring each control to the front):
UIView *CameraView = [[UIView alloc] init];
[[self view] addSubview:CameraView];
[self.view sendSubviewToBack:CameraView];
[[CameraView layer] addSublayer:self.previewLayer];
//----- START THE CAPTURE SESSION RUNNING -----
[captureSession startRunning];
[self switchONFlashLight];
}
// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// Create a UIImage from the sample buffer data
[connection setVideoOrientation:AVCaptureVideoOrientationLandscapeLeft];
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
}
// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
if (metadataObjects != nil && [metadataObjects count] > 0) {
AVMetadataMachineReadableCodeObject *metadataObj = [metadataObjects objectAtIndex:0];
// if ([_device lockForConfiguration:nil]){
// [_device setAutoFocusRangeRestriction:AVCaptureAutoFocusRangeRestrictionNear];
// _device.focusPointOfInterest = CGPointMake(metadataObj.bounds.origin.x, metadataObj.bounds.origin.y);
// [_device unlockForConfiguration];
// }
if ([[metadataObj type] isEqualToString:AVMetadataObjectTypeQRCode]) {
[_lblStatus performSelectorOnMainThread:@selector(setText:) withObject:[metadataObj stringValue] waitUntilDone:NO];
}
}
}
Aiming for iOS 8 and latest iPhones only.
After doing extensive research and getting input from photographers, I'm sharing my answers for future readers.
As of iOS 8, Apple provides only three focus modes, which are:
typedef NS_ENUM(NSInteger, AVCaptureFocusMode) {
AVCaptureFocusModeLocked = 0,
AVCaptureFocusModeAutoFocus = 1,
AVCaptureFocusModeContinuousAutoFocus = 2,
} NS_AVAILABLE(10_7, 4_0);
To focus on an object that is very near to the lens we can use AVCaptureAutoFocusRangeRestrictionNear,
but for my need, due to the minimum focus distance of the iPhone cameras, it is not possible to get a clear image of my codes.
AFAIK there is no way to get image data from the metadata; my question itself was wrong. However, you can get the image buffers from video frames. Check out Capturing Video Frames as UIImage Objects for more info.
setFocusModeLockedWithLensPosition: locks the focus mode and lets us set a particular lens position, ranging from 0.0 to 1.0.
focusPointOfInterest doesn't change the focus mode; it just sets the point to focus on. The best example would be tap-to-focus.
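To make the difference concrete, here is a minimal sketch of both options, assuming _device is the active AVCaptureDevice and the capability checks pass on your hardware (in practice you would pick one option, not both):
NSError *error = nil;
if ([_device lockForConfiguration:&error]) {
    // Option 1: lock focus at a fixed lens position (0.0 = nearest the lens can focus, 1.0 = farthest).
    if ([_device isFocusModeSupported:AVCaptureFocusModeLocked]) {
        [_device setFocusModeLockedWithLensPosition:0.1 completionHandler:nil];
    }
    // Option 2: keep autofocus, but point it at a spot and restrict it to the near range (tap-to-focus style).
    if (_device.isFocusPointOfInterestSupported &&
        [_device isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        _device.focusPointOfInterest = CGPointMake(0.5, 0.5); // normalized (0,0)-(1,1) coordinates
        _device.focusMode = AVCaptureFocusModeContinuousAutoFocus;
        if (_device.isAutoFocusRangeRestrictionSupported) {
            _device.autoFocusRangeRestriction = AVCaptureAutoFocusRangeRestrictionNear;
        }
    }
    [_device unlockForConfiguration];
}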
I use AVCaptureSession to receive images from the iPhone camera. It returns images in a delegate function. In this function, I create an image and call another thread to process it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
// static bool isFirstTime = true;
// if (isFirstTime == false) {
// return;
// }
// isFirstTime = false;
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//Lock the image buffer
CVPixelBufferLockBaseAddress(imageBuffer,0);
//Get information about the image
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
//Create a CGImageRef from the CVImageBufferRef
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst/*kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast*/);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
// release some components
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
UIImage* uiimage = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationDown];
CGImageRelease(newImage);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
//[self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiimage waitUntilDone:YES];
if(processImageThread == nil || (processImageThread != nil && processImageThread.isExecuting == false)){
[processImageThread release];
processImageThread = [[NSThread alloc] initWithTarget:self selector:@selector(processImage:) object:uiimage];
[processImageThread start];
}
[pool drain];
}
I process the image on another thread, using CIFilter:
- (void) processImage:(UIImage*)image{
NSLog(#"Begin process");
CIImage* ciimage = [CIImage imageWithCGImage:image.CGImage];
CIFilter* filter = [CIFilter filterWithName:#"CIColorMonochrome"];// keysAndValues:kCIInputImageKey, ciimage, "inputRadius", [NSNumber numberWithFloat:10.0f], nil];
[filter setDefaults];
[filter setValue:ciimage forKey:#"inputImage"];
[filter setValue:[CIColor colorWithRed:0.5 green:0.5 blue:1.0] forKey:#"inputColor"];
CIImage* ciResult = [filter outputImage];
CIContext* context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:ciResult fromRect:[ciResult extent]];
UIImage* uiResult = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationRight];
CFRelease(cgImage);
[self performSelectorOnMainThread:@selector(setImageForImageView:) withObject:uiResult waitUntilDone:YES];
NSLog(@"End process");
}
And set the result image on a layer:
- (void) setImageForImageView:(UIImage*)image{
self.view.layer.contents = image.CGImage;
}
But it is very laggy. I found an open source project that creates a real-time image effect application very smoothly (it also uses AVCaptureSession). So, what is the difference here (between my code and their code)? How do I create a real-time image effect processing application?
This is the link of open source: https://github.com/gobackspaces/DLCImagePickerController#readme
The open source sample that you specified in your question uses an outstanding open source library, GPUImage by Brad Larson, for real-time photo and video processing. This library uses GPU-based filters (OpenGL ES 2.0) for image processing. Comparatively, it is faster than the CPU-based image filters you are using via the Core Image framework.
GPUImage
The GPUImage framework is a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies. In comparison to Core Image (part of iOS 5.0), GPUImage allows you to write your own custom filters, supports deployment to iOS 4.0, and has a simpler interface. However, it currently lacks some of the more advanced features of Core Image, such as facial detection.
For massively parallel operations like processing images or live video frames, GPUs have some significant performance advantages over CPUs. On an iPhone 4, a simple image filter can be over 100 times faster to perform on the GPU than an equivalent CPU-based filter.
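For comparison, a live GPU-filtered camera preview with GPUImage takes only a few lines. This is a rough sketch along the lines of the library's README example; the sepia filter and the view setup are placeholders:
#import "GPUImage.h"

// Camera -> GPU filter -> on-screen view, all processed on the GPU.
GPUImageVideoCamera *videoCamera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init]; // any GPUImage filter can go here

GPUImageView *filteredView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:filteredView];

[videoCamera addTarget:filter];
[filter addTarget:filteredView];
[videoCamera startCameraCapture];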
I need to obtain a UIImage from the uncompressed image data of a CMSampleBufferRef. I'm using the code:
[captureStillImageOutput captureStillImageAsynchronouslyFromConnection:connection
completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error)
{
// that famous function from Apple docs found on a lot of websites
// does NOT work for still images
UIImage *capturedImage = [self imageFromSampleBuffer:imageSampleBuffer];
}];
http://developer.apple.com/library/ios/#qa/qa1702/_index.html is a link to the imageFromSampleBuffer function.
But it does not work properly. :(
There is a jpegStillImageNSDataRepresentation: method, but it gives compressed data (well, because JPEG).
How can I get a UIImage created from the rawest, uncompressed data after capturing a still image?
Maybe I should specify some settings for the video output? I'm currently using these:
captureStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
captureStillImageOutput.outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
I've noticed that the output has a default value for AVVideoCodecKey, which is AVVideoCodecJPEG. Can it be avoided in any way, or does it even matter when capturing a still image?
I found something here: Raw image data from camera like "645 PRO", but I just need a UIImage, without using OpenCV, OpenGL ES, or other third-party libraries.
The imageFromSampleBuffer method does work; in fact, I'm using a modified version of it. But if I remember correctly, you need to set the outputSettings right. I think you need to set the key to kCVPixelBufferPixelFormatTypeKey and the value to kCVPixelFormatType_32BGRA.
So for example:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* outputSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[newStillImageOutput setOutputSettings:outputSettings];
EDIT
I am using those settings to take still images, not video.
Is your sessionPreset AVCaptureSessionPresetPhoto? There may be problems with that:
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession setSessionPreset:AVCaptureSessionPresetPhoto];
EDIT 2
The part about saving it to a UIImage is identical to the one from the documentation. That's the reason I was asking about other origins of the problem, but I guess that was just grasping at straws.
There is another way I know of, but that requires OpenCV.
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
I guess that is of no help to you, sorry. I don't know enough to think of other origins for your problem.
Here's a more efficient way:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];
- (NSData *) imageToBuffer:(CMSampleBufferRef)source {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);
NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return data;
}
I am using AVFoundation to capture video frames, process them with OpenCV, and display the result in a UIImageView on the new iPad. The OpenCV processing does the following ("inImg" is the video frame):
cv::Mat testROI = inImg.rowRange(0,100);
testROI = testROI.colRange(0,10);
testROI.setTo(255); // this is a BGRA frame.
However, instead of getting a vertical white bar (100 rows x 10 cols) in the top left corner of the frame, I got 100 stair-like horizontal lines from the top right corner to the bottom left, each 10 pixels long.
After some investigation, I realized that the width of the displayed frame seems to be 8 pixel wider than the cv::Mat. (i.e. the 9th pixel of the 2nd row is right below the 1st pixel of the 1st row.).
The video frame itself is shown correctly (no displacement between rows).
The problem appears when the AVCaptureSession.sessionPreset is AVCaptureSessionPresetMedium (frame rows=480, cols=360) but does not appear when it is AVCaptureSessionPresetHigh (frame rows=640, cols=480).
There are 360 columns shown in full screen. (I tried traversing and modifying the cv::Mat pixel by pixel. Pixels 1-360 were shown correctly, 361-368 disappeared, and 369 was shown right under pixel 1.)
I tried combinations of imageview.contentMode (UIViewContentModeScaleAspectFill and UIViewContentModeScaleAspectFit) and imageview.clipsToBounds (YES/NO) but no luck.
What could be the problem?
Thank you very much.
I use the following code to create the AVCaptureSession:
NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
if ([devices count] == 0) {
NSLog(#"No video capture devices found");
return NO;
}
for (AVCaptureDevice *device in devices) {
if ([device position] == AVCaptureDevicePositionFront) {
_captureDevice = device;
}
}
NSError* error_exp = nil;
if ([_captureDevice lockForConfiguration:&error_exp]) {
[_captureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
[_captureDevice unlockForConfiguration];
}
// Create the capture session
_captureSession = [[AVCaptureSession alloc] init];
_captureSession.sessionPreset = AVCaptureSessionPresetMedium;
// Create device input
NSError *error = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_captureDevice error:&error];
// Create and configure device output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
[_videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
_videoOutput.alwaysDiscardsLateVideoFrames = YES;
OSType format = kCVPixelFormatType_32BGRA;
_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format]forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Connect up inputs and outputs
if ([_captureSession canAddInput:input]) {
[_captureSession addInput:input];
}
if ([_captureSession canAddOutput:_videoOutput]) {
[_captureSession addOutput:_videoOutput];
}
AVCaptureConnection * captureConnection = [_videoOutput connectionWithMediaType:AVMediaTypeVideo];
if (captureConnection.isVideoMinFrameDurationSupported)
captureConnection.videoMinFrameDuration = CMTimeMake(1, 60);
if (captureConnection.isVideoMaxFrameDurationSupported)
captureConnection.videoMaxFrameDuration = CMTimeMake(1, 60);
if (captureConnection.supportsVideoMirroring)
[captureConnection setVideoMirrored:NO];
[captureConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
When a frame is received, the following is called:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
AVCaptureConnection *currentConnection = [[_videoOutput connections] objectAtIndex:0];
AVCaptureVideoOrientation videoOrientation = [currentConnection videoOrientation];
CGImageRef quartzImage;
// For color mode a 4-channel cv::Mat is created from the BGRA data
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);
if ([self doFrame]) { // a flag to switch processing ON/OFF
[self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation]; // "processFrame" is the opencv function shown above
}
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
quartzImage = [self.context createCGImage:ciImage fromRect:ciImage.extent];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationUp];
CGImageRelease(quartzImage);
[self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
}
}
I assume you're using the constructor Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP) and that AUTO_STEP is 0 and assumes that the row stride is width*bytesPerPixel.
This is generally wrong; it's very common to align rows to some larger boundary. In this case, 360 is not a multiple of 16 but 368 is, which strongly suggests that it's aligning to 16-pixel boundaries (perhaps to assist algorithms that process in 16×16 blocks?).
Try
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, CVPixelBufferGetBytesPerRow(pixelBuffer));
I am trying to do something very simple. I want to display the video layer in full screen, and once every second update a UIImage with the CMSampleBufferRef I got at that time. However, I am running into two different problems. The first one is that changing:
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
will also modify the video preview layer. I thought it would only modify the rate at which AVFoundation sends the information to the delegate, but it seems to affect the entire session (which is more obvious). So this makes my video update only once every second. I guess I could omit those lines and simply add a timer in the delegate so that every second it sends the CMSampleBufferRef to another method to process it, but I don't know if this is the right approach.
My second problem is that the UIImageView is NOT updating, or sometimes it just updates once and doesn't change afterwards. I am using this method to update it:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
//NSData *jpeg = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer] ;
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
[imageView setImage:image];
// Add your code here that uses the image.
NSLog(#"update");
}
I took this from the Apple examples. The method is being called correctly every second, which I checked by reading the "update" message. But the image is not changing at all. Also, is the sampleBuffer automatically destroyed, or do I have to release it?
These are the other two important methods:
View Did Load:
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
session.sessionPreset = AVCaptureSessionPreset640x480;
}
else {
// Handle the failure.
NSLog(#"Cannot set session preset to 640x480");
}
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
// Handle the error appropriately.
NSLog(#"Could create input: %#", error);
}
if ([session canAddInput:input]) {
[session addInput:input];
}
else {
// Handle the failure.
NSLog(#"Could not add input");
}
// DATA OUTPUT
dataOutput = [[AVCaptureVideoDataOutput alloc] init];
if ([session canAddOutput:dataOutput]) {
[session addOutput:dataOutput];
dataOutput.videoSettings =
[NSDictionary dictionaryWithObject: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
forKey: (id)kCVPixelBufferPixelFormatTypeKey];
//dataOutput.minFrameDuration = CMTimeMake(1, 15);
//dataOutput.minFrameDuration = CMTimeMake(1, 1);
AVCaptureConnection *connection = [dataOutput connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoMaxFrameDuration:CMTimeMake(1, 1)];
[connection setVideoMinFrameDuration:CMTimeMake(1, 1)];
}
else {
// Handle the failure.
NSLog(#"Could not add output");
}
// DATA OUTPUT END
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[dataOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
[captureVideoPreviewLayer setBounds:videoLayer.layer.bounds];
[captureVideoPreviewLayer setPosition:videoLayer.layer.position];
[videoLayer.layer addSublayer:captureVideoPreviewLayer];
[session startRunning];
}
Convert the CMSampleBufferRef to UIImage:
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
// Get a CMSampleBuffer's Core Video image buffer for the media data
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// Lock the base address of the pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Get the number of bytes per row for the pixel buffer
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
// Get the number of bytes per row for the pixel buffer
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
// Get the pixel buffer width and height
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
// Create a device-dependent RGB color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
// Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
// Free up the context and color space
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
// Create an image object from the Quartz image
UIImage *image = [UIImage imageWithCGImage:quartzImage];
// Release the Quartz image
CGImageRelease(quartzImage);
return (image);
}
Thanks in advance for any help you can give me.
From the documentation for the captureOutput:didOutputSampleBuffer:fromConnection: method:
This method is called on the dispatch queue specified by the output’s sampleBufferCallbackQueue property.
This means that if you need to update the UI using the buffer in this method, you need to do that on the main queue, like this:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer: (CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection {
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^{
[imageView setImage:image];
});
}
EDIT: About your first question:
I'm not sure I fully understand the problem, but if you want to update the image only once every second, you can also keep a "lastImageUpdateTime" value to compare against in the didOutputSampleBuffer method: check whether enough time has passed, update the image only then, and ignore the sample buffer otherwise.
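A minimal sketch of that idea, assuming an NSTimeInterval ivar named lastImageUpdateTime (the name is illustrative) and leaving the capture frame rate untouched:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    if (now - lastImageUpdateTime < 1.0) {
        return; // less than a second since the last update, so skip this frame
    }
    lastImageUpdateTime = now;

    UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
    dispatch_async(dispatch_get_main_queue(), ^{
        [imageView setImage:image];
    });
}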