Cropping with Core Image and AV Foundation - iOS

I am taking a photo using AV Foundation, and then I want to crop that image into a square that fits my UI. In the UI, there are two semi-transparent views that show what's being captured, and I want to crop the image to include just what's between the bottom of the top view and the top of the bottom view:
topView
|
area I want to capture and crop
|
bottom View
The actual capturing of the full image works fine. The problem is using Core Image to crop the image successfully.
// Custom function that takes a photo asynchronously from the capture session and gives
// the photo and error back in a block. Works fine.
[self.captureSession takePhotoWithCompletionBlock:^(UIImage *photo, NSError *error) {
    if (photo) {
        CIImage *imageToCrop = [CIImage imageWithCGImage:photo.CGImage];

        // Find, proportionately, the y-value at which I should start the
        // cropping, based on my UI
        CGFloat beginningYOfCrop = topView.frame.size.height * photo.size.height / self.view.frame.size.height;
        CGFloat endYOfCrop = CGRectGetMinY(bottomView.frame) * photo.size.height / self.view.frame.size.height;

        CGRect croppedFrame = CGRectMake(0,
                                         beginningYOfCrop,
                                         photo.size.width,
                                         endYOfCrop - beginningYOfCrop);

        // Attempt to transform the croppedFrame to fit Core Image's
        // different coordinate system
        CGAffineTransform coordinateTransform = CGAffineTransformMakeScale(1.0, -1.0);
        coordinateTransform = CGAffineTransformTranslate(coordinateTransform,
                                                         0,
                                                         -photo.size.height);
        CGRectApplyAffineTransform(croppedFrame, coordinateTransform);

        imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];

        // Orient the image correctly
        UIImage *filteredImage = [UIImage imageWithCIImage:imageToCrop
                                                     scale:1.0
                                               orientation:UIImageOrientationRight];
    }
}];
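One thing worth noting about the snippet above: CGRectApplyAffineTransform returns the transformed rectangle rather than modifying its argument in place, so as written the flipped rect is computed and then discarded. A minimal sketch of that step with the return value captured (same variable names as above, untested against the rest of the pipeline):

// Flip the rect into Core Image's bottom-left-origin coordinate system.
CGAffineTransform coordinateTransform = CGAffineTransformMakeScale(1.0, -1.0);
coordinateTransform = CGAffineTransformTranslate(coordinateTransform, 0, -photo.size.height);

// CGRectApplyAffineTransform returns a new rect; assign it back before cropping.
croppedFrame = CGRectApplyAffineTransform(croppedFrame, coordinateTransform);
imageToCrop = [imageToCrop imageByCroppingToRect:croppedFrame];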

Related

Draw rectangles on image view.image not scaling properly - iOS

I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to a remote service (Microsoft face detection) for processing.
The remote service returns JSON of CGRects for each detected face in the image.
I feed the JSON into my UIView to draw the rectangles. I initialize my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height}. <-- my thinking: a frame equivalent to the size of the imageView.image
I add my UIView as a subview of self.imageView OR self.view (tried both).
End Result:
The rectangles are drawn, but they do not appear correctly on the imageView.image. That is, the CGRects generated for each of the faces are supposed to be relative to the image's coordinate space, as returned by the remote service, but they appear off once I add my custom view.
I believe I may have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test), I can get an approximation, but it's still off. The Microsoft documentation states that the detected faces are returned with rectangles indicating the location of the faces in the image, in pixels. Yet aren't they being treated as points when drawing my path?
Also, shouldn't I be initializing my view with a frame equivalent to the imageView.image's frame, so that the view matches the coordinate space of the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing it by 2.
I am new to iOS and broke away from the books to work on this as a self-exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (which includes the rectangle for each face) via the following method, which gets called during (void)viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {

    // build an array of CGRect dicts off of JSON returned from the analyzed image
    NSMutableArray *array = [self analizeImage:self.imageView.image];

    // enumerate over array using block - each obj in array represents one face
    [array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {

        // build dictionary of rects and attributes for the face
        NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];

        // initialize face model object with dictionary
        ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];

        NSLog(@"%@", face.faceId);
        NSLog(@"%d", face.age);
        NSLog(@"%@", face.gender);
        NSLog(@"%f", face.faceRect.origin.x);
        NSLog(@"%f", face.faceRect.origin.y);
        NSLog(@"%f", face.faceRect.size.height);
        NSLog(@"%f", face.faceRect.size.width);

        // define frame for subview containing face rectangle
        CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);

        // initialize rectangle subview with face info
        ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];

        // add view as subview of imageview (?)
        [self.imageView addSubview:faceRect];

    }];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;

// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(@"Image dimensions: %lux%lu", imgWidth, imgHeight);

// Image size in pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(@"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);

// Image size in points
CGSize imgSizeInPoints = img.size;
NSLog(@"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);

// Calculate the image frame (within the image view) with a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));

// initialize rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];

// add view as subview of image view
[iv addSubview:faceRect];
}];
We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them is just a matter of screen density. For instance, on an iPhone 5, 1 pt = 2 px, while on a 3GS, 1 pt = 1 px. Look at the iOS documentation for more information.
The frame of your UIImageView is not the image frame. When Microsoft returns the frame of a face, it returns it in the coordinate space of the image, not in the frame of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad: is called as when you see it on screen.
Solution:
I'm just a read-only Objective-C developer, so I can't give you code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use the ratio.
Define the frame of a face using what you did. Then you have to move the coordinates you determined from the image's coordinate system to the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView, but you can quickly find information about it on the Internet.
If you use Auto Layout, add the frame of the face once Auto Layout finishes calculating the layout, i.e. when viewDidLayoutSubviews: is called. Or, better, use constraints to set your frame in the UIImageView.
I hope to be clear enough.
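For illustration, a minimal sketch of both conversions (the variable names are mine, and it assumes a UIViewContentModeScaleAspectFit image view and a face rectangle delivered in image pixels):

// Face rectangle from the service, in image pixels.
CGRect faceRectPixels = face.faceRect;

// 1. Pixels -> points in the image's own coordinate space
//    (bitmap pixels = image.size in points * image.scale).
CGFloat pixelsPerPoint = image.scale;
CGRect faceRectPoints = CGRectMake(faceRectPixels.origin.x / pixelsPerPoint,
                                   faceRectPixels.origin.y / pixelsPerPoint,
                                   faceRectPixels.size.width / pixelsPerPoint,
                                   faceRectPixels.size.height / pixelsPerPoint);

// 2. Image points -> UIImageView points for aspect-fit content mode.
CGFloat fitScale = fminf(CGRectGetWidth(imageView.bounds) / image.size.width,
                         CGRectGetHeight(imageView.bounds) / image.size.height);
CGFloat xOffset = 0.5f * (CGRectGetWidth(imageView.bounds) - image.size.width * fitScale);
CGFloat yOffset = 0.5f * (CGRectGetHeight(imageView.bounds) - image.size.height * fitScale);

CGRect faceRectInView = CGRectMake(xOffset + faceRectPoints.origin.x * fitScale,
                                   yOffset + faceRectPoints.origin.y * fitScale,
                                   faceRectPoints.size.width * fitScale,
                                   faceRectPoints.size.height * fitScale);

// faceRectInView can now be used as the frame of the rectangle subview
// added to the UIImageView.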
Some links:
iOS Drawing Concepts
Displayed Image Frame In UIImageView

Getting black (empty) image from UIView drawViewHierarchyInRect:afterScreenUpdates:

After successfully using UIView's drawViewHierarchyInRect:afterScreenUpdates: method (introduced in iOS 7) to obtain an image representation via UIGraphicsGetImageFromCurrentImageContext() for blurring, my app also needed to obtain just a portion of a view. I managed to get it in the following manner:
UIImage *image;
CGSize blurredImageSize = [_blurImageView frame].size;
UIGraphicsBeginImageContextWithOptions(blurredImageSize, YES, .0f);
[aView drawViewHierarchyInRect: [aView bounds] afterScreenUpdates: YES];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
This lets me retrieve aView's content according to _blurImageView's frame.
Now, however, I would need to obtain a portion of aView, but this time the portion would be "inside". Below is an image representing what I would like to achieve.
I have already tried creating a new graphics context, setting its size to the portion's size (the red box), and asking aView to draw in the rect that represents the red box's frame (with its superview's frame equal to aView's, of course), but the image obtained is all black (empty).
After a lot of tweaking I managed to find something that did the job; however, I heavily doubt this is the way to go.
Here’s my [edited-for-Stack Overflow] code that works:
- (UIImage *)imageOfPortionOfABiggerView
{
    UIView *bigViewToExtractFrom;
    UIImage *image;
    UIImage *wholeImage;
    CGImageRef _image;
    CGRect imageToExtractFrame;
    CGFloat screenScale = [[UIScreen mainScreen] scale];

    // have to scale the rect due to (I suppose) the screen's scale for Core Graphics.
    imageToExtractFrame = CGRectApplyAffineTransform(imageToExtractFrame, CGAffineTransformMakeScale(screenScale, screenScale));

    UIGraphicsBeginImageContextWithOptions([bigViewToExtractFrom bounds].size, YES, screenScale);
    [bigViewToExtractFrom drawViewHierarchyInRect:[bigViewToExtractFrom bounds] afterScreenUpdates:NO];
    wholeImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // obtain a CGImage[Ref] from another CGImage; this lets me specify the rect to extract.
    // However, since the image comes from a UIView, which is at 2x scale (Retina), if you
    // specify a rect in points CGImage will not take the screen's scale into consideration
    // and will process the rect in pixels. You'll end up with an image from the wrong rect
    // and at half the size.
    _image = CGImageCreateWithImageInRect([wholeImage CGImage], imageToExtractFrame);
    wholeImage = nil;

    // have to specify the image's scale due to CGImage not taking the screen's scale into consideration.
    image = [UIImage imageWithCGImage:_image scale:screenScale orientation:UIImageOrientationUp];
    CGImageRelease(_image);

    return image;
}
I hope this will help anyone who stumbled upon my issue. Feel free to improve my snippet.
Thanks
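As a possible simplification (an untested sketch, not necessarily better than the snippet above): instead of snapshotting the whole view and then cropping the resulting CGImage, you can begin a context that is only as large as the portion you want and draw the hierarchy offset by the portion's origin, letting drawViewHierarchyInRect: do the clipping. The method name and parameters here are illustrative:

- (UIImage *)imageOfPortion:(CGRect)portionRect ofView:(UIView *)bigView
{
    // Context is only as large as the portion; scale 0.0 uses the screen's scale.
    UIGraphicsBeginImageContextWithOptions(portionRect.size, YES, 0.0f);

    // Offset the drawing so that portionRect's origin lands at (0, 0) of the context.
    CGRect drawRect = CGRectMake(-portionRect.origin.x,
                                 -portionRect.origin.y,
                                 CGRectGetWidth(bigView.bounds),
                                 CGRectGetHeight(bigView.bounds));
    [bigView drawViewHierarchyInRect:drawRect afterScreenUpdates:NO];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}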

How to crop an image and set its cropping bounds from the original image

I have a UIImageView *userImage whose size is full screen and a UIImageView *imageSquare whose size is 320x320. The user will be able to play with userImage to make it bigger, change its position, etc. imageSquare is static and should be seen as the cropping view.
The code below can crop userImage to imageSquare.frame.size. My problem is that it crops from the top of userImage and not from imageSquare.frame.origin, meaning I need to crop it from the right X and Y coordinates. It's my first time trying to do this, and everything I've tried so far fails to crop from imageSquare.frame.origin.
How could I crop the current view (the one the user is manipulating) of userImage from imageSquare.frame.origin?
CGSize pageSize = imageSquare.frame.size;
UIGraphicsBeginImageContext(pageSize);
CGContextRef resizedContext = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(resizedContext, userImage.frame.origin.x, userImage.frame.origin.y);
[userImage.layer renderInContext:resizedContext];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if (image != nil) {
    NSLog(@"is not nil");
    NSData *imgData = UIImagePNGRepresentation(image);
    imageSquare.image = [[UIImage alloc] initWithData:imgData];
}
You'll need to translate by negative x and y:
CGContextTranslateCTM(resizedContext,
                      -userImage.frame.origin.x,
                      -userImage.frame.origin.y);
[userImage.layer renderInContext:resizedContext];
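Putting that together with the original snippet, a sketch of the full crop might look like this (assuming userImage and imageSquare share the same superview and userImage is repositioned by changing its frame rather than a layer transform; when imageSquare sits at the superview's origin this reduces to the negative translation shown above):

CGSize pageSize = imageSquare.frame.size;
UIGraphicsBeginImageContextWithOptions(pageSize, NO, 0.0);

CGContextRef resizedContext = UIGraphicsGetCurrentContext();
// Shift the drawing so the point at imageSquare.frame.origin (in the shared
// superview's coordinate space) lands at (0, 0) of the new context.
CGContextTranslateCTM(resizedContext,
                      userImage.frame.origin.x - imageSquare.frame.origin.x,
                      userImage.frame.origin.y - imageSquare.frame.origin.y);
[userImage.layer renderInContext:resizedContext];

UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();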

AVCaptureSession with multiple previews

I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.
I can see the video so I know it's working.
However, I'd like to have a collection view and in each cell add a preview layer so that each cell shows a preview of the video.
If I try to pass the preview layer into the cell and add it as a sublayer, then it removes the layer from the other cells, so it only ever displays in one cell at a time.
Is there another (better) way of doing this?
I ran into the same problem of needing multiple live views displayed at the same time. The answer of using UIImage was too slow for what I needed. Here are the two solutions I found:
1. CAReplicatorLayer
The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."
This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (Think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create the 'reflection' effect.
Here is some sample code to replicate an AVCaptureVideoPreviewLayer:
Init AVCaptureVideoPreviewLayer
AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
[previewLayer setFrame:CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4)];
Init CAReplicatorLayer and set properties
Note: This will replicate the live preview layer four times.
NSUInteger replicatorInstances = 4;
CAReplicatorLayer *replicatorLayer = [CAReplicatorLayer layer];
replicatorLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height / replicatorInstances);
replicatorLayer.instanceCount = replicatorInstances;
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / replicatorInstances, 0.0);
Add Layers
Note: From my experience you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer.
[replicatorLayer addSublayer:previewLayer];
[self.view.layer addSublayer:replicatorLayer];
Downsides
A downside to using CAReplicatorLayer is that it handles all placement of the layer replications. So it will apply any set transformations to each instance, and it will all be contained within itself. E.g. there would be no way to have a replication of an AVCaptureVideoPreviewLayer in two separate cells.
2. Manually Rendering SampleBuffer
This method, albeit a tad more complex, solves the above-mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might be affected.
Note: There might be other ways to render the SampleBuffer, but I chose OpenGL because of its performance. The code was inspired by and adapted from CIFunHouse.
Here is how I implemented it:
2.1 Contexts and Session
Setup OpenGL and CoreImage Context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

// Note: must be done after all your GLKViews are properly set up
_ciContext = [CIContext contextWithEAGLContext:_eaglContext
                                       options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Dispatch Queue
This queue will be used for the session and delegate.
self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);
Init your AVSession & AVCaptureVideoDataOutput
Note: I have removed all device capability checks to make this more readable.
dispatch_async(self.captureSessionQueue, ^(void) {
    NSError *error = nil;

    // get the input device and also validate the settings
    NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

    AVCaptureDevice *_videoDevice = nil;
    if (!_videoDevice) {
        _videoDevice = [videoDevices objectAtIndex:0];
    }

    // obtain device input
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];

    // obtain the preset and validate the preset
    NSString *preset = AVCaptureSessionPresetMedium;

    // CoreImage wants BGRA pixel format
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};

    // create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = preset;
:
Note: The following code is the 'magic code'. It is where we create and add a DataOutput to the AVCaptureSession so we can intercept the camera frames using the delegate. This is the breakthrough I needed to figure out how to solve the problem.
:
    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configuring the capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and video data and still image outputs
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});
2.2 OpenGL Views
We are using GLKView to render our live previews. So if you want 4 live previews, then you need 4 GLKViews.
self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO;
Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90 degree transform so that we can draw the video preview as if we were in a landscape-oriented view; if you're using the front camera and you want to have a mirrored preview (so that the user is seeing themselves in the mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform)
self.livePreviewView.transform = CGAffineTransformMakeRotation(M_PI_2);
self.livePreviewView.frame = self.bounds;
[self addSubview: self.livePreviewView];
Bind the frame buffer to get the frame buffer width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read from the frame buffer's width and height.
[self.livePreviewView bindDrawable];
In addition, since we will be accessing the bounds in another queue (_captureSessionQueue), we want to obtain this piece of information up front so that we won't be accessing the live preview view's properties from another thread/queue.
_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = self.livePreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = self.livePreviewView.drawableHeight;
dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);

    // *Horizontally flip here, if using front camera.*

    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});
Note: If you are using the front camera you can horizontally flip the live preview like this:
transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));
2.3 Delegate Implementation
After we have the contexts, sessions, and GLKViews set up, we can render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;

You will need to have a reference to each GLKView and its videoPreviewViewBounds. For simplicity, I will assume they are both contained in a UICollectionViewCell. You will need to alter this for your own use case.

    for (CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width / cell.videoPreviewViewBounds.size.height;

        // To maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}
This solution lets you have as many live previews as you want using OpenGL to render the buffer of images received from the AVCaptureVideoDataOutputSampleBufferDelegate.
3. Sample Code
Here is a GitHub project I threw together with both solutions: https://github.com/JohnnySlagle/Multiple-Camera-Feeds
Implement the AVCaptureVideoDataOutputSampleBufferDelegate method, which is
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
Using this you can get the sample buffer output of each and every video frame. From the buffer output you can create an image using the method below.
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return image;
}
So you can add several image views to your view and add these lines inside the delegate method that I mentioned before:
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
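One caveat (an assumption about the setup, since the delegate queue isn't shown here): captureOutput:didOutputSampleBuffer:fromConnection: is normally delivered on a background queue, and UIKit views must only be touched on the main thread, so the assignments would need to hop queues, e.g.:

UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit must be used from the main thread.
    imageViewOne.image = image;
    imageViewTwo.image = image;
});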
Simply set the contents of the preview layer to another CALayer:
CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;
You can do this with the contents of any Metal or OpenGL layer. There was no increase in memory usage or CPU load on my end, either. You're not duplicating anything but a tiny pointer. That's not so with these other "solutions."
I have a sample project that you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.
You can watch a video of the app running, as well as download the source code at:
https://demonicactivity.blogspot.com/2017/05/developer-iphone-video-camera-wall.html?m=1
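For what it's worth, if the copied contents need to stay in sync with the live feed rather than being a one-off snapshot, one approach (an assumption on my part, not necessarily what the linked project does) is to re-copy the pointer once per display refresh with a CADisplayLink:

// Hypothetical helper: keeps duplicateLayer in sync with previewLayer by
// re-copying the contents pointer on every screen refresh.
- (void)startMirroringPreview
{
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(copyPreviewContents:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)copyPreviewContents:(CADisplayLink *)link
{
    self.duplicateLayer.contents = self.previewLayer.contents;
}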
Working in Swift 5 on iOS 13, I implemented a somewhat simpler version of the answer by @Ushan87. For testing purposes, I dragged a new, small UIImageView on top of my existing AVCaptureVideoPreviewLayer. In the ViewController for that window, I added an IBOutlet for the new view and a variable to describe the correct orientation for the camera being used:
@IBOutlet var testView: UIImageView!
private var extOrientation: UIImage.Orientation = .up
I then implemented the AVCaptureVideoDataOutputSampleBufferDelegate as follows:
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciimage: CIImage = CIImage(cvPixelBuffer: imageBuffer)
        let image: UIImage = self.convert(cmage: ciimage)
        DispatchQueue.main.sync(execute: { () -> Void in
            testView.image = image
        })
    }

    // Convert CIImage to UIImage
    func convert(cmage: CIImage) -> UIImage {
        let context: CIContext = CIContext.init(options: nil)
        let cgImage: CGImage = context.createCGImage(cmage, from: cmage.extent)!
        let image: UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
        return image
    }
}
For my purposes, the performance was fine. I did not notice any lagginess in the new view.
You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't.

UIImageView's on a UIView Layer, wrong frame output

I am trying to combine 2 UIImageViews and save them as 1. The top-most image is a static "frame", and the lower image is rotatable / scalable.
My issue is that the photo needs to be saved as 640 x 960; however, the actual view that the 2 images sit on is 320 x 480 (so it shows correctly on the user's screen). When these 2 images are combined, they are saved on a 640 x 960 view; however, the 2 images themselves are combined at 320 x 480 (as seen in the image example below).
Here is the code that I am currently using to get my wrong results.
CGSize deviceSpec;
if (IDIOM == IPAD) { deviceSpec = CGSizeMake(768, 1024); } else { deviceSpec = CGSizeMake(640, 960); }

UIGraphicsBeginImageContext(deviceSpec);

UIView *rendered = [[UIView alloc] init];
[[rendered layer] setFrame:CGRectMake(0, 0, deviceSpec.width, deviceSpec.height)];
[[rendered layer] addSublayer:[self.view layer]];
[[rendered layer] renderInContext:UIGraphicsGetCurrentContext()];

UIImage *draft = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
AssetsLibrary
I should also mention I am saving the image using the following:
ALAssetsLibrary *library = [[[ALAssetsLibrary alloc] init] autorelease];
[library writeImageToSavedPhotosAlbum:draft.CGImage orientation:ALAssetOrientationUp
                      completionBlock:^(NSURL *assetURL, NSError *error)
{
    if (error) {
        NSLog(@"ALAssetLibrary error - %@", error);
    } else {
        NSLog(@"Image saved: %@", assetURL);
    }
}];
The output
Note here: the entire white area is actually 640 x 960, whereas the 2 combined images are 320 x 480, which is the actual size of the original layer's frame.
To support different scales, I used the following code:
// If you specify a value of 0.0, the scale factor is set to the scale factor of the device’s main screen.
UIGraphicsBeginImageContextWithOptions(compositeView.frame.size, compositeView.opaque, scale);
[compositeView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
So, you need to replace UIGraphicsBeginImageContext with UIGraphicsBeginImageContextWithOptions.
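For the original question's 640 x 960 output from a 320 x 480 compositeView, the scale argument can be derived from the target size. A minimal sketch (deviceSpec and compositeView are the names used in the question and answer above):

CGSize deviceSpec = CGSizeMake(640, 960);                           // desired output size in pixels
CGFloat scale = deviceSpec.width / compositeView.frame.size.width;  // 640 / 320 = 2.0

UIGraphicsBeginImageContextWithOptions(compositeView.frame.size, compositeView.opaque, scale);
[compositeView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();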
