Convert video frame's aspect ratio to screen dimensions - iOS

My app uses the camera and processes every frame, looking for certain elements inside the image (such as faces).
When I find something in the image, I want to draw on the screen showing the video, using drawRect:(CGRect)rect of a custom view.
I'm not drawing only rectangles, and I'm not using the GPU or GLKView.
When I run video frames at 1920*1080 I can draw at the exact location by dividing the screen width and height by the video frame size.
However, when I change to a 480*360 video resolution, the elements are not drawn at the correct location, I believe due to aspect ratio differences.
Any idea what conversion I need to do here, perhaps a CGAffineTransform?

Yes, it is due to the aspect ratio. In one of my projects I faced a similar problem, and I did the following.
Find the scale factor:
float xScale = destination.size.width / imageSize.width; // destination is the max image drawing area
float yScale = destination.size.height / imageSize.height;
float scaleFactor = xScale < yScale ? xScale : yScale;
Find the drawing rectangle:
float destinationHeight = imageSize.height * scaleFactor; // scale the image size, not the destination
float destinationWidth = imageSize.width * scaleFactor;
This will give you a rectangle that holds the image at its aspect ratio. Draw your image using this rectangle. You can also center the rectangle in the view by offsetting it by half the difference between the view size and the scaled image size.
Now convert your face rectangle from image coordinates to drawing-rectangle coordinates by multiplying it by the scale factor.
Use the converted points to draw the rectangle on the destination rectangle.
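A minimal sketch of the whole conversion, assuming a face rect reported in video-frame coordinates and a view that fills the screen (the names faceRectInImage, imageSize, and viewSize are placeholders for your own values):
CGFloat xScale = viewSize.width / imageSize.width;
CGFloat yScale = viewSize.height / imageSize.height;
CGFloat scale = MIN(xScale, yScale); // aspect-fit scale factor
// Size and origin of the scaled frame inside the view (letterboxed and centered).
CGSize scaledSize = CGSizeMake(imageSize.width * scale, imageSize.height * scale);
CGPoint offset = CGPointMake((viewSize.width - scaledSize.width) * 0.5f,
                             (viewSize.height - scaledSize.height) * 0.5f);
// Map the face rect from image coordinates into view coordinates.
CGRect faceRectInView = CGRectMake(offset.x + faceRectInImage.origin.x * scale,
                                   offset.y + faceRectInImage.origin.y * scale,
                                   faceRectInImage.size.width * scale,
                                   faceRectInImage.size.height * scale);
If the video preview uses aspect fill instead (AVLayerVideoGravityResizeAspectFill), use MAX instead of MIN; the offsets then become negative because the scaled frame overflows the view.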

Related

Get the pixel position of a pan on UIImage in a UIImageView

I need the actual pixel position, not the position with respect to the UIImageView frame, but the actual pixel position on the UIImage.
UIPanGestureRecognizer gives the location in the UIImageView, so it is of no use.
I can multiply the x and y by scale, but the UIImage scale is always 0.
I need to crop a circular area from the UIImage, blur it, and place it back at exactly the same position.
Flow:
1. Crop a circular area from the UIImage using CGImageCreateWithImageInRect
2. Round-rect the cropped image using [UIBezierPath bezierPathWithRoundedRect:...]
3. Blur the round-rect image using CIGaussianBlur
4. Place the round-rect blurred image at the x,y position
In the first step I need the actual pixel position where the user tapped.
It depends on the image view's content mode.
For the scale-to-fill mode you simply multiply the coordinates by the image-to-view ratio:
CGPoint pointOnImage = CGPointMake(pointOfTouch.x*(imageSize.width/frameSize.width), pointOfTouch.y*(imageSize.height/frameSize.height));
For all other modes you need to compute the actual image frame inside the view, which takes a different procedure for each mode.
Adding the aspect-fit mode from the comments:
For aspect fit you need to compute the actual image frame, which can be smaller than the image view frame in one of the dimensions and is centered:
CGSize imageSize; // the original image size
CGSize imageViewSize; // the image view size
CGFloat imageRatio = imageSize.width/imageSize.height;
CGFloat viewRatio = imageViewSize.width/imageViewSize.height;
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if (imageRatio > viewRatio) {
    // image has room on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio < viewRatio) {
    // image has room on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// transform the coordinate
CGPoint locationInImageView; // received from touch
CGPoint locationOnImage = CGPointMake(locationInImageView.x, locationInImageView.y); // copy the original point
locationOnImage = CGPointMake(locationOnImage.x - imageFrame.origin.x, locationOnImage.y - imageFrame.origin.y); // translate to fix the origin
locationOnImage = CGPointMake(locationOnImage.x/imageFrame.size.width, locationOnImage.y/imageFrame.size.height); // transform to relative coordinates
locationOnImage = CGPointMake(locationOnImage.x*imageSize.width, locationOnImage.y*imageSize.height); // scale to original image coordinates
Just a note: if you want to transfer this to aspect fill, all you need to do is swap < and > in both of the if statements.
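Spelled out, the aspect-fill variant of the frame computation looks like this (a sketch using the same variables as above; in aspect fill the image frame overflows the view in one dimension, so one origin offset becomes negative):
CGRect imageFrame = CGRectMake(.0f, .0f, imageViewSize.width, imageViewSize.height);
if (imageRatio < viewRatio) {
    // image overflows on top and bottom but fits perfectly on left and right
    CGSize displayedImageSize = CGSizeMake(imageViewSize.width, imageViewSize.width / imageRatio);
    imageFrame = CGRectMake(.0f, (imageViewSize.height - displayedImageSize.height) * .5f, displayedImageSize.width, displayedImageSize.height);
}
else if (imageRatio > viewRatio) {
    // image overflows on left and right but fits perfectly on top and bottom
    CGSize displayedImageSize = CGSizeMake(imageViewSize.height * imageRatio, imageViewSize.height);
    imageFrame = CGRectMake((imageViewSize.width - displayedImageSize.width) * .5f, .0f, displayedImageSize.width, displayedImageSize.height);
}
// The translate/normalize/scale steps above then map the touch point exactly as before.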

iOS - Draw image with CGContext and transform

I am trying to draw an image on top of another image. I have the image's size, transform, and origin. My code below draws with the correct size and transform angle, but not at the correct point.
Code:
UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, [[UIScreen mainScreen] scale]);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect baseRect = CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height);
[backgroundImage drawInRect:baseRect];
CGRect newRect = CGRectMake(x, y, width, height);
CGContextTranslateCTM(context, x, y);
CGContextConcatCTM(context, watermarkImageView.transform);
CGContextTranslateCTM(context, -x, -y);
[watermarkImageView.image drawInRect:newRect];
UIImage* result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return result;
The watermark image should be placed like this:
But currently it looks like this:
What did I miss?
Thanks in advance.
EDIT
The x,y values are the edge (origin) of the bounding box.
Your code doesn't show what watermarkImageView.transform is, and that is important, because when you concatenate transformations, the effects of previous transformations also affect all the following transformations.
E.g. a translation that moves the object 10 pixels along the x-axis will move the object 10 pixels to the right. However, if you first have a rotation that rotates the object by 45 degrees and then add a translation that moves 10 pixels along the x-axis, the object will not move 10 pixels to the right; it will move 10 pixels along a line that is rotated by 45 degrees, which means it will move about 7 pixels up and 7 pixels to the right. That's because a rotation does not really rotate the object itself, it actually rotates the whole coordinate system, which causes the object to be drawn rotated.
See this image:
Initially the translation coordinate system (red lines) matches the "real coordinate" system. But after the rotation by 45 degrees, the translation coordinate system has been rotated and now translating across the red lines moves the object diagonally.
Think about a sheet of paper and a stamp. The stamp always has the same position and the same orientation, you cannot move or rotate the stamp. But you can move and rotate the sheet of paper below the stamp! And that's what your transformations do. They transform the sheet before the stamp is pressed upon it.
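You can verify this behavior directly with the CGAffineTransform helpers; a quick sketch of the 45-degree example from above:
// Rotate by 45 degrees first, then translate 10 px along x.
CGAffineTransform t = CGAffineTransformMakeRotation(M_PI_4);
t = CGAffineTransformTranslate(t, 10, 0);
// (0,0) ends up at roughly (7.07, 7.07): the translation happened
// along the rotated x-axis, not the screen's x-axis.
CGPoint p = CGPointApplyAffineTransform(CGPointZero, t);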
For most people it is very hard to imagine the effects of transforming the whole space, it's much easier for them to think about transforming the object. The trick is: You must read your transformations in the opposite order than you wrote them. I guess what you want to do is actually:
CGContextTranslateCTM(context, x, y);
CGContextConcatCTM(context, watermarkImageView.transform);
CGContextTranslateCTM(context,
    -watermarkImageView.bounds.size.width * 0.5,
    -watermarkImageView.bounds.size.height * 0.5
);
Now read them in the opposite order (from bottom to top). First you center the watermark around (0,0) by moving it up half the height and left half the width. Now the center of your watermark is exactly at (0,0). Then you rotate it as desired. Finally you move it to the desired position. Of course you wrote all transformations the other way round but that's only because you are transforming the coordinate space, not the object.
Centering your watermark prior to rotation is important because rotation always rotates around (0,0) coordinates. If you'd just rotate, the rotation looks like this:
That's not what you want as it will not just rotate the object but also changes its position. If you center the image around (0,0) first, the rotation looks like this instead:
The answer to my question was:
I had to translate to the centre of where I wanted to draw in the context.
CGContextTranslateCTM(context, imageView.center.x, imageView.center.y);
Then rotate context.
CGFloat angle = [(NSNumber *)[imageView valueForKeyPath:@"layer.transform.rotation.z"] floatValue];
CGContextRotateCTM(context, angle);
Then draw
[imageView.image drawInRect:CGRectMake(-width * 0.5f, -height * 0.5f, width, height)];
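Putting those steps together, a minimal sketch of the full compositing routine (the method name and parameters are illustrative, not from the original code):
// Draw a background image with a rotated watermark centered at `center`.
// `angle` is the watermark's z-rotation in radians.
- (UIImage *)imageByCompositingWatermark:(UIImage *)watermark
                                    over:(UIImage *)backgroundImage
                                      at:(CGPoint)center
                                   angle:(CGFloat)angle
                                    size:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(backgroundImage.size, NO, backgroundImage.scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [backgroundImage drawInRect:CGRectMake(0, 0, backgroundImage.size.width, backgroundImage.size.height)];
    // Move the origin to the watermark's center, rotate the coordinate
    // system, then draw the watermark centered around (0,0).
    CGContextTranslateCTM(context, center.x, center.y);
    CGContextRotateCTM(context, angle);
    [watermark drawInRect:CGRectMake(-size.width * 0.5f, -size.height * 0.5f, size.width, size.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}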

Flip a scaled and rotated UIView in Objective-C

I am trying to flip a scaled and rotated UIView on the horizontal axis.
Here is the code being used:
CGFloat xScale = selectedFrame.transform.a;
CGFloat yScale = selectedFrame.transform.d;
selectedFrame.layer.transform = CATransform3DScale(CATransform3DMakeRotation(M_PI, 0, 1, 0),xScale, yScale,-1);
The output of this is that the view flips properly and the original scale factor is also maintained, but the rotation isn't.
Here are the images to explain the problem.
Original image (the tiger view has to be flipped on the horizontal axis):
Flipped image after the above code (the scale is maintained but the rotation angle isn't, and the image is flipped properly):
Any help would be really appreciated!
To flip the image you are using a rotation of M_PI around the Y axis. To rotate the image, you need to apply another different rotation around the Z axis. These are two different transforms. You can combine them using CATransform3DConcat. Then you can scale the resulting transform.
CATransform3D transform = CATransform3DConcat(CATransform3DMakeRotation(zRotation, 0, 0, 1),CATransform3DMakeRotation(M_PI, 0, 1, 0));
[layer setTransform:CATransform3DScale(transform,xScale, yScale,1)];
The problem with your original code is that you are only applying the Y axis transform.
I'm sure there is a more elegant way to do this, but this works on my simulator.
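If the current z-rotation isn't stored anywhere, it can be recovered from the view's existing affine transform before concatenating. A sketch, assuming the view carries only rotation and scale (no skew or prior reflection):
// Standard decomposition of a rotate+scale affine transform:
// rotation = atan2(b, a), xScale = |(a, b)|, yScale = |(c, d)|.
CGAffineTransform t = selectedFrame.transform;
CGFloat zRotation = atan2(t.b, t.a);
CGFloat xScale = sqrt(t.a * t.a + t.b * t.b);
CGFloat yScale = sqrt(t.c * t.c + t.d * t.d);
CATransform3D transform = CATransform3DConcat(CATransform3DMakeRotation(zRotation, 0, 0, 1),
                                              CATransform3DMakeRotation(M_PI, 0, 1, 0));
[selectedFrame.layer setTransform:CATransform3DScale(transform, xScale, yScale, 1)];
Note that the question's original code read the scale from transform.a and transform.d directly, which only holds when the rotation is zero.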

Crop Image With Desired Rect

Many questions are available on SO, but unfortunately I couldn't solve my problem using them.
I've added an overlay view on my camera, and now I want to get the image within the blue border (only the water bottle).
I tried code chunks like the following:
CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
[imageView setImage:[UIImage imageWithCGImage:imageRef]];
CGImageRelease(imageRef);
but I'm having two issues:
1. The cropped image is getting too big.
2. The orientation changes to -90.
For point 1, I think I'm providing a cropRect that is too small, which is why it shows a very small, overly zoomed part of the image. On my other view controller I have a UIImageView (where the cropped image needs to be displayed) of the same size as the camera rect within the blue border.
So the question is how to crop the image, and what values should I provide for cropRect?
Assuming the image size is 1280*1080 and your crop view size is 320*480, you need to do the following.
Convert your crop view's frame to the image-size rect (0, 0, 1280, 1080) by finding the scale factor:
float xScale = 1280.0f / 320.0f; // 4.0
float yScale = 1080.0f / 480.0f; // 2.25 (use float literals; integer division would truncate this to 2)
float scaleFactor = (xScale < yScale) ? xScale : yScale;
Multiply the cropView frame by the scale factor. This maps the screen coordinates to image-size coordinates. Then use the new cropRect with
CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
The problem with the different orientation is that Core Graphics uses a different coordinate system compared to the view's coordinate system (see Quartz 2D Coordinate Systems). Also, imageOrientation is read-only, so pass the original orientation in when creating the new image:
UIImage *croppedImage = [UIImage imageWithCGImage:imageRef scale:largeImage.scale orientation:largeImage.imageOrientation];
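A sketch putting both steps together, assuming cropViewFrame is the blue-border rect in screen coordinates and viewSize is the size of the camera view (the names are illustrative):
// Map a crop rect from view coordinates to image coordinates and crop,
// keeping the original scale and orientation so the result is not rotated.
UIImage *CroppedImage(UIImage *largeImage, CGRect cropViewFrame, CGSize viewSize)
{
    float xScale = largeImage.size.width / viewSize.width;
    float yScale = largeImage.size.height / viewSize.height;
    float scaleFactor = (xScale < yScale) ? xScale : yScale;
    CGRect cropRect = CGRectMake(cropViewFrame.origin.x * scaleFactor,
                                 cropViewFrame.origin.y * scaleFactor,
                                 cropViewFrame.size.width * scaleFactor,
                                 cropViewFrame.size.height * scaleFactor);
    CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
    UIImage *cropped = [UIImage imageWithCGImage:imageRef
                                           scale:largeImage.scale
                                     orientation:largeImage.imageOrientation];
    CGImageRelease(imageRef);
    return cropped;
}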

AVFoundation photo size and rotation

I'm having a nightmare of a time trying to correct a photo taken with AVFoundation's captureStillImageAsynchronouslyFromConnection so that its size and orientation match exactly what is shown on the screen.
I show the AVCaptureVideoPreviewLayer with this code to make sure it displays the correct way up at all rotations:
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
[previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
previewLayer.frame = CGRectMake(0, 0, self.view.bounds.size.width, self.view.bounds.size.height);
if ([[previewLayer connection] isVideoOrientationSupported])
{
    [[previewLayer connection] setVideoOrientation:(AVCaptureVideoOrientation)[UIApplication sharedApplication].statusBarOrientation];
}
[self.view.layer insertSublayer:previewLayer atIndex:0];
Now when I have a returned image, it needs cropping, as it's much bigger than what was displayed.
I know there are loads of UIImage cropping examples, but the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
The preview is using AVLayerVideoGravityResizeAspectFill, and I have my UIImageView also set to AspectFill.
So how can I get the correct frame that AVFoundation is displaying on screen from the preview layer?
EDIT ----
Here's an example of the problem I'm facing. Using the front camera of an iPad Mini, the camera uses the resolution 720x1280 but the display is 768x1024. The view displays this (see the dado rail at the top of the image):
Then when I take the image and display it, it looks like this:
Obviously the camera display was centred in the view, but the cropped image is taken from the top (unseen) section of the photo.
I'm working on a similar project right now and thought I might be able to help, if you haven't already figured this out.
the first hurdle I seem to have is finding the correct CGRect to use. When I simply crop to self.view.frame the image is cropped at the wrong location.
Let's say your image is 720x1280 and you want your image to be cropped to the rectangle of your display, which is a CGRect of size 768x1024. You can't just pass a rectangle of size 768x1024. First, your image isn't 768 pixels wide. Second, you need to specify the placement of that rectangle with respect to the image (i.e. by specifying the rectangle's origin point). In your example, self.view.frame is a CGRect that has an origin of (0, 0). That's why it's always cropping from the top of your image rather than from the center.
Calculating the cropping rectangle is a bit tricky because you have a few different coordinate systems.
You've got your view controller's view, which has...
...a video preview layer as a sublayer, which is displaying an aspect-filled image, but...
...the AVCaptureOutput returns a UIImage that not only has a different width/height than the video preview, but also a different aspect ratio.
So because your preview layer is displaying a centered and cropped preview image (i.e. aspect fill), what you basically want to find is the CGRect that:
Has the same display ratio as self.view.bounds
Has the same smaller dimension size as the smaller dimension of the UIImage (i.e. aspect fit)
Is centered in the UIImage
So something like this:
// Determine the width:height ratio of the crop rect, based on self.bounds
CGFloat widthToHeightRatio = self.bounds.size.width / self.bounds.size.height;
CGRect cropRect;
// Set the crop rect's smaller dimension to match the image's smaller dimension, and
// scale its other dimension according to the width:height ratio.
if (image.size.width < image.size.height) {
    cropRect.size.width = image.size.width;
    cropRect.size.height = cropRect.size.width / widthToHeightRatio;
} else {
    cropRect.size.width = image.size.height * widthToHeightRatio;
    cropRect.size.height = image.size.height;
}
// Center the rect in the longer dimension
if (cropRect.size.width < cropRect.size.height) {
    cropRect.origin.x = 0;
    cropRect.origin.y = (image.size.height - cropRect.size.height) / 2.0;
} else {
    cropRect.origin.x = (image.size.width - cropRect.size.width) / 2.0;
    cropRect.origin.y = 0;
}
So finally, to go back to your original example where the image is 720x1280 and you want it cropped to the rectangle of your display, which is 768x1024, you will end up with a CGRect of size 720x960, with an origin of x = 0, y = (1280 - 960) / 2 = 160.
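To apply the computed cropRect, something like the following should work. One caveat (an assumption worth checking in your setup): CGImageCreateWithImageInRect works in pixels, so if the UIImage has a scale other than 1 the rect must be multiplied by image.scale first:
// Apply the computed cropRect to the captured image.
CGRect pixelRect = CGRectMake(cropRect.origin.x * image.scale,
                              cropRect.origin.y * image.scale,
                              cropRect.size.width * image.scale,
                              cropRect.size.height * image.scale);
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);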
