Face Detection iOS 7 Coordinates Scaling Issue - iOS

I am using the Face Detection API and would like to know how to convert coordinates from large high resolution images to smaller images displayed in a UIImageView. So far, I have inverted the coordinate system of my image and container view so that it matches the Core Image coordinate system, and I have also calculated the ratio of heights between my high resolution image and the dimensions of my image view, but the coordinates that I am getting are not accurate at all. I am assuming I cannot convert the points from the large image to the small image as easily as I thought. Can anyone please point out my mistake(s)?
[self.shownImageViewer setTransform:CGAffineTransformMakeScale(1, -1)];
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];
// 240 x 320
self.shownImageViewer.image = self.imageToShow;
yscale = 320 / self.imageToShow.size.height;
xscale = 240 / self.imageToShow.size.width;
height = 320;
CIImage *image = [[CIImage alloc] initWithCGImage:[self.imageToShow CGImage]];
CIContext *faceDetectionContext = [CIContext contextWithOptions:nil];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:faceDetectionContext options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSArray *features = [faceDetector featuresInImage:image options:@{CIDetectorImageOrientation: [NSNumber numberWithInt:6]}];
for (CIFaceFeature *feature in features)
{
    if (feature.hasLeftEyePosition)
        self.leftEye = feature.leftEyePosition;
    if (feature.hasRightEyePosition)
        self.rightEye = feature.rightEyePosition;
    if (feature.hasMouthPosition)
        self.mouth = feature.mouthPosition;
}
NSLog(@"%g and %g", xscale * self.rightEye.x, yscale * self.rightEye.y);
NSLog(@"%g and %g", yscale * self.leftEye.x, yscale * self.leftEye.y);
NSLog(@"%g", height);
self.rightEyeMarker.center = CGPointMake(xscale * self.rightEye.x, yscale * self.rightEye.y);
self.leftEyeMarker.center = CGPointMake(xscale * self.leftEye.x, yscale * self.leftEye.y);

I would start by removing the transform from your image view. Just have the image view display the image in the orientation it's in already. This will make the calculations a lot easier.
Now, CIFaceFeature outputs its features in image coordinates, but your image view might be smaller or bigger. So first, keep it simple by setting the image view's content mode to top left.
imageView.contentMode = UIViewContentModeTopLeft;
Now you don't have to scale the coordinates at all.
When you are happy with that, set the contentMode to something more sensible, like aspect fit.
imageView.contentMode = UIViewContentModeScaleAspectFit;
Now you need to scale the x and y coordinates by multiplying each coordinate by the aspect fit ratio.
CGFloat xRatio = imageView.frame.size.width / image.size.width;
CGFloat yRatio = imageView.frame.size.height / image.size.height;
CGFloat aspectFitRatio = MIN(xRatio, yRatio);
Lastly, you want to add the rotation back in. Try to avoid this if possible, e.g. fix your images so they are upright to begin with.
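For illustration only, here is a rough sketch of that aspect-fit conversion (not from the original answer; it assumes the image view uses UIViewContentModeScaleAspectFit, the image is already upright, and the UIImage's scale is 1 so its point size matches the CIImage's pixel extent):
CGFloat xRatio = imageView.bounds.size.width / image.size.width;
CGFloat yRatio = imageView.bounds.size.height / image.size.height;
CGFloat fitRatio = MIN(xRatio, yRatio);
// Letterboxing offsets introduced by aspect fit.
CGFloat xOffset = (imageView.bounds.size.width - image.size.width * fitRatio) / 2.0;
CGFloat yOffset = (imageView.bounds.size.height - image.size.height * fitRatio) / 2.0;
// Core Image's origin is bottom-left, UIKit's is top-left, so flip y before scaling.
CGPoint eyeInImage = feature.leftEyePosition;
CGPoint eyeInView = CGPointMake(xOffset + eyeInImage.x * fitRatio,
                                yOffset + (image.size.height - eyeInImage.y) * fitRatio);
self.leftEyeMarker.center = eyeInView;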

Related

GPUImageTransformFilter is cropping image instead of scaling it down

I am trying to scale my image using GPUImage, here is my code:
float largerDimension = MAX(img.size.width, img.size.height);
if (largerDimension > 1024) {
    float scaleRatio = 1024 / largerDimension;
    GPUImageTransformFilter *xff = [[GPUImageTransformFilter alloc] init];
    xff.affineTransform = CGAffineTransformMakeScale(scaleRatio, scaleRatio);
    img = [xff imageByFilteringImage:img];
}
I'm expecting the filter to scale my image, but instead, it's cropping the middle of the image. What am I doing wrong?
Instead of using a transform filter, I achieved the desired effect using a plain GPUImageFilter and its forceProcessingAtSize: method, providing the exact dimensions of my desired output.
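For reference, a minimal sketch of that workaround (assuming the same 1024-pixel cap as above; the plain filter acts as a pass-through):
float largerDimension = MAX(img.size.width, img.size.height);
if (largerDimension > 1024) {
    float scaleRatio = 1024 / largerDimension;
    // Pass-through filter; forceProcessingAtSize: resizes the output instead of cropping it.
    GPUImageFilter *passthroughFilter = [[GPUImageFilter alloc] init];
    [passthroughFilter forceProcessingAtSize:CGSizeMake(img.size.width * scaleRatio,
                                                        img.size.height * scaleRatio)];
    img = [passthroughFilter imageByFilteringImage:img];
}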

Draw rectangles on image view.image not scaling properly - iOS

I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to remote service (Microsoft face detection) for processing.
Remote service returns JSON of CGRects for each detected face in the image.
I feed JSON into my UIView to draw the rectangles. I initiate my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height}. <-- my thinking, a frame equivalent to the size of the imageView.image
Add my UIView as a subview of self.imageView OR self.view (tried both)
End Result:
Rectangles are drawn, but they do not appear correctly on the imageView.image. That is, the CGRects returned by the remote service are supposed to be relative to the image's coordinate space, but they appear off once I add my custom view.
I believe I may have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test), I get an approximation, but it's still off. The Microsoft documentation states that the detected faces are returned with rectangles indicating the location of the faces in the image in pixels. Yet aren't they being treated as points when drawing my path?
Also, shouldn't I be initiating my view with a frame equivalent to the imageView.image's frame so that the view uses the same coordinate space as the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing it by 2.
I am new to iOS and broke away from the books to work on this as a self exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (which includes the rectangle for each face) via the following method, which gets called during -(void)viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {
    // build an array of CGRect dicts off of the JSON returned from the analyzed image
    NSMutableArray *array = [self analizeImage:self.imageView.image];
    // enumerate over the array using a block - each obj in the array represents one face
    [array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        // build a dictionary of rects and attributes for the face
        NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];
        // initiate the face model object with the dictionary
        ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];
        NSLog(@"%@", face.faceId);
        NSLog(@"%d", face.age);
        NSLog(@"%@", face.gender);
        NSLog(@"%f", face.faceRect.origin.x);
        NSLog(@"%f", face.faceRect.origin.y);
        NSLog(@"%f", face.faceRect.size.height);
        NSLog(@"%f", face.faceRect.size.width);
        // define the frame for the subview containing the face rectangle
        CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);
        // initiate the rectangle subview with the face info
        ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];
        // add the view as a subview of the image view (?)
        [self.imageView addSubview:faceRect];
    }];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;
// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(@"Image dimensions: %lux%lu", imgWidth, imgHeight);
// Image size in pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(@"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);
// Image size in points
CGSize imgSizeInPoints = img.size;
NSLog(@"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);
// Calculate the image frame (within the image view) with a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds) / imgSizeInPoints.width, CGRectGetHeight(iv.bounds) / imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f * (CGRectGetWidth(iv.bounds) - scaledImgSize.width)), roundf(0.5f * (CGRectGetHeight(iv.bounds) - scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));
// initiate the rectangle subview with the face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];
// add the view as a subview of the image view
[iv addSubview:faceRect];
}];
We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them is the screen scale. For instance, on an iPhone 5, 1 pt = 2 px, while on a 3GS, 1 px = 1 pt. Look at the iOS documentation for more information.
The frame of your UIImageView is not the image frame. When Microsoft returns the frame of a face, it returns it in the coordinate space of the image, not of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad: is called as when you see it on the screen.
Solution:
I'm just a read-only Objective-C developer, so I can't give you code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use the ratio.
Define the frame of a face using what you did. Then you have to move the coordinates you determined from the image's coordinate system to the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView, but you can quickly find information about it on the Internet (see the sketch after this list).
If you use Auto Layout, add the face frames once Auto Layout has finished calculating the layout, i.e. when viewDidLayoutSubviews: is called.
Or, better, use constraints to set your frame in the UIImageView.
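To make the two conversions concrete, here is a minimal sketch (a hypothetical helper, not from the original answer; it assumes UIViewContentModeScaleAspectFit and a face rectangle returned in image pixels):
- (CGRect)viewRectForFaceRect:(CGRect)faceRectInPixels image:(UIImage *)image imageView:(UIImageView *)imageView
{
    // 1. Pixels -> points (the service works in pixels, UIKit works in points).
    CGFloat pointScale = image.scale;
    CGRect rectInPoints = CGRectMake(faceRectInPixels.origin.x / pointScale,
                                     faceRectInPixels.origin.y / pointScale,
                                     faceRectInPixels.size.width / pointScale,
                                     faceRectInPixels.size.height / pointScale);
    // 2. Image coordinates -> image view coordinates for UIViewContentModeScaleAspectFit.
    CGFloat fitScale = MIN(imageView.bounds.size.width / image.size.width,
                           imageView.bounds.size.height / image.size.height);
    CGFloat xOffset = (imageView.bounds.size.width - image.size.width * fitScale) / 2.0;
    CGFloat yOffset = (imageView.bounds.size.height - image.size.height * fitScale) / 2.0;
    return CGRectMake(xOffset + rectInPoints.origin.x * fitScale,
                      yOffset + rectInPoints.origin.y * fitScale,
                      rectInPoints.size.width * fitScale,
                      rectInPoints.size.height * fitScale);
}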
I hope that's clear enough.
Some links :
iOS Drawing Concepts
Displayed Image Frame In UIImageView

Resizing a photograph using UIGraphics but final image is slightly blurry

I am trying to resize an image using UIGraphics. The image is one taken with the camera, and I am using this code:
CGSize origImageSize = photograph.size;
// this saves as 140*140 for retina
CGRect newRect = CGRectMake(0, 0, 70, 70);
// scaling ratio
float ratio = MAX(newRect.size.width / origImageSize.width, newRect.size.height / origImageSize.height);
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
CGRect projectRect;
projectRect.size.width = ratio * origImageSize.width;
projectRect.size.height = ratio * origImageSize.height;
// center the image
projectRect.origin.x = (newRect.size.width - projectRect.size.width) / 2;
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;
[photograph drawInRect:projectRect];
// get the image from the image context
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
For some reason the final photo isn't as sharp; it's slightly blurry. Am I doing anything wrong here? Any pointers would be really appreciated. Thanks.
I assume you calculate the rectangle properly. Then make sure you use an integral rectangle; non-integral values may cause sub-pixel rendering.
Run your projectRect through CGRectIntegral to get an integral rectangle, then use it to render your image.
projectRect = CGRectIntegral(projectRect);
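For context, a short sketch of the tail of that routine with the integral rect applied (this also closes the image context, which the snippet in the question leaves open):
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
// Snap the drawing rect to whole pixels to avoid sub-pixel rendering.
projectRect = CGRectIntegral(projectRect);
[photograph drawInRect:projectRect];
UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();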

Trying to crop my UIImage to a 1:1 aspect ratio (square) but it keeps enlarging the image causing it to be blurry. Why?

Given a UIImage, I'm trying to make it into a square. Just chop some of the larger dimension off to make it 1:1 in aspect ratio.
UIImage *pic = [UIImage imageNamed:@"pic"];
CGFloat originalWidth = pic.size.width;
CGFloat originalHeight = pic.size.height;
float smallestDimension = fminf(originalWidth, originalHeight);
CGRect square = CGRectMake(0, 0, smallestDimension, smallestDimension);
CGImageRef imageRef = CGImageCreateWithImageInRect([pic CGImage], square);
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
UIImageView *imageView = [[UIImageView alloc] initWithImage:squareImage];
imageView.frame = CGRectMake(100, 100, imageView.bounds.size.width, imageView.bounds.size.height);
[self.view addSubview:imageView];
But this is what it results in:
When it should look like this, but just a little narrower.
Why is this? The images are pic (150x114) / pic@2x (300x228).
The problem is you're mixing up logical and pixel sizes. On non retina devices these two are the same, but on retina devices (like in your case) the pixel size is actually double the logical size.
Usually, when designing your GUI, you can always just think in logical sizes and coordinates, and iOS (or OS X) will make sure that everything is doubled on retina screens. However, in some cases, especially when creating images yourself, you have to explicitly specify what size you mean.
UIImage's size method returns the logical size, i.e. the resolution on non-retina screens. This is why CGImageCreateWithImageInRect only creates a new image from the upper-left part of the image.
Multiply your logical size with the scale of the image (1 on non-retina devices, 2 on retina devices):
CGFloat originalWidth = pic.size.width * pic.scale;
CGFloat originalHeight = pic.size.height * pic.scale;
This will make sure that the new image is created from the full height (or width) of the original image. Now, one remaining problem is that when you create a new UIImage using
UIImage *squareImage = [UIImage imageWithCGImage:imageRef];
iOS will think this is a regular, non-retina image and will display it twice as large as you would expect. To fix this, you have to specify the scale when you create the UIImage:
UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                           scale:pic.scale
                                     orientation:pic.imageOrientation];
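Putting both fixes together, a minimal sketch (hypothetical helper name) might look like:
- (UIImage *)squareImageFromImage:(UIImage *)pic
{
    // Work in pixel units so the crop rect matches the backing CGImage.
    CGFloat side = fminf(pic.size.width, pic.size.height) * pic.scale;
    CGRect square = CGRectMake(0, 0, side, side);
    CGImageRef imageRef = CGImageCreateWithImageInRect(pic.CGImage, square);
    // Preserve the scale and orientation so the result displays at the expected size.
    UIImage *squareImage = [UIImage imageWithCGImage:imageRef
                                               scale:pic.scale
                                         orientation:pic.imageOrientation];
    CGImageRelease(imageRef);
    return squareImage;
}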

Using one image png file for retina and normal screen in UIImageView

Say there is a cool.png file with dimensions of 200 x 100 pixels, and I'd like to use it for both retina and normal devices.
I need the UIImageView to be 100 x 50 points.
I tried to decrease the size of the UIImageView according to the image dimensions, and visually I don't see any difference whether I prepare two files with and without the @2x scale modifier or let the UIImageView scale it via the contentMode property.
BOOL retina = [[UIScreen mainScreen] scale] == 2.0 ? YES : NO;
UIImage *img = [UIImage imageNamed:@"cool.png"];
UIImageView *imgView = [[UIImageView alloc] initWithImage:img];
imgView.contentMode = UIViewContentModeScaleToFill;
CGFloat width = img.size.width;
CGFloat height = img.size.height;
if (!retina) {
    width = width / 2.0;
    height = height / 2.0;
}
imgView.frame = CGRectMake(somePoint.x, somePoint.y, width, height);
Is there something wrong in the approach?
You are taking the wrong approach here.
CGRect, CGPoint and CGSize aren't measured in pixels; they are points. Points are iOS's coordinate system and will be scaled to the device. So, for example, if you make a UIView 320 points wide, it will fill the width on both a retina and a non-retina iPhone.
So if you want your cool.png to display at 100x50 points on all devices, you can simply set the frame to 100x50, set the image to cool.png, then set imgView.contentMode to UIViewContentModeScaleAspectFit. This will rescale the 200x100 image to fit if it's a non-retina device; if it's a retina device, it will be at full resolution (200x100) but within the 100x50 points.
However, the @2x system was made for a reason: having to scale the images down increases loading time. But if you can't use @2x images, you can still do the above.
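As a rough sketch of that suggestion (assuming cool.png is 200 x 100 pixels and somePoint is the desired origin, as in the question):
// 100 x 50 points on every device; let the content mode do the scaling.
UIImageView *imgView = [[UIImageView alloc] initWithFrame:CGRectMake(somePoint.x, somePoint.y, 100, 50)];
imgView.contentMode = UIViewContentModeScaleAspectFit;
imgView.image = [UIImage imageNamed:@"cool.png"];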