Draw rectangles on imageView.image not scaling properly - iOS

I start out with an imageView.image (a photo).
I submit (POST) the imageView.image to a remote service (Microsoft face detection) for processing.
The remote service returns JSON with a CGRect for each detected face in the image.
I feed the JSON into my UIView to draw the rectangles. I initialize my UIView with a frame of {0, 0, imageView.image.size.width, imageView.image.size.height}, my thinking being that this gives the view a frame equivalent to the size of the imageView.image.
I add my UIView as a subview of self.imageView OR self.view (I tried both).
End Result:
Rectangles are drawn, but they do not appear correctly on the imageView.image. That is, the CGRects returned by the remote service for each face are supposed to be relative to the image's coordinate space, but they appear off once I add my custom view.
I believe I may have a scaling issue of some sort: if I divide each value in the CGRects by 2 (as a test), I get an approximation, but it is still off. The Microsoft documentation states the detected faces are returned with rectangles indicating the location of faces in the image in pixels. Yet, aren't they being treated as points when I draw my path?
Also, shouldn't I be initializing my view with a frame equivalent to the imageView.image's frame so that the view uses the same coordinate space as the submitted image?
Here is a screenshot example of what it looks like if I try to scale down each CGRect by dividing it by 2.
I am new to iOS and broke away from the books to work on this as a self-exercise. I can provide more code as needed. Thanks in advance for your insight!
EDIT 1
I add a subview for each rectangle as I iterate over an array of face attributes (including the rectangle for each face) via the following method, which gets called during -(void)viewDidAppear:(BOOL)animated:
- (void)buildFaceRects {
    // build an array of CGRect dicts off of JSON returned from the analyzed image
    NSMutableArray *array = [self analizeImage:self.imageView.image];
    // enumerate over array using block - each obj in array represents one face
    [array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        // build dictionary of rects and attributes for the face
        NSDictionary *json = [NSDictionary dictionaryWithObjectsAndKeys:obj[@"attributes"], @"attributes", obj[@"faceId"], @"faceId", obj[@"faceRectangle"], @"faceRectangle", nil];
        // initialize face model object with dictionary
        ZGCFace *face = [[ZGCFace alloc] initWithJSON:json];
        NSLog(@"%@", face.faceId);
        NSLog(@"%d", face.age);
        NSLog(@"%@", face.gender);
        NSLog(@"%f", face.faceRect.origin.x);
        NSLog(@"%f", face.faceRect.origin.y);
        NSLog(@"%f", face.faceRect.size.height);
        NSLog(@"%f", face.faceRect.size.width);
        // define frame for subview containing face rectangle
        CGRect imageRect = CGRectMake(0, 0, self.imageView.image.size.width, self.imageView.image.size.height);
        // initialize rectangle subview with face info
        ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imageRect];
        // add view as subview of imageview (?)
        [self.imageView addSubview:faceRect];
    }];
}
EDIT 2:
/* Image info */
UIImageView *iv = self.imageView;
UIImage *img = iv.image;
CGImageRef CGimg = img.CGImage;
// Bitmap dimensions [pixels]
NSUInteger imgWidth = CGImageGetWidth(CGimg);
NSUInteger imgHeight = CGImageGetHeight(CGimg);
NSLog(@"Image dimensions: %lux%lu", imgWidth, imgHeight);
// Image size in pixels (size * scale)
CGSize imgSizeInPixels = CGSizeMake(img.size.width * img.scale, img.size.height * img.scale);
NSLog(@"image size in Pixels: %fx%f", imgSizeInPixels.width, imgSizeInPixels.height);
// Image size in points
CGSize imgSizeInPoints = img.size;
NSLog(@"image size in Points: %fx%f", imgSizeInPoints.width, imgSizeInPoints.height);
// Calculate the image frame (within the image view) with a contentMode of UIViewContentModeScaleAspectFit
CGFloat imgScale = fminf(CGRectGetWidth(iv.bounds)/imgSizeInPoints.width, CGRectGetHeight(iv.bounds)/imgSizeInPoints.height);
CGSize scaledImgSize = CGSizeMake(imgSizeInPoints.width * imgScale, imgSizeInPoints.height * imgScale);
CGRect imgFrame = CGRectMake(roundf(0.5f*(CGRectGetWidth(iv.bounds)-scaledImgSize.width)), roundf(0.5f*(CGRectGetHeight(iv.bounds)-scaledImgSize.height)), roundf(scaledImgSize.width), roundf(scaledImgSize.height));
// initialize rectangle subview with face info
ZGCFaceRectView *faceRect = [[ZGCFaceRectView alloc] initWithFace:face frame:imgFrame];
// add view as subview of image view
[iv addSubview:faceRect];
}];

We've got several problems:
Microsoft returns pixels and iOS uses points. The difference between them depends on the screen scale. For instance, on an iPhone 5, 1 pt = 2 px, and on a 3GS, 1 pt = 1 px. Look at the iOS documentation for more information.
The frame of your UIImageView is not the frame of the image. When Microsoft returns the frame of a face, it returns it in the coordinate system of the image, not of the UIImageView. So we've got a coordinate-system problem.
Be careful about timing if you use Auto Layout. The frame of a view set by constraints is not the same when viewDidLoad: is called as when you see it on screen.
Solution:
I'm only a read-only Objective-C developer, so I can't give you code. I could in Swift, but it's not necessary.
Convert pixels into points. That's easy: use a ratio.
Define the frame of a face using what you did. Then you have to move the coordinates you determined from the image's coordinate system to the UIImageView's coordinate system. That's less easy: it depends on the contentMode of your UIImageView. I quickly found information about it on the Internet (see the sketch below).
If you use Auto Layout, add the face frames after Auto Layout finishes calculating the layout, i.e. when viewDidLayoutSubviews is called.
Or, better, use constraints to set your frame in the UIImageView.
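A minimal Objective-C sketch of the first two steps, assuming UIViewContentModeScaleAspectFit and an untransformed image view (the helper name is hypothetical, not from the original answer):
- (CGRect)viewRectForFaceRect:(CGRect)faceRectInPixels {
    UIImage *image = self.imageView.image;

    // 1. Pixels -> points: the service reports bitmap pixels, UIKit works in points.
    CGFloat pixelToPoint = 1.0 / image.scale;
    CGRect faceRect = CGRectMake(faceRectInPixels.origin.x * pixelToPoint,
                                 faceRectInPixels.origin.y * pixelToPoint,
                                 faceRectInPixels.size.width * pixelToPoint,
                                 faceRectInPixels.size.height * pixelToPoint);

    // 2. Image coordinates -> image view coordinates: apply the aspect-fit scale
    //    and the letterbox offsets of the displayed image inside the image view.
    CGFloat fitScale = MIN(self.imageView.bounds.size.width / image.size.width,
                           self.imageView.bounds.size.height / image.size.height);
    CGFloat xOffset = (self.imageView.bounds.size.width - image.size.width * fitScale) / 2.0;
    CGFloat yOffset = (self.imageView.bounds.size.height - image.size.height * fitScale) / 2.0;

    return CGRectMake(faceRect.origin.x * fitScale + xOffset,
                      faceRect.origin.y * fitScale + yOffset,
                      faceRect.size.width * fitScale,
                      faceRect.size.height * fitScale);
}
Each face view could then be created with the rect this returns and added directly to the image view.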
I hope that's clear enough.
Some links:
iOS Drawing Concepts
Displayed Image Frame In UIImageView

Related

Fit image in UIImageView using UIViewContentModeScaleAspectFit

I'm facing a really weird problem with UIImageView. I was trying to set an image (created by taking a screenshot of the current view) on an image view whose content mode is UIViewContentModeScaleAspectFit.
It worked fine when I set the image via Interface Builder in the xib file, or when I set an image created by [UIImage imageNamed:]. Both worked fine with UIViewContentModeScaleAspectFit.
But when I take a snapshot of a view and set that image on the image view, the image does not fit the UIImageView. I've tried all the solutions I found on here, like clipsToBounds = YES, but they didn't work at all. I'm really confused by now.
Here's the code where I take the screenshot and create the UIImage:
- (UIImage *)screenshotWithRect:(CGRect)captureRect
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    UIImage *screenshot;
    UIGraphicsBeginImageContextWithOptions(self.frame.size, NO, scale);
    CGContextClipToRect(UIGraphicsGetCurrentContext(), captureRect);
    {
        if (UIGraphicsGetCurrentContext() == nil)
        {
            NSLog(@"UIGraphicsGetCurrentContext is nil. You may have a UIView (%@) with no real frame (%@)", [self class], NSStringFromCGRect(self.frame));
        }
        else
        {
            [self.layer renderInContext:UIGraphicsGetCurrentContext()];
            screenshot = UIGraphicsGetImageFromCurrentImageContext();
        }
    }
    UIGraphicsEndImageContext();
    return screenshot;
}
And here's where I set the image on the image view:
UIImage* snap = [[UIImage alloc] init];
// start snap shot
UIView* superView = [self.view superview];
CGRect cutRect = [superView convertRect:self.cutView.frame fromView:_viewToCut];
snap = [superView screenshotWithRect:cutRect];
[self.view addSubview:self.editCutFrameView];
// end snap shot -> show edit view
[self.editCutFrameView setImage:snap];
Here's a picture comparing the 2 results:
Many thanks for your help.
UPDATE: As @Saheb Roy mentioned the size, I checked the image size: it's about 400x500px, while thumbnail.png is 512x512px, so I think it's not about the size of the image.
This is because in the second case the snapshot image is itself exactly that size, as you can see. Hence the image is not being stretched or fitted.
The earlier images fit the screen because they were bigger than the image view (with a different, or the same, aspect ratio).
But the one that is not fitting the image view is itself only that big, i.e. smaller than the image view, hence it is NOT being scaled up to the bounds.
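One quick way to check this (a small diagnostic, not part of the original answer) is to log the snapshot's size and scale against the image view's bounds:
// Compare the snapshot's point size and scale with the image view's bounds.
NSLog(@"snapshot %@ at scale %.1f, image view bounds %@",
      NSStringFromCGSize(snap.size), snap.scale,
      NSStringFromCGRect(self.editCutFrameView.bounds));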

Dynamically Add and Center UIImages in UIView

I am trying to think through this fun "problem".
I have a UIView (288x36) that will hold up to 8 UIImages (36x36).
Any number (up to 8) can be placed in the UIView.
I want the "collection" of these images to be centered in the UIView.
If there is only 1 image, then its center will be at the center of the UIView. If there are 2, then the right edge of the first image will be at the center of the UIView and the 2nd image's starting point will also be at the center, and so on.
Has anyone dealt with this before? Any code examples?
I am not very advanced but trying to learn.
Suppose your UIView is called view and your array of images is imageArray:
float remainingSpace = view.bounds.size.width - imageArray.count*36; // total horizontal space left over
float spaceOnLeft = remainingSpace/2.0; // just the space to the left of the images
for (UIImage *image in imageArray) {
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(spaceOnLeft, 0, 36, 36)];
    imageView.image = image;
    [view addSubview:imageView];
    spaceOnLeft += 36.0; // so the next image is placed 36 points to the right
}

Face Detection ios7 Coordinates Scaling Issue

I am using the Face Detection API and would like to know how to convert coordinates from large high-resolution images to the smaller image displayed in a UIImageView. So far, I have inverted the coordinate system of my image and container view so that it matches the Core Image coordinate system, and I have also calculated the ratio of heights between my high-resolution image and the dimensions of my image view, but the coordinates I am getting are not accurate at all. I am assuming I cannot convert the points from the large image to the small image as easily as I thought. Can anyone please point out my mistake(s)?
[self.shownImageViewer setTransform:CGAffineTransformMakeScale(1, -1)];
[self.view setTransform:CGAffineTransformMakeScale(1, -1)];
// 240 x 320
self.shownImageViewer.image = self.imageToShow;
yscale = 320/self.imageToShow.size.height;
xscale = 240/self.imageToShow.size.width;
height = 320;
CIImage *image = [[CIImage alloc] initWithCGImage:[self.imageToShow CGImage]];
CIContext *faceDetectionContext = [CIContext contextWithOptions:nil];
CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:faceDetectionContext options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
NSArray *features = [faceDetector featuresInImage:image options:@{CIDetectorImageOrientation: [NSNumber numberWithInt:6]}];
for (CIFaceFeature *feature in features)
{
    if (feature.hasLeftEyePosition)
        self.leftEye = feature.leftEyePosition;
    if (feature.hasRightEyePosition)
        self.rightEye = feature.rightEyePosition;
    if (feature.hasMouthPosition)
        self.mouth = feature.mouthPosition;
}
NSLog(@"%g and %g", xscale*self.rightEye.x, yscale*self.rightEye.y);
NSLog(@"%g and %g", yscale*self.leftEye.x, yscale*self.leftEye.y);
NSLog(@"%g", height);
self.rightEyeMarker.center = CGPointMake(xscale*self.rightEye.x, yscale*self.rightEye.y);
self.leftEyeMarker.center = CGPointMake(xscale*self.leftEye.x, yscale*self.leftEye.y);
I would start by removing the transform from your image view. Just have the image view display the image in the orientation it's in already. This will make the calculations a lot easier.
Now, CIFaceFeature outputs its features in image coordinates, but your imageView might be smaller or bigger than the image. So first, keep it simple by setting the imageView's content mode to top left.
imageView.contentMode = UIViewContentModeTopLeft;
Now you don't have to scale the coordinates at all.
When you are happy with that, set the contentMode to something more sensible, like aspect fit.
imageView.contentMode = UIViewContentModeScaleAspectFit;
Now you need to scale the x and y coordinates by multiplying each coordinate by the aspect-fit ratio:
CGFloat xRatio = imageView.frame.size.width / image.size.width;
CGFloat yRatio = imageView.frame.size.height / image.size.height;
CGFloat aspectFitRatio = MIN(xRatio, yRatio);
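As a rough sketch of that conversion (a hypothetical helper, assuming an upright image, UIViewContentModeScaleAspectFit, and no transforms on the image view), including the y flip needed because Core Image uses a bottom-left origin while UIKit uses a top-left origin:
static CGPoint viewPointForImagePoint(CGPoint imagePoint, UIImage *image, UIImageView *imageView) {
    CGFloat xRatio = imageView.frame.size.width / image.size.width;
    CGFloat yRatio = imageView.frame.size.height / image.size.height;
    CGFloat aspectFitRatio = MIN(xRatio, yRatio);

    // Letterbox offsets of the displayed image inside the image view.
    CGFloat xOffset = (imageView.frame.size.width - image.size.width * aspectFitRatio) / 2.0;
    CGFloat yOffset = (imageView.frame.size.height - image.size.height * aspectFitRatio) / 2.0;

    // Flip y from Core Image's bottom-left origin to UIKit's top-left origin.
    CGFloat flippedY = image.size.height - imagePoint.y;

    return CGPointMake(imagePoint.x * aspectFitRatio + xOffset,
                       flippedY * aspectFitRatio + yOffset);
}
// e.g. self.rightEyeMarker.center = viewPointForImagePoint(self.rightEye, self.imageToShow, self.shownImageViewer);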
Lastly, you want to add the rotation back in. Try to avoid this if possible, e.g. fix your images so they are upright to begin with.

iOS Sprite Kit - SKSpriteNode's .centerRect property not working

I was going through the SpriteKit documentation by Apple and came across a really useful feature that I could use when programming my UI. The problem is I can't get it to work.
Please see this page and scroll down to "Resizing a Sprite" - Apple Docs
I have literally copied the image dimensions and used the same code in case I was doing something wrong. But I always end up with a stretched-looking image rather than the "end caps" staying at the same scale.
I am referring to this code:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"stretchable_button.png"];
button.centerRect = CGRectMake(12.0/28.0,12.0/28.0,4.0/28.0,4.0/28.0);
What am I doing wrong? Is there a step I have missed?
EDIT:
Here is the code I have been using. I stripped out my button class and tried it with a plain SKSpriteNode, but the problem persists. I also changed the image just to make sure it wasn't that. The image I'm using is 32x32 at normal size.
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
[self addChild:button];
button.position = ccp(200, 200);
button.size = CGSizeMake(128, 64);
button.centerRect = CGRectMake(9/32, 9/32, 14/32, 14/32);
The .centerRect property works as documented if you adjust the sprite's .scale property.
Try:
SKTexture *texture = [SKTexture textureWithImageNamed:@"Button.png"];
SKSpriteNode *button = [[SKSpriteNode alloc] initWithTexture:texture];
button.centerRect = CGRectMake(9/32, 9/32, 14/32, 14/32);
[self addChild:button];
button.xScale = 128.0/texture.size.width;
button.yScale = 64.0/texture.size.height;
9/32 is integer division, so the result passed to CGRectMake is zero. Ditto the other three parameters. If you use floating point literals like the example you cite, you might get better results.
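For instance, the rectangle from the question written with floating-point literals:
// Floating-point literals, so the divisions are not truncated to zero.
button.centerRect = CGRectMake(9.0/32.0, 9.0/32.0, 14.0/32.0, 14.0/32.0);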
Here's a refresher on how exactly this works. By the way, my image's width is 48 pixels and its height is 52 pixels, but this doesn't matter at all. Any image can be used:
SKSpriteNode *button = [SKSpriteNode spriteNodeWithImageNamed:@"Button.png"];
// (x, y, width, height). The first two values mark off the edges of the image that you DON'T want touched/resized; they will just be moved.
// The second two values represent how much of the image's width & height you want cut out & used as stretching material. The cut-out happens from the center of the image.
button.centerRect = CGRectMake(20/button.frame.size.width, 20/button.frame.size.height, 5/button.frame.size.width, 15/button.frame.size.height);
button.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2); // Positions the sprite in the middle of the screen.
button.xScale = 4; // Resizes width (this is all I needed).
//button.yScale = 2; // Resizes height (commented out because I didn't need this. You can uncomment if the button needs to be taller).
[self addChild:button];
Read the section called "Resizing a Sprite" in this document: https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html#//apple_ref/doc/uid/TP40013043-CH9-SW10
'Figure 2-4 A stretchable button texture' demonstrates how the (x, y, width, height) works.
Based on rwr's answer, here is a working init method for an SKSpriteNode. I use this in my own game. Basically you make insets of 10px all around the output image, and then call it like this:
[[HudBoxScalable alloc] initWithTexture:[atlas textureNamed:@"hud_box_9grid.png"] inset:10 size:CGSizeMake(300, 100) delegate:(id<HudBoxDelegate>)clickedObject];
- (id)initWithTexture:(SKTexture *)texture inset:(float)inset size:(CGSize)size {
    if (self = [super initWithTexture:texture]) {
        self.centerRect = CGRectMake(inset/texture.size.width, inset/texture.size.height, (texture.size.width - inset*2)/texture.size.width, (texture.size.height - inset*2)/texture.size.height);
        self.xScale = size.width/texture.size.width;
        self.yScale = size.height/texture.size.height;
    }
    return self;
}

How to add Stretchable UIImage in the CALayer.contents?

I have a CALayer and I want to add to it a stretchable image. If I just do:
_layer.contents = (id)[[UIImage imageNamed:@"grayTrim.png"] resizableImageWithCapInsets:UIEdgeInsetsMake(0.0, 15.0, 0.0, 15.0)].CGImage;
it won't work, since the layer's default contentsGravity is kCAGravityResize.
I've read that this could be accomplished using contentsCenter, but I cannot seem to figure out how exactly I would use it to achieve the stretched image in my CALayer.
Any ideas are welcome!
Horatiu
Here's the answer to this question. Let's say you have a stretchable image which stretches only in width and has a fixed height (for simplicity's sake).
The image is 31px wide (15px on each side is fixed and doesn't stretch; the middle 1px will be stretched).
Assuming your layer is a CALayer subclass, your init method should look like this:
- (id)init
{
    self = [super init];
    if (self) {
        UIImage *stretchableImage = [UIImage imageNamed:@"stretchableImage.png"];
        self.contents = (id)stretchableImage.CGImage;
        self.contentsScale = [UIScreen mainScreen].scale; // <- needed for the Retina display, otherwise our image will not be scaled properly
        self.contentsCenter = CGRectMake(15.0/stretchableImage.size.width, 0.0/stretchableImage.size.height, 1.0/stretchableImage.size.width, 0.0/stretchableImage.size.height);
    }
    return self;
}
As per the documentation, the contentsCenter rectangle must have values between 0 and 1:
Defaults to the unit rectangle (0.0,0.0) (1.0,1.0) resulting in the entire image being scaled. If the rectangle extends outside the unit rectangle the result is undefined.
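As a quick illustration (a hypothetical helper, not part of the original answer), cap insets like the ones in the question can be converted into a unit-space contentsCenter rectangle like this:
// Convert UIEdgeInsets cap insets into a unit-space contentsCenter rect.
static CGRect contentsCenterForCapInsets(UIEdgeInsets insets, CGSize imageSize) {
    return CGRectMake(insets.left / imageSize.width,
                      insets.top / imageSize.height,
                      (imageSize.width - insets.left - insets.right) / imageSize.width,
                      (imageSize.height - insets.top - insets.bottom) / imageSize.height);
}
// e.g. _layer.contentsCenter = contentsCenterForCapInsets(UIEdgeInsetsMake(0.0, 15.0, 0.0, 15.0), stretchableImage.size);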
This is it. Hopefully someone else will find this useful and it will save some development time.
